repo | file | language | license | content
---|---|---|---|---|
https://github.com/takotori/PhAI-Spick | https://raw.githubusercontent.com/takotori/PhAI-Spick/main/sections/grundlagen.typ | typst | = Fundamentals
== Trigonometry
#image("../figures/triangle.png")
#table(
columns: (1fr, 1fr, 1fr, 1fr, 1fr, 1fr),
inset: 5pt,
align: (center, center, center, center, center, center),
fill: (column, row) => if column == 0 or row == 0 { gray },
[], [$0^circle.stroked.small$], [$30^circle.stroked.small$], [$45^circle.stroked.small$], [$60^circle.stroked.small$], [$90^circle.stroked.small$],
[$sin(alpha)$], [$0$], [$1/2$], [$sqrt(2)/2$], [$sqrt(3)/2$], [$1$],
[$cos(alpha)$], [$1$], [$sqrt(3)/2$], [$sqrt(2)/2$], [$1/2$], [$0$],
[$tan(alpha)$], [$0$], [$sqrt(3)/3$], [$1$], [$sqrt(3)$], [$-$],
)
#grid(
columns: (50%, 50%),
gutter: 1pt,
align: (start + horizon, center),
image("../figures/circle.png"),
[
#set block(spacing: 0pt)
#image("../figures/sincosgraph.png")
#image("../figures/sincosableitung.png")
]
)
== Vector Calculus
*Length of a vector*: $|arrow(u)| = sqrt(u_x^2 + u_y^2 + u_z^2)$
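A quick illustrative example (values chosen here, not from the original): for $arrow(u) = (1, 2, 2)$,
$ |arrow(u)| = sqrt(1^2 + 2^2 + 2^2) = 3 $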
== Derivatives
#table(
columns: (50%, 50%),
inset: 2pt,
fill: (_, row) => if row == 0 { gray },
[*Function*], [*Derivative*],
$ x^a $, $ a dot x^(a-1) $,
$ 1/x $, $ - 1 / x^2 $,
$ sqrt(x) $, $ frac(1, 2 sqrt(x)) $,
$ sin(x) $, $ cos(x) $,
$ cos(x) $, $ -sin(x) $,
$ tan(x) $, $ 1 / cos(x)^2 $,
)
*Product rule:* $ d/(d x)(f(x) dot g(x)) = f'(x) dot g(x) + f(x) dot g'(x) $
*Chain rule:* $ d/(d x)f(g(x)) = f'(g(x)) dot g'(x) $
*Quotient rule:* $ d/(d x)(f(x)/g(x)) = frac(f'(x) dot g(x) - g'(x) dot f(x), g(x)^2) $
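A short worked example of the chain rule (added for illustration):
$ d/(d x) sin(x^2) = cos(x^2) dot 2x $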
=== Physical Quantities
#table(
columns: (auto, auto, auto, auto),
inset: 5pt,
align: (center, center, center, center),
fill: (column, _) => if column == 0 { gray },
[*Velocity*], [$v$], [-], [$m\/s$],
[*Acceleration*], [$a$], [-], [$m\/s^2$],
[*Spring constant*], [$D$], [-], [$N\/m$],
[*Frequency*], [$f$], [Hertz], [$1\/s$],
[*Force*], [$F$], [Newton], [$"kg" dot m\/s^2$],
[*Energy*], [$E$], [Joule], [$N dot m$],
[*Work* = $Delta "Energy"$], [$W$], [Joule], [$J = N dot m$],
[*Power* = work per time], [$P$], [Watt], [$J\/s$],
)
\* 4.19 Joule = 1 cal, 1 Joule = 1 Watt·s => $3.6 dot 10^6 J = 1 "kWh"$
=== Base Quantities
#table(
columns: (auto, auto, auto, auto),
inset: 5pt,
align: (center, center, center, center),
fill: (column, _) => if column == 0 { gray },
[*Length*], [$l$], [Meter], [$m$],
[*Mass*], [$m$], [Kilogram], [$"kg"$],
[*Time*], [$t$], [Second], [$s$],
)
=== Position, Velocity, and Acceleration as Functions of Time
#table(
columns: (auto, auto),
inset: 5pt,
align: (center, center),
fill: (column, _) => if column == 0 { gray },
[*Position function*], [$s(t)$],
[*Velocity function*], [$v(t) = dot(s)(t)$],
[*Acceleration function*], [$a(t) = dot(v)(t) = dot.double(s)(t)$],
)
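A short illustrative example (chosen here, not from the original):
$ s(t) = t^3 => v(t) = 3t^2 => a(t) = 6t $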
=== Constants
#table(
columns: (auto, auto, auto),
inset: 5pt,
align: (center, center, center),
fill: (column, _) => if column == 0 { gray },
[*Gravitational acceleration*], [$g$], [$9.80665 m\/s^2$],
[*Speed of light*], [$c$], [$2.99792458 dot 10^8 m\/s$],
[*Gravitational constant*], [$G$], [$6.673 dot 10^(-11) N dot m^2\/"kg"^2$],
)
*Conservative force:*
A force is conservative if it depends only on position coordinates and $-F(x)$, as a real-valued function of one variable, has an antiderivative. By this criterion, Hooke's law describes a conservative force.
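For example (added for illustration): for a spring obeying Hooke's law with spring constant $D$,
$ F(x) = -D x => E_"pot" (x) = integral D x dif x = 1/2 D x^2 $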
|
|
https://github.com/jdsee/htw_titlepage_typst | https://raw.githubusercontent.com/jdsee/htw_titlepage_typst/main/htw_titlepage.typ | typst | Apache License 2.0 | #let titlepage(
title: "",
subtitle: "Bachelorarbeit",
subject: (
name: "",
area: 0
),
author: "",
examiners: (
first: "",
second: ""
),
body
) = {
align(center, image(height: 10%, "images/htw_logo.jpg"))
line(length: 100%, stroke: 0.5pt)
v(10mm, weak: true)
align(center, text(18pt, weight: "bold", fill: rgb("#76B900"), title))
v(10mm, weak: true)
line(length: 100%, stroke: 0.5pt)
align(center, text(16pt, subtitle))
v(50pt)
align(center, text(12pt, "Name des Studiengangs"))
align(center, text(18pt, subject.name))
align(center, text(18pt,
weight: "bold",
fill: rgb("#76B900"),
[Fachbereich #str(subject.area)]))
align(center, text(12pt, "vorgelegt von"))
align(center, text(18pt, author))
v(50pt)
align(center, text(12pt, "Datum:"))
align(
center,
text(14pt, [Berlin, #datetime.today().display("[day].[month].[year repr:last_two]")])
)
v(25pt)
align(center, text(18pt, examiners.first))
align(center, text(18pt, examiners.second))
pagebreak()
body
}
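// Hypothetical usage sketch (all values assumed, not part of the template):
// #show: titlepage.with(
//   title: "My Thesis Title",
//   author: "Jane Doe",
//   subject: (name: "Applied Computer Science", area: 4),
//   examiners: (first: "Prof. A. Examiner", second: "Prof. B. Examiner"),
// )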
|
https://github.com/usami-k/technote | https://raw.githubusercontent.com/usami-k/technote/main/2024/iosdc-2024-article-and-poster/iosdc-2024-article-and-poster.typ | typst | Creative Commons Attribution 4.0 International | #import "@preview/touying:0.4.2": *
#let s = themes.metropolis.register(
aspect-ratio: "16-9",
footer: self => states.current-section-title,
)
#let s = (s.methods.info)(
self: s,
title: [Poster Session Introduction: Handling Quaternions with iOS's simd Module],
author: [宇佐見公輔],
date: [2024-08-14],
institution: [株式会社ゆめみ],
)
#let (init, slides, touying-outline, alert, speaker-note) = utils.methods(s)
#show: init
#set text(font: "IBM Plex Sans JP", size: 26pt)
#show math.equation: set text(font: "STIX Two Math")
#show raw: set text(font: "0xProto")
#show link: it => underline(text(blue, it))
#let (slide, empty-slide, title-slide, new-section-slide, focus-slide) = utils.slides(s)
#show: slides.with(outline-slide: false)
#let ii = $mono(i)$
#let jj = $mono(j)$
#let kk = $mono(k)$
== About Me
#grid(
columns: 2,
gutter: 1em,
[
- 宇佐見公輔
- 株式会社ゆめみ
- iOS tech lead
- iOSDC Japan 2024
- Wrote a pamphlet article
- Poster session presenter
],
grid(
rows: 2,
image("images/c40d5199-526b-4bf9-91d7-2a0bc609b305.png", height: 50%),
image("images/a28236ac-e8b0-4015-84e9-a0c9ac6670af.png", height: 50%)
),
)
== Today's Agenda
#touying-outline()
= Introducing the Pamphlet Article
== The Article PDF Is Now Available
#grid(
columns: 2,
gutter: 1em,
[
The pamphlet article's PDF has been published on BOOTH.
#link("https://type-d4-lab.booth.pm/items/5973097")[iOS App Development Today: From the Basics to Professional-Level Tools - Type D4 Lab - BOOTH]
],
[
#image("images/acd5e385-e422-4b0c-b04c-2dff191f80cc.png")
],
)
== What the Article Covers
I am often involved in hiring engineers and wished this kind of information were collected in one place, so I wrote it up myself.
- What knowledge professional-level app development requires
- Aimed mainly at beginners who want to make iOS app development their job
I would like to keep polishing the content, so impressions and feedback are very welcome!
== Producing the Pamphlet Article
For this pamphlet article I used the Vivliostyle template created by Emoto-san.
Laying out an A4 pamphlet is surprisingly tricky, so building on a template seems like a good idea.
(Not everyone needs an identical layout, but a certain baseline of quality is desirable.)
= Introducing the Poster Session
== Poster Overview
#link("https://fortee.jp/iosdc-japan-2024/proposal/a28236ac-e8b0-4015-84e9-a0c9ac6670af")[3D Rotations and Quaternions: Practice with iOS's Accelerate Framework]
For iOS app developers who want to learn the foundations of 3D graphics
- Quaternion basics
- The simd module
- Achieving smooth rotation animation
- A working example rendering rotations with SceneKit
== Why I Chose This Topic
I have a long-standing interest in the following areas and wanted to turn them into a coherent piece of output.
- The mathematical side
  - Quaternions
  - 3D rotations
- The iOS side
  - The Accelerate framework
  - 3D graphics
= The Poster's Content in Brief
== Quaternions
A number system that extends the complex numbers, with three imaginary units.
$ x_0 + x_1 ii + x_2 jj + x_3 kk $
$ ii^2 = jj^2 = kk^2 = -1 $
$ ii jj = -jj ii = kk quad jj kk = -kk jj = ii quad kk ii = -ii kk = jj $
- Complex numbers can describe rotations in the 2D plane
- Quaternions can describe rotations in 3D space
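As a supplementary illustration (added here, not taken from the poster): a unit quaternion $q$ rotates a point, written as the pure quaternion $p = p_1 ii + p_2 jj + p_3 kk$, by conjugation:
$ p' = q p q^(-1) $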
== The Accelerate Framework and the simd Module
Accelerate: Apple's suite of numerical-computing libraries
- Machine learning: BNNS
- Image processing: vImage
- Digital signal processing: vDSP
- Vectorized function evaluation: vForce
- Linear algebra: BLAS / LAPACK / Sparse Solvers
The simd module: part of Accelerate
- Types and functions for vector and matrix arithmetic
- Efficient computation using the CPU's SIMD vector units
== Achieving Smooth Rotation Animation
3D rotation can also be written as matrix computations from linear algebra.
With quaternions, however,
- interpolating between rotations becomes easy
- that is, the intermediate states of a rotation are easy to compute
This makes it possible to render the in-between frames of a rotation animation.
Implementing it in SceneKit makes the effect visually easy to grasp.
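For reference (a standard formula, not taken from the poster): spherical linear interpolation between unit quaternions $q_0$ and $q_1$, where $cos(theta) = q_0 dot q_1$:
$ "slerp"(q_0, q_1; t) = frac(sin((1 - t) theta), sin(theta)) q_0 + frac(sin(t theta), sin(theta)) q_1 $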
= Producing the Poster
== Poster Formats
You can choose either of the following:
- Ten A4-sized slides (submitted as a manuscript)
- One A0-sized poster (brought in yourself)
The former is close to preparing ordinary talk slides.
The latter resembles an academic conference poster presentation.
== The A0-Sized Poster
Since this was a rare opportunity, I decided to try an A0 poster.
It is actually my first poster presentation, so I had to start by researching how to make one.
Few tools handle A0-sized output, so the Adobe or Affinity suites are probably the usual choice.
This time, I produced it with Typst.
(Typst is typesetting software in the vein of LaTeX that has recently been attracting attention.)
== Printing and Transporting the Poster
How should I transport the poster from Osaka to the venue?
- Print it near the venue and carry it straight there
- Print it on paper, then roll or fold it for the trip
- Print it on fabric and fold it for the trip
Still under consideration.
= Summary
Please come see the poster at iOSDC.
If you have impressions of or questions about the pamphlet article or the poster, feel free to talk to me directly.
|
https://github.com/RafDevX/distinction-cybsoc | https://raw.githubusercontent.com/RafDevX/distinction-cybsoc/master/mod4-cybersecurity-law/memorandum.typ | typst | #import "template.typ": cover_page, header, footer, setup_page
#let title = [Legal Memorandum]
#let subtitle = [on the development of a comprehensive Cybersecurity Policy]
#let date = [December 2023]
#cover_page(title: title, subtitle: subtitle, date: date)
#pagebreak()
#show: setup_page
#set page("a4", header: header(title: title), footer: footer)
#counter(page).update(1)
#let refs = (
gdpr: ["General Data Protection Regulation" (EU Regulation 2016/679), #link(
"https://eur-lex.europa.eu/legal-content/EN/TXT/HTML/?uri=CELEX:02016R0679-20160504",
)[eur-lex.europa.eu]],
nis2: ["NIS 2 Directive" (EU Directive 2022/2555), #link(
"https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A32022L2555",
)[eur-lex.europa.eu]],
csa: ["Cybersecurity Act" (EU Regulation 2019/881), #link(
"https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A32019R0881",
)[eur-lex.europa.eu]],
cra: ["Cyber Resilience Act" (Proposed EU Regulation COM/2022/454), #link(
"https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=celex:52022PC0454",
)[eur-lex.europa.eu]],
ehdr: ["European Health Data Regulation" (Proposed EU Regulation COM/2022/197), #link(
"https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=celex%3A52022PC0197",
)[eur-lex.europa.eu]],
cer: ["Critical Entities Resilience Directive" (EU Directive 2022/2557), #link("https://eur-lex.europa.eu/eli/dir/2022/2557/oj")[eur-lex.europa.eu]],
cfreu: ["Charter of Fundamental Rights of the European Union" (2000), #link(
"https://www.europarl.europa.eu/charter/pdf/text_en.pdf",
)[europarl.europa.eu]],
)
= Introduction
This legal memorandum serves as a basis for the development of a comprehensive
Cybersecurity Policy for our hospital, which must contain and describe certain
elements in accordance with the law, particularly at the European Union level.
There are several different laws that must be taken into consideration when
designing a cybersecurity policy (including proposed laws, for future-proofing
of the policy), and this memorandum attempts to highlight three concrete
cybersecurity objectives that such laws stress, particularly in regard to our
context as a public, large-scale hospital.
In regard to the legislation itself, the General Data Protection Regulation
(GDPR) applies because we process personal data (e.g., related to patients and
employees) by automated means and/or in regard to a filing system#footnote[Article 2(1), see also definition of "personal data" in Article 4 - #refs.gdpr],
and none of the exceptions in its Article 2(2) apply. In particular, data
concerning health is a special category of personal data that requires
additional care#footnote[Article 9 - #refs.gdpr]. Moreover, we determine the
purpose and means of processing of personal data, which qualifies the hospital
as a data controller#footnote[Article 4, definition of "controller" - #refs.gdpr],
which subjects us to additional requirements.
The Network and Information Systems 2 Directive (NIS 2) also applies because a
disruption of our service could have a significant impact on public health#footnote[Article 2(2)(c) - #refs.nis2],
and this Directive establishes several obligations that must be kept in mind,
besides including Recitals that help provide a clue as to how those obligations
might be met and of what measures we may take to fulfill the requirements it
sets forth. It also determines that we are in a sector of High Criticality, due
to being a healthcare provider#footnote[Annex I(5) - #refs.nis2]. Additionally,
NIS 2 provides a legal requirement for the existence of such a cybersecurity
policy as the one that is the matter of this memorandum#footnote[Article 21(2)(a) and (f) - #refs.nis2].
Finally, the European Health Data Regulation (EHDR) also applies to us, due to
the sector we are in.
In this memorandum, three essential objectives are outlined: privacy and data
confidentiality, data and service availability, and effective incident detection
and response.
= Objective A: Privacy and Data Confidentiality
Operating at the core of the healthcare sector, our hospital must necessarily
deal with personal data in order to efficiently and effectively provide the
noble service it promises to perform. This data can take the form of employee
personal information, as is the case in most organizations, but it can also
comprise sensitive patient information and medical records that must be kept
private at all costs. It is both a moral and legal imperative that all collected
personal data is adequately safeguarded and cannot be accessed by unauthorized
parties, as the right to the protection of personal data is enshrined in the
Charter of Fundamental Rights of the European Union#footnote[Article 8(1) - #refs.cfreu],
by which Sweden is bound.
Thus, it is paramount that our cybersecurity policy reflects our duty to ensure
the confidentiality of all the data we collect and maintain. We must have
effective access control mechanisms#footnote[Article 21(2)(i) - #refs.nis2] that
mediate all requests to access a resource and only authorize them if there is a
legitimate need - e.g., only a patient's physician and attending nurses can be
allowed to access their clinical information. GDPR mandates that we "implement
appropriate technical and organizational measures to ensure a level of security
appropriate to the risk"#footnote[Article 32(1) - #refs.gdpr], and requires that
data is not made accessible "to an indefinite number of natural persons"#footnote[Article 25(2) - #refs.gdpr],
so it is important that all systems are configured to be as strict as possible,
and using whitelist strategies rather than blacklist ones.
There must also be organizational processes that regularly review#footnote[Article 24(1) - #refs.gdpr] the
conditions under which access is granted to each type of resource, and rules
should be role and context based to the greatest possible extent (e.g., the list
of current patients per room should be accessible by the on-duty Head Nurse in
that wing, not a specific Nurse Jane Doe who usually holds that position nor all
the Head Nurses of all hospital wings). Additionally, the cybersecurity policy
must detail off-boarding processes to ensure former employees, employees on
vacation, or those in similar situations forfeit all their access privileges
(permanently or temporarily, depending on the situation).
In addition, we can only rely on authorization if authentication is also
properly implemented, which means all systems must be able to verify that a
request was performed by a specific person#footnote[Article 5(1) and Annex I §1(3)(b) - #refs.cra] (and
therefore conclude whether the request should be authorized or not based on that
person's legal right to access the information). In practice, this calls for
measures such as having strong password policies and requiring Multi-Factor
Authentication#footnote[Article 21(2)(j) - #refs.nis2], as well as having proper
network segmentation to ensure users can only login at the expected locations
and through the expected means#footnote[Article 21(2) and Recital 89 - #refs.nis2].
The cybersecurity policy must also specify specific measures to ensure data is
secure even when the principle of complete mediation cannot be applied (i.e.,
when the normal data request avenues are bypassed), such as mandating encryption
at rest and encryption in transit#footnote[Article 5(1) and Annex 1 §1(3)(c) - #refs.cra] with
keys that are kept securely and inaccessible, and/or other cryptographic
measures#footnote[Article 21(2)(h) - #refs.nis2].
Appropriate steps must also be taken to safeguard, to the highest possible
degree, the security of all third-party dependencies and supply-chain providers#footnote[Article 21(2)(d) and Article 21(3) - #refs.nis2],
such as requiring recognized security certifications for all external products
used - for example, our cybersecurity policy could require certifications as
regulated by ENISA in accordance with the Cybersecurity Act#footnote[Article 4(6) - #refs.csa],
and further restrict that only certificates with a certain assurance level#footnote[Article 52(1), (5), (6) and (7) - #refs.csa] (consistent
with an in-depth risk analysis and a data protection impact assessment#footnote[Article 35(1) - #refs.gdpr])
be accepted. The policy must also require that such products have been subjected
to the legally-required conformity assessments for essential entities#footnote[Article 6(4), Article 24(3), and Annex III Class II(13) - #refs.cra] and,
if dealing with personal electronic health data (as per EHDR#footnote[Article 5(1) - #refs.ehdr]),
conformity assessments for health data processing#footnote[Chapter 3 - #refs.ehdr].
Finally, it is essential that we adopt policies promoting cybersecurity
awareness and train employees#footnote[Article 20(2) and Article 21(2)(g) - #refs.nis2] to
alert them to the dangers of phishing and social engineering attacks: as much as
we try to adapt our systems, it is nigh impossible to distinguish between a
legitimate action performed by a legitimate user out of their own volition, or
one that they have been influenced into performing for someone else who is not
authorized. This is also highlighted in Recital 89 of NIS 2, and it is crucial
that the hospital's cybersecurity policy has a focus on the "people" aspect of
security, rather than just on technology.
These measures, among others, are fundamental in protecting confidentiality and
are required in accordance with NIS 2#footnote[Article 21(2) - #refs.nis2],
besides falling within the scope of the risk-appropriate measures generically
mandated by GDPR#footnote[Article 5(1)(f), Article 25 and Article 32 - #refs.gdpr].
= Objective B: Data and Service Availability
Hospitals are vital hubs that serve some of society's most fundamental interests
through direct healthcare intervention that is a cornerstone of the basic
respect for human dignity, in addition to several other central rights
established by the Charter of Fundamental Rights of the European Union#footnote[Article 1, Article 2, and Article 35 - #refs.cfreu].
Due to their essential role in civilization, as well as the urgency that is
inherently tied to its very nature, it is paramount for hospitals to have
constant availability and be permanently capable of making use of any and all of
their resources, as any eventual downtime can prove fatal to the most vulnerable
members of the population. As hospitals are more and more digitized, the failure
of networks and information systems can easily lead to the hospital's service
becoming severely hindered or even impossible, such as if medical staff are
unable to access the medical history of an unconscious patient and cannot know
what care to safely administer.
In terms of availability, it is important that our cybersecurity policy reflects
the need for replication and redundancy, with data being kept in multiple
distinct places, secure and isolated from each other, but also easily accessible
so replicas can be put in operation at any given moment. This also applies to
digital systems and services themselves, as a malfunctioning (or
non-functioning) system can be just as disruptive as lost data. To that end, the
cybersecurity policy must mandate regular
reviews of the network and information systems' architecture and require any
identified bottlenecks or single points of failure to be replaced by a redundant
solution without undue delay. Substitute servers and service interfaces for
critical systems must be continuously prepared to suddenly be commissioned by
uptime-aware load balancers that can detect which replicas are available and
which are not.
Additionally, proper network segmentation must be in place#footnote[Article 21(2) and Recital 89 - #refs.nis2] to
reduce the hospital's attack surface and make denial of service attacks
(distributed or not) as infeasible as possible#footnote[Article 5(1) and Annex I §1(3)(f) - #refs.cra] -
no critical information systems should be exposed to the public internet or to
guest networks used by patients and visitors. As appropriate, the cybersecurity
policy should also require the constant operation of firewalls, an Intrusion
Detection System (IDS), and/or an Intrusion Prevention System (IPS) that can
detect denial of service attempts and block them, therefore preventing them from
resulting in failure of availability, even if triggered from inside the internal
network against critical systems. These software tools, if used, should be
required by the cybersecurity policy to have been subjected to a conformity of
assessment#footnote[Article 6(4), Article 24(3), and Annex III Class II(13) - #refs.cra] by
the manufacturer (including in regard to such products "[minimizing] their own
negative impact on the availability of services provided by other devices or
networks"#footnote[Article 5(1) and Annex I §1(3)(g) - #refs.cra]), and should
not process electronic health personal information (as per EHDR#footnote[Article 5(1) - #refs.ehdr])
in order to promote data minimization and reduce the attack surface related to
such data (however, if they must process it for a measurable risk decrease for
the hospital's security, they should have also been subjected to conformity
assessments specific to health data processing#footnote[Chapter 3 - #refs.ehdr]).
Moreover, in order to safeguard availability, the physical conditions
surrounding the hospital's digital systems and infrastructure must be
considered, as the latter's uptime is directly dependent on the former. The
cybersecurity policy must integrate with the hospital's resilience policy and
the technical, security, and organizational measures implemented to "ensure
adequate physical protection of their premises and critical infrastructure"#footnote[Article 13(1)(b) - #refs.cer] -
in particular, the cybersecurity policy should assert specific requirements that
must be fulfilled by such a resilience policy, particularly in terms of physical
security surrounding the facilities where critical digital systems are hosted,
and also in terms of contingency measures such as generators that can support
such systems in event of an electricity grid failure.
In regards to the security of data itself, in order to ensure its long-term
availability, it is crucial for the cybersecurity policy to mandate backup
procedures to be conducted regularly, both to short-term easy-access and
long-term isolated storage means (e.g., respectively, separate hard-drives in
the hospital's data center, and magnetic tapes in a secure warehouse in another
city) - nonetheless, regardless of ease of physical access, all data backup
systems should be air-gapped from the rest of the hospital's network and digital
systems. This behavior is consistent with compliance with GDPR, as it requires
the implementation of appropriate technical and organizational measures to
ensure "the ability to restore the availability and access to personal data in a
timely manner in the event of a physical or technical incident"#footnote[Article 32(1)(c) - #refs.gdpr].
It is also important for backups to be routinely verified by attempting to use
them in system restoration tests, in order to assure that, in case of an
incident, they can be of use in order to restore data availability.
Additionally, if physical security is sufficiently reliable following a
GDPR-mandated data protection impact assessment#footnote[Article 35(1) - #refs.gdpr],
paper copies of the essential information should be kept as well, so that data
availability is safeguarded in case of a failure of digital systems.
The need for availability must be considered in order to comply with NIS 2#footnote[Article 21(2)(c) - #refs.nis2] and
the Cyber Resilience Act#footnote[Article 5(1) and Annex I §1(3)(f) - #refs.cra],
and the measures outlined above are imperative in order to ensure that our
hospital remains fully operational even during the most adverse of
situations.
= Objective C: Effective Incident Detection and Response
Given the importance necessarily associated with hospitals as critical
infrastructure, it is crucial that we are able to effectively detect any
on-going attacks and respond to them with the shortest possible delay,
particularly due to essential services commonly being targeted by threat actors
with high capabilities, such as financial-motivated attackers attempting to
perform a ransomware attack, as industries with high availability requirements
are more likely to pay ransoms. The requirement for a quick and effective
response to an incident (and the detection that necessarily precedes it) is set
forth by GDPR#footnote[Article 32(1)(c) - #refs.gdpr] and NIS 2#footnote[Article 21(2)(b) - #refs.nis2],
and it is also indispensable in mitigating problems with regard to
confidentiality and availability, the two other objectives discussed in this
memorandum, as cyber-attacks are often purposely designed to compromise those
properties, among others.
In order to promote incident detection, all systems must engage in logging and
provide monitoring information regarding their internal activities that may
prove relevant to the hospital's overall security#footnote[Article 5(1) and Annex I §1(3)(j) - #refs.cra],
and the hospital's cybersecurity policy must require that only such products are
used, in addition to specifying testing procedures to be executed routinely in
order to ensure logging remains fully functional, especially after software
updates. Additionally, mechanisms must be put in place to collect all the
generated logs and monitoring data, and frequently (as close to real time as
feasible) analyze them (both individually and in terms of the correlations
between data points, intra- and inter-stream), in order to attempt to detect
potential attacks and deal with them directly and/or, if necessary, escalate
them to a team of on-call human professionals that can verify the situation and
respond accordingly.
Moreover, besides asserting the need for constant improvement of response
capabilities, the cybersecurity policy must require or directly establish clear
procedures and policies for incident handling#footnote[Article 21(2)(a) and (b) - #refs.nis2],
including which positions within the organizational structure are responsible
for which tasks, and within what timelines.
In case an incident is detected, NIS 2 requires that any significant incident
(as thereby defined#footnote[Article 23(3) - #refs.nis2]) is reported to
Sweden's CSIRT _(CERT-SE)_ without undue delay#footnote[Article 23(1) and (4) - #refs.nis2],
which includes, where applicable, at least three reports (before 24 hours, 72
hours, and one month have elapsed since the incident detection). Additionally,
if the incident comprises a personal data breach and is "not unlikely to result
in a risk to the rights and freedoms of natural persons", GDPR requires that
incidents are reported to the competent supervisory authority (in this case,
Sweden's _Datainspektionen_) within 72 hours#footnote[Article 33(1) - #refs.gdpr].
The cybersecurity policy and the incident handling procedures as described must
take these responsibilities into account and assure for their fulfillment.
In addition, all the knowledge obtained from the process of incident detection
and response should be shared with other entities, if deemed beneficial for the
cybersecurity community and society at large (especially in regard to other
essential entities and critical infrastructure) - the cybersecurity policy
should specify how that can take place, especially through a basis of a
voluntary exchange of information between essential and important entities#footnote[Article 29 and Recital 119 - #refs.nis2].
In particular, if the hospital discovers vulnerabilities that may affect other
entities or users, the cybersecurity policy should describe specific procedures
to promote their responsible disclosure with the community#footnote[Article 21(2)(e) - #refs.nis2].
= Conclusion
In conclusion, the objectives outlined in this memorandum are essential
(legally, morally, and operationally) to be kept in consideration during the
development of a cybersecurity policy for our hospital, and there are several
measures that should be specified by such a policy in accordance with the
principle of proportionality that is inherent to these matters and the
legislation that accompanies it.
In order to safeguard society's interests, protect and maintain the hospital's
operability and reduce its legal liability, the principles here described should
be reflected in a cybersecurity policy. |
|
https://github.com/robinhundt/tpmpc24-talk | https://raw.githubusercontent.com/robinhundt/tpmpc24-talk/main/template.typ | typst | MIT License | #import "@preview/polylux:0.3.1": *
#let encrypto-title = state("encrypto-title", [])
#let encrypto-authors = state("encrypto-author", [])
#let encrypto-note-slides = state("encrypto-note-slides", true)
#let encrypto-blue = rgb("#00679c")
#let encrypto-full-line = line.with(length: 100%)
#let encrypto-theme(
aspect-ratio: "16-9",
authors: [],
title: [],
seminar: none,
footer-delim:"|",
note-slides: true,
body
) = {
let slide-footer = {
encrypto-full-line()
show: pad.with(x: 1em, y: -8pt)
set text(size: 11pt)
if seminar != none {
seminar + [ | ]
}
if type(authors) == "array" {
authors.join(" and ")
} else {
authors
}
[ | ] + title + [ | ]
"Slide " + logic.logical-slide.display()
place(top + right, dy: -1em, image("logos/tud_logo.png", width: 9%))
}
set text(font: "Helvetica", size: 20pt)
set page(
paper: "presentation-" + aspect-ratio,
footer: slide-footer,
margin: (
top: 0.5cm,
x: 0.5cm
)
)
encrypto-title.update(title)
if type(authors) != "array" {
encrypto-authors.update((authors,))
} else {
encrypto-authors.update(authors)
}
encrypto-note-slides.update(note-slides)
body
}
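// Hypothetical usage sketch (all values assumed, not part of the template):
// #show: encrypto-theme.with(
//   title: [My Talk Title],
//   authors: ("Alice", "Bob"),
//   seminar: [Example Seminar],
// )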
#let title-slide(
sub-title: none,
body,
) = {
let content = {
encrypto-full-line(stroke: 10pt + encrypto-blue)
v(-1em + 3pt)
let blk-inset = 1.5em
block(
// fill: encrypto-blue,
width: 100%,height: 40%,
inset: blk-inset,
{
place(
top + right,
dx: blk-inset + 1pt,
dy: -blk-inset + 5pt,
block(fill: white,width: 4cm,inset: 0.2em, {
image("logos/encrypto-logo.svg")
})
)
// set text(fill: white)
let rows = if sub-title == none { 2 } else { 3 }
grid(rows: (auto,) * rows, row-gutter: 0.5em, {
grid(columns: (1fr, 20%), {
heading(encrypto-title.display())
},{})
},
sub-title, {
locate(loc => {
let authors = encrypto-authors.at(loc)
grid(columns: (auto,) * authors.len(),column-gutter: 1em, ..authors)
})
})
})
body
}
polylux-slide(content)
}
#let encrypto-header(title) = {
encrypto-full-line(stroke: 10pt + encrypto-blue)
v(-1em + 3pt)
block(width: 100%, height: 100%, {
place(
top + right,dx: 1pt, dy: 5pt,
block(width: 4cm,inset: 0.2em, image("logos/encrypto-logo.svg"))
)
if title != none {
set align(bottom)
show: pad.with(left:1em, bottom: 1em)
heading(title)
}
})
}
#let slide(
title,
alignment: none,
notes: none,
body
) = {
let content = {
box(height: 15%, encrypto-header(title))
let alignment = if alignment != none {
alignment
} else {
start + top
}
set align(alignment)
show: pad.with(x: 1em)
show: box.with(height: 77%)
body
}
polylux-slide({
content
if notes != none {
pdfpc.speaker-note(notes)
}
})
locate(loc => {
if notes != none and encrypto-note-slides.at(loc) {
slide([Notes: #title], notes)
}
})
}
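// Hypothetical usage sketch (content assumed, not part of the template):
// #slide([Motivation], notes: "Speaker note shown on the optional notes slide")[
//   - First bullet of the slide body
// ]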
#let section-slide(title, sub-title: none) = {
let content = {
box(height: 15%, encrypto-header([]))
set align(horizon)
show: pad.with(left:3em)
box(height: 30%, {
heading(level: 1, title)
if sub-title != none {
set align(bottom)
text(fill: black.lighten(30%), heading(level: 2, sub-title))
}
})
}
polylux-slide(content)
} |
https://github.com/jrihon/multi-bibs | https://raw.githubusercontent.com/jrihon/multi-bibs/main/lib/multi-bib.typ | typst | MIT License |
/* Utility functions */
// Works for Linux/MacOS only
#let basename_yml(pathtofile) = {
if pathtofile.ends-with(".yml") == false {
panic("The called file : (" + pathtofile + ") does end in the .yml format")
}
let basename = ""
if pathtofile.contains("/") {
basename = pathtofile.split("/").at(-1).split(".").at(0) // take yml file (foo.yml) and cut off extension
} else {
basename = pathtofile.split(".").at(0)
}
basename
}
//
/* Bibliography functions */
// arr_of_pubs : key (or array of keys) of the citation(s)
// biblio : bibliography record holding the .yml path and the chapter dict
#let mcite(arr_of_pubs, biblio) = {
let basename = basename_yml(biblio.bibyml)
let bibchapter = biblio.bibchapter
// if the array contains only one value, Typst coerces it into that single value
if type(arr_of_pubs) == "string" {
let name_pub = arr_of_pubs // array got coerced into a single string
let val = bibchapter.at(name_pub, default: 404) // find name of citation in the bibyml
if val == 404 {
panic("The key: " + name_pub + " was not found in the queried yml.")
}
// the link is made by concatenating:
// "key of the publication in yml" + "the name of the bibliography.yml file"
text("[")
link(
label(name_pub + basename), // string coerced into label
text( str(val))
)
text("]")
// if the type is an array
} else if type(arr_of_pubs) == "array" {
let counter = 0
text("[")
for name_pub in arr_of_pubs {
let val = bibchapter.at(name_pub, default: 404)
if val == 404 {
panic("The key: " + name_pub + " was not found in the queried yml.")
}
if counter == 0 {
// the link is made by concatenating:
// "key of the publication in yml" + "the name of the bibliography.yml file"
link(
label(name_pub + basename), // string coerced into label
text( str(val) )
)
} else { // if there are multiple citations
text(", ") // add comma in between citations
link(
label(name_pub + basename),
text( str(val) )
)
}
counter += 1
}
text("]")
} else {
panic("Invalid parameters passed to mcite() : " + str(type(arr_of_pubs)))
}
}
#let set_authors(publication) = {
let authors = publication.at("author", default: 404)
if authors == 404 {
[#text("NO AUTHORS, ")]
} else {
if type(authors) == "array" {
for auth in authors {
if type(auth) == "string" {
let splitnames = auth.split(",")
let lastname = splitnames.at(0)
let firstname = ""
// if splitnames.at(1, default: "Err") == "Err" {
// [HERE]
// }
if splitnames.at(1).len() < 3 {
firstname += splitnames.at(1, default:"Err")
[#text(lastname + "." + firstname + ", ")]
} else {
firstname += splitnames.at(1, default:"Err").slice(0,2)
[#text(lastname + "." + firstname + "., ")]
}
}
}
} else {
if type(authors) == "string" {
let splitnames = authors.split(",")
let lastname = splitnames.at(0)
let firstname = splitnames.at(1).slice(0,2)
[#text(lastname + "." + firstname + ". ")]
}
}
}
}
#let set_title(publication) = {
let title = publication.at("title", default: 404)
if title == 404 {
[#text("NO TITLE, ")]
} else {
[#text(" \"" + title + "\" ")]
}
}
#let set_date(publication) = {
let date = publication.at("date", default: 404)
if date == 404 {
[#text("NO DATE, ", style: "italic")]
} else {
// [#text("(" + str(date) + ")") ]
[#text(str(date) + ", ") ]
}
}
#let set_journal(publication) = {
let parent = publication.at("parent", default: 404)
if parent == 404 {
[#text("NO PARENT FOUND, ", style: "italic")]
} else {
let journal = parent.at("title", default: 404)
if journal == 404 {
[#text(str("NO JOURNAL, "), style: "italic") ]
} else {
[#text(str(journal) + ", ", style: "italic") ]
}
}
}
#let set_issue(publication) = {
let parent = publication.at("parent", default: 404)
if parent == 404 {
[#text("NO PARENT FOUND, ", style: "italic")]
} else {
// let issue = parent.at("issue", default: 404)
// if issue == 404 {
// [#text(str("NO ISSUE"), style: "italic") ]
// } else {
// [#text("vol." + str(issue), style: "italic") ]
// }
let volume = parent.at("volume", default: 404)
if volume == 404 {
[#text("(" + str("NO VOLUME") + "),") ]
} else {
[#text("vol. " + str(volume) + ", ") ]
}
}
}
#let set_pages(publication) = {
let pagerange = publication.at("page-range", default: 404)
if pagerange == 404 {
[#text("NO PAGERANGE")]
} else {
[#text("pp. " + str(pagerange) + ", ") ]
}
}
#let set_doi(publication) = {
let serial-number = publication.at("serial-number", default: 404)
let url = publication.at("url", default: 404)
if serial-number == 404 or url == 404 {
[#text("NO DOI")]
} else {
let doi = serial-number.at("doi", default: none)
if doi == none {
[#text("NO DOI")]
} else {
[#link(url)[#text("doi:" + doi)]]
}
}
}
#let set_month(publication) = {
let month = publication.at("month", default: 404)
if month == 404 {
[#text("")]
} else {
let Imonth = upper(month.at(0))
let Emonth = month.slice(1)
[#text(Imonth + Emonth + ". ")]
}
}
#let ieee_style(biblio) = {
let bibchapter = biblio.bibchapter
let bibyml = yaml(biblio.bibyml)
set par(leading: 0.5em)
set text(size: 8pt)
let basename = basename_yml(biblio.bibyml)
let counter = 0
// if no citations have been used, early return
if bibchapter.len() == 0 {
return
}
for name_pub in bibchapter.keys() {
let publication = bibyml.at(name_pub, default: 404)
if publication == 404 { continue }
counter += 1
// unique labels indicate the name of the `author key` and the name of the `yaml bibliography`
// [[]#text(str(counter) + ". ", size: fontsize) #label(name_pub + basename)] // labels have to remain in local scope to their appended text
[#text("[" + str(counter) + "] ") #label(name_pub + basename)] // labels have to remain in local scope to their appended text
h(0.5em) // extra space before citation
// Authors, Date, Title, Journal, volume number(issue number), pages, DOI
set_authors(publication)
set_title(publication)
set_journal(publication)
set_issue(publication)
set_pages(publication)
set_month(publication)
set_date(publication)
set_doi(publication)
[ \ ] // set newline
// let publications = bibyml.at(name_pub)
// for (key, value) in publications {
// if key == "author" { set_authors(value, fontsize) }
// if key == "title" [#text(value + ", ", style: "italic", size: fontsize)]
// if key == "date" [#text("(" + str(value) + ")", size: fontsize) \ ] // "\" is a newline character in typst
// }
}
}
#let mbibliography(biblio, style) = {
pagebreak() // page break to start bibliography
[== References]
if style == "ieee" {
ieee_style(biblio)
} else {
panic("Style of the bibliography " + style + "is not implemented.")
}
}
https://github.com/Jollywatt/typst-fletcher | https://raw.githubusercontent.com/Jollywatt/typst-fletcher/master/tests/diagram-implicit-coords/test.typ | typst | MIT License |
#set page(width: auto, height: auto, margin: 1em)
#import "/src/exports.typ" as fletcher: diagram, node, edge
#diagram(edge((0,0), (1,0), [label], "->"))
#diagram(edge((1,0), [label], "->"))
#diagram(edge([label], "->"))
#diagram(
node((1,2), [prev]),
edge("->", bend: 45deg),
node((2,1), [next]),
edge((1,2), ".."),
)
#pagebreak()
#diagram(
edge((0,0), "nw", "->"),
node((0,0), [London]),
edge("..|>", bend: 20deg),
edge("<|..", bend: -20deg),
node((1,1), [Paris]),
edge("e", "->"),
edge("s", "->"),
edge("se", "->"),
)
#pagebreak()
#diagram({
edge((1,0), auto, "..>")
node(name: <h>)[Hello] // first coord is (0,0)
edge("->")
node(name: <b>, (rel: (1,1)))[Bye]
})
https://github.com/edsomjr/RLT | https://raw.githubusercontent.com/edsomjr/RLT/main/computacao/linux_shell_scripting_tutorial_ver_1_0/README.md | markdown | MIT License |
## Linux Shell Scripting Tutorial (LSST) v1.0 - v1.05r3
<NAME>
Vivek gives a short introduction to the shell language of the Linux operating system, showing the primary commands and the basic overall interface, along with a few small exercises.
For the exercises, [our solutions are here](./resume.pdf), written in [Bash](https://www.gnu.org/software/bash/) and [Nushell](https://www.nushell.sh/), [typeset in Typst](./resume.typ).
The first published version is available as the [Linux Shell Scripting Tutorial](https://www.kau.edu.sa/files/830/files/60761_linux.pdf); version v1.05r3 and its code can be accessed [on the author's personal website](http://www.freeos.com/guides/lsst/).
https://github.com/Hobr/njust_thesis_typst_template | https://raw.githubusercontent.com/Hobr/njust_thesis_typst_template/main/layout/xgsLesson.typ | typst | MIT License |
// Fonts
#import "../util/font.typ": fonts, fontSize
// Document
#let setDocument(title, author, doc) = {
set document(
// Title
title: title,
// Authors
author: author.map(author => author.name),
)
doc
}
// Page header
#let pageHeader(title) = locate(loc => {
if loc.page() > 1 [
#align(
center,
text(
font: fonts.zh_宋体,
size: 10pt,
baseline: 8pt,
spacing: 3pt,
)[#title],
)
#line(length: 100%, stroke: 0.1pt)
]
})
// Page footer
#let pageFooter() = locate(loc => {
align(
center,
text(
size: 12pt,
baseline: 16pt,
)[#counter(page).display("- 1 / 1 -", both: true)],
)
})
// Page
#let setPage(title, header: false, footer: false, doc) = {
set page(
// Paper size
paper: "a4",
// Margins
margin: auto,
// Header
header: (if header == true {pageHeader(title)} ),
// Footer
footer: (if footer == true {pageFooter()} ),
)
doc
}
// Paragraph
#let setPar(doc) = {
set par(
// Line spacing
leading: 1.5em,
// First-line indent
first-line-indent:2em,
// Justify on both sides
justify: true
)
doc
}
// Text
#let setText(lang, doc) = {
set text(
// Language
lang: lang,
// Region
region: "cn",
// Fonts
font: (fonts.en,fonts.zh_宋体),
// Font size
size: fontSize.小四,
)
doc
}
// Headings
#let setHeading(doc) = {
set heading(numbering: "1.1.1.1.1 ")
show heading: it => box(width: 100%)[
#v(8pt)
#align(
left,
text(
font: (fonts.en, fonts.zh_宋体),
size: fontSize.四号,
)[#it],
)
#v(4pt)
]
doc
}
// Captions
#let setCaption(doc) = {
// Caption text
show figure.caption: set text(
font: (fonts.en, fonts.zh_宋体),
size: fontSize.五号,
)
doc
}
// Style
#let xgsLesson(info, author, doc) = {
show: setDocument.with(info.title, author)
show: setPar.with()
show: setText.with(info.lang)
show: setHeading.with()
show: setCaption.with()
doc
}
https://github.com/tingerrr/masters-thesis | https://raw.githubusercontent.com/tingerrr/masters-thesis/main/src/de/chapters/5-implementation.typ | typst |
#import "/src/util.typ": *
#import "/src/figures.typ"
Since no conclusive evidence could be found that a safe API for the C++ implementation of generic finger trees can be enforced at the type level, 2-3 finger trees were ported to C++ in order to compare their behavior with that of QMap.
An implementation of SRB trees is omitted because of their memory consumption.
= From Haskell to C++
The implementation of 2-3 finger trees, as well as its benchmarks against tests of `QMap` and a persistent B-tree implementation as a control, can be found at #link("https://github.com/tingerrr/finger-tree")[GitHub:tingerrr/FingerTree].
References to the implementation refer to the latest state of this repository, which is archived upon submission of this thesis.
The original implementation of 2-3 finger trees has three different states, which maintain different invariants:
- `Empty`: an empty state without fields.
- `Single`: a bridging state for storing a single node in a level; it has a single node as its field.
- `Deep`: the recursive definition of the data structure.
It has three fields: the digits on both sides and a further finger tree with non-regular recursion at the type level @bib:bm-98.
The definition of `FingerTree a` does not refer to `FingerTree a` again, but to `FingerTree (Node a)` (analogous to `FingerTree<Node<T>>` instead of `FingerTree<T>`); together with the definition of `Node`, this automatically enforces the depth of the nodes on each level through the type.
Without the type recursion in the `Deep` variant, a finger tree would behave like a singly linked list; the recursion is essential for the performance of the data structure.
In C++, the definition of recursive types is possible, but these definitions must be regular, and when it comes to the fields of a data structure, self-references must go through indirection.
Within its own definition, a class `C<T>` may refer to the following versions of `C<T>`:
- fully instantiated types such as `C<int>` or `C<std::string>`;
- newly parameterized type constructors such as ```cpp template<typename U> class C<U>```, as long as they do not depend on `T`;
- self-references such as `C<T>` without _wrapping_ of types as in `C<A<T>>`.
The definition in @lst:illegal-recursive-type would be a one-to-one translation of the Haskell definition (the initial definition without measures); by the rules above it would be illegal in C++.
First, self-references must go through indirection, so the field `middle` in `Deep` would have to be a pointer type.
However, the self-reference is then still not possible, since `Deep` refers to the type `FingerTree<Node<T>>` --- a non-regular self-reference.
The balancing of the subtrees cannot be guaranteed at compile time in C++.
Instead, `middle` must refer to `FingerTree<T>`, and the tree structure of the `Node` types is produced by `Internal` and `Leaf` types.
This redefinition of the `Node` types also affects the API of the algorithms.
While the same algorithms in the Haskell definition can be used to insert or split off single elements of type `a` as well as their wrapped nodes (`Node a`, `Node (Node a)`, etc.), the C++ definition must always insert and return nodes.
The internal recursive algorithms are marked `private` and are only called by special helper functions which unwrap or wrap the nodes.
#figure(
figures.finger-tree.def.illegal,
caption: [Non-regular definition of finger trees.],
) <lst:illegal-recursive-type>
The simplified definition in @lst:gen-finger-tree already shows that the elements `a` and their generic measures `v` have been replaced by the explicit types `K` and `V`.
For T4gl, measures other than `Key` over key-value pairs are not relevant and were therefore substituted directly.
This substitution also shows in the algorithms, which on the one hand rely directly on the invariants of `Key` and its ordering relation, and on the other hand include algorithms that only make sense for ordered sequences (`insert` and `remove`).
Since the implementation of the 2-3 finger tree is supposed to be persistent, `std::shared_ptr` is used instead of plain pointers to point to shared nodes and subtrees.
Every type `T` defined in Haskell is mapped directly to a class `T` here, which contains two fields:
- `_repr`: a `std::shared_ptr<TBase>`.
- `_kind`: a discriminator indicating which variant `_repr` contains (where necessary).
Variants are implemented as classes deriving from `TBase`.
All write accesses to `T` then produce a shallow copy of the instance referenced by the `_repr` field.
This ensures that persistence is preserved (see @sec:invis-pers for possible optimizations).
This separation is also necessary to make the transition from one variant to another possible without the user of the class having to manage it themselves.
For reasons of space, the following definitions and code excerpts omit redundant language constructs such as the template parameter declarations ```cpp template <typename K, typename V>```, since these appear on practically every class.
The field getters are likewise not spelled out for every class.
For all fields of all types, getters are defined as follows:
let `_field` be a field of type `Type` of the class `Class`; then a method ```cpp auto Class::field() const -> const Type&``` exists.
== Node
So that the recursive definitions of the algorithms are able to return a single value in the form of a node, there must be a `NodeLeaf` variant which contains only a value and its key.
To form the recursive structure there is a `NodeDeep` variant, which represents both `Node2` and `Node3` as a `std::vector`.
As explained before, these inherit from a common class `NodeBase`, which is managed persistently through `Node`.
For the nodes, no transition between different states is necessary: leaf nodes are created or dissolved by the helper functions in `FingerTree`, and internal nodes are created or dissolved within the recursive algorithms.
As a result, the API and implementation of the nodes is comparatively simple.
#figure(
figures.finger-tree.def.node,
caption: [The definition of the `Node` classes.],
) <lst:finger-tree-node>
@lst:finger-tree-node shows the definition of the `Node` classes.
The fields `_size` and `_key` in `NodeDeep` are the accumulated values of this node's children, that is, the sum of the children's `_size` values and the largest key among the child nodes.
For `NodeLeaf`, the getter for `_size` simply returns 1, and due to the inherent ordering of the keys the largest key is always the one in the rightmost child node.
== Digits
The digits of a deep 2-3 finger tree could, similar to what is proposed in @bib:hp-06[p. 8], be implemented as four variants with 1-4 nodes as fields each.
However, that would only complicate the implementation unnecessarily.
Therefore there is only one variant for digits, `DigitsBase`.
@lst:finger-tree-digits shows the definition of `Digits` and `DigitsBase`.
Since `Node` itself already implements persistence, `Digits` could do without it, store these nodes directly in a `std::vector`, and copy that vector.
Such an optimization was omitted for now due to time constraints, since constantly copying vectors, even small ones (1-4 elements), can cost more than the reduced indirection saves in latency.
This also makes the definition easier to compare with those of the other classes.
#figure(
figures.finger-tree.def.digits,
caption: [The definition of the `Digits` classes.],
) <lst:finger-tree-digits>
Similar to @lst:finger-tree-node, there are accumulated `_size` and `_key` fields; these serve the same purpose as the ones in `NodeDeep`.
Since there is only one variant, the `_kind` field can be omitted in `Digits`.
The API of `Digits` behaves much like that of a deque, with specialized functions for `push` and `pop` on both sides.
Furthermore, functions are provided to simplify overflow and underflow; these unwrap or wrap multiple nodes.
== FingerTree
@lst:finger-tree-self shows the same structure as the previous definitions and builds on those of `Node` and `Digits`.
It is immediately apparent that the `Empty` variant seems redundant: the absence of nodes could also be represented by ```cpp _repr == nullptr```.
That would also prevent move constructors of `FingerTree` from leaving old instances in an uninitialized state, as is the case for `Digits` and `Node`.
As with `Digits`, such an optimization is omitted.
This simplifies the comparison with the Haskell definitions.
#figure(
figures.finger-tree.def.self,
caption: [The definition of the `FingerTree` classes.],
) <lst:finger-tree-self>
https://github.com/dyc3/senior-design | https://raw.githubusercontent.com/dyc3/senior-design/main/balancer-design.typ | typst |
= Balancer Design
== Internal Architecture
#figure(
image("figures/balancer-internals-class.svg", width: 90%),
caption: "Class diagram showing the structure and relationships between types in the Balancer.",
) <Figure::balancer-internals-class>
#pagebreak()
Shown in @Figure::balancer-internals-class is the internal structure of the Balancer. The Balancer will discover Monoliths using the `MonolithDiscoverer` (a process further described in @Chapter::ServiceDiscovery), and the `MonolithConnectionManager` will establish connections to each Monolith. `Balancer` will update `BalancerContext` according to the messages it receives from `BalancerLink`. `Balancer` will then use `BalancerContext` to route messages to the appropriate Monoliths. Message routing is further described in @Section::MessageRouting. Clients work similarly, except that they establish connections to the Balancer via `BalancerService`. `BalancerService` handles proxying HTTP requests to the appropriate Monoliths, and also accepting and upgrading WebSocket connections.
#figure(
image("figures/monolith-selection-internals-class.svg"),
caption: "Class diagram showing the structure of MinRoomsSelector and the trait MonolithSelection and how they relate to BalancerContext.",
) <Figure::monolith-selection-internals-class>
== Package Structure
In Rust, packages are called "crates". The Balancer and Harness is split into multiple crates, as seen in @Figure::balancer-crates. The reason for this is that cargo, the Rust package manager, treats crates as the unit of compilation. This means that if you change a single line of code in a crate, the entire crate must be recompiled. By splitting the Balancer into multiple crates, we can reduce the amount of code that needs to be recompiled when a change is made.
#figure(
image("figures/balancer-crates.svg"),
caption: "Package diagram showing the Balancer and Harness crates.",
) <Figure::balancer-crates>
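As a sketch, such a split is expressed as a Cargo workspace; the `ott-balancer-bin` crate name matches the run commands used elsewhere in this chapter, while the other member names are illustrative rather than the actual crate list:

```toml
[workspace]
members = [
    "crates/ott-balancer-bin",      # binary crate; name used by `cargo run`
    "crates/ott-balancer-protocol", # illustrative member name
    "crates/harness",               # illustrative member name
]
```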
== Dependencies
#let cargo = toml("data/Cargo.toml")
#let crates = cargo.workspace.dependencies
#let build_table(data) = {
let keys = data.keys()
let rows = ()
for (crate, entry) in data {
if(type(entry) == str) {
entry = (
version: entry
)
}
if(entry.keys().contains("path")) { continue }
let row = (raw(crate),)
if(entry.keys().contains("version")) {
row.push(raw(entry.version))
} else if(entry.keys().contains("git")) {
let rev = if (entry.keys().contains("rev")) {entry.rev} else {"HEAD"}
rev = [(commit: #raw(rev))]
row.push([#link(entry.git, "from git") #rev])
}
if(entry.keys().contains("features")) {
row.push(raw(entry.features.join(", ")))
} else {
row.push("")
}
rows.push(row)
}
set align(left)
table(
columns: 3,
[*Crate*],[*Version*],[*Features*],
..rows.flatten()
)
}
#figure(
build_table(crates),
caption: figure.caption("External Rust dependencies required for all workspace crates", position: top)
)
== Development Environment <Section::dev-env>
#let text-box = box.with(
inset: 1em,
stroke: 1pt + black,
fill: luma(245),
)
In order for the Balancer to work with the Monolith, the Monolith must have load balancing enabled, otherwise it will not listen for incoming balancer connections. The following configuration should go in `env/base.toml` in order to enable load balancing:
`base.toml`
#text-box(
raw("[balancing]\nenabled = true", lang: "toml"),
)
@Figure::ports-1-monolith and @Figure::ports-2-monolith, demonstrate how to set up any number of Balancers and Monoliths. The listening ports are configurable, and they are labeled on the diagrams in the format `ENVIRONMENT_VAR=value`. Additionally, corresponding balancer configurations are shown to the right of the diagrams.
#let dev-env-figure(path, caption, balancer-config, commands) = {
figure(
grid(
rows: 2,
gutter: 20pt,
image(path),
align(left)[
#grid(
columns: 2,
gutter: 10pt,
[
`balancer.toml`
#text-box(
raw(balancer-config, lang: "toml"),
)
],
[
`Commands to run:`
#text-box(
raw(commands, lang: "bash"),
)
],
)
]
),
caption: caption,
)
}
#dev-env-figure(
"figures/dev-env/ports-1-monolith.svg",
[Diagram showing how ports for each process should be configured for development, shown on the top. Note that the values shown in this diagram are the default values, so you shouldn't need to set any environment variables to get this configuration. However, you will need this specific config file for the balancer to work, shown on the bottom left.],
"[discovery]\nmethod = \"manual\"\n\n[[discovery.monoliths]]\nhost = \"localhost\"\nport = 3002",
"# Terminal 0 - Balancer\ncargo run --bin ott-balancer-bin -- --config-path env/balancer.toml\n# Terminal 1 - Monolith\nyarn run start",
) <Figure::ports-1-monolith>
#dev-env-figure(
"figures/dev-env/ports-2-monolith.svg",
[Diagram showing the same thing as @Figure::ports-1-monolith, but with two Monoliths instead of one. Note that for the 2nd Monolith, you *will* need to set the environment variables, as shown on the bottom right.],
"[discovery]\nmethod = \"manual\"\n\n[[discovery.monoliths]]\nhost = \"localhost\"\nport = 3002\n\n[[discovery.monoliths]]\nhost = \"localhost\"\nport = 3004",
"# Terminal 0 - Balancer\ncargo run --bin ott-balancer-bin -- --config-path env/balancer.toml\n# Terminal 1 - Monolith 0\nyarn run start\n# Terminal 2 - Monolith 1\nPORT=3003 BALANCING_PORT=3004 yarn run start",
) <Figure::ports-2-monolith>
== Observability and Metrics
The Balancer, like the Monolith, exports Prometheus metrics at the `/api/status/metrics` endpoint. These metrics can be scraped by a Prometheus server and visualized using Grafana. Fly automatically scrapes metrics from all applications and provides a Prometheus data source for Grafana. They also host a Grafana instance for us, but we host our own Grafana instance so that we can have alerting enabled, and more control over configuration.
#figure(
image("figures/vis/prom-metrics-collection.svg", width: 70%),
caption: "Component diagram showing how the Balancer interacts with Fly, Prometheus, and Grafana. The same concept applies to the Monolith.",
) <Figure::prom-metrics-collection>
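For a self-hosted Prometheus outside of Fly's automatic scraping, a scrape job for this endpoint could be sketched roughly as follows; the target address here is hypothetical:

```yaml
scrape_configs:
  - job_name: "ott-balancer"
    metrics_path: "/api/status/metrics"
    static_configs:
      - targets: ["localhost:8081"] # hypothetical balancer address
```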
The Balancer also renders some text based on its current internal state at the `/api/balancing` endpoint. This can be useful for debugging and understanding the current state of the Balancer. Currently, this endpoint renders text in a human-readable format, but it could be modified to render JSON or another format instead. System visualization is specified in @Chapter::Visualization-Design.
== Message Routing <Section::MessageRouting>
The Balancer routes messages such that messages sent from a client end up at the correct Monolith, and vice versa. The Balancer uses the `BalancerContext` to keep track of the state of the Monoliths and the clients, protected with an `Arc<RwLock<T>>`. To minimize the amount of locking and hashmap lookups necessary to route messages, the Balancer sets up direct channels between tasks such that incoming client messages are sent directly to the appropriate Monolith, as shown in @Figure::balancer-channels-client-monolith.
#figure(
image("figures/balancer-channels-client-monolith.svg", width: 50%),
caption: "Structural diagram showing the channels set up by the Balancer, which objects own which ends of the channels, and which direction messages flow. Solid lines represent channels.",
) <Figure::balancer-channels-client-monolith>
https://github.com/schmidma/typst-workshop | https://raw.githubusercontent.com/schmidma/typst-workshop/main/README.md | markdown | Creative Commons Zero v1.0 Universal |
# Typst: The Modern Alternative to LaTeX
This repository contains a presentation on Typst, a modern, markup-based typesetting system designed to be a powerful and user-friendly alternative to LaTeX, Word, or Google Docs.
This presentation covers the basics of Typst, including how to use it, why it is advantageous, and advanced features like mathematical typesetting and scripting.
## Overview
The presentation is designed for a workshop and includes the following sections:
- What is Typst?
- When to Use Typst
- Why Typst?
- How to Use Typst
- Typst Basics
- Typst Advanced Features
## Contents
- `presentation.typ`: The full presentation in Typst format.
- `assets/`: Images and resources
- `examples/`: Sample Typst files used in the presentation.
## Feedback
Contributions, feedback, and suggestions for improvement are welcome.
Feel free to open an issue or submit a pull request.
## License
This presentation is marked with [CC0 1.0](https://creativecommons.org/publicdomain/zero/1.0/?ref=chooser-v1).
Additionally, I'd like to extend my thanks to the awesome Typst documentation, which served as inspiration for many of the explanations provided in this workshop.
https://github.com/wjwei-handsome/personal_poster_typst | https://raw.githubusercontent.com/wjwei-handsome/personal_poster_typst/main/README.md | markdown |
# personal_poster_typst
## Personal Poster template in typst
## Please install and learn the tutorial of [typst](https://github.com/typst/typst) first.
## Look up usage in `example.typ`
## On the content, just follow the typst syntax and write done.
## Chinese may not be supported
https://github.com/knuesel/typst-minideck | https://raw.githubusercontent.com/knuesel/typst-minideck/main/themes/simple.typ | typst | MIT License |
// Theme variants
#let variants = (
light: (
bg: white,
fg: rgb("#3c3c3c"),
),
dark: (
bg: rgb("#3c3c3c"),
fg: rgb("#eff1f3"),
),
)
// The paper and variant parameters must be set by the caller
#let template(page-size: none, variant: none, it) = {
show heading: set block(below: 1em)
let margin = calc.min(..page-size.values()) * 2.5 / 21 // same as typst default
set page(
width: page-size.width,
height: page-size.height,
margin: margin,
header-ascent: 0pt,
footer-descent: 0pt,
footer: context {
set text(0.8em)
set align(bottom+right)
pad(x: -margin+1cm, y: 1cm, counter(page).display())
},
fill: variants.at(variant).bg,
)
set text(
24pt,
fill: variants.at(variant).fg,
font: "Libertinus Sans",
)
it
}
// Layout for title slide: no page numbers, centered content
#let title-slide(slide, it) = {
set page(footer: none)
set align(horizon+center)
slide(it)
}
// Theme function. Called by minideck with page-size set.
#let simple(slide, page-size: none, variant: "light") = {
(
slide: slide,
title-slide: title-slide.with(slide),
template: template.with(page-size: page-size, variant: variant)
)
}
https://github.com/TypstApp-team/typst | https://raw.githubusercontent.com/TypstApp-team/typst/master/tests/typ/visualize/polygon.typ | typst | Apache License 2.0 |
// Test polygons.
---
#set page(width: 50pt)
#set polygon(stroke: 0.75pt, fill: blue)
// These are not visible, but should also not give an error
#polygon()
#polygon((0em, 0pt))
#polygon((0pt, 0pt), (10pt, 0pt))
#polygon.regular(size: 0pt, vertices: 9)
#polygon((5pt, 0pt), (0pt, 10pt), (10pt, 10pt))
#polygon(
(0pt, 0pt), (5pt, 5pt), (10pt, 0pt),
(15pt, 5pt),
(5pt, 10pt)
)
#polygon(stroke: none, (5pt, 0pt), (0pt, 10pt), (10pt, 10pt))
#polygon(stroke: 3pt, fill: none, (5pt, 0pt), (0pt, 10pt), (10pt, 10pt))
// Relative size
#polygon((0pt, 0pt), (100%, 5pt), (50%, 10pt))
// Antiparallelogram
#polygon((0pt, 5pt), (5pt, 0pt), (0pt, 10pt), (5pt, 15pt))
// Self-intersections
#polygon((0pt, 10pt), (30pt, 20pt), (0pt, 30pt), (20pt, 0pt), (20pt, 35pt))
// Regular polygon; should have equal side lengths
#for k in range(3, 9) {polygon.regular(size: 30pt, vertices: k,)}
---
// Error: 10-17 point array must contain exactly two entries
#polygon((50pt,))
https://github.com/HiiGHoVuTi/requin | https://raw.githubusercontent.com/HiiGHoVuTi/requin/main/main.typ | typst |
#import "template.typ": pset
#import "lib.typ": *
#show: pset.with(
title: "REQUIN",
author: "<NAME>",
date: datetime.today(),
// collaborators: ("<NAME>", "<NAME>")
)
#import "@preview/codly:1.0.0": *
#show: codly-init.with()
// -------------------- PREMIERE PAGE --------------------
#v(1fr)
#align(center)[
#text(size: 30pt)[REQUIN]
#v(10pt)
#text(size: 20pt)[*RE*\cueil de *QU*\estions d'*IN*\formatique]
]
#v(1fr)
#pagebreak()
*Difficulty levels*
// - #levels_emojis.at(0) : Question d'introduction. Prend entre quelques secondes et 1 minutes. Faisable de tete, il s'agit souvent de trouver un exemple ou un contre-exemple.
// - #levels_emojis.at(1) : Question demandant des méthodes classique, mais sans autre difficulté apparante. Très bien pour réviser le cours.
// - #levels_emojis.at(2) : Question relativement difficile, qui peut nécessiter un peu de travail et/ou des astuces relativement classique. Toutefois, avec un peu de persévérance, tout le monde y est capable. Très bien pour réviser des exercices. Il est fortement conseiller de les poser sur feuille.
// - #levels_emojis.at(3) : Question difficile du sujet, qui peut prendre entre une dizaine de minute pour les plus fort et plusieurs heures. Parfois, elles demandent de sortir des astuces peu communes. On ce doit de les poser sur feuille.
// - #levels_emojis.at(4) : Questions très très difficilles demandant beaucoup de recherche / connaissances externes. Demande plusieurs jours de travail.
// - #levels_emojis.at(5) : Questions ouvertes à la connaissance des autrices.
- #levels_emojis.at(0) : Question d'introduction servant à s'approprier l'énoncé
- #levels_emojis.at(1) : Question classique ou pouvant être résolue en déroulant une méthode simple
- #levels_emojis.at(2) : Question nécessitant un raisonnement plus complet ou une astuce, mais à la portée de tous
- #levels_emojis.at(3) : Question difficile du sujet demandant une réflexion posée ou une bonne intuition
- #levels_emojis.at(4) : Question très difficile pour les plus courageux qui n'ont pas peur de réfléchir longtemps
- #levels_emojis.at(5) : Question ouverte à la connaissance des autrices, ou de difficulté déraisonnable
*Notation de cet ouvrage*
- On utilisera $:=$ pour la définition.
- $NN = {0,1,...}$ est l'ensemble des entiers naturels, $QQ$ l'ensemble des rationnels et $RR$ l'ensemble des réels. On notera $NN^*$, $QQ^*$, $RR^*$ les ensembles privés de $0$, et $QQ_+$, $RR_+$ les ensembles positifs.
- On notera $not P$ la négation de la formule $P$.
- L'on notera $[n] := {1,...,n}$.
- L'union disjointe est notée $union.sq$.
- Le $(i+1)$ème élément d'un tableau $T$ est noté $T[i]$. Le préfixe de longueur $i+1$ est $T[... i]$.
- Par défaut, les graphes sont finis et non orientés.
- Le graphe complet à $n$ éléments est noté $K_n$.
- Soit $n,m in NN$, on note $K_(n,m)$ un graphe biparti $(U union.sq V, E)$ avec $|U| = n$ et $|V| = m$ où $E = U times V union V times U$.
- On dit que $G = (S,E)$ est un _sous-graphe_ de $G' = (V',E')$ si $S subset.eq V'$ et $E subset.eq E' sect S^2$.
- Pour $G = (S,E)$ un graphe, pour $V subset.eq S$, on notera $G[V] := (V, E sect V^2)$ le graphe _induit_ par $V$.
- Un _indépendant_ $I$ ou _anti-clique_ d'un graphe $G = (S,E)$ est tel que $G[I]$ ne contient pas d'arêtes.
- Soit $X$ un ensemble, on notera $cal(P)(X)$ l'ensemble des parties de $X$ et $cal(P)_f (X)$ l'ensemble des parties finies de $X$.
- Soit $Sigma$ un alphabet et $w in Sigma^*$, on notera $|w|_alpha$ pour $alpha in Sigma$ le nombre d'occurrences de $alpha$ dans $w$.
// -------------------- SOMMAIRE --------------------
#show outline.entry.where(
level: 1
): it => {
v(12pt, weak: true)
strong(it)
}
#outline(title: "Sommaire", depth: 2, indent: 10pt)
// setup heading to the custom one
#show heading: heading_fct
= Algorithmique
== KMP
#include "algo/kmp.typ"
== Tableaux Autoréférents
#include "algo/autoref.typ"
== Problème Subsetsum
#include "algo/subsetsum.typ"
== Fenêtre glissante
#include "algo/window.typ"
== Approximation dans les graphes
#include "algo/aprox.typ"
= Arbres & Graphes
== Mots univers
#include "graph/univers.typ"
== Bipartition induite
#include "graph/bipartition.typ"
== Coloration d'aretes
#include "graph/col_arr.typ"
== Théorème de Turán
#include "graph/turan.typ"
== Arbres croissants
#include "graph/croissant.typ"
== Lemme de König
#include "graph/konig.typ"
== Morphisme de Graphes
#include "graph/morph.typ"
== Graphes Infinis
#include "graph/infini.typ"
= Langages formels
== Langage permuté et inclusions
#include "lang/inclusions.typ"
== Automates d'arbres
#include "lang/auto_arbre.typ"
== Langages continuables
#include "lang/continuables.typ"
== Puissance et racine de langages
#include "lang/pow_sqrt.typ"
== Monoïde syntaxique
#include "lang/mon-syn.typ"
== Dangling else
#include "lang/dangling.typ"
= Théorie des jeux
== Nim à choix
#include "jeux/nim_set.typ"
== ChipLiar Game
#include "jeux/chip-liar.typ"
== ChipFiring Game
#include "jeux/chip-firing.typ"
= Calculabilité
== Calculabilité et représentation d'ensembles infinis
#include "calc/ens_fct.typ"
== Oméga de Chaitin
#include "calc/chaitin.typ"
== Espaces vectoriels calculables
#include "calc/ev.typ"
= Logique
== Compacité
#include "log/compacite.typ"
== SAT et typage
#include "log/sat-types.typ"
= Langages fonctionnels
== Lambda calcul pour les nuls
#include "fonc/lc-nuls.typ"
== Composition monadique
#include "fonc/comp-monade.typ"
== Optique des données
#include "fonc/optique.typ"
= Mathématiques pour l'informatique
== Monoïdes libres, langages et actions
#include "math/semigroupes-et-langages.typ"
== Monoïdes et relations de Green
#include "math/green.typ"
== Méthode probabiliste <meth-proba>
#include "math/methode-probabiliste.typ"
== Théorème du point fixe de Lawvere
#include "math/lawvere.typ"
== Séries génératrices et formelles
#include "math/series.typ"
#include "math/marche.typ"
== Ordinaux
#include "math/ordinaux.typ"
= Glossaire
#import "@preview/gloss-awe:0.0.5": *
#import "./glossary-pool.typ": glossary-pool
#pagebreak()
#set text(font: ("Arial", "Trebuchet MS"), size: 8pt)
#columns(2)[
#make-glossary(glossary-pool)
]
|
|
https://github.com/augustebaum/epfl-thesis-typst | https://raw.githubusercontent.com/augustebaum/epfl-thesis-typst/main/example/tail/appendix.typ | typst | MIT License | = An appendix
#lorem(100)
#pagebreak()
#lorem(100)
|
https://github.com/akagiyuu/math-document | https://raw.githubusercontent.com/akagiyuu/math-document/main/sum/sin.typ | typst | #set text(size: 20pt)
$ sin(x) = Im(e^(i x)) = (e^(i x) - e^(-i x))/(2i) $
$
sin(1) + sin(2) + ... + sin(n)
&= sum_(k=1)^(n) sin(k) \
&= Im(sum_(k=1)^(n) e^(i k)) \
&= Im(e^i (e^(i n) - 1)/(e^i - 1)) \
&= Im(
e^i (e^(i n/2)(e^(i n/2) - e^(-i n/2))) / (e^(i/2)(e^(i/2) - e^(-i/2)))
) \
&= Im(e^(i (n+1)/2) (e^(i n/2) - e^(-i n/2)) / (e^(i/2) - e^(-i/2))) \
&= Im(e^(i (n+1)/2) (sin(n/2)) / (sin(1/2))) \
&= (sin((n+1)/2)sin(n/2)) / (sin(1/2)) \
$
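The closed form derived above, $sum_(k=1)^n sin(k) = sin((n+1)\/2) sin(n\/2) \/ sin(1\/2)$, can be sanity-checked numerically. A minimal check (in Python, outside this Typst source; function names are chosen for illustration):

```python
import math

def sin_sum_direct(n):
    # Direct summation of sin(1) + sin(2) + ... + sin(n)
    return sum(math.sin(k) for k in range(1, n + 1))

def sin_sum_closed(n):
    # Closed form derived above: sin((n+1)/2) * sin(n/2) / sin(1/2)
    return math.sin((n + 1) / 2) * math.sin(n / 2) / math.sin(1 / 2)

# The two expressions agree to floating-point precision
for n in (1, 2, 10, 1000):
    assert math.isclose(sin_sum_direct(n), sin_sum_closed(n), abs_tol=1e-9)
```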
|
|
https://github.com/Starkillere/TIPE-detection-informations-cachees | https://raw.githubusercontent.com/Starkillere/TIPE-detection-informations-cachees/main/documentation-prototypage/definition_inductive_information.typ | typst | #line(length: 500pt)
#align(center)[
= Définition inductive de l'information par des matrices de n-uplets de nombres réels
]
\
#grid(
columns: (1fr, 1fr),
align(left)[
_par <NAME>_
],
align(right)[
_10/10/2024_
]
)
#line(length: 500pt)
#set heading(numbering: "1.1.1")
On s'intéresse ici à donner une définition inductive de ce qu'est une information, pour pouvoir, sans perte de généralité, faire de la stéganographie/stéganalyse sur une information sans avoir à spécifier son type (image, texte, vidéo...).
= Définition :
On définit une information comme étant une matrice de n-uplets de nombres réels.

Une information est définie sous sa forme dite *matricielle* de telle sorte que : \
Si $M in M_(l,p)$ est la représentation matricielle d'une information telle que le n-uplet de nombres à la position (1,1) de $M$ soit de taille $n$, alors :
- Tous les éléments de la matrice, i.e. les autres uplets, sont aussi de taille $n$
- La taille de $M$, notée $|M|$, est donnée par : $|M| = n×l×p$
- tous les n-uplets contiennent le même type d'éléments, et ces éléments sont tous de même type

On définit aussi des opérations sur ces matrices :
- Les opérations matricielles usuelles restent valables pour la représentation matricielle
- Il existe un algorithme permettant de faire le lien entre une information et sa forme matricielle.
- La représentation matricielle d'une information par un algorithme donné est unique
= Définition inductive d'une information :
== Cas de bases :
=== L'information vide :
On définit l'information vide comme étant l'information de base de taille $0$, notée $epsilon$ ; c'est la matrice avec $n×p$ 0-uplets.
=== Les briques de base de l'information
Pour tout $x in RR$, $[(x)]$ est la représentation matricielle d'une information par un algorithme.
= Note à moi même :
Toute information possède une unique représentation matricielle par un algorithme donné.
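À titre d'illustration seulement (esquisse hypothétique en Python, hors de la définition ci-dessus ; les noms sont choisis pour l'exemple), une représentation matricielle et sa taille $|M| = n×l×p$ peuvent se coder ainsi :

```python
def taille(M):
    """Taille |M| = n * l * p d'une représentation matricielle,
    où M est une liste de l lignes de p n-uplets de réels."""
    l = len(M)
    p = len(M[0]) if l else 0
    n = len(M[0][0]) if l and p else 0
    # Tous les uplets doivent être de la même taille n
    assert all(len(t) == n for ligne in M for t in ligne)
    return n * l * p

# Une information 2x2 de 3-uplets : |M| = 3 * 2 * 2 = 12
M = [[(0.0, 0.5, 1.0), (0.1, 0.2, 0.3)],
     [(1.0, 1.0, 1.0), (0.4, 0.5, 0.6)]]
```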
|
|
https://github.com/buxx/cv | https://raw.githubusercontent.com/buxx/cv/master/modules/skills.typ | typst | #import "../brilliant-CV/template.typ": *
#import "../src/utils.typ": *
#cvSection("Compétences", highlighted: true, letters: 3)
#cvSkill(
type: [Pile technique],
info: [#img("crab") Rust | #img("snake") Python | #img("typescript", size: 8.0pt) TypeScript/Javascript | #img("gnulinux", size: 8.0pt) GNU/Linux],
)
#cvSkill(
type: [Expertises],
info: [#img("keyboard") Développement | #img("building") Architecture | Intégration continue | Administration système | DevOps],
)
#cvSkill(
type: [Langues],
info: [#img("flag_fr") Français #hBar() #img("flag_en") Anglais]
)
\
|
|
https://github.com/ntjess/typst-tada | https://raw.githubusercontent.com/ntjess/typst-tada/main/src/tabledata.typ | typst | The Unlicense | #import "helpers.typ" as H
#import "display.typ": DEFAULT-TYPE-FORMATS
#let _get-n-rows(data) = {
if data.values().len() == 0 { 0 } else { data.values().at(0).len() }
}
#let _data-to-records(data) = {
let values = data.values()
let records = range(_get-n-rows(data)).map(ii => {
let row-values = values.map(arr => arr.at(ii))
H.dict-from-pairs(data.keys().zip(row-values))
})
records
}
#let _eval-expressions(data, field-info) = {
let computed-fields = field-info.keys().filter(
key => "expression" in field-info.at(key)
)
if computed-fields.len() == 0 {
return (data, field-info)
}
// new data = (a: (), b: (), ...)
// new values will be pushed to each array as they are computed
let out-data = data + H.default-dict(computed-fields, value: ())
let records = _data-to-records(data)
for row in records {
for key in computed-fields {
let scope = row
// Populate unspecified fields with default values
for (key, info) in field-info.pairs() {
if key not in row and "default" in info {
scope.insert(key, info.at("default"))
}
}
let expr = field-info.at(key).at("expression")
let default-scope = field-info.at(key).at("scope", default: (:))
let value = H.eval-str-or-function(
expr, scope: default-scope + scope, mode: "code", keyword: scope
)
out-data.at(key).push(value)
// In case this field is referenced by another expression
row.insert(key, value)
}
}
// Expressions are now evaluated, discard them so they aren't re-evaluated when
// constructing a followup table
for key in computed-fields {
let _ = field-info.at(key).remove("expression")
}
(out-data, field-info)
}
#let _infer-field-type(field, values) = {
if values.len() == 0 {
return "content"
}
let types = values.map(value => type(value)).dedup()
if types.len() > 1 and type(none) in types {
types = types.filter(typ => typ != type(none))
}
if types.len() > 1 {
panic("Field `" + field + "` has multiple types: " + repr(types))
}
repr(types.at(0))
}
#let _resolve-field-info(field-info, field-defaults, type-info, data) = {
// Add required, internal fields
field-info = (__index: (hide: true, type: "index")) + field-info
// Add fields that only appear in data, but weren't specified by the user otherwise
for field in data.keys() {
if field not in field-info {
field-info.insert(field, (:))
}
}
// Now that we have the comprehensive field list, add default properties that aren't
// specified, and properties attached to the type
for (field, existing-info) in field-info {
// Take any "values" passed and give them directly to data
if "values" in existing-info {
data.insert(field, existing-info.remove("values"))
}
let type-str = existing-info.at("type", default: field-defaults.at("type", default: auto))
    if type-str == auto {
type-str = _infer-field-type(field, data.at(field))
}
let type-info = DEFAULT-TYPE-FORMATS + type-info
let defaults-for-field = field-defaults + type-info.at(type-str, default: (:))
for key in defaults-for-field.keys() {
if key not in existing-info {
existing-info.insert(key, defaults-for-field.at(key))
}
}
field-info.insert(field, existing-info)
}
// Not allowed to have any fields not in the data
let extra-fields = field-info.keys().filter(key => key not in data)
if extra-fields.len() > 0 {
panic("`field-info` contained fields not in data: " + repr(extra-fields))
}
(field-info, data)
}
#let _validate-td-args(data, field-info, type-info, field-defaults, tablex-kwargs) = {
// dict of lists
// (field: (a, b, c), field2: (5, 10, 15), ...)
H.assert-is-type(data, dictionary, "data")
H.assert-rectangular-matrix(data.values())
// dict of dicts
// (field: (type: "integer"), field2: (display: "#text(red, value)"), ...)
H.assert-list-of-type(field-info, dictionary, "field-info")
// dict of dicts
// (currency: (display: format-usd), percent: (display: format-percent), ...)
H.assert-is-type(type-info, dictionary, "type-info")
H.assert-list-of-type(type-info, dictionary, "type-info")
// dict of values
// (type: integer, title: #field-title-case, ...)
H.assert-is-type(field-defaults, dictionary, "field-defaults")
// dict of values
// (auto-vlines: false, map-rows: () => {}, ...)
H.assert-is-type(tablex-kwargs, dictionary, "tablex-kwargs")
}
/// Constructs a TableData object from a dictionary of columnar data. See examples in
/// the overview above for metadata examples.
///
/// - data (dictionary): A dictionary of arrays, each representing a column of data. Every
/// column must have the same length. Missing values are represented by `none`.
/// - field-info (dictionary): A dictionary of dictionaries, each representing the properties
/// of a field. The keys of the outer dictionary must match the keys of `data`. The keys of
/// the inner dictionaries are all optional and can contain:
/// - `type` (string): The type of the field. Must be one of the keys of `type-info`.
/// Defaults to `auto`, which will attempt to infer the type from the data.
/// - `title` (string): The title of the field. Defaults to the field name, title-cased.
/// - `display` (string): The display format of the field. Defaults to the display format
/// for the field's type.
/// - `expression` (string, function): A string or function containing a Python expression that will be evaluated
/// for each row to compute the value of the field. The expression can reference any
/// other field in the table by name.
/// - `hide` (boolean): Whether to hide the field from the table. Defaults to `false`.
/// - type-info (dictionary): A dictionary of dictionaries, each representing the properties
/// of a type. These properties will be populated for a field if its type is given in
/// `field-info` and the property is not specified already.
/// - field-defaults (dictionary): Default values for every field if not specified in
/// `field-info`.
/// - tablex-kwargs (dictionary): Keyword arguments to pass to `tablex()`.
/// - ..reserved (dictionary): Reserved for future use; currently discarded.
#let TableData(
data: none,
field-info: (:),
type-info: (:),
field-defaults: (:),
tablex-kwargs: (:),
..reserved
) = {
if reserved.pos().len() > 0 {
panic("TableData() doesn't accept positional arguments")
}
_validate-td-args(data, field-info, type-info, field-defaults, tablex-kwargs)
let n-rows = _get-n-rows(data)
let initial-index = data.at("__index", default: range(_get-n-rows(data)))
let index = initial-index.enumerate().map(idx-val => {
let (ii, value) = idx-val
if value == none {
value = ii
}
value
})
// Preserve ordering if the user specified an index, otherwise put at the front
if "__index" in data {
data.__index = index
} else {
data = (__index: index, ..data)
}
(data, field-info) = _eval-expressions(data, field-info)
(field-info, data) = _resolve-field-info(field-info, field-defaults, type-info, data)
(
data: data,
field-info: field-info,
type-info: type-info,
field-defaults: field-defaults,
tablex-kwargs: tablex-kwargs,
)
}
#let _resolve-row-col-ctor-field-info(field-info, n-columns) = {
if field-info == auto {
field-info = range(n-columns).map(str)
}
if type(field-info) == array {
H.assert-list-of-type(field-info, str, "field-info")
field-info = H.default-dict(field-info, value: (:))
}
return field-info
}
/// Constructs a TableData object from a list of column-oriented data and their field info.
///
/// ```example
/// #let data = (
/// (1, 2, 3),
/// (4, 5, 6),
/// )
/// #let mk-tbl(..args) = to-tablex(from-columns(..args))
/// #set align(center)
/// #grid(columns: 2, column-gutter: 1em)[
/// Auto names:
/// #mk-tbl(data)
/// ][
/// User names:
/// #mk-tbl(data, field-info: ("a", "b"))
/// ]
/// ```
///
/// - columns (array): A list of arrays, each representing a column of data. Every column
/// must have the same length and columns.len() must match field-info.keys().len()
/// - field-info (dictionary,array): See the `field-info` argument to @@TableData for
/// handling dictionary types. If an array is passed, it is converted to a dictionary
/// of (key1: (:), ...).
/// - ..metadata (dictionary): Forwarded directly to @@TableData
/// -> TableData
#let from-columns(columns, field-info: auto, ..metadata) = {
if metadata.pos().len() > 0 {
panic("from-columns() only accepts one positional argument")
}
field-info = _resolve-row-col-ctor-field-info(field-info, columns.len())
if field-info.keys().len() != columns.len() {
panic(
"When creating a TableData from rows or columns, the number of fields must match "
+ "the number of columns, got: " + repr(field-info.keys()) + " fields and "
+ repr(columns.len()) + " columns"
)
}
let data = H.dict-from-pairs(field-info.keys().zip(columns))
TableData(data: data, field-info: field-info, ..metadata)
}
/// Constructs a TableData object from a list of row-oriented data and their field info.
///
/// ```example
/// #let data = (
/// (1, 2, 3),
/// (4, 5, 6),
/// )
/// #to-tablex(from-rows(data, field-info: ("a", "b", "c")))
/// ```
///
/// - rows (array): A list of arrays, each representing a row of data. Every row must have
/// the same length and rows.at(0).len() must match field-info.keys().len()
/// - field-info (dictionary, array): See the `field-info` argument to @@from-columns()
/// - ..metadata (dictionary): Forwarded directly to @@TableData
#let from-rows(rows, field-info: auto, ..metadata) = {
from-columns(H.transpose-values(rows), field-info: field-info, ..metadata)
}
/// Constructs a TableData object from a list of records.
///
/// A record is a dictionary of key-value pairs, Records may contain different keys, in
/// which case the resulting @@TableData will contain the union of all keys present with
/// `none` values for missing keys.
///
/// ```example
/// #let records = (
/// (a: 1, b: 2),
/// (a: 3, c: 4),
/// )
/// #to-tablex(from-records(records))
/// ```
///
/// - records (array): A list of dictionaries, each representing a record. Every record must
/// have the same keys.
/// - ..metadata (dictionary): Forwarded directly to @@TableData
/// -> TableData
#let from-records(records, ..metadata) = {
H.assert-is-type(records, array, "records")
H.assert-list-of-type(records, dictionary, "records")
let encountered-keys = H.unique-record-keys(records)
let data = H.default-dict(encountered-keys, value: ())
for record in records {
for key in encountered-keys {
data.at(key).push(record.at(key, default: none))
}
}
TableData(data: data, ..metadata)
}
/// Extracts a single value from a @@TableData that has exactly one field and one row.
///
/// ```example
/// #let td = TableData(data: (a: (1,)))
/// #item(td)
/// ```
///
/// - td (TableData): The table to extract a value from
/// -> any
#let item(td) = {
let filtered = H.remove-internal-fields(td.data)
if filtered.keys().len() != 1 {
panic(
"TableData must have exactly one field to call .item(), got: " + repr(td.data.keys())
)
}
let values = filtered.values().at(0)
if values.len() != 1 {
panic(
"TableData must have exactly one row to call .item(), got: " + repr(values.len())
)
}
values.at(0)
}
/// Creates a new @@TableData with only the specified fields and/or indexes.
///
/// ```example
/// #let td = TableData(data: (a: (1, 2), b: (3, 4), c: (5, 6)))
/// #to-tablex(subset(td, fields: ("a", "c"), indexes: (0,)))
/// ```
///
/// - td (TableData): The table to subset
/// - fields (array, str): The field or fields to keep. If `auto`, all fields are kept.
/// - indexes (array, int): The index or indexes to keep. If `auto`, all indexes are kept.
/// -> TableData
#let subset(td, indexes: auto, fields: auto) = {
let (data, field-info) = (td.data, td.field-info)
if type(indexes) == int {
indexes = (indexes,)
}
if type(fields) == str {
fields = (fields,)
}
// "__index" may be removed below, so save a copy for index filtering if needed
let index = data.__index
if fields != auto {
data = H.keep-keys(data, keys: fields)
field-info = H.keep-keys(field-info, keys: fields)
}
if indexes != auto {
let keep-mask = index.map(ii => ii in indexes)
let out = (:)
for (field, values) in data {
out.insert(field, values.zip(keep-mask)
.filter(pair => pair.at(1))
.map(pair => pair.at(0))
)
}
data = out
}
return TableData(..td, data: data, field-info: field-info)
}
/// Similar to @@subset(), but drops the specified fields and/or indexes instead of
/// keeping them.
///
/// ```example
/// #let td = TableData(data: (a: (1, 2), b: (3, 4), c: (5, 6)))
/// #to-tablex(drop(td, fields: ("a", "c"), indexes: (0,)))
/// ```
///
/// - td (TableData): The table to subset
/// - fields (array, str): Single string or array of strings with the fields to drop.
/// If `auto`, no fields are dropped.
/// - indexes (array): Single int or array of ints with the indexes to drop. If `auto`,
/// no indexes are dropped.
/// -> TableData
#let drop(td, fields: none, indexes: none) = {
let keep-keys = auto
if fields != none {
if type(fields) == str {
fields = (fields,)
}
keep-keys = td.data.keys().filter(key => key not in fields)
}
let keep-indexes = auto
if indexes != none {
if type(indexes) == int {
indexes = (indexes,)
}
keep-indexes = td.data.__index.filter(ii => ii not in indexes)
}
subset(td, fields: keep-keys, indexes: keep-indexes)
}
/// Converts rows into columns, discards field info, and uses `__index` as the new fields.
///
/// ```example
/// #let td = TableData(data: (a: (1, 2), b: (3, 4), c: (5, 6)))
/// #to-tablex(transpose(td))
/// ```
///
/// - td (TableData): The table to transpose
/// - fields-name (str): The name of the field containing the new field names. If `none`,
/// the new fields are named `0`, `1`, etc.
/// - ignore-types (boolean): Whether to ignore the types of the original table and
/// instead use `content` for all fields. This is useful when not all columns have the
/// same type, since a warning will occur when multiple types are encountered in the same
/// field otherwise.
/// - ..metadata (dictionary): Forwarded directly to @@TableData
/// -> TableData
#let transpose(td, fields-name: none, ignore-types: false, ..metadata) = {
let new-keys = td.data.at("__index").map(str)
let filtered = H.remove-internal-fields(td.data)
let new-values = H.transpose-values(filtered.values())
let data = H.dict-from-pairs(new-keys.zip(new-values))
let info = (:)
if ignore-types {
info = H.default-dict(data.keys(), value: (type: "content"))
}
if fields-name != none {
let (new-data, new-info) = ((:), (:))
new-data.insert(fields-name, filtered.keys())
new-info.insert(fields-name, (:))
data = new-data + data
info = new-info + info
}
// None of the initial kwargs make sense: types, display info, etc.
// since the transposed table has no relation to the original.
// Therefore, don't forward old `td` info
TableData(data: data, field-info: info, ..metadata)
}
#let _ensure-a-data-has-b-fields(td-a, td-b, a-name, b-name, missing-fill) = {
let (a, b) = (td-a.data, td-b.data)
let missing-fields = b.keys().filter(key => key not in a)
if missing-fields.len() > 0 and missing-fill == auto {
panic(
"No fill value was specified, yet `" + a-name + "` contains fields not in `" + b-name + "`: "
+ repr(missing-fields)
)
}
let fill-arr = (missing-fill, ) * _get-n-rows(a)
a = a + H.default-dict(missing-fields, value: fill-arr)
a
}
#let _merge-infos(a, b, exclude: ("data",)) = {
let merged = H.merge-nested-dicts(a, b)
for key in exclude {
let _ = merged.remove(key, default: none)
}
merged
}
#let _stack-rows(td, other, missing-fill: auto) = {
let data = _ensure-a-data-has-b-fields(td, other, "td", "other", missing-fill)
let other-data = _ensure-a-data-has-b-fields(other, td, "other", "td", missing-fill)
// TODO: allow customizing how metadata gets merged. For now, `other` wins but keep
// both
let merged-info = _merge-infos(td, other)
let merged-data = (:)
for key in data.keys() {
merged-data.insert(key, data.at(key) + other-data.at(key))
}
TableData(data: merged-data, ..merged-info)
}
#let _ensure-a-has-at-least-b-rows(td-a, td-b, a-name, b-name, missing-fill: auto) = {
let (a, b) = (td-a.data, td-b.data)
let (a-rows, b-rows) = (_get-n-rows(a), _get-n-rows(b))
  if missing-fill == auto and a-rows < b-rows {
    panic(
      "No fill value was specified, yet `" + a-name + "` has fewer rows than `" + b-name + "`: "
      + repr(a-rows) + " vs " + repr(b-rows)
    )
  }
  let pad-arr = (missing-fill, ) * calc.max(0, b-rows - a-rows)
for key in a.keys() {
a.insert(key, a.at(key) + pad-arr)
}
a
}
#let _stack-columns(td, other, missing-fill: auto) = {
other.data = H.remove-internal-fields(other.data)
let overlapping-fields = td.data.keys().filter(key => key in other.data)
if overlapping-fields.len() > 0 {
panic(
"Can't stack `td` and `other` column-wise because they have overlapping fields: "
+ repr(overlapping-fields) + ". Either remove or rename these fields before stacking."
)
}
let data = _ensure-a-has-at-least-b-rows(td, other, "td", "other", missing-fill: missing-fill)
let other-data = _ensure-a-has-at-least-b-rows(other, td, "other", "td", missing-fill: missing-fill)
let merged-data = data + other-data
let merged-info = _merge-infos(td, other)
TableData(data: merged-data, ..merged-info)
}
/// Stacks two tables on top of or next to each other.
///
/// ```example
/// #let td = TableData(data: (a: (1, 2), b: (3, 4)))
/// #let other = TableData(data: (c: (7, 8), d: (9, 10)))
/// #grid(columns: 2, column-gutter: 1em)[
/// #to-tablex(stack(td, other, axis: 1))
/// ][
/// #to-tablex(stack(
/// td, other, axis: 0, missing-fill: -4
/// ))
/// ]
/// ```
///
/// - td (TableData): The table to stack on
/// - other (TableData): The table to stack
/// - axis (int): The axis to stack on. 0 will place `other` below `td`, 1 will place
/// `other` to the right of `td`. If `missing-fill` is not specified, either the
/// number of rows or fields must match exactly along the chosen axis.
/// - #text(red)[*Note*!] If `axis` is 1, `other` may not have any field names that are
/// already in `td`.
/// - missing-fill (any): The value to use for missing fields or rows. If `auto`, an
/// error will be raised if the number of rows or fields don't match exactly along the
/// chosen axis.
/// -> TableData
#let stack(td, other, axis: 0, missing-fill: auto) = {
if axis == 0 {
_stack-rows(td, other, missing-fill: missing-fill)
} else if axis == 1 {
_stack-columns(td, other, missing-fill: missing-fill)
} else {
panic("Invalid axis: " + repr(axis))
}
}
/// Merges new metadata from `field-info` into the matching fields of `td`.
///
/// - td (TableData): The table whose field metadata will be updated
/// - replace (boolean): If `true`, the given `field-info` replaces the table's
///   existing metadata outright instead of being merged into it. Defaults to `false`.
/// - ..field-info (dictionary): Field properties to merge, keyed by field name
/// -> TableData
#let update-fields(td, replace: false, ..field-info) = {
let field-info = field-info.named()
if not replace {
field-info = H.merge-nested-dicts(td.field-info, field-info)
}
TableData(..td, field-info: field-info)
}
/// Shorthand to easily compute expressions on a table.
///
/// - td (TableData): The table to compute expressions on
/// - ..expressions (any): An array of expressions to compute.
/// - Positional arguments are converted to (`value`: (expression: `value`))
/// - Named arguments are converted to (`key`: (expression: `value`))
#let add-expressions(td, ..expressions) = {
let info = (:)
for expr in expressions.pos() {
info.insert(expr, (expression: expr))
}
for (field, expr) in expressions.named() {
info.insert(field, (expression: expr))
}
update-fields(td, ..info)
}
/// Returns a @@TableData with a single `count` column and one value -- the number of
/// rows in the table.
///
/// ```example
/// #let td = TableData(data: (a: (1, 2, 3), b: (3, 4, none)))
/// #to-tablex(count(td))
/// ```
///
/// - td (TableData): The table to count
/// -> TableData
#let count(td) = {
TableData(
..td,
data: (count: (_get-n-rows(td.data),)),
    // Erase field info, but types and defaults are still valid
field-info: (:),
)
} |
https://github.com/polarkac/MTG-Stories | https://raw.githubusercontent.com/polarkac/MTG-Stories/master/stories/016%20-%20Fate%20Reforged/005_The%20Doom%20of%20the%20Golden%20Fang.typ | typst | #import "@local/mtgstory:0.2.0": conf
#show: doc => conf(
"The Doom of the Golden Fang",
set_name: "Fate Reforged",
story_date: datetime(day: 04, month: 02, year: 2015),
author: "<NAME>",
doc
)
Tasigur lifted a banana from the bowl at his side and turned it over in his hands. A large brown spot marred the yellow skin. Wrinkling his nose in disgust, he prodded it with his thumb, feeling the squish of the fruit inside and the gentle tear of the softened skin. He glanced around, threw the spoiled fruit at the nearest human servant he could see, and picked a bright green grape from the bowl instead.
The herald was still talking, yammering on about the heroic deeds of the warrior who stood proudly behind him. Yala was the so-called hero's name, a woman from a backwater region practically in Abzan lands. With her stocky frame, she might as well have been an Abzan—probably had been, before the last Sultai conquest. Tasigur's lip curled at the thought.
#figure(image("005_The Doom of the Golden Fang/01.jpg", width: 100%), caption: [Tasigur, the Golden Fang | Art by Chris Rahn], supplement: none, numbering: none)
"And as the dragon became tangled in the zombies' nets," the herald was saying, "Yala loosed her crossbow and buried her poisoned bolt between the beast's scales!"
Tasigur yawned loudly.
The herald shifted on his feet, and the hero's husband, standing just behind her on her right, almost let himself scowl. Yala didn't move, her face an impassive mask. Tasigur smiled and the herald stammered on, rushing to finish before he completely lost the khan's attention.
"Wi—uh, with poison coursing through its veins, the dragon landed hard, its legs buckling under its weight, crushing zombies under its slimy belly. Yala rushed to its side as it let out its caustic breath in billowing, black clouds. Without hesitation, she plunged her spear into its chest. It writhed and shuddered, knocking her off her feet and spraying her with its acid blood, but her blow was enough. The beast died and the Sultai won the day!"
It took a moment for Tasigur to realize that the herald had finished his story. He forced his eyes back into focus and popped another grape into his mouth. Then he beckoned the hero of the Sultai forward.
"Yala," he purred. He saw her suppress a shudder and his mouth quirked into a grin. "Your heroic deeds are a credit to the Sultai. Please accept my gratitude."
Yala dropped to one knee and bowed her head. "I am honored, my khan."
"Yes, you are," Tasigur said. Turning his attention back to the fruit, he gestured for the herald to have her shown out. He yanked at the chain that bound the zombie—whose skull was part of the fruit bowl—to his throne, bringing it closer so he could more easily reach what looked like a very succulent pear.
Its sweet juice dribbled down his chin as the hero of the Sultai was escorted from the chamber.
#v(0.35em)
#line(length: 100%, stroke: rgb(90%, 90%, 90%))
#v(0.35em)
#figure(image("005_The Doom of the Golden Fang/02.jpg", width: 100%), caption: [Dutiful Return | Art by <NAME>], supplement: none, numbering: none)
Tasigur's stomach rumbled as a zombie approached his throne the next morning, carrying a tray laden with food. The zombie stopped a few strides away and waited as a living servant was brought out to sample the food. Tasigur shifted in his seat, impatient and hungry, angry that a mere servant—an Abzan wretch captured in a recent raid—should be allowed to taste his food before he could eat. It smelled exquisite.
The servant seemed delighted, savoring each bite with his eyes closed and a broad smile on his face. No doubt this was better fare than he'd ever had in his life. For a moment Tasigur congratulated himself—he was a kind and beneficent lord who allowed the fine pleasures of life to those who served him, even if their service was unwilling.
Then all the pleasure vanished from the servant's face and his eyes went wide. His hand clutched at his throat, and Tasigur leaned forward in his throne.
"What's happening?" the khan demanded.
Flecks of black foam appeared at the corners of the servant's mouth, and he fell to his knees, gasping for breath.
"Poison!" Tasigur shouted, springing to his feet.
The servant collapsed, writhing and shaking, and finally screamed—a long, shrill note that ended in a sick gurgle.
Silence gripped the room.
Tasigur scanned the faces of every servant and courtier around him, looking for some hint of treachery, some indication of who was responsible for this attempt on his life. Blank faces—tanned human faces pale with shock, the inscrutable scaled faces of the naga, the vacant eyes of the zombies—stared back at him, waiting for his command.
"Khudal," he said. He fell back into his throne. "Bring me Khudal."
Only silence greeted his command.
"I need Khudal!" he shouted.
A voice hissed in the shadows behind the throne. "My khan, one does not summon Khudal." Shidiqi, his closest advisor, slithered to his side.
"Am I not the Golden Fang?"
"Of course, my khan," the naga said.
"Yes, yes," Tasigur huffed. "Take me to him now!"
Shidiqi gestured to the shadows and six zombies came forward. Each group of three was linked by golden chains that passed through their chests, where their hearts should have been. The zombies took up positions alongside the throne and, on Shidiqi's command, stooped to lift it from the ground. The throne swayed, provoking an angry curse from Tasigur, but then steadied as the zombies followed the naga out of the audience chamber.
As they moved through dim corridors barely wide enough for the throne to pass, Tasigur fumed. Someone had tried to kill him—someone #emph[dared.] As if his tasters wouldn't foil the attempt. As if he couldn't identify the traitor. Someone was about to pay dearly for this idiotic treachery.
Darkness closed around Tasigur as the naga led him into the rakshasa's chamber. He heard the soft hiss of Shidiqi's invocation beckoning Khudal from whatever nether regions he occupied, and a chill washed down Tasigur's spine.
The rakshasa stepped into the dim half-circle of light spilling into the room from the hallway outside. "My lord." His voice was a rumbling growl befitting his feline head.
#figure(image("005_The Doom of the Golden Fang/03.jpg", width: 100%), caption: [Rakshasa Vizier | Art by <NAME>], supplement: none, numbering: none)
"Someone tried to kill me," Tasigur blurted.
"Yes," the demon said. "I have seen it."
"You saw? Then you know who put the poison in my food? I demand that you tell me this instant!"
"You demand it?" The rakshasa sounded amused, and Tasigur's fury boiled.
"Yes!" he shouted. "I am the Golden Fang, khan of the Sultai, and my demands must be answered!"
"Indeed," the rakshasa said, nodding the slightest of bows, showing more mockery than deference.
Tasigur's face flushed. "Tell me who is responsible."
"I possess the knowledge you seek," Khudal said. "I ask only the smallest favor in exchange for my service to my khan."
"Your duty is to serve your khan—you are bound to give me this information." Tasigur thought he saw the rakshasa and Shidiqi exchange a glance, and he softened his tone. "However, I am a kind and beneficent lord, and I show favor to those who please me." #emph[Even if their service is unwilling,] he thought. "What favor would you ask?"
The rakshasa's feline mouth twitched with what might have been a hint of a smirk. "After I name the traitor, my khan, I ask that you carry out whatever punishment you see fit—short of claiming the traitor's life. I would take that life myself, in order to feed on the traitor's soul."
Tasigur shrugged. "A small enough favor. Name the traitor."
"Yala, whom you honored before your throne yesterday, has done this."
Anger seized the khan, leaving him speechless and trembling. That this so-called hero should betray him after receiving his commendation, that his herald should bring someone so vile into his presence—it was too much to be borne. He waved his hands to the naga, who led the zombies in slowly turning the throne around. Khudal disappeared back into the shadows.
By the time the shuffling procession made it back to his audience chamber, Tasigur had found his voice.
"B<NAME>," he snapped. "And her husband. And that blathering herald."
#v(0.35em)
#line(length: 100%, stroke: rgb(90%, 90%, 90%))
#v(0.35em)
Tasigur shifted in his throne, carefully crafting the appearance of perfect unconcern. He pulled at a strand of his razor-tipped whip, bringing it in line with the other coils in his right hand. Then he draped his left arm over the arm of the throne just so. Satisfied, he turned his head—careful not to move anything else—to address the nearest human servant.
"How long has the traitor been waiting?"
"Three hours, my khan."
"Perfect. And her husband—is he prepared?"
Shidiqi slid closer to the back of his throne and hissed. "Yes, Khan."
"Excellent. Bring her in."
The large doors at the far end of the audience chamber ground open and a new herald led Yala back into his presence. Tasigur smiled, seeing fear and anger warring on her face despite her best efforts to keep calm. It was all he could do to sit still as she walked to the same position she had occupied the day before and the herald withdrew.
"Welcome once again, hero of the Sultai!" he said warmly.
She bowed deeply. "Thank you, my khan."
"I owe you an apology," Tasigur said. "In my impatience yesterday, eager for the tedious ceremony to come to an end, I neglected to give you a gift in recognition of your heroism."
"Your praise is gift enough."
"Oh, no. Let it never be said that the khan of the Sultai withholds what is due to his faithful servants!" He waved absently, beckoning a zombie forward.
The fresh corpse shuffled out from the shadows, carrying a velvet pillow. Tasigur watched Yala's face, savoring the anticipation.
#figure(image("005_The Doom of the Golden Fang/04.jpg", width: 100%), caption: [Sultai Emissary | Art by <NAME>], supplement: none, numbering: none)
The blood drained from her face as she recognized the zombie and fell to her knees. Staring aghast at the animated corpse of her husband, she mouthed his name, but no sound emerged.
"No, the hero of the Sultai need not kneel before me!" Tasigur said, beckoning two burly servants forward. They flanked Yala and lifted her roughly to her feet, bringing her face to face with her husband and his lifeless eyes. She turned her head away.
The zombie tried to shift its pillow into one hand, but dropped it. A necklace clattered on the stone floor.
"Clumsy oaf!" Tasigur snapped. "Pick it up!"
The zombie shuffled a few steps and lifted the necklace, then turned back to Yala. With a lurch, it put the necklace over her head, brushing her cheek with one cold hand. She winced and tried to pull away, but the servants held her fast.
"Please accept this token of my gratitude for your heroic deeds," Tasigur drawled.
Yala looked past her husband's lifeless eyes to glare at the khan. Sneering back at her, he snapped his fingers.
Yala's eyes and mouth went wide as the necklace tightened around her neck. She pulled free of the servants' grasp and clutched at the garrote, trying in vain to get her fingers under it.
Tasigur stood. "That is how you see yourself, is it not? A hero, a champion of the people, sneaking into your khan's palace in the dark of night to poison my food?"
He stepped down onto the back of the zombie that served as his footstool, prostrated on the floor before his throne.
"Did you think to claim my throne for yourself?" he said. "Y<NAME>, khan of the Sultai?"
She fell to her knees, and Tasigur snapped his fingers again. The necklace loosened, and Yala drew a long, gasping breath as she bent her purpling face to the floor.
"Bind her hands and show me her back," Tasigur whispered, and the servants flanking her roughly obeyed. He let the coils of his whip fall from his hand, the silver razors on its many ends clattering on the stone.
"No, my khan," Yala said, still panting. "I am loyal to the Golden Fang!"
His whip cracked and Yala screamed as the razors tore through silk and skin to draw their crimson lines across her back. He dangled the silver claws over the wounds, savoring her pain. Khudal wanted her alive, he reminded himself, so he could not enjoy too many lashes.
With the fourth lash, she could no longer scream. Sighing, he carefully coiled the whip again and laid it on his throne. The servants hauled her to her feet and held her up within the khan's reach.
Tasigur closed his eyes in a moment of concentration, and his hands began to glow with a purplish light. Grinning, he sank his fingers into Yala's head and sifted through her thoughts.
#figure(image("005_The Doom of the Golden Fang/05.jpg", width: 100%), caption: [Tasigur's Cruelty | Art by <NAME>], supplement: none, numbering: none)
So much delightful pain and horror, so much fear, so much burning hatred. He prodded at the hate he found, looking for her memories of her treachery. His smile faded. Yala remembered spending the evening celebrating with friends, falling asleep in her husband's arms, and greeting the morning with a smile born of well-earned pride. Nowhere could he find evidence that she had poisoned his food.
Snarling his frustration and disgust, he wrenched his fingers and snuffed out what little remained of her life.
Every light in the chamber went out at once, draping the chamber in utter darkness. Chaos erupted everywhere as servants tried to find the torches and light them again. And Tasigur heard a whisper in his ear.
"You swore that I could feast on her soul," Khudal said.
Tasigur clenched his fists. "You lied to me," he muttered.
"You have robbed me of what was rightfully mine."
A torch sputtered to life, and Tasigur turned to face the rakshasa. "You lied! Yala was not the poisoner!"
"No," Khudal said. "The poison was mine."
#figure(image("005_The Doom of the Golden Fang/06.jpg", width: 100%), caption: [Rakshasa's Disdain | Art by <NAME>], supplement: none, numbering: none)
"Yours? #emph[You] sought to kill me?"
"If I wanted you dead, young princeling, you would be dead."
"But you—the poison—"
"I wanted Yala dead, and now she is dead."
"You lied to me!" Tasigur said again, his voice growing louder as more torches drove the darkness away.
"Of course I did."
"All for the sake of killing that woman?"
"You're a petulant child, Tasigur," the rakshasa said. "Look at you, throwing a tantrum, quivering with impotent rage. And why? You got what you wanted—a victim to beat and kill. But I wanted her soul, and you have denied me that prize. That was a mistake you will long regret."
"No, you are the one who has made a mistake," Tasigur said. He raised his voice to make sure everyone in the room could hear. "With your lies and your poison you have shown your disloyalty. Seize the traitor!"
No one moved. The rakshasa snarled. "You are a fool as well as a child. Humans rule the Sultai only because the rakshasa and the naga allow it to be so. And your insolence will end that indulgence."
The razor-tipped whip lashed out from Tasigur's hand and cracked in the air where the rakshasa had been standing.
Khudal's voice seemed to come from the shadows, pooling in every corner of the room: "And so the Sultai fall."
Tasigur felt him go—the room seemed a little lighter, the air not quite so oppressive. He gathered the coils of his whip and sat on his throne. "Shidiqi!" he called.
The naga hissed in the darkness behind him, and his neck suddenly prickled with fear. Was he surrounded by treachery?
"Shidiqi, come and bow before me!"
"And so the Sultai fall," the naga echoed, and she was gone, too.
#figure(image("005_The Doom of the Golden Fang/07.jpg", width: 100%), caption: [Will of the Naga | Art by Wayne Reynolds], supplement: none, numbering: none)
#v(0.35em)
#line(length: 100%, stroke: rgb(90%, 90%, 90%))
#v(0.35em)
Shifting uncomfortably on his throne, Tasigur reached absently for a piece of fruit, but no bowl-skulled zombie stood ready at his left hand. All the zombies were gone. Without the naga and their necromancy, no one could keep them under control. Some of them had simply shuffled away. Some had gone berserk, clawing and biting at any living person in reach until soldiers put them down. And some had tugged at their chains until their rotting bodies crumbled and sloughed away to nothing.
He cleared his throat, the sound echoing much more loudly than he'd intended in the mostly empty hall. Half his palace soldiers were gone, either killed in the recent Abzan raids—intolerable, that they should have reached so far into Sultai lands!—or deserted, no longer afraid of his wrath.
#emph[And so the Sultai fall.] The words had echoed in his mind since Khudal and the naga left. The months since had been one long slide into a perfect fulfillment of those prophetic words. The Abzan and the Jeskai launched frequent raids, stealing Sultai goods and capturing Sultai people—or liberating members of their own clans who the Sultai had captured before, when they were strong. The people were hungry—#emph[I am hungry!] Tasigur thought—and with every new assault more soldiers deserted, more Sultai citizens welcomed the arrival of enemy forces.
As Tasigur's gurgling stomach announced its displeasure in the echoing hall, a young servant came to his side bearing a tray of food. Tasigur lifted a plate and brought it close to his face, peering at the scant morsels for anything that looked amiss. The naga schemed against him, he was sure, and he had no doubt that they would find a way to slip their venom into his food before long. He couldn't spare servants to taste his food for him anymore, so he speared a piece of unidentifiable meat on his knife and sniffed at it, then touched it gingerly with the tip of his tongue. It didn't smell or taste good, but it didn't seem toxic, and his stomach rumbled again in anticipation. Sighing, he put it in his mouth. #emph[Better to die of poison than starvation,] he thought.
No sooner had he swallowed that first bite than a herald—another new one—came bursting into the hall. "Dragon!" he cried, and a wave of terror washed through the room.
"Here?" Tasigur asked, standing on his wooden footstool.
As if in answer, a chorus of shouts erupted outside—cries of warning, the screams of the dying, incoherent sounds of terror—followed in a moment by the wafting smell of something acrid and vile.
#figure(image("005_The Doom of the Golden Fang/08.jpg", width: 100%), caption: [Palace Siege | Art by Slawomir Maniak], supplement: none, numbering: none)
"Close the doors!" Tasigur shouted. "Take me to the inner chambers!" Servants rushed to obey his commands as a handful of soldiers took up positions near the great doors, ready to defend their khan if the dragon came too close. Six shuffling servants—strong enough to lift his throne but unable to fight by virtue of other injuries—carried him out through the back, into his private chambers deep inside the great palace of the Sultai.
And there the khan cowered in fear until the noise died down.
#v(0.35em)
#line(length: 100%, stroke: rgb(90%, 90%, 90%))
#v(0.35em)
Tasigur stood on the banks of the Marang river. His feet had never touched earth before that day, and they were sinking into cold mud that oozed between his toes.
A mass of troops formed a semicircle around him. Across the river stood the first dragon Tasigur had ever seen, larger than he could ever have imagined—the progenitor of the entire Silumgar brood. Awe and terror churned in his gut, making his head swim.
"Great dragonlord Silumgar!" he shouted. His voice seemed small and weak in the forest, barely audible over the rushing water. He wasn't sure the dragon could hear him at all.
#figure(image("005_The Doom of the Golden Fang/09.jpg", width: 100%), caption: [Silumgar, the Drifting Death | Art by <NAME>], supplement: none, numbering: none)
"I bring you an offering!" he said anyway, and he beckoned behind him.
Six of his soldiers came forward, carrying the throne he had forsaken. The jade seat was laden with gold and jewels—a fortune beyond what any mere soldier could comprehend. Tasigur dearly hoped it was enough.
The dragon sniffed at the air and stretched his neck out over the water. Then he coiled back, stretching his wings wide as his legs curled beneath him, and sprang.
Tasigur felt death descending on him, blotting out the light of the sun. He fell to his knees and planted his hands in the mud. Death, the death of all things, the end of the Sultai and the end of the world, were all embodied in this magnificent scaled god. Not daring to lift his head, he watched his hands as the earth slowly swallowed them up.
// Source: https://github.com/fjebaker/rse-seminar-zig-2024 (MIT License)
#import "@preview/polylux:0.3.1": *
#import "tamburlaine.typ": *
#show: tamburlaine-theme.with(aspect-ratio: "4-3")
#show link: item => underline(text(blue)[#item])
#let citation(b) = {
[
#v(1fr)
#set text(size: 12pt)
#set align(right)
#b
#v(2em)
]
}
#enable-handout-mode(true)
#let zig_logo = read("./assets/zig-logo-light.svg").replace("#FFF", SECONDARY_COLOR.to-hex())
#title-slide(
title: [
Optimal approaches with #h(3.872em)
#move(dy: -2em, image.decode(zig_logo, width: 55%))
],
authors: ("<NAME>",),
where: "Cambridge RSE Seminar",
)
#slide(title:"About Me")[
#v(2em)
- Astrophysics Student
- University of Bristol
- I like programming languages
- Bread and butter is Julia
- Art and culture is Zig
#link("<EMAIL>") \
#link("https://github.com/fjebaker")
]
#slide(title:"About Zig")[
#grid(columns: (70%, 1fr),
[
- Created by *<NAME>*
- *Zig Software Foundation* charity
- 501(c)(3) not-for-profit corporation
- *Systems programming language* and *toolchain*
- Use case: software that *can't fail*
],[
#move(dx: 2cm, image("./assets/andrew-kelley.jpg", width: 70%))
])
#v(1em)#align(right)[
#quote("Fix the problems with C and no more") \
See more in #link("https://www.youtube.com/watch?v=Gv2I7qTux7g")[The Road to Zig 1.0 (YouTube)]
]
]
#slide(title:"Outline for this talk")[
#set text(size: 30pt)
#grid(columns: (40%, 1fr),
[
- Zig the builder
- Zig the language
- Zig the project
],
[
#move(dy: -0cm, dx: 4cm, scale(x: -100%, image("./assets/satisfaction.png", height: 75%)))
])
#citation[
Illustration: #link("https://victorianweb.org/art/illustration/thomson/6.html")
]
]
#slide()[
#set par(leading: 20pt)
#rect(fill: TEXT_COLOR, width: 100%, height: 85%, inset: (right: 1cm))[
#v(5em)
#align(right, text(size: 110pt, weight:"black", fill: SECONDARY_COLOR)[the \ builder])
#move(dy: -12cm, dx: 4cm, image.decode(zig_logo, width: 50%))
]
]
#slide()[
#set align(center)
#v(-18em)
#text(size: 420pt, weight: "black")[xz]
]
#slide()[
#set align(center)
#image("./assets/sock-puppets.png", height: 90%)
]
#slide()[
#set align(center)
#image("./assets/rewrite-in-rust.png")
]
#slide()[
#set align(center)
#v(-18em)
#text(size: 420pt, weight: "black")[xz]
]
#slide()[
#set align(center)
#image("./assets/to-host-backdoor.png", height: 80%)
#citation[
Source: #link("https://gist.github.com/thesamesam/223949d5a074ebc3dce9ee78baad9e27#design-specifics")
]
]
#slide()[
#set align(center)
#v(3em)
#image("./assets/autotools-horrible.png", width: 90%)
#citation[
Source: #link("https://felipec.wordpress.com/2024/04/04/xz-backdoor-and-autotools-insanity/") (I do not agree with the post)
]
]
#slide()[
#set align(center)
#image("./assets/cmake-landlock.png", width: 80%)
#image("./assets/evil-dot.png", width: 80%)
#citation[
Source: #link("https://github.com/tukaani-project/xz/commit/328c52da8a2bbb81307644efdb58db2c422d9ba7")
]
]
#slide(title: "Build systems are complex")[
To develop in *C* you need to know one of \[*autotools*, *GNU make*, *CMake*, *meson*, *ninja*, *Gradle*, *SCons*, *Shake*, *Tup*, *bazel*, *premake*, *Ceedling*, ... \]
#uncover("2-")[
To develop in *C* you need to know #text(fill: PRIMARY_COLOR)[*a second language*].
]
#grid(columns: (50%, 1fr),
[
#move(dx: 1cm, image("./assets/ancient-mariner.png", width: 60%))
],
align(right)[
#v(2em)
#uncover("3-")[
Zig builds are written in Zig. \
]
#uncover("4-")[
Zig also compiles C/C++ code.
]
]
)
#citation[
Illustration: #link("https://victorianweb.org/art/illustration/wehnert/34.html")
]
]
#slide(title: "Building xz with Zig")[
#grid(columns: (60%, 1fr),
[
#set text(size: 16pt)
#set par(leading: 6pt)
```zig
const std = @import("std");
pub fn build(b: *std.Build) !void {
const target = b.standardTargetOptions(.{});
const optimize = b.standardOptimizeOption(.{});
const liblzma = b.addStaticLibrary(.{
.name = "lzma",
.target = target,
.optimize = optimize,
});
// call to our utility function
defineMacros(liblzma);
liblzma.defineCMacro("LZMA_API_STATIC", null);
liblzma.linkLibC();
liblzma.addCSourceFiles(.{ .files = &LZMA_SOURCES });
// add include paths
liblzma.addIncludePath(.{
.path = "src/liblzma/api",
});
// ... lots more
// install into `./zig-out/lib/liblzma.a`
b.installArtifact(liblzma);
}
```],
[
Checked by the Zig compiler for syntax
- Explicit
- Simple
Utility functions
#move(dx: 2em, dy:3em, scale(x: -100%, image("./assets/ziggy.svg", width: 60%)))
]
)
]
#slide(title: "Linking to executables")[
#v(1em)
#set text(size: 20pt)
#show "linkLibrary": text.with(weight: "black", fill: PRIMARY_COLOR)
```zig
const xzexe = b.addExecutable(.{
.name = "xz",
.target = target,
.optimize = optimize,
});
xzexe.linkLibC();
xzexe.linkLibrary(liblzma);
xzexe.addCSourceFiles(.{ .files = &XZ_SOURCES });
xzexe.addIncludePath(.{ .path = "src/common" });
xzexe.addIncludePath(.{ .path = "src/liblzma/api/lzma/" });
xzexe.addIncludePath(.{ .path = "src/liblzma/api/" });
defineMacros(xzexe);
b.installArtifact(xzexe);
```
]
#slide(title: "Immediate benefits")[
Cached compilation for free
#grid(columns:(50%, 1fr),
[
#set block(spacing: 20pt)
#let highlight = text.with(weight: "black")
#show regex("^\$.*"): highlight
#show regex("user .*"): text.with(fill: PRIMARY_COLOR, weight: "black")
```
$ time zig build
CPU 197%
user 15.860
system 3.363
total 9.718
```
#v(0.3em)
#uncover("2-")[
Making some changes
#v(0.3em)
```
$ vim src/xz/main.c
$ time zig build
CPU 155%
user 0.107
system 0.206
total 0.201
```
]
],
[
#v(1em)
Sensible defaults
- UB sanitizer
- Warnings
- `-fno-omit-framepointer`
])
]
#slide(title: "Native debug build")[
#set block(spacing: 20pt)
#block(text(weight: "black", raw("$ zig build")))
#block(text(weight: "black", raw("$ file ./zig-out/bin/xz")))
#{
let highlight = text.with(fill: PRIMARY_COLOR, weight: "black")
show "x86-64": highlight
show "64-bit": highlight
```
./zig-out/bin/xz: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), dynamically linked, interpreter /lib64/ld-linux-x86-64.so.2, for GNU/Linux 4.4.0, with debug_info, not stripped
```
}
#block(text(weight: "black", raw("$ ldd ./zig-out/bin/xz")))
```
linux-vdso.so.1 (0x00007ffeaafd0000)
liblzma.so.0 => /home/lilith/Developer/xz/zig-cache/o/4b1bf6dae4eb1dd6464b6aa63cb44c53/liblzma.so.0 (0x000077b6b770d000)
libc.so.6 => /usr/lib/libc.so.6 (0x000077b6b7504000)
/lib64/ld-linux-x86-64.so.2 => /usr/lib64/ld-linux-x86-64.so.2 (0x000077b6b77f4000)
```
]
#slide(title: "Static aarch64 release")[
#set block(spacing: 20pt)
#block(text(weight: "black", raw("$ zig build \\")))
#block(text(weight: "black", raw(" -Doptimize=ReleaseSmall -Dtarget=aarch64-linux-musl")))
#block(text(weight: "black", raw("$ file ./zig-out/bin/xz")))
#{
let highlight = text.with(fill: PRIMARY_COLOR, weight: "black")
show "ARM aarch64": highlight
show "statically linked": highlight
show "64-bit": highlight
```
./zig-out/bin/xz: ELF 64-bit LSB executable, ARM aarch64, version 1 (SYSV), statically linked, stripped
```
}
#block(text(weight: "black", raw("$ ldd ./zig-out/bin/xz")))
```
not a dynamic executable
```
]
#slide(title: "Adding a build option")[
#{
set text(size: 22pt)
```zig
const liblzma_static = b.option(
bool,
"static",
"Build and link liblzma statically",
) orelse true;
if (liblzma_static) {
// configure things for a static build
} else {
// configure for shared library
}
```
}
#uncover("2-")[
#block(text(weight: "black", raw("$ zig build --help")))
```
Project-Specific Options:
...
-Dstatic=[bool] Build and link liblzma statically
```
]
]
#slide(title: "Target specifics")[
#v(0.1em)
Switch on the target:
#{
set text(size: 20pt)
let highlight = text.with(weight: "black")
```zig
switch (target.result.os.tag) {
.windows => {
// ...
exe_or_lib.addWin32ResourceFile(.{
.file = .{ .path = windows_rc_file },
.flags = &.{
"-I./src/liblzma/api/",
// ...
},
});
},
else => {
exe_or_lib.defineCMacro("HAVE_FUTIMES", null);
},
}
```
}
]
#slide(title: "DLLs and Windows")[
#set block(spacing: 20pt)
#block(text(weight: "black", raw("$ zig build -Dstatic=false -Dtarget=x86_64-windows")))
#block(text(weight: "black", raw("$ file ./zig-out/bin/xz")))
#{
let highlight = text.with(fill: PRIMARY_COLOR, weight: "black")
show "MS Windows": highlight
```
./zig-out/bin/xz.exe: PE32+ executable (console) x86-64, for MS Windows, 8 sections
```
}
#block(text(weight: "black", raw("$ ls ./zig-out/lib/")))
```
lzma.dll lzma.lib lzma.pdb
```
#v(2em)
#align(right)[
Build *from* any target *to* any target.
]
#citation[ I don't have a Windows machine so I couldn't test the binary. ]
]
#slide(title: "Installing the cross-compiler toolchain")[
Other cross compilers have a tedious setup: you need all sorts of gcc binaries, or MSVC hell
#uncover("2-")[
1. Download and decompress (using our freshly built xz)
]
#uncover("3-")[
2. #text(fill: PRIMARY_COLOR, [*That's it, we're done.*])
]
#v(-0.6em)
#uncover("4-")[
#align(center, image("./assets/install-zig.png", width: 70%))
]
]
#slide(title: "Made with Zig")[
Some projects that have official `build.zig` files
- #link("https://github.com/ggerganov/llama.cpp")[llama.cpp]
- #link("https://github.com/jedisct1/libsodium")[libsodium]
- #link("https://github.com/raysan5/raylib")[raylib]
- #link("https://github.com/PCRE2Project/pcre2")[PCRE2]
Unofficial wrappers:
- #link("https://github.com/andrewrk/ffmpeg")[FFMPEG]
- #link("https://github.com/kassane/openssl-zig")[OpenSSL]
- #link("https://github.com/andrewrk/libz")[libz]
- #link("https://github.com/andrewrk/pulseaudio")[pulseaudio]
]
#slide()[
#set par(leading: 20pt)
#rect(fill: TEXT_COLOR, width: 100%, height: 85%, inset: (right: 1cm))[
#v(5em)
#align(right, text(size: 110pt, weight:"black", fill: SECONDARY_COLOR)[the language])
#move(dy: -12cm, dx: 4cm, image.decode(zig_logo, width: 50%))
]
]
#slide()[
#v(-1.5em)
#grid(
columns:(60%, 1fr),
[
- *Errors*
- *Allocators*
- *Comptime functions*
- Call conventions
- `test`
- Pointers and slices
- Packed structs
- Struct of Arrays
- Anonymous structs
- Optionals
- Comptime types
- Enums
- Async
- Unions and tagged unions
- Arrays and hashmaps
- SIMD intrinsics `@Vector`
],[
#v(1.5em)
Focus on these today
#move(dx: -4cm, dy: -2.8cm, text(size: 88pt, "}"))
#image("./assets/zero.svg")
]
)
]
#slide(title: "Errors are everywhere")[
#align(center, image("./assets/never-do-malloc-like-this.png", width: 80%))
Errors need to be handled.
- A language should make it *easy* to handle them.
- Optimal: a language should make it *easier* to handle errors than to *ignore* them.
]
#slide()[
#v(2em)
#align(right)[
Unsafe code is code that doesn't handle all of its errors.
]
#v(1em)
#align(left)[
*Safe code* is code that *handles all of its errors*, not code that doesn't have any errors.
]
#move(dx: 3cm,
image("./assets/thackeray.jpg")
)
#citation[Illustration: #link("https://victorianweb.org/art/illustration/thackeray/cooke4.html")]
]
#slide(title: "Errors are unavoidable")[
#set text(size: 23pt)
```zig
fn overwrite(sub_path: []const u8) void {
const f = std.fs.cwd().openFile(
sub_path,
.{ .mode = .read_write },
);
f.seekTo(0);
_ = f.write("Hello start");
f.close();
}
```
Forced to handle `openFile` errors:
#align(center)[
#image("./assets/file-seek-error.png", width: 95%)
]
]
#slide(title: "Catching errors")[
#set text(size: 23pt)
#grid(columns: (50%, 1fr),
[
#show "catch": it => [#box(radius: 3pt, fill: PRIMARY_COLOR, text(weight: "black", fill: TEXT_COLOR, it))]
```zig
const f = std.fs.cwd().openFile(
sub_path,
.{ .mode = .read_write },
) catch |err| {
switch (err) {}
};
```
Error sets are part of the API
- Explicit sets
```zig
fn foo() error{OhNo,Sad}!void {
// ...
}
```
],
[
#v(2em)
#image("./assets/switch-cases.png")
])
]
#slide(title: "Bubbling errors up")[
#text(size: 23pt)[
#show "try": it => [#box(radius: 3pt, fill: PRIMARY_COLOR, text(weight: "black", fill: TEXT_COLOR, it))]
#show "!": it => [#box(radius: 3pt, fill: PRIMARY_COLOR, text(weight: "black", fill: TEXT_COLOR, it))]
```zig
fn overwrite(sub_path: []const u8) !void {
const f = try std.fs.cwd().openFile(
sub_path,
.{ .mode = .read_write },
);
try f.seekTo(0);
_ = try f.write("Hello start");
f.close();
}
```
]
#grid(columns:(55%, 1fr),
[
Let the caller handle the error
- Bubble back to `main` and report the error to the user *with a trace*
],[
#move(dx: 2cm, dy: -3cm, image("./assets/cat-tenniel.jpg", width: 80%))
])
#citation[Illustration: #link("https://victorianweb.org/art/illustration/tenniel/lookingglass/1.2.html")]
]
#slide(title: "Errors past main")[
#set align(center)
#v(1em)
#image("./assets/runtime-error.png")
]
#slide(title: "Defer")[
#text(size: 23pt)[
#show "defer": it => [#box(radius: 3pt, fill: PRIMARY_COLOR, text(weight: "black", fill: TEXT_COLOR, it))]
```zig
fn overwrite(sub_path: []const u8) !void {
const f = try std.fs.cwd().openFile(
sub_path,
.{ .mode = .read_write },
);
defer f.close();
try f.seekTo(0);
_ = try f.write("Hello start");
}
```
]
Initialisation and destruction are *right next to each other*.
#set text(size: 20pt)
```zig
std.debug.print("Outer", .{});
defer std.debug.print("Outer goodbye", .{});
{
std.debug.print("Inner", .{});
defer std.debug.print("Inner goodbye 1", .{});
defer std.debug.print("Inner goodbye 2", .{});
}
```
]
#slide(title: "Errdefer")[
Caller now obtains the file handle and must close.
#grid(columns: (60%, 1fr),
[
#show "errdefer": it => [#box(radius: 3pt, fill: PRIMARY_COLOR, text(weight: "black", fill: TEXT_COLOR, it))]
#text(size: 22pt)[
```zig
fn overwrite(sub_path: []const u8) !File {
const f = try std.fs.cwd().openFile(
sub_path,
.{ .mode = .read_write },
);
errdefer f.close();
try f.seekTo(0);
_ = try f.write("Hello start");
return f;
}
```
If anything goes wrong we have *cleanup*]],
[#v(2cm) #image("./assets/doormouse.jpg", height: 45%)])
#citation[
Illustration: #link("https://victorianweb.org/art/illustration/tenniel/alice/7.3.html")
]
]
#slide(title: "No hidden allocations")[
Allocating functions in Zig have an interface like
```zig
fn foo(allocator: Allocator) !void {
// do memory things
}
```
Allocators are *passed* to functions.
```zig
const slice = try allocator.alloc(u8, 10);
```
Allocations *can fail*.
#v(2em)
#align(right)[
Possible to have different allocators for different things or different scopes.
]
]
#slide(title: "Memory leaks")[
```zig
var gpa = std.heap.GeneralPurposeAllocator(.{}){};
defer _ = gpa.deinit();
const allocator = gpa.allocator();
```
Trace of the leak, and where it was allocated:
#align(center)[
#image("./assets/memory-leak.png", width: 80%)
]
]
#slide(title: "Segfaults")[
Use after free:
#align(center)[
#image("./assets/segfault.png", width: 80%)
]
Double free:
#align(center)[
#image("./assets/double-free.png", width: 80%)
]
]
#slide(title: "Allocation patterns")[
```zig
const slice = try allocator.alloc(u32, num);
defer allocator.free(slice);
```
Structs may hold onto their allocator to free themselves
```zig
var list = std.ArrayList(f32).init(allocator);
// the allocator is held by the array list
defer list.deinit();
```
*Arena allocators* for performance and simplicity
```zig
var mem = std.heap.ArenaAllocator.init(allocator);
defer mem.deinit();
_ = try mem.allocator().alloc(u32, 1024);
_ = try mem.allocator().alloc(u32, 1024);
```
]
#slide(title: [#uncover("4-")[C Metaprogramming]])[
#block[
#set raw(lang: "c")
#raw("#include <stdlib.h>\n")
#uncover("3-", raw("#define abs(x) (x < 0 ? -x : x)\n"))
#raw("int main() {\n int x = -2;\n return abs(++x);\n}\n")
]
#uncover("2-")[
```
$ gcc -Wall -Wextra -Wpedantic ./a.c && ./a.out ; echo "$?"
0
```
]
#v(-2em)
#uncover("5-")[
#align(right)[
#quote[The C preprocessor is a foot gun]
]
#v(-2em)
#move(dx: 2em)[
#image("./assets/footguns.png", width: 30%)
]
]
#uncover("5-")[
#citation[
Worth reading: #link("https://gcc.gnu.org/onlinedocs/gcc-6.3.0/cpp/Macro-Pitfalls.html#Macro-Pitfalls")
]
]
]
#slide(title: "Comptime")[
Execute code (explicitly) at *compile time*
```zig
// string literal
const name = "<NAME>";
// labeled block explicitly comptime
const e_count = comptime label: {
var count: usize = 0;
for (name) |c| {
if (c == 'e') count += 1;
}
break :label count;
};
```
]
#slide(title: "Runtime")[
#set align(center)
#image("./assets/shelagh-runtime.png", width: 90%)
#citation[
#link("https://godbolt.org/z/Paja3d7dE")
]
]
#slide(title: "Comptime")[
#set align(center)
#image("./assets/shelagh-comptime.png", width: 90%)
#citation[
#link("https://godbolt.org/z/s7xzKqPvs")
]
]
#slide(title: "A glance at generics")[
Compile time execution _is_ metaprogramming:
#set text(size: 16pt)
#grid(columns: (40%, 1fr),
[
```zig
const FlagInfo = struct {
required: bool = false,
name: []const u8 = "flag",
arg_id: u32,
arg_count: u32 = 0,
};
```
#move(dx: 1.5cm, scale(x:-100%, image("./assets/honest-work.jpg", width: 55%)))
],[
```zig
fn numArgFields(comptime T: type) usize {
const info = @typeInfo(T);
switch (info) {
.Struct => {},
else => @compileError("Must be a struct."),
}
var arg_count: usize = 0;
inline for (info.Struct.fields) |field| {
if (std.mem.startsWith(u8, field.name, "arg_")) {
arg_count += 1;
}
}
return arg_count;
}
```
])
#citation[
Illustration: #link("https://victorianweb.org/art/illustration/barnes/21.html")
]
]
#slide()[
#v(3em)
#align(center)[
#image("./assets/compiler-error.png", width: 80%)
]
Compile time error message
- Reference trace
- Information about the specific build command that triggered the error
]
#slide()[
#set par(leading: 20pt)
#rect(fill: TEXT_COLOR, width: 100%, height: 85%, inset: (right: 1cm))[
#v(5em)
#align(right, text(size: 110pt, weight:"black", fill: SECONDARY_COLOR)[the \ project])
#move(dy: -12cm, dx: 4cm, image.decode(zig_logo, width: 50%))
]
]
#slide(title: "Zen of Zig")[
#v(1em)
#set text(size: 22pt)
- Communicate intent precisely.
- Edge cases matter.
- Favor reading code over writing code.
- Only one obvious way to do things.
- *Runtime crashes are better than bugs.*
- *Compile errors are better than runtime crashes.*
- Incremental improvements.
- Avoid local maximums.
- Reduce the amount one must remember.
- Focus on code rather than style.
- *Resource allocation may fail; resource deallocation must succeed.*
- *Memory is a resource.*
- Together we serve the users.
]
#slide()[
#v(-2em)
#align(right, par(leading: 25pt, text(size: 90pt, weight: "black")[Optimal approaches to develop#text(fill:PRIMARY_COLOR)[ing]]))
]
#slide()[
#align(center)[
#image("./assets/lorris-cro-talk.png", width: 80%)
        #text(size: 22pt)[Interview with Loris Cro of the Zig Software Foundation (ZSF) \ #link("https://www.youtube.com/watch?v=5_oqWE9otaE")]
]
]
#slide(title: "Notable projects")[
#v(2em)
#align(center)[
#image.decode(read("./assets/logo-with-text-white.svg").replace("white", TEXT_COLOR.to-hex()), width: 70%)
#image("./assets/bun.png", width: 20%)
]
]
#slide()[
#v(-2em)
#align(right, par(leading: 25pt, text(size: 90pt, weight: "black")[Optimal approaches to develop#text(fill:PRIMARY_COLOR)[ing]]))
]
#slide()[
#v(-2em)
#align(right, par(leading: 25pt, text(size: 90pt, weight: "black")[Optimal approaches to develop#text(fill:PRIMARY_COLOR)[ers]]))
]
#slide()[
#set align(center)
#v(-18em)
#text(size: 420pt, weight: "black")[xz]
]
#slide()[
#v(3em)
#align(center)[
#image("./assets/struggle-maintain.png", width: 80%)
]
]
#slide()[
#set align(center)
#v(-18em)
#text(size: 420pt, weight: "black")[xz]
]
#slide(title: "Programming is emotional")[
#align(center,
image("./assets/c-swearwords.jpg", height: 70%)
)
]
#slide()[
#align(center)[
#link("https://softwareyoucan.love")[
#set text(size: 50pt)
#text("https://")#text(weight: "black", fill: TEXT_COLOR, "software")#text(weight: "black", fill: PRIMARY_COLOR, "you")#text(weight: "black", fill: TEXT_COLOR, "can")#text(".")#text(weight: "black", fill: PRIMARY_COLOR, "love")
]
]
#v(-1cm)
#align(center, image("./assets/sycl-vancouver.png", width: 70%))
#citation[
Banner art from #link("https://softwareyoucanlove.ca/")
]
]
#slide(title: "Thanks")[
Learn more about Zig #link("https://ziglang.org/")
- Learn Zig by fixing little problems \ #link("https://github.com/ratfactor/ziglings")
#v(2em)
- Zig NEWS and blogs #link("https://zig.news/")
- Zig SHOWTIME #link("https://www.youtube.com/@ZigSHOWTIME")
#text(size: 18pt)[
Source for slides: #link("https://github.com/fjebaker/rse-seminar-zig-2024")
]
#v(1fr)
#align(right, scale(x: -100%, image("./assets/ziggy.svg", width: 30%)))
]
|
https://github.com/davidedellagiustina/uniud-slides | https://raw.githubusercontent.com/davidedellagiustina/uniud-slides/master/src/main.typ | typst | MIT License | // Usage example of the uniud-slides template
// File: ./src/main.typ
#import "./uniud-slides/template.typ": *
// +-----------------------+
// | Presentation metadata |
// +-----------------------+
#show: uniud-slides.with(
lang: "en", // Supported: 'en', 'it'
department: "dmif", // Supported: 'dmed', 'dill', 'di4a', 'dies', 'disg', 'dmif', 'dium', 'dpia'
department-override: "", // Optional, overrides `department` (if yours is unsupported or you prefer a custom text)
course: "Computer Science",
academic-year: "2023-24",
date: datetime(day: 15, month: 3, year: 2024), // Date of the presentation
title: "My Awesome Presentation",
subtitle: "", // Optional
candidate: (
name: "<NAME>",
),
supervisor: (
title: "Prof.",
name: "<NAME>",
),
co-supervisor: ( // Optional, leave `name` empty to omit
title: "Dott.",
name: "",
),
)
// Load bibliography entries
// Can be referenced in slides with e.g. `#cite(<entry>, form: "...")`
#show bibliography: none
#bibliography("bibliography.yaml", style: "springer-basic-author-date")
// +--------+
// | Slides |
// +--------+
#slide[
#set align(horizon)
= Title
This is an awesome slide.
]
// ...
|
https://github.com/SnowManKeepsOnForgeting/NoteofModernControlTheory | https://raw.githubusercontent.com/SnowManKeepsOnForgeting/NoteofModernControlTheory/main/Homework/Homework_4.typ | typst | #import "@preview/physica:0.9.3":*
#import "@preview/i-figured:0.2.4"
#let xb = $bold(x)$
#let ub = $bold(u)$
#let Qb = $bold(Q)$
#let xbd = $accent(bold(x),dot)$
+ Determine the controllability of the following systems:
(1) $xbd = mat(delim: "[",1,1;1,0)xb + mat(delim: "[",1;0)ub$
(2) $xbd = mat(delim: "[",0,1,0;0,0,1;-2,-4,-3) xb + mat(delim: "[",1,0;0,1;-1,1)ub$
(3) $xbd = mat(delim: "[",-3,1,0;0,-3,1;0,0,-1)xb + mat(delim: "[",1,-1;0,0;2,0)ub$
(4) $xbd = mat(delim: "[",lambda_1,1,0,0;0,lambda_1,0,0;0,0,lambda_1,0;0,0,0,lambda_1) xb + mat(delim: "[",0;1;1;1)ub$
(5) $xbd = mat(delim: "[",0,4,3;0,20,16;0,-25,-20)xb + mat(delim: "[",-1;3;0)ub$
Solution: (1) We use Kalman's controllability criterion to determine the controllability of the system.
We have:
$
bold(Q)_c = mat(delim: "[",bold(B),bold(A B))\
= mat(delim: "[",1,1;0,1)\
rank(bold(Q)_c) = 2=n\
$
Thus the system is controllable.
(2)
$
bold(Q)_c = mat(delim:"[", bold(B), bold(A B), bold(A^2 B))\
    = mat(delim: "[",1,0,0,1,-1,1;0,1,-1,1,1,-7;-1,1,1,-7,1,15)\
rank(bold(Q)_c) = 3 = n\
$
Thus the system is controllable.
(3) We use PBH's controllability criterion to determine the controllability of the system. The eigenvalues of the matrix $A$ are $-3,-3,-1$. We have:
$
lambda_(1,2) = -3,lambda_3 = -1\
rank(mat(delim: "[",lambda_(1,2) I-A,B)) =rank(mat(delim: "[",0,-1,0,1,-1;0,0,-1,0,0;0,0,-2,2,0)) = 3=n\
rank(mat(delim: "[",lambda_3 I-A,B)) = rank(mat(delim: "[",2,-1,0,1,-1;0,2,-1,0,0;0,0,0,2,0)) = 3=n\
$
Thus the system is controllable.
(4) We use PBH's controllability criterion to determine the controllability of the system. All eigenvalues of the matrix $A$ are $lambda_1$. We have:
$
rank(mat(delim: "[",lambda_1 I-A,B)) = rank(mat(delim: "[",0,-1,0,0,0;0,0,0,0,1;0,0,0,0,1;0,0,0,0,1)) = 2<n\
$
Thus the system is not controllable.
(5) We use Kalman's controllability criterion to determine the controllability of the system.
$
Qb_c = mat(delim: "[",bold(B),bold(A B),bold(A^2 B))\
    = mat(delim: "[",-1,12,15;3,60,0;0,-75,0)\
rank(Qb_c) = 3 = n\
$
Thus the system is controllable.
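As an illustrative aside (not part of the original solution), the Kalman controllability tests above can be cross-checked numerically. The short script below rebuilds $Q_c = mat(delim: "[", bold(B), bold(A B), bold(A^2 B))$ for systems (2) and (5) in exact rational arithmetic and confirms full rank; all names in it are ad hoc, not from any course library.

```python
# Sanity check for the Kalman controllability tests above (systems (2) and (5)).
# Pure Python with exact rational arithmetic; no external packages assumed.
from fractions import Fraction

def matmul(A, B):
    # Plain dense matrix product of nested-list matrices.
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def rank(M):
    # Rank via Gauss-Jordan elimination over the rationals (no rounding error).
    M = [[Fraction(x) for x in row] for row in M]
    r = 0
    for c in range(len(M[0])):
        piv = next((i for i in range(r, len(M)) if M[i][c] != 0), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(len(M)):
            if i != r and M[i][c] != 0:
                f = M[i][c] / M[r][c]
                M[i] = [x - f * y for x, y in zip(M[i], M[r])]
        r += 1
    return r

def controllability_matrix(A, B):
    # Q_c = [B, AB, ..., A^(n-1) B], concatenated column-wise.
    blocks, P = [B], B
    for _ in range(len(A) - 1):
        P = matmul(A, P)
        blocks.append(P)
    return [sum((blk[i] for blk in blocks), []) for i in range(len(A))]

A2 = [[0, 1, 0], [0, 0, 1], [-2, -4, -3]]
B2 = [[1, 0], [0, 1], [-1, 1]]
A5 = [[0, 4, 3], [0, 20, 16], [0, -25, -20]]
B5 = [[-1], [3], [0]]

print(rank(controllability_matrix(A2, B2)))  # 3 -> controllable
print(rank(controllability_matrix(A5, B5)))  # 3 -> controllable
```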
|
|
https://github.com/Dherse/ugent-templates | https://raw.githubusercontent.com/Dherse/ugent-templates/main/slides/main.typ | typst | MIT License | #import "@preview/polylux:0.3.1": *
#import "@preview/codly:0.1.0": *
#import "ugent-theme.typ": *
// Get the current date, you can also replace it with your
// presentation's date:
// #let date = datetime(year: 2023, month: 06, day: 29)
#let date = datetime.today()
// Configure the slides:
#show: ugent-theme.with(
// The authors of the presentation
authors: "<your full name>",
// The short version of your name (e.g. for the footer)
short-authors: "<shortened version of your name>",
// The role you have in the University
role: [ Student ],
// The title of the presentation
title: [ \<full thesis title\> ],
// The subtitle of the presentation
subtitle: [
Master's thesis defence - \<full name\> - \<presentation date\>
#linebreak()
Promoters: \<promoters names & titles\>
],
// The short title of the presentation (e.g. for the footer)
short-title: [ Defence of Master's thesis ],
// The date of the presentation
date: date,
// The email address you want to display on the slides (or `none`)
  email: "<EMAIL>",
// The mobile number you want to display on the slides (or `none`)
mobile: "<your mobile number>",
// The department you are part of
dept: "<your department>",
// The research group you are part of
research-group: "<your research group>",
// The link to your LinkedIn profile
linkedin: "<your linkedin profile>",
// Set this to true to generate a handout version of the slides.
handout: false,
// Whether to display a small progress bar at the bottom of the slides.
progress-bar: false,
)
// Configure codly
#show: codly-init.with()
#codly()
// UGent logo
#corporate-logo()
// Global title slide
#title-slide()
// Print a nice outline of the presentation
#outline-slide()
// Start of the first section
#section-slide("Introduction")
// We create a simple slide.
#slide(title: "Hello, world!")[
- Welcome to this demo
- Of the UGent theme for Typst
- Made with Polylux
][
- It can even span two columns!
- Just like this
- How awesome is that?
]
#slide(title: "Codly")[
```rs
fn main() {
println!("Hello, world!");
}
```
]
#slide(title: "One by one")[
#line-by-line[
- This
- Slide
- Is
- Awesome
]
]
#slide(title: "And a pretty image", kind: "image")[
#image("assets/programmable-pic-hierarchy.png")
]
#image-slide(title: "And a pretty image", kind: "content,image")[
And a caption!
][
#image("assets/big_circuit.png", width: 100%)
]
#slide(title: "Two contents size by side!", colwidths: (auto, 1fr))[
- A list
- With a picture
- On the side,
- Wrapped in a figure!
][
#figure(
caption: [
A hierarchy of programmable PICs courtesy of Prof. dr. ir. <NAME>
],
image("assets/programmable-pic-hierarchy.png")
)
]
// Final end slide
#end-slide() |
https://github.com/andreasKroepelin/TypstJlyfish.jl | https://raw.githubusercontent.com/andreasKroepelin/TypstJlyfish.jl/main/assets/logo.typ | typst | MIT License | #set page(width: auto, height: auto, margin: 15pt)
#set text(font: "Bellota", weight: "bold", size: 3cm, fill: rgb("#4063d8"))
#set align(horizon)
#stack(
dir: ltr,
image("jellyfish.svg"),
[Jlyfish]
)
|
https://github.com/typst/packages | https://raw.githubusercontent.com/typst/packages/main/packages/preview/showybox/1.1.0/lib/sections.typ | typst | Apache License 2.0 | /*
* ShowyBox - A package for Typst
* <NAME> and Showybox Contributors (c) 2023
*
* lib/sections.typ -- The package's file containing all the
* internal functions for drawing sections.
*
* This file is under the MIT license. For more
* information see LICENSE on the package's main folder.
*/
#import "func.typ": *
/*
* Function: showy-title()
*
* Description: Returns the title's block
*
* Parameters:
* + frame: The dictionary with frame settings
 * + title-style: The dictionary with title styles
* + title: Title of the showybox
*/
#let showy-title( frame, title-style, title ) = {
/*
   * Properties independent of `boxed`
*/
let props = (
spacing: 0pt,
fill: frame.at("title-color", default: black),
inset: showy-section-inset("title", frame)
)
/*
   * Properties dependent on `boxed`
*/
if title-style.at("boxed", default: false) == true {
props = props + (
width: auto,
radius: frame.at("radius", default: 5pt),
stroke: showy-stroke(frame),
)
} else {
props = props + (
width: 100%,
radius: (top: frame.at("radius", default: 5pt)),
stroke: showy-stroke(frame, bottom: title-style.at("sep-thickness", default: 1pt))
)
}
return block(
..props,
align(
title-style.at("align", default: left),
text(
title-style.at("color", default: white),
weight: title-style.at("weight", default: "bold"),
title
)
)
)
}
/*
* Function: showy-body()
*
* Description: Returns the body's block
*
* Parameters:
* + frame: The dictionary with frame settings
 * + body-style: The dictionary with body styles
* + sep: The dictionary with sep styles
* + body: Body content
*/
#let showy-body( frame, body-style, sep, ..body ) = block(
width: 100%,
spacing: 0pt,
inset: showy-section-inset("body", frame),
align(
body-style.at("align", default: left),
text(
body-style.at("color", default: black),
body.pos()
.map(block.with(spacing:0pt))
.join(
block(
spacing: sep.at("gutter", default: .65em),
align(
left, // Avoid alignment errors
showy-line(frame)(
stroke: (
paint: frame.at("border-color", default: black),
dash: sep.at("dash", default: "solid"),
thickness: sep.at("thickness", default: 1pt)
)
)
)
)
)
)
)
)
/*
* Function: showy-footer()
*
* Description: Returns the footer's block
*
* Parameters:
* + frame: The dictionary with frame settings
 * + footer-style: The dictionary with footer styles
 * + footer: Footer content
*/
#let showy-footer( frame, footer-style, footer ) = block(
width: 100%,
spacing: 0pt,
inset: showy-section-inset("footer", frame),
fill: frame.at("footer-color", default: luma(220)),
stroke: showy-stroke(frame, top: footer-style.at("sep-thickness", default: 1pt)),
radius: (bottom: frame.at("radius", default: 5pt)),
align(
footer-style.at("align", default: left),
text(
footer-style.at("color", default: luma(85)),
weight: footer-style.at("weight", default: "regular"),
footer
)
)
) |
https://github.com/alberto-lazari/cv | https://raw.githubusercontent.com/alberto-lazari/cv/main/modules_en/volunteering.typ | typst | #import "/common.typ": *
= Volunteering
#entry(
title: [Scout Leader],
society: [AGESCI Mestre 2],
date: [Mar 2021 - Present],
logo: image("/images/agesci.png"),
)
#entry(
title: [Blood Donor],
society: [Avis],
date: [Aug 2023 - Present],
logo: image("/images/avis.jpg"),
)
|
|
https://github.com/typst-jp/typst-jp.github.io | https://raw.githubusercontent.com/typst-jp/typst-jp.github.io/main/docs/reference/context.md | markdown | Apache License 2.0 | ---
description: |
文書内の位置に応じて反応するコンテンツの扱い方を説明します。
---
# コンテキスト
時には、文書内の位置に応じて反応するコンテンツを作成したいことがあります。
これは、設定されたテキストの言語に依存するローカライズされたフレーズや、
前にいくつの見出しがあったかに基づいて正しい値を表示する、
見出し番号のような単純なものかもしれません。
しかし、Typstコードは直接的に文書内の位置を認識しているわけではありません。
ソーステキストの冒頭にあるコードが、文書の最後に配置されるコンテンツを生成する可能性があります。
そのため、周囲の環境に反応するコンテンツを生成するためには、Typstへの明示的な指示が必要です。
これを行うには、`{context}` キーワードを使用します。
このキーワードは式の前に置かれ、その式が環境の情報を元に計算されることを保証します。
その代わりに、コンテキスト式自体は不透明になります。コンテキスト式の結果にコード内で直接アクセスすることはできません。
なぜなら、コンテキスト式の結果はコンテキスト依存であるからです。したがって、正しい一つの結果が存在するのではなく、文書の異なる場所に複数の結果が存在する可能性があります。
そのため、コンテキスト依存データに基づいたすべてのものは、コンテキスト式の内部で行われる必要があります。
明示的なコンテキスト式以外にも、
文書内の位置を認識する場所では暗黙的にコンテキストが確立されます。
例えば、[showルール]($styling/#show-rules)はコンテキストを提供し[^1]、
アウトライン内の番号付けもカウンターを解決するための適切なコンテキストを提供します。
## 書式コンテキスト {#style-context}
setルールを使用すると、文書の一部または全体の書式のプロパティを調整できます。
これらは文書の進行に伴って変更される可能性があるため、既知のコンテキストがなければこれらにアクセスすることはできません。コンテキストが利用可能な場合、
個別の要素関数のフィールドとして書式のプロパティにアクセスすることでこれらを簡単に取得できます。
```example
#set text(lang: "de")
#context text.lang
```
上記の説明の通り、コンテキスト式はそれが配置されるさまざまな環境に反応します。以下の例では、単一のコンテキスト式を作成し、それを `value` 変数に格納して複数回使用します。
それぞれのコンテキスト式は、現在の環境に適切に反応します。
```example
#let value = context text.lang
#value
#set text(lang: "de")
#value
#set text(lang: "fr")
#value
```
重要なのは、作成時に `value` は中身を覗くことができない不透明な [content] になることです。それはどこかに配置されたときにのみ解決されます。なぜなら、そのときに初めてコンテキストが認識されるからです。コンテキスト式の本体は、それが配置される場所の数に応じて、0回、1回、または複数回評価される可能性があります。
## Location context
Context can not only give us access to set rule values. It can also let us know
_where_ in the document we currently are, relative to other elements, and
absolutely on the pages. We can use this information to create very flexible
interactions between different document parts. This underpins features like
heading numbering, the table of contents, or page headers dependant on section
headings.
[`counter.get`]($counter.get)のようないくつかの関数は、暗黙的に現在の位置にアクセスします。
以下の例では、見出しカウンターの値を取得したいとします。
これは文書全体で変化するため、まずコンテキスト式に入る必要があります。
次に、`get`を使用してカウンターの現在の値を取得します。
この関数は、カウンターの値を解決するためにコンテキストから現在の位置にアクセスします。
カウンターには複数のレベルがあり、`get`は解決された数値の配列を返します。
したがって、以下の結果が得られます。
```example
#set heading(numbering: "1.")
= Introduction
#lorem(5)
#context counter(heading).get()
= Background
#lorem(5)
#context counter(heading).get()
```
より柔軟性を持たせるために、[`here`] 関数を使用してコンテキストから直接現在の[location]を抽出することもできます。以下の例でこれを示します
- 最初に `{counter(heading).get()}` があり、これは先程のように `{(2,)}` に解決されます。
- 次に、より強力な [`counter.at`] と [`here`] を組み合わせて使用します。これは `get` と同等であるため `{(2,)}` が得られます。
- 最後に、[label] と組み合わせて `at` を使用して、文書内の _異なる_ 位置、つまり導入見出しの位置でカウンターの値を取得します。これにより `{(1,)}` が得られます。Typstのコンテキストシステムは、文書内の _任意の_ 位置でカウンターや状態の値を取得することができるタイムトラベル能力を提供します。
```example
#set heading(numbering: "1.")
= Introduction <intro>
#lorem(5)
= Background <back>
#lorem(5)
#context [
#counter(heading).get() \
#counter(heading).at(here()) \
#counter(heading).at(<intro>)
]
```
前述の通り、コンテキストを使用してページ上の要素の物理的位置を取得することもできます。
これは、`counter.at` と同様に機能する [`locate`] 関数を使用して行います。
この関数は一意の要素(ラベルでも可)に解決される位置または他の[selector]を取り、その要素のページ上の位置を返します。
```example
Background is at: \
#context locate(<back>).position()
= Introduction <intro>
#lorem(5)
#pagebreak()
= Background <back>
#lorem(5)
```
位置コンテキストを利用する関数は他にもありますが、最も顕著なのは [`query`] です。
これらの詳細については、[introspection]($category/introspection) カテゴリを参照してください。
## ネストされたコンテキスト {#nested-contexts}
コンテキストは、コンテキストブロック内にネストされた関数呼び出しからもアクセス可能です。
以下の例では、`foo` 自体が [`to-absolute`]($length.to-absolute) と同様のコンテキスト依存の関数になります。
```example
#let foo() = 1em.to-absolute()
#context {
foo() == text.size
}
```
コンテキストブロックはネストできます。
この場合、コンテキスト依存のコードは常に最も内側のコンテキストにアクセスします。
以下の例ではこれを示しています。最初の `text.lang` は外側のコンテキストブロックのスタイルにアクセスするため、
`{set text(lang: "fr")}` の効果は**見られません**。
しかし、2番目の `text.lang` の周りにあるネストされたコンテキストブロックはsetルールの後に始まるため、その効果が表示されます。
```example
#set text(lang: "de")
#context [
#set text(lang: "fr")
#text.lang \
#context text.lang
]
```
なぜTypstが上記の例で最初の `text.lang` を計算する際にフランス語のsetルールを無視するのか疑問に思うかもしれません。その理由は、一般的な場合、setルールがコンテンツの構築後に適用される可能性があるため、Typstは適用されるすべてのスタイルを知ることができないからです。以下の例では、テンプレート関数が適用されるときに `text.lang` がすでに計算されています。そのため、Typstがテンプレート内のフランス語への言語変更に気づくことは不可能です。
```example
#let template(body) = {
set text(lang: "fr")
upper(body)
}
#set text(lang: "de")
#context [
#show: template
#text.lang \
#context text.lang
]
```
しかし、2番目の `text.lang` は言語の変更に反応 _します_。なぜなら、その周囲のコンテキストブロックの評価が、それに対するスタイルがわかるまで遅延されるからです。これは、正確なスタイルにアクセスするために、コンテキストにとって適切な挿入ポイントを選択することの重要性を示しています。
同様のことが位置コンテキストにも当てはまります。
以下の例では、最初の `{c.display()}` 呼び出しは外側のコンテキストブロックにアクセスするため、 `{c.update(2)}` の効果を見ることはできません。
一方、2番目の `{c.display()}` は内部のコンテキストにアクセスするため、効果を見ることができます。
```example
#let c = counter("mycounter")
#c.update(1)
#context [
#c.update(2)
#c.display() \
#context c.display()
]
```
## コンパイラの反復 {#compiler-iterations}
コンテキスト依存の相互作用を解決するため、Typstのコンパイラは文書を複数回処理します。
例えば、`locate` の呼び出しを解決するために、Typstはまずプレースホルダーの位置を提供し、文書をレイアウトし、レイアウトが完了した位置から既知の位置で再コンパイルします。
カウンターや状態、クエリを解決するためにも同じアプローチが取られます。
特定の場合には、Typstはすべてを解決するために2回以上の反復が必要になることもあります。
それが必要な場合もあれば、コンテキスト依存関数の誤用の兆候であることもあります(例えば[state]($state/#caution)の誤用)。
Typstが5回の試行ですべてを解決できない場合、警告 "layout did not converge within 5 attempts." が出力され、処理が停止します。
非常に注意深い読者の方は、上記で紹介した関数のうち、すべての関数が現在の位置を実際に使用しているわけではないことに気づいたかもしれません。`{counter(heading).get()}` は確かに現在の位置に依存していますが、例えば `{counter(heading).at(<intro>)}` はそうではありません。しかし、それでもコンテキストが必要です。その値は1つのコンパイラ反復内では常に同じですが、複数のコンパイラ反復の間に変化する可能性があります。もしこれをモジュールのトップレベルで直接呼び出すことができれば、モジュール全体とそのエクスポートは複数のコンパイラ反復の間に変化する可能性があり、それは望ましくありません。
[^1]: 現在、すべてのshowルールはスタイリングコンテキストを提供しますが、[locatable]($location/#locatable) 要素のshowルールのみが位置コンテキストを提供します。
|
https://github.com/sysu/better-thesis | https://raw.githubusercontent.com/sysu/better-thesis/main/template/thesis.typ | typst | MIT License | // 建议在正式编辑论文时,采用 typst.app 中的已发布版本模板
// #import "@preview/modern-sysu-thesis:0.3.0": bachelor as thesis
// 仅供开发调试使用
#import "/lib.typ": bachelor as thesis
#import thesis: abstract, acknowledgement, appendix
// 你首先应该安装 https://gitlab.com/sysu-gitlab/thesis-template/better-thesis/-/tree/main/fonts 里的所有字体,
// 如果是 Web App 上编辑,你应该手动上传这些字体文件,否则不能正常使用「楷体」和「仿宋」,导致显示错误。
#show: thesis.doc.with(
// 毕业论文基本信息
thesis-info: (
// 论文标题,将展示在封面、扉页与页眉上
// 多行标题请使用数组传入 `("thesis title", "with part next line")`,或使用换行符:`"thesis title\nwith part next line"`
title: ("基于 Typst 的", "中山大学学位论文模板"),
title-en: "A Typst Template for SYSU thesis",
// 论文作者信息:学号、姓名、院系、专业、指导老师
author: (
sno: "1xxxxxxx",
name: "张三",
grade: "2024",
department: "某学院",
major: "某专业",
),
// 指导老师信息,以`("name", "title")` 数组方式传入
supervisor: ("李四", "教授"),
// 提交日期,默认为论文 PDF 生成日期
submit-date: datetime.today(),
),
// 参考文献来源
bibliography: bibliography.with("ref.bib"),
// 控制页面是否渲染
pages: (
// 封面可能由学院统一打印提供,因此可以不渲染
cover: true,
// 附录部分为可选。设置为 true 后,会在参考文献部分与致谢部分之间插入附录部分。
appendix: true,
),
// 论文内文各大部分的标题用“一、二…… (或1、2……)”, 次级标题为“(一)、(二)……(或
// 1.1、2.1……)”,三级标题用“1、2……(或1.1.1、2.1.1……)”,四级标题用“(1)、(2)……
//(或1.1.1.1、2.1.1.1……)”,不再使用五级以下标题。两类标题不要混编。
// numbering: "一",
// 双面模式,会加入空白页,便于打印
twoside: false,
)
= 导 论
== 列表
=== 无序列表
- 无序列表项一
- 无序列表项二
- 无序子列表项一
- 无序子列表项二
=== 有序列表
+ 有序列表项一
+ 有序列表项二
+ 有序子列表项一
+ 有序子列表项二
=== 术语列表
/ 术语一: 术语解释
/ 术语二: 术语解释
== 图表
引用@tbl:timing,引用@tbl:timing-tlt,以及@fig:sysu-logo。引用图表时,表格和图片分别需要加上 `tbl:`和`fig:` 前缀才能正常显示编号。
#align(center, (stack(dir: ltr)[
#figure(
table(
align: center + horizon,
columns: 4,
[t], [1], [2], [3],
[y], [0.3s], [0.4s], [0.8s],
),
caption: [常规表],
) <timing>
][
#h(50pt)
][
#figure(
table(
columns: 4,
stroke: none,
table.hline(),
[t], [1], [2], [3],
table.hline(stroke: .5pt),
[y], [0.3s], [0.4s], [0.8s],
table.hline(),
),
caption: [三线表],
) <timing-tlt>
]))
#figure(
image("images/sysu_logo.svg", width: 20%),
caption: [图片测试],
) <sysu-logo>
== 数学公式
可以像 Markdown 一样写行内公式 $x + y$,以及带编号的行间公式:
$ phi.alt := (1 + sqrt(5)) / 2 $ <ratio>
引用数学公式需要加上 `eqt:` 前缀,则由@eqt:ratio,我们有:
$ F_n = floor(1 / sqrt(5) phi.alt^n) $
我们也可以通过 `<->` 标签来标识该行间公式不需要编号
$ y = integral_1^2 x^2 dif x $ <->
而后续数学公式仍然能正常编号。
$ F_n = floor(1 / sqrt(5) phi.alt^n) $
== 参考文献
可以像这样引用参考文献:图书#[@蒋有绪1998]和会议#[@中国力学学会1990]。
== 代码块
代码块支持语法高亮。引用时需要加上 `lst:` @lst:code
#figure(
```py
def add(x, y):
return x + y
```,
caption:[代码块],
) <code>
= 正 文
== 正文子标题
=== 正文子子标题
正文内容
// 附录
#show: appendix
= 附录
== 附录章节题
=== 附录子标题
==== 附录子子标题
附录内容,这里也可以加入图片,例如@fig:appendix-img。
从正文内容里复制公式,以审查公式编号
$ F_n = floor(1 / sqrt(5) phi.alt^n) $
#figure(
image("images/sysu_logo.svg", width: 20%),
caption: [图片测试],
) <appendix-img>
// 致谢
#acknowledgement[
感谢 NJU-LUG,感谢 NJUThesis LaTeX 模板。
]
|
https://github.com/ClazyChen/Table-Tennis-Rankings | https://raw.githubusercontent.com/ClazyChen/Table-Tennis-Rankings/main/history_CN/2014/MS-06.typ | typst |
#set text(font: ("Courier New", "NSimSun"))
#figure(
caption: "Men's Singles (1 - 32)",
table(
columns: 4,
[排名], [运动员], [国家/地区], [积分],
[1], [马龙], [CHN], [3518],
[2], [许昕], [CHN], [3411],
[3], [张继科], [CHN], [3232],
[4], [蒂姆 波尔], [GER], [3212],
[5], [樊振东], [CHN], [3207],
[6], [王皓], [CHN], [3178],
[7], [迪米特里 奥恰洛夫], [GER], [3116],
[8], [水谷隼], [JPN], [3100],
[9], [#text(gray, "王励勤")], [CHN], [3099],
[10], [庄智渊], [TPE], [3096],
[11], [闫安], [CHN], [3078],
[12], [#text(gray, "郝帅")], [CHN], [3031],
[13], [弗拉基米尔 萨姆索诺夫], [BLR], [2988],
[14], [#text(gray, "陈玘")], [CHN], [2968],
[15], [高宁], [SGP], [2965],
[16], [朱世赫], [KOR], [2963],
[17], [周雨], [CHN], [2943],
[18], [丹羽孝希], [JPN], [2928],
[19], [帕纳吉奥迪斯 吉奥尼斯], [GRE], [2925],
[20], [利亚姆 皮切福德], [ENG], [2910],
[21], [马克斯 弗雷塔斯], [POR], [2902],
[22], [塩野真人], [JPN], [2901],
[23], [帕特里克 弗朗西斯卡], [GER], [2896],
[24], [博扬 托基奇], [SLO], [2882],
[25], [罗伯特 加尔多斯], [AUT], [2869],
[26], [阿德里安 克里桑], [ROU], [2862],
[27], [BOBOCICA Mihai], [ITA], [2856],
[28], [方博], [CHN], [2856],
[29], [巴斯蒂安 斯蒂格], [GER], [2855],
[30], [帕特里克 鲍姆], [GER], [2854],
[31], [CHO Eonrae], [KOR], [2854],
[32], [徐晨皓], [CHN], [2844],
)
)#pagebreak()
#set text(font: ("Courier New", "NSimSun"))
#figure(
caption: "Men's Singles (33 - 64)",
table(
columns: 4,
[排名], [运动员], [国家/地区], [积分],
[33], [斯特凡 菲格尔], [AUT], [2818],
[34], [安德烈 加奇尼], [CRO], [2812],
[35], [唐鹏], [HKG], [2810],
[36], [吉田海伟], [JPN], [2801],
[37], [村松雄斗], [JPN], [2793],
[38], [<NAME>], [SGP], [2782],
[39], [斯蒂芬 门格尔], [GER], [2779],
[40], [李廷佑], [KOR], [2778],
[41], [梁靖崑], [CHN], [2774],
[42], [金珉锡], [KOR], [2773],
[43], [松平健太], [JPN], [2770],
[44], [LIU Yi], [CHN], [2765],
[45], [KIM Hyok Bong], [PRK], [2753],
[46], [WANG Zengyi], [POL], [2752],
[47], [丁祥恩], [KOR], [2749],
[48], [卢文 菲鲁斯], [GER], [2743],
[49], [詹斯 伦德奎斯特], [SWE], [2734],
[50], [DRINKHALL Paul], [ENG], [2730],
[51], [SHIBAEV Alexander], [RUS], [2729],
[52], [陈建安], [TPE], [2725],
[53], [陈卫星], [AUT], [2725],
[54], [MATTENET Adrien], [FRA], [2724],
[55], [林高远], [CHN], [2722],
[56], [奥马尔 阿萨尔], [EGY], [2718],
[57], [STOYANOV Niagol], [ITA], [2718],
[58], [MONTEIRO Joao], [POR], [2709],
[59], [PERSSON Jon], [SWE], [2709],
[60], [沙拉特 卡马尔 阿昌塔], [IND], [2706],
[61], [黄镇廷], [HKG], [2705],
[62], [#text(gray, "克里斯蒂安 苏斯")], [GER], [2703],
[63], [王臻], [CAN], [2699],
[64], [PROKOPCOV Dmitrij], [CZE], [2698],
)
)#pagebreak()
#set text(font: ("Courier New", "NSimSun"))
#figure(
caption: "Men's Singles (65 - 96)",
table(
columns: 4,
[排名], [运动员], [国家/地区], [积分],
[65], [汪洋], [SVK], [2692],
[66], [WU Zhikang], [SGP], [2692],
[67], [#text(gray, "KIM Junghoon")], [KOR], [2690],
[68], [周启豪], [CHN], [2688],
[69], [KOU Lei], [UKR], [2684],
[70], [LI Ahmet], [TUR], [2682],
[71], [吴尚垠], [KOR], [2679],
[72], [HABESOHN Daniel], [AUT], [2679],
[73], [约尔根 佩尔森], [SWE], [2676],
[74], [TAKAKIWA Taku], [JPN], [2671],
[75], [吉村真晴], [JPN], [2671],
[76], [张一博], [JPN], [2670],
[77], [森园政崇], [JPN], [2664],
[78], [HE Zhiwen], [ESP], [2662],
[79], [GORAK Daniel], [POL], [2659],
[80], [<NAME> Chol], [PRK], [2657],
[81], [蒂亚戈 阿波罗尼亚], [POR], [2656],
[82], [<NAME>], [MEX], [2653],
[83], [OYA Hidetoshi], [JPN], [2652],
[84], [尚坤], [CHN], [2650],
[85], [<NAME>], [FRA], [2649],
[86], [李尚洙], [KOR], [2649],
[87], [TSUBOI Gustavo], [BRA], [2644],
[88], [吉田雅己], [JPN], [2644],
[89], [周恺], [CHN], [2641],
[90], [维尔纳 施拉格], [AUT], [2637],
[91], [KONECNY Tomas], [CZE], [2636],
[92], [郑荣植], [KOR], [2636],
[93], [<NAME>], [HUN], [2635],
[94], [<NAME>], [SVK], [2634],
[95], [朴申赫], [PRK], [2634],
[96], [#text(gray, "<NAME>")], [SWE], [2632],
)
)#pagebreak()
#set text(font: ("Courier New", "NSimSun"))
#figure(
caption: "Men's Singles (97 - 128)",
table(
columns: 4,
[排名], [运动员], [国家/地区], [积分],
[97], [#text(gray, "VANG Bora")], [TUR], [2631],
[98], [#text(gray, "LIN Ju")], [DOM], [2630],
[99], [西蒙 高兹], [FRA], [2625],
[100], [艾曼纽 莱贝松], [FRA], [2624],
[101], [米凯尔 梅兹], [DEN], [2618],
[102], [<NAME>], [GER], [2616],
[103], [MATSUDAIRA Kenji], [JPN], [2615],
[104], [基里尔 格拉西缅科], [KAZ], [2612],
[105], [ROBINOT Quentin], [FRA], [2612],
[106], [卡林尼科斯 格林卡], [GRE], [2608],
[107], [OUAICHE Stephane], [ALG], [2605],
[108], [奥维迪乌 伊奥内斯库], [ROU], [2604],
[109], [夸德里 阿鲁纳], [NGR], [2602],
[110], [阿列克谢 斯米尔诺夫], [RUS], [2602],
[111], [HUANG Sheng-Sheng], [TPE], [2600],
[112], [#text(gray, "YIN Hang")], [CHN], [2599],
[113], [KIM Donghyun], [KOR], [2596],
[114], [SEO Hyundeok], [KOR], [2595],
[115], [KANG Dongsoo], [KOR], [2594],
[116], [GERALDO Joao], [POR], [2591],
[117], [ROBINOT Alexandre], [FRA], [2590],
[118], [侯英超], [CHN], [2589],
[119], [<NAME>], [DEN], [2586],
[120], [KOSOWSKI Jakub], [POL], [2584],
[121], [丹尼尔 冈萨雷斯], [PUR], [2583],
[122], [上田仁], [JPN], [2580],
[123], [<NAME>], [CRO], [2580],
[124], [张钰], [HKG], [2579],
[125], [<NAME>], [BRA], [2573],
[126], [<NAME>], [CRO], [2573],
[127], [<NAME>], [RUS], [2571],
[128], [<NAME>], [SWE], [2569],
)
) |
|
https://github.com/gyarab/2023-4e-ruzicka-jako_pavouk | https://raw.githubusercontent.com/gyarab/2023-4e-ruzicka-jako_pavouk/main/src-docs/kapitoly/design.typ | typst | = Design
Vizuální stránka aplikace je podle mého názoru skoro tak důležitá jako její funkčnost. I když staré dobré programy jako 7-Zip nebo FileZilla jsou výjimkou, která potvrzuje pravidlo. Z toho důvodu jsem si dal s vizáží webu práci, a to převážně, co se týče ilustrací.
== Ilustrace
Důležitá pro mě byla jednoduchost a skoro *dětská roztomilost*, aby byl design poutavý pro všechny věkové kategorie. Maskotem se stal *pavouček* jednoduše kvůli jménu aplikace. Pro všechny obrázky, které se objevují na stránce nebo v této dokumentaci, jsem používal vektorový grafický program Adobe Illustrator.
== Barevné schéma
Celá aplikace je na tmavém pozadí a skoro všechen text je bílý. *Světlý motiv* aplikace zatím přidat neplánuji, jelikož by to nejen přidalo na komplexitě, ale jeho používání by bylo nepříjemné například v nočních hodinách.
*Fialové odstíny*, které jsem si vybral jako analogickou paletu, také nebyly náhodné. Fialová je historicky považovaná za barvu velmi vzácnou, drahou a luxusní. Také údajně působí *uklidňujícím a romantickým dojmem*.@fialova
#v(0.4em)
#figure(
image("../obrazky/paleta.svg", width: 80%),
gap: 1em,
caption: "Fialová paleta barev",
) <fialovaFig>
|
|
https://github.com/i-am-wololo/cours | https://raw.githubusercontent.com/i-am-wololo/cours/master/main/cheatsheeti21.typ | typst | #import "./templates.typ": *
#show: chshtemplate.with(matiere: "i21")
= Conception d'algorithme
#definition(title:"Invalider un algo")[Pour montrer qu'un algo n'est pas valide, il suffit de montrer un contre exemple, soit un cas ou l'algorithme ne marcherait pas]
= Analyse asymptotique
#definition(title:"Analyser un algorithme")[c'est analyser les couts par rapport au temps d'execution, l'espace memoire, et la consommation electrique]
#definition(title: "le modele random access machine")[
machine hypothetique ou:
- les operands consomment une unite de temps
- les boucles depend du nombre d'iterations et des operation inside
- un read consomme une unite de temps
- la memoire est illimite
]
l'efficacite d'un algo est defini par une fonction notee C(n) ou T(n), meme si dans un cas reel ca serait plutot note O(n)
exemple:
- recherche d'un element:
- n cases a tester
- 5 cases: > 5 tests
- 10 cases: > 10 tests
- ramassage de plots:
- n! chemins a tester
- 5 plots: 120 chemins possible
la notation est qui suit:
- $Omega(n)$: meilleur cas
- $O(n)$: pire cas
- $Theta(n)$: cas moyen
#definition(title:$f(n)=O(g(n))$)[il existe une constance $c$ et un entier $n_0$ tels que $forall n >= n_0, f(n) =< c g(n)$]
exemples:
- $3n^2 - n + 6 = O(n^2)$ en prenant $c = 3$ et $n_0 = 6$
- $3n^2 - n + 6 = O(n^3)$ en prenant $c = 1$ et $n_0=4$
- $3n^2 - n + 6 != O(n)$ car $forall c, c n < 3n^2 - n+6$ quand $n > c +1$
#definition(title:$f(n)=Omega(g(n))$)[il existe une constance $c$ et un entier $n_0$ tels que $forall n >= n_0, f(n) => c g(n)$]
exemples:
- $3n^2 - n + 6 = Omega(n^2)$ en prenant $c = 2$ et $n_0 = 2$
- $3n^2 - n + 6 != Omega(n^3) forall c, 3n^2 -n+6 < c n^3$ quand $c n>3$ et $n>6$
- $3n^2 - n + 6 = Omega(n)$ en prenant $c = 1$ et $n_0 = 1$
#definition(title:$f(n)=Theta(g(n))$)[there exist constants $c_1, c_2$ and an integer $n_0$ such that $forall n >= n_0, c_1 g(n) <= f(n) <= c_2 g(n)$ \ (i.e. $f(n) = O(g(n))$ and $f(n) = Omega(g(n))$)]
- $3n^2 - n + 6 = Theta(n^2)$ (it is both $O(n^2)$ and $Omega(n^2)$)
- $3n^2 - n + 6 != Theta(n^3)$ (it is $O(n^3)$ but not $Omega(n^3)$)
- $3n^2 - n + 6 != Theta(n)$ (it is $Omega(n)$ but not $O(n)$)
= Algorithm basics
= Array traversal
= Search algorithms
#include "./i21/tri.typ"
#include "./i21/recherche.typ"
#include "./i21/pilesfiles.typ"
https://github.com/nath-roset/suiviProjetHekzamGUI | https://raw.githubusercontent.com/nath-roset/suiviProjetHekzamGUI/master/typ%20sources/Documentation%20Technique.typ | typst | Apache License 2.0 | #import "template.typ" : base, dirpath
#show: doc => base(
// left_header:[],
right_header : [scan-GUI-Spring-2024 Team],
title: [Hekzam-GUI Project - Spring 2024],
subtitle: [Technical documentation],
version: [1.0],
authors:(
(
name: "<NAME>",
affiliation:"Paul Sabatier University",
email:"Computer Science undergraduate",
),
(
name: "<NAME>",
affiliation:"Paul Sabatier University",
email:"Computer Science undergraduate",
),
(
name: "<NAME>",
affiliation:"Paul Sabatier University",
email:"Computer Science undergraduate",
),
(
name: "<NAME>",
affiliation:"Paul Sabatier University",
email:"Computer Science undergraduate",
)
),
doc
)
// TODO Add authors
// Nom du tuteur sur la doc ou pas ?
// TODO Add a front cover
#outline(
title: "Table of Content",
)
#v(1fr)
#align(right,[_Last edited on June 2#super[nd] 2024_])
#pagebreak(weak: true)
= Context
In any contemporary academic context, it would be foolish to ask for a teacher to grade hundreds or thousands of exam sheets every week. To ease their burden, several tools like AMC aim to automate this process. However, they’re more often than not dated, unmaintained and prone to errors. To tackle this issue, we have been tasked to design a *modern interface* for a new grading and polling tool called Hekzam.\
The Hekzam program is composed of several components. First, there's the compiler, which generates exam printouts using *Typst* libraries. Then, a scanner interface collects the answer-filled sheets, converting them if necessary to produce images that are passed on to the other components. The OCR component then extracts useful data from the scanned sheets. Finally, an *automatic verification/grading interface* allows the user to quickly check for erroneous scans, grade sheets or get statistics from one or multiple examination sessions.
We describe the Graphical User Interface (or GUI) made using *C++* and *Qt Widget libraries* in this document.
= Definitions
- *Hekzam* : The whole program, #link("https://github.com/hekzam")[(GUI + parser + generator)], including frontend and backend.
- *Typst* : This is a typesetting tool that aims to replace LaTeX. Due to its novelty the API might be unstable.
- *Project* : a directory regrouping all files generated during the execution by Hekzam.
- *Exam*#footnote[_Sujet d'examen_] : Batch of questions asked to students to evaluate them
- *Paper*#footnote[_Copie_] : refers to one batch of pages filled by a single student during an examination session.
- *Page*#footnote[_Page_] : Scan of a single sheet, each uniquely identified.
- *Field*#footnote[_Champ_] : refers to a generic box with different properties based on its type. They all have positions on the page and a (unique ?) ID, but AtomicBoxItems contain boolean values, MarkerItems can be moved by the user and OCRItems contain strings.
- *Signals & Slots in Qt* : Qt applications work by using events instead of callbacks. This mechanism is the way `QObjects` can communicate between each other. To make two unrelated objects communicate you `connect` a `signal` from one to the `slot` of the other. There's plenty of working examples in the code already.\
#figure(
align(left)[
```cpp
connect(openButton, &QPushButton::clicked, this, &MainWindow::openProject);
```],
caption: [_connection between a GUI element and a user-defined function_],
)
The requirements for working communications are : _a)_ both objects are subclasses of `QObject` and _b)_ the functions you're trying to connect have the same signature (the same number and type of arguments). Learn more about this mechanism in #link("https://doc.qt.io/qt-6/signalsandslots.html")[the official Qt documentation].\
Be warned that `QGraphicsItem` cannot use signals and slots by default as they do not subclass `QObject` as it adds significant overhead according the Qt documentation.
= User Manual
_from the project `README.md`_:\
Execute the program either through Qt Creator after importing the project (easy)
or by running `./scan-gui` after building.
- *If you haven't run the program before*:
- Click `Create a New Project`
- In the field *Project Repository* : Select a directory to store the recovery data (useful for testing as you'll only need to pass this JSON file to open all previous files)
- In the field *Exam Data* : select all or some of the files from #dirpath("resources/test_case/Json/")
- In the field *Scan Files* : select all or some of the files from #dirpath("resources/test_case/Fichiers")
- last field is left empty for now
- *If you've already run the program before*
- Click `Open an Existing Project`
- Open the `data.json` file from the directory you specified in Project Repository.
Then you can :
- Click on an entry in the table
- Clicking on different columns yield different results
- Click on the search bar to enter your search
- Different type of search, the simple : _word_ , the multiple : _word1, word2, ..._ and the tag search : _tag1: word1; tag2: word2, word3;..._
- Hovering the sliders below the preview opens up a tooltip that tells you what they do.
- Interaction with the preview is basic. You can zoom in/out with the `Ctrl + Mouse wheel` combination (`cmd + scroll gesture` on macOS).
= Current State of the Frontend By module
== Main Window
=== Layout of the Window
The layout is described in the `mainwindow.h` file directly (in French!).
It is composed of a `QStackedWidget` that is going to switch between the different menus depending on our needs.
=== Main Menu
The main menu is only used to *open* or *create* a project. Thus, it is composed of two `QPushButton`.
- The first leads to the creation menu developed later.
- The second opens a file explorer where you can choose a save file to import. It will then redirect you to the evaluation menu.
=== Creation Menu
The creation menu acts as a `QFormLayout` form with a total of four fields to fill in. In its current state, only the first three fields are functional, and they are required to continue.
Here is a quick rundown of what each field is :
- The first one is the location of the repository where you want the *save file* (`data.json`) to be.
- The second one is the list of *exam data* in the `Json` format.
- The third one is the list of *scan files* in the `png` or `pdf` formats.
After clicking next, the save file will be created and the Json and scans will be loaded. For more information about the save file, check #link(<data>)[this section].
It is possible to easily add new fields with the help of `addRow` on the `QFormLayout`, as well as the ```cpp QHBoxLayout *createFileEntry(const QString &labelText, QLineEdit *lineEdit)``` method that creates those fields with a :
- `QLabel` as the title of the line.
- `QLineEdit` as the field that will be filled.
- `QPushButton` to browse the file directory.
=== Evaluation Menu
This menu is made of a horizontal `QSplitter`, separating the #link(<table>)[table] from the #link(<preview>)[preview]. On the #link(<table>)[table] side, there is another `QSplitter`, this time vertical, separating it from the #link(<infobox>)[information box].
== Information Box <infobox>
This text zone shows the binding problems between the papers and the JSON files.
== CLI
The CLI is implemented using the arguments of the `main.cpp` file.
Each argument has its own "show" function; those arguments are :
- `--help` : Show the list of options.
- `--version` : Show the program version.
== Saving and Restoring User Configuration
User configuration is handled by the `settings.cpp` class. It initializes a `QSettings` variable that is loaded with the saved configuration when the software starts, and saved when it closes.
This information is often stored in the system registry on Windows, and in property list files on macOS and iOS. On Unix systems, in the absence of a standard, many applications (including the KDE applications) use INI text files.#footnote[https://doc.qt.io/qt-6/qsettings.html]
In its current state, the program only saves the dimensions (`QSize`) and position (`QPoint`) of the main window, and it can only be launched with the user configuration. When loaded, this information is turned into a `QRect` that is set as the geometry of the window.
== Project Data Management <data>
=== Collecting Data
Two types of data are stored at the creation of a project, both in the forms of `QStringList` :
- The exam data (a list of json) in the `jsonFilePaths` variable.
- The scanned files (a list of png/pdf) in the `scanFilePaths` variable.
=== Storing Data
==== JSONReader
The `JSONReader` class is a simple class that handles reading from JSON file located on disk. It was however tailored for our test case (#emph([detailed in @testcase])). It contains error messages for every step of the conversion in case the file is not readable or empty. The `mJSON::JSONERROR` enum holds the different error values.\
We separate Markers from other fields in ```cpp int jsonreader::getCoordinates()```. This is where you should add other marker types if the need arises (for example, for fields containing strings instead of boolean values). We read the relevant keys in
```cpp
void jsonreader::parseValues(QJsonObject &o, dataFieldJSON &coo)
```
If you need to access other fields (*boolean value* of the field, *string* recognized by OCR, *percentage* of certainty), you'll have to add the relevant keys in that function. They're currently missing because our test case did not include any such keys.\
The data is stored in the `dataFieldJSON` struct, and the fields are grouped by Paper in `dataCopieJSON`.
#let fieldjson = [
```cpp
struct dataFieldJSON
{
qreal x, y, h, w;
QString clef;
QVariant valeur;
int pagenum;
}
```
]
#let copiejson = [
```cpp
struct dataCopieJSON
{
QSize pageSizeInMM;
QList<dataFieldJSON> *documentFields;
QList<dataFieldJSON> *documentMarkers;
int pagecount = 0;
}
```
]
#figure(
grid(
align: center,
columns: 2,
column-gutter: 20pt,
fieldjson, copiejson),
caption : [_current data model_]
)
==== ScanInfo
The `scaninfo.h` file details every structure used to store the data of an exam. Each class listed in `scaninfo.h` contains getters and setters for its attributes as well as methods to iterate through its specific map, except for the class `FieldInfo` which doesn't have any. The ```cpp Q_DECLARE_METATYPE``` macro at the very end of the file is used to enable Qt's meta-object system to recognize custom data types. This allows pointers to ExamInfo, CopyInfo and PageInfo to be used with QVariant, signal-slot mechanisms, and other Qt features that require runtime type information. In this context, this macro is used to store such structures inside ```cpp QTableWidgetItems``` contained in a table cell, to facilitate access to the data they refer to.
More information about `QMetaType` class through this #link("https://doc.qt.io/qt-6/qmetatype.html")[link].
==== ExamInfo
The class `ExamInfo` represents the main data structure that will be used throughout the program to manipulate exam data. It contains information about an exam name, the data collected from the corresponding JSON file, a map associating paper names with a `CopyInfo` object and the number of papers contained in the exam.
Even though papers can be classified by an integer, I considered using a map since there is no guarantee that papers will be added in order. For the test cases that are currently used, an *array* or a *vector* could have been used since the number of copies contained in an exam is specified in a file's name. However, since this association process is bound to change, I preferred keeping a map in case the copy numbers turn out to be missing or imprecise. \ Other data structures such as *LinkedList*, *Queue*, *Stack* and *Dequeue* would not be efficient when consulting the structure, since both tables' initialisation needs to go through this data structure once (twice in total). Hence a map is more efficient time-wise. The map is ordered so that pages are inserted into the table in the correct order (more detail about the table's initialisation later on).
In addition to getters and setters, this class contains several public methods in order to manipulate the other objects of the `scaninfo.h` classes through their private methods. Any field, page or paper can be added anywhere directly from the `ExamInfo` class.
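The nesting described above can be sketched with plain `std::map`s (a simplified, hypothetical model — the real classes in `scaninfo.h` add getters, setters and presence flags):

```cpp
#include <map>
#include <string>

// Simplified model of the scaninfo.h hierarchy: an exam owns papers
// ("copies"), a paper owns pages. std::map is ordered, so iteration is
// deterministic regardless of insertion order — the property discussed above.
struct PageSketch { std::string path; bool inJson = false, inScans = false; };
struct CopySketch { std::map<std::string, PageSketch> pages; };
struct ExamSketch { std::map<std::string, CopySketch> copies; };
```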
===== CopyInfo
`CopyInfo` is used to describe a paper and gather every page contained in it. A paper is composed of a name, a map associating a page name to its corresponding `PageInfo` object, the number of the copy, a boolean that indicates whether the paper is specified in a JSON file among those selected by the user, a boolean that informs on whether the paper is among the scan files selected by the user, and the number of pages contained in the paper.
An ordered map is used for the same reasons stated in *ExamInfo* except this time, an *array* would need to be initialized with the correct value and this approach would be valid assuming the JSON file specifying every page contained in a paper is correct. If the JSON file lacks information about a page, then the page could never be added to the paper because the index would be out of range. A *vector* could fix this problem by expanding its size every time this happens but this would not be as efficient as directly using a map.
===== PageInfo
The class `PageInfo` categorizes an exam page by its name, file path, number, two booleans that serve the same purpose as the ones described in *CopyInfo* but that imply the presence of a page instead, a map associating every field contained in the page with their names and the number of fields contained in the page. The private methods are used to facilitate the manipulation of a page directly from `CopyInfo` or `ExamInfo`.
==== FieldInfo
The class `FieldInfo` represents every field contained in a page. Each field can either be checked/unchecked or contain a value written by a student. The structure classifies a field by its *name*, *checked* status, *value* and *syntax* value that represents the program's certainty about the interpreted value in the field. The rest of the class is constituted of getters and setters for the previously listed values.
==== JsonLinker
Data storage is managed by the `JsonLinker` class. It is instantiated in the program as an attribute of `MainWindow` and has a unique public method: ```cpp std::map<QString, ExamInfo> &collectFields(QStringList const &filePaths, QStringList const &jsonFilePaths)```. This method takes the paths of previously selected scan and JSON files and aims to associate a scan to its corresponding JSON by matching their file names.
During this method's execution, a map (```cpp std::map<QString, ExamInfo> fileExamMap```) associating an exam identifier (e.g. 1-0-0) with an `ExamInfo` object is initialized by the ```cpp void initialiseMaps(QStringList const &jsonPath)``` method. This process works by going through each JSON file path from the path list given in the method's argument. For each JSON file, the data is loaded into a `dataCopieJSON` pointer by calling ```cpp dataCopieJSON *loadAndGetJsonCoords(QString const &jsonPath)```. The data is loaded with an object from the `JSONReader` class that is instantiated as an attribute of `JsonLinker`. Next, an ExamInfo object is created and filled with `CopyInfo`, `PageInfo` and `FieldInfo` objects according to the JSON's data.
After the map's initialisation, the list containing every scan path is iterated through and the path of already existing pages is updated to the scan's path. If a file is not specified in any JSON, a dummy object will be created and inserted into the structure. Lastly, the map is returned and will be used to initialise the evaluation tables.
=== Saving and Restoring Data
When a project is created, the same paths that are being loaded in `jsonFilePaths` and `scanFilePaths` are also stored in the save data (`data.json`) of the project. That is the only information saved by the software in its current state.
Whenever a user opens a `data.json` save file via the main menu, the software loads the previously saved paths into the `jsonFilePaths` and `scanFilePaths` variables, then drops the user into the evaluation menu.
== Evaluation Table <table>
=== TableBox
The class `TableBox` is used as the main evaluation layout containing every component directly linked to the evaluation table. It is instantiated inside ```cpp void MainWindow::createEvaluationView()``` and receives as arguments the previously initialized map containing every `ExamInfo` retrieved from the selected files, the parent of a `QDockWidget` which is the `Mainwindow` itself, and its parent window which is the layout it is stored in.
`TableBox` contains several methods and attributes, half of which are used to instantiate the tables while the other half are used for the search features.
=== SortDock
In order to display or hide specific columns of the table, the `sortDock` object from the `QDockWidget` class is displayed after pressing on the `sortButton` object from the `QPushButton` class. The dock is initialized through ```cpp void TableBox::initTableFilter()``` and contains a layout that gathers several `QCheckBox` objects that each are connected to a method that will be responsible for hiding or showing the corresponding column.
Here are some links to the documentation concerning the previously mentioned classes: #link("https://doc.qt.io/qt-6/qdockwidget.html")[QDockWidget], #link("https://doc.qt.io/qt-6/qpushbutton.html")[QPushButton] and #link("https://doc.qt.io/qt-6/qcheckbox.html")[QCheckBox].
=== SortTable
The `SortTable` class is an abstract class that extends the `QTableWidget` class. The tables' data comes from the ```cpp std::map<QString, ExamInfo> &examMap``` attribute that is initialized when the tables are first instantiated in the `TableBox`'s constructor. This class serves as a template for every evaluation table used in the GUI. The table details each exam, copy, page, field and syntax of an exam paper (field or grouped syntax depending on the view) with columns that share the same name.
In order to simulate the grouping or unfolding of table cells, two tables `groupTable` and `fieldTable` are instantiated from custom classes called `GroupViewTable` and `FieldViewTable` that inherit from `SortTable`. The map mentioned above comes from the file association previously done by `JsonLinker`; it contains every piece of information about the selected exam files.
To fill in the tables, each method is responsible for inserting a specific structure. The `ExamInfo` objects from the map are unfolded in order to extract `CopyInfo`, `PageInfo` and `FieldInfo` objects and add them into `QTableWidgetItem` objects as `QVariants` for them to be easily accessible from a table cell. The main difference between `GroupViewTable` and `FieldViewTable` is that at the end of each insertion, `GroupViewTable` sets the cells span to a specific value so that common cells are grouped as one unique cell. It is during the filling process that any error concerning a copy or a page is communicated to the user.
Back to the `TableBox`, it is possible to switch between both tables thanks to the `fieldViewToggle` from the `QCheckBox` class. The `sortTableList` object is a `QList` that contains `SortTable` pointers and is used to store both tables. When toggling from one table to another, the `actualTable` attribute, which is a `SortTable` pointer, changes to the pointer of the other table in the list so that every operation is applied to the currently displayed table. The switch also considers the current table's scrollbar positions so that the same parameters can be applied to the other table upon switching. The same can be done with the column widths thanks to this slot ```cpp &TableBox::synchronizeColumnWidth```.
More information about these classes: #link("https://doc.qt.io/qt-6/qtablewidget.html")[QTableWidget], #link("https://doc.qt.io/qt-6/qtablewidgetitem.html")[QTableWidgetItem] and #link("https://doc.qt.io/qt-6/qvariant.html")[QVariant].
=== Interaction between Table and Preview
In order to display the content of a clicked table cell in the preview, both `groupTable` and `fieldTable` are connected so that when they receive a ```cpp &QTableWidget::cellClicked``` signal, the `TableBox` class will call the following slot: ```cpp &TableBox::collectData```. This slot checks the item clicked by the user on the currently displayed table and, if it exists, first collects the data stored in the corresponding `ExamInfo` class. If the data is not ```cpp NULL```, the slot fills a `QStringList` with the paths of the files to display according to the column the item is situated in. If the column is *Field*, the method also instantiates a `QString` with the field's name so that the preview can display and isolate it from the rest of the page. Lastly, with the acquired information, the slot emits a ```cpp &TableBox::sendDataToPreview``` signal so that the preview receives the correct information. `TableBox` and `ExamPreview` are connected in the `Mainwindow` class.
== Search Function within the Table
For now, the search bar provides simple and multiple text search as well as a tagged search. All the available functionalities are detailed below.
=== Searchbar
The `QWidget` used for the search is a `QLineEdit`. Two signals are used here :
- `QLineEdit::textChanged` :
every time a character is changed, the slot
`TableBox::cleanSortTable`
will check if the text zone is empty. If so, it will show all the lines of the table and clear
the error message of `searchInfo`.
- `QLineEdit::returnPressed` :
when the user press return, the slot `TableBox::searchProcessing` will start the search.
The first step is the search processing. It tests the format of the query with a regular expression #link(<ref4>)[[4]]#link(<ref5>)[[5]]. Depending on the result,
it processes the different types of search :
- Unknown : error message in the `QLabel` named `searchInfo`.
- Simple : select the columns concerned by the search, create the regex for the search and filter the rows.
- Multiple : select the columns concerned by the search, create the regex by joining all the searched words and filter the rows.
- Tag : select the columns concerned by the search, create the list of tag and the list of regex and filter the rows.
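As a hypothetical illustration of the multiple search (sketched here with `std::regex`; the real code uses Qt's `QRegularExpression`), joining the searched words into one alternation pattern looks like:

```cpp
#include <regex>
#include <string>
#include <vector>

// Hypothetical sketch of the multiple search: the words "word1, word2, ..."
// are joined into a single alternation pattern, then a row is kept if any of
// its cells matches. Helper names are ours, not the project's.
std::regex buildMultipleSearchRegex(const std::vector<std::string> &words) {
    std::string pattern;
    for (std::size_t i = 0; i < words.size(); ++i) {
        if (i) pattern += '|';
        pattern += words[i];
    }
    return std::regex(pattern, std::regex::icase);
}

bool cellMatches(const std::string &cellText, const std::regex &re) {
    return std::regex_search(cellText, re);
}
```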
=== Fuzzy search
Fuzzy search is approximate string matching. It shows a list of at most the three nearest words found. It implements the Wagner–Fischer
algorithm, which computes the Levenshtein distance #link(<ref2>)[[2]]. It uses a dynamic threshold (30% of the length of the searched text).
The fuzzy search only works for the simple and multiple text searches.
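A minimal sketch of the Wagner–Fischer computation with the 30% dynamic threshold described above (helper names are assumptions, not the project's actual API):

```cpp
#include <algorithm>
#include <string>
#include <vector>

// Wagner–Fischer dynamic programming: d[i][j] is the Levenshtein distance
// between the first i characters of a and the first j characters of b.
int levenshtein(const std::string &a, const std::string &b) {
    std::vector<std::vector<int>> d(a.size() + 1,
                                    std::vector<int>(b.size() + 1, 0));
    for (std::size_t i = 0; i <= a.size(); ++i) d[i][0] = static_cast<int>(i);
    for (std::size_t j = 0; j <= b.size(); ++j) d[0][j] = static_cast<int>(j);
    for (std::size_t i = 1; i <= a.size(); ++i)
        for (std::size_t j = 1; j <= b.size(); ++j)
            d[i][j] = std::min({d[i - 1][j] + 1,                    // deletion
                                d[i][j - 1] + 1,                    // insertion
                                d[i - 1][j - 1] + (a[i - 1] != b[j - 1])});
    return d[a.size()][b.size()];
}

// Dynamic threshold: a candidate is "near" if the distance is at most
// 30% of the length of the searched text.
bool fuzzyMatch(const std::string &query, const std::string &candidate) {
    return levenshtein(query, candidate) <= static_cast<int>(0.3 * query.size());
}
```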
=== AtomicSearchBox
Allows the user to do an atomic search (searching for the exact word as a pattern instead of matching with the regex) :
- works only for the simple search (it also runs on the multiple search, but because the words are joined it searches for their concatenation).
=== Search on FieldView and GroupView
A search is processed on either the GroupView or the FieldView, not both at the same time. Because two tables are used, the previous search is not automatically cleared. However, if `textZone` is cleared
by the user, the slot `TableBox::cleanSortTable` is called for both tables.
== Preview <preview>
- [x] Interaction between table and preview that displays data according to a cell’s column
- [x] Basic interactions with the preview, such as zoom operations, previous page/next page operations within a single Paper, and different interactions with the fields based on their type.
=== Layout in the Window
This is described in the `preview.h` file directly (in French!).
One thing of note is the existence of two previews and two scenes, but I focused on one pair and left the other totally empty.
=== ExamViewPort
Subclass of `QGraphicsView`, allows you to render a scene. This class allows you to zoom in and out of the scene without transforming the objects in the scene.
- Antialiasing and `FullViewportUpdate` are turned on because I noticed small visual artifacts when a specific brush was set for highlighting fields.
- The attribute `dimensionToConsider` and the `scaleToFit` enum are here to allow the dev or even the user to scale the page they're viewing to fit the whole height or width of the page in the viewport. *Full height* has been selected for now.
=== ExamScene
Subclass of `QGraphicsScene`, will be filled with items to be rendered by the view. This class can *manipulate* items directly. The most important parts are the `loadImage` and `loadAnswerSheets` functions.
=== PageMask
Subclass of `QGraphicsPathItem`. First a mask is initialized over the whole page when it's loaded, then subpaths are added to add transparency where we want them (over the fields). The mask is refreshed every time the page is reloaded (right now, this means every time we click on a cell).
=== ExamSinglePage
Subclass of `QGraphicsPixmapItem`. Holds the attributes of the page and is the parent of all the FieldItems. This is where the functions that perform the unit conversions are defined. They allow you to convert JSON coordinates from the parser to image coordinates used in the preview, and vice versa. This is also where the `FieldItems` are initialized.
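The conversions mentioned above amount to a scale factor between the page size in millimetres (from `dataCopieJSON::pageSizeInMM`) and the pixmap size in pixels; a hypothetical sketch (names and struct are ours, not the project's):

```cpp
// Hypothetical sketch of the unit conversions performed by ExamSinglePage:
// parser (JSON) coordinates are in millimetres, scene coordinates in pixels,
// so each axis is scaled by pixels-per-millimetre.
struct PageGeometry {
    double widthMM, heightMM;  // page size from the JSON
    double widthPx, heightPx;  // size of the loaded pixmap
};

double mmToPxX(const PageGeometry &g, double xMM) { return xMM * g.widthPx / g.widthMM; }
double mmToPxY(const PageGeometry &g, double yMM) { return yMM * g.heightPx / g.heightMM; }
double pxToMmX(const PageGeometry &g, double xPx) { return xPx * g.widthMM / g.widthPx; }
double pxToMmY(const PageGeometry &g, double yPx) { return yPx * g.heightMM / g.heightPx; }
```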
=== FieldItems
Subclass of `QGraphicsPolygonItem`. *Generic polygon* that is then subclassed in `AtomicBoxItems` and `MarkerItems` with different properties. Each item is able to send data to the underlying library with `sendNewDataToLib`.\
Here's a tip: when re-implementing a mouseEvent in a subclass of QGraphicsItem, do not forget to call the corresponding mouseEvent of the parent class.\
_Example_ : read the
```cpp
void MarkerItem::mouseReleaseEvent(QGraphicsSceneMouseEvent *event)
```
function in `markeritem.cpp`.
==== Why the use of `QVariant` ?
In `FieldItem::sendNewDataToLib`, we send `QMap<QString, QList<QVariant>>` as it allows us to use one function to reflect changes to all the fields. In the case of markers, the data we send is a list of new coordinates, but in the case of Atomic Boxes, it's simply a boolean value. Using QVariants wraps every possible type into one.
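As a plain-C++ analogue (hypothetical; the real code uses Qt's `QVariant`), `std::variant` plays the same role of wrapping the possible payload types into one container:

```cpp
#include <map>
#include <string>
#include <variant>
#include <vector>

// Analogue of QMap<QString, QList<QVariant>>: one container type carries both
// marker payloads (lists of coordinates) and atomic-box payloads (one bool).
using FieldValue  = std::variant<bool, double>;
using FieldUpdate = std::map<std::string, std::vector<FieldValue>>;
```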
= Description of the test case<testcase>
The test case we used is located in the #dirpath("resources/test_case/Fichiers") and #dirpath("resources/test_case/Json") folders. They contain _manually_ renamed files to match JSON files and images based on their names since this was the only identifier we had. Each file is named following this syntax `X-Y-Z-copieA-pageB:`
- _X-Y-Z identifier_
- X : id of the Exam
- Y : boolean value, is each Page double-sided or not ?
- Z : boolean value, is each Paper printed on two pages or not ?
- _copieA_ :
- A : id of the Paper
- _pageB_ :
- B : current page number
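A hypothetical parser for this naming scheme (the actual matching done in `JsonLinker` may differ):

```cpp
#include <regex>
#include <string>

// Hypothetical parser for test-case file names of the form
// "X-Y-Z-copieA-pageB", e.g. "1-0-0-copie2-page1".
struct ScanName { int exam, doubleSided, twoPages, copy, page; bool ok; };

ScanName parseScanName(const std::string &stem) {
    static const std::regex re(R"((\d+)-(\d+)-(\d+)-copie(\d+)-page(\d+))");
    std::smatch m;
    if (!std::regex_match(stem, m, re)) return {0, 0, 0, 0, 0, false};
    return {std::stoi(m[1]), std::stoi(m[2]), std::stoi(m[3]),
            std::stoi(m[4]), std::stoi(m[5]), true};
}
```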
The original test case generated by M.Poquet is also available in a set of 12 different folders at #dirpath("resources/test_case/basic-*-*-*/"), with each Paper in a separate folder. *Please keep in mind that we wrote all of our code without a definitive idea of the data format we would end up handling, as the API was not available at the time*
= Known Bugs
== Markers, MarkerHandles and coordinate systems
_It is possible to move a whole Marker and its corners (called `MarkerHandles` in code) individually. But this feature was implemented at the tail end of the project so some bugs flew under my radar._
- Fixing and converting the coordinates systems used by the markers.
- Every `FieldItem` on the page is actually located at `QPointF(0,0)` (the origin of the scene, in the top-left corner) in page coordinates, with a polygon drawn at the correct location. This causes strange behavior when moving a marker around (see `qDebug` output when moving a marker). The only value I was able to ensure was consistently correct was the displacement relative to the item's starting point. I became aware of this issue too late to fix it because it did not cause any visual bug, but it will surely be very annoying when other libraries receive that data.
- Changes to the position of a marker are not reflected on its corners, meaning that moving the whole marker from its initial position and then moving a corner will exhibit weird behavior, as the corner's position hasn't been updated by the prior move.
- The next step would then be to actually convert the positions from their page coordinates (in pixels) back into parser coordinates in millimeters. (The test JSON files we had available had every coordinate in millimeters).
== Duplication of data when opening a save file
_It is possible, with the help of the menu bar, to open a save file from the evaluation window. Because of a lack of time, the table doesn't have a method to clean its content, so the save data is added on top of the pre-existing data._
There could be two fixes to this issue :
- Creating a method to clean the content of the table.
- Creating a method to delete and recreate a new table made up of this new data.
== All the search-related problems
- When a tag search is processed, unwanted "copies" are sometimes shown in the table. This only happens for "copies not found" and occasionally "Json not found" entries.
#figure(
image(
"Evaluation_table.png", width: 30%
),
caption:"Here the four latest copies are shown"
)
= Missing features
- #text(fill: rgb("#2CBEC0"), [Menu Bar : ])It lacks most of its functionality; however, most of the functions are already present in the code and just need to be written. Some options may need to be removed as they were more of a placeholder than anything else (e.g. dark mode).
- #text(fill: rgb("#2CBEC0"), [Save Data :]) The save system doesn't account for any modifications, only storing the initial state of the data.
- #text(fill: rgb("#2CBEC0"), [User Dialog :])Only file matching errors are displayed in the user dialog box for the moment. A set of information about the number of scans and JSON files that were recognized could be added if necessary.
- #text(fill: rgb("#2CBEC0"), [Searchbar : ]) First, user help is missing: a reminder of the search format would be welcome.
The fuzzy search only works for the simple and multiple text searches; we lacked the time to implement it for the tag search.
Finally, the atomic search only works for the simple search: on a multiple text search it ends up searching
for the concatenation of the words, and for the tag search #link(<refImple>)[we have to think in a completely different way to implement this feature] — a more complete and powerful search tool with functionalities such as comparison operators, for example.
- #text(fill: rgb("#2CBEC0"), [Table : ])A column containing the value of a field, to allow the user to modify it directly from the table, was initially requested but could not be implemented due to lack of time. Currently, the GUI notifies the user of missing or unassociated files, but we did not have the time to implement a feature to help fix those issues. This version of the GUI only displays items in the table without offering the possibility to modify them; hence, users are not able to modify or remove any cell manually.
- #text(fill: rgb("#2CBEC0"), [Preview : ])The `FieldItem::getRect` is used to inform the `QGraphicsView` of the new region to display when the user clicks on a "field" cell inside the Table. The call stack is :\
#par(justify: false,[`FieldItem::getRect -> ExamSinglePage::getFieldPos -> Examscene::setROI -> ExamViewPort::fitROIInView`.])
But `fitROIInView` wasn't implemented due to the lack of time.
= What could be improved <refImple>
- The executable doesn't have a logo, so a per-platform generic logo like #box(image("genericlogo1.png", height: 1em)) on Linux is used by default.
- The regexes work, but using them to compare the text in the cells makes it difficult to change the way our search works. Changing this would make it easier to implement the atomic search for all search types.
- Replacing the Levenshtein distance#link(<ref2>)[[2]] with the Damerau-Levenshtein distance#link(<ref3>)[[3]] would give a more precise threshold.
- Variables in `ScanInfo` could be renamed to match the Table Headers.
- #link("https://doc.qt.io/qt-6/model-view-programming.html")[An MVC architecture] could be implemented.
- The minimum size of the Preview might be too restrictive; feel free to reduce `minPreviewSize` and change the `previewSizePolicy` if needed.
- The second preview viewport (only visible when the primary preview is present in another window) has been left unused for now. It might be a good idea to reconsider whether it's useful or not.<secondpreview>
- The image is reloaded every time a field is clicked. This could be an area of improvement if you have the time.
- The CLI could have been implemented with a `QCommandLineParser` instead of built from the ground up like we did. It may end up being simpler to implement, and it comes with built-in help and version options. Because the CLI was implemented in the last week of the project, we couldn't dive deep enough into the #link("https://doc.qt.io/qt-6/qcommandlineparser.html")[documentation].
- The data structures used to store information about exams might be subject to change, since some decisions were very specific to the test cases we used while testing different features of the program.
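The distance swap suggested above is easy to prototype outside the GUI first. Below is a minimal, self-contained Python sketch (illustrative only — the project itself is C++/Qt, and these function names are ours) contrasting the plain Levenshtein distance with the optimal-string-alignment variant of Damerau-Levenshtein, which additionally counts an adjacent transposition as a single edit:

```python
def levenshtein(a: str, b: str) -> int:
    """Classic edit distance: insertions, deletions, substitutions."""
    m, n = len(a), len(b)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution
    return d[m][n]

def damerau_levenshtein(a: str, b: str) -> int:
    """Optimal-string-alignment variant: also counts adjacent transpositions."""
    m, n = len(a), len(b)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1, d[i][j - 1] + 1, d[i - 1][j - 1] + cost)
            if i > 1 and j > 1 and a[i - 1] == b[j - 2] and a[i - 2] == b[j - 1]:
                d[i][j] = min(d[i][j], d[i - 2][j - 2] + 1)  # transposition
    return d[m][n]

# A swapped pair of letters costs 2 under Levenshtein but 1 under Damerau-Levenshtein:
print(levenshtein("exam", "eaxm"), damerau_levenshtein("exam", "eaxm"))  # prints "2 1"
```

This is why the Damerau-Levenshtein variant tends to give a tighter fuzzy-match threshold for typos, which are often transpositions.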
== Preview module
- Hiding the mouse cursor when moving a corner of a marker could improve accuracy.
== What needs to be removed
- The `JsonLinker` class associates pages with their corresponding JSON file by matching the file names. This class was only useful because we had no other way of associating files. Ultimately, Hekzam will contain a library that supports this feature.
- This association was also responsible for initializing exams, copies, pages and fields into the dedicated data structures `ExamInfo`, `CopyInfo`, `PageInfo` and `FieldInfo`. Those structures will need to be initialized differently.
- The `PageInfo` data structure contains an attribute that describes the path of the file it is associated with; this will no longer be needed in the final version, since the association library will take care of that.
- I used the `mViewPort` and `mJSON` namespaces in some parts of the code, but they might be useless; remove them if you will.
= OS specific issues
== MacOS
- MacOS specific error message
#figure(
image(
"errormac.png"
),
caption: [_captured on M1 MacBook (ARM)_]
)
This seemingly happens when the user clicks anywhere in the window and then hovers over the preview frame. It didn't seem to affect the program in any other way.
== Linux
=== Wayland
- The window position cannot be restored, because Wayland does not let clients set their own window position.
= Sources
<ref1>[1]#link("https://doc.qt.io/qt-6/qtwidgets-index.html")[Qt official documentation]\
<ref2>[2]#link("https://fr.wikipedia.org/wiki/Distance_de_Levenshtein")[Levenshtein distance]\
<ref3>[3]#link("https://fr.wikipedia.org/wiki/Distance_de_Damerau-Levenshtein")[Damerau-Levenshtein distance]\
<ref4>[4]#link("https://doc.qt.io/qt-6/qregularexpression.html")[QRegularExpression documentation]\
<ref5>[5]#link("https://perldoc.perl.org/perlre")[Perl regex documentation]\
# typst_japanese_thesis
## Original package

[NTNU Typst](https://github.com/saimnaveediqbal/thesis-NTNU-typst)

Changes:

- Margins
  - adjusted to suit Japanese documents
- Japanese translations of terms such as "Appendix" and "Chapter"
- Fonts suited to Japanese text
## Usage

- Install Typst
- Install the [Tinymist](https://github.com/Myriad-Dreamin/tinymist) extension in VS Code
- Use it with a bib file of your choice (recommended: Paperpile's Google Drive integration)
- Write!
## Useful pages on syntax, etc.

- Official documentation
  - [Typst Documentation](https://typst.app/docs)
- Quick cheat sheet (in Japanese)
  - [さらばTex。Typstの文法全集 〜1. マークアップ編〜](https://zenn.dev/yuhi_ut/articles/how2write-typst1)
|
#import "is.typ": *
#let assert-is-color-value-object-array(args) = {
let p = args.at(0)
let message = "the arg must take the shape of a dictionary: (fill, value)"
assert(is-object(p) and "value" in p and "fill" in p, message: message)
}
#let assert-is-ratio-divisible(n, ratios) = {
assert(is-divisible(n, ratios.sum()), message: "the ratios do not evenly divide the size")
}
|
|
#import "./template.typ": thmrules, template, definition, theorem, proof, claim, code, lemma, corollary, remark, turn-on-first-line-indentation as fli, indent
#let sp = h(0.5em)
#let state_dict(d) = {
// assert(type(d) == dictionary)
${
#(d
.pairs()
.map(((k, v)) => $bold(#(k+":")) #v$)
.join(",")
)
}$
}
#let generated_image(subpath) = {
// assert(type(subpath) == str)
(..args) => image("../assets/converted/" + subpath + ".png", ..args)
}
#let image_by_hand(subpath) = {
// assert(type(subpath) == str)
(..args) => image("../assets/images/" + subpath, ..args)
}
#let varnothing = $text(font: "Fira Sans", nothing)$
#let comment(cnt) = text(fill: red, cnt)
#let quo(l, r) = {
$#l"/"#r$
}
#let esq(u, eqv_rel: none) = $[#u]_#( if eqv_rel == none {"~"} else {eqv_rel})$
#let Stab = $"Stab"$
#let Aut = $"Aut"$
#let Gen = $"Gen"$
#show: thmrules
#show: template
////////////////////////
// Document begins!
////////////////////////
= ABSTRACT <nonum>
// TODO: number of pages etc.
#context [
#let total_pages = counter(page).final().at(0)
#let total_images = counter(figure.where(kind: image)).final().at(0)
#let total_references = 32
Explanatory note: #total_pages pages, #total_images figures, #total_references references.
]
Keywords: GRAPH DYNAMIC THRESHOLD MODEL, RESOURCE NETWORK, REGULAR GRAPH, AUTOMORPHISM, SYMMETRY.
The resource network model can be used to model percolation, that is, the flow of a substance through some medium. To this end, we consider networks whose graphs have a regular structure — sponge networks. Since regular graphs possess a large number of symmetries, a natural question arises: what do the symmetries of the graph tell us about the symmetry of the resource network as a dynamical system? We therefore pose problems concerning the description of the symmetries of sponge-network graphs, of their symmetries as dynamical systems, and of the relationship between these two kinds of symmetry. We also consider modifications of sponge networks obtained by quotienting and by placing the networks on a cylinder, and describe properties of the network quotienting process. In addition, several general statements about automorphisms of dynamical systems and graphs are proved.

The work employs the machinery of graph theory, group theory, and the theory of dynamical systems, as well as results obtained in previous studies of resource networks. To run the simulations and visualize the results, the program `Sponge-networks` was written in `python`; its source code is open and available on Github.

#outline(title: [Contents])
// TODO: finish or rewrite
= INTRODUCTION <nonum>
Network models take many forms and find many applications in modern mathematics. Network models underlie neural networks, models of social networks, automata, multi-agent systems, and so on. One line of research on network models is resource networks.
The resource network model is a directed graph with a certain amount of so-called resource (some non-negative real number) located at each vertex. Vertices can accumulate an arbitrarily large amount of resource. The model is a discrete-time dynamical system resembling a Markov chain. As in a Markov chain, at each step the values assigned to the vertices are redistributed among neighboring vertices so that the total amount of resource remains unchanged (resource neither appears from nowhere nor vanishes into nothing). If the total amount of resource in the network is small, the resource network operates like a Markov chain (up to a certain coefficient). However, when the total resource is large, nonlinear effects come into play: by the definition of a resource network, a vertex cannot send more resource along an edge than that edge's capacity.
The notion of a resource network was first proposed by O. P. Kuznetsov in 2009 @Kuznetsov2009. The topic has been developed extensively: many properties of ordinary resource networks have been studied. Thus, the monograph by L. Yu. Zhilyakova @ЖИЛЯКОВА2013 identified the key characteristics of resource networks (for example, the threshold value of resource and attractor vertices) and proposed a classification of networks. A complete analysis was given of the behavior of non-regular resource networks with a small amount of resource, of Eulerian networks, of regular networks, and others. An algorithmic solution to finding the threshold value in an ergodic resource network was proposed in @Скороходов2016: V. A. Skorokhodov gave a deterministic algorithm that finds the threshold value in any ergodic network in $O(n)$ (here $n$ is the number of vertices in the network). Also worth noting is @Zhilyakova2022, where resource networks were used to obtain results in the theory of stochastic matrices and inhomogeneous Markov chains. The paper @Abdulrahman2024 describes the connection between the traditional flow model (the Ford-Fulkerson network) and resource networks.
Many modifications of this model have been considered as well. A model of resource flow distribution in a network with dynamic arc traversal times was studied in @Скороходов2023. Despite the strikingly different qualitative characteristics of such networks, it was shown that the total resource in the network remains unchanged, just as for ordinary resource networks. Moreover, it was established that if the total resource is not less than the threshold value of the original network, then a network with dynamic arc traversal times has a unique limit flow. Particular attention is due to the modification of the resource network with "greedy" vertices @Чаплинская2021, proposed by N. <NAME>. Applications of resource networks to modeling real processes have also been studied, for example the spread of a substance in a liquid medium @Zhilyakova2012.
Our work proposes one more application of resource networks: modeling percolation, i.e. seepage through some medium. The canonical reference is @Bollobas_Riordan_2006, which describes the main models for studying this phenomenon. Research on percolation remains active to this day, as exemplified by the survey @Li2021, which describes modern models and methods for their study. Studies of percolation processes on graphs continue, and ever new models are being proposed, such as percolation on random regular graphs @Nachmias2007 @Joos2017, percolation in complex networks @Cohen2009, percolation in mixed random @Verbavatz2021 and pseudorandom @Diskin2021 graphs, and many others.
Note, however, that the main tool for studying percolation is the theory of random graphs. We propose a somewhat different reading of the notion of percolation and a different way of modeling it: via regular resource networks (called _sponge networks_), which are a modification of the resource network model.
We concentrate on the symmetries of sponge networks as our main object of investigation. The motivation for this choice is the observation that the defining property of regular structures (graphs included) is their symmetry and periodicity. This is also the sense in which sponge networks chiefly differ from resource networks in general. The following question then arises: if the graph of a resource network possesses some symmetry, how does this affect the behavior of the corresponding dynamical system? Is this "symmetry" preserved, and if so, in what sense? These are the questions we attempt to answer in this work.
The questions raised in our work are also closely related to the general theory of dynamical systems: their symmetries, phase spaces, and morphisms between them. These notions are often used in ergodic theory @Walters2000, where, however, the objects of study are those operations on dynamical systems that preserve some measure or are homeomorphisms of certain topological spaces. A more general approach is described, for example, in @Brin2002. Symmetries of dynamical systems can open a new perspective for their qualitative analysis. Thus, in @Murza2017 <NAME> analyzes heteroclinic cycles and Hopf bifurcations of an arbitrary dynamical system whose symmetry group is the quaternion group $bold(Q)_8$, which allows the author to determine conditions for the existence of a heteroclinic cycle between three arbitrary equilibria of the system.
Dynamical systems on graphs — for example, Sequential Dynamical Systems (SDS) — are of independent interest to researchers; they generalize finite automata, cellular automata, Markov chains, Boolean networks, and many other notions @Mortveit2007. Thus, the periodic behavior of Boolean networks was studied via SDS theory in a series of papers by J. Aledo and <NAME> — for example, in @Aledo2020. Symmetries of discrete dynamical systems (in particular, SDS) were studied in @Kornyak2010, which yielded interesting results in the area of cellular automata, and especially Conway's Game of Life.
The object of study of this work is regular resource networks; the subject is their symmetries. The goal of the work is to describe all symmetries of sponge networks as graphs and to relate their symmetries as graphs to their symmetries as dynamical systems. The following tasks are posed:

- build a model of a regular resource network (sponge network);
- describe the connection between the symmetries of regular tilings of the plane and the symmetries of sponge networks;
- describe the full symmetry group of sponge networks;
- introduce the notion of quotienting of sponge networks and establish basic properties of networks under quotienting;
- examine the symmetries of regular networks after certain quotients, in particular after placing them on a cylinder;
- establish the connection between the symmetries of sponge-network graphs and the symmetries of the dynamical systems generated by the resource networks.
= BASIC CONCEPTS

Before describing our model and the properties derived for it, let us elaborate on the notions and methods used below. As mentioned in the introduction, the work makes extensive use of group theory, graph theory, and the theory of dynamical systems. We therefore first give brief introductions to these areas of mathematics, and then describe the resource network model, which itself relies on the theory of dynamical systems.

== Group theory

First, note that all definitions and statements in this section are taken from classical literature on group theory, for example @Винберг2019.

Group theory studies algebraic structures in which the elements of the carrier set can be "multiplied" and "inverted", and there is a distinguished element, the "identity", such that multiplying by it adds nothing new. The formal definition is given below.
#definition[
A *group* is an algebra $H eq.def (X, dot, (-)^(-1), e)$, where
- $X$ is an arbitrary set;
- $dot$ is a binary operation (multiplication);
- $(-)^(-1)$ is a unary operation (inversion);
- $e$ is a constant (the identity);
and the following axioms are satisfied:
+ $forall x, y, z in X sp (x dot y) dot z = x dot (y dot z)$ (associativity of multiplication);
+ $forall x in X sp x dot e = e dot x = x$ (neutral element);
+ $forall x in X sp x dot x^(-1) = x^(-1) dot x = e$ (invertibility).
]
One of the main interpretations of a group is as the symmetries of some object. The simplest example is the dihedral group $D_n$, the symmetry group of a regular $n$-gon. The composition of any two symmetries is again a symmetry, there is an identity symmetry (under which every point of the polygon maps to itself), and every symmetry has an inverse.
A more formal construction can be given as well. Let $M$ be some set, and let $H$ be some set of bijections $M arrow.hook M$ that is closed under composition $compose$ and inversion $(-)^(-1)$ and contains the identity map. Since function composition is associative, $H$ is a group, called the *automorphism group* of $M$ or the *symmetry group* of $M$. In this case, one says that the group $H$ *acts* on the set $M$. In fact, for every group one can find a set on which it acts. The elements of $H$ are called *automorphisms* or *symmetries* of the set $M$.
#definition[
An action of a group $H$ on a set $M$ is called *transitive* if $forall x, y in M sp exists f in H: f(x) = y$. In other words, any element of the set $M$ can be taken to any other element by some symmetry.
]
#definition[
Let a group $H$ act on a set $M$. Then $Stab_H (x) = { f in H | f(x) = x }$ is the *stabilizer* of an element $x$ under the action of the group $H$, i.e. the set of symmetries that leave the element $x$ in place.
]
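The notions of action, transitivity, and stabilizer can be made concrete on a tiny example. Below is an illustrative Python sketch (not part of the `Sponge-networks` code; all names are ours) that realizes the dihedral group $D_4$ as the set of vertex permutations preserving the 4-cycle, i.e. the symmetries of a square acting on its corners:

```python
from itertools import permutations

# Undirected edges of the 4-cycle (the sides of the square with corners 0..3).
cycle = [(0, 1), (1, 2), (2, 3), (3, 0)]
edges = {frozenset(e) for e in cycle}

def is_symmetry(p):
    """A permutation p of the corners is a symmetry iff it maps edges to edges."""
    return {frozenset((p[u], p[v])) for (u, v) in cycle} == edges

D4 = [p for p in permutations(range(4)) if is_symmetry(p)]
stab0 = [p for p in D4 if p[0] == 0]  # Stab_{D4}(0): symmetries fixing corner 0

print(len(D4), len(stab0))  # prints "8 2"
```

The action is transitive (every corner can be rotated onto every other), the group has the expected order 8, and the stabilizer of a corner consists of the identity and one reflection.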
== Graphs

The material on graph theory is taken from canonical works such as @Trudeau1993.
#definition[
A *(directed) graph* is a pair of sets $G eq.def (V, E)$, where $E subset V times V$. We call $V$ the *set of vertices* and $E$ the *set of edges*.
]
A graph is called *weighted* if there is some set of weights $X$ and some function $phi: E -> X$ assigning a certain weight to each edge of the graph.
We introduce the following additional notions and notation:
- vertices $u$ and $v$ are *adjacent* if $(u, v) in E or (v, u) in E$;
- for an arbitrary vertex $v in V$, $"adj"^-(v) = {u in V | (u, v) in E}$;
- for an arbitrary vertex $v in V$, $"adj"^+(v) = {u in V | (v, u) in E}$;
- for an arbitrary vertex $v in V$, $"adj"(v) = "adj"^-(v) union "adj"^+(v)$ is the set of all vertices adjacent to $v$.
An undirected graph can be understood as a special case of a directed one, in which the set $E$ is symmetric ($forall (v_1, v_2) in E sp (v_2, v_1) in E$). Unless stated otherwise, we use the notion of a directed graph.
#definition[
$Gen_(G)(V')$ is the subgraph of the graph $G$ *induced* by a subset $V' subset V$ of the vertices of $G$, that is, the graph $G' = (V', E')$ where $forall u, v in V' sp ((u, v) in E' <=> (u, v) in E)$.
]
// A graph is usually assumed to be finite in the sense that $V$ is finite.
#definition[
#fli[
A map $Phi: V -> V$ is called a *morphism* of the graph $G$ if $forall u, v in V sp (u, v) in E => (Phi(u), Phi(v)) in E$. If $Phi$ is a bijection, it is called an *automorphism* of the graph $G$.
The set of automorphisms is indeed a group of automorphisms acting on the set $V$; it is denoted $(Aut(G))$.
If the graph is weighted, it is additionally required that $forall u, v in V: (u, v) in E sp phi((u, v)) = phi((Phi(u), Phi(v)))$.
]
] <def:graph-aut>
Note that we will use a dual notation. On the one hand, we treat graph automorphisms as maps $V -> V$, as stated in definition @def:graph-aut. On the other hand, we may speak of them as maps on $V union.sq E$, taking vertices to vertices and edges to edges. In that case, if $(u, v) = e in E$, then $Phi(e)$ denotes the edge $(Phi(u), Phi(v))$. This notation allows a much more compact formulation of certain statements and also generalizes more naturally to, for example, multigraphs.
#definition[
Let $G = (V, E)$ be some directed graph, and suppose an arbitrary equivalence relation "$~$" is given on the vertex set $V$. Following @Gardiner1974, we form the *quotient graph of the graph* $G$ by the relation $~$. Define $quo(G, ~) eq.def (V', E'):$
$
cases(
V' = quo(V, ~)";",
(esq(u), esq(v)) in E' <=> exists u in esq(u)"," exists v in esq(v) : (u, v) in E.
)
$ <nonum>
// moreover, it is congruent with respect to the relation $E$, i.e. it preserves adjacency of vertices:
] <def:quotient-graph>
#let ft1 = footnote[
In fact, this analogy becomes exact if one regards the described examples as special cases of a more general category-theoretic definition of the notion of a quotient object in the corresponding category. Thus, a quotient graph is a quotient object in the category of graphs, a quotient group is a quotient object in the category of groups, and so on. More about quotient objects can be found, for example, in the classic work of Robert Goldblatt @Goldblatt2014[chapter 4].
]
The interpretation of this notion is as follows: in the graph $G$ one may "glue" some vertices together and regard them as one; all vertices that were adjacent to the "old" vertices glued into the "new" one become adjacent to it. One can draw an analogy #ft1 with quotient groups, quotient topological spaces, and so on, where some elements of a set are likewise identified; only in the case of graphs no additional constraints (such as congruence) are imposed on the equivalence relation.
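The quotient construction of definition @def:quotient-graph is straightforward to compute. A minimal illustrative Python sketch (the function and variable names are ours, not part of the `Sponge-networks` code):

```python
def quotient_graph(V, E, cls):
    """Quotient of a directed graph (V, E) by an equivalence relation.

    The relation is given by `cls`, mapping each vertex to the label of
    its equivalence class; glued vertices inherit all adjacencies."""
    Vq = {cls[v] for v in V}
    Eq = {(cls[u], cls[v]) for (u, v) in E}
    return Vq, Eq

# Glue the vertices of the directed path 0 -> 1 -> 2 -> 3 by parity:
V = {0, 1, 2, 3}
E = {(0, 1), (1, 2), (2, 3)}
cls = {v: v % 2 for v in V}
print(quotient_graph(V, E, cls))
```

Here the four-vertex path collapses to two classes (even/odd) joined by edges in both directions, since some even vertex points to some odd one and vice versa.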
== Dynamical systems

The material in this section is taken from @Brin2002 and @Mortveit2007.
#definition[
Let $X$ be an arbitrary set and $S: X -> X$ some function. A *discrete dynamical system* $cal(S) eq.def (X, S)$ can be formed from $S$ as follows: if $q(0) in X$ is some initial state, then $q(t) = S(q(t-1)), t in NN$. $S$ is called the *evolution function* of the dynamical system.
]
#definition[
Let $cal(S)_1 = (X_1, S_1)$ and $cal(S)_2 = (X_2, S_2)$ be two dynamical systems. A map $Phi : X_1 -> X_2$ is called a *morphism* between $cal(S)_1$ and $cal(S)_2$ if $forall x in X_1 sp Phi(S_1(x)) = S_2(Phi(x))$.
The map $Phi$ is called an *isomorphism* if it is bijective.
An isomorphism $Phi$ is called an *automorphism* if $cal(S)_1 = cal(S)_2$; that is, an automorphism is an isomorphism "onto itself". The group of all automorphisms of a dynamical system $cal(S)$ is denoted $Aut(cal(S))$.
] <def:dyn_syst_aut>
It is precisely the notion of an automorphism that expresses the symmetry of the behavior of a dynamical system. An automorphism conveys complete information about the dynamical system.
As an example of a system possessing a symmetry, consider the following $S: RR -> RR$:
$ S(x) = a_1 x + a_2 x^3, sp sp a_1, a_2 in RR. $ <eq:dyn-auto-ex1>
This system has the following automorphism: $Phi(x) = -x$. Indeed,
$
forall x in RR sp S(Phi(x)) = S(-x) = a_1 (-x) + a_2 (-x)^3 = - (
a_1 x + a_2 x^3
) = Phi(S(x)).
$ <nonum>
If a dynamical system possesses a nontrivial automorphism, then there is a bijection between phase trajectories that preserves the properties of those trajectories. Thus, the trajectory obtained from the initial state $3$ of system @eq:dyn-auto-ex1 has the same properties as the trajectory starting from the initial state $-3$; the only difference is the sign. Suppose we are interested in the set of all possible trajectories of this system. To examine it, we would take all possible initial states $q^0 in RR$ and then build $S(q^0), S^2(q^0), ...$. Thanks to the symmetry $Phi$, the set of states to consider can be narrowed down to $RR_+$.
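The commutation condition of definition @def:dyn_syst_aut can be checked numerically for this example. A small illustrative Python sketch (the coefficient values are our own choice):

```python
def S(x, a1=1.0, a2=-0.5):
    """Evolution function S(x) = a1*x + a2*x^3; it is an odd function of x."""
    return a1 * x + a2 * x ** 3

def phi(x):
    """Candidate symmetry: the reflection x -> -x."""
    return -x

# phi is an automorphism iff it commutes with S: phi(S(x)) == S(phi(x)).
samples = [-3.0, -0.7, 0.0, 1.5, 2.0]
ok = all(abs(phi(S(x)) - S(phi(x))) < 1e-12 for x in samples)
print(ok)  # prints "True"
```

The check succeeds because $S$ contains only odd powers of $x$; adding an even term (say $x^2$) would break the symmetry.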
== The resource network model

A resource network is a directed labeled graph with a certain amount of "resource" located at each vertex. Resource is some non-negative real number; one can think of resource as a liquid. A resource network forms a dynamical system in discrete time. Namely, at every step the resource is redistributed among the vertices so that the total amount of resource in the network remains unchanged. Each vertex "gives away" into each of its edges an amount of resource proportional to the capacity (label) of that edge, but no more than the capacity itself. Thus, if a vertex holds sufficiently little resource, the system operates equivalently to a Markov chain (up to a certain coefficient). Conversely, if every vertex of the network is assumed to hold more resource than its capacity, then at the next step each vertex loses exactly as much resource as its capacity and receives as much as the total weight of all its incoming edges. Each of these two cases looks very simple on its own; the difficulty lies precisely in studying the intermediate states, in which some vertices hold little resource and some hold a lot.

Let us now define the notion of a resource network more formally.
#definition[
A *resource network* is a triple $"SN" = (G, D, S)$, where:
]
- $G = (V, E)$ is a directed graph whose arcs $E$ are labeled over the set $RR_+$ of non-negative real numbers, with $|V| = n$. The arc labels are best understood as analogues of capacities. Suppose some numbering of the vertices $"num": V -> overline(1"," n)$ is fixed. Then the label of the arc $e_(i j)$ is written $r_(i j)$ (with $r_(i j) = 0$ if the arc does not exist);
- $D$ is the set of (admissible) states of the dynamical system, i.e. some subset of the set $(V arrow.r med bb(R)_(+))$. Less formally: each vertex $v$ may be assigned some value from the set $bb(R)_(+)$, but, generally speaking, not all values from $bb(R)_(+)$ need be admissible. An arbitrary state from $D$ is denoted $q in D$. If the vertex numbering is considered fixed, $q$ is understood as an $n$-dimensional vector.
- $S: D -> D$ is the evolution function of the dynamical system, and $S$ is uniquely determined by $G$ and $D$. A more detailed description of the function $S$ is given below.

_Remark:_ As one can see, it is rather difficult to formulate a definition of a resource network that is invariant under the chosen vertex numbering. For ease of further exposition we therefore fix a numbering for as long as this causes no confusion.
#definition[
*Resources* $q_i (t)$ are non-negative numbers assigned to the vertices $v_i; #h(0.6em) i = overline(1"," n)$ and changing in discrete time $t$.

The *state* $q(t)$ at time step $t$ is the row vector of resource values at each vertex: $q(t) = (q_1(t), q_2(t), ..., q_(n)(t))$.
]
#definition[
The *capacity matrix* of a resource network is $R eq.def (r_(i j))_(n times n)$. In essence, this is the adjacency matrix of the graph $G$ with weights from $RR_+$.
]
#definition[
The *stochastic matrix* of a resource network:
$
R' eq.def
mat(
delim: "(",
r_11 / r_1^("out"), dots.h.c, r_(1 n) / r_1^("out");
dots.v, dots.down, dots.v;
r_(n 1) / r_n^("out"), dots.h.c, r_(n n) / r_n^("out")
),
$ <nonum>
where $bold(r_i^("out")) eq.def sum_(j=1)^n r_(i j)$.

The matrix $R'$ is called stochastic because, if one considers the Markov chain built on the same graph $G$ as the resource network, with the edge weights normalized so that the values in each row of the adjacency matrix sum to one, one obtains exactly the matrix $R'$.
]
#definition[
The *flow matrix* of a resource network:
$ F(q) eq.def min {R' dot.circle (q^T dot bold(1)^T), R}, $ <nonum>
where min is applied elementwise, $dot.circle$ is the Hadamard product, and $bold(1)$ is the column vector of ones; thus the entry $(i, j)$ of the left argument equals $r'_(i j) q_i$.

The flow from the $i$-th vertex into the $j$-th is exactly the amount of resource that moves from the $i$-th vertex to the $j$-th under the action of $S$. Note the following.
+ Resources arriving from different vertices add up.
+ If the total outgoing flow is less than the current amount of resource at a vertex, the excess does not vanish but remains at the vertex.
] <def:flow-matrix>
Thus we can, at last, define the evolution function $S$ of the dynamical system:
$ S(q) eq.def q - (F(q) dot bold(1))^T + bold(1)^T dot F(q). $ <eq:S>
The discrete dynamical system is defined in the standard way: given some initial state $q^0 in D,$ define:
$
cases(
q(0) &= q^0";",
q(t) &= S(q(t-1))"," sp t in NN.
)
$
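For concreteness, one step of the dynamics @eq:S can be computed directly from the capacity matrix. The following Python sketch is illustrative only (it is not the `Sponge-networks` code itself, and the function name is ours); it reproduces the example network whose matrices are given below:

```python
import numpy as np

def step(R: np.ndarray, q: np.ndarray) -> np.ndarray:
    """One step S(q) of the resource-network dynamics."""
    r_out = R.sum(axis=1)                       # out-capacity r_i^out of each vertex
    R_stoch = R / r_out[:, None]                # row-stochastic matrix R'
    F = np.minimum(R_stoch * q[:, None], R)     # flow: F[i, j] = min(r'_ij * q_i, r_ij)
    return q - F.sum(axis=1) + F.sum(axis=0)    # state minus outflow plus inflow

# network_1 with the state q = (8, 1, 0) shown in the figures
R = np.array([[0, 3, 1],
              [4, 1, 0],
              [2, 2, 0]], dtype=float)
q = np.array([8.0, 1.0, 0.0])
print(step(R, q))  # the total resource (9 units) is conserved
```

Vertex 0 holds more resource than its out-capacity, so it operates by rule 1 and sends exactly its edge capacities; vertices 1 and 2 operate by rule 2 and send shares proportional to $R'$.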
#definition[
Given some state $q$ of the resource network, define
$
Z^(-)(q) = {v_i in G | q_i <= r_i^("out")};\
Z^(+)(q) = {v_i in G | q_i > r_i^("out")}.
$
Clearly, $forall q in RR_(+)^n sp Z^(+)(q) union.sq.big_()^() Z^(-)(q) = V$. If a vertex belongs to $Z^(+)(q)$, we say that it *operates by rule 1*. If a vertex belongs to $Z^(-)(q)$, we say that it *operates by rule 2*.
]
#let state_1 = ("0": 8, "1": 1, "2": 0)
An example of a resource network, $"network"_1$, is shown in @fig:basic_network_1. Here the vertex set is $V = {0, 1, 2}$, and the edge labels denote the corresponding weights in the graph $G$. @fig:basic_network_2 shows a state of the network with resources #state_dict(state_1). In this visualization, vertices are drawn with different sizes depending on the amount of resource they hold. Moreover, vertices $v_i$ whose resource is at least the threshold value (i.e. $q_i >= r_i^("out")$) are colored purple, and the rest light green.
#grid(
columns: 2,
align: center,
[
#v(1em)
#figure(
caption: [Resource network $"network"_1$],
generated_image("basic_network/plot")(width: 75%),
) <fig:basic_network_1>
],
[
#figure(
caption: [State of the network $"network"_1$ at #state_dict(state_1)],
generated_image("basic_network/sim")(),
) <fig:basic_network_2>
],
)
The capacity matrix of $"network"_1$ is given in formula @eq:basic_network_R, and its stochastic matrix in formula @eq:basic_network_R_1. One can check that the values in each row of the stochastic matrix sum to one.
#grid(
columns: 2,
[
$
R = mat(
0, 3, 1;
4, 1, 0;
2, 2, 0;
)
$ <eq:basic_network_R>
],
[
$
R' = mat(
0 , 0.75, 0.25;
0.8 , 0.2 , 0 ;
0.5 , 0.5 , 0 ;
)
$ <eq:basic_network_R_1>
],
)
The form of the flow matrix definition @def:flow-matrix allows a more transparent explanation of how a resource network operates. If we assume that in some state all vertices operate by rule 1 (i.e. $Z^+(q) = V$), then from @eq:S and definition @def:flow-matrix we obtain
$ S(q) = q - (R dot bold(1))^T + bold(1)^T dot R = q + bold(1)^T (R - R^T). $ <nonum>
In other words, in this situation the model resembles a flow model @Goldberg1989: the flow depends not on the current state but only on the properties of the graph itself. Conversely, if we assume that all vertices operate by rule 2 (i.e. $Z^-(q) = V$), the model turns out to be linear:
$ S(q) = q dot R'. $ <eq:Markov>
Since the matrix $R'$ is stochastic (i.e. the values in each row sum to 1), the model @eq:Markov is analogous to a Markov chain with continuous state space and discrete time @Dynkin1965.
Note that "most" resource networks behave in the limit largely like Markov chains. A detailed and comprehensive description of the behavior of ordinary resource networks is given in @ЖИЛЯКОВА2013.
== The resource network model with greedy vertices

This modification of the ordinary resource network model was proposed by N. V. Chaplinskaya in @Жилякова2021 and subsequently studied in @Чаплинская2021 @Чаплинская2021a. The essence of the model is as follows: a vertex that has a loop first gives its resource to the loop, and only distributes the remainder according to the ordinary law of resource network operation. Such vertices are "thrifty", trying first to "set aside" resource for themselves and only then to distribute it among their neighbors; hence their name.

One interesting property of such a network is that, with a sufficiently small amount of resource, under certain additional reachability conditions on the graph, and for certain initial distributions of resource over the vertices, a so-called "halt" of the network occurs: 100% of the resource ends up concentrated in the greedy vertices, while no vertex can give resource anywhere except its loop (since it does not have enough resource). The flow in the network thus turns out to be zero (except for the loops, into which all the resource of their vertices keeps flowing). Naturally, the dynamical system is stabilized in the sense that $S(q) = S^2(q)$. An example of a halting network is shown in @fig:stop_network_2. A more detailed description of the situations in which networks exhibit such behavior is given in @Чаплинская2021 (for example, Proposition 5).
#figure(
caption: [Сеть в остановившемся состоянии],
generated_image("stop_network/sim2")(),
) <fig:stop_network_2>
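Правило функционирования жадной вершины удобно пояснить схематичным кодом. Ниже -- упрощенный набросок одного такта работы такой сети на Python (представление сети словарями и имена функций условны и не связаны с какой-либо конкретной реализацией): вершина сначала отправляет ресурс в петлю, а остаток распределяет по обычному закону функционирования ресурсной сети.

```python
def greedy_step(q, edges):
    """Один такт ресурсной сети с жадными вершинами.

    q     -- словарь {вершина: ресурс},
    edges -- словарь {(u, v): вес}; петля -- ребро вида (u, u)."""
    new_q = {v: 0.0 for v in q}
    for u, res in q.items():
        out = {v: w for (a, v), w in edges.items() if a == u}
        loop_w = out.pop(u, 0.0)
        # жадная вершина: сначала петля...
        kept = min(res, loop_w)
        new_q[u] += kept
        res -= kept
        # ...затем обычное правило: полные веса ребер, если ресурса
        # хватает (правило 1), иначе пропорционально весам (правило 2)
        total = sum(out.values())
        if total > 0 and res > 0:
            scale = 1.0 if res >= total else res / total
            for v, w in out.items():
                new_q[v] += w * scale
            res -= min(res, total)
        new_q[u] += res  # неотданный остаток остается в вершине
    return new_q
```

На таком наброске легко воспроизвести и описанную выше "остановку": если ресурса в жадной вершине меньше веса ее петли, весь поток, кроме потока в петлю, оказывается нулевым, и состояние сети не меняется.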
= РЕЗУЛЬТАТЫ
== Губковые сети
Настоящее исследование посвящено *губковым сетям*. Относить к губковым сетям мы будем ресурсные сети с жадными вершинами, имеющими специфическую топологию. А именно, будем рассматривать графы, являющиеся _регулярными_ (определим "регулярность" более подробно ниже), при этом у таких графов будут выделены "верх" и "низ", т.е. множество вершин, в которые в начальный момент времени будет класться ресурс, и множество вершин, в которые этот ресурс может стекать. При этом подразумевается наличие некоторой ориентации: протекание ресурса "сверху вниз" должно идти быстрее, чем "снизу вверх". Более того, в целях более точной геометрической интерпретации модели можно было бы считать, что графы губковых сетей планарны, т.е. допускают вложение в $RR^2$. Однако, как будет показано в дальнейшем, такое ограничение является слишком сильным. Пример губковой сети ($"sponge_network"_1$) приведен на @fig:some_sponge_network_1. Верхними вершинами в $"sponge_network"_1$ являются ${(i, 2) | i in overline(0"," 4)}$, а стоковыми ${(i, -1) | i in overline(0"," 4)}$.
На данном примере можно показать, откуда проистекает название губковых ресурсных сетей. Можно представить, что данный граф моделирует некоторую двухмерную губку с решетчатой внутренней структурой (в данном случае, решетка квадратная). В начальный момент времени на губку капают некоторой жидкостью сверху (на верхние вершины). Затем жидкость просачивается через губку, дотекая донизу, и вытекает из губки (в стоковые вершины). Мы рассматриваем также вариант модели, в которой нет стоковых вершин (например, как на @fig:some_sponge_network_without_sinks_1). Граф "губки" при этом (по крайней мере, если убрать стоковые вершины) является сильно связным. Это сделано для моделирования капиллярного эффекта: жидкость может распространяться из одного кусочка губки во все соприкасающиеся с ним куски, в том числе и вверх. Заметим, что веса ребер, идущих "сверху вниз", заметно больше, чем веса ребер, идущих "снизу вверх" ($5 >> 1$), что подкрепляет физическую аналогию.
#figure(
caption: [Пример губковой сети $"sponge_network"_1$],
// placement: horizon,
generated_image("some_sponge_network/plot")(width: 100%),
) <fig:some_sponge_network_1>
#figure(
caption: [Губковая сеть $"sponge_network"_1$, в которой убраны стоки],
// placement: bottom,
generated_image("some_sponge_network_without_sinks/plot")(width: 100%),
) <fig:some_sponge_network_without_sinks_1>
Губковая сеть в некотором роде есть продолжение модели распространения загрязнения в водной среде, описанной в @Жилякова2011. В ней распространение загрязнения моделировалось растеканием ресурса в сети, заданной прямоугольной решеткой, в которой пропускные способности соответствовали силе течений и скорости ветра. Губковая сеть, помимо иной интерпретации, отличается большей гибкостью: она включает большее разнообразие топологий, имеет жадные вершины, а также является открытой -- ресурс может втекать сверху и вытекать снизу.
=== Программная реализация
Для исследования свойств губковых сетей было реализовано соответствующее программное окружение на языке Python. Приложение было названо "sponge-networks", оно имеет открытый исходный код, опубликованный на GitHub @Корешков2023. Лицензия проекта -- MIT.
Основная задача проекта -- предоставить пользователю доступ к надежному, удобному и свободно расширяемому интерфейсу для проведения различных исследований в области ресурсных сетей. Проект интегрирован со средой Jupyter Notebook, что позволяет получать результаты симуляций и прочих операций над ресурсными сетями в удобно читаемом и интерактивном виде. Пакет sponge-networks позволяет:
- создавать ресурсные сети на основе матриц, графов и списков смежности;
- модифицировать произвольным образом ресурсные сети, не нарушая внутренней целостности данных;
- проводить симуляции, задавая количество шагов, которые должна отработать система, и начальные условия. Результат симуляции хранит в себе всю необходимую информацию о состояниях и потоках сети за время симуляции;
- искать предельные состояния ресурсной сети, а также положения равновесия эргодической ресурсной сети как цепи Маркова;
- представлять симуляции в виде массивов, листов Excel и графиков;
- рисовать ресурсные сети;
- рисовать симуляции в виде ресурсных сетей с анимациями, реализованными с помощью слайдера, позволяющего визуализировать граф ресурсной сети в произвольный момент времени (@fig:some_sim_with_slider_1, @fig:some_sim_with_slider_2);
#figure(
caption: [Некоторая сеть в момент времени $t = 0$],
image_by_hand("some_sim_with_slider/1.png")(width: 70%),
) <fig:some_sim_with_slider_1>
#figure(
caption: [Та же сеть в момент времени \ $t = 49$],
placement: top,
image_by_hand("some_sim_with_slider/2.png")(width: 70%),
) <fig:some_sim_with_slider_2>
- экспортировать анимации в gif;
- создавать губковые сети на основе обычных ресурсных сетей и проводить их симуляции;
- создавать губковые сети по шаблону, указывая тип и параметры сетки, а также веса ребер по направлению. Присутствует возможность указать, создавать ли в сети стоки.
=== Типы топологий сетей
Были рассмотрены и реализованы губковые сети не только с квадратной сеткой (как на @fig:some_sponge_network_1), но и с треугольной (@fig:network_types_example_triangular), и с шестиугольной (@fig:network_types_example_hexagonal).
#figure(
caption: [Пример губковой сети с треугольной сеткой (`n_rows` $= 3$, `n_cols` $= 5$)],
placement: top,
generated_image("network_types_example/triangular")(width: 72%),
) <fig:network_types_example_triangular>
#figure(
caption: [Пример губковой сети с шестиугольной сеткой (`n_rows` $= 2$, `n_cols` $= 4$)],
placement: top,
generated_image("network_types_example/hexagonal")(width: 100%),
) <fig:network_types_example_hexagonal>
Все вышеуказанные сети могут быть построены с помощью функции `build_sponge_network`. Например, сеть на @fig:some_sponge_network_1 была построена так, как приведено на @lst:build_sponge_network_ex[листинге].
Из приведенного вызова функции можно видеть, какие параметры можно задавать для сети: тип сети `grid_type` (`"grid_2d"` -- квадратная сетка, `"triangular"` -- треугольная, `"hexagonal"` -- шестиугольная); количество "строк" `n_rows`; количество "столбцов" `n_cols`; видимую длину ребер, ведущих в стоковые вершины `visual_sink_edge_length` и особенности сети `layout`. Последний параметр включает в себя описание весов ребер в сети, а также флаг, указывающий, создавать ли стоковые вершины.
#figure(
caption: [Пример создания губковой сети \ с помощью функции `build_sponge_network`],
// placement: top,
code(
```python
build_sponge_network(
grid_type="grid_2d",
n_cols=4,
n_rows=3,
layout={
"weights_horizontal": 3,
"weights_up_down": 5,
"weights_down_up": 1,
"weights_loop": 1,
"weights_sink_edge": 1,
"generate_sinks": True,
},
visual_sink_edge_length=0.7,
)
```
)
) <lst:build_sponge_network_ex>
#pagebreak()
== Симметричность губковых сетей
Как можно видеть, губковые сети -- в отличие от ресурсных сетей вообще -- имеют довольно симметричную структуру. Попробуем определить более формально, что это означает. Для начала, вслед за Грюнбаумом @Gruenbaum1977, определим замощения плоскости.
#definition[@Gruenbaum1977][
Пусть $cal(F)$ -- некоторое разбиение плоскости на измеримые множества ненулевой меры. Назовем $"Sym"(cal(F)) eq.def ("Trans"(RR^2) dot $ \ $dot O(2)) sect {phi in (RR^2 -> RR^2) | forall d in cal(F) sp phi(d) in cal(F)}$ *группой симметрий этого разбиения*, где $"Trans"(RR^2) dot O(2)$ -- группа движений евклидовой #box[плоскости @Винберг2019[стр. 164]]. Иначе говоря, это такой набор движений плоскости (вращений, отражений и параллельных переносов), который переводит все множества из $cal(F)$ в (возможно, другие) множества из $cal(F)$. Очевидно, что #box[$"Sym"(cal(F))$ -- группа].
]
#definition[@Gruenbaum1977][
*Замощением плоскости* называется разбиение плоскости на множество многоугольников $cal(F)$, при этом "типов" многоугольников конечное число, в том смысле, что множество, состоящее из орбит всех элементов множества $cal(F)$ под действием группы $"Sym"(cal(F))$ -- иначе говоря, $quo(cal(F),"Sym"(cal(F)))$ -- конечно.
]
#definition[@Gruenbaum1977][
Замощение плоскости называется *регулярным*, если группа $"Sym"(cal(F))$ действует на $cal(F)$ транзитивно, т.е. $|quo(cal(F), "Sym"(cal(F)))| = 1$.
]
Известно (@Gruenbaum1977), что регулярных замощений плоскости всего 3: прямоугольное, треугольное и шестиугольное.
Пусть $G$ -- бесконечный локально конечный (т.е. $|V| = aleph_0$, но $forall v in $ \ $in V |"adj"(v)| < infinity$) планарный неориентированный граф, а $"emb"(G)$ -- вложение этого графа в плоскость. Скажем, что $G$ определяет разбиение плоскости следующим образом: рассмотрим множество $RR^2 \\ "emb"(G)$. Скажем, что $cal(F)$ есть множество связных компонент $RR^2 \\ "emb"(G)$ (либо же замыканий этих связных компонент, если нам удобно рассматривать множества с границей). Без ограничения общности, можно считать, что $cal(F)$ состоит из многоугольников. Будем считать, что не составит труда привести такие графы и их вложения, чтобы соответствующие им разбиения были треугольным, прямоугольным или шестиугольным замощением плоскости; при этом такие вложения будут максимальными для соответствующих графов. Назовем эти графы и вложения *каноническими*.
Губковые сети, представленные до сих пор, могут быть рассмотрены как подграфы канонических графов для соответствующих регулярных замощений плоскости. Конечно, для проведения такого соответствия $cal(I)$, переводящего губковую сеть в подмножество соответствующего разбиения, нужно откинуть веса ребер у губковой сети, а также превратить ее граф из ориентированного в неориентированный. Именно в этом смысле графы губковых сетей являются регулярными. Однако следует заметить, что сужение $"Sym"(cal(F))$ на образ $cal(I)$ (т.е. $"Im"(cal(I))$) не является группой, так что указанная симметрия замощения на губковые сети не работает. Именно поэтому провести параллель между симметриями губковых сетей и симметриями регулярных замощений плоскости проблематично. Нам придется прибегнуть к анализу симметрий непосредственно губковых сетей, без привязки к замощениям плоскости.
Сделаем теперь комментарий насчет количества "строк" и "столбцов", характеризующих губковую сеть и указываемых при создании конкретного объекта (как на @lst:build_sponge_network_ex[листинге]). Считать строки и столбцы следует не по вершинам, а по многоугольникам в $"Im"(cal(I))$. Так, число этих многоугольников равно в точности `n_rows` $dot$ `n_cols`, в чем можно убедиться, взглянув на @fig:network_types_example_triangular и @fig:network_types_example_hexagonal.
Как было сказано выше, губковые сети не обладают симметрией, если их рассматривать как часть замощения плоскости. Однако более продуктивным оказывается рассмотрение их группы симметрий как графов.
#definition[
Пусть дана некоторая ресурсная сеть $"RN"$. Отображение $Phi : "RN" -> "RN"$ будем называть *автоморфизмом ресурсной сети*, если оно является автоморфизмом графа $G$, соответствующего этой сети, которое сохраняет, при этом, веса ребер, т.е. $forall e in E sp w(e) = w(Phi(e))$, где $w(e)$ -- вес ребра $e$, а также переводит верхние вершины в верхние.
!! *Группой автоморфизмов ресурсной сети* $"RN"$ назовем множество всех автоморфизмов данной сети $Aut("RN")$, в котором композиция функций выступает в качестве бинарной операции, наделяя указанное множество групповой структурой.
] <def:rn_aut>
#claim[
Автоморфизм губковой сети переводит стоковые вершины в стоковые.
] <claim:auto>
Утверждение @claim:auto очевидно следует из того, что список смежности вершины является инвариантом графа, а вершины без исходящих ребер все суть стоковые.
Пусть $m in NN$ -- количество столбцов в сети, а $n in NN$ -- количество строк. Будем теперь иметь в виду, что вершины в губковых сетях допускают нумерацию парой целых чисел, как демонстрировалось на приведенных выше рисунках. При этом вершина с наименьшими индексами $(x, y)$ находится "слева снизу", а с наибольшими -- "справа сверху". Стоковые вершины нумеруются индексами $(x, -1)$. Верхние вершины в прямоугольной и треугольной сетях обладают индексами $(i, n)$, а в шестиугольных $(i, 2 n + 1)$ (в шестиугольных вершины с индексами $(i, 2 n)$ верхними не являются).
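Описанную нумерацию удобно зафиксировать вспомогательной функцией. Ниже -- набросок на Python (имена условны), возвращающий $y$-индекс верхних вершин по типу сетки и числу строк:

```python
def top_row_index(grid_type: str, n_rows: int) -> int:
    """y-индекс верхних вершин: (i, n) для прямоугольной и треугольной
    сеток, (i, 2n + 1) для шестиугольной; стоки имеют индексы (i, -1)."""
    if grid_type == "hexagonal":
        return 2 * n_rows + 1
    if grid_type in ("grid_2d", "triangular"):
        return n_rows
    raise ValueError(f"неизвестный тип сетки: {grid_type}")

SINK_ROW_INDEX = -1  # y-индекс стоковых вершин
```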
#definition[
Пусть дана произвольная губковая сеть $"SN"$ (прямоугольная, треугольная или шестиугольная), а $v_1 = (x_1, y_1) in V$ и \ $v_2 = (x_2, y_2) in V$, при этом $e = (v_1, v_2) in E$. Определим наименования *типов ребер* следующим образом:
- Ребро $e$ -- *горизонтальное*, если $x_1 != x_2 and y_1 = y_2$;
- Ребро $e$ -- *"сверху вниз"*, если $y_1 > y_2$;
- Ребро $e$ -- *"снизу вверх"*, если $y_1 < y_2$;
- Ребро $e$ -- *петля*, если $v_1 = v_2$.
]
Будем исходить из того, что веса ребер в сети таковы, что:
- все ребра одного типа имеют одинаковые веса;
- вес ребер "сверху вниз" больше, чем вес ребер "снизу вверх".
Данные предположения будем называть *базовыми предположениями о губковой сети*. Все сети, продемонстрированные выше, удовлетворяют этим предположениям.
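Приведенное определение типов ребер напрямую переводится в код. Иллюстративный набросок классификатора на Python (имена условны; вершины задаются парами индексов $(x, y)$, как выше):

```python
def edge_type(v1, v2):
    """Тип ребра губковой сети по индексам вершин (x, y)."""
    if v1 == v2:
        return "петля"
    (x1, y1), (x2, y2) = v1, v2
    if y1 == y2:
        return "горизонтальное"  # x1 != x2, y1 = y2
    return "сверху вниз" if y1 > y2 else "снизу вверх"
```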
Для дальнейшего обсуждения докажем следующее полезное утверждение.
#claim[
Если $Phi in Aut(G)$, а $V' subset V$ -- инвариантное относительно $Phi$ подмножество вершин графа (т.е. $Phi(V') = V'$), то $Phi|_(V') in Aut(Gen_G (V'))$.
] <claim:aut_restriction>
#proof[
Рассмотрим произвольные вершины $v_1, v_2 in V'$. Очевидно, что $Phi((v_1, v_2)) = (Phi(v_1), Phi(v_2))$ принадлежит множеству ребер $Gen_G (V')$. Более того, $Phi$ -- биекция на $V'$. Следовательно, $Phi$ -- автоморфизм на $Gen_G (V')$.
]
#theorem[
Пусть губковая сеть $"SN"$ удовлетворяет базовым предположениям о губковой сети.
Верно следующее:
- $"SN"$ обладает осевой симметрией, т.е. $Aut("SN") tilde.equiv quo(ZZ, 2 ZZ)$, если сеть:
- прямоугольная;
- шестиугольная, при этом $m$ -- нечетное;
- треугольная, при этом $m$ -- нечетное и $m > 1$;
- $Aut("SN") tilde.equiv limits(times.big)_(i = 1)^(ceil(n "/" 2)) quo(ZZ, 2 ZZ)$, если сеть треугольная и $m = 1$;
- в остальных случаях, сеть не обладает симметрией, т.е. $Aut("SN") tilde.equiv bold(1)$.
!! При этом не важно, есть ли в сети стоки или нет.
]<th:sym>
Перед тем, как перейти к доказательству, покажем примеры сетей, о которых идет речь. На @fig:network_types_example_triangular приведен пример треугольной сетки с нечетным количеством столбцов, а на @fig:network_types_example_hexagonal -- шестиугольной с четным. Дополним указанные рисунки треугольной сеткой с четным количеством столбцов (@fig:network_types_example_triangular_sym) и шестиугольной с нечетным (@fig:network_types_example_hexagonal_sym). Также на @fig:network_types_example_triangular_single_sym показан пример треугольной сетки с единственным столбцом.
#figure(
caption: [Пример губковой сети с треугольной сеткой и одним столбцом (повернуто на $90 degree$)],
scale(
x: 100%,
y: 100%,
rotate(
reflow: true,
-90deg,
generated_image("network_types_example_sym/triangular_single")(),
),
),
) <fig:network_types_example_triangular_single_sym>
#figure(
caption: [Пример губковой сети с шестиугольной сеткой \ (`n_rows` $= 2$, `n_cols` $= 3$)],
generated_image("network_types_example_sym/hexagonal")(),
) <fig:network_types_example_hexagonal_sym>
#figure(
caption: [Пример губковой сети с треугольной сеткой (`n_rows` $= 2$, `n_cols` $= 4$). #h(1em)],
generated_image("network_types_example_sym/triangular")(),
) <fig:network_types_example_triangular_sym>
// #proof[
// #fli[
Пусть $Phi in Aut("SN")$. Разделим наш граф на "уровни" следующим образом. Обозначим за $V_tilde(n)$ множество верхних вершин сети, где $tilde(n) eq.def 2n+1$, если сеть шестиугольная, и $tilde(n) eq.def n$ в ином случае.
Далее, $V_(tilde(n)-1) eq.def {(j, tilde(n) - 1) | j in NN} sect V$ -- множество всех вершин "предпоследнего уровня". И вообще, определим $V_i eq.def {(j, i) | j in NN} sect V$ вплоть до $V_0$ (или же $V_(-1)$, если в сети есть стоки). Очевидно, $V = limits(union)_i V_i$. Докажем теперь, что $forall V_i sp Phi(V_i) = V_i$. По определению @def:rn_aut, $Phi(V_tilde(n)) = V_tilde(n)$. Предположим, что для некоторого $i$ известно, что $Phi(V_(i+1)) = V_(i+1)$. Рассмотрим произвольное ребро графа $e = (v_1, v_2)$, такое что $v_1 in V_(i+1)$, а $v_2 in V_i$. Поскольку $Phi$ -- автоморфизм, то $Phi(e) = (Phi(v_1), Phi(v_2))$. Мы предположили, что $Phi(v_1) in V_(i+1)$, а значит для вершины $Phi(v_2)$ в силу специфики рассматриваемых графов есть всего 3 варианта: $Phi(v_2) in V_(i+1)$, или $Phi(v_2) in V_(i+2)$, или $Phi(v_2) in V_i$. Первый вариант невозможен, поскольку тогда получится, что $v_2 in Phi^(-1)(V_(i+1)) = V_(i+1)$, что противоречит тому, что $v_2 in V_i$. По той же причине невозможен и второй вариант. Значит, $Phi(v_2) in V_i$.
В случае треугольной или прямоугольной сетки $V_i subset limits(union)_(v in V_(i+1)) "adj"(v)$, так что отсюда и из доказанного выше автоматически следует, что $Phi(V_i) = V_i$. В шестиугольной сети вершина $(0, 2n)$, а в случае нечетного $m$ еще и вершина $(m, 2n)$, оказываются не связанными с верхним уровнем $V_(2n+1)$. Однако анализа списков смежности этих вершин уже достаточно, чтобы понять, что при автоморфизме они не могут перейти ни в какие другие вершины: у каждой из них в соседях единственная вершина на том же уровне и единственная на уровне ниже, что подразумевает соответствующее распределение взвешенных ребер; при этом перейти в верхние вершины они не могут, поскольку, по определению, в верхние переходят только верхние. Так что в четном случае $Phi((0, 2n)) = (0, 2n)$, а в нечетном эти вершины переходят либо в себя, либо друг в друга. Таким образом, доказано, что $forall i sp Phi(V_i) = V_i$.
Оценим теперь, как могут вести себя вершины из $V_i$ под действием $Phi$. Рассмотрим (максимальный) подграф $G_i = Gen_(G)(V_i)$ графа $G$, порожденного вершинами $V_i$. В случае прямоугольной или треугольной сетки (и при $m > 1$ в случае треугольной) $G_i$ -- двунаправленная цепь. Ввиду равенства всех весов горизонтальных ребер между собой и всех петель между собой, $G_i$ допускает единственный нетривиальный автоморфизм $S_i$, "разворачивающий" цепь в обратную сторону (по аналогии с отражением $S$ в группе диэдра). Соответственно, $Aut(G_i) = $ \ $= {id_(V_i), S_i}$. Более того, $Phi|_(G_i) in Aut(G_i)$.
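Тот факт, что двунаправленная цепь с равными весами горизонтальных ребер и равными весами петель допускает ровно один нетривиальный автоморфизм, легко проверить полным перебором. Иллюстративный набросок на Python (имена условны; перебор пригоден только для маленьких графов):

```python
from itertools import permutations

def count_automorphisms(vertices, edges):
    """Полный перебор автоморфизмов взвешенного орграфа.

    edges -- словарь {(u, v): вес}; автоморфизм обязан переводить
    ребра в ребра с сохранением весов."""
    count = 0
    for perm in permutations(vertices):
        phi = dict(zip(vertices, perm))
        image = {(phi[u], phi[v]): w for (u, v), w in edges.items()}
        if image == edges:
            count += 1
    return count

# Двунаправленная цепь 0 - 1 - 2 - 3 с петлями (аналог уровня G_i)
verts = list(range(4))
chain = {(i, i): 1.0 for i in verts}   # петли равного веса
for i in range(3):
    chain[(i, i + 1)] = 3.0            # горизонтальные ребра
    chain[(i + 1, i)] = 3.0            # равного веса
# Aut(G_i) = {id, S_i}: тождественное отображение и "разворот" цепи
```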
Как ведет себя $Phi$ при попытке "связать" эти пока что независимо рассматривавшиеся части? Всего есть 4 варианта того, как ведет себя $Phi|_(Gen_(G)(V_(i+1) union V_i))$, а именно:
$
Phi|_(Gen_(G)(V_(i+1) union V_i)) in {
id, sp id_(V_(i+1)) union.sq S_i, sp S_(i+1) union.sq id_(V_i), sp S_(i+1) union.sq S_(i)
}.
$ <nonum>
Можно убедиться, что четвертый вариант действительно является автоморфизмом на $Gen_(G)(V_(i+1) union V_i)$ в прямоугольном случае или в треугольном при нечетном $m$. Если же сеть треугольная и $m$ четно, то степени крайних вершин в $V_i$ при рассмотрении графа $Gen_(G)(V_(i+1) union V_i)$ не совпадают, так что такие вершины попросту не могут переходить друг в друга; следовательно, $Aut("SN") tilde.equiv bold(1)$ в случае треугольной сети с четным числом столбцов.
Второй и третий варианты также не дают нам автоморфизма, поскольку в этом случае под действием $Phi$ рвется связь между, например, вершинами \ $(0, i+1)$ и $(0, i)$:
$
((0, i+1), (0, i)) in E, "но" (Phi((0, i+1)), Phi((0, i))) in.not E.
$ <nonum>
Однако же, если сеть треугольная и в ней присутствует всего один столбец, то наше рассуждение неверно, так как один уровень ($V_(i+1)$ или $V_i$) состоит из единственной вершины (и тогда, конечно, либо $S_(i+1) = id$, либо $S_i = id$) и можно проверить, что тогда $Phi$ действительно автоморфизм на $Gen_(G)(V_(i+1) union V_i)$.
До сих пор мы оставляли в стороне шестиугольные сети. В них $G_i$ является не цепью, а набором из пар взаимно соединенных вершин, либо же некоторыми одиночными вершинами, так что $Aut(G_i)$ достаточно насыщенна. Лучше же рассмотреть $Gen_(G)(V_(i+1) union V_i)$. В отличие от прямоугольных и треугольных сетей, здесь именно этот граф представляет собой цепь, только веса ребер, не являющихся петлями, не все равны друг другу: присутствуют как горизонтальные ребра, так и ребра "сверху вниз" и "снизу вверх". Отметим, что, по аналогии с рассмотрением сужения $Phi$ на $V_i$, $Phi|_(Gen_(G)(V_(i+1) union V_i))$ также должна быть автоморфизмом $Gen_(G)(V_(i+1) union V_i)$, поскольку $Phi(V_(i+1) union V_i) = V_(i+1) union V_i$.
Можно заметить, что $Aut(Gen_(G)(V_(i+1) union V_i)) tilde.equiv bold(1)$ если $m$ четно, так как веса на ребрах крайних вершин цепи оказываются различными. И, наоборот, можно убедиться, что при нечетном $m$ $Aut(Gen_(G)(V_(i+1) union V_i)) =$ \ $= {id_(V_(i+1) union V_i), S'_i}$, где $S'_i$ -- соответствующее отражение. Попытка "связать" $S'_(i+1)$ и $S'_i$ по аналогии с тем, что мы сделали для прямоугольной или треугольной сети выше, еще быстрее даст нам ответ \ $Aut(S'_(i+1) union S'_i) = {id, S'_(i+1) union S'_(i)}$, поскольку часть вершин у \ $Gen_(G)(V_(i+2) union V_(i+1))$ и $Gen_(G)(V_(i+1) union V_i)$ совпадает, при этом $S'_(i+1)$ и $S'_i$ совпадают на пересечении их областей определения.
Итак, мы показали, что для всех рассмотренных типов сетей (помимо треугольной сети с одним столбцом) единственный возможный нетривиальный автоморфизм -- отражение. Более того, было показано, что при объединении отражений на разных "уровнях" мы получаем корректно определенное отражение на большем подграфе графа $G$. Естественным образом доказывается, что, проделав подобную процедуру $tilde(n)$ раз, можно получить отражение $S$, определенное корректно на всем графе $G$. Таким образом, доказано, что, помимо случаев, оговоренных выше, \ $Aut("SN") tilde.equiv quo(ZZ, 2 ZZ)$.
Рассмотрим теперь случай треугольной сети с одним столбцом. Здесь "склеивание" возможно между различными функциями: как тождественными, так и отражениями. Отражений, при этом, набирается ровно столько, сколько имеется строк с двумя вершинами, а их $ceil(n "/" 2)$. Таким образом, $Aut("SN")$ является прямым произведением групп, сгенерированных отражениями на всевозможных уровнях, т.е. \ $Aut("SN") tilde.equiv limits(times.big)_(i = 1)^(ceil(n "/" 2)) quo(ZZ, 2 ZZ)$.
// ]
// ]
Наблюдение, касающееся того, какие автоморфизмы может иметь объединение двух графов, можно обобщить с помощью следующего утверждения.
#claim[
Пусть $G$ -- произвольный граф (возможно, неориентированный или же с взвешенными ребрами), при этом \ $V = V_1 union.sq V_2$. Пусть $Aut(G)(V_1) = V_1$ и $Aut(G)(V_2) = V_2$, т.е. все автоморфизмы графа $G$ переводят вершины из $V_i$ в $V_i$. Определим $tilde(V)_1$ как множество тех вершин из $V_1$, которые смежны с некоторыми вершинами из $V_2$ в графе $G$, т.е. \ $tilde(V)_1 = {v in V_1 | "adj"_G (v) sect V_2 != varnothing}$, аналогичным образом определим $tilde(V)_2$. Пусть $H_1 = Aut(Gen_G (V_1))$, $H_2 = Aut(Gen_G (V_2))$. Тогда выполняется неравенство:
$
(limits(sect.big)_(v in tilde(V)_1) Stab_(H_1)(v)) times (
limits(sect.big)_(v in tilde(V)_2) Stab_(H_2)(v)
) <= Aut(G) <= H_1 times H_2.
$
]
#proof[
#fli[
Рассмотрим произвольный автоморфизм $Phi in Aut(G)$. В силу условия на инвариантность множеств $V_i$ и утверждения~@claim:aut_restriction, $Phi|_(V_1) in H_1, Phi|_(V_2) in H_2$. Из этого можно заключить, что автоморфизму $Phi$ соответствует пара $(Phi|_(V_1), Phi|_(V_2))$ из прямого произведения $H_1 times H_2$, причем это соответствие инъективно, значит $Aut(G) <= H_1 times H_2$.
Докажем второе неравенство. Обозначим $N_i eq.def limits(sect.big)_(v in tilde(V)_i) Stab_(H_i)(v),$ где \ $i = overline(1"," 2)$. Пусть некоторая функция $Phi_1 in H_1$ оставляет на месте все вершины, соприкасающиеся со множеством $V_2$, т.е. $Phi_1 in N_1$. Аналогичным образом, пусть $Phi_2 in N_2$. Пара функций $(Phi_1, Phi_2)$ принадлежит, с одной стороны, группе #box[$N_1 times N_2$] по определению последней. С другой стороны, с каждой такой парой $(Phi_1, Phi_2)$ можно сопоставить функцию $Phi: G -> G$ следующим образом. Пусть #box[$Phi(v) = Phi_1(v)$], если $v in V_1$ и $Phi(v) = Phi_2(v),$ если $v in V_2$. На ребрах же ее действие определим так. Если $v_1, v_2 in V_1$, то $Phi((v_1, v_2)) = Phi_1((v_1, v_2))$; аналогично с множеством $V_2$. Если же $v_1 in V_1, v_2 in V_2$, то пусть $Phi((v_1, v_2)) = (v_1, v_2)$; аналогично с $v_1 in V_2, v_2 in V_1$.
Несложно понять, что $Phi$ -- биекция. Докажем, что $Phi in Aut(G)$. Если $v_1, v_2 in V_1$, то
$
Phi((v_1, v_2)) = Phi_1((v_1, v_2)) = (Phi_1(v_1), Phi_1(v_2)) = (
Phi(v_1), Phi(v_2)
);
$ <nonum>
аналогично с $v_1, v_2 in V_2$. Предположим, $v_1 in V_1, v_2 in V_2$. Если #box[$(v_1, v_2) in E$], то тогда $v_1 in tilde(V)_1, v_2 in tilde(V)_2$ и, согласно определению $Phi_1$ и $Phi_2$, $Phi(v_1) = v_1, Phi(v_2) = v_2$, так что действительно оказывается, что $Phi((v_1, v_2)) = (v_1, v_2) = (Phi(v_1), Phi(v_2))$. Напротив, если $(v_1, v_2) in.not E$, то нового ребра не появится, поскольку множества $V_1 \\ tilde(V)_1, sp V_2 \\ tilde(V)_2$ также инвариантны относительно $Phi$. Симметричные рассуждения можно провести и для $v_1 in V_2, v_2 in V_1$.
Таким образом, показано, что $Phi in Aut(G)$ и, соответственно, #box[$N_1 times N_2 subset Aut(G)$]. Наши выкладки следует дополнить тем фактом, что отображение $mu$, ставящее в соответствие каждой паре функций #box[$(Phi_1, Phi_2) in N_1 times N_2$] элемент #box[$Phi in Aut(G)$], сохраняет групповую операцию: $mu(Phi'_1 compose Phi_1, Phi'_2 compose Phi_2) = mu(Phi'_1, Phi'_2) compose$ \ $compose mu(Phi_1, Phi_2)$. Получается, что $mu$ -- инъективный гомоморфизм групп, а значит #box[$N_1 times N_2 <= Aut(G)$].
]
]
В определении автоморфизма сказано о необходимости того, чтобы верхние вершины переходили в верхние при отображении. На самом деле, об этом можно не говорить явно, поскольку это требование автоматически выполняется. По аналогии с тем, как было доказано, что в шестиугольной сети вершины $(0, 2n)$ и $(m, 2n)$ не могут перейти ни в какие другие вершины, кроме как друг в друга, здесь можно также проанализировать списки смежности верхних вершин и сравнить их со списками смежности остальных вершин.
Отметим еще одну деталь, касающуюся базовых предположений о губковой сети.
#claim[
Предположим, что, в нарушение базовых предположений о губковой сети, веса "сверху вниз" и "снизу вверх" в сети $"SN"$ совпадают. Предположим также, что в сети нет стоковых вершин. Пусть сеть треугольная или прямоугольная и, дополнительно, в случае треугольной сети количество строк четно ($n = 2k$), а в случае прямоугольной $m != n$. В данном случае сеть будет иметь еще один автоморфизм $Phi: Phi((i, j)) =$ \ $= (i, 2 ceil(n "/" 2) - j)$ и, соответственно, $Aut("SN") tilde.equiv quo(ZZ, 2 ZZ)$ в случае треугольной сети и $Aut("SN") tilde.equiv (quo(ZZ, 2 ZZ))^2 = D_2$ (вторая группа диэдра, полная группа симметрий прямоугольника) в случае прямоугольной сети.
]
Очевидно при этом, что указанная симметрия меняет местами "верх" и "низ" сети. Это выходит за рамки нашей модели, поэтому и были сформулированы базовые предположения о губковой сети.
== Фактор-сети
До сих пор мы говорили о губковых сетях как о планарных графах. Данное представление, как было показано, довольно сильно ограничивает нас в возможностях использования симметрий для анализа губковых сетей. Попробуем расширить наши возможности, убрав требование планарности и пополнив наш инструментарий таким преобразованием как факторизация.
Понятие факторграфа, однако, применимо лишь к графам в их базовом понимании, а губковая сеть есть в первую очередь взвешенный граф. Если рассмотреть вершины $esq(u)$ и $esq(v)$ из определения @def:quotient-graph, то, в общем случае, может оказаться, что для них существует несколько разных $u$ и $v$, связанных ребрами в исходном графе, при этом эти ребра могут иметь разные веса. Какой будет вес у $(esq(u), esq(v))$? На этот вопрос можно отвечать по-разному. Мы введем сначала операцию, которая позволяет сделать фактор-сеть из любого отношения эквивалентности, а потом рассмотрим ее частный случай, обладающий более интересными свойствами.
// TODO: сделать что-нибудь с D, оно нам вообще надо?
#definition[
Пусть $"RN"$ -- произвольная ресурсная сеть (возможно, с жадными вершинами), а "$~$", как и в определении @def:quotient-graph, произвольное отношение эквивалентности на вершинах соответствующего графа $G$. Определим *факторизованную ресурсную сеть* (*фактор-сеть*) $quo("RN", ~) eq.def (G', D', S')$ по отношению эквивалентности "$~$" как ресурсную сеть, где:
// - Граф $G "/" ~$ -- факторграф графа $G$ по отношению $~$;
// Пусть $u, v in V$ и, соответственно, $esq(u), esq(v) in quo(V, ~)$.
- Ребра $E'$ и веса $w'$ графа $G'$ определяются следующим образом. \ $forall esq(u) in quo(V, ~), forall esq(v) in quo(V, ~)$:
- $(exists u in esq(u) : (u, u) in E) <=> (esq(u), esq(u)) in E';$ тогда
$w'((esq(u), esq(u))) = "avg"{w((u, u)) | (u, u) in E and u in esq(u)}.$
Здесь $"avg" X$ есть среднее арифметическое элементов множества чисел $X$;
- $(exists u in esq(u), exists v in esq(v) : (u, v) in E) <=> (esq(u), esq(v)) in E'$; тогда
$
w'((esq(u), esq(v))) = "avg"{
w((u, v)) | (u, v) in E and u in esq(u) and v in esq(v)
}.
$ <nonum>
- Функция эволюции динамической системы $S'$ определяется согласно правилам работы ресурсной сети (возможно, с жадными вершинами), соответствующей графу $G'$.
] <def:quotient-network>
Как можно видеть, так же, как и в случае обычного факторграфа, при факторизации ресурсной сети не появляется никаких "новых" ребер в том смысле, что количество ребер в факторизованной сети не больше, чем в исходной. Отметим также, что данное определение не согласуется с определением обычного ориентированного факторграфа. Действительно, если представить себе некоторый граф $G$ без петель, в котором отождествлены смежные вершины $u$ и $v$, то в $quo(G, ~)$ появится петля $(esq(u), esq(u)) = (esq(v), esq(v))$. Напротив, если представить себе подобную ситуацию с ресурсной сетью $"RN"$, то новых петель в ходе факторизации не появится. Такая разница введена для того, чтобы различить разные по своей сути классы ребер в ресурсной сети с жадными вершинами: петли и все остальные ребра. Действительно, если бы мы разрешили отождествлять вершины без петель так, чтобы в результате получалась "жадная" вершина, это бы заметно изменило качественные свойства сети. Впрочем, мы будем рассматривать несколько иные ситуации, подпадающие под определение @def:ideal-quotient.
#definition[
Пусть $"RN"$ -- произвольная ресурсная сеть (возможно, с жадными вершинами). Отношение эквивалентности на множестве вершин сети "$~$" назовем *идеальным* (и, соответственно, получившуюся фактор-сеть $quo("RN", ~)$ назовем *идеально факторизованной*), если $forall esq(u) in quo(V, ~)$, $forall esq(v) in quo(V, ~)$ выполняются следующие условия:
- $forall u', u'' in esq(u) sp ((u', u') in E and (u'', u'') in E and w((u', u')) = w((u'', u'')))$;
- $forall u', u'' in esq(u), forall v', v'' in esq(v) ((u', v') in E and (u'', v'') in E => $ \ $=> w((u', v')) = w((u'', v'')))$.
!! Иначе говоря, всякий раз, когда в определении @def:quotient-network встречается выражение $"avg" X$ для некоторого $X$, $X$ есть одноэлементное множество.
] <def:ideal-quotient>
Если сеть факторизуется идеально, то вопрос о том, что делать с разными весами ребер, ведущих в отождествляющиеся вершины, снимается: все такие ребра имеют одинаковые веса. Такие факторизации будут представлять для нас основной интерес, поскольку они менее всех искажают симметрии, присутствующие в графе.
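Определение @def:quotient-network вместе с проверкой идеальности из определения @def:ideal-quotient можно схематично реализовать так (набросок на Python; представление сети словарем ребер условно и не связано с реализацией sponge-networks):

```python
from collections import defaultdict

def quotient_network(edges, cls):
    """Фактор-сеть по отношению эквивалентности.

    edges -- {(u, v): вес}, cls -- {вершина: представитель класса}.
    Петли фактор-сети порождаются только петлями исходной сети;
    ребра между разными вершинами одного класса отбрасываются
    (новых петель не появляется). Веса усредняются.
    Возвращает пару (веса фактор-сети, признак идеальности)."""
    groups = defaultdict(list)
    for (u, v), w in edges.items():
        if u == v:
            groups[(cls[u], cls[u])].append(w)
        elif cls[u] != cls[v]:
            groups[(cls[u], cls[v])].append(w)
    new_edges = {}
    ideal = True
    for key, ws in groups.items():
        # при идеальной факторизации каждое усредняемое
        # множество состоит из одинаковых весов
        ideal = ideal and len(set(ws)) == 1
        new_edges[key] = sum(ws) / len(ws)
    return new_edges, ideal
```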
#claim[
Пусть $J = {esq(v) | esq(v) in quo(V, ~) and abs(esq(v)) > 1 }$. Тогда если $forall esq(u) in J, forall esq(v) in J : esq(u) != esq(v) sp (forall u' in esq(u), forall v' in esq(v) sp "adj"_(G)(u') sect$ \ $sect "adj"_(G)(v') = varnothing)$, то выполнено второе условие из определения @def:quotient-network. Если, дополнительно, веса петель в каждом классе эквивалентности равны, то $quo("RN", ~)$ факторизуется идеально.
] <claim:simple_ideal>
Доказательство данного факта провести несложно: стоит заметить, что при выполнении условия из утверждения @claim:simple_ideal в исходном графе $G$ не окажется никакой пары ребер, ведущих из одного нетривиального класса эквивалентности (содержащего более одной вершины) в другой такой же класс, а значит условие на равенство весов ребер будет выполнено автоматически.
// cspell:disable-next-line
В качестве примера рассмотрим губковую сеть $"qn"_1$ ("$"qn"$" -- #strong[q]uotient #strong[n]etwork) с 2 строками и 4 столбцами с треугольной сеткой. Определим отношение эквивалентности на множестве вершин сети следующим образом. Отождествим 2 вершины, не имеющие общих смежных вершин, например, вершины $(0, 1)$ и $(3, 0)$, а остальные будут эквивалентны только себе. Для наглядности будем говорить об одноэлементных множествах сети как о самих элементах этих множеств. Так, вместо того, чтобы писать "${(0, 2)} in quo(V, ~)$", будем писать "$(0, 2) in quo(V, ~)$".
На @fig:qn_1_1 приведена данная сеть с начальным состоянием \ $q_0$ = #state_dict(("(0, 2)": 0, "(1, 2)": 30, "(2, 2)": 0, "(3, 2)": 0)) (остальные вершины имеют нулевое начальное состояние). Отождествленные вершины обведены красным. Данный рисунок является условным, поскольку никаких двух вершин в графе на самом деле нет, равно как и ребер, соединяющих эти вершины с другими. Такой способ изображения выбран потому, что в результате факторизации получившийся граф может не быть планарным, так что выбрать наглядное представление соответствующей сети может быть затруднительно. При данном способе отображения состояние "разделяется" отождествленными вершинами. На @fig:qn_1_2 показано состояние $"qn"_1$ в следующий момент времени. Вершина ${(0, 1), (3, 0)}$ содержит 5 единиц ресурса, так что на рисунке обе "вершины" содержат по 5 единиц ресурса.
#figure(
caption: [Пример фактор-сети в момент времени $t = 0$],
generated_image("qn_1/1")(),
) <fig:qn_1_1>
#figure(
caption: [Та же сеть в момент времени $t = 1$],
generated_image("qn_1/2")(),
) <fig:qn_1_2>
Поскольку в рассмотренной сети выполняются условия утверждения @claim:simple_ideal, то полученная факторизация сети идеальна. С точки зрения выбранного нами способа изображения это значит, что веса ребер на рисунке не меняются по сравнению с сетью до факторизации. Так, в исходной сети ребро $((1, 2), (0, 1))$ имело вес 5, значит и ребро $((1, 2), {(0, 1), (3, 0)})$ имеет вес 5, поэтому на приведенных рисунках соответствующее ребро имеет также вес 5.
== Губковые сети на цилиндре
#let eqcyl = $~_("cyl")$
Пусть дана некоторая сеть $"SN"$, удовлетворяющая базовым предположениям о губковой сети. Пусть $tilde(m) = m$, если сеть прямоугольная или шестиугольная и $tilde(m) = floor(m "/" 2)$, если сеть треугольная.
// Дополнительно, пусть $m$ -- четно, если сеть треугольная и шестиугольная.
Рассмотрим следующее отношение эквивалентности $eqcyl$ на множестве вершин сети $V$:
$
u sp eqcyl v <=> cases(delim: "[",
u = v\,,
j_1 = j_2 and (i_1\, i_2 in {0, tilde(m)})\, #[где] (i_1, j_1) = u\, (i_2, j_2) = v.
)
$
Будем называть $quo("SN", eqcyl)$ *губковой сетью на цилиндре*. Приведенную факторизацию можно рассматривать как помещение губковой сети на цилиндр: левый и правый края сети "склеиваются" в одну линию. Пример такой сети приведен на @fig:sn_on_cylinder_1. Как и прежде, вершины, которые были отождествлены \ (с какими-либо другими вершинами) обведены красным. Подчеркнем, что классов эквивалентности с числом элементов $> 1$ не 1, а 4: ${(0, 2), (3, 2)}$, ${(0, 1), (3, 1)}$, ${(0, 0), (3, 0)}$ и, наконец ${(0, -1), (3, -1)}$.
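Классы эквивалентности $eqcyl$ легко выписать программно. Ниже -- иллюстративный набросок на Python (представление вершин парами `(столбец, уровень)` и явный список уровней -- наши допущения для примера):

```python
def cylinder_classes(m_t, levels):
    """Классы эквивалентности ~cyl для сети со столбцами 0..m_t:
    на каждом уровне j столбцы 0 и m_t склеиваются в один класс."""
    classes = []
    for j in levels:
        classes.append({(0, j), (m_t, j)})
        for i in range(1, m_t):
            classes.append({(i, j)})
    return classes
```

Для сети с @fig:sn_on_cylinder_1 (`m_t = 3`, уровни от $-1$ до $2$) получаются ровно четыре неодноэлементных класса, перечисленных выше.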
#figure(
caption: [Губковая сеть с треугольной сеткой, помещенная на цилиндр],
// placement: bottom,
generated_image("cylinder_triangular_1/plot")()
) <fig:sn_on_cylinder_1>
#lemma[
Губковая сеть $quo("SN", eqcyl)$ факторизована идеально, если она удовлетворяет базовым предположениям о губковой сети.
] <lem:ideal_cylinder>
Для доказательства данной леммы воспользуемся следующим фактом.
#lemma[
Пусть $"RN"$ -- произвольная ресурсная сеть, а $~$ -- такое отношение эквивалентности на $V$, что сеть $quo("RN", ~)$ факторизована идеально. Пусть также внутри одного класса эквивалентности:
- совпадают веса всех ребер, т.е. $forall esq(u) in quo(V, ~) sp forall u_1, u_2, u'_1, u'_2 in esq(u) :$ \ $u_1 != u_2 and u'_1 != u'_2 sp ((u_1, u_2) in E and (u'_1, u'_2) in E => w((u_1, u_2)) =$ \ $= w((u'_1, u'_2)))$;
- совпадают веса всех петель.
!! В таком случае, любое измельчение разбиения $quo(V, ~)$ также факторизует сеть идеально. Под измельчением мы понимаем такое отношение эквивалентности $~'$ на $V$, что $u ~' v => u ~ v$.
] <lem:subpartition>
#proof[#fli[
#let esqt = esq.with(eqv_rel: $~'$)
Рассмотрим произвольные классы $esqt(u)$ и $esqt(v)$ и произвольные вершины внутри этих классов $u', u'' in esqt(u)$, $v', v'' in esqt(v)$.
- Поскольку веса всех петель внутри $esq(u)$ и $esq(v)$ совпадали, то совпадут они и на любом подмножестве $esq(u)$ и $esq(v)$ соответственно, в частности, на $esqt(u)$ и $esqt(v)$.
- Рассмотрим теперь ребра $(u', v')$ и $(u'', v'')$, полагая, что $u' != v' and$ \ $and u'' != v''$, а $esqt(u) != esqt(v)$.
- Если $esq(u) != esq(v)$, то $w((u', v')) = w((u'', v''))$ в силу определения @def:ideal-quotient, примененного к отношению $~$.
- Положим, что $esq(u) = esq(v)$. Тогда $w((u', v')) = w((u'', v''))$ в силу условия леммы.
Таким образом, все требования для идеальной факторизации сети $quo("RN", ~')$ выполнены.
]
]
#proof[леммы @lem:ideal_cylinder][
#let eqlev = $~_"level"$
Рассмотрим отношение эквивалентности $eqlev$, которое отождествляет все вершины уровня $V_i$ между собой, т.е. \ $(i_1, j_1) sp eqlev (i_2, j_2) <=> j_1 = j_2$. С одной стороны, все ребра, связывающие классы эквивалентности по $eqlev$ являются ребрами одного типа одновременно: либо "сверху вниз", либо "снизу вверх". С другой стороны, все ребра внутри одного уровня -- горизонтальные. Учитывая вышесказанное, заключим, что если сеть удовлетворяет базовым предположениям о губковой сети, то выполняются условия леммы @lem:subpartition. Очевидно, что отношение $eqcyl$ является измельчением разбиения $eqlev$, а значит, $quo("SN", eqcyl)$ факторизуется идеально.
]
// Помещение сети на цилиндр может быть и проинтерпретировано с помощью замощений плоскости. Положим,
// Очевидно, что помещение
Отметим, что у сети с треугольной сеткой на @fig:sn_on_cylinder_1 число столбцов до факторизации было четным. Для сравнения приведем сеть с треугольной сеткой и нечетным количеством столбцов (@fig:sn_on_cylinder_bad). Может показаться, что -- в отличие от случая с исходной сетью -- сеть на @fig:sn_on_cylinder_1 более симметрична, чем сеть на @fig:sn_on_cylinder_bad. Действительно, это так.
#figure(
caption: [Губковая сеть с треугольной сеткой и нечетным числом столбцов на цилиндре],
generated_image("cylinder_triangular_2/plot")(),
) <fig:sn_on_cylinder_bad>
#theorem[
Пусть $"SN"$ -- губковая сеть с числом столбцов $m > 1$. Пусть также, снова, $tilde(m) = m$, если сеть прямоугольная или шестиугольная и $tilde(m) = floor(m "/" 2)$, если сеть треугольная.
- Если сеть прямоугольная, то $Aut(quo("SN", eqcyl)) tilde.equiv D_(tilde(m) - 1)$, где $D_k$ -- $k$-ая группа диэдра.
- Если сеть треугольная или шестиугольная, а количество столбцов четно, то $Aut(quo("SN", eqcyl)) tilde.equiv D_(tilde(m) - 1)$.
- Если сеть треугольная или шестиугольная, но количество столбцов нечетно, то $Aut(quo("SN", eqcyl)) tilde.equiv quo(ZZ, 2 ZZ)$.
] <th:cylinder-sym>
#proof[
#fli[
Наша логика будет во многом повторять ту, что использовалась в теореме @th:sym. Для начала, предположим, что сеть прямоугольная или же треугольная с четным количеством столбцов. Рассмотрим снова уровни $V_i$ сети $quo("SN", eqcyl)$. На сей раз $Gen_G (V_i)$ представляют из себя не простые пути, а простые циклы на $tilde(m) - 1$ вершине. Известно, что группа симметрий простого цикла на $tilde(m) - 1$ вершине есть $D_(tilde(m) - 1)$. Такая группа соответствует $Aut(Gen_G (V_i))$ для каждого $i$. При попытке "соединить" указанные слои мы снова приходим к выводу, что
$
forall Phi_i in Aut(Gen_G (V_i)) sp exists! Phi_(i-1) in Aut(Gen_G (V_(i-1))): \
Phi_i union.sq Phi_(i-1) in Aut(Gen_G (V_i union V_(i-1))).
$ <nonum>
Так мы проходимся по каждой паре соседствующих слоев и устанавливаем, что, вообще, для каждого $i$ каждая симметрия $Phi_i in Aut(Gen_G (V_i))$ однозначно определяет симметрию всего графа $G$. Стало быть, $Aut(quo("SN", eqcyl)) tilde.equiv D_(tilde(m) - 1)$.
Если же сеть треугольная, но количество столбцов нечетно (например, как на @fig:sn_on_cylinder_bad), то мы сталкиваемся с тем, что количество вершин на уровне $V_i$ зависит, вообще говоря, от четности $i$: на четных уровнях вершин на одну больше, чем на нечетных. Возникает асимметрия при нечетном $i$: вершина ${(0, i), (tilde(m), i)}$ оказывается смежной сразу с тремя вершинами уровня $i-1$: ${(0, i-1), (tilde(m)-1, i-1)}, (1, i-1)$ и $(tilde(m)-2, i-1)$. Остальные же вершины уровня $i$ имеют в соседях только 2 вершины из уровня $i-1$. Очевидно, что в данном случае любой автоморфизм графа $Gen_G (V_i union V_(i-1))$ должен оставлять эту уникальную вершину на месте. Таких автоморфизмов существует ровно 2: тождественный и отражение, которое, собственно, и оставляет эту вершину на месте. Можно убедиться, что при "склеивании" уровней дальше указанное отражение все так же остается автоморфизмом сети. Таким образом, $Aut(quo("SN", eqcyl)) tilde.equiv quo(ZZ, 2 ZZ)$.
В случае шестиугольной сети доказательство проходит образом, аналогичным доказательству теоремы @th:sym с той разницей, что теперь вместо $quo(ZZ, 2 ZZ)$ фигурирует $D_(tilde(m) - 1)$.
]
]
Отметим, что понимание губковых сетей на цилиндре как буквально сетей, помещенных на цилиндр, позволяет продолжить нашу геометрическую аналогию с замощением плоскости. Напомним, что губковую сеть можно представить себе как часть регулярного замощения плоскости $cal(F)$, а отображение, переводящее губковую сеть в подмножество $cal(F)$ мы называли $cal(I)$. Теперь же мы можем поступить иначе: рассматривать не регулярные разбиения плоскости, а регулярные разбиения (бесконечного) цилиндра. Не составит труда, в таком случае, для губковой сети на цилиндре (при условии, что в исходной сети, если она была треугольная или шестиугольная, число столбцов было четно) указать другое соответствие $cal(I)'$, переводящее граф сети в подмножество разбиения на цилиндре. На сей раз, на множестве $"Im"(cal(I)')$ часть симметрий сохраняется, а именно движения "вправо" и "влево" вместе с отражениями относительно некоторых плоскостей, проходящих через ось цилиндра. Указанные симметрии и образуют группу $D_k$. Таким образом, губковые сети на цилиндре намного более похожи на регулярные замощения плоскости с точки зрения симметрий нежели обычные губковые сети. Можно добавить, что максимальное сходство проявилось бы, если бы мы рассматривали сети на торе, однако в таком случае мы потеряли бы представление о "верхних" и "нижних" вершинах, что выходит за рамки нашего рассмотрения.
С учетом приведенных геометрических соображений можно доказать следующее утверждение.
#claim[
Графы губковых сетей, помещенных на цилиндр, планарны, при условии, что в исходной губковой сети, если она была треугольная или шестиугольная, число столбцов было четно.
]
#proof[
#fli[
Для начала отметим, что губковая сеть на цилиндре суть ограниченное множество, если рассматривать ее как часть замощения (т.е.~$"Im"(cal(I)')$). Таким образом, достаточно рассмотреть только конечный цилиндр, т.е. фигуру, гомеоморфную квадрату со склеенными сторонами: #box[$C = [0, 1]^2 \/ {forall y in [0, 1] med (0, y) med ~ med (1, y)}$]. Однако такой цилиндр является, в свою очередь, гомеоморфным "кольцу" #box[$R = {(r cos phi, r sin phi) | phi in [0, 2 pi), r in [1, 2]}$]. Очевидно, что вложение графа внутрь этого кольца без пересечения ребер делает его планарным (поскольку данное кольцо есть просто подмножество вещественной плоскости). Значит и исходный граф губковой сети на цилиндре был планарным.
]
]
== Связь с динамическими системами
// TODO дополнить
Мы хотели бы связать симметрии ресурсных сетей как графов с некоторым симметричным поведением их как динамических систем. Можно смотреть на ресурсные сети как на отображения $S : RR_+^nu -> RR_+^nu$, где $nu$ -- количество вершин в сети. В установлении того, какие симметрии наличествуют у ресурсных сетей, помогает следующая теорема.
#theorem[
Пусть $"RN"$ -- ресурсная сеть с $nu$ вершинами. Будем считать, что в сети зафиксирована некоторая нумерация вершин, т.е. $V = {v_i}_(i = 1)^nu$. Пусть $Phi$ -- автоморфизм $"RN"$ как графа (определение @def:rn_aut). В таком случае $Phi$ индуцирует ее автоморфизм $Phi' : RR_+^nu -> RR_+^nu$ как динамической системы следующим образом: $forall q in RR_+^nu sp Phi'(q)_i eq.def q_j$, где $j : Phi(v_j) = v_i$.
] <th:rn_sym_with_dyn>
#proof[
#fli[
Отметим, сначала, что отображение $Phi'$ определено корректно, поскольку $Phi$ -- биекция на множестве вершин графа. Далее, заметим, что любой автоморфизм графа индуцирует некоторую перестановку его вершин. Назовем ее $sigma$, тогда $sigma(i) eq.def j : Phi(v_i) = v_j $.
// Пусть граф исходной сети -- $G$, тогда скажем, что граф сети, полученной после перенумерации вершин, есть $G_"perm"$. Применив перестановку $sigma$ к строкам и столбцам матрицы смежности $R$ получим матрицу смежности $sigma(R)$ графа $G_"perm"$. Поскольку стохастическую матрицу от матрицы пропускной способности отличает лишь сумма элементов каждой строки, то стохастическая матрица $G_"perm"$ есть $R'_"perm" = sigma(R')$.
Для произвольного вектора $v$ будем обозначать соответствующую перестановку его элементов за $sigma(v)$, для произвольной матрицы $A$ будем обозначать одновременную перестановку ее строк и столбцов снова за $sigma(A)$. Можно доказать, что для произвольных векторов $v_1, v_2 in RR^(nu times 1)$, матриц $A, B in RR^(nu times nu)$ верно:
- $sigma(v_1^T v_2) = sigma(v_1)^T sigma(v_2)$;
- $sigma(A v_1) = sigma(A) sigma(v_1)$;
- $sigma(A B) = sigma(A) sigma(B)$;
- $sigma(v_1 + v_2) = sigma(v_1) + sigma(v_2)$;
- $sigma(A dot.circle B) = sigma(A) dot.circle sigma(B)$;
- $sigma(min{A, B}) = min{sigma(A), sigma(B)}$.
В таком случае, для произвольного состояния системы $q$, с использованием определения @def:flow-matrix, получим:
$
sigma(F(q)) = sigma(min{R' dot.circle (bold(1) dot q), R}) = min{
sigma(R') dot.circle (bold(1) dot sigma(q)), sigma(R)
}.
$ <nonum>
Поскольку $Phi$ -- автоморфизм графа, то перестановка $sigma$ оставляет на месте матрицу смежности графа, т.е. $sigma(R) = R$. Более того, поскольку стохастическая матрица является функцией только матрицы смежности, то $sigma(R') = R'$. Тогда
$
sigma(F(q)) = min{
sigma(R') dot.circle (bold(1) dot sigma(q)), sigma(R)
} = \
= min{R' dot.circle (bold(1) dot sigma(q)), R} = F(sigma(q)).
$ <nonum>
Используем теперь формулу для функции эволюции ресурсной сети @eq:S:
$
sigma(S(q)) = sigma(q - (F(q) dot bold(1))^T + bold(1)^T dot F(q)) =\
= sigma(q) - (F(sigma(q)) dot bold(1))^T + bold(1)^T dot F(sigma(q)) = S(
sigma(q)
).
$ <nonum>
Последняя формула и является переформулировкой искомого. Таким образом, согласно определению @def:dyn_syst_aut, теорема доказана.
// матрицы смежности $R$ (обозначим ее $Phi(R)$), при этом $Phi(R)$ является (взвешенной) матрицей смежности графа, полученного перенумерацией вершин, соответствующей $Phi$.
]
]
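Утверждение теоремы @th:rn_sym_with_dyn несложно проверить численно на маленьком примере. Ниже -- набросок на Python с `numpy` (библиотека `sponge-networks` не используется): правило эволюции реализовано напрямую -- вершина отправляет по каждому исходящему ребру минимум из "пропорциональной доли" своего ресурса и пропускной способности ребра, а в качестве сети взят ориентированный цикл с равными весами, для которого циклический сдвиг вершин является автоморфизмом графа.

```python
import numpy as np

def evolve(R, q):
    """Один шаг ресурсной сети: вершина i отправляет min(R'_ij * q_i, R_ij)."""
    out = R.sum(axis=1)
    Rp = R / out[:, None]                # стохастическая матрица R'
    F = np.minimum(Rp * q[:, None], R)   # матрица потока F(q)
    return q - F.sum(axis=1) + F.sum(axis=0)

# ориентированный 3-цикл с равными весами; сдвиг p -- автоморфизм графа
R = np.array([[0., 2., 0.],
              [0., 0., 2.],
              [2., 0., 0.]])
q = np.array([5., 1., 0.])
p = np.array([1, 2, 0])

print(np.allclose(evolve(R, q[p]), evolve(R, q)[p]))  # True
```

Равенство `evolve(R, q[p]) == evolve(R, q)[p]` -- это в точности $sigma(S(q)) = S(sigma(q))$ для перестановки, индуцированной сдвигом.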
Обозначим группу автоморфизмов ресурсной сети $"RN"$ как графа $Aut_"graph" ("RN")$, а группу автоморфизмов ее как динамической системы -- $Aut_"dyn" ("RN")$.
#block(breakable: false)[
#corollary[
Для произвольной ресурсной сети $"RN"$ верно
$ Aut_"graph" ("RN") <= Aut_"dyn" ("RN"). $ <nonum>
]
]
#remark[
Из вышесказанного еще не следует, что симметрии ресурсной сети как динамической системы ограничиваются симметриями соответствующего графа. Для получения ответа на этот вопрос требуется проведение дополнительного исследования.
]
Все наши рассуждения, проведенные для ресурсной сети, верны и для ресурсной сети с жадными вершинами, и для губковой сети. Благодаря этому мы можем получить занимательные результаты, касающиеся поведения губковых сетей. Для того, чтобы продемонстрировать практическую ценность наших результатов, докажем небольшое вспомогательное утверждение.
#claim[
Пусть дана произвольная динамическая система \ $cal(S) = (X, S)$, $Aut(cal(S))$ -- группа ее автоморфизмов, $x in X$ -- произвольное состояние, а $Stab(x)$ -- стабилизатор состояния $x$ относительно действия группы $Aut(cal(S))$ на $X$, т.е. $Stab(x) = {Phi in Aut(cal(S)) | Phi(x) = x}$. Тогда
$
forall n in NN sp Stab(x) subset Stab(S^n (x)).
$ <nonum>
!! Иначе говоря, траектория, соответствующая начальному состоянию $x$, является целиком "самосимметричной", если начальное состояние было "самосимметричным".
] <claim:sym_traj>
#proof[
Возьмем произвольную $Phi in Stab(x)$, тогда $Phi(S(x)) = S(Phi(x)) = S(x)$, значит $Stab(x) subset Stab(S(x))$. Аналогичным образом доказывается, что $Stab(S(x)) subset Stab(S^2 (x))$. И вообще, по индукции доказывается $forall n in NN sp Stab(S^n (x)) subset Stab(S^(n+1) (x))$, из чего и следует искомое.
]
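Свойство из утверждения @claim:sym_traj можно продемонстрировать и на совсем игрушечной динамической системе (сама система и способ действия перестановок выбраны нами исключительно для иллюстрации): состояние -- кортеж значений в вершинах треугольника, а $S$ заменяет значение в каждой вершине средним значений ее соседей. Такое $S$ эквивариантно относительно всех перестановок трех вершин, поэтому утверждение применимо.

```python
from itertools import permutations

def S(x):
    # значение в каждой вершине цикла заменяется средним значений соседей
    n = len(x)
    return tuple((x[(i - 1) % n] + x[(i + 1) % n]) / 2 for i in range(n))

def stab(x, group):
    # стабилизатор состояния x относительно действия перестановок
    return {p for p in group if tuple(x[i] for i in p) == x}

group = set(permutations(range(3)))  # все автоморфизмы треугольника
x = (1.0, 1.0, 4.0)
print(stab(x, group) <= stab(S(x), group))  # True
```

Здесь $"Stab"(x)$ состоит из тождественной перестановки и перестановки первых двух вершин; обе они остаются в стабилизаторе и после применения $S$.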
// С другой стороны, если $Phi in Stab(S(x))$, то $Phi(S(x)) = S(x)$ и одновременно $Phi(S(x)) = S(Phi(x))$, следовательно $$
#remark[
Заметим, что обратное включение в утверждении @claim:sym_traj, в общем случае, неверно. Это зависит от того, является ли $S$ биекцией. В литературе по эргодической теории (например, в @Walters2000) в определении динамической системы требуется, чтобы $S$ была биекцией. К сожалению, функция эволюции ресурсной сети не является биективной. Более того, она не является сюръективной. Например, в сети на @fig:basic_network_1 невозможно указать состояние, применение к которому функции $S$ дало бы $(10, 0, 0)$. В некоторых случаях $S$ не является и инъективной. Например, в некоторой окрестности нуля функция $S$ линейна, но при этом возможны такие топологии графа ресурсной сети, что стохастическая матрица оказывается вырожденной. Тогда $S$ будет неинъективна.
// #figure(
// caption: [Ресурсная сеть, функция эволюции которой не является инъективной],
// generated_image("noninjective_network/plot")(width: 35%)
// ) <fig:noninjective_network>
]
В качестве примера, когда стабилизатор "расширяется" под действием $S$, приведем следующую динамическую систему $cal(S)$. Пусть $X$ -- множество конечных последовательностей из натуральных чисел. Например, $(1, 4, 2) in X$ или же $(3, 3, 7, 4, 5) in X$. Пусть $S$ "забывает" последний элемент последовательности, то есть $S : (x_1, x_2, ..., x_(n-1), x_n) arrow.bar (x_1, x_2, ..., x_(n-1))$, а $S((x_1)) = (x_1)$. Можно показать, что любой автоморфизм $Phi$ данной динамической системы однозначно определяется набором функций ${f_i}_(i = 1)^infinity$, где $f_i : NN^i -> NN$, а $Phi : (x_1, x_2, ..., x_n) |->$ \ $|-> (f_1 (x_1), f_2 (x_1, x_2), ..., f_n (x_1, ..., x_n))$. Рассмотрим произвольный элемент \ $a in X, a = (a_1, ..., a_k)$. Каждый автоморфизм $Phi_a in Stab(a)$ характеризуется тем, что для него $forall i in overline(1\, k) sp f_i (a_1, ..., a_i) = a_i$, дополнительных ограничений на $f_i$ не накладывается. Образ $a$ под действием $S$ суть $S(a) = (a_1, ..., a_(k-1))$. Повторяя приведенные выше рассуждения получим, что $Phi_S(a) in Stab(S(a)) : forall i in overline(1\, k-1) sp f_i (a_1, ..., a_i) = a_i$. Очевидно, что $exists Phi in Stab(S(a)) : Phi in.not Stab(a)$, например $Phi : (x_1, ..., x_(k-1), x_k, ..., x_n) |->$ \ $ |-> (x_1, ..., x_(k-1), phi(x_k), ..., x_n)$, где $phi$ -- некоторая нетривиальная биекция $NN$ в себя: $phi(a_k) != a_k$. Более того, $Stab(a) subset Stab(S(a)) subset ... subset Stab(S^(k-1) (a))$, и здесь все включения являются строгими. Можно также видоизменить данный пример так, чтобы образовывалась бесконечная все расширяющаяся цепочка $Stab(a) subset Stab(S(a)) subset ...$ и чтобы
$
limits(union.big)_(i=0)^infinity Stab(S^i (a)) = Aut(cal(S)).
$ <nonum>
Несмотря на сказанное выше, мы вполне удовлетворимся полученным результатом, поскольку анализ полной группы симметрий ресурсных сетей все равно выходит за рамки наших возможностей.
#remark[
Покуда это не вызовет недопониманий, будем говорить о группе $Aut_"graph" ("RN")$ одновременно и как о действующей на граф сети, и как о действующей на динамическую систему.
]
#let state1 = state_dict((
"(0, 2)": 8,
"(1, 2)": 20,
"(2, 2)": 0,
"(3, 2)": 20,
"(4, 2)": 8,
))
Приведем теперь пример такой сети $"SN"$ и такого состояния $q^0$, что $Stab(q^0) != {id}$ (@fig:sponge_symmetrical_sim).
#figure(
caption: [Некоторая губковая сеть и ее начальное состояние \ $q^0 = #state1$],
placement: top,
generated_image("sponge_symmetrical_sim/1")(width: 80%),
) <fig:sponge_symmetrical_sim>
Согласно теореме @th:sym, $Aut_("graph")("SN") = {id, "flip"},$ где $"flip"$ -- отражение сети вдоль вертикальной оси. Несложно видеть, что $q^0$ является самосимметричным в том смысле, что $Stab(q^0) = Aut_("graph")("SN")$. Исходя из утверждения @claim:sym_traj, данная симметрия сохранится и под действием динамической системы в дальнейшем.
#figure(
caption: [Та же губковая сеть, состояние в момент времени $t = 4$],
placement: top,
generated_image("sponge_symmetrical_sim/2")(width: 80%),
) <fig:sponge_symmetrical_sim_2>
#let state2 = state_dict((
"{(0, 3), (6, 3)}": 12,
"(1, 3)": 0,
"(2, 3)": 12,
"(3, 3)": 0,
"(4, 3)": 12,
"(5, 3)": 0,
))
#figure(
caption: [Начальное состояние губковой сети на цилиндре],
placement: top,
generated_image("sponge_symmetrical_2_sim/1")(width: 80%),
) <fig:sponge_symmetrical_2_sim_1>
Так, на @fig:sponge_symmetrical_sim_2 приведено состояние сети после действия на $q^0$ системы $S$ 4 раза, т.е. в момент времени $t = 4$. Можно видеть, что симметрия сохранилась.
#pagebreak()
Обыкновенные губковые сети, как было сказано, обладают довольно небольшим количеством симметрий. Более интересные результаты можно получить, если рассмотреть губковые сети на цилиндре. На @fig:sponge_symmetrical_2_sim_1 приведен пример такой сети с прямоугольной сеткой и начальным состоянием
$ q^0 = #state2. $ <nonum>
В данном случае стабилизатор порождается отражением (любым) и поворотом цилиндра на $2 pi\/3$, т.е. смещением всех вершин сети на 2 позиции вправо.
На @fig:sponge_symmetrical_2_sim_2 показана сеть в момент времени $t = 3$. Можно видеть, что у состояния тот же стабилизатор.
#figure(
caption: [Та же губковая сеть, состояние в момент времени $t = 3$],
generated_image("sponge_symmetrical_2_sim/2")(width: 100%),
) <fig:sponge_symmetrical_2_sim_2>
= ЗАКЛЮЧЕНИЕ <nonum>
Нами были исследованы симметрии губковых сетей и связи между автоморфизмами графов и динамических систем ресурсных сетей, порождающихся этими графами. С одной стороны, мы установили, что несмотря на регулярность, губковые сети теряют много своих симметрий относительно замощений плоскости, их порождающих. Вместе с тем было дано полное описание групп автоморфизмов губковых сетей (в том числе и в случае, когда "верх" и "низ" симметричны друг относительно друга). Было введено понятие факторизации ресурсных сетей и исследованы базовые ее свойства: в каких ситуациях сеть факторизуется идеально, а в каких нет? С помощью факторизации мы смогли из обыкновенных губковых сетей строить губковые сети на цилиндре, обладающие намного большим набором симметрий. Группы автоморфизмов таких сетей также были описаны полностью. Наконец, мы установили, что каждая симметрия графа ресурсной сети порождает симметрию ее как динамической системы. Полученные утверждения можно применять для облегчения моделирования и уменьшения размера анализируемой сети.
Проведенное исследование открывает широкий простор для дальнейших изысканий. С одной стороны, можно ставить вопросы о протекании ресурса через губковую сеть, например:
- При данных параметрах сети (столбцы, строки, вес всех типов ребер), каково минимальное количество ресурса, которое может быть помещено в верхние вершины так, чтобы ресурс дотек до стоковых вершин? Каково должно быть распределение этого ресурса?
- При данном начальном распределении ресурса в верхних вершинах, какую форму будет иметь "след" данного ресурса в пределе, если ресурса не хватит на то, чтобы покрыть все "потребности" жадных вершин? Какова зависимость ширины и глубины следа от параметров сети?
С другой стороны, можно исследовать возможность обобщения полученных результатов с губковых сетей на более общие динамические модели на графах, для которых симметрии графа и динамической системы будут соотноситься так же, как и в случае ресурсных сетей. Вероятно, можно найти такое обобщение, под которое подпадут и цепи Маркова, и ресурсные сети, и другие модели, в которых состояние вершины в следующий момент времени зависит лишь от текущего ее состояния и состояния всех смежных с ней вершин.
// функционирование губковых сетей с
#[
#let nbsp = sym.space.nobreak
// HAHAHA, that's a dirty hack, but works in simple cases
#let re = regex(`#([\w\d\-_]+)\[(.+)\]`.text)
#show re: it => {
let (func, text) = it.text.match(re).captures
eval("#" + func + "[" + text + "]", mode: "markup")
}
#let re = regex(`(\d+)–(\d+)`.text)
#show re: it => {
let (l, r) = it.text.match(re).captures
[#nbsp#box[#l–#r]]
}
#set par(hanging-indent: -0.95cm)
// #show par: it => {
// v(0.6em)
// it
// }
#bibliography(
"../literature/sn_literature.bib",
title: [СПИСОК ИСПОЛЬЗОВАННЫХ ИСТОЧНИКОВ],
style: "../literature/gost-r-7-0-5-2008-numeric.csl",
)
]
= ПРИЛОЖЕНИЕ А <nonum>
Ниже представлены примеры функционала, реализованного в библиотеке `sponge-networks` (обозначается в коде как `sn`).
#figure(
caption: [Пример создания ресурсной сети из матрицы смежности с помощью модуля `networkx`, проведения одной итерации симуляции и получения результата в виде svg-изображения],
code(```python
# предполагаемые импорты (имя модуля sponge_networks -- допущение)
import networkx as nx
import numpy as np
import sponge_networks as sn

basic_network = sn.ResourceNetwork[int](
nx.from_numpy_array(
np.array(
[
[0, 3, 1],
[4, 1, 0],
[2, 2, 0],
]
),
create_using=nx.DiGraph,
)
)
sim1 = basic_network.run_simulation([8, 1, 0], n_iters=1)
img2 = basic_network.plot_with_states(
sim1, max_node_width=0.6, scale=1.1
)[0]
img2
```),
) // <lst:basic_network>
#figure(
caption: [Пример создания губковой сети на цилиндре из обыкновенной #h(2em) губковой сети и получение svg-изображения этой сети],
code(```python
nw = sn.build_sponge_network(
grid_type="hexagonal", n_cols=4, n_rows=2,
layout={
"weights_sink_edge": 1,
"weights_loop": 1,
"weights_horizontal": 2,
"weights_up_down": 5,
"weights_down_up": 1,
"generate_sinks": True,
},
visual_sink_edge_length=0.7
)
(sn
.quotient_sponge_network_on_cylinder(nw)
.quotient_network
.plot(scale=1.)
)
```),
)
https://github.com/sa-concept-refactoring/doc | https://raw.githubusercontent.com/sa-concept-refactoring/doc/main/chapters/personalReportVina.typ | typst | Contributing to an open source project like LLVM was a great experience even though the changes are not yet merged.
For me this was the first time to work on an open source project and I learned a lot while doing so.
At first it seemed to be a very difficult project to work with, but we were able to get it running quickly.
I was struggling a lot with building the project in the beginning as it took a lot of memory to do so and would end in errors.
After a few trials I decided to not close my notebook anymore until it has built completely, which worked out in the end.
Writing the code itself required a lot of trial and error as well as reading the clangd documentation.
When the refactoring feature was finally displaying in the VS Code editor it felt like a huge achievement.
The documentation part of this project was hard to get into and included a lot of procrastination.
We were struggling a lot with the structure of the documentation and what content we should include.
Jeremy and I were a good team and got along really well.
Whenever we had a disagreement, we were able to solve it in a good manner.
As Jeremy is more experienced in C++, he helped me from time to time and was much faster than me in making development progress.
He explained certain things, which was a good addition to my knowledge and was appreciated.
Our advisor was friendly and helpful at all times.
We had good discussions in our weekly meetings and I felt I was in good hands.
My personal highlight was when we first got comments on our Pull Request.
The reviewer left us some very good comments and wrote that he is looking forward to having the new refactoring feature within the project.
To conclude, it was a good project overall and a good addition to my career and I am even motivated to contribute further to the clangd language server or any other open source project.
|
= Related Work
#lorem(30)
#todo(
[
Describe the work of other research teams and noteworthy approaches related to
your work. State what is different to your solution.
+ Related literature might/should contain: theoretical foundations,
- definitions of key terms,
- technologies, techniques,
- and/or a literature review
+ Note on the size and quality of your bibliography:
- BA about 30-40 references
- MA about 60-100 references
Furthermore, check: Are the references (too) old? Did you include papers from
scientific databases, such as ACM or IEEE? Can the reader find your sources?
(e.g. check if you named the publisher for books, or specified DOIs for
scientific papers)
#v(1cm)
*Citation hints when writing in Typst:*
A single citation is possible with the \@Eco:2010 (i.e. using the _at_ sign),
such as @Eco:2010. Find in #cite(<acm:diglibrary>) and/or #cite(<ieee:xplore>) the
lateset scientific findings. For citing multiple references, just name them one
by one, such as #cite(<Alley:1998>) #cite(<Booth:2008>)
@Batina:2011
@Eco:2010
@Fernandez-Mir:2011
@Field:2003
@Google:2016a @Google:2017a
@Li:2008
@Shaw:2002
@Strunk:2000
@Wisconsin:2004
@Yin:2013
@Zobel:2004
and they will be listed within the _(_ _)_ round brackets separated by a _;_ semicolon.
If we know where a text of relevance is presented inside an article, we might
specify the page number also, see @Yin:2013[p.~7]. If you want to cite without
paranthesis, you specify _form_ as _prose_, as shown here with #cite(<Shaw:2002>, form: "prose").
],
)
|
https://github.com/kdog3682/2024-typst | https://raw.githubusercontent.com/kdog3682/2024-typst/main/src/dialogue.body.typ | typst | // #import "base-utils.typ": *
// #import "classroom.typ"
// #import "designer.typ"
#import "dialogue.header.typ": header
#import "dialogue.problemset.typ": problemset
#import "dialogue.clip.typ": body
#import "page-templates.typ"
#show: page-templates.dialogue
#let header-data = (
student-group: "ravenclaw",
title: "Multiplying Exponents",
topic: "Exponents I",
)
#let problemset-data = (
foo: "bar",
)
#header(..header-data)
#body()
// #problemset(..problemset-data)
https://github.com/mumblingdrunkard/mscs-thesis | https://raw.githubusercontent.com/mumblingdrunkard/mscs-thesis/master/src/methodology-and-results/index.typ | typst | #import "../utils/utils.typ": *
= Methodology and Results <ch:methodology-and-results>
Here, we describe our approach to gathering the results for the implementation and list these results.
The results are interpreted and discussed in the next chapter.
== Running Programs and Extracting Results
The best approach would be to run hardware accelerated simulation using FireSim, loading in representative programs like the SPEC CPU @bib:spec-cpu suite and evaluating performance after a warm-up period per program.
However, our process of synthesis fails on a cryptic error from Vivado when trying to run the flow for FireSim.
Because of time-constraints, we have decided to use the software simulator to evaluate performance.
We also do not have straightforward access to a RISC-V version of the SPEC CPU suite.
Because of these issues, we have decided to use a software simulator built with Verilator to run a small suite of programs that complete within a reasonable time on the software simulator.
=== About the Programs
The Chipyard project includes a number of test-applications to verify that a RISC-V implementation is behaving properly.
These programs are not intended for such benchmarking, but can at least serve as sanity checks that the implementation is not broken.
They are also small enough that running them to completion is feasible on the software simulator.
The programs we have available are located under `.conda-env/riscv-tools/riscv64-unknown-elf/share/riscv-tests/benchmarks`.
Although the directory is called "benchmarks", these programs are not actually _performance benchmarks_; we have nevertheless used them for rudimentary verification and performance testing.
The nice thing about these programs is that they include testing of the outputs, meaning if they pass, we can be more confident that the behaviour of the core is unchanged.
The downside of these benchmarks is that they are not intended to measure performance, and the results we see here may not be representative of real programs, which makes metrics like mean speedup close to meaningless to discuss.
The programs themselves are given short descriptions in their source code, a summary of which follows here @bib:riscv-tests.
- *vvadd* adds elements of two vectors and stores the results in a third vector,
- *median* calculates the median of each group of three elements in a vector,
- *rsort* is a sorting program,
- *spmv* performs multiplication on sparse matrices,
- *multiply* is similar to *vvadd*, but uses multiplication instead of addition,
- *qsort* is another sorting program,
- *dhrystone* is the classic dhrystone synthetic benchmark @bib:dhrystone, and
- *towers* solves the classic towers of hanoi puzzle using a recursive algorithm.
All of these programs run in "bare-metal" mode, meaning there is no virtual memory.
Virtual memory might change the results by introducing overheads such as TLB misses and page-table walks.
== Evaluating Predictor Performance
With the predictor, we are interested in a few different metrics such as coverage and accuracy.
We can also determine a form of timeliness as with L1d prefetchers.
The current predictor implementation is always timely because it always makes a prediction before the related uOP has time to reach an AGU and calculate the real address.
However, it is interesting to inspect how many cycles pass between the LSU receiving a predicted address and the LSU receiving the real address.
If there is a significant number of cases where the difference is much greater than the normal L1d access latency, it is possible to justify a more complex predictor that uses more cycles to increase coverage and accuracy.
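For instance, the prediction-to-address latency distribution can be summarised with a small helper along these lines (a sketch of our analysis, not code taken from the simulator):

```python
from collections import Counter

def latency_histogram(events):
    """events: iterable of (prediction_cycle, real_address_cycle) pairs.
    Returns a dict mapping delta-cycles to the fraction of events with that delta."""
    deltas = Counter(real - pred for pred, real in events)
    total = sum(deltas.values())
    return {d: n / total for d, n in sorted(deltas.items())}

# Hypothetical samples: two L1d-latency arrivals and two much later ones.
print(latency_histogram([(0, 3), (10, 13), (20, 27), (30, 41)]))
# {3: 0.5, 7: 0.25, 11: 0.25}
```

A long tail in such a histogram would be the justification for a slower but more accurate predictor.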
== Evaluating System Performance
With the system itself, we are mainly interested in measurements like IPC to determine the impact of the predictor on performance.
The "benchmark" programs are set up to read the `mcycle` and `minstret` CSRs before and after the test.
These registers contain the number of cycles spent running the program and the dynamic instruction count, respectively.
By subtracting the value read at the start from the value read at the end, we get a fair approximation of how the program executed and can calculate IPC as $d "minstret"\/d "mcycle"$.
Other interesting metrics would be power consumption and, similarly, performance per watt, but these are hard to gauge without working synthesis.
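As a sketch, the IPC computation from the CSR readings boils down to the following (the helper name is ours; the dhrystone numbers are taken from the output shown later in this chapter):

```python
def ipc(mcycle_start, mcycle_end, minstret_start, minstret_end):
    """Approximate IPC from the deltas of the mcycle and minstret CSRs."""
    cycles = mcycle_end - mcycle_start
    instructions = minstret_end - minstret_start
    return instructions / cycles

# The dhrystone run reports mcycle = 115126 and minstret = 196029.
print(round(ipc(0, 115126, 0, 196029), 3))  # 1.703
```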
=== Weird Printing with Doppelganger Loads
When load address prediction is enabled, the results printed to the output by the programs themselves are mangled as shown in @lst:mangled-output.
#figure(kind: raw, {
show raw: set text(size: 7pt)
grid(columns: (auto, ) * 2, inset: 5pt, [
```
Microseconds for one run through Dhrystone: 229
Dhrystones per Second: 4347
mcycle = 115126
minstret = 196029
```
],
grid.vline(),
[
```
Microseconds for one run through Dhrystone: 229
Drystones per Second: 6
mcycle = 1146
istret = 02
```
])},
caption: "Normal output with address prediction disabled (left) and mangled output with address prediction enabled (right)"
) <lst:mangled-output>
Most of the benchmarks only print the `mcycle = ...` and `minstret = ...` parts, but the Dhrystone benchmark prints a little extra information.
What is most curious about this is that the results seem to be consistent up to some point, but then start to fall apart.
We will discuss various hypotheses for why this happens in @ch:discussion.
In any case, the tests self-report passing their own checks, an unlikely event if anything were wrong with the loaded values in a single-threaded context.
Because of this, we operate under the assumption that program behaviour is unchanged and that the only thing causing problems is the way programs print their results when address prediction is enabled.
== Extracting the Results
Because we cannot rely on the printed output from the tests, we have resorted to printing the relevant information from the simulation, instead of relying on program outputs.
We run the simulator with the `--verbose` flag to allow for debug information to be output while running and set `enableCommitLogPrintf = true` in the configuration for the processor.
This tracks all committed instructions and their effects and prints them to standard output in a specific format.
An excerpt of this log is shown in @lst:commit-log-excerpt.
#figure(
kind: raw,
```
...
3 0x0000000080002ae6 (0x4505) x10 0x0000000000000001
3 0x0000000080002ae8 (0xae1ff0ef) x 1 0x0000000080002aec
3 0x00000000800025c8 (0xb0002773) x14 0x00000000000004af
3 0x00000000800025cc (0x00004797) x15 0x00000000800065cc
3 0x00000000800025d0 (0xa8c78793) x15 0x0000000080006058
...
```,
caption: "Excerpt from the commit log of a run"
)<lst:commit-log-excerpt>
The third instruction here reads `mcycle`, stores the value in register `x14`, and is encoded as `b0002773`.
This encoding would be different if a different register was used, but register `x14` is used for all these test programs.
This is convenient as we can simply search the commit log for this specific instruction and use simple scripting to extract the results.
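A minimal extraction script along these lines (the regex and field layout are inferred from the excerpt above; this is a sketch, not the exact script we used) might look like:

```python
import re

# Matches commit-log lines for the instruction encoded as 0xb0002773
# (a CSR read into x14) and captures the hexadecimal value written to x14.
CSR_READ = re.compile(r"\(0xb0002773\)\s+x14\s+0x([0-9a-f]+)")

def extract_x14_values(log_text):
    """Return all values written to x14 by the 0xb0002773 instruction."""
    return [int(m.group(1), 16) for m in CSR_READ.finditer(log_text)]

log = "3 0x00000000800025c8 (0xb0002773) x14 0x00000000000004af"
print(extract_x14_values(log))  # [1199]
```

Pairing up consecutive matches then gives the before/after CSR readings for each test.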
== System Configurations
We have gathered results for a few system configurations.
=== Predictor Configuration
The predictor is configured with 1024 slots, a very high number for these small tests, but this extreme should at least show whether the technique of doppelganger loads has potential in an unsafe execution environment like this.
=== SmallBoomConfig and MediumBoomConfig
We have used the SmallBoomConfig and MediumBoomConfig for the BOOM core which are single-width and double-width designs, respectively.
A few other structures scale with the width of the core, such as the sizes of the IQs and the number of memory ports.
=== With and Without Speculative Load Wakeup
Speculative load wakeup reduces the apparent latency of hits in the L1d to just three cycles.
Without it, loads have to be written back to the PRF and IQ slots have to react a cycle later, meaning dependent instructions cannot issue until four cycles after the address is received in the LSU.
=== With and Without Doppelganger Loads
Finally, we have tested the system with and without address prediction activated.
With these two base systems and the two on/off variables of speculative load wakeups and address prediction, we end up with 8 different configurations to gather results for.
== Results
We list the results for the different configurations.
=== Instructions Per Cycle
@tab:smallboom-ipc shows the different tests and their IPC when run on the SmallBoomConfig core configuration.
Column *base-no-spec-ld* is our baseline that we want to beat,
*base* is the normal configuration that has speculative load wakeups enabled,
*ldpred-no-spec-ld* adds address prediction to the *base-no-spec-ld* configuration, and *ldpred* uses load address prediction in combination with speculative load wakeups.
@tab:smallboom-ipc-relative lists the difference for each test from the *base-no-spec-ld* config as a percentage change.
#let smallboom = csv("./data/smallboom.csv")
#figure(
caption: "SmallBoomConfig IPC results with different configurations",
placement: none,
table(columns: (auto, ) * 5,
[*Test*], [*base-no-spec-ld*], [*base*], [*ldpred-no-spec-ld*], [*ldpred*],
..smallboom.map(((name, base-no, base, ldpred-no, ldpred)) => {
let base-no = calc.round(float(base-no), digits: 5)
let base = calc.round(float(base), digits: 5)
let ldpred-no = calc.round(float(ldpred-no), digits: 5)
let ldpred = calc.round(float(ldpred), digits: 5)
(name, base-no, base, ldpred-no, ldpred).map((v) => [#v])
}).flatten(),
)
) <tab:smallboom-ipc>
#figure(
caption: "SmallBoomConfig IPC results with different configurations as relative to the baseline without speculative load wakeups, expressed as a percentage",
placement: none,
table(columns: (auto, ) * 5,
[*Test*], [*base-no-spec-ld*], [*base*], [*ldpred-no-spec-ld*], [*ldpred*],
..smallboom.map(((name, base-no, base, ldpred-no, ldpred)) => {
let base-no = float(base-no)
let base = calc.round((float(base)/base-no - 1) * 100, digits: 5)
let ldpred-no = calc.round((float(ldpred-no)/base-no - 1) * 100, digits: 5)
let ldpred = calc.round((float(ldpred)/base-no - 1) * 100, digits: 5)
(name, 0, base, ldpred-no, ldpred).map((v) => [#v])
}).flatten(),
)
) <tab:smallboom-ipc-relative>
Similarly, @tab:mediumboom-ipc and @tab:mediumboom-ipc-relative contain the absolute IPC and relative difference to *base-no-spec-ld* for the MediumBoomConfig.
#let mediumboom = csv("./data/mediumboom.csv")
#figure(
caption: "MediumBoomConfig IPC results with different configurations",
placement: none,
table(columns: (auto, ) * 5,
[*Test*], [*base-no-spec-ld*], [*base*], [*ldpred-no-spec-ld*], [*ldpred*],
..mediumboom.map(((name, base-no, base, ldpred-no, ldpred)) => {
let base-no = calc.round(float(base-no), digits: 5)
let base = calc.round(float(base), digits: 5)
let ldpred-no = calc.round(float(ldpred-no), digits: 5)
let ldpred = calc.round(float(ldpred), digits: 5)
(name, base-no, base, ldpred-no, ldpred).map((v) => [#v])
}).flatten(),
)
) <tab:mediumboom-ipc>
#figure(
caption: "MediumBoomConfig IPC results with different configurations as relative to the baseline without speculative load wakeups, expressed as a percentage",
placement: none,
table(columns: (auto, ) * 5,
[*Test*], [*base-no-spec-ld*], [*base*], [*ldpred-no-spec-ld*], [*ldpred*],
..mediumboom.map(((name, base-no, base, ldpred-no, ldpred)) => {
let base-no = float(base-no)
let base = calc.round((float(base)/base-no - 1) * 100, digits: 5)
let ldpred-no = calc.round((float(ldpred-no)/base-no - 1) * 100, digits: 5)
let ldpred = calc.round((float(ldpred)/base-no - 1) * 100, digits: 5)
(name, 0, base, ldpred-no, ldpred).map((v) => [#v])
}).flatten(),
)
) <tab:mediumboom-ipc-relative>
=== Accuracy and Coverage
We have collected statistics for accuracy and coverage in the *ldpred-no-spec-ld* configurations for both the SmallBoomConfig and MediumBoomConfig.
@tab:smallboom-prediction-stats and @tab:mediumboom-prediction-stats show the raw data collected.
The columns are as follows:
- *N Loads* is the total number of load instructions committed,
- *Fired* is the number of predictions that were actually used,
- *Correct Fired* is the number of used predictions that were also correct,
- *Total Pred.* is the total number of predictions made, and
- *Total Correct Pred.* is the total number of correct predictions made, used or not.
@tab:smallboom-prediction-stats-scaled and @tab:mediumboom-prediction-stats-scaled show the same data with the four last columns scaled to different quantities such that:
- *Fired/N Loads* gives the coverage (what portion of loads were given doppelgangers?),
- *Correct Fired/Fired* gives the accuracy of issued doppelgangers,
- *Total Pred./N Loads* gives the coverage if all predictions were used to generate doppelgangers, and
- *Total Correct Pred./Total Pred.* gives the accuracy if all predictions were used to generate doppelgangers.
Note that the SmallBoomConfig and MediumBoomConfig sometimes differ in the number of committed load instructions.
We have verified that for the tests themselves, all configurations commit exactly the same number of instructions (`minstret` is equal for each test no matter the configuration).
The difference likely stems from the setup and teardown that surrounds each test.
All of these statistics are only collected on committed loads.
Predictions made for uncommitted loads are not counted.
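The scaled columns are plain ratios of the raw columns; as a sketch of the computation (the function and key names are ours, and the example numbers are made up for illustration):

```python
def prediction_stats(n_loads, fired, correct_fired, total_pred, total_correct):
    """Derive the coverage/accuracy ratios used in the scaled tables."""
    return {
        "coverage": fired / n_loads,                     # Fired / N Loads
        "accuracy": correct_fired / fired,               # Correct Fired / Fired
        "potential_coverage": total_pred / n_loads,      # Total Pred. / N Loads
        "potential_accuracy": total_correct / total_pred # Total Correct Pred. / Total Pred.
    }

stats = prediction_stats(1000, 400, 300, 900, 600)
print(stats["coverage"], stats["accuracy"])  # 0.4 0.75
```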
#let smallboom = csv("./data/acc-cov-smallboom.csv")
#figure(
caption: "SmallBoomConfig total loads, number of fired predictions, number of fired correct predictions, total number of predictions, and total number of correct predictions, calculated per application",
placement: none,
table(
columns: (auto, ) * 6,
[*Test*], [*N Loads*], [*Fired*], [*Correct Fired*], [*Total Pred.*], [*Total Correct Pred.*],
..smallboom.flatten()
),
) <tab:smallboom-prediction-stats>
#figure(
caption: "SmallBoomConfig total loads and number of fired predictions scaled to different quantities",
placement: none,
table(
columns: (auto, ) * 6,
[*Test*], [*N Loads*], [*Fired/N Loads*], [*Correct Fired/Fired*], [*Total Pred./N Loads*], [*Total Correct Pred./Total Pred*],
..smallboom.map(((name, a, b, c, d, e)) => {
let a = float(a)
let b = float(b)
let c = float(c)
let d = float(d)
let e = float(e)
let m = calc.round.with(digits: 4)
(name, a, m(b/a), m(c/b), m(d/a), m(e/d)).map((it) => [#it])
}).flatten()
),
) <tab:smallboom-prediction-stats-scaled>
#let mediumboom = csv("./data/acc-cov-mediumboom.csv")
#figure(
caption: "MediumBoomConfig total loads, number of fired predictions, number of fired correct predictions, total number of predictions, and total number of correct predictions, calculated per application",
placement: none,
table(
columns: (auto, ) * 6,
[*Test*], [*N Loads*], [*Fired*], [*Correct Fired*], [*Total Pred.*], [*Total Correct Pred.*],
..mediumboom.flatten()
),
) <tab:mediumboom-prediction-stats>
#figure(
caption: "MediumBoomConfig total loads and number of fired predictions scaled to different quantities",
placement: none,
table(
columns: (auto, ) * 6,
[*Test*], [*N Loads*], [*Fired/N Loads*], [*Correct Fired/Fired*], [*Total Pred./N Loads*], [*Total Correct Pred./Total Pred*],
..mediumboom.map(((name, a, b, c, d, e)) => {
let a = float(a)
let b = float(b)
let c = float(c)
let d = float(d)
let e = float(e)
let m = calc.round.with(digits: 4)
(name, a, m(b/a), m(c/b), m(d/a), m(e/d)).map((it) => [#it])
}).flatten()
),
) <tab:mediumboom-prediction-stats-scaled>
=== Prediction Timeliness
We observed that, among correct predictions, the real address arrived 3 cycles after the prediction in 11.1% of the cases.
Another 5.4% of addresses arrived 4 cycles after the prediction.
In total, only around 22% of correct predictions are compared to the real address within 10 cycles of the prediction being made.
The statistics are similar for incorrect predictions, with the real address arriving 3 cycles after the prediction in 13.4% of the cases and 19.3% arriving within 10 cycles of the prediction being made.
|
|
https://github.com/typst-community/typst-community.github.io | https://raw.githubusercontent.com/typst-community/typst-community.github.io/main/README.md | markdown | MIT License | # typst-community.github.io
🌐 typst.community website
|
https://github.com/SillyFreak/typst-packages-old | https://raw.githubusercontent.com/SillyFreak/typst-packages-old/main/crudo/docs/manual.typ | typst | MIT License | #import "@preview/tidy:0.3.0"
#import "template.typ": *
#import "../src/lib.typ" as crudo
#let package-meta = toml("../typst.toml").package
// #let date = none
#let date = datetime(year: 2024, month: 5, day: 14)
#show: project.with(
title: "Crudo",
// subtitle: "...",
authors: package-meta.authors.map(a => a.split("<").at(0).trim()),
abstract: [
_Crudo_ lets you take slices from raw blocks and more: slice, filter, transform and join the lines of raw blocks.
],
url: package-meta.repository,
version: package-meta.version,
date: date,
)
// the scope for evaluating expressions and documentation
#let scope = (crudo: crudo)
// https://github.com/Mc-Zen/tidy/issues/21
#let preamble = "set raw(block: false, lang: none);"
#let example = (..args) => {
// breakable blocks don't work
show: block.with(breakable: false)
tidy.styles.minimal.show-example(
inherited-scope: scope,
preamble: preamble,
..args
)
}
#scope.insert("example", example)
= Introduction
`raw` elements feel similar to arrays and strings in a lot of ways: they feel like lists of lines; it's common to want to extract specific lines, join multiple ones together, etc. As values, though, `raw` elements don't behave this way.
While a package can't add methods such as `raw.slice()` to an element, we can at least provide functions to help with common tasks. The module reference describes these utility functions:
- #ref-fn("r2l()") and #ref-fn("l2r()") are the building blocks the others build on: _raw-to-lines_ and _lines-to-raw_ conversions.
- #ref-fn("transform()") is one layer above and allows arbitrarily transforming an array of strings.
- #ref-fn("map()"), #ref-fn("filter()") and #ref-fn("slice()") are analogous to their `array` counterparts.
- #ref-fn("lines()") is similar to `slice()` but allows more advanced line selections in a single step.
- #ref-fn("join()") combines multiple `raw` elements and is convenient e.g. to add preambles to code snippets.
// This is a template for typst packages. It provides, for example, the #ref-fn("crudo.add()") function:
// #{
// let lines = read("../gallery/test.typ").trim().split("\n")
// lines = lines.slice(4)
// raw(block: true, lang: "typ", lines.join("\n"))
// }
= Module reference
== `crudo`
#{
let module = tidy.parse-module(
read("../src/lib.typ"),
// label-prefix: "crudo.",
scope: scope,
preamble: preamble,
)
tidy.show-module(
module,
sort-functions: none,
style: tidy.styles.minimal,
)
}
|
https://github.com/SkiFire13/master-thesis | https://raw.githubusercontent.com/SkiFire13/master-thesis/master/config/common.typ | typst | #let baseline-list(body) = {
show list.item: it => context [
#let marker = list.marker.at(0)
#let height = measure[#it.body].height
#box(height: height)[#marker #it.body]\
]
body
}
#let _real_label = label
#let environment(name, style: "italic") = {
let env_counter = counter(name)
let fig_counter = counter(figure.where(kind: name))
(subject, label: none, body) => block(inset: (y: 5pt), width: 100%)[
#context {
let env_counter_val = env_counter.get().at(0)
let head_counter_val = counter(heading).get().at(0)
if env_counter_val != head_counter_val {
env_counter.update((head_counter_val, 0))
fig_counter.update(0)
}
}
#set block(spacing: 1em)
*#name #env_counter.step(level: 2) #context env_counter.display()*
#box[#figure(none, kind: name, supplement: name, numbering: (n) => [#counter(heading).display((..nums) => nums.pos().at(0)).#n])#label]
(#subject).
#set text(style: style)
#body
]
}
#let definition = environment("Definition")
#let lemma = environment("Lemma")
#let example = environment("Example", style: "normal")
#let notation = environment("Notation")
#let theorem = environment("Theorem")
#let proof(body) = [
_Proof._ #body #h(1fr) $square$
]
#let sub = math.class("relation", sym.subset.eq.sq)
#let subn = math.class("relation", sym.subset.eq.sq.not)
#let meet = math.class("vary", sym.sect.sq)
#let join = math.class("vary", sym.union.sq)
#let up = math.class("unary", sym.arrow.t)
#let psub = math.class("binary", $scripts(sub)^and$)
#let hsub = math.class("binary", $scripts(sub)_H$)
#let phsub = math.class("binary", $scripts(sub)_H^and$)
#let phsublt = math.class("binary", $scripts(subset.sq)_H^and$)
#let lfp = math.class("unary", sym.mu)
#let gfp = math.class("unary", sym.nu)
#let tup(a) = math.bold(a)
#let range(end) = math.underline(end)
#let XX = math.bold("X")
#let syseq(x) = math.lr(sym.brace.l + block(math.equation(x, block: true, numbering: none)))
#let feq = math.scripts("=")
#let sol = math.op("sol")
#let varempty = text(font: (), sym.emptyset)
#let symmdiff = math.class("binary", sym.Delta)
#let eq-columns(..cols) = box(stack(
dir: ltr,
h(1fr),
..cols.pos().map(align.with(horizon)).intersperse(h(1fr)),
h(1fr),
))
#let mathstr(s) = s.clusters().map(s => $#s$).join()
#let Act = mathstr("Act")
#let Prop = mathstr("Prop")
#let Var = mathstr("Var")
#let tt = mathstr("true")
#let ff = mathstr("false")
#let boxx(f) = $class("unary", [ #f ])$
#let diam(f) = $class("unary", angle.l #f angle.r)$
#let sem(of) = $bracket.l.double of bracket.r.double$
#let bisim = sym.tilde.equiv
#let dom = math.op("dom")
#let rwlt = sym.prec
#let rwle = sym.prec.eq // TODO: change this to sym.prec.curly.eq in new version
#let valuation = mathstr("valuation")
#let reach = mathstr("reach")
#let subvaluation = mathstr("subvaluation")
#let maximaldistances = mathstr("maximal_distances")
#let minimaldistances = mathstr("minimal_distances")
#let w0 = $w_0$
#let l0 = $l_0$
#let w1 = $w_1$
#let l1 = $l_1$
#let us = sym.mu + "s"
#let tX = $tup(X)$
|
|
https://github.com/hongjr03/shiroa-page | https://raw.githubusercontent.com/hongjr03/shiroa-page/main/WXAPP/lab1/main.typ | typst | #import "template.typ": *
#import "@preview/tablex:0.0.8": tablex, cellx, rowspanx, colspanx, vlinex, hlinex
#import "@preview/cetz:0.2.2"
#import "/book.typ": book-page
#show: book-page.with(title: "Lab 1: My First WeChat Mini Program")

#let title = "Lab 1: My First WeChat Mini Program"
#let author = "洪佳荣"
#let course_id = "Mobile Software Development"
#let instructor = "高峰老师"
#let semester = "Summer 2024"
#let due_time = "August 19, 2024"
#let id = "22070001035"
#show: assignment_class.with(title, author, course_id, instructor, semester, due_time, id)
// #show figure.caption: it => [
// #set text(size: 9pt)
// // #v(-1.5em)
// #it
// ]
*Source code*: #link("https://github.com/hongjr03/MiniProgram")\
*Blog*: #link("https://www.jrhim.com/p/2024a/mnp1")

= Lab Objectives
<实验目的>

1. Learn how to create a mini program from the quick-start template;
2. Learn how to create a mini program manually, without a template.
= Lab Procedure
#prob[Preparation before development][
Configure the mini program information and the development environment as required.
#grid(columns: (1.5fr, 1fr))[
#figure(
image("assets/2024-08-19-18-50-21.png"),
)
][
#figure(
image("assets/2024-08-19-18-44-29.png"),
)
]
The latest stable release of WeChat DevTools is used here, version 1.06.2407110.
]
#prob[Creating a mini program from the quick-start template][
Open WeChat DevTools and choose the mini program project type; here you can select the project directory and specify the project name, AppID, project description, and other information.
#figure(
image("assets/2024-08-19-18-49-51.png", width: 60%),
)
After filling in this information, the DevTools selects a template by default; simply click Create.
#figure(
image("assets/2024-08-19-18-57-35.png", width: 80%),
)
Consistent with the lab instructions, the left side of the window shows a phone preview of the page, while the right side resembles a browser's developer tools and can display various page information. Touch gestures can be simulated with the mouse to check how the page behaves.
Clicking the real-device debugging button in the top menu makes the DevTools compile the mini program for the phone's platform and pop up a QR code; scanning this QR code with WeChat on a phone shows the mini program running on the device.
#figure(
image("assets/2024-08-19-19-00-20.png", width: 80%),
)
While the mini program is shown on the phone, the DevTools on the PC also opens a real-device debugging window where the mini program's debug information can be inspected, as shown below.
#grid(columns: (1fr, 2.6fr), column-gutter: 1em)[
#figure(
image("assets/2024-08-19-19-03-19.png"),
)
][
#figure(
image("assets/2024-08-19-19-02-17.png"),
)
]
]
#prob[Creating a mini program manually][
The first step is similar to creating from a template: as shown on the left below, fill in the mini program's basic information and click Create. To stay consistent with the lab document, cloud services are disabled and the JS basic template is chosen.
#grid(columns: (4fr, 1fr), column-gutter: 1em)[
#figure(
image("assets/2024-08-19-19-19-38.png"),
)
][
#figure(
image("assets/2024-08-19-19-20-44.png"),
)
]
Looking at the file tree on the right, the directory structure of a minimal mini program can be seen, as shown above on the right.
Next, as required:

+ remove `pages/logs/logs` from the `pages` property in app.json;
+ delete the utils folder;
+ delete the logs folder under pages;
+ empty the contents of index.wxml and index.wxss;
+ empty index.js, then auto-complete `Page({})`;
+ empty the contents of app.wxss;
+ empty app.js, then auto-complete `App({})`.

The changes above are recorded in this #link("https://github.com/hongjr03/MiniProgram/commit/18a287fbfc5d4035c5b0de5895369ba8b6671052")[commit].
The view can then be designed:

+ change the navigation bar title and background color;
+ add the WeChat avatar, nickname, and a "Tap to get avatar and nickname" button to the page;

These changes are recorded in this #link("https://github.com/hongjr03/MiniProgram/commit/5d820fef103177dbea0c8701b14aed130cf02a99")[commit]. The page then looks as follows:
#figure(
image("assets/2024-08-19-19-48-13.png", width: 80%),
)
Next comes the logic:

+ after tapping the "Tap to get avatar and nickname" button, call WeChat's wx.getUserInfo API to obtain the user's avatar and nickname;\ the Console output of this step is as follows: #figure(image("assets/2024-08-19-19-54-38.png", width: 80%))\ on the surface this looks fine, but in fact the avatar and nickname are not really obtained here.
+ display the obtained avatar and nickname on the page.

These changes are recorded in this #link("https://github.com/hongjr03/MiniProgram/commit/2089c4ce3dfe9e37a4ec93b690afd8ed8d871831")[commit]. The API actually used here is wx.getUserProfile rather than wx.getUserInfo; see #link(<问题总结与体会>)[Problems and Reflections] for the reason.
The page after the change looks as follows:
#figure(image("assets/2024-08-19-20-18-38.png", width: 80%))
For the on-device debugging results, see #link(<程序运行结果>)[Program Running Results].
]
= Program Running Results
<程序运行结果>
#grid(columns: (1fr, 1fr, 1fr))[
#image("assets/2024-08-19-20-22-46.png")
][
#image("assets/2024-08-19-20-22-57.png")
][
#image("assets/2024-08-19-20-23-03.png")
]
= Problems and Reflections
<问题总结与体会>
#cqa[When creating the mini program from the quick-start template, the interface differed from the lab document.][
The reason is that I had enabled "WeChat Cloud Development"; when cloud development is enabled, the default environment differs from the non-cloud templates. That is why, when later creating the mini program manually, I chose the template without cloud services.
]
#cqa[After modifying the code following a classmate's suggestion from the group chat, the avatar and nickname displayed correctly in the PC DevTools, but not on the phone.][
The reason is that the wx.getUserInfo API has been deprecated in recent versions of WeChat Mini Programs and wx.getUserProfile must be used instead. See the #link("https://developers.weixin.qq.com/miniprogram/dev/api/open-api/user-info/wx.getUserProfile.html")[WeChat developer community] for details.

In the latest standard even this API is discouraged in favor of the #link("https://developers.weixin.qq.com/miniprogram/dev/framework/open-ability/userProfile.html")[avatar and nickname filling interface], presumably to avoid leaking users' private information.

This change is recorded in this #link("https://github.com/hongjr03/MiniProgram/commit/d642d9cfae72e41df0b534d113d56bc77f79a633")[commit].
]
*Summary*

This lab was my first contact with mini program development. Although I ran into some problems (and unfortunately could not attend the first class in person), I managed to solve them by consulting documentation and asking classmates. I hope to master this technology better over the rest of the course! |
|
https://github.com/markcda/unitech-typst | https://raw.githubusercontent.com/markcda/unitech-typst/master/README.md | markdown | MIT License | # unitech-typst
В данном репозитории представлены основные шаблоны работ для студентов, оформленные в типографической системе [Typst](https://typst.app):
1. Пример студенческой работы (титульный лист, оглавление, ссылки на источники)
|
https://github.com/rabotaem-incorporated/calculus-notes-2course | https://raw.githubusercontent.com/rabotaem-incorporated/calculus-notes-2course/master/sections/02-measure-theory/02-volume-and-measure.typ | typst | #import "../../utils/core.typ": *
== Объем и мера
#def(label: "def-measure")[
$Pp$ --- полукольцо подмножеств в $X$. $mu: Pp --> [0; +oo]$.
#sublabel("volume")
- $mu$ --- _объем_, если:
1. $mu nothing = 0$
2. $P_1, ..., P_n in Pp; quad union.sq.big_(k = 1)^n P_k in Pp ==> mu (union.sq.big_(k = 1)^n P_k) = sum_(k = 1)^n mu P_k$
#sublabel("measure")
- $mu$ -- _мера_, если:
1. $mu nothing = 0$ #h(1fr) $(1^*)$
2. $P_1, P_2, ... in Pp; quad union.sq.big_(k = 1)^oo P_k in Pp ==> mu (union.sq.big_(k = 1)^oo P_k) = sum_(k = 1)^oo mu P_k$ #h(1fr) $(2^*)$
]
#notice[
1. If $mu$ is a measure, then $mu$ is a volume.
2. If $mu$ satisfies $(2^*)$ and there is a nonempty set on which $mu$ is finite, then $(2^*) ==> mu nothing = 0$. In other words, $(1^*)$ is only needed to rule out the degenerate case $mu equiv +oo$.
]
#examples(name: "of volumes", label: "volume-examples")[
1. Length on half-open intervals is a volume.
2. Let $g: RR --> RR$ be nondecreasing. Then the following $nu_g$ is a volume:
$
nu_g: Pp &--> [0, +oo) \
nu_(a, b] &= g(b) - g(a).
$
3. #sublabel("stdvol") The classical volume on cells#rf("def-R-cells") $lambda_m: Pp^m --> [0, +oo)$:
$ lambda lr((a, b]) = (b_1 - a_1)(b_2 - a_2) ... (b_m - a_m). $
4. $x_0 in X$, $a > 0$, $mu: 2^X --> lr([0, +oo])$:
$ mu A = cases(a\, &"if" x_0 in A, 0\, &"otherwise".) $
5. $Aa = {"bounded sets in" RR^2 "and their complements"}$
$ mu A = cases(0\, &#[if $A$ is bounded], 1\, &"otherwise".) $
Note that 5 is not a measure: $RR^2 in Aa$ is a countable disjoint union of bounded sets of measure $0$, while $mu RR^2 = 1$, so countable additivity fails.
]
#th(name: "properties of volume", label: "volume-props")[
#set enum(numbering: x => if x != 228 { numbering("1)", x) } else { "2')" } )
Let $Pp$ be a semiring of subsets of $X$ and $mu$ a volume on $Pp$. Then
#sublabel("monotonous")
1. Monotonicity: if $P' subset P$, then $mu P' <= mu P$
#sublabel("monotonous'")
2. Strengthened monotonicity: if $union.sq.big_(k=1)^n P_k subset P$, then $sum_(k=1)^n mu P_k <= mu P$
#sublabel("monotonous''")
228. #hide[Strengthened monotonicity: ] if $union.sq.big_(k=1)^oo P_k subset P$, then $sum_(k=1)^oo mu P_k <= mu P$
#sublabel("semiadditive")
3. Subadditivity: if $P subset union.big_(k=1)^n P_k$, then $mu P <= sum_(k=1)^n mu P_k$
]
#let Union = math.union.big
#let UnionSq = math.union.big.sq
#let usb = math.union.big.sq
#proof[
#set enum(numbering: x => if x != 228 { numbering("1)", x) } else { "2')" } )
1. Obvious.
2. $P without union.sq.big_(k=1)^n P_k =^rf("semiring-disjoint-union") union.sq.big_(j=1)^m Q_j$ for some $Q_j in Pp$, hence
$
P &= union.sq.big_(k=1)^n P_k union.sq union.sq.big_(j=1)^m Q_j
==> \ mu P &= sum_(k=1)^n mu P_k + sum_(j = 1)^m mu Q_j >=^rf("volume-props", "monotonous") sum_(k=1)^n mu P_k.
$
228.
$
union.sq.big_(k=1)^n P_k subset union.sq.big_(k=1)^oo P_k subset P ==>
mu P >=^rf("volume-props", "monotonous'") sum_(k=1)^n mu P_k --> sum_(k=1)^oo mu P_k
$
3.
$
P'_k := P sect P_k in Pp, space P subset Union_(k=1)^n P_k' =^rf("semiring-disjoint-union")
usb_(k=1)^n usb_(j=1)^(m_k)Q_(k j),
$
where $Q_(k j) subset P_k' subset P_k, space Q_(k j) in Pp$. Then:
$
usb_(j=1)^(m_k) Q_(k j) subset P_k
==>_(#[by 2.])^#rf("volume-props", "monotonous'")
sum_(j=1)^(m_k)mu Q_(k j) <= mu P_k. \
mu P <= sum_(k=1)^n sum_(j=1)^(m_k) mu Q_(k j) <= sum_(k =1)^n mu P_k
$
]
#notice(label: "semiring-subtract-volume")[
1. Let $Rr$ be a ring and $mu$ a volume on $Rr$. If $A subset B in Rr$ and $mu A < +oo$, then $mu(B without A) = mu B - mu A$.
]
#proof[
$
B = A union.sq B without A \ mu B = mu A + mu (B without A)
$
]
#th(name: "Cartesian product of volumes", label: "cartesian-prod-volume")[
Let $Pp$ be a semiring of subsets of $X$, $Qq$ a semiring of subsets of $Y$, $mu$ a volume on $Pp$, and $nu$ a volume on $Qq$.
$
lambda: Pp times Qq &--> [0, +oo] \
lambda(P times Q) &= mu P dot nu Q,
$
where we adopt the convention $0 dot (+oo) = (+oo) dot 0 = 0$.

Then $lambda$ is a volume on $Pp times Qq$#rf("cartesian-semiring-prod").
]
#proof[
We need to verify the properties of a volume#rf("def-measure", "volume"):
1. $lambda nothing = 0$
2. $P_1, ..., P_n in Pp; quad union.sq.big_(k = 1)^n P_k in Pp ==> lambda (union.sq.big_(k = 1)^n P_k) = sum_(k = 1)^n lambda P_k$
The first property is obvious.

To prove the second, first consider the *special case* where $P = usb_(i=1)^n P_i, space Q = usb_(j = 1)^m Q_j, space P, P_i in Pp, space Q, Q_j in Qq$; then $P times Q = usb_(i=1)^n usb_(j=1)^m P_i times Q_j$ and
#figure(cetz.canvas({
import cetz.draw: *
import cetz.decorations: *
line((-1, -0.5), (7.5, -0.5), stroke: 1pt + gray)
line((0, -0.5), (3, -0.5), stroke: 3pt + blue)
line((3, -0.5), (5, -0.5), stroke: 3pt + red)
line((5, -0.5), (7, -0.5), stroke: 3pt + green)
line((-0.5, -1), (-0.5, 4.5), stroke: 1pt + gray)
line((-0.5, 0), (-0.5, 3), stroke: 4pt + green)
line((-0.5, 3), (-0.5, 4), stroke: 3pt + red)
brace((3, -0.5), (0, -0.5), amplitude: 0.3, name: "br1")
brace((5, -0.5), (3, -0.5), amplitude: 0.3, name: "br2")
brace((7, -0.5), (5, -0.5), amplitude: 0.3, name: "br3")
brace((-0.5, 0), (-0.5, 3), amplitude: 0.3, name: "br4")
brace((-0.5, 3), (-0.5, 4), amplitude: 0.3, name: "br5")
content((rel: (0, -0.4), to: "br1.center"), $P_1$)
content((rel: (0, -0.4), to: "br2.center"), $P_2$)
content((rel: (0, -0.4), to: "br3.center"), $P_3$)
content((rel: (-0.5, 0), to: "br4.center"), $Q_1$)
content((rel: (-0.5, 0), to: "br5.center"), $Q_2$)
line((0, -1), (0, 4.5), stroke: (paint: gray, dash: "dashed"))
line((3, -1), (3, 4.5), stroke: (paint: gray, dash: "dashed"))
line((5, -1), (5, 4.5), stroke: (paint: gray, dash: "dashed"))
line((7, -1), (7, 4.5), stroke: (paint: gray, dash: "dashed"))
line((-1, 0), (7.5, 0), stroke: (paint: gray, dash: "dashed"))
line((-1, 3), (7.5, 3), stroke: (paint: gray, dash: "dashed"))
line((-1, 4), (7.5, 4), stroke: (paint: gray, dash: "dashed"))
rect((0, 0), (3, 3), fill: blue.lighten(50%))
rect((3, 0), (5, 3), fill: red.lighten(50%))
rect((5, 0), (7, 3), fill: green.lighten(50%))
rect((0, 3), (3, 4), fill: purple.lighten(50%))
rect((3, 3), (5, 4), fill: yellow.lighten(50%))
rect((5, 3), (7, 4), fill: orange.lighten(50%))
}))
$
lambda (P times Q) = mu P dot nu Q = sum_(i=1)^n mu P_i dot sum_(j=1)^m nu Q_j = sum_(i=1)^n sum_(j = 1)^m mu P_i dot nu Q_j = sum_(i=1)^n sum_(j = 1)^m lambda(P_i times Q_j)
$
*General case*: $P times Q = usb_(k=1)^n P_k times Q_k$
#figure(cetz.canvas({
import cetz.draw: *
import cetz.decorations: *
line((-1, -0.5), (7.5, -0.5), stroke: 1pt + gray)
line((0, -0.5), (3, -0.5), stroke: 3pt + blue)
line((3, -0.5), (6, -0.5), stroke: 3pt + red)
line((6, -0.5), (7, -0.5), stroke: 3pt + green)
line((-0.5, -1), (-0.5, 4.5), stroke: 1pt + gray)
line((-0.5, 0), (-0.5, 2), stroke: 4pt + green)
line((-0.5, 2), (-0.5, 4), stroke: 4pt + blue)
line((-0.5, 4), (-0.5, 5), stroke: 3pt + red)
brace((3, -0.5), (0, -0.5), amplitude: 0.3, name: "br1")
brace((6, -0.5), (3, -0.5), amplitude: 0.3, name: "br2")
brace((7, -0.5), (6, -0.5), amplitude: 0.3, name: "br3")
brace((-0.5, 0), (-0.5, 2), amplitude: 0.3, name: "br4")
brace((-0.5, 2), (-0.5, 4), amplitude: 0.3, name: "br5")
brace((-0.5, 4), (-0.5, 5), amplitude: 0.3, name: "br6")
content((rel: (0, -0.4), to: "br1.center"), $P'_1$)
content((rel: (0, -0.4), to: "br2.center"), $P'_2$)
content((rel: (0, -0.4), to: "br3.center"), $P'_3$)
content((rel: (-0.5, 0), to: "br4.center"), $Q'_1$)
content((rel: (-0.5, 0), to: "br5.center"), $Q'_2$)
content((rel: (-0.5, 0), to: "br6.center"), $Q'_3$)
line((0, -1), (0, 4.5), stroke: (paint: gray, dash: "dashed"))
line((3, -1), (3, 4.5), stroke: (paint: gray, dash: "dashed"))
line((5, -1), (5, 4.5), stroke: (paint: gray, dash: "dashed"))
line((6, -1), (6, 4.5), stroke: (paint: gray, dash: "dashed"))
line((7, -1), (7, 4.5), stroke: (paint: gray, dash: "dashed"))
line((-1, 0), (7.5, 0), stroke: (paint: gray, dash: "dashed"))
line((-1, 2), (7.5, 2), stroke: (paint: gray, dash: "dashed"))
line((-1, 3), (7.5, 3), stroke: (paint: gray, dash: "dashed"))
line((-1, 4), (7.5, 4), stroke: (paint: gray, dash: "dashed"))
rect((0, 0), (3, 4), fill: blue.lighten(50%))
rect((0, 4), (6, 5), fill: red.lighten(50%))
rect((6, 2), (7, 5), fill: green.lighten(50%))
rect((3, 0), (7, 2), fill: yellow.lighten(50%))
rect((3, 2), (6, 4), fill: orange.lighten(50%))
}))
We know that $P, Q, P_k, Q_k$ can be represented as finite unions of elements of the corresponding semirings, and these can be intersected with one another; hence there exist $P'_i, Q'_j$ such that
$
P times Q = usb_(i = 1)^N usb_(j = 1)^M P'_i times Q'_j \
P_k times Q_k = usb_("over some subset") P'_i times Q'_j \
forall i, j space P'_i times Q'_j "lies in exactly one" P_k times Q_k
$
This partition falls under the case treated above:
$ lambda(P times Q) = sum_(i=1)^N sum_(j=1)^M lambda(P'_i times Q'_j) =^"regrouping terms" sum_(k=1)^n lambda (P_k times Q_k) $
]
#follow[
The classical volume is a volume.
]
#examples[(Measures)
1. The classical volume is a measure. (proof later)
2. Let $g: RR --> RR$ be a non-decreasing, right-continuous function. Then $nu_g lr((a, b]) = g(b) - g(a)$ is a measure.
3. For $x_0 in X$ and $a > 0$,
$
mu A = cases(a\, & "if" x_0 in A, 0\, & "otherwise") quad #[is a measure].
$
4. (_Counting measure_) $X$ is an arbitrary set, $mu: 2^X --> [0; +oo]$, where $mu A$ is the number of elements of the set $A$.
5. $X$ is arbitrary, $X supset T = {t_1, t_2, ...}$ is at most countable, and $w_1, w_2, ... >= 0$. Let $display(mu A = sum_(t_i in A) w_i)$. This is a measure.
]
/*
#proof[(Of something)
Let us prove that $A = usb_(k = 1)^oo A_k ==> mu A = sum^oo_(k = 1) mu A_k$
Let $S := underbrace(sum a_(k j), "in some order") = sum_(k = 1)^oo sum_(j = 1)^oo a_(k j) =: S'$.
$
sum_(k = 1)^n sum_(j = 1)^oo a_(k j) = sum^oo_(j = 1) sum_(k = 1)^n a_(k j).
$
$
S >= sum_(k = 1)^n sum_(j = 1)^oo a_(k j) --> sum_(k = 1)^oo sum_(j = 1)^oo a_(k j) ==> S >= S'.
$
Consider a finite sum: $ sum a_(k j) <= sum_(k = 1)^n sum_(j = 1)^m a_(k j) <= sum_(k = 1)^n sum_(j = 1)^oo a_(k j) <= sum_(k = 1)^oo sum_(j = 1)^oo a_(k j) = S' ==> S <= S' $
Hence $S = S'$
]
*/
#th(label: "measure-semiadditive")[
Let $mu: Pp --> [0; +oo]$ be a volume on a semiring $Pp$. Then $mu #[is a measure] <==> #[whenever] P, P_1, P_2, ... in Pp$ are such that $P subset Union_(k=1)^oo P_k$, we have
$mu P <= sum_(k=1)^oo mu P_k$. This property is called _countable semiadditivity_.
]
#proof[
- "$<==$"
If $P = usb_(k=1)^oo P_k$, then countable semiadditivity yields $mu P <= sum_(k=1)^oo mu P_k$.
And strengthened monotonicity of the volume#rf("volume-props", "monotonous''") yields the reverse inequality $==> mu P >= sum_(k=1)^oo mu P_k$.
- "$==>$": $P subset Union_(k=1)^oo P_k$.
Let $P'_k = P sect P_k$; then $ P subset Union_(k=1)^oo P'_k ==>^rf("semiring-disjoint-union") P subset usb_(k=1)^oo usb_(j=1)^(m_k) Q_(k j), $ where $Q_(k j) in P$ and $Q_(k j) subset P'_k subset P_k$. Then we have $usb_(j=1)^(m_k) Q_(k j) subset P_k ==> sum_(j=1)^(m_k) mu Q_(k j) <= mu P_k$ by strengthened monotonicity of the volume#rf("volume-props", "monotonous''"), and
$
mu P <=^rf("volume-props", "monotonous")
sum_(k=1)^oo sum_(j=1)^(m_k) mu Q_(k j) <=^rf("volume-props", "monotonous''")
sum_(k=1)^oo mu P_k.
$
]
#follow(label: "union-zero-measure")[
Let $mu$ be a measure on a $sigma$-algebra. Then a countable union of sets of $mu$-measure zero has measure zero.
]
#proof[
$
mu(Union_(k=1)^oo P_k) <=^rf("measure-semiadditive")
sum_(k = 1)^oo mu P_k = 0, "if" forall k space mu P_k = 0
$
]
#th(label: "bottom-up-continious")[
Let $mu$ be a volume on a $sigma$-algebra. Then $mu$ is a measure if and only if $A_1 subset A_2 subset ... in Aa ==> mu(Union_(k=1)^oo A_k) = lim_(n -> oo) mu A_n$. This property is called _continuity from below_.
]
#proof[
- "$==>$": $B_n := A_n without A_(n-1), space A_0 = nothing$
$
Union_(k=1)^oo A_k = usb_(k=1)^oo B_k, space #[since $A_n = usb_(k=1)^n B_k$]
$
$
mu(Union_(k=1)^oo A_k) =
mu(usb_(k=1)^oo B_k) =^rf("def-measure", "measure")
sum_(k=1)^oo mu B_k =
sum_(k=1)^oo underbrace(mu(A_k without A_(k-1)), mu A_k - mu A_(k-1)#[#rf("semiring-subtract-volume")])
$
If $mu A_k < +oo$ for all $k$, then
$ mu(Union_(k=1)^oo A_k) = lim_(n->oo)sum_(k=1)^n (mu A_k - mu A_(k-1)) = lim mu A_n. $
If $mu A_n = +oo$ for some $n$, then $mu A_k = +oo space forall k >= n$ and $mu(Union A_k) = +oo$.
- "$<==$": Let $C = usb_(k = 1)^oo C_k$. We must show that $mu C =^#rf("def-measure", "measure") sum_(k = 1)^oo mu C_k$. Let $A_n := usb_(k = 1)^n C_k$.
$
mu A_n =^rf("def-measure", "volume")
sum_(k = 1)^n mu C_k ==>
mu C =^(A_n -> C) mu(Union_(n = 1)^oo A_n) =^"cont. from\nbelow"
lim_(n->oo) mu A_n =^"sum of\nthe series"
sum_(k = 1)^oo mu C_k.
$
]
#th(label: "top-down-continious")[
Let $mu$ be a volume on a $sigma$-algebra $Aa$ of subsets of $X$, with $mu X < +oo$. Then the following conditions are equivalent:
1. $mu$ is a measure
2. (continuity from above) $A_1 supset A_2 supset A_3 supset ... ==> mu(sect.big_(n=1)^oo A_n) = lim_(n->oo)mu A_n$
3. (continuity from above at the empty set) If $A_1 supset A_2 supset A_3 supset ...$ and $sect.big_(n=1)^oo A_n = nothing$, then $lim_(n->oo)mu A_n = 0$
]
#proof[
- $2) => 3)$: trivial.
- $1) => 2)$: $B_n = A_1 without A_n$. $nothing = B_1 subset B_2 subset B_3 subset ...$.
$
mu A_1 - mu (sect.big_(n = 1)^oo A_n) =
mu(A_1 without sect.big_(n = 1)^oo A_n) =
mu(Union_(n = 1)^oo B_n) =^rf("bottom-up-continious")
\ #h(3cm) =
lim_(n -> oo) mu B_n =
lim_(n -> oo) (mu A_1 - mu A_n) =
mu A_1 - lim mu A_n.
$
- $3) => 1)$:
Let $A = usb_(k = 1)^oo C_k$.
Consider $A_n = usb_(k = n)^oo C_k$.
Since $A_1 supset A_2 supset A_3 supset ...$ and $sect.big_(n = 1)^oo A_n = nothing$, we get $mu A_n --> 0$.
$
A = (usb_(k = 1)^(n - 1) C_k) union.sq A_n ==>
mu A = (sum_(k = 1)^(n - 1) mu C_k) + mu A_n --> sum_(k = 1)^oo mu C_k.
$
]
#notice[
1. If $mu$ is a measure and $mu A_n < +oo$ for some $n$, then $mu(sect.big A_k) = lim_(k -> oo) mu A_k$.
2. The condition $mu X < +oo$ matters: consider, e.g., $A_n = (n, +oo)$.
]
|
|
https://github.com/maxluoXIII/resume | https://raw.githubusercontent.com/maxluoXIII/resume/master/README.md | markdown | # Welcome to my resume repository!
Each branch is for specific industries, but only the master branch is guaranteed
to be updated. Please take a look, and reach out if you're interested.
## Word Diff
This section is basically a note to self. In order to diff the `.docx` files
properly, the `.gitattributes` file has already been set up, but the git config
still needs to be set. Create a script like the one below
```bat
pandoc -f docx -t plain -o - %1
```
and do `git config diff.word.textconv <scriptname>`.
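For completeness, the matching `.gitattributes` entry should look roughly like this (assuming the diff driver is named `word`, as in the config command above):

```
*.docx diff=word
```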
The most recent version of this resume is the [Typst](https://typst.app) version.
|
|
https://github.com/Turkeymanc/notebook | https://raw.githubusercontent.com/Turkeymanc/notebook/main/entries/example-entry.typ | typst | #import "/packages.typ": *
#import components: *
// The show rule passes all content in this file into the `create-body-entry` function.
// This just means that we don't need an extra level of indentation when writing this entry.
#show: create-body-entry.with(
title: "Example Entry",
type: "identify",
date: datetime(year: 2024, month: 1, day: 1),
)
Here's some content in this entry.
Here's an example of how you'd create a pro-con table:
#pro-con(
pros: [
- pro number 1
- pro number 2
- pro number 3
],
cons: [
- con number 1
- con number 2
- con number 3
]
)
Now we'll generate 50 words of filler text!
#lorem(50)
|
|
https://github.com/QuadnucYard/pigeonful-typ | https://raw.githubusercontent.com/QuadnucYard/pigeonful-typ/main/src/lib.typ | typst | MIT License | #let pigeonful(
application-title: [志愿信息],
entries: (:),
notice-kind: "dailuqu",
notice-title: auto,
notice-hint: auto,
notice-body: [],
notifier: [],
acceptance: [],
width: 768pt,
) = {
if notice-title == auto {
notice-title = if notice-kind == "fushi" {
[复试通知]
} else if notice-kind == "dailuqu" {
[待录取通知]
} else {
panic("unknown notice kind: " + str(notice-kind))
}
}
if notice-hint == auto {
notice-hint = if notice-kind == "fushi" {
[接受或拒绝复试通知后,将无法更改。]
} else if notice-kind == "dailuqu" {
[接受或拒绝待录取通知后,将无法更改。]
} else {
panic("unknown notice kind: " + str(notice-kind))
}
}
/// Block with specific line height
let line-block(body, height: auto) = {
block(height: height, align(horizon, body))
}
/// Outer box
let border-block = block.with(width: 100%, stroke: rgb("#ccc"))
set text(lang: "zh", font: "Microsoft YaHei", size: 14pt, cjk-latin-spacing: none)
show: block.with(width: width)
set block(spacing: 0pt)
// title
border-block(fill: rgb("#f9f9f9"), inset: 10pt)[
#line-block(height: 17.5pt, text(fill: rgb("#50aac2"), strong(application-title)))
]
// description list
border-block(inset: 10pt)[
#set text(overhang: false)
#show grid.cell.where(x: 0): set text(fill: rgb("#65696E"))
#show grid.cell.where(x: 1): set text(fill: rgb("#2B2E33"))
#grid(
columns: (38%, 59%),
rows: 30.75pt,
align: (right + horizon, left + horizon),
column-gutter: 0.5em,
..for (k, v) in entries {
([#k:], [#v])
}
)
#v(7.5pt)
]
  // fushi (interview) / dailuqu (pending admission) notice
border-block(inset: 10pt)[
#line-block(height: 16pt)[
#text(fill: rgb("#c90"), strong(notice-title))
#h(6pt)
#text(fill: rgb("#93989E"), notice-hint)
]
]
// notice details
border-block(inset: 10pt)[
#line-block(height: 22pt)[#text(fill: rgb("#848484"), notifier)]
#block(inset: (top: 10.5pt, bottom: 20pt, x: 25.5pt))[
#block(
width: 100%,
inset: (x: 13pt, y: 10pt),
stroke: rgb("#b2e2ea"),
fill: rgb("#e5f2f8"),
)[
#place(dx: 20pt, dy: -19pt)[
#place[#text(fill: rgb("#b2e2ea"), size: 23pt)[#sym.diamond.filled]]
#place(dy: 1.4pt)[#text(fill: rgb("#e5f2f8"), size: 23pt)[◆]]
]
#v(3.5pt)
#block[
#set text(rgb("#2B2E33"))
#set par(leading: 12pt)
#notice-body
]
#v(7pt)
#align(center)[
#block(inset: 10pt)[
#block(stroke: rgb("#ccc"), radius: 4pt, inset: (x: 15pt))[
#line-block(height: 28pt)[
#text(fill: rgb("#007F00"))[#text(font: "Rei", size: 16pt)[] #acceptance]
]
#v(4pt)
]
]
]
]
]
]
}
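A minimal usage sketch (not from the repository; the entry keys and values below are invented for illustration, and the import path assumes the file is used as `lib.typ`):

```typ
#import "lib.typ": pigeonful

#pigeonful(
  entries: (
    "姓名": "张三",
    "报考单位": "某大学计算机学院",
  ),
  notice-kind: "dailuqu",
  notice-body: [请尽快登录系统确认待录取通知。],
  notifier: [研究生招生办公室],
  acceptance: [接受待录取通知],
)
```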
|
https://github.com/Myriad-Dreamin/typst.ts | https://raw.githubusercontent.com/Myriad-Dreamin/typst.ts/main/fuzzers/corpora/meta/link_09.typ | typst | Apache License 2.0 |
#import "/contrib/templates/std-tests/preset.typ": *
#show: test-page
// Test link to label.
Text <hey>
#link(<hey>)[Go to text.]
|
https://github.com/htlwienwest/da-vorlage-typst | https://raw.githubusercontent.com/htlwienwest/da-vorlage-typst/main/template/anhang.typ | typst | MIT License | Im Anhang kann Material, das nicht in den Text eingebunden wurde, beigefügt werden, z.B. Fragebögen, Interviews, Projektdokumnetation (Begleitprotokoll, Antrag, etc.).
Bitte sprecht mit eurer Betreuerin/eurem Betreuer ab, was alles im Anhang enthalten sein soll.
|
https://github.com/PhilipNelson5/Resume-Typst | https://raw.githubusercontent.com/PhilipNelson5/Resume-Typst/main/resume.typ | typst | Apache License 2.0 | #import "cv.typ": *
// Load CV data from YAML
#let cvdata = yaml("resume.yml")
#let uservars = (
headingfont: "Linux Libertine", // Set font for headings
bodyfont: "Linux Libertine", // Set font for body
fontsize: 10pt,
margin: (
x: 1.25cm,
y: 1cm
),
linespacing: 6pt,
showAddress: false, // Show address in contact info
showNumber: true, // Show phone number in contact info
)
// setrules and showrules can be overridden by re-declaring it here
// #let setrules(doc) = {
// // Add custom document style rules here
//
// doc
// }
#let customrules(doc) = {
// Add custom document style rules here
doc
}
#let cvinit(doc) = {
doc = setrules(uservars, doc)
doc = showrules(uservars, doc)
doc = customrules(doc)
doc
}
// Each section function can be overridden by re-declaring it here
// #let cveducation = []
// Content
#show: doc => cvinit(doc)
#cvheading(cvdata, uservars)
#cveducation(cvdata)
#cvwork(cvdata)
#cvservice(cvdata)
#cvprojects(cvdata)
#cvcertificates(cvdata)
#cvpublications(cvdata)
#cvskills(cvdata)
#cvawards(cvdata)
#cvreferences(cvdata)
#endnote
|
https://github.com/Its-Alex/resume | https://raw.githubusercontent.com/Its-Alex/resume/master/lib/experiences.typ | typst | MIT License | #import "components/title.typ": customTitle
#import "components/link_with_icon.typ": linkWithIcon
#let experiences(
title,
experiences
) = [
#customTitle(title)
#grid(
columns: (100%),
gutter: 0pt,
row-gutter: 1.5em,
..experiences.map((experience) => [
#block(breakable: false)[
#grid(
columns: (40%, 60%),
gutter: 0pt,
{
grid(
columns: 100%,
gutter: 5pt,
{
text(weight: 600)[#experience.name]
},
{
text(weight: 400, fill: rgb("6E6C70"))[#experience.position]
}
)
},
{
grid.cell(align: right)[
#text(weight: 400, rgb("9D9BA0"))[(#experience.dates)]
]
}
)
#text()[#eval(experience.description, mode: "markup")]
#linkWithIcon(
"link.svg",
experience.link,
experience.link
)
]
])
)
] |
https://github.com/jgm/typst-hs | https://raw.githubusercontent.com/jgm/typst-hs/main/test/typ/compiler/string-07.typ | typst | Other | // Test the `slice` method.
#test("abc".slice(1, 2), "b")
#test("abc🏡def".slice(2, 7), "c🏡")
#test("abc🏡def".slice(2, -2), "c🏡d")
#test("abc🏡def".slice(-3, -1), "de")
|
https://github.com/jxpeng98/typst-coverletter | https://raw.githubusercontent.com/jxpeng98/typst-coverletter/main/modernpro-coverletter.typ | typst | MIT License | ///////////////////////////////
// modernpro-coverletter.typ
// A cover letter template with modern Sans font for job applications and other formal letters.
// Copyright (c) 2024
// Author: <NAME>
// License: MIT
// Version: 0.0.5
// Date: 2024-10-21
// Email: <EMAIL>
///////////////////////////////
#let coverletter(
font-type: "",
name: "",
address: none,
contacts: (),
recipient: (start-title: "", cl-title: "", date: "", department: "", institution: "", address: "", postcode: ""),
supplement: none,
mainbody,
) = {
set text(font: font-type, weight: "regular")
let date-colour = rgb("#666666")
let primary-colour = rgb("#000000")
let headings-colour = rgb("#2b2b2b")
let subheadings-colour = rgb("#333333")
let sectionsep = {
[#v(5pt)]
}
let subsectionsep = {
[#v(2pt)]
}
// set the recipient
let recipient-generate(start-title, cl-title, date, department, institution, address, postcode) = {
align(
left,
{
if department != [] {
text(10pt, font: font-type, fill: subheadings-colour, weight: "bold")[#department]
}
h(1fr)
if date != [] {
text(10pt, font: font-type, fill: primary-colour, weight: "light")[#date\ ]
} else {
text(
10pt,
font: font-type,
fill: primary-colour,
weight: "light",
)[ #datetime.today(offset: auto).display("[day] [month repr:long] [year]")\ ]
}
if institution != [] {
text(10pt, font: font-type, fill: subheadings-colour, weight: "bold")[#institution\ ]
}
if address != [] {
text(10pt, font: font-type, fill: headings-colour, weight: "light")[#address\ ]
}
if postcode != [] {
text(10pt, font: font-type, fill: headings-colour, weight: "light")[#postcode ]
}
},
)
    if cl-title != [] {
      align(left, text(12pt, font: font-type, fill: primary-colour, weight: "bold")[#upper(cl-title)])
      v(0.1em)
    }
if start-title != [] {
set text(11pt, font: font-type, fill: primary-colour, weight: "regular")
[#start-title]
}
}
// show contact details
let contact-display(contacts) = {
v(-5pt)
set text(10pt, fill: headings-colour, weight: "regular")
contacts
.map(contact => {
if ("link" in contact) {
link(contact.link)[#{
contact.text
}]
} else [
#{
contact.text
}
]
})
.join(" | ")
}
set page(
margin: (left: 1.25cm, right: 1.25cm, top: 3cm, bottom: 1.5cm),
header: {
// Head Name Section
text(
20pt,
fill: primary-colour,
weight: "bold",
top-edge: "baseline",
bottom-edge: "baseline",
baseline: 12pt,
)[#align(center, [#name])]
// address
if address != none {
v(5pt)
text(
11pt,
fill: primary-colour,
weight: "regular",
top-edge: "baseline",
bottom-edge: "baseline",
baseline: 2pt,
)[#align(center, [#address])]
}
v(2pt)
align(center)[#contact-display(contacts)]
line(length: 100%, stroke: 0.2pt + primary-colour)
},
header-ascent: 1em,
)
// Add recipient details
recipient-generate(
recipient.start-title,
recipient.cl-title,
recipient.date,
recipient.department,
recipient.institution,
recipient.address,
recipient.postcode,
)
set par(justify: true, first-line-indent: 2em)
set text(11pt, font: font-type, fill: primary-colour, weight: "regular")
mainbody
set text(11pt, font: font-type, fill: primary-colour, weight: "regular")
v(1pt)
[Sincerely,]
v(1pt)
[*#name*]
if supplement != none {
v(1pt)
[#supplement]
}
}
#let statement(
font-type: "",
name: "",
address: none,
contacts: (),
supplement: none,
mainbody,
) = {
set text(font: font-type, weight: "regular")
let date-colour = rgb("#666666")
let primary-colour = rgb("#000000")
let headings-colour = rgb("#2b2b2b")
let subheadings-colour = rgb("#333333")
let sectionsep = {
[#v(5pt)]
}
let subsectionsep = {
[#v(2pt)]
}
// show contact details
let contact-display(contacts) = {
v(-5pt)
set text(10pt, fill: headings-colour, weight: "regular")
contacts
.map(contact => {
if ("link" in contact) {
link(contact.link)[#{
contact.text
}]
} else [
#{
contact.text
}
]
})
.join(" | ")
}
set page(
margin: (left: 1.25cm, right: 1.25cm, top: 3cm, bottom: 1.5cm),
header: {
// Head Name Section
text(
20pt,
fill: primary-colour,
weight: "bold",
top-edge: "baseline",
bottom-edge: "baseline",
baseline: 12pt,
)[#align(center, [#name])]
// address
      if address != none and address != [] {
v(5pt)
text(
11pt,
fill: primary-colour,
weight: "regular",
top-edge: "baseline",
bottom-edge: "baseline",
baseline: 2pt,
)[#align(center, [#address])]
}
align(center)[#contact-display(contacts)]
line(length: 100%, stroke: 0.2pt + primary-colour)
},
header-ascent: 1em,
)
set par(justify: true, first-line-indent: 2em)
set text(11pt, font: font-type, fill: primary-colour, weight: "regular")
mainbody
set text(11pt, font: font-type, fill: primary-colour, weight: "regular")
if supplement != none {
v(1pt)
[#supplement]
}
} |
https://github.com/Zuttergutao/Typstdocs-Zh-CN- | https://raw.githubusercontent.com/Zuttergutao/Typstdocs-Zh-CN-/main/Classified/Part%20II.typ | typst | #import "format.typ":*
#[
#set heading(numbering: none)
#show heading: it => block[
#set align(center)
#set text(size:28pt,weight:700)
#it.body
]
= Part II A Concise Tutorial
]
// align
= Content Alignment
```para
// Align content horizontally/vertically
align(
set alignment 2d alignment,
// alignment along the two axes; horizontal: start end left center right
// vertical: top horizon bottom; combine a horizontal and a vertical value with +
content,
) -> content
```
```typ
#align(center)[#lorem(10)]
#align(right)[#lorem(10)]
#align(left)[#lorem(10)]
#align(center+top)[#lorem(10)]
```
#[
#show:blockk
#align(center)[#lorem(10)]
#align(right)[#lorem(10)]
#align(left)[#lorem(10)]
#align(center+top)[#lorem(10)]
]
= Inserting Images
<image>
```para
A figure with a caption, which can be referenced
figure(
content, // the figure content
set caption: `none` `content`, // the figure caption
set numbering: `none` `string` `function`, // figure numbering
set gap: `length`, // spacing between the figure and its caption
) -> content
```
```para
Image
image(
`string`, // path to the image file
set width: `auto` `relative length`, // image width
set height: `auto` `relative length`, // image height
set alt: `none` `string` // image description
set fit: `string`,
// how the image adjusts to the area: cover (default, fully covers the area),
// contain (fully contained within the area), stretch (stretched to fill it)
) -> content
```
```typ
@gege shows the right pose of playing basketball
#figure(
image("1.png",height:20%),
caption: [
Playing basketball.
],
numbering: "1",
) <gege>
```
#[
#show:blockk
@gege shows the right pose of playing basketball
#figure(
image("1.png",height:23%),
caption: [
Playing basketball.
],
numbering: "1",
) <gege>
]
#pagebreak()
= Creating Boxes
Creating boxes is very useful for page layout.
```para
An inline-level container. Apart from formulas, text, and boxes, all elements are block-level and cannot appear inside a paragraph.
A box can be used to integrate such elements into a paragraph.
box(
set width: `auto` `relative length` `fraction`, // box width
set height: `auto` `relative length`, // box height
set baseline: `relative length`, // box baseline
set fill: `none` `color`, // background color
set stroke: `none` `length` `color` `dictionary` `stroke`, // box border
set radius: `relative length` `dictionary`, // corner radius of the box
set inset: `relative length` `dictionary`, // padding between the content and the box border
set outset: `relative length` `dictionary`, // amount the box expands outwards
set `none` `content`,
) -> content
```
```typ
Image with baseline 50%:
#box(width:auto,
height: 10%,
baseline: 50%,
stroke: gray+2pt,
radius:5pt,
inset: 1pt,
outset: 1pt,
image("1.png")
)
```
#[
#show:blockk
Image with baseline 50%:
#box(width:auto,
height: 12%,
baseline: 50%,
stroke: gray+2pt,
radius:5pt,
inset: 1pt,
outset: 1pt,
image("1.png")
)
]
#set page(columns:2)
= Forced Column Breaks
```para
Forces a column break. When used in a single-column layout or in the last column of a page, this function behaves like pagebreak(). Otherwise, the content after the break is placed in the next column.
Here you can also see that Typst's handling of spacing in Chinese text is not perfect yet; its whitespace detection is flawed.
colbreak(
set weak: boolean
) -> content
```
```typ
Preliminary findings from our ..
Use \#colbreak() to force a column break
#colbreak()
Through rigorous experimentation ..
Note that the ellipses here have only two dots, because with three dots the Chinese text would come out garbled.
```
#[
#show:blockk
Preliminary findings from our
ongoing research project have
revealed a hitherto unknown
phenomenon of extraordinary
significance. \
Use \#colbreak() to force a column break
#colbreak()
Through rigorous experimentation
and analysis, we have discovered
a hitherto uncharacterized process
that defies our current
understanding of the fundamental
laws of nature.
]
= Setting the Number of Columns
```para
#pagebreak() forces a page break
#set page(columns:2) sets the number of columns on the page
Columns split a region into several equally sized columns. Column heights are not balanced, and columns may break across pages.
columns(
set integer, // number of columns
set gutter: `relative length`,
// spacing between columns
content,
) -> content
```
```typ
#box(height:150pt,columns(2, gutter: 20pt)[
#set par(justify: true)
This research was funded by the National Academy of Sciences. NAoS provided support for field tests and interviews with a grant of up to USD 40.000 for a period of 6 months.
]
)
In recent years, deep learning has increasingly been used to solve a variety of problems.
```
#[
#show:blockk
#box(height:150pt,columns(2, gutter: 20pt)[
#set par(justify: true)
This research was funded by the National Academy of Sciences. NAoS provided support for field tests and interviews with a grant of up to USD 40.000 for a period of 6 months.
]
)
In recent years, deep learning has increasingly been used to solve a variety of problems.
]
// enum
= Ordered Lists <enum>
enum creates ordered and unordered lists with consecutive numbering; the enum function also has dedicated syntactic sugar:
A line starting with + creates an automatically numbered enumeration item.
A line starting with a number and a . creates an explicitly numbered enumeration item.
Enumeration items can contain multiple paragraphs and other block-level content. Everything indented further than an item's + or . becomes part of that item.
#colbreak()
```typ
enum(
set tight: `boolean`,
// compact display
set numbering: `string` `function`,
// how to number the items
set start: `integer`,
// starting value of the numbering
set full: `boolean`,
// whether to display the full numbering
set indent: `length`,
// indentation of each item
set body-indent: `length`,
// spacing between the number and the item body
set spacing: `auto` `relative length` `fraction`,
// line spacing
..contentarray,
) -> content
```
```typ
#block(fill:gray,width:100%,
inset:5pt,radius:5pt)[
Automatically numbered:
+ Preparations
+ Analysis
Manually numbered:
2. What is the first step?
5. I am confused.
+ Moving on ...
Function call.
#enum[First][Second]
]
```
#colbreak()
#[
#show:blockk
#block(fill:gray,width:100%,
inset:5pt,radius:5pt)[
Automatically numbered:
+ Preparations
+ Analysis
Manually numbered:
2. What is the first step?
5. I am confused.
+ Moving on ...
Function call.
#enum[First][Second]
]
]
```typ
#block(fill:green,width:100%,
radius:5pt)[
#set enum(numbering: "a)")
+ Starting off ...
+ Don't forget step two
]
```
#[
#show:blockk
#block(fill:green,width:100%,inset:5pt,radius:5pt)[
#set enum(numbering: "a)")
+ Starting off ...
+ Don't forget step two
]
]
```typ
#block(fill:rgb("#b1f2eb"),
width:100%,
inset:5pt,
radius:5pt)[
#set enum(numbering: "1.a)")
+ Different
+ Numbering
+ Nested
+ Items
+ Style
#set enum(numbering: n => super[#n])
+ Superscript
+ Numbering!
#enum(
start: 3,
numbering: "a.",
[Skipping],
[Ahead],
)
]
```
#[
#show:blockk
#block(fill:rgb("#b1f2eb"),
width:100%,
inset:5pt,
radius:5pt)[
#set enum(numbering: "1.a)")
+ Different
+ Numbering
+ Nested
+ Items
+ Style
#set enum(numbering: n => super[#n])
+ Superscript
+ Numbering!
#enum(
start: 3,
numbering: "a.",
[Skipping],
[Ahead],
)
]
]
#colbreak()
```typ
#block(fill:rgb("#b1f2eb"),
width:100%,inset:5pt,
radius:5pt)[
#enum(
start: 3,
numbering: "a.",
[Skipping],
[Ahead],
)
]
```
#block(fill:rgb("#b1f2eb"),
width:100%,inset:5pt,radius:5pt)[
#enum(
start: 3,
numbering: "a.",
[Skipping],
[Ahead],
)
]
```typ
#block(fill:rgb("#b122eb"),
width:100%,inset:5pt,
radius:5pt)[
#set enum(numbering: "1.a)", full: true)
+ Cook
+ Heat water
+ Add integredients
+ Eat
]
```
#block(fill:rgb("#b122eb"),
width:100%,inset:5pt,radius:5pt)[
#set enum(numbering: "1.a)", full: true)
+ Cook
+ Heat water
+ Add integredients
+ Eat
]
#pagebreak()
#set page(columns:1)
// Grid
= Grid Layout <grid>
Lays out content in a grid. grid lets you arrange content in a grid: you can define the number of rows and columns as well as the spacing between them. Several column and row sizing modes are available for building complex layouts.
```para
grid(
set columns: `auto` `integer` `relative length` `fraction` `array`,
// number of columns
set rows: `auto` `integer` `relative length` `fraction` `array`,
// number of rows
set gutter: `auto` `integer` `relative length` `fraction` `array`,
// row/column spacing
set column-gutter: `auto` `integer` `relative length` `fraction` `array`,
// column spacing, takes precedence over gutter
set row-gutter: `auto` `integer` `relative length` `fraction` `array`,
// row spacing, takes precedence over gutter
..content,
) -> content
```
```typ
#let cell = rect.with(
inset: 8pt,
fill: rgb("e4e5ea"),
width: 100%,
radius: 6pt
)
#grid(
columns: (60pt, 1fr, 60pt),
rows: (60pt, auto),
gutter: 3pt,
cell(height: 100%)[Easy to learn],
cell(height: 100%)[Great output],
cell(height: 100%)[Intuitive],
cell[Our best Typst yet],
cell[
Responsive \ design in \ print
for everyone
],
cell[One more thing...],
)
```
#[
#show:blockk
#let cell = rect.with(
inset: 8pt,
fill: rgb("e4e5ea"),
width: 100%,
radius: 6pt
)
#grid(
columns: (60pt, 1fr, 60pt),
rows: (60pt, auto),
gutter: 3pt,
cell(height: 100%)[Easy to learn],
cell(height: 100%)[Great output],
cell(height: 100%)[Intuitive],
cell[Our best Typst yet],
cell[
Responsive \ design in \ print
for everyone
],
cell[One more thing...],
)
]
== Combining Figure and Grid for Subfigures
```typ
#let subfigure(body, caption: "", numbering: "(a)") = {
let figurecount = counter(figure)
let subfigurecount = counter("subfigure")
let subfigurecounterdisply = counter("subfigurecounter")
let number = locate(loc => {
let fc = figurecount.at(loc)
let sc = subfigurecount.at(loc)
if fc == sc.slice(0,-1) {
subfigurecount.update(
fc + (sc.last()+1,)
)
subfigurecounterdisply.update((sc.last()+1,))
} else {
subfigurecount.update( fc + (1,))
subfigurecounterdisply.update((1,))
}
subfigurecounterdisply.display(numbering)
})
body
v(-.65em)
if not caption == none {
align(center)[#number #caption]
}
}
#figure(
grid(columns: 3, gutter: 2em,
subfigure(image("1.png", width: 50%)),
subfigure(image("1.png", width: 50%)),
subfigure(image("1.png", width: 50%), caption: "Test Caption")
),
caption: "Test caption"
)
#v(2em)
#figure(
grid(
columns: (1fr, 1fr, 1fr),
rows: (auto, auto),
gutter: 1pt,
image("1.png",width:50%),
image("1.png",width:50%),
image("1.png",width:50%),
image("1.png",width:50%),
image("1.png",width:50%),
image("1.png",width:50%),
),
numbering: "1",
caption: [
SubFigures.
]
)
```
#pagebreak()
#[
#show:blockk
#let subfigure(body, caption: "", numbering: "(a)") = {
let figurecount = counter(figure)
let subfigurecount = counter("subfigure")
let subfigurecounterdisply = counter("subfigurecounter")
let number = locate(loc => {
let fc = figurecount.at(loc)
let sc = subfigurecount.at(loc)
if fc == sc.slice(0,-1) {
subfigurecount.update(
fc + (sc.last()+1,)
)
subfigurecounterdisply.update((sc.last()+1,))
} else {
subfigurecount.update( fc + (1,))
subfigurecounterdisply.update((1,))
}
subfigurecounterdisply.display(numbering)
})
body
v(-.65em)
if not caption == none {
align(center)[#number #caption]
}
}
#figure(
grid(columns: 3, gutter: 2em,
subfigure(image("1.png", width: 40%)),
subfigure(image("1.png", width: 40%)),
subfigure(image("1.png", width: 40%), caption: "Test Caption")
),
caption: "Test caption"
)
#v(2em)
#figure(
grid(
columns: (1fr, 1fr, 1fr),
rows: (auto, auto),
gutter: 1pt,
image("1.png",width:35%),
image("1.png",width:35%),
image("1.png",width:35%),
image("1.png",width:35%),
image("1.png",width:35%),
image("1.png",width:35%),
),
numbering: "1",
caption: [
SubFigures.
]
)
]
// #pagebreak()
|
|
https://github.com/crd2333/crd2333.github.io | https://raw.githubusercontent.com/crd2333/crd2333.github.io/main/src/docs/Reading/Reconstruction/3DGS.typ | typst | #import "/src/components/TypstTemplate/lib.typ": *
#show: project.with(
title: "3DGS",
lang: "zh",
)
= 3DGS
- Original paper: 3D Gaussian Splatting for Real-Time Radiance Field Rendering (2023.8)
- The method section of the original paper is not very intuitive; reading it together with the survey A Survey on 3D Gaussian Splatting (2024.1) helps in understanding 3DGS
== Preface
- At first glance, 3DGS feels a bit like Plenoxels
- Both are methods that involve no neural networks at all (though one could argue that their forward and optimization processes can be viewed as a special kind of "network")
- The difference is that the voxel grid Plenoxels uses to bake features still mimics NeRF's neural implicit field, merely replacing the RGB representation with spherical harmonics (3DGS probably took this idea from there), and it still follows the backward-mapping pipeline of ray marching with per-pixel sampling and volume rendering
- 3DGS, by contrast, departs from NeRF entirely: it not only discards neural networks and introduces a new explicit representation, the 3D Gaussian, but also replaces volume rendering with splatting (forward mapping), achieving high-speed rendering
== Abstract & Introduction & Related Work
- First, a review of earlier approaches
- Implicit radiance fields generally use a neural network to learn a continuous volumetric representation. The upside is a differentiable and compact model of a complex scene; the downside is that, e.g., SDFs have limited representational power, and NeRF requires ray sampling (stochastic sampling is expensive and can introduce noise) plus volume rendering. The general form is
$ L_"implicit"(x,y,z,th,phi) = "NeuralNetwork"(x,y,z,th,phi) $
- Explicit radiance fields directly represent the distribution of light in a discrete spatial structure, such as voxel grids, point clouds, or this paper's Gaussians. The upside is faster lookup and rendering; the downside is higher memory use and resolution limits (discontinuity). The general form is
$ L_"explicit"(x,y,z,th,phi) = "DataStructure"[(x,y,z)] dot f(th,phi) $
- 3DGS adopts Gaussian ellipsoids as its representation: each 3D Gaussian is a small differentiable space that can be optimized, while the collection of Gaussians can be rasterized in parallel like triangles, striking a subtle balance between differentiability and discreteness. 3DGS fully exploits the advantages of explicit radiance fields while combining neural-network-style optimization with an efficient engineering implementation (parallelism, CUDA), achieving high-quality rendering together with faster training and real-time performance. Its formula is
$ L_"3DGS"(x,y,z,th,phi) = sum_i G(x,y,z,mu_i,Si_i) dot c_i (th,phi) $
- In summary, the paper's contributions are threefold
+ Introducing anisotropic 3D Gaussians as a high-quality, unstructured radiance field representation
+ An optimization method for the 3D Gaussian attributes, with interleaved optimization and density control, to represent the scene accurately
+ A differentiable rendering method called splatting that runs fast on the GPU for high-quality novel view synthesis
== Method
- The overall pipeline is shown below; I break it into three parts for understanding
#fig("/public/assets/Reading/Reconstruction/3DGS/2024-10-20-20-18-10.png")
=== Creating and Representing the Gaussians
- First, initialize from the sparse point cloud produced by SfM (calling the COLMAP library directly) or randomly (possible but not recommended; quality suffers), then assign Gaussian attributes to each point
- The attributes of a 3D Gaussian (all learnable, optimized by backpropagation) include
+ The center position $(x,y,z)$, represented as the mean $μ$ of the 3D Gaussian
+ The covariance matrix $Si$ of the 3D Gaussian. The covariance matrix must remain positive semi-definite, which is hard to guarantee under gradient descent, so it is represented as *scaling along the coordinate axes* ($bs$) combined with *a rotation expressed as a quaternion* ($bold(q)$): $Si = R S S^T R^T$, for $7$ parameters in total
+ The opacity $al$
+ The color $bc$, represented by spherical harmonics. Spherical harmonics are like the basis functions of a Fourier transform; with $4$ bands, $16$ coefficients are needed, i.e. $48$ coefficients over RGB, describing the color of this Gaussian when viewed from different directions $(th,phi)$
=== Splatting and Rendering
- Suppose first that the Gaussian attributes are already well optimized; given a pose, we render an image by splatting
- The term splatting evokes throwing snowballs at a surface
- In essence, the 3D Gaussian ellipsoids are projected onto the 2D plane, recording their depths (for front-to-back ordering) and the extent of the "splash" (the contribution to the image, largest at the center, which matches intuition). Then, ignoring the later optimizations, we iterate over all the snowballs pixel by pixel, accumulate their color contributions, and obtain the pixel values
- Comparing with NeRF, the two are almost inverse processes
- NeRF starts from a pixel, picks sample points, and integrates via volume rendering to obtain the pixel value; guaranteeing rendering quality, continuity of the implicit geometry, and detail fidelity usually requires a large number of samples
- 3DGS instead starts from the Gaussians and projects them onto the 2D plane. This is arguably one of its key innovations: the axial integral of a 3D Gaussian is again a 2D Gaussian, which removes the sampling burden at the mathematical level; the computation is determined by the number of Gaussians, which in turn can be rendered quickly in parallel with the rasterization pipeline
- The technical details, one by one:
- Frustum culling
- Given a specified camera pose, some Gaussians are clearly invisible and can naturally be culled from subsequent computation
- Projection
- We want a 3D Gaussian to remain a 2D Gaussian after the transformation (otherwise the rasterized result would have nothing to do with Gaussians and the effort would be wasted), which requires affinity
- Among the classic MVP transforms in graphics, the View transform and the perspective Project transform are involved (orthographic projection exhibits no foreshortening, so it is skipped)
- The view transform $W$ involves rotation and translation, both affine, so it poses no problem; the project transform, however, is not affine, which means no single linear matrix can transform the coordinates of all points (each point's coordinate in ray space is a nonlinear function of its camera-space coordinate, so no universal transformation matrix exists)
- The paper therefore uses the Jacobian $J$ as an affine approximation of the projection: $Si'=J W Si W^T J^T$
- Then we simply drop the third row and third column to obtain the 2D Gaussian's covariance matrix, at no extra cost. The 2D center is obtained by dividing by the third coordinate $u_2$, i.e. the center is $(u_0/u_2, u_1/u_2)$
- In addition, the opacity of each Gaussian must be adjusted
$ al'_i = al_i times exp(- 1/2 (bx'-mu'_i)^T Si'_i^(-1) (bx' - mu'_i)) $
- That is, the 3D Gaussian's opacity is multiplied by the probability density, where $bx'$ and $mu'_i$ are coordinates in projection space. This also matches intuition: where the ellipsoid is thicker, it should be more opaque
- Through projection, the 3D Gaussians in world coordinates are mapped into projection space ($[-1,1]^3$), yielding $mu', Si', al'$
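As a sanity check of the projection step above, here is a small NumPy sketch (my own illustration, not the paper's CUDA code; the pinhole Jacobian and all numeric values are assumptions) that computes $Si' = J W Si W^T J^T$, keeping only two rows of $J$ so the third row/column is dropped automatically:

```python
import numpy as np

def project_covariance(Sigma, W, t, fx=1.0, fy=1.0):
    """2D covariance of a projected 3D Gaussian: Sigma' = J W Sigma W^T J^T.

    Sigma : (3,3) world-space covariance
    W     : (3,3) rotation part of the view transform
    t     : (3,) Gaussian center in camera space
    fx,fy : focal lengths of the assumed pinhole camera
    """
    tx, ty, tz = t
    # Jacobian of the perspective projection, evaluated at the center;
    # keeping only two rows already discards the third row/column
    J = np.array([
        [fx / tz, 0.0,     -fx * tx / tz**2],
        [0.0,     fy / tz, -fy * ty / tz**2],
    ])
    JW = J @ W
    return JW @ Sigma @ JW.T

# toy check: isotropic Gaussian two units in front of the camera
Sigma2d = project_covariance(np.eye(3) * 0.01, np.eye(3), np.array([0.0, 0.0, 2.0]))
print(Sigma2d)  # diagonal 2x2 matrix with 0.0025 on the diagonal
```

The result stays a valid (symmetric) 2D covariance, which is the point of the affine approximation.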
- Rendering
- Ignoring the later optimizations, we render pixel by pixel. Given a pixel position $bx$, the viewport transform $W$ yields its distance to all overlapping Gaussians, i.e. their depths, forming a sorted list of Gaussians $cN$
- The pixel's final color is computed by $al$-composition, where $c_i$ is the color computed from the spherical harmonic coefficients and the viewing direction
$ C = sum_(i in cN) c_i al'_i Pi_(j=1)^(i-1) (1 - al'_j) $
#fig("/public/assets/Reading/Reconstruction/3DGS/2024-10-20-23-49-55.png")
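The front-to-back compositing formula above can be sketched per pixel as (a minimal sketch; the early-termination threshold is an assumed constant):

```python
def composite(gaussians):
    """Front-to-back alpha compositing, C = sum_i c_i a'_i prod_{j<i}(1 - a'_j).
    `gaussians` is the depth-sorted list of (color, alpha) pairs for one pixel."""
    C = 0.0
    T = 1.0                      # accumulated transmittance prod_j (1 - a'_j)
    for c, a in gaussians:
        C += c * a * T
        T *= (1.0 - a)
        if T < 1e-4:             # early termination once nearly opaque
            break
    return C
```

Keeping the running transmittance `T` is what turns the nested product in the formula into a single linear pass over the sorted list.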
- Tile optimization
- To reduce the cost of sorting per pixel when processing the image, the paper divides the image into tiles of $16 times 16$ pixels each
- It then identifies which tiles intersect a given projected Gaussian (covering over $99%$ of it). A Gaussian projection covering several tiles is duplicated, each copy assigned a unique identifier — the ID of the tile it intersects
- Sorting and rendering as above run independently within each tile. This process is well suited to parallel computing: in CUDA programming, tiles correspond to blocks and pixels to threads, and the pixels of a tile can access shared memory and share a common fetch sequence, improving the parallel efficiency of rendering
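The tile assignment can be sketched as follows (a hypothetical helper: the footprint is approximated by a square of half-size `radius`, whereas the real rasterizer uses the 99% extent of the 2D Gaussian):

```python
def tiles_for_gaussian(cx, cy, radius, img_w, img_h, tile=16):
    """Return IDs of the 16x16 tiles touched by a projected Gaussian
    centered at (cx, cy) with a square footprint of half-size `radius`."""
    nx = (img_w + tile - 1) // tile          # tiles per row
    x0 = max(0, int((cx - radius) // tile))
    x1 = min((img_w - 1) // tile, int((cx + radius) // tile))
    y0 = max(0, int((cy - radius) // tile))
    y1 = min((img_h - 1) // tile, int((cy + radius) // tile))
    return [ty * nx + tx for ty in range(y0, y1 + 1) for tx in range(x0, x1 + 1)]
```

Each returned ID corresponds to one duplicated (Gaussian, tile) entry in the per-tile sort.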
=== Parameter optimization and density control
- The rendering process above produces an image; comparing it with the ground-truth image gives a loss, and backpropagation then optimizes the parameters
- The loss is the common combination of L1 loss and D-SSIM loss, the latter measuring the similarity of two images (at the image level)
$ cL = (1-la)cL_1 + la cL_"D-SSIM" $
- 3DGS optimizes its parameters directly by backpropagation — which is why the whole pipeline can be viewed as a special neural network — but note two points
+ Optimizing the covariance matrix must preserve positive semi-definiteness, guaranteed as discussed via $Si = R S S^T R^T$, optimizing those 7 parameters
+ The computation graph yielding the opacity $al$ is quite involved, namely $bold(q), bs arrow.r.bar Si arrow.r.bar Si' arrow.r.bar al$, so instead of automatic differentiation the authors derive the gradient formulas by hand
- Density control
- Densification: adaptively increase the density of Gaussians to better capture scene detail, focusing on regions with missing geometric features or overly spread-out Gaussians — both exhibit large positional gradients in view space. It clones small Gaussians in under-reconstructed regions, creating a copy moved along the positional gradient; or splits large Gaussians in over-reconstructed regions, replacing one large Gaussian with two smaller ones whose scales are reduced by a fixed factor
- Pruning: remove redundant or low-impact Gaussians (in a sense a form of regularization) to save computation while preserving accuracy. Typically Gaussians with $al < ep$ or with excessive volume are removed. In addition, after a certain number of iterations the $al$ of Gaussians near the input cameras is reset to a value close to $0$ to avoid unreasonable density growth
- The overall optimization flow is shown below
#fig("/public/assets/Reading/Reconstruction/3DGS/2024-10-21-00-03-46.png")
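The loss combination can be sketched as (a sketch using a single-window, global SSIM for brevity; the paper uses the standard windowed D-SSIM, and $la = 0.2$ is the paper's setting):

```python
import numpy as np

def l1(a, b):
    return np.abs(a - b).mean()

def ssim_global(a, b, c1=0.01**2, c2=0.03**2):
    """Global (single-window) SSIM -- a simplification of windowed SSIM."""
    mu_a, mu_b = a.mean(), b.mean()
    va, vb = a.var(), b.var()
    cov = ((a - mu_a) * (b - mu_b)).mean()
    return ((2*mu_a*mu_b + c1) * (2*cov + c2)) / \
           ((mu_a**2 + mu_b**2 + c1) * (va + vb + c2))

def loss(render, gt, lam=0.2):
    """L = (1 - lam) * L1 + lam * D-SSIM, where D-SSIM = 1 - SSIM."""
    return (1 - lam) * l1(render, gt) + lam * (1 - ssim_global(render, gt))
```

For identical images the L1 term is zero and SSIM is one, so the loss vanishes, which is a handy sanity check on any implementation.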
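One density-control step can be sketched schematically as follows (illustrative thresholds; the real implementation also nudges cloned copies along the positional gradient and resamples split positions from the original's distribution, both omitted here):

```python
import numpy as np

def densify_and_prune(means, scales, opacities, grads,
                      grad_thresh=0.0002, scale_thresh=0.01, alpha_min=0.005):
    """Schematic density control: prune transparent Gaussians, clone small
    high-gradient ones, split large high-gradient ones (shrinking both the
    original and the copy). Arrays are (N,3), (N,3), (N,), (N,)."""
    keep = opacities > alpha_min                       # prune by opacity
    means, scales, opacities, grads = (a[keep] for a in (means, scales, opacities, grads))
    hi = grads > grad_thresh                           # large view-space gradient
    small = scales.max(axis=1) <= scale_thresh
    clone = hi & small                                 # under-reconstructed: copy
    split = hi & ~small                                # over-reconstructed: shrink
    n = len(means)
    new_means = np.concatenate([means, means[clone], means[split]])
    new_scales = np.concatenate([scales, scales[clone], scales[split] / 1.6])
    new_scales[:n][split] /= 1.6                       # split originals shrink too
    new_opacities = np.concatenate([opacities, opacities[clone], opacities[split]])
    return new_means, new_scales, new_opacities
```

The two branches match the text: cloning adds capacity where geometry is missing, splitting redistributes capacity where one Gaussian is too coarse.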
== Assessment
- Strengths
+ High-quality, photorealistic scenes
+ Fast, real-time rendering and extremely fast training
- Weaknesses
+ Artifacts similar to earlier methods (one cause is the overly simple culling at render time, another the overly simple density-sorted visibility algorithm)
+ High GPU memory and disk usage
== Current development directions of 3DGS
- *Data-efficient 3DGS*: artifacts may appear where data is insufficient. There are two main strategies:
+ *Regularization-based methods* introduce extra constraints, such as depth information, to enhance detail and global consistency. For example
+ DNGaussian introduces a depth regularization method to tackle the challenge of geometric degradation under sparse input views
+ FSGS designs a Gaussian Unpooling procedure for initialization, and also introduces depth regularization
+ MVSPlat proposes a cost volume representation to provide geometric cues
- However, regularization techniques tend to lose effectiveness when views are very few or even singular, which leads to
+ *Generalizability-based methods*, which focus especially on learning prior knowledge. A typical realization uses a deep neural network to generate the 3D Gaussians, which are then rendered directly without optimization. This paradigm usually needs multiple views for training, but can reconstruct a 3D scene from a single image. For example
+ PixelSplat proposes sampling Gaussians from a dense probability distribution, combining a multi-view epipolar transformer with the reparameterization trick to avoid getting stuck in local optima and to maintain gradient flow
+ Splatter Image performs learned monocular 3D object reconstruction, using a 2D image-to-image network to map the input image to per-pixel 3D Gaussians. This paradigm focuses mainly on object reconstruction, and its generalization has room to improve
- *Memory-efficient 3DGS*: NeRF only needs to store the learned MLP parameters, whereas 3DGS must store the entire explicit representation, which is especially severe for large-scale scenes. There are two main strategies:
+ *Reducing the number of 3D Gaussians*, i.e. pruning. For example
+ #link("https://dl.acm.org/doi/abs/10.1145/3651282")[Papantonakis] et al. propose a resolution-aware pruning method that halves the number of Gaussians
+ #link("https://openaccess.thecvf.com/content/CVPR2024/html/Lee_Compact_3D_Gaussian_Representation_for_Radiance_Field_CVPR_2024_paper.html")[Lee] et al. introduce a learnable volume-based masking strategy that effectively reduces the Gaussian count without hurting performance
+ *Compressing the memory footprint of the Gaussian attributes*. For example
+ #link("https://openaccess.thecvf.com/content/CVPR2024/html/Niedermayr_Compressed_3D_Gaussian_Splatting_for_Accelerated_Novel_View_Synthesis_CVPR_2024_paper.html")[Niedermayr] compresses colors and Gaussian parameters into compact codebooks, using sensitivity-aware vector clustering for quantization-aware training and fine-tuning
+ HAC predicts the probability of each quantized attribute with a Gaussian distribution and designs an adaptive quantization module
- Although current methods achieve storage compression ratios of several tens of times (after training), there is still considerable room for research on reducing GPU memory during the training stage
- *Photorealistic 3DGS*: the current 3DGS rendering pipeline is fairly simple and has several drawbacks. For example, the simple visibility algorithm may cause drastic switches in the depth/blending order of Gaussians. The realism of rendered images still needs further optimization, including aliasing, reflections, and artifacts. Several key points:
+ *Varying resolutions*. Because of the discrete sampling paradigm (each pixel is treated as a single point rather than an area), 3DGS is prone to aliasing when handling images at different resolutions, for instance low-resolution images (where the sampling frequency cannot keep up with the high-frequency detail of the scene representation). To this end
+ #link("https://arxiv.org/abs/2311.17089")[Yan] et al. introduce multi-scale 3DGS: the scene is represented with Gaussians of different sizes, paired with a multi-scale splatting algorithm — high-resolution images are rendered with more small Gaussians, and low-resolution images with fewer large Gaussians
+ Analytic-Splatting adopts an analytic approximation of the Gaussian integral over the pixel area, using a conditioned logistic function as an analytic approximation of the CDF of a 1D Gaussian signal to better capture each pixel's intensity response
+ *Reflections*. Photorealistic rendering of reflective materials is a hard and long-standing problem in 3D scene reconstruction
+ GaussianShader enhances neural rendering of scenes with reflective surfaces by integrating a simplified shading function with the 3D Gaussians
+ *Geometry*. One limitation of 3DGS is that it ignores the underlying geometry and structure of the scene (especially in complex scenes under varying views and lighting conditions), which has sparked research on geometry-aware reconstruction. For example
+ GeoGaussian focuses on preserving the geometry of non-textured regions such as walls and furniture (regions that tend to degrade over time)
- *Improved Optimization Algorithms*: the importance of the optimization algorithm to 3DGS needs no elaboration. One issue is that anisotropic Gaussians, while good for representing complex geometry, can produce visual artifacts (e.g. large 3D Gaussians, especially in strongly view-dependent regions, can make visual elements suddenly appear or disappear and break immersion). Three main improvement directions:
+ *Introducing extra regularization*. 3DGS often faces the challenge of over-reconstruction: sparse, large 3D Gaussians represent high-variance regions poorly, leading to blur and artifacts; combining extra regularization with an improved optimization procedure may accelerate convergence, smooth noise, and raise rendering quality. For example
+ Frequency-based regularization. FreGS introduces a progressive frequency regularization method that performs coarse-to-fine Gaussian densification using the low-to-high-frequency components extracted by low-pass and high-pass filters in Fourier space
+ Regularization based on geometry-aware reconstruction. For example, Scaffold-GS introduces a voxel grid (each cell's center is an anchor point); the anchor point's feature vector is decoded into the attributes of local 3D Gaussians (offset, opacity, color, scale, quaternion), and these Gaussians adjust their attributes dynamically according to the viewer's perspective and distance
+ *Improving the optimization procedure*. 3DGS uses SfM to generate its initial Gaussians, which inevitably faces the challenge of dense initialization on textureless surfaces (especially in large-scale scenes), and the split-and-clone strategy it uses is relatively simple
+ GaussianPro exploits priors from the scene's existing reconstructed geometry together with patch-matching techniques to generate new Gaussians with accurate positions and orientations, applying a progressive propagation strategy to guide 3D Gaussian densification
+ *Relaxed constraints in optimization*. Again the SfM issue: relying on external tools/algorithms can introduce errors and limit the system's potential
+ #link("https://arxiv.org/abs/2312.07504")[Yang] et al. propose COLMAP-Free 3D GS, which processes the input continuous video frame by frame and progressively grows the set of 3D Gaussians, thereby removing the dependency on COLMAP
- Impressive as they are, existing methods mainly focus on optimizing Gaussians to reconstruct scenes accurately from scratch, overlooking a challenging but promising paradigm — reconstructing scenes in a few-shot manner through an established "meta-representation"
- *3D Gaussian with More Properties*: the attributes of a 3D Gaussian are all designed for novel view synthesis, but combining linguistic, semantic/instance, or spatial/temporal attributes could fit 3DGS to more tasks, for example
+ *Language-embedded scene representation*, linking natural language to 3D scenes so users can interact with and query the 3D world through language. For example
+ Since current language-embedded scene representations have high computational and memory demands, #link("https://arxiv.org/abs/2311.18482")[Shi] et al. propose a quantization scheme that augments 3D Gaussians with simplified language embeddings (instead of the raw high-dimensional ones); it also mitigates semantic ambiguity, improving the precision of open-vocabulary queries by smoothing semantic features across views under the guidance of uncertainty values
+ LangSplat uses SAM to generate Hierarchical Semantics from the input multi-view image set, feeds the segmentation masks into CLIP to align image and text features and output language embeddings, compresses them into a low-dimensional space (reducing memory cost), and finally trains the 3D language Gaussians with repeated supervised rendering iterations on the low-dimensional embeddings, yielding a Gaussian scene representation that carries semantic information
+ *Scene understanding and editing*
+ Feature-3DGS combines 3DGS with feature fields distilled from 2D foundation models (e.g. SAM, CLIP-LSeg). By learning a low-dimensional feature field and applying a lightweight convolutional decoder for upsampling, Feature-3DGS achieves high-quality feature-field distillation with faster training and rendering, supporting applications such as view-consistent semantic segmentation and language-guided editing
+ *Spatio-temporal modeling*, for example
+ #link("https://arxiv.org/abs/2310.10642")[Yang] et al. conceptualize space-time as a unified entity (i.e. using 4DGS as a holistic representation of dynamic scenes rather than applying 3DGS to each individual frame) and approximate a dynamic scene's spatio-temporal volume with a set of 4D Gaussians. The proposed 4D Gaussian representation and its rendering pipeline can model arbitrary rotations in space and time and allow end-to-end training
- *3DGS with Structured Information*: beyond extra attributes on the Gaussians, another promising route to downstream tasks is introducing structured information tailored to specific applications (e.g., spatial MLPs and grids). Examples of 3DGS uses enabled by particular structured information:
+ *Facial expression modeling*. Given the challenge of creating high-fidelity 3D head avatars under sparse views, Gaussian Head Avatar introduces controllable 3D Gaussians and an MLP-based deformation field. Concretely, it uses a geometry-guided initialization strategy based on an implicit SDF and Deep Marching Tetrahedra, then deforms the neutral Gaussians with a dynamic generator $Phi$ to produce the target expression
+ *Spatio-temporal modeling*. Deformable-3DGS proposes reconstructing dynamic scenes with deformable 3D Gaussians learned in a canonical space, aided by a deformation field (i.e., a spatial MLP) that models spatio-temporal dynamics. The method also adds an annealed smooth training mechanism that improves temporal smoothness at no extra computational cost
+ *Style transfer*. To maintain a cohesive stylized appearance across views without compromising rendering speed, GS-in-style uses pretrained 3D Gaussians together with a multi-resolution hash grid and a small MLP to generate stylized views in real time
- In short, for tasks incompatible with the sparsity and disorder of 3D Gaussians, integrating structured information can serve as their complement
== Application domains and tasks of 3DGS
- Simultaneous Localization and Mapping (SLAM)
- Dynamic Scene Reconstruction
- AI-Generated Content (AIGC)
- Autonomous Driving
- Endoscopic Scene Reconstruction
- Large-scale Scene Reconstruction
== Future research directions of 3DGS
- Although the follow-up work on 3DGS is impressive and related fields have been or may be revolutionized by it, there is broad agreement that 3DGS still has substantial room for improvement
- *Physics- and Semantics-aware Scene Representation*: as a brand-new explicit scene representation, the potential of 3D Gaussians goes well beyond novel view synthesis. A 3DGS system aware of physics and semantics could advance scene reconstruction and understanding simultaneously, transforming a range of fields and downstream applications. For example
+ Incorporating prior knowledge (e.g., the typical shapes of objects) could reduce the need for many training views while improving geometry/surface reconstruction
+ Further improving the realism of generated scenes, including challenges in geometric, texture, and lighting fidelity
+ Enhancing realism and interactivity, such as dynamic modeling, scene editing, and generation
- In short, pursuing such an advanced, versatile scene representation opens new possibilities for creation and application across fields
- *Learning Physical Priors from Large-scale Data*: connecting with large models, using large-scale datasets to learn physical priors
- The goal is to model the inherent physical properties and dynamics in real-world data, turning them into actionable insights subsequently applied in fields such as robotics and visual effects
- If a learning framework for extracting these physical priors can be established, pretrained models could be applied to downstream tasks, adapting quickly to new objects and environments with little data
- Moreover, learning physical priors matters not only for the accuracy and realism of scenes but also for interactivity and dynamics, which is especially valuable in AR/VR environments
- However, existing work extracting physics-based priors from broad 2D and 3D datasets remains scarce (notable results include Spring-Mass 3D Gaussians for elastic object modeling, and MVSGaussian, which incorporates Multi-View Stereo). Further exploration of real2sim and sim2real may drive progress in this area
- *3DGS for Robotics*, i.e. 3DGS + embodied AI
- To realize humanoid intelligent robots, there is a growing need for them to navigate and manipulate environments in more intuitive and dynamic ways
- Current embodied-AI implementations rely heavily on understanding the environment through semantic information (recognizing objects and their attributes), an approach that often overlooks how things move and interact over time
- As far as I know, most current work uses VLMs (Visual Language Models) and VLAs (Visual Language Action Models) rather than understanding the whole scene. Moreover, current datasets focus on high-level reasoning but lack low-level control. In the survey's example: the robot knows what this is (a block) and what to do (stack the blocks), but not how exactly to do it (how far to move the arm) or how things will change (how the blocks are affected by actions and evolve over time) — there is a gap in between
- Here 3DGS, by virtue of its explicit representation, offers — beyond analysis of the environment's semantics and structure — a dynamic, comprehensive understanding of how the scene evolves over time and how objects interact
- Although some GS-based world models already exist (#link("https://arxiv.org/abs/2406.10788v1")[Physically Embodied Gaussian Splatting], #link("https://arxiv.org/abs/2403.08321v2")[ManiGaussian]), as well as GS-based reinforcement learning (#link("https://arxiv.org/abs/2406.02370v2")[query-based Semantic Gaussian Field ...], #link("https://arxiv.org/abs/2404.07950v3")[RL with Generalizable GS]), they only demonstrate feasibility; further research in this area will strengthen robots' ability to perform tasks that require understanding physical space and its temporal changes
- *Modeling Internal Structures of Objects with 3DGS*
- Although 3DGS renders photorealistic 2D images, the splatting renderer focuses precisely on that goal, so objects' internal structure receives little attention; furthermore, due to the density-control process, the 3D Gaussians tend to concentrate on surfaces rather than interiors
- Therefore, modeling objects' internal structure — depicting objects as volumes (e.g., CT: computed tomography) — remains a challenge, and the disorder of 3DGS makes volumetric modeling particularly difficult
- #link("https://arxiv.org/abs/2312.15676")[Li] et al. use density-controlled 3D Gaussians as the basis of a volumetric representation, with no splatting involved. X-Gaussian involves a splatting process for fast training and inference, but cannot produce a volumetric representation. Using 3DGS to model objects' internal structure still has no standard answer and merits further exploration
- *3DGS for Simulation in Autonomous Driving*
- Collecting real-world datasets for autonomous driving is expensive and difficult yet vital for training effective image-perception systems, making simulation a cost-effective and environmentally diverse alternative
- However, developing a simulator that can generate realistic and diverse synthetic data is not easy: it involves achieving a high level of realism, accommodating various control methods, and precisely simulating a range of lighting conditions
- Although early efforts to reconstruct urban/street scenes with 3DGS are encouraging, they are only the tip of the iceberg. Many key aspects remain to be explored, for example
+ Integrating user-defined object models
+ Modeling physics-aware scene changes (e.g., the rotation of wheels)
+ Enhancing controllability and realism (e.g., varied lighting conditions)
- *Empowering 3DGS with More Possibilities*: despite its great potential, 3DGS remains largely untapped
- One promising avenue of exploration is augmenting 3D Gaussians with extra attributes (such as the language and spatio-temporal attributes mentioned earlier), or introducing structured information tailored to specific applications (such as the previously mentioned spatial MLPs and grids)
- Additionally, recent research has begun to reveal 3DGS's capabilities in several areas, such as point cloud registration, image representation and compression, and fluid synthesis. These findings offer significant opportunities for interdisciplinary scholars to explore 3DGS further
== Ten questions on the paper
+ What problem does the paper try to solve?
- The same as NeRF: representing a 3D scene from a sparse set of views and synthesizing novel views, but with more emphasis on real-time performance
+ Is this a new problem?
- No. There are many traditional methods, both explicit and implicit; recently the field had largely shifted to NeRF-based methods and their variants, and since 3DGS was proposed it has essentially dominated
- In fact, the gradual shift back from NeRF toward explicit representations such as PlenOctrees, Plenoxels, and InstantNGP foreshadowed this, reflecting a trend: the pursuit of real-time performance
- Even representing the scene with ellipsoids and rendering via splatting existed long ago (EWA splatting, a 2001 paper), so 3DGS can be seen as a revival of earlier ideas
+ What scientific hypothesis does the paper test?
- That 3D Gaussians can serve as a high-quality, unstructured radiance-field representation, that suitable optimization and rendering methods can represent scenes accurately, and that this can break NeRF's speed limits
+ What related work exists? How is it categorized? Which researchers in this area are worth following?
- Traditional explicit representation methods, such as voxel grids, point clouds, and triangle meshes
- Traditional implicit representation methods, such as SDF and Occupancy, plus recent implicit methods — NeRF and its variants, such as Plenoxels and InstantNGP
+ What is the key to the solution proposed in the paper?
- The 3D Gaussian representation, splatting-based rendering, and adaptive parameter optimization with density control
+ How are the experiments designed?
- Original paper: experiments on 13 real scenes taken from previously published datasets and 1 synthetic Blender dataset, comparing chiefly against Mip-NeRF 360 (the SOTA in rendering quality) and Plenoxels and InstantNGP (two fast NeRF methods)
- Survey: explains and assesses the 3DGS method in detail, and compares 3DGS against other methods across many tasks
+ Localization: have a clear advantage over
+ Static Scenes: generally outperform
+ Dynamic Scenes: outperform existing SOTAs by a clear margin
+ Driving Scenes: significantly surpass
+ Human Avatar: consistent performance improvements
+ Surgical Scenes: several significant improvements
+ What datasets are used for quantitative evaluation? Is the code open source?
- Original paper: evaluation carried out on 3 real datasets — Mip-NeRF360 (7 scenes), Tanks&Temples (2 scenes), Deep Blending (2 scenes) — and 1 synthetic dataset, Blender. Code open-sourced at #link("https://repo-sam.inria.fr/fungraph/3d-gaussian-splatting/")[repo-sam.inria.fr/fungraph/3d-gaussian-splatting]
- Survey: multiple datasets across many tasks (Replica, D-NeRF, nuScenes, ZJU-MoCap); code open-sourced at #link("")
+ Do the experiments and results support the scientific hypothesis well?
- Performance proved, ablation convincing
+ What exactly does the paper contribute?
- In terms of results: the first to achieve real-time rendering speed (>=30fps) while maintaining SOTA novel view synthesis quality
- In terms of method: demonstrated the viability of the 3D Gaussian representation and splatting rendering, and proposed an effective optimization scheme with a high-quality engineering implementation
+ What next? What work is worth pursuing further?
- Proposed in the original paper
+ Speed optimization is not exhausted; further gains are possible
+ Storage-efficient 3DGS solutions
+ Further improvements to rendering quality (fixing the artifacts), including optimization and regularization, better visibility algorithms, etc.
+ Exploring applications in more directions
- Proposed in the survey: Data-efficient 3DGS, Memory-efficient 3DGS, Photorealistic 3DGS, Improved Optimization Algorithms, 3DGS with More Properties, 3DGS with Structured Information
#import themes.radial: radial-theme, components, colors
#import colors: *
#show: notebook.with(theme: radial-theme, team-name: "5839B", season: "Water Game", year: "2024-2025")
#create-frontmatter-entry(
title: "Table of Contents",
date: datetime(year: 2024, month: 1, day: 9),
)[
#components.toc()
]
#create-body-entry(
title: "Introduction",
type: "test",
date: datetime(year: 2024, month: 3, day: 9),
author: "<NAME>",
witness: "<NAME>"
)[
]
#create-body-entry(
title: "First Steps",
type: "decide",
date: datetime(year: 2024, month: 3, day: 9),
author: "<NAME>",
witness: "<NAME>"
)[
= Post Season Build Analysis
Even though our previous robot unfortunately did not make Worlds, there were many aspects of its build, executed both properly and poorly, worth identifying. This is important as it lets the team know what works and should be kept, as well as what to change in order to improve for the next season.

*Drive Train* The drive train is the base of any robot. The previous season's drive train used 6 11w motors with the blue cartridge and a 48:60 gear ratio, giving an RPM of 480. This spun three 4" omni wheels per side, with the middle one on each side being locked.
#components.pro-con(
pros: [
- Speed
- Pushing power
- Strong Wedge
],
cons: [
- Turning
- Reliability
- Over Heating
- Field Traversal
]
)
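The drive gearing above can be sanity-checked with a quick sketch (motor free speed and tooth counts come from the description; the helper name is ours):

```python
def output_rpm(motor_rpm: float, driving_teeth: int, driven_teeth: int) -> float:
    """Output speed of an externally geared drive: motor speed scaled by
    the driving-to-driven tooth ratio."""
    return motor_rpm * driving_teeth / driven_teeth

# 600 RPM blue cartridge with a 48-tooth gear driving a 60-tooth gear
print(output_rpm(600, 48, 60))  # 480.0
```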
*Intake* The intake was powered by 1 5.5w motor spinning a series of 2" 45A flex wheels to interact with the game objects. The intake was also allowed to float so that it could rise over the goal to score the triball.
#components.pro-con(
pros: [
- Holding ability
- Scoring
- Reliability
- Effectiveness
],
cons: [
- Chain Broke once
]
)
*Wings* These were pneumatically activated flaps that would extend 9in on either side of the robot. These allowed for a large amount of game objects to be pushed into the goal at one time.
#components.pro-con(
pros: [
- Never failed
- Reached corners
- Simple Design
],
cons: [
- Bent after multiple matches
]
)
*Flywheel Arm* This was a 4" flex wheel with a ratchet spinning at 3600 RPM off of a blue motor. Game objects were placed on and launched off of the flywheel. It could also be raised by a 5.5w motor assisted by rubber bands to shoot over other robots.
#components.pro-con(
pros: [
- Consistent firing
- Fast firing
- Height
- Ratchet preserved motor health
],
cons: [
- Unable to use the arm for climbing
- Flywheel got jammed on a standoff in 2 matches
]
)
*Odometry Modules* These are 3 modules, 2 vertical and 1 horizontal, used to track the robot's position. They are jointed to always stay in contact with the ground, and each has a 3.25" omni wheel spinning an encoder to track movements.
#components.pro-con(
pros: [
- Simple Design
],
cons: [
- Bent over time
- Large
- Unreliable ground contact
]
)
= What to do now?
- With the biggest problem being the drive, a variety of drives should be modeled and tested in order to have a better idea of what works for the next season
- Work to create new odometry module designs that are stronger and more compact
- Take inventory of the parts available to our team so that when designing we know what parts we can use and how many of them are available
- Put together an order list of parts the team wants, to assess the needed funds
- See what funding is available to the team and what amount should be allocated to new parts
- Look into making a functional PTO (power take-off), as they can allow for more powerful drives while still having all the desired mechanisms to manipulate game objects
- Look for or model our own paddles for the controller that suit the needs of our driver
]
#create-body-entry(
title: "3d design software",
type: "decide",
date: datetime(year: 2024, month: 3, day: 10),
author: "<NAME>",
witness: "Praful Adiga"
)[
Before any models for possible drive trains or new odometry sensors can be made, a decision on what software to use is necessary. In the previous season we adopted Autodesk Inventor. There exist 3 main options for VEX other than Inventor: SolidWorks, Fusion 360, and Onshape. Each has a variety of favorable aspects to be considered when choosing.

The best way to compare these is through a decision matrix of various aspects of each:
#components.decision-matrix(
properties: (
(name: "Familiarity", weight: 30),
(name: "Accessibility", weight: 20),
(name: "Available Part Libraries", weight: 5),
(name: "Available Help Resources", weight: 5),
(name: "Vex Compati- bility", weight: 2),
(name: "Ease of Use", weight: 2),
),
("Inventor", 5, 5, 4, 5, 4, 3),
("Fusion 360", 3, 3, 5, 3, 5, 4),
("Solid Works", 2, 4, 3, 2, 3, 3),
("Onshape", 1, 5, 4, 3, 4, 4),
)
#components.admonition(type: "note")[
Inventor scored far higher in some categories compared to other teams' options due to Eastern Tech's engineering program. This program, which the majority of the team is in, teaches Inventor and provides us with a browser version of it. This negates Onshape's main advantage and gives us 3 teachers who can help fix any problems we run into.
]
#components.admonition(type: "decision")[
Due to a variety of reasons, the main ones being a lack of time to learn a new software and familiarity with Inventor, it will continue to be our primary design software.
]
]
#create-body-entry(
title: "File Structure/Model Management",
type: "management",
date: datetime(year: 2024, month: 3, day: 10),
author: "<NAME>",
witness: "<NAME>"
)[
To effectively use any software, whether for coding or modeling, file organization is key. To share these files the team makes use of Microsoft OneDrive, since it works best with Windows devices, which the majority of the team has.

For Inventor, the main robot assembly is made from a variety of sub-assemblies. These sub-assemblies are each for a key system such as the drive train or intake. There can exist further sub-assemblies within these as well, such as that for the odometry sensors within the assembly they are attached to. To organize the many files for this, a variety of naming conventions are used based off of the guidelines from Purdue SIGBots.
- Robots are marked with a folder labeled "! Robot Name"
- Within in that folder is the main assembly of the same name
- Assemblies or cut parts used throughout the model are stored in this high level folder
- There are also folders with each of the sub-assemblies labeled "Assembly Name"
- Within the sub-assembly folders are all the special cut parts and sub-assemblies of that Assembly.
- All other parts are stored in a default parts folder with every vex part organized by type
This system comes with a variety of advantages for the team. Individual axles or mechanisms can be edited without opening the whole robot, which allows team members to interact with the models without a powerful computer. It also allows for big changes to be made more easily, as the major sub-assemblies can be removed or changed without having to edit a bunch of individual parts. This also helps during the build process, as we can divide the many sub-assemblies between any team members without other work to do. This has proven in the past to allow us to build full redesigns in just a week and a half, such as what we did after Dulaney last year.
#figure(
rect(fill: black.lighten(10%))[
#image("img\Over Under File Structure.png", width: 80%)
],
caption: [ Inventor Files from the Over Under Season
]
)
]
#create-body-entry(
title: "Taking Inventory",
type: "management",
date: datetime(year: 2024, month: 3, day: 10),
author: "<NAME>",
witness: "<NAME>"
)[
Before any designing can take place it is key to know the constraints one is placed under. For VEX, a key way of doing this, other than reading the rules, is to see what parts are available to your team. You may have the best idea for a design, but without the parts to build it that idea is just wasted time. In order to see what parts we had, an Excel spreadsheet was created with all VEX parts that were in our Inventor parts library as well as newer ones found on the VEX website. Additionally, tools and other accessories from the Robosource website were included that we deemed may prove useful.
#figure(
rect(fill: black.lighten(10%))[
#image("img\pre-inventory top half.png", width: 80%)
],
caption: [ Top half of the Inventory Spread Sheet
]
) <odomDiagram>
#figure(
rect(fill: black.lighten(10%))[
#image("img\pre-inventory bot half.png", width: 80%)
],
caption: [ Bottom half of the Inventory Spread Sheet
]
) <odomDiagram>
The spreadsheet will take a while to fill out, but for now parts with known quantities, such as zero, have been filled out. Those were marked with yellow to indicate more were needed. These were then taken to a second spreadsheet. This lays out all the parts and tools we want, their price, quantity, link to purchase, and priority. The priority is key as it allows us to make decisions on what to get within our budget.
#figure(
rect(fill: black.lighten(10%))[
#image("img\order list frame work.png", width: 80%)
],
caption: [ Order Spread Sheet with a calculation test
]
) <odomDiagram>
#components.admonition(type: "note")[
Currently the budget is at zero, as dues are yet to be collected and no fundraisers have been planned. It is also important to consider that a small percentage of the school's 15,000 dollar engineering budget is randomly allocated to us, so we will have that to work with.
]
]
#create-body-entry(
title: "Inventory Results",
type: "management",
date: datetime(year: 2024, month: 3, day: 10),
author: "<NAME>",
witness: "<NAME>"
)[
After a week of work we were able to complete all rows of the spreadsheet and figure out what parts the team was in need of. A variety of methods were used to measure the various parts. Large parts like wheels and motors were counted, but other parts required a different approach. String, wires, and tubing were measured in feet, and metal structure by its weight. For parts like screws and nuts, a single unit was weighed, as well as the container and the total amount we had. The weight of the container was then subtracted from the total, which was then divided by the unit weight to find the total quantity.

Overall we found that we had a good quantity of most of the parts we had; however, many parts were absent from our collection. Once the next game releases, new parts can be ordered based on that game.
#components.admonition(type: "note")[
We found our time management to be extremely poor during this endeavor, which greatly increased its length. To address this we may work to change how the team meets to allow for not only more time, but better uses of that time.
]
]
#create-body-entry(
title: "Drive Train Types",
type: "brainstorm",
date: datetime(year: 2024, month: 3, day: 17),
author: "<NAME>",
witness: "<NAME>"
)[
= New Drive Models
There exists a variety of drive models, both practical and impractical, that can be made with VRC-legal parts. It is important to judge where each one can shine to see which is the most practical when the next game releases. A decision cannot be made yet on which drive is best, but the strengths and weaknesses of each one can be assessed, and models for the more practical ones generated. These models can give us a head start on next season's bot if they prove adequate for the next game, as well as allow the team to test various ideas.
= Tank Drive
A large variety of what can be considered tank drives or differential drives exist within VEX. These work by having two sides where each side's wheels all spin together. This allows for linear motion (both sides spin the same direction), turning (both sides spin opposite), and arcing (one side spins slower than the other in the same direction). These drives are often the simplest and provide a wide range of motion while remaining able to push back against other robots. These drives can also be achieved in a variety of ways, with varying numbers and sizes of wheels that augment their performance.
4in wheels provide greater speed, as per each rotation the robot moves farther; however, they give the robot less torque. Additionally, since fewer of them can fit onto a drive within the 18\*18\*18 size limit, there are also fewer points of contact. With the older 4in wheels the team currently owns, the traction versions are also .125in smaller than the omni versions.
3.25in wheels provide slower speeds, but are able to give the robot more pushing power as they have more torque and points of contact. These wheels are also easier to work with, as the traction and omni versions are the same size, unlike with the older 4in wheels.
Omni wheels have rollers that allow the wheel to move side to side as well as forward and back. This makes them great for turning, but poor for traction.
Traction wheels are all rubber and provide exceptional ground adherence for any robot; however, they greatly limit turning, making them impractical unless used as the middle wheel, where their effects on turning are mitigated.
From our team's experience, a 3.25in drive with 2 traction wheels in the middle and 2 omni wheels on either end appears to be the optimal way to execute this drive. Our previous drive with 3 4in omni wheels per side failed to push back against other robots that were using 3.25in tank drives with the same number of motors. These robots were also just as fast and maneuverable as ours, showing little trade-off for this design.
#components.pro-con(
pros: [
- Simplicity
- Versatility
- Easier to Control
],
cons: [
- Limited Mobility
- Wheel incompatibilities
]
)
= H/X Drives
These use either 4 or 5 omni wheels to achieve a robot that has the same range of motion as a tank drive, but with the addition of diagonal and horizontal movements. They use, in the case of an X drive, 4 individually powered omni wheels in each corner at 90 degrees from one another, or, for the H drive, 4 individually powered omni wheels in a traditional tank drive setup with one horizontal omni wheel. These drives can, however, prove difficult to control, and in the case of the H drive impractical, as the horizontal wheel rarely makes contact. They are also very easy to push around since all the wheels are omni. X drives can prove highly practical given the right game and design, but in games such as Over Under the middle bar limits their use.
#components.pro-con(
pros: [
- Maneuverability
- Complex Autonomous
- Strafing
],
cons: [
- Mechanical Complexity
- Motor Usage
- Practicality
- Low Traction/Easy to push
]
)
= Mecanum Drives
Mecanum drives are likely the most special, as they use specialized mecanum wheels. These, like omni wheels, have rollers attached, but at an angle, to provide uniquely augmented movement. When set up correctly, 4 individually powered mecanum wheels can provide the same movement as an X drive. However, since going in any direction directly turns the mecanum wheels, they are harder to push, as the motors resist the pushing directly. This, along with other issues, can also lead to faster overheating with mecanum drives. The VEX EDR 4in mecanum wheels are very bulky, putting more strain on the motor; additionally, the VEX mecanum wheels, unlike most designs, have limited contact with the ground due to the irregular design of their rollers. It is also important to note that it is easier to gear and build a frame for a mecanum drive over an X drive, as it does not require the 45 degree angles to achieve its unique motion. Though also possible with an X drive, an additional powered omni wheel could be put into the middle to provide more drive power. Since this wheel isn't needed at all times, if a successful PTO can be developed it could allow for a very versatile robot and drive.
#components.pro-con(
pros: [
- Maneuverability
- Complex Autonomous
- Strafing
],
cons: [
- Mechanical Complexity
- Motor Usage
- Practicality
- Requires balanced weight
]
)
= Swerve Drives
Previously considered impractical for VEX, swerve drives involve either 3 or 4 independently steered and powered wheels. They center on modules that can both rotate the orientation of and spin the wheels. This allows the robot to turn rapidly as well as turn while moving. The wheels can be positioned in the manner of a traditional tank drive for linear movement and then turned to go the desired direction. However, until the addition of the 5.5w motors, these would use either 6 or all 8 of the robot's available motors. The 5.5w motors now allow this drive to be possibly practical, as a 3 wheel swerve drive could be made from 3 11w motors and 3 5.5w motors, allowing 38.5w of motors to be allocated to the robot's mechanisms and manipulators. The advantages of swerve drives can be seen in other competitions like FRC, where they are often used to great success to create highly maneuverable bots. The use of one within VEX would be highly dependent on the game, as one with limited room to move, such as Over Under, takes away many of a swerve drive's advantages. It is worth creating a model for a swerve drive module in case the next game is one that prioritizes movement. It would also provide practice using more complex gearing, which the team has yet to experiment with.
#components.pro-con(
pros: [
- Maneuverability
- Complex Autonomous
],
cons: [
- Mechanical Complexity
- Motor Usage
- Practicality
]
)
]
#create-body-entry(
title: "New Odometry Sensors",
type: "build",
date: datetime(year: 2024, month: 3, day: 17),
author: "<NAME>",
witness: "<NAME>"
)[
Odometry is a position tracking algorithm used by the coder to implement complex autons. It relies on three sensors, 2 vertical and 1 horizontal. The failures of the previous design were compactness and resilience, and the new design makes a few improvements in this area. It is important to complete this first, as any prototype drives made must be designed to fit the sensors. This, along with a basic mecanum drive, which can act as a tank drive when need be, will allow the coder to begin making some basic frameworks for next year.
New Design:
- Vertical wheels save space by being in the same module
- No plate is used without being reinforced
- Pillow bearings used to simplify mounting
- Newer 3.25in wheels used for better traction
#figure(
rect(fill: black.lighten(10%))[
#image("img/Vertical Odom Isometric.png", width: 80%)
],
caption: [ Isometric View of the New Vertical Odometry Sensor
]
) <odomDiagram>
#figure(
rect(fill: black.lighten(10%))[
#image("img/Vertical Odom Front.png", width: 80%)
],
caption: [ Front View of the New Vertical Odometry Sensor
]
) <odomDiagram>
#figure(
rect(fill: black.lighten(10%))[
#image("img/Horizantal Odom Isometric.png", width: 80%)
],
caption: [ Isometric View of the New Horizontal Odometry Sensor
]
) <odomDiagram>
#figure(
rect(fill: black.lighten(10%))[
#image("img/Horizantal Odom Front.png", width: 80%)
],
caption: [ Front View of the New Horizontal Odometry Sensor
]
) <odomDiagram>
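The tracking math these sensors feed can be sketched as a textbook three-wheel odometry update (a sketch, not our actual code; it assumes heading 0 points forward and the given sign conventions):

```python
import math

def odom_update(pose, dL, dR, dS, track_width, back_offset):
    """One three-wheel odometry update. dL/dR: travel of the two vertical
    wheels, dS: travel of the horizontal wheel (same length units);
    track_width: distance between the vertical wheels; back_offset: the
    horizontal wheel's distance behind the tracking center."""
    x, y, theta = pose
    dtheta = (dL - dR) / track_width          # heading change from wheel difference
    fwd = (dL + dR) / 2                       # forward displacement
    strafe = dS - back_offset * dtheta        # correct horizontal wheel for rotation
    avg = theta + dtheta / 2                  # integrate at the mid-heading
    x += fwd * math.sin(avg) + strafe * math.cos(avg)
    y += fwd * math.cos(avg) - strafe * math.sin(avg)
    return (x, y, theta + dtheta)
```

The `back_offset` term is why the horizontal module's mounting distance matters: without it, every turn would be misread as sideways travel.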
]
#create-body-entry(
title: "Drive Train Prototypes",
type: "build",
date: datetime(year: 2024, month: 3, day: 17),
author: "<NAME>",
witness: "<NAME>"
)[
With inventory taken we can now begin to make some designs. To experiment with more complex drives, a mecanum drive and a swerve drive module were completed. These should provide good practice for designing before the next season, as well as possibly give us a head start if we choose to use these drives.
The first model made was that of a Mecanum Drive:
- 4 mecanum wheels geared to 300 RPM with a 72:48 ratio, each driven by an 11w motor with the 600rpm cartridge.
- 24in HS axles with holes drilled in them as the main frame, to ensure it was stronger than our previous drive trains.
- Center omni wheel for additional power
- PTO to allow the center omni wheel to power other systems while not in use
- Battery and Air Tanks kept low to ensure a proper center of gravity
- New Odometry Sensor fitted in the rear
#figure(
rect(fill: black.lighten(10%))[
#image("img/Mecanum Drive Proto Isometric.png", width: 80%)
],
caption: [ Isometric View of the Prototype Mecanum Drive
]
) <odomDiagram>
#figure(
rect(fill: black.lighten(10%))[
#image("img/Mecanum Drive Proto Top.png", width: 80%)
],
caption: [ Top View of the Prototype Mecanum Drive
]
) <odomDiagram>
#figure(
rect(fill: black.lighten(10%))[
#image("img/Mecanum Drive Proto Front.png", width: 80%)
],
caption: [ Front View of the Prototype Mecanum Drive
]
) <odomDiagram>
#figure(
rect(fill: black.lighten(10%))[
#image("img/Mecanum Drive Proto Side.png", width: 80%)
],
caption: [ Side View of the Prototype Mecanum Drive
]
) <odomDiagram>
Before we could enter school to test this, I created a model for a swerve drive module:
- Keeps design compact with motors below the frame
- 72 tooth gear is screwed to the frame so they spin together
- Circular insert within the gear to allow the drive shaft to turn
- Chain runs to connect the Drive Shaft to the 11w motor
- 5.5w motor used to turn module
#figure(
rect(fill: black.lighten(10%))[
#image("img/Swerve Drive Module Isometric.png", width: 50%)
],
caption: [ Isometric View of the Prototype Swerve Drive Module
]
) <odomDiagram>
#figure(
rect(fill: black.lighten(10%))[
#image("img/Swerve Drive Module Side.png", width: 50%)
],
caption: [ Side View of the Prototype Swerve Drive Module
]
) <swerveProtoSide>
#figure(
rect(fill: black.lighten(10%))[
#image("img/Swerve Drive Module Front.png", width: 50%)
],
caption: [ Front View of the Prototype Swerve Drive Module
]
) <swerveProtoFront>
#components.admonition(type: "note")[
It is unlikely any of these drives will be used, as tank drives have proven superior for many games in a row. They simply serve as a way to practice building and design techniques and mechanisms before the next season. However, cataloging them is still important, as the ideas learned from them could prove very useful.
]
]
#create-body-entry(
title: "Mecanum Drive Testing",
type: "test",
date: datetime(year: 2024, month: 3, day: 19),
author: "<NAME>",
witness: "<NAME>"
)[
While building the Mecanum drive, a large oversight was made. VEX wheels may be advertised in standard sizes like 4in, but that is seldom the actual measurement. Various forum posts and the Purdue SIGBots wiki placed both the older 4in omni wheels and the 4in mecanum wheels at a 4.125in diameter. However, once we built the drive, we found the omni wheel to be slightly bigger. Checking in Inventor confirmed this, as the mecanum wheels measured 0.0625in smaller than the omni wheels.
#figure(
rect(fill: black.lighten(10%))[
#image("img/Wheel Comparison.png", width: 80%)
],
caption: [ Comparison of Mecanum Wheels and both the new (right) and old (left) omni wheel diameters
]
)
This means that, until we can get the newer omni wheels, which should be compatible with the mecanum wheels, both the center omni wheel and the PTO attached to it will be left out of the build. The older wheels cannot be used, as they cause only one of the mecanum wheels to be in contact with the ground, which defeats the purpose of the drive.
Additionally, weight plates needed to be screwed into the front to ensure the drive was balanced, allowing it to properly strafe.
Overall, the drive was a success after a few minor tweaks and will serve as a good test base until the season starts.
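For when we write driver control for this base, standard open-loop mecanum mixing looks roughly like the following. This is a sketch rather than our actual robot code, and the wheel names are assumptions.

```python
def mecanum_mix(forward: float, strafe: float, turn: float) -> dict[str, float]:
    """Open-loop wheel powers for an X-configured mecanum drive.

    Inputs are joystick values in [-1, 1]; outputs are scaled so no
    wheel power exceeds full throttle.
    """
    raw = {
        "front_left":  forward + strafe + turn,
        "front_right": forward - strafe - turn,
        "back_left":   forward - strafe + turn,
        "back_right":  forward + strafe - turn,
    }
    # Normalize so the largest magnitude never exceeds 1.
    scale = max(1.0, max(abs(v) for v in raw.values()))
    return {k: v / scale for k, v in raw.items()}

# Pure strafe: diagonal wheel pairs spin in opposite directions.
print(mecanum_mix(0.0, 1.0, 0.0))
```

This also shows why the weight balance mattered: if one wheel loses traction, the force pattern above falls apart and the robot crabs instead of strafing cleanly.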
]
#create-body-entry(
title: "Rest of Pre-Season",
type: "test",
date: datetime(year: 2024, month: 3, day: 25),
author: "<NAME>",
witness: "<NAME>"
)[Since the most important job of inventorying was completed, most of the team agreed to slow down until the next game was revealed in order to study for upcoming MCAP/AP/final exams and other schoolwork.
However, when we had the time to stop in at robotics, we worked to take apart the previous game's field as well as fix up the field border. This was done since the field border had been heavily weakened by many rams during our autonomous practices in the previous season.
]
#create-body-entry(
title: "Fund Raising",
type: "test",
date: datetime(year: 2024, month: 4, day: 12),
author: "<NAME>",
witness: "<NAME>"
)[During a group meeting today we discussed a variety of methods of fundraising. All ideas were taken and put on one side of the board (even jokes), and the possibly viable ideas were written on the other. A fundraiser would serve as a way to allow us access to the field earlier in the season so that we can get more practice. During the previous year, the team worked off of a 3D printed tri-ball and a makeshift field made of old parts until around November. To avoid this, a fundraiser that raises even around six hundred dollars would give us the field multiple months before the county may decide to buy it for us.
Out of the many options we looked at, we decided to try our hand at a simple GoFundMe and send it to our families, as it would be a very low amount of input while having the possibility of a large output. We promised any donors they would get a resin-printed Eastern Tech key-chain to encourage more donations. All team members sent out the link to friends and family to raise donations.
]
#create-body-entry(
title: "First Impressions of High Stakes",
type: "test",
date: datetime(year: 2024, month: 4, day: 27),
author: "<NAME>",
witness: "<NAME>"
)[ After watching the conclusion of Worlds, three of us (Davis, Praful, and Andrews) began to theorize ideas and strategies for the new game. These ideas were created before the release of the rulebook, and new ideas will be documented upon its release.
Initially, the similarity to the previous games Round Up and Tipping Point provides a wide variety of strategies to draw from, such as:
- Pneumatically grabbing a mobile goal (MOGO)
- Using an intake to place rings on a held MOGO
In addition, some unique ideas were discussed, such as:
- Using a DR4B or similar lift to raise and lower the intake to deposit at the variable elevations where the stakes reside
- Having a pneumatic grip on the intake to grab the top bar and pull the robot off
- Creating a descoring device to remove rings from a goal
- Having at least a 4, or possibly 6, motor drive to have enough power to hold the corners when needed
Multiple ideas relating to strategy were also discussed:
- Climbing is far more valuable compared to previous games due to the reduced point values of the rings compared to previous objects
- Securing the corners is critical, as they could flip a game in an instant
- Having a way to quickly put only your rings, or only the enemy's rings, onto a goal could prove valuable
- One could make a MOGO with only enemy rings just for it to be put in the descore corner
- The top ring seems pointless if it is only worth three points, as going for it would require precision and a good amount of time that could be better used elsewhere.
Some ideas for decorating the robot and field were also discussed:
- One could draw the popular video game character Kirby, or a face in general, on one of the rings
- The top ring is the One Ring to rule them all, allowing for multiple Lord of the Rings references, such as calling the robot the Eye of Sauron
Overall, we see this game as being far more complex than previous games, requiring more advanced mechanisms and a higher level of quality for a robot to be competitive. We shall begin prototyping some models in Inventor the following day and adjust when the rules are released. We are also interested to see if there will be any big differences in Skills compared to the base game.
]
#create-body-entry(
title: "Inventor Prototypes-Pre Rulebook",
type: "test",
date: datetime(year: 2024, month: 4, day: 28),
author: "<NAME>",
witness: "<NAME>"
)[
To start, an intake with a lift was modeled, along with starting ideas for the drive. This was done so that we had a good starting place to begin building when we met again later that week. In addition, the drivetrain allows us to start making considerations for motor distribution and the sizing of our manipulators.
The intake is designed to use a chain belt with standoffs screwed in that will hook onto the center of the rings and pull them up, as well as a lift to drop them off at the various heights of stakes.
The drivetrain includes a variety of features, some of which had been worked on during the preseason:
- 6-motor drive with a PTO to transfer two of the motors to a lift
- 4 3.25in omni wheels for traction and maneuverability
- Vertical motor mount to save space
- HS axles used to secure the two halves of the drive
- Pneumatics with lexan washers used to shift the gears in the PTO
- Odometry sensors to allow for an improved and more accurate autonomous routine
Overall, this is just a quick model put together, and it will be further explored as the season continues and we begin to construct the robot.
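The odometry sensors in the list above feed the usual three-tracking-wheel position update. A rough sketch of the math is below; the conventions and parameter names are assumptions for illustration, not our final code (heading 0 means facing +y, angles in radians).

```python
import math

def odom_update(x, y, heading, d_left, d_right, d_back, track_width, back_offset):
    """One 2D odometry step from three tracking-wheel deltas.

    d_left / d_right: travel of the two forward-facing wheels;
    d_back: travel of the sideways wheel; track_width: distance
    between the forward wheels; back_offset: distance from the
    sideways wheel to the tracking center.
    """
    d_theta = (d_right - d_left) / track_width
    local_y = (d_left + d_right) / 2           # forward travel
    local_x = d_back - back_offset * d_theta   # sideways travel minus turning arc
    avg_heading = heading + d_theta / 2
    x += local_x * math.cos(avg_heading) - local_y * math.sin(avg_heading)
    y += local_x * math.sin(avg_heading) + local_y * math.cos(avg_heading)
    return x, y, heading + d_theta
```

Accumulating this update every few milliseconds is what gives an autonomous routine its position estimate.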
]
#create-body-entry(
title: "Rules Reaction",
type: "test",
date: datetime(year: 2024, month: 4, day: 30),
author: "<NAME>",
witness: "<NAME>"
)[
To the entire team's surprise, the rulebook completely changes how the game is played compared to our initial thoughts from the video. Davis, Brandon, and Praful discussed these changes on Discord the night it was released.
New rules:
- The vertical expansion limit makes the previous lift models too tall, but not out of the picture, as elevating the intake could still help with climbing and depositing rings
- It is required to go rung by rung to climb, greatly increasing the challenge in doing so, but the point values still make it worthwhile
- The expansion limits do make the high stake worth far less, as scoring it would require climbing to the top rung and then having a mechanism to score it for only 3 more points.
Skills:
- The rules regarding the different rings make it valuable for the robot to be able to sort out colors in its intake to ensure no rings are wasted
- The corners are still valuable for the 5 points per MOGO, but not as crucial for the game.
- Descore mechanisms could help remove the pre-scored blue rings for more points
Final thoughts for the first robot:
- 6-motor drive
- Intake on a lift
- Pneumatic clamp to grab MOGOs
- Clamp on intake/lift to grab the first rung
- Passive clamps to hold onto the climbing rungs so that the lift can raise and grab the next rung
- Descore mech to help in Skills and matches
]
#create-body-entry(
title: "Initial Goals and Plan",
type: "test",
date: datetime(year: 2024, month: 5, day: 1),
author: "<NAME>",
witness: "<NAME>"
)[
]
#create-body-entry(
title: "Fund Raising Results",
type: "test",
date: datetime(year: 2024, month: 5, day: 1),
author: "<NAME>",
witness: "Praful Adiga"
)[
The fundraiser is still ongoing; however, 765 dollars has been raised, which is enough to cover the field costs as well as shipping and tax, and allowed us to purchase the field today. This should allow us to get started far sooner than in previous years and allow for the testing of more prototypes before the first competitions.
]
#create-body-entry(
title: "First Intake Builds",
type: "test",
date: datetime(year: 2024, month: 5, day: 2),
author: "<NAME>",
witness: "Praful Adiga"
)[
We started by building the basic intake modeled in Inventor and worked to create various improvements as we went, based on the problems we ran into. (It is to be noted these tests were done with a 3D printed ring, so the weight problems incurred may not be accurate.)
Initial tests using a standoff intake similar to those in Tipping Point saw the chain flex and drop the rings, as they were too heavy. To adjust for this, flex wheels were tried and found to work very well. However, as we increased the incline of the intake, the flex wheels became worse at picking the rings up from the ground, though not at carrying them up once the intake secured them. To alleviate this, standoffs were added in the center of the initial stage of the intake. This helped, but it jammed occasionally, which was fixed by making the first stage of the intake floating. This means it can rotate to allow the ring to enter more smoothly. After multiple changes to the gripping devices, a final design was settled on. (Picture of different intake rollers/chains with descriptions and charts of each one's ability to intake.)
]
#glossary.add-term("Omni Wheel")[A shortened form of omni-directional wheel, which is a wheel with rollers allowing it to be pushed side to side]
#glossary.add-term("PTO")[Power take-off device, which takes the rotation of a motor and transfers it from one mechanism to another. They are often pneumatically powered, but can work with motors.]
#glossary.add-term("Odometry")[An algorithm which determines the robot's position based on the movement of 3 sensor wheels.]
#create-appendix-entry(title: "Glossary")[
#components.glossary()
]
#create-appendix-entry(title: "Credits")[
- <NAME>
- Stanford_O from the vex forums
]
https://github.com/TycheTellsTales/typst-pho | https://raw.githubusercontent.com/TycheTellsTales/typst-pho/main/people.typ | typst | ////////////////
// PHO People //
////////////////
#let normalise_name(name) = {
lower(name.replace(regex("\W"), "_"))
}
// Populate the initial data.
#let __initial_data() = {
let data = (:)
for (name, tags) in (
"Bagrat": ([Veteran Member], [The Guy in the Know]),
"AverageAlexandros": ([Cape Husband],),
"Brilliger": ([Moderator: Protectorate Main],),
"Brocktonite03": ([Veteran Member],),
"<NAME>": ([Protectorate Employee],),
"<NAME>": ([Moderator],),
"XxVoid_CowboyxX": (),
"Procto the Unfortunate Tinker": ([Not a tinker],),
"Reave": ([Verified PRT Agent],),
"Whitecollar": ([Cape Wife],),
"Valkyr": ([Wiki Warrior],),
"Vista": ([Verified Cape], [Wards ENE]),
) {
data.insert(normalise_name(name), (name: name, tags: tags))
}
return data
}
#let __people = state("pho_people", __initial_data())
#let register(person) = {
assert.ne(person.name, none)
assert.eq(type(person.tags), array)
context __people.update(x => {
if x.len() == 0 {
x = (normalise_name(person.name): person)
} else {
x.insert(normalise_name(person.name), person)
}
return x
})
}
#let get(name, default: none) = {
if default == none {
default = (name: name, tags: ())
}
return __people.get().at(normalise_name(name), default: default)
}
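// A minimal usage sketch (hypothetical poster name; `get` reads state,
// so it must run inside a `context` block):
//
//   #register((name: "Example_Poster", tags: ([Verified Cape],)))
//   #context {
//     let person = get("Example_Poster")
//     [#person.name #person.tags.join([, ])]
//   }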
#import "@local/mtgstory:0.2.0": conf
#show: doc => conf(
"Theros: Beyond Death Story Summary",
set_name: "Theros: Beyond Death",
story_date: datetime(day: 10, month: 01, year: 2020),
author: "<NAME>",
doc
)
Elspeth was dead.
But we must begin far before that, centuries ago. Before the gods of Theros rose to power, the titans—horrific primal urges made flesh—roamed the mortal realm, sowing death and destruction in their wake. The mortals, powerless to defend themselves, turned to prayer in their hour of need. From this prayer, this concentrated devotion, sprang forth the gods themselves.
Imbued with incredible power by the faith of mortals, the gods sealed the titans away in the Underworld. Klothys, the god of fate, volunteered to act as jailer and sequestered herself in the Underworld for eternity. While Erebos rules this realm beyond death, it was Klothys who acted as eternal seal, ensuring the titans remained trapped.
And so, the gods ruled, confident in their position that they had so rightfully earned, until the satyr Xenagos presented a new threat. By harnessing that same power of devotion, he ascended to godhood himself, revealing an unsightly truth to the gods—they were replaceable.
The sun god Heliod sent his champion, the Planeswalker Elspeth, to destroy Xenagos—and she succeeded. However, the fact that Xenagos ascended at all haunted him. If the people of Theros looked up to Elspeth more than Heliod himself, she would surely take his place in the pantheon. Fearful, Heliod struck Elspeth down in her moment of triumph, casting her into the Underworld to be forgotten.
Elspeth was dead.
But her story was not yet over. Another Planeswalker lurked in the shadows of Theros—the nightmare mage Ashiok. Ashiok lived for fear, believing it to be the great equalizer of all things. Thus, while Elspeth should have received eternal rest in Ilysia, the Underworld haven of fallen heroes, she was instead visited by constant nightmares from her past. Of Daxos, her former love, whom she was tricked into killing. Of Heliod, striking her down with the spear she had wielded in his name. And of her childhood torment at the hands of the wretched Phyrexians.
Two things came of this. Ashiok's visions are more real than most, and in one of them, Elspeth seized Heliod's spear Khrusor. When the vision had passed, a twisted version of the spear remained, dripping with darkness and power. Secondly, Ashiok learned of the Phyrexians' existence and promptly planeswalked away to learn more of these true living nightmares.
Meanwhile, the god Heliod contemplated the fragility of his immortality. He would not allow anyone to take his place in the pantheon—not even the other existing gods. To that end, he pulled the soul of the oracle Daxos and transformed him into his champion in the mortal realm. Now a demigod, Daxos was instructed to remove all trace of the other gods from the great city of Meletis.
The other gods, refusing to stand for such an affront, called upon their own champions from the Underworld; with this, the gods were at war. And when gods clash, mortals suffer the consequences. One such consequence took the form of rifts to the Underworld from which countless monsters poured forth. Erebos, the god of the Underworld, was consumed by this conflict. He hated Heliod most of all, and in his rage, he neglected to keep a tight grip on the souls under his purview.
News travels fast, even in the afterlife, and Elspeth heard talk of rifts to the mortal realm. Realizing her work in the Multiverse was not yet done, Elspeth gripped her shadow spear and headed toward a hidden exit in Erebos's palace—but she wasn't the only one seeking escape.
Klothys was furious. When Xenagos attempted to take her place in the pantheon, she was understandably upset. When countless souls dared defy fate to reverse their own deaths, she was livid. She dispatched countless agents of fate to stop these souls from seeking new life. But for the soul of Elspeth, a Planeswalker, she needed something special. So Klothys wove her masterpiece—an agent of fate she called Calix. Created solely to carry out the will of Klothys, Calix headed off in pursuit of Elspeth in order to preserve that which should be.
As Elspeth headed toward freedom, she gathered allies. Along her journey, she faced many battles and powerful foes, and she fought them all off. After each victory, she raised her spear high and proclaimed: "Behold, the true Khrusor! Heliod wields a fake!"
She also clashed repeatedly with Calix, emerging victorious every time. After all, she was a seasoned warrior, and he was freshly made. But with each battle, Calix did a little better—he was learning his foe as he learned himself. Still, she defeated him one final time as she reached the exit. Where Heliod awaited her.
Through Daxos, Elspeth's former love, Heliod had learned of her attempted escape. Past the point of reason, Heliod stood in Elspeth's way, refusing her passage out of the Underworld. She could not be allowed to escape. She would be the end of him. She was the cause of all this. Ranting under his breath, he charged Elspeth with his spear Khrusor, which promptly shattered in his hands.
For each time Elspeth repeated that her shadowspear was the true Khrusor, the onlooker souls believed her. And it was the power of that belief, that devotion, that caused her lie to simply become truth. Staring down the point of a spear that was no longer his, Heliod yielded. And Erebos laughed.
In Erebos's centuries of existence, no sound had ever pleased him quite like the sound of Heliod's surrender. And Erebos simply took the defeated sun god and placed him beneath a giant boulder, where he would suffer for all eternity, or until he was forgotten by his worshippers above. As for Elspeth, Erebos gifted her his eternal gratitude—and safe passage back to the mortal realm.
After a brief reunion with Daxos, Elspeth planeswalked away. Calix looked on, his very being in agony. It was his purpose to return Elspeth to where she belonged, and now he could no longer reach her. But in his darkest hour, a strange idea sparked within him—and he simply planeswalked after her.
#import "constants.typ": 字体, 字号, 行距
#import "@preview/a2c-nums:0.0.1": int-to-cn-num
#import "@preview/cuti:0.2.0": show-cn-fakebold
#let part_state = state("part")
// 使用terms模拟latex的paragraph
#let paragraph(content) = {
set terms(hanging-indent: 0pt, indent: 0pt)
set par(first-line-indent: 0pt)
content
[#v(-18pt)\ ]
}
// Get the current chapter number
#let get_chapter_idx(loc: auto) = {
if loc == auto {
loc = here()
}
counter(heading).at(loc).first()
}
// Get the chapter number used for display
#let get_chapter_idx_display(loc: auto) = {
if loc == auto {
loc = here()
}
return numbering(
if part_state.at(loc) == "appendix" {
"A"
} else {
"1"
},
counter(heading).at(loc).first(),
)
}
// Circled numbers drawn with Typst's circle function
#let my_circle_number(num, size, inset: 0pt, stroke: 0.5pt) = {
set text(size: size)
let circle_size = size / 2
box(
circle(radius: circle_size, inset: inset, stroke: stroke)[
#set align(center)
#num
],
)
}
// Circled numbers using Unicode characters
#let circle_number(num) = {
// only support 0~50
if num == 0 {
str.from-unicode(9450)
} else if 1 <= num and num <= 20 {
str.from-unicode(9311 + num)
} else if 21 <= num and num <= 35 {
str.from-unicode(12860 + num)
} else if 36 <= num and num <= 50 {
str.from-unicode(12941 + num)
} else {
panic("circled number not supported: " + str(num))
}
}
// Subfigure
#let subfigure = figure.with(
kind: "subfigure",
supplement: none,
numbering: "(a)",
placement: none,
)
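// A minimal usage sketch for subfigures (hypothetical image paths):
//
//   #figure(
//     grid(
//       columns: 2,
//       subfigure(image("a.png"), caption: [Left]),
//       subfigure(image("b.png"), caption: [Right]),
//     ),
//     caption: [A figure with two subfigures],
//   )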
// Configure footnote style. This function must be called at the beginning of the document.
#let set_doc_footnote(doc) = {
// Use circled numbers
set footnote(numbering: (..nums) => {
set text(size: 6pt, baseline: -4.5pt)
circle_number(nums.pos().first())
})
// Customize footnote entries: adjust size and spacing
show footnote.entry: it => {
let loc = it.note.location()
let idx = counter(footnote).at(loc).first()
if idx == 1 {
v(6pt)
}
h(-0.5em)
text(size: 4.5pt, baseline: -3.1pt, circle_number(idx))
h(3pt)
text(size: 9pt, font: 字体.宋体, it.note.body)
}
doc
}
// Bold style: use fake bold for Chinese instead of real bold
#let set_cn_bold(doc) = {
show: show-cn-fakebold
doc
}
// Configure heading styles at each level
#let set_doc_heading(doc, continuous_index: false) = {
// Chapter headings (no supplement)
show heading.where(level: 1): set heading(supplement: none)
show heading.where(level: 1): it => {
pagebreak()
set align(center)
set text(size: 字号.三号, font: 字体.黑体, weight: "bold")
v(-0.5em)
it
text(v(0em, weak: true))
text(h(0em))
v(0.2em)
if not continuous_index {
// Prevent equation numbering from continuing across chapters
counter(math.equation).update(0)
}
}
// Section headings
show heading.where(level: 2): it => {
set text(size: 字号.四号, font: 字体.黑体, weight: "regular")
it
text(v(0em, weak: true))
text(h(0em))
v(0.2em)
}
// Subsection headings
show heading.where(level: 3): it => {
set text(size: 字号.小四, font: 字体.宋体)
it
text(v(0em, weak: true))
text(h(0em))
v(0em)
}
set heading(numbering: (..nums) => {
let nums_array = nums.pos()
if nums_array.len() == 1 {
// Chapter heading style: 第一章
"第" + int-to-cn-num(nums_array.first()) + "章"
} else {
// Section heading style: 1.1, 1.1.1
text(weight: "regular", numbering("1.1", ..nums))
}
})
doc
}
// Configure basic body text style
#let set_doc_basic(doc) = {
set text(size: 字号.小四, font: 字体.宋体, lang: "zh", region: "cn")
set page(
paper: "a4",
header: [
#set text(9pt, weight: "regular")
#set align(center)
东南大学本科毕业设计(论文)
#place(line(length: 100%, stroke: 0.7pt), dy: 5pt)
#counter(footnote).update(0) // reset footnote numbering on every page
],
numbering: (..idx) => {
text(size: 9pt, numbering("1", idx.pos().first()))
},
margin: (top: 3cm, x: 2.5cm, bottom: 3.8cm),
)
// Reset the page counter
counter(page).update(1)
// 1.5x line spacing, justified, with first-line indent
set par(leading: 行距, justify: true, first-line-indent: 2em)
show par: set block(spacing: 行距)
// Bold style
show strong: text.with(font: 字体.宋体, weight: "bold")
doc
}
#let set_doc_figure(doc, continuous_index: false) = {
// Subfigures use (a) numbering; figures are numbered like 2-1 and tables like 2.1. Continuous numbering across the document is optional
let figure_numbering(idx, kind: none, loc: auto) = {
if kind == "subfigure" {
return numbering("(a)", idx)
}
let separator = if kind == image {
"-"
} else {
"."
}
let chapter_idx = get_chapter_idx_display(loc: loc)
if continuous_index {
idx
} else {
[#chapter_idx#separator#idx]
}
}
// Configure caption style and spacing
show figure.caption: it => {
set text(size: 10.5pt)
block(
inset: (left: 2em, right: 2em),
{
set align(left)
if it.kind == table {
v(1em)
}
it.supplement
it.counter.display(figure_numbering.with(kind: it.kind))
h(0.5em)
it.body
},
)
}
show figure.where(kind: table): set text(size: 字号.五号)
show figure.where(kind: table): set figure.caption(position: top)
// 1.5em gap between figure/table body and caption
set figure(gap: 1.5em)
// Reset figure and table counters at each chapter
show heading.where(level: 1): it => {
if not continuous_index {
counter(figure.where(kind: image)).update(0)
counter(figure.where(kind: table)).update(0)
counter(figure.where(kind: raw)).update(0)
}
it
}
// Reset the subfigure counter within each parent figure
show figure: it => {
if it.kind != "subfigure" {
counter(figure.where(kind: "subfigure")).update(0)
}
it
text()[#v(0em, weak: true)#h(0em)]
}
// Configure cross-reference style
show ref: it => {
let el = it.element
if el == none {
// Not a figure
return it
}
let el_loc = el.location()
if el.func() == figure {
if el.kind == "subfigure" {
// Find the parent figure
let parent_fig = query(figure.where(kind: image).before(el_loc)).last()
// Get the parent figure's number
let parent_fig_idx = parent_fig.counter.at(el_loc).first()
// Get the chapter number
let chapter_idx = get_chapter_idx_display(loc: el_loc)
// Subfigure reference format: 图2-1(a)
link(el_loc)[图#{chapter_idx}-#parent_fig_idx~(#it)]
} else {
let supplement = if el.kind == image {
"图"
} else {
"表"
}
link(
el.location(),
{
supplement
// 图表引用格式同caption格式
figure_numbering(..el.counter.at(el_loc), kind: el.kind, loc: el_loc)
},
)
}
} else if el.func() == math.equation {
if continuous_index {
it
} else {
link(
el.location(),
{
"公式 ("
get_chapter_idx_display(loc: el_loc)
"."
str(counter(math.equation).at(el_loc).first())
")"
},
)
}
} else {
it
}
}
doc
}
// Configure equation style
#let set_doc_math(doc, continuous_index: false) = {
set math.equation(numbering: idx => {
if not continuous_index {
// 按章节编号
let chapter_idx = get_chapter_idx_display()
[(#chapter_idx.#idx)]
} else {
[(#idx)]
}
})
doc
}
// Configure the whole document body in one call
#let set_doc(doc, continuous_index: false) = {
let doc = set_doc_basic(doc)
let doc = set_doc_heading(doc, continuous_index: continuous_index)
let doc = set_cn_bold(doc)
let doc = set_doc_figure(doc, continuous_index: continuous_index)
let doc = set_doc_math(doc, continuous_index: continuous_index)
return doc
}
#import "/template/template.typ": web-page-template
#import "/template/components.typ": note
#import "/template/lang.typ": devanagari
#import "/lib/glossary.typ": tr
#show: web-page-template
// ## Positioning rules
== #tr[positioning]规则
// We've talked a lot about substitution so far, but that's not the only thing we can use OpenType rules for. You will have noticed that in our "paper tape" model of OpenType shaping, we had two rows on the tape - the top row was the glyph names. What's the bottom row?
到目前为止我们使用的都是#tr[substitution]规则,但其实OpenType中还有其他类型的规则可用。你可能已经注意到了,在我们之前使用打孔纸带类比OpenType文本#tr[shaping]过程,图中的“纸带”上有每个单元格有两行内容。上面一行是#tr[glyph]名,那下面一行是用来做什么的呢?
// After all the substitution rules in our set of chosen lookups are processed from the `GSUB` table, the same thing is done with the `GPOS` table: feature selections and language/script selections are used to gather a set of lookups, which are then processed in turn.
当按照之前描述的方式处理完`GSUB`表中所有被选中的#tr[lookup]后,相同的流程会在`GPOS`表中再发生一次。同样是根据特姓名和语言#tr[script]二元组过滤出需要的#tr[lookup]列表,然后依次处理。
// In the positioning phase, the shaper associates four numbers with each glyph position. These numbers - the X position, Y position, X advance and Y advance - are called a *value record*, and describe how to draw the string of glyphs.
在#tr[positioning]阶段,#tr[shaper]需要为每个#tr[glyph]生成四个数字。这四个分别是:X轴位置,Y轴位置,X轴#tr[advance],Y 轴#tr[advance]。这组数字称为一条“数值记录”,它用于指导如何绘制这个#tr[glyph]。
// The shaper starts by taking the advance width from metrics of each glyph and placing that in the X advance section of the value record, and placing the advance height of each glyph into the Y advance section. (The X advance only applies in horizontal typesetting and the Y advance only applies in vertical typesetting.) As designer, we might think of the X advance as the width of the glyph, but when we come to OpenType programming, it's *much* better to think of this as "where to draw the *next* glyph".
#tr[shaper]首先从每个#tr[glyph]的#tr[metrics]中取出#tr[advance width]和#tr[advance height],将它们分别放入数值记录的X轴#tr[advance]和Y轴#tr[advance]字段中。其中的X轴#tr[advance]只在#tr[horizontal typeset]时有用,Y轴#tr[advance]则是只用在#tr[vertical typeset]中。作为设计师,我们可能会将X轴#tr[advance]视为#tr[glyph]的宽度。但在进行OpenType特性编程时,你应该把这个概念理解为“从哪里开始绘制*下一个*#tr[glyph]”。
// Similarly the "position" should be thought of as "where this glyph gets shifted." Notice in this example how the `f` is first placed 1237 units after the `o` and then repositioning 100 units up and 100 units to the right of its default position:
类似的,“X轴#tr[position]”在这里要被理解为“这个#tr[glyph]在X轴上需要偏离其原始位置多远”。比如在@figure:f-positioning-example 所示的例子中,第一个`f`的原始位置是`o`之后的1237个单位处。但后续的重#tr[positioning]流程将其相对原始位置上移了 100 个单位,右移了 100 个单位。
#figure(
caption: [对f的重#tr[positioning]],
)[#block(inset: 5em, stroke: 0.1em + gray)[原文缺图,待补]] <figure:f-positioning-example>
// In feature syntax, these value records are denoted as four numbers between angle brackets. As well as writing rules to *substitute* glyphs in the glyph stream, we can also write *positioning* rules to adjust these value records by adding a value to it. Let's write one now!
在 AFDKO 特性语法中,数值记录使用在尖括号内的四个数字来表达。就像我们可以编写规则来#tr[substitution]#tr[glyph]流中的某个#tr[glyph]那样,我们也可以编写带有数值记录的#tr[positioning]规则来对它们的#tr[position]进行调整。我们现在就来写一个试试。
```fea
feature kern {
lookup adjust_f {
pos f <0 0 200 0>;
} adjust_f;
} kern;
```
// If you try this in `OTLFiddle` you'll find that this *adds* 200 units of advance positioning to the letter `f`, making it appear wider by shifting the *following* glyph to the right. Single positioning rules like this one adjust the existing positioning information by adding each component in the rule's value record to the corresponding component in the the value record in the glyph string.
如果在 `OTLFiddle` 中尝试这段代码,你将会看到 `f` 的#tr[advance]增加了 200 个单位,这导致其后的所有#tr[glyph]都向右移动了一些。像这样的#tr[positioning]规则,会将其规则中的数值记录的各个字段值,分别加到#tr[glyph]流中匹配的数值记录的相应字段上。
// This is a single positioning rule, which applies to *any* matching glyph in the glyph stream. This is not usually what we want - if we wanted to make the `f`s wider, we could have just given them a wider advance width in the editor. (However, single positioning rules do become useful when used in combination with chaining rules, which I promise we will get to soon.)
这是一个单#tr[glyph]的#tr[positioning]规则,他会应用到#tr[glyph]流中的*每个*匹配的#tr[glyph]上,这通常不是我们想要的效果。如果我们只是想让 `f` 宽一些,直接在字体编辑器中给他设置一个大一些的#tr[advance]值就行了,没必要使用特性。不过单#tr[glyph]的#tr[positioning]规则如果和#tr[chaining rules]结合使用的话,也可能变的很有用,我保证后面会尽快介绍这种用法。
// Another form of the positioning rule can take *two* input glyphs and add value records to one or both of them. Let's now see an example of a *pair positioning* rule where we will look for the glyphs `A B` in the glyph stream, and then change the positioning information of the `B`. I added the following stylistic set features to the test "A B" font from the previous chapter:
#tr[positioning]规则的另一种形式是可以提供*两个*#tr[glyph],然后为它们或其中之一添加数值纪录。我们以一个会匹配#tr[glyph]流中的`A B`,并改变其中 `A` #tr[glyph]的#tr[positioning]信息的字偶#tr[positioning]规则为例来介绍它。我向上一章中的示例字体中添加了四个样式集来测试这种规则:
```fea
feature ss01 { pos A B <150 0 0 0>; } ss01 ;
feature ss02 { pos A B <0 150 0 0>; } ss02 ;
feature ss03 { pos A B <0 0 150 0>; } ss03 ;
feature ss04 { pos A B <0 0 0 150>; } ss04 ;
```
// And now let's see what effect each of these features has:
我们来看看这些特性分别产生什么效果:
#include "value-records.typ"
// From this it's easy to see that the first two numbers in the value record simply shift where the glyph is drawn, with no impact on what follows. Imagine the glyph "A" positioned as normal, but then after the surrounding layout has been done, the glyph is picked up and moved up or to the right.
通过实际效果,很容易看出数值记录中的前两个数字控制绘制这个#tr[glyph]时的位置偏移量,而不影响后续的#tr[glyph]。这可以想象为,先在原位置正常画出了`A`#tr[glyph],再把它往上或往右移动了一些。
// The third example, which we know as kerning, makes the glyph conceptually wider. The advance of the "A" is extended by 150 units, increasing its right sidebearing; changing the advance *does* affect the positioning of the glyphs which follow it.
第三个例子是让整个#tr[glyph]变得更宽了,这其实就是我们之前说的#tr[kern]。`A`#tr[glyph]的#tr[advance]增加了150个单位,也就增加了它的右#tr[sidebearing]。改变#tr[advance]会影响后续所有#tr[glyph]的#tr[position]。
// Finally, you should be able to see that the fourth example, changing the vertical advance, does absolutely nothing. You might have hoped that it would change the position of the baseline for the following glyphs, and for some scripts that might be quite a nice feature to have, but the sad fact of the matter is that applications doing horizontal layout don't take any notice of the font's vertical advances (and vice versa) and just assume that the baseline is constant. Oh well, it was worth a try.
最后,从第四个例子看出,改变#tr[vertical advance]没有任何影响。你可能期望它会改变后续#tr[glyph]的#tr[baseline]位置,实际上这对某些#tr[scripts]来说确实是个很好的功能,但可惜的是目前进行#tr[horizontal typeset]的应用软件根本不关心字体的#tr[vertical advance](同样,进行#tr[vertical typeset]的也不关心#tr[horizontal advance]),它们将#tr[baseline]都视为恒定常量。但不管怎样,尝试一下还是值得的。
// To make this more globally relevant, let's look at the Devanagari glyph sequence "NA UUE LLA UUE DA UUE" (नॗ ळॗ दॗ) in Noto Sans Devanagari:
为了让本书更加全球化,我们看看Noto Sans Devanagari字体下的天城文#tr[glyph]序列 `NA UUE LLA UUE DA UUE`:
#figure(placement: none, block(inset: (bottom: 50pt))[
#devanagari[#text(size: 128pt)[नॗ ळॗ दॗ]]
])
// You should be able to see that in the first two combinations ("NA UUE" and "LLA UUE"), the vowel sign UUE appears at the same depth; regardless of how far below the headline the base character reaches, the vowel sign is being positioned at a fixed distance below the headline. (So we're not using mark to base and anchors here, for those of you who have read ahead.)
可以看到,对于前两个单元(`NA UUE` 和 `LLA UUE`),基本#tr[glyph]的底部到#tr[headline]的距离不同,但元音符号 `UUE` 所处的深度是相同的。(所以这里没有使用锚点,如果你喜欢跳着读书的话就能理解我在说啥。)
// However, if we attached the vowel sign to the "DA" at that same fixed position, it would collide with the DA's curly tail. So in a DA+UUE sequence, we have to do a bit of *vertical* kerning: we need to move the vowel sign down a bit when it's been applied to a long descender.
但当这个元音符号需要加到`DA`上时,如果还使用这个固定的深度位置,就会和`DA`的卷曲尾巴相撞了。所以在类似 `DA` + `UUE` 的组合中,我们需要调整竖直方向的#tr[kern],也就是让元音符号向下移一点。
// Here's the code to do that (which I've simplified to make it more readable):
以下是字体中完成这个功能的代码(我进行了简化,使其更可读一些):
```fea
@longdescenders = [
uni091D # JHA
uni0926 # DA
# 和一些带有 rakar 的连字
uni0916_uni094D_uni0930.rkrf uni091D_uni094D_uni0930.rkrf
uni0926_uni094D_uni0930.rkrf
];
feature dist {
script dev2;
language dflt;
pos @longdescenders <0 0 0 0> uni0956 <0 -90 0 0>;
pos @longdescenders <0 0 0 0> uni0957 <0 -145 0 0>;
}
```
// What are we doing here? Let's read it out from the top. First, we define a glyph class for those glyphs with long descenders. Next, we are putting our rules into a feature called `dist`, which is a little like `kern` but for adjusting the distances between pre- and post-base marks and their base characters. The rules will apply to the script "Devanagari v.2". (This is another one of those famous OpenType compromises; Microsoft's old Indic shaper used the script tag `deva` for Devanagari, but when they came out with a new shaper, the way to select the new behavior was to use a new script tag, `dev2`. Nowadays you almost certainly want the "version 2" behaviour for any Indic scripts you engineer.)
这段代码做了什么呢?我们从上往下一点点看。首先我们为#tr[decender]较长的#tr[glyph]定义了一个#tr[glyph class]。然后我们在`dist`特性中编写了一些规则,这个特性和`kern`有些类似,但作用是调整基本#tr[glyph]和作用在其前后的符号这两者的位置关系。这些规则会在输入“Devanagari v.2”#tr[script]时生效。(这是OpenType中另一个著名的妥协:因为微软的古早印度系#tr[scripts]#tr[shaper]使用`deva`作为天城文的标签,当新的#tr[shaper]希望用一种新的方式实现时就需要一个新的标签`dev2`。而如今,无论是为哪种印度系#tr[script]开发字体,都几乎一定是会使用“第二版”#tr[shaper]的行为。)
// For any language system using this script, we apply the following rules: when we have a long descender and a UE vowel, the consonant is positioned normally (`0`), but the vowel sign gets its position shifted by 90 units downwards. When we have a long descender and a UUE vowel, the consonant is again positioned normally, but the vowel sign gets its position shifted by 145 units downwards. That should be enough to avoid the collision.
对于使用这种#tr[script]的所有语言,都应用如下规则:当一个长#tr[decender]#tr[glyph]后面跟着元音`UE`时,辅音都按常规方式#tr[positioning](数值记录为四个`0`),但元音需要往下移90单位。当长#tr[decender]#tr[glyph]后面跟着元音`UUE`时,辅音依旧位于常规#tr[position],元音则向下移动145单位。这应该足够避免碰撞了。
// In the next chapter we will look at the full range of substitution and positioning rules, as well as chain and attachment rules.
在下一章中,我们会完整介绍#tr[substitution]和#tr[positioning]规则的各种形式以及#tr[chaining rules]和#tr[attachment rules]。
|
https://github.com/typst/packages | https://raw.githubusercontent.com/typst/packages/main/packages/preview/modern-sysu-thesis/0.1.0/pages/bachelor-abstract-en.typ | typst | Apache License 2.0 | #import "../utils/style.typ": 字号, 字体
#import "../utils/invisible-heading.typ": invisible-heading
// 本科生英文摘要页
#let bachelor-abstract-en(
// documentclass 传入的参数
anonymous: false,
twoside: false,
fonts: (:),
info: (:),
// 其他参数
keywords: (),
outline-title: "ABSTRACT",
outlined: false,
body,
) = {
pagebreak(weak: true, to: if twoside { "odd" })
[
#set text(size: 字号.小四)
// 标记一个不可见的标题用于目录生成
#invisible-heading(level: 1, outlined: outlined, outline-title)
#align(center)[
#set text(size: 字号.三号, weight: "bold")
[ABSTRACT]
#v(1em)
]
#body
#v(1em)
*Keywords:* #keywords.join(", ")
]
}
|
https://github.com/SeniorMars/tree-sitter-typst | https://raw.githubusercontent.com/SeniorMars/tree-sitter-typst/main/examples/compiler/field.typ | typst | MIT License | // Test field access.
// Ref: false
---
// Test field on dictionary.
#let dict = (nothing: "ness", hello: "world")
#test(dict.nothing, "ness")
#{
let world = dict
.hello
test(world, "world")
}
---
// Test fields on elements.
#show list: it => {
test(it.children.len(), 3)
}
- A
- B
- C
---
// Error: 6-13 dictionary does not contain key "invalid"
#(:).invalid
---
// Error: 8-10 cannot access fields on type boolean
#false.ok
---
// Error: 25-28 content does not contain field "fun"
#show heading: it => it.fun
= A
---
// Error: 9-13 cannot access fields on type boolean
#{false.true}
|
https://github.com/astrale-sharp/typstfmt | https://raw.githubusercontent.com/astrale-sharp/typstfmt/main/README.md | markdown | Apache License 2.0 | Basic formatter for the Typst language ~~with a future~~ without a future 😄!
It's been a fun ride everyone but that's where I stop, feel free to fork etc.
The last supported Typst version is 0.10.
If I get the formatter fever again I'll probably try contributing to https://github.com/Enter-tainer/typstyle/ and you should check it out ;).
- [Goals](#goals)
- [Features](#features)
- [State](#state)
- [Installing](#installing)
- [Setting up a pre-commit hook](#setting-up-a-pre-commit-hook)
- [Contributing](#contributing)
- [Architecture](#architecture)
- [Main logic](#main-logic)
- [Roadmap](#roadmap)
- [Testing and visualizing](#testing-and-visualizing)
- [Installing Insta](#installing-insta)
- [Using insta here](#using-insta-here)
- [Can I see it in action?](#can-i-see-it-in-action)
- [Is that all I have to help me test?](#is-that-all-i-have-to-help-me-test)
- [Tracing](#tracing)
- [Fmttest (TO BE IMPLEMENTED)](#fmttest-to-be-implemented)
- [Thanks (chronological)](#thanks-chronological)
# Goals
- Decent output under any circumstances, anything not decent should be reported
as a bug!
- Fast, Small, configurable and embeddable library and binary!
- Good default (see [roadmap](#roadmap))
# Features
- Good defaults.
- Config file: run `typstfmt --make-default-config` to create a typstfmt.toml
file that you can customize!
The following lines show the contents of a simple `typstfmt.toml` file:
```toml
indent_space = 2
max_line_length = 80
experimental_args_breaking_consecutive = false
line_wrap = true
```
- Disable the formatting by surrounding code with `// typstfmt::off` and `//
typstfmt::on`. (Experimental and broken)
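  For instance, a region you want typstfmt to leave untouched might look like this (a hypothetical snippet; as noted, this feature is experimental):

  ```typ
  // typstfmt::off
  #let identity = (
    (1, 0),
    (0, 1),
  )
  // typstfmt::on
  ```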
# State
It's not always pretty, it sometimes break the code in math mode, but it should
be safe for code and markup.
# Installing
```sh
cargo install --git https://github.com/astrale-sharp/typstfmt.git
```
## Setting up a pre-commit hook
Optionally, you can setup a git hook to format your files at each commit:
```sh
echo "set -e
for f in \$(git ls-files --full-name -- '*.typ') ; do
typstfmt --check \$f --verbose
done" > .git/hooks/pre-commit
chmod +x .git/hooks/pre-commit
```
Now if you try to commit unformatted files, they will be caught and the commit will fail, telling you which file should be fixed.
> Notes:
> - You should probably avoid doing this at the moment, as typstfmt is not quite stable yet
> - Be careful if you have another commit hook setup, as the command above will replace it entirely!
# Contributing
- feel free to open an issue or discuss! I don't have GitHub notifications so also
feel free to go ping me on the typst discord server (@Astrale).
- once discussed, you may open a PR, not before cause I'm a bit chaotic and
this is wip so things change fast and I would hate it if you lost your time.
# Architecture
## Main logic
Since we're visiting an AST (which is a tree) we have a recursive function
`visit(node: &LinkedNode, ctx: &mut Ctx)` that visits all the nodes in the tree.
It formats the children first (bottom up), then the parents decide what to do
with their children.
Children have access to arbitrary context (they can know the kind of their
parent, who their siblings are, etc.).
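As a rough illustration, the bottom-up shape can be sketched like this (an invented toy, not the actual typstfmt code; the `Node` variants, the `Ctx` struct, and the joining rules are made up for this example):

```rust
// Toy syntax tree standing in for typst's AST.
enum Node {
    Token(String),
    Parent(&'static str, Vec<Node>),
}

// Arbitrary context threaded through the recursion (the real Ctx carries much more).
struct Ctx {
    depth: usize,
}

fn visit(node: &Node, ctx: &mut Ctx) -> String {
    match node {
        Node::Token(s) => s.clone(),
        Node::Parent(kind, children) => {
            ctx.depth += 1;
            // Format the children first (bottom up)...
            let formatted: Vec<String> = children.iter().map(|c| visit(c, ctx)).collect();
            ctx.depth -= 1;
            // ...then the parent decides what to do with its children's output.
            match *kind {
                "call" => formatted.concat(),
                "args" => format!("({})", formatted.join(", ")),
                _ => formatted.join(" "),
            }
        }
    }
}
```

A real node additionally knows its parent and siblings through the linked syntax tree, which is what the context access described above provides.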
## Roadmap
Once the test suite is large enough and the formatting is satisfying, create an
abstraction to make the codebase easier to work with.
One person cannot come up with good formatting defaults. These will first be configurable and
then, with experience and opinions from the community, the defaults will be tuned.
# Testing and visualizing
## Installing Insta
We use insta! If you don't have it installed take a look
[here](https://insta.rs/docs/cli/) (hint: use [`cargo
binstall`](https://github.com/cargo-bins/cargo-binstall))
## Using insta here
### Can I see it in action?
To see how it currently formats all the snippets:
- run `cargo test`; a failing test indicates that one of the snippets displayed in
  the next step is no longer formatted this way.
- run `show_all.sh`
### Is that all I have to help me test?
#### Tracing
Of course not! We have tracing enabled during tests!
If you're contributing tests you should add a test case under `src/tests` for
instance: `make_test!(call_func_empty, "#f()");`
then running your tests: `cargo test && cargo insta review`
If the info log isn't enough, run `DEBUG=true cargo test`. If you wish to pipe
to a file run `NO_COLOR=true cargo test` you may also set the `NOLOG` env
variable if you wish to disable logging entirely.
#### Fmttest (TO BE IMPLEMENTED)
On the fmttest branch, you can see the skeleton of a program that will automate
finding which range, when formatted, is no longer valid (breaks the semantics of the code).
# Thanks (chronological)
- @arnaudgolfouse, for the discussion, designs and the precious friendship.
- @laurmaedje, @reknih and the typst community for the good vibes, the
interesting talks, the support and ofc, Typst.
- @jeffa5 for contributing ideas on the initial design
- @Andrew15-5, for the many suggestions, issues and feedback.
- @aghriss for a bug fix
- @taooceros for the alignment of math block
|
https://github.com/mem-courses/calculus | https://raw.githubusercontent.com/mem-courses/calculus/main/pandoc-post.typ | typst | #let intb(x) = []
#let iintb(x) = []
#let iiintb(x) = []
#let project(body, ..args) = {
body
}
#let definition(x, name: "") = [#x]
#let theorem(x, name: "") = [#x]
#let lemma(x, name: "") = [#x]
#let corollary(x, name: "") = [#x]
#let property(x, name: "") = [#x]
#let conclusion(x, name: "") = [#x]
#let problem(x, name: "") = [#x]
#let solution(x, tag: "解") = [#x] |
|
https://github.com/Robotechnic/diagraph | https://raw.githubusercontent.com/Robotechnic/diagraph/main/examples/filerender.typ | typst | MIT License | #import "@preview/diagraph:0.3.0": *
#render(read("./VERYlargegraph.txt"), width: 100%)
|
https://github.com/MordragT/type-down | https://raw.githubusercontent.com/MordragT/type-down/master/readme.md | markdown | MIT License | <div align=center>
# TypeDown ✒️
[](https://nixos.org) 
Simple Markup language, easily embeddable and extendable.
</div>
## About
TypeDown is a simple markup language wich takes inspiration from [Typst](https://typst.app/)
but focuses on being embeddable into your own projects.
Therefore it is better compared to Markdown.
If you would like to take a look at how TypeDown looks at the moment, see the `ok.typ` file under the `examples` folder.
While the syntax is still subject to change TypeDown will look similar on release.
## Installation
TypeDown is currently still in development and thus isn't packaged.
But in the future it will be packaged with Nix.
In the meantime you can use cargo to install TypeDown by cloning
this repository and running `cargo install --path .`.
Note that while there exists an experimental self-contained HTML backend,
by default pandoc is required.
## Usage
- `tyd check <path>`: checks the provided document and returns the generated AST.
- `tyd format <path>`: formats the provided document by printing to stdout.
- `tyd compile <html, docx, pdf, json> <source> [destination]`: Compiles the document to one of the provided formats and, if a destination is provided, saves the corresponding file there. Note that for pdf and docx you must provide a destination.
## Reference
1. [Typst](https://typst.app/) |
https://github.com/7sDream/fonts-and-layout-zhCN | https://raw.githubusercontent.com/7sDream/fonts-and-layout-zhCN/master/chapters/07-localisation/letter-i.typ | typst | Other | #import "/template/template.typ": web-page-template
#import "/template/components.typ": note, cross-link
#import "/lib/glossary.typ": tr
#show: web-page-template
// ## The letter "i"
== 麻烦的字母 i
// The Latin letter "i" (and sometimes "j") turns out to need special handling. For one thing, in Turkish, as we've mentioned before, lower case "i" uppercases to "İ".
拉丁字母`i`(有时候`j`也一样)需要特殊处理。因为#cross-link(<position:turkish-i-uppercase>, web-path: "/chapters/03-unicode/case.typ")[前文]提到过,在土耳其语中`i`的大写形式是`İ`。
// Unicode knows this, so when people ask their word processors to turn their text into upper case, the word processor replaces the Latin letter "i" with the Turkish capital version, LATIN CAPITAL LETTER I WITH DOT ABOVE U+0130. Fine. Your font then gets the right character to render. However, what about the case (ha, ha) when you ask the word processor for small capitals? Small capitals are a typographic refinement, which changes the *presentation* of the characters, *but not the characters themselves*. Your font will still be asked to process a lower case Latin letter i, but to present it as a small caps version - which means you do not get the advantage of the application doing the Unicode case mapping for you. You have to do it yourself.
Unicode是知道这个信息的,所以当用户在文字处理软件中要求将一段文本转为大写时,在土耳其语环境下软件会正确的将`i`转换为土耳其版的大写字母`LATIN CAPITAL LETTER I WITH DOT ABOVE U+0130`。很好很好,这样的话字体得到的#tr[character]就是正确的,显示也正常。然而(桀桀),如果用户是让文字处理软件使用小型大写字母呢?小型大写字母是一种#tr[typography]上的精细调整,改变的是#tr[character]的*展示形式*,而*并非改变#tr[character]本身*。
#note[
// > In fact, *any* time the font is asked to make presentational changes to glyphs, you need to implement any character-based special casing by hand. What we say here for small-caps Turkish i is also valid for German sharp-s and so on.
实际上,当要对#tr[glyph]进行任何展示形式上的变化时,你都需要手动实现这种基于#tr[character]的特殊处理。这里我们举的是土耳其字母`i`的例子,但对于德国的 sharp s 等也是一样的情况。
]
// Additionally, you may want to inhibit Turkish "i" from forming ligatures such as "fi" and "ffi", while allowing ligatures in other Latin-based languages.
另外,你可能还希望在土耳其语环境下阻止`fi`和`ffi`等#tr[ligature]的形成,而在其他基于拉丁文语言中允许它们。
// We're going to look at *two* ways to achieve these things. I'm giving you two ways because I don't want you just to apply it as a recipe for this particular situation, but hopefully inspire you to think about how to use similar techniques to solve your own problems.
我们这里准备介绍完成这一目标的两种不同方式。之所以介绍两种方式,是希望你不要将某种方式视为专门解决这一问题的工具。我更希望这些方式能够为你提供灵感,从而能将它们灵活应用,进而能够使用类似的方式解决你将会遇到的各种其他问题。
// Here's the first way to do it, in which we'll deal with the problems one at a time. We make a special rule in the `smcp` feature for dealing with Turkish:
先来介绍第一种方案。在此方案中,我们对于上面的问题采取逐个击破的方式。首先在`smcp`特性里增加一个处理土耳其语的特殊规则:
```fea
feature smcp {
sub i by i.sc; # 其他情况下的默认规则
script latn;
language TRK;
sub i by i.sc.TRK;
}
```
// Oops, this doesn't work. Can you see why not? Remember that the language-specific part of a feature includes all the default rules. The shaper sees a "i" and the `smcp` feature, and runs through all the rules one at a time. The default rules are processed *first*, so that "i" gets substituted for "i.sc". Finally the shaper comes to the Turkish-specific rule, but by this point any "i" glyphs have already been turned into something else, so it does not match.
糟糕,这样写好像没用。你看出来为什么了吗?记住,特性中的语言专属部分是会包含默认规则的。当#tr[shaper]为`i`执行`smcp`特性时,它会执行上面的所有规则。其中默认规则先执行,现在`i`就变成了`i.sc`。然后#tr[shaper]才会去处理土耳其语专用规则,但此时`i`已经变成别的了,所以规则不会匹配。
// How about this?
那这样写呢?
```fea
feature smcp {
sub i by i.sc; # 其他情况下的默认规则
script latn;
language TRK;
sub i.sc by i.sc.TRK;
}
```
// Now the shaper gets two bites at the cherry: it first turns "i" into "i.sc", and then additionally in Turkish contexts the "i.sc" is turned into "i.sc.TRK". This works, but it's ugly.
现在#tr[shaper]就像是在游戏里有两条命了,它首先把`i`转换成`i.sc`,然后在土耳其语环境下进行额外的检查,将`i.sc`转换成`i.sc.TRK`。这是可以工作的,但有点别扭。
// The ligature situation is taken care of using `exclude_dflt`:
#tr[ligature]则需要小心翼翼地使用`exclude_dflt`来处理:
```fea
sub f i by f_i;
script latn;
language TRK exclude_dflt;
```
// Now there are no ligature rules for Turkish, because we have explicitly asked not to include the default rules.
这样土耳其语环境下就不存在#tr[ligature]规则了,因为我们明确的要求它排除默认规则。
// Here's another, and perhaps neater, way to achieve the same effect. In this method, we'll create a separate "Turkish i" glyph, "i.TRK", which is visually identical to the standard Latin "i". Now in the case of Turkish, we will substitute any Latin "i"s with our "i.TRK" in a `locl` feature:
下面介绍第二种方案,它也许可以更加优雅地达到相同的效果。在本方案中,我们创建一个单独的`i.TRK`#tr[glyph],用于表示土耳其版的`i`。他和普通的标准拉丁字母`i`在视觉上是相同的。然后在土耳其语环境下,我们使用`locl`特性把所有的拉丁`i`都#tr[substitution]成`i.TRK`:
```fea
feature locl {
script latn;
language TRK exclude_dflt;
sub i by i.TRK;
} locl;
```
// What does that achieve? Well, the problem with ligatures is taken care of straight away, without any additional code. We create our `liga` feature as normal:
这段代码完成了什么事情呢?首先,#tr[ligature]的问题就直接消失了,不再需要费心编写特殊处理代码了。现在我们的#tr[ligature]特性就非常干净:
```fea
feature liga {
sub f i by f_i;
}
```
// But we don't need to do anything specific for Turkish, because in the Turkish case, the shaper will see "f i.TRK" and the rule will not match. The small caps case is easier too:
我们不需要做什么特殊操作了,因为在土耳其语环境中#tr[shaper]看到的将会是`f i.TRK`,这样规则就不会匹配。小型大写字母的代码也很简单:
```fea
feature smcp {
sub i by i.sc;
sub i.TRK by i.sc.TRK;
}
```
// This has "cost" us an extra glyph in the font which is a duplicate of another glyph, but has made the feature coding much simpler. Both ways work - choose whichever best fits your mental model of how the process should operate.
使用这种方式会让我们的字体里多出一个重复的#tr[glyph],但它能让特性代码更加简单。上面介绍的两个方案都能达成目标,你只要选择符合你脑海中模拟的处理流程的那一个就行。
|
https://github.com/Misaka19986/typst-template | https://raw.githubusercontent.com/Misaka19986/typst-template/main/README.md | markdown | A personal typst template
|
|
https://github.com/Le-foucheur/Typst-VarTable | https://raw.githubusercontent.com/Le-foucheur/Typst-VarTable/main/documentation.typ | typst | MIT License | #import "@preview/tidy:0.3.0"
#import "tabvar.typ": tabvar
#set page(numbering: "1/1")
#align(center, text(20pt)[*VarTable*\ ])
#align(center)[
  VarTable is a package to make variation tables in a simple way\
  This package is built on top of #link("https://github.com/Jollywatt/typst-fletcher")[#underline(stroke: blue)[fletcher]]\
(version : 0.1.0)
]
#outline(indent: 2em)
= 1 - Introduction
\
This package is designed to simplify the creation of variation tables for functions.
To do this, it gives you a typst function, whose parameters are described in detail in this documentation.\
\
*A word of warning:* while you are building the table, it is quite normal for some elements, such as the lines between the various cells, not to be drawn as they should be.
For example, the line between the labels and the rest of the table may not go all the way to the end.\
\
If you encounter any bugs, please report them on my #link("https://github.com/Le-foucheur/Typst-VarTable/tree/main")[#underline(stroke: blue)[GitHub]].
#pagebreak()
= 2 - tabvar function
== 2.1 - general description
#let docs = tidy.parse-module(
read("tabvar.typ"),
name: "tabvar",
scope: (tabvar: tabvar),
preamble: "import tabvar: *;",
)
#tidy.show-module(
show-module-name: false,
show-outline: false,
omit-private-parameters: true,
docs,
)
#pagebreak()
== 2.2 - The content parameter
The content parameter must be an array with one element per line (per label)\
Each element is itself an array with one element per column, with a different format for either sign or variation rows\
=== 2.2.1 - Sign rows format
Should contain as many elements as the domain size less one (one per interval), plus one optional end bar style element\
Each element is in etheir of these form (can be mixed on a same line):\
`()` -- Empty : extend previous cell\
`body` -- Simple body such as ```typ $+$``` or ```typ $-$```\
`(body, bar_style)` -- to specify an optional style for the *previous* bar, with one of ```"|"``` (simple bar), ```"||"``` (double bar) or ```"0"``` (bar with a zero)\
NB: the `line-0` parameter changes the default bar style to ```"|"``` \
The optional last element is ```"||"```\
==== 2.2.1.1 - A classical sign array
A sign array must contain content like ```typ $+$``` or ```typ $-$```, but you can put anything else.
*Example :* \
A normal sign table :
#rect(fill: luma(95%), radius: 10pt, width: 16.5cm)[
#grid(
columns: (7cm, 7cm),
align: horizon,
```typ
#tabvar(
init: (
variable: $t$,
label: (([sign], "Sign"),),
),
domain: ($2$, $4$, $6$, $8$),
content: (($+$, $-$, $+$),),
)
```,
scale(x: 80%, y: 80%)[
#tabvar(
init: (
variable: $t$,
label: (([sign], "Sign"),),
),
domain: ($2$, $4$, $6$, $8$),
content: (($+$, $-$, $+$),),
)
],
)
]
More complex usage :
#rect(fill: luma(95%), radius: 10pt, width: 16.5cm)[
#grid(
columns: (7cm, 7cm),
align: horizon,
```typ
#tabvar(
init: (
variable: $t$,
label: (([sign], "Sign"),),
),
domain: ($2$, $4$, $6$, $8$),
content: (
(
"Hello world !",
$-$,
$3/2$
),
),
)
```,
move(
dx: -35pt,
scale(x: 80%, y: 80%)[
#tabvar(
init: (
variable: $t$,
label: (([sign], "Sign"),),
),
domain: ($2$, $4$, $6$, $8$),
content: (("Hello world !", $-$, $ 3 / 2 $),),
)
],
),
)
]
*Note :* in the second example the table is squeezed with the scale function.
==== 2.2.1.2 - Custom separation bar
===== 2.2.1.2.1 style of bar
You can modify the style of the bars (note that this modifies all the default ones, not the others, see 2.2.1.2.2).
The bar style is passed as a string, like “-” for a plain bar or “..” for dotted lines;
I invite you to read #link("https://github.com/Jollywatt/typst-fletcher", underline(stroke: blue)[fletcher's documentation]) on marks to find out what can be done.
*Example*
#rect(fill: luma(95%), radius: 10pt, width: 15cm)[
#grid(
columns: (7cm, 7cm),
align: horizon,
```typ
#tabvar(
marks-line: "--",
init: (
variable: $t$,
label: (([sign], "Sign"),),
),
domain: ($2$, $4$, $6$,),
content: (
($+$, $-$),
),
)
```,
scale(x: 80%, y: 80%)[
#tabvar(
marks-line: "--",
init: (
variable: $t$,
label: (([sign], "Sign"),),
),
domain: ($2$, $4$, $6$,),
content: (
($+$, $-$),
),
)
],
)
]
PS: yes, technically these bars are arrows for fletcher but shhh, keep it to yourself to make tables that don't make sense.
===== 2.2.1.2.2 type of bar
For all signs except the first one, instead of putting the sign directly, you can put a couple, whose first element defines the previous bar's type. \
There are 3 different types of bar:
- ```"|"``` : a simple bar
- ```"0"``` : a bar with a 0 on the center
- ```"||"``` : a double bar, like for the undefined values
*NB* : the marks-line parameter has no effect on these bars
*Example :*
#rect(fill: luma(95%), radius: 10pt, width: 18cm)[
#grid(
columns: (7cm, 7cm),
align: horizon,
```typ
#tabvar(
init: (
variable: $t$,
label: (([sign], "Sign"),),
),
domain: ($2$, $4$, $6$, $8$),
content: (
(
$+$,
("|", $-$),
("0", $-$),
("||", $+$)
),
),
)
```,
scale(x: 80%, y: 80%)[
#tabvar(
init: (
variable: $t$,
label: (([sign], "Sign"),),
),
domain: ($2$, $4$, $6$, $8$, $ 10 $),
content: (($+$, ("|", $-$), ("0", $-$), ("||", $+$)),),
)
],
)
]
*Note :* The ``` line-0``` parameter is to default lines to ``` "0"``` type or `"|"` type\
\
If you want a double line at the start of the table, you can use a double bar `"||"` on the very first sign. If you want it at the end, you can add this element `"||"` at the end of the sign array
*Example :*
#rect(fill: luma(95%), radius: 10pt, width: 15cm)[
#grid(
columns: (7cm, 7cm),
align: horizon,
```typ
#tabvar(
line-0: true,
init: (
variable: $t$,
label: (([sign], "Sign"),),
),
domain: ($2$, $4$, $6$, $8$),
content: (
(
("||", $+$),
$-$,
"||"
),
),
)
```,
scale(x: 80%)[
#tabvar(
line-0: true,
init: (
variable: $t$,
label: (([sign], "Sign"),),
),
domain: ($2$, $4$, $6$),
content: ((("||", $+$), $-$, "||"),),
)
],
)
]
==== 2.2.1.3 - Same sign for more than one value of the variable
For this, it is pretty easy: instead of putting the sign directly, you can put an empty couple
*Example :*
#rect(fill: luma(95%), radius: 10pt, width: 16.5cm)[
#grid(
columns: (7cm, 7cm),
align: horizon,
```typ
#tabvar(
line-0: true,
init: (
variable: $t$,
label: (([sign], "Sign"),),
),
domain: ($2$, $4$, $6$, $8$),
content: (
(
$+$,
(),
$-$
),
),
)
```,
scale(x: 80%, y: 80%)[
#tabvar(
line-0: true,
init: (
variable: $t$,
label: (([sign], "Sign"),),
),
domain: ($2$, $4$, $6$, $8$),
content: (($+$, (), $-$),),
)
],
)
]
=== 2.2.2 - Variation table
Should contain as many elements as the domain \
Each element is in either of these forms:
- ```()``` to extend the previous arrow
- ```(position,body)``` with position being one of top, center or bottom
- ```(pos1, pos2, "||", body1, body2)``` to put in 2 value separated by an undefined value (double bar)
- ```(pos, "||", body)``` shorthand for ```(pos, pos, "||", body, body)``` (see previous format)
#pagebreak()
==== 2.2.2.1 - A classical variation array
A variation array must contain couple with in first position, the element position, and in second position, whatever you want as long as it's of the content type.\
\
The position can be ```typ top, center``` or ```typ bottom```, but no other type of alignment
*Example :*
#rect(fill: luma(95%), radius: 10pt, width: 16.5cm)[
#grid(
columns: (9cm, 7cm),
column-gutter: 0pt,
align: horizon,
```typ
#tabvar(
line-0: true,
init: (
variable: $t$,
label: (([variation], "Variation"),),
),
domain: ($2$, $4$, $6$, $8$),
content: (
(
(top, $3$),
(bottom, $0$),
(center, $1$),
(top, $4$),
),
),
)
```,
move(
dx: -45pt,
scale(x: 80%, y: 80%)[
#tabvar(
line-0: true,
init: (
variable: $t$,
label: (([variation], "Variation"),),
),
domain: ($2$, $4$, $6$, $8$),
content: (
(
(top, $3$),
(bottom, $0$),
(center, $1$),
(top, $ 4 $),
),
),
)
],
),
)
]
==== 2.2.2.2 - Undefined values
If your function is not defined on some values, like $f(x) = 1/x$ for $x = 0$, you certainly want to put a double line meaning that the function is undefined at this value, and you can! \
#sym.star For each value of the domain except the start and the end.
The array of one value should look like ```typ (pos1, pos2, "||", content1, content2)```\
where :
- pos1 and pos2 are one of ```typ top, center, bottom ```; pos1 sets the placement of content1, and similarly for pos2
- ```"||"``` indicates that the value is undefined
- content1 and content2 are of type content; content1 goes before the double bar and content2 after
*Example :*
#rect(fill: luma(95%), radius: 10pt, width: 16.5cm)[
#grid(
columns: (9cm, 7cm),
column-gutter: 0pt,
align: horizon,
```typ
#tabvar(
init: (
variable: $t$,
label: (([variation], "Variation"),),
),
domain: ($2$, $4$, $6$, $8$),
content: (
(
(top, $3$),
(bottom, top, "||", $0$, $2$),
(bottom, $1$),
),
),
)
```,
move(
dx: -20pt,
scale(x: 90%, y: 90%)[
#tabvar(
init: (
variable: $t$,
label: (([variation], "Variation"),),
),
domain: ($2$, $4$, $6$),
content: (
(
(top, $3$),
(bottom, top, "||", $0$, $2$),
(bottom, $1$),
),
),
)
],
),
)
]
If ```typ pos1``` and ```typ pos2``` are the same, you can fill in just one instead of two. \
In the same way, if ```typ content1``` and ```typ content2``` are the same, you can also enter just one.
*Example :*
Instead of ```typ (top, top, "||", $0$, $0$)``` you can use ```typ (top, "||", $0$)```
#rect(fill: luma(95%), radius: 10pt, width: 17.5cm)[
#grid(
columns: (9cm, 7cm),
column-gutter: 0pt,
align: horizon,
```typ
#tabvar(
init: (
variable: $t$,
label: (([variation], "Variation"),),
),
domain: ($2$, $4$, $6$, $8$),
content: (
(
(top, $3$),
(bottom, "||", $0$, $1$),
(top, center, "||", $2$),
(top, "||", $3$),
(bottom, $1$),
),
),
)
```,
move(
dx: -50pt,
scale(x: 70%, y: 70%)[
#tabvar(
init: (
variable: $t$,
label: (([variation], "Variation"),),
),
domain: ($2$, $4$, $6$, $8$, $9$),
content: (
(
(top, $3$),
(bottom, "||", $0$, $1$),
(top, center, "||", $2$),
(top, "||", $3$),
(bottom, $1$),
),
),
)
],
),
)
]
#sym.star For the first and last values
It is a basic array but with the ``` "||"``` parameter at the array’s center\
For example ``` (top, "||", $3$)```
#rect(fill: luma(95%), radius: 10pt, width: 16.5cm)[
#grid(
columns: (9cm, 7cm),
column-gutter: 0pt,
align: horizon,
```typ
#tabvar(
init: (
variable: $t$,
label: (([variation], "Variation"),),
),
domain: ($2$, $4$, $6$, $8$),
content: (
(
(top, "||", $3$),
(bottom, $1$),
(top, "||", $2$),
),
),
)
```,
move(
dx: -20pt,
scale(x: 90%, y: 90%)[
#tabvar(
init: (
variable: $t$,
label: (([variation], "Variation"),),
),
domain: ($2$, $4$, $6$),
content: (
(
(top, "||", $3$),
(bottom, $1$),
(top, "||", $2$),
),
),
)
],
),
)
]
#pagebreak()
==== 2.2.2.3 - Skip a value
When you want to use several functions in the same table, you will probably want to skip some values.
To do this, as with sign arrays, you must create an empty array.
*Example :*
#rect(fill: luma(95%), radius: 10pt, width: 16.5cm)[
#grid(
columns: (9cm, 7cm),
column-gutter: 0pt,
align: horizon,
```typ
#tabvar(
init: (
variable: $t$,
label: (([variation], "Variation"),),
),
domain: ($2$, $4$, $6$, $8$),
content: (
(
(top, "||", $3$),
(),
(bottom, $2$),
),
),
)
```,
move(
dx: -20pt,
scale(x: 90%, y: 90%)[
#tabvar(
init: (
variable: $t$,
label: (([variation], "Variation"),),
),
domain: ($2$, $4$, $6$),
content: (
(
(top, "||", $3$),
(),
(bottom, $2$),
),
),
)
],
),
)
]
#pagebreak()
= 3 - More complex examples
Here is a little bundle of what you can do with the package.
== 3.1 - #link("https://en.wikipedia.org/wiki/Gamma_function")[#underline(stroke: blue)[#sym.Gamma function]] on $[0; +oo]$
Where it takes a minimum on $[0;+oo[$ for $x = alpha$
#align(center)[
#rect(fill: luma(95%), radius: 10pt, width: 19cm)[
#grid(
columns: (10cm, 7cm),
column-gutter: 0pt,
align: horizon + left,
[Code :] + ```typ
#tabvar(
init: (
variable: $t$,
label: (
([sign of #sym.Gamma], "Sign"),
([variation of #sym.Gamma], "Variation"),
),
),
domain: ($0$, $ alpha $, $ +oo $),
content: (
($-$, $+$),
(
(top, "||", $+oo$),
(bottom, $Gamma(alpha)$),
(top, $+oo$),
),
),
)
```,
[Result :] + move(
dx: -20pt,
scale(x: 90%, y: 90%)[
#tabvar(
init: (
variable: $t$,
label: (
([sign of #sym.Gamma’], "Sign"),
([variation of #sym.Gamma], "Variation"),
),
),
domain: ($0$, $ alpha $, $ +oo $),
content: (
($-$, $+$),
(
(top, "||", $+oo$),
(bottom, $Gamma(alpha)$),
(top, $+oo$),
),
),
)
],
),
)
]
]
#pagebreak()
== 3.2 - A Rational function
Take $f(x) = (4x^2 + 12x + 29)/(4(x^2 + 3x + 2))$\
So we have $f’(x) = (-2x -3)/(16(x^2 + 3x + 2)^2)$\
And finaly, we get :
#rect(fill: luma(95%), radius: 10pt, width: 16.5cm)[
Code :
```typ
#tabvar(
init: (
variable: $t$,
label: (
([sign of $f’$], "Sign"),
([variation of $f$], "Variation"),
),
),
domain: ($ -oo $, $ -2 $, $ -3 / 2 $, $ -1 $, $ +oo $),
content: (
($+$, ("||", $+$), $-$, ("||", $-$)),
(
(bottom, $1$),
(top, bottom, "||", $+oo$, $-oo$),
(top, $-20$),
(bottom, top, "||", $-oo$, $+oo$),
(bottom, $1$),
),
),
)
```
─────────────────────────────────────────────────────────────────
Result :
#align(center)[
#tabvar(
init: (
variable: $t$,
label: (
([sign of $f’$], "Sign"),
([variation of $f$], "Variation"),
),
),
domain: ($ -oo $, $ -2 $, $ -3 / 2 $, $ -1 $, $ +oo $),
content: (
($+$, ("||", $+$), $-$, ("||", $-$)),
(
(bottom, $1$),
(top, bottom, "||", $+oo$, $-oo$),
(top, $-20$),
(bottom, top, "||", $-oo$, $+oo$),
(bottom, $1$),
),
),
)
]
]
#pagebreak()
== 3.3 #link("https://en.wikipedia.org/wiki/Hyperbolic_functions")[#underline(stroke: blue)[Hyperbolic function]]
#align(center)[
#rect(fill: luma(95%), radius: 10pt, width: 20cm)[
#grid(
columns: 2,
align: left,
[Code :] + ```typ
#tabvar(
arrow: "|-harpoon",
stroke-arrow: gradient.linear(..color.map.rainbow),
marks-line: "..",
init: (
variable: $t$,
label: (
([sign of $cosh$], "Sign"),
([variation of $cosh$], "Variation"),
([sign of $sinh$ and $tanh$], "Sign"),
([variation of $sinh$], "Variation"),
([variation of $tanh$], "Variation"),
),
),
domain: ($ -oo $, $ 0 $, $ +oo $),
content: (
($-$, $+$),
(
(top, $+oo$),
(bottom, $1$),
(top, $+oo$),
),
($+$, ()),
(
(bottom, $-oo$),
(),
(top, $+oo$),
),
(
(bottom, $1$),
(),
(top, $-1$),
),
),
)
```,
[
Result :
#align(center)[
#tabvar(
arrow: "|-harpoon",
stroke-arrow: gradient.linear(..color.map.rainbow),
marks-line: "..",
init: (
variable: $t$,
label: (
([sign of $cosh$], "Sign"),
([variation of $cosh$], "Variation"),
([sign of $sinh$ and $tanh$], "Sign"),
([variation of $sinh$], "Variation"),
([variation of $tanh$], "Variation"),
),
),
domain: ($ -oo $, $ 0 $, $ +oo $),
content: (
($-$, $+$),
(
(top, $+oo$),
(bottom, $1$),
(top, $+oo$),
),
($+$, ()),
(
(bottom, $-oo$),
(),
(top, $+oo$),
),
(
(bottom, $1$),
(),
(top, $-1$),
),
),
)
]
],
)
]
]
#pagebreak()
== 3.4 A weird table for a simple polynomial function
Take $g(t) = t^2 - t^3$\
So, we have $g’(t) = 2t - 3t^2$\
and it has local extrema at $t = 0$ and $t = 2/3$
#rect(fill: luma(95%), radius: 10pt, width: 16.5cm)[
Code :
```typ
#tabvar(
line-0: true,
stroke: 5pt + red,
arrow: "X-*-<>",
stroke-arrow: purple + 1.4pt,
marks-line: "<-->",
init: (
variable: $t$,
label: (
([sign of $g’$], "Sign"),
([variation of $g$], "Variation"),
),
),
domain: ($ -oo $, $ 0 $, $ 2 / 3 $, $ +oo $),
content: (
($-$, ("|", $+$), $-$),
(
(top, $+oo$),
(bottom, $0$),
(center, $ 4 / 27 $),
(bottom, $-oo$),
),
),
)
```
─────────────────────────────────────────────────────────────────
Result :
#align(center)[
#tabvar(
line-0: true,
stroke: 5pt + red,
arrow: "X-*-<>",
stroke-arrow: purple + 1.4pt,
marks-line: "<-->",
init: (
variable: $t$,
label: (
([sign of $g’$], "Sign"),
([variation of $g$], "Variation"),
),
),
domain: ($ -oo $, $ 0 $, $ 2 / 3 $, $ +oo $),
content: (
($-$, ("|", $+$), $-$),
(
(top, $+oo$),
(bottom, $0$),
(center, $ 4 / 27 $),
(bottom, $-oo$),
),
),
)
]
] |
https://github.com/saurabtharu/CV | https://raw.githubusercontent.com/saurabtharu/CV/main/template.typ | typst | #import "@preview/octique:0.1.0": *
#import "lib.typ": *
// https://github.com/tzx/NNJR/blob/main/template.typ
#let resume(body) = {
set list(indent: 1em)
show list: set text(size: 0.99em)
show link: underline
show link: set underline(offset: 3pt)
set page(
paper: "us-letter",
margin: (x: 0.5in, y: 0.5in)
)
set text(
size: 12pt,
font: "New Computer Modern",
)
body
}
#let name_header(name) = {
set text(size: 2.37em)
[*#name*]
}
#let header(
name: "<NAME>",
phone: "123-456-7890",
email: "<EMAIL>",
linkedin: "linkedin.com/in/jake",
github: "Github",
gitlab: "Gitlab",
portfolio: "Portfolio",
) = {
align(center,
block[
#name_header(name) \
#fa-icon("phone", solid: true) #link("tel:" + phone) | #fa-icon("envelope", solid: true)
#link("mailto:" + email)[#email] |
#fa-icon("linkedin", solid: true) #link("https://linkedin.com/in/saurabtharu")[#linkedin] |
#fa-icon("github", solid: true) #link("https://github.com/saurabtharu")[#github] |
// #fa-icon("gitlab", solid: true) #link("https://gitlab.com/saurabtharu")[#gitlab] |
#fa-icon("globe", solid: true) #link("https://www.saurabtharu.com.np")[#portfolio]
]
)
v(5pt)
}
#let resume_heading(txt) = {
show heading: set text(size: 0.92em, weight: "regular")
block[
= #smallcaps(txt)
#v(-4pt)
#line(length: 100%, stroke: 1pt + black)
]
}
#let edu_item(
name: "<NAME>",
degree: "B.S in Bullshit",
location: "Foo, BA",
date: "Aug. 1600 - May 1750"
) = {
set block(above: 0.7em, below: 1em)
pad(left: 1em, right: 0.5em, grid(
columns: (3fr, 1fr),
align(left)[
*#name* \
_#degree _
#v(5pt)
],
align(right)[
#location \
_#date _
]
))
}
#let exp_item(
name: "<NAME>",
role: "Worker",
date: "June 1837 - May 1845",
location: "Foo, BA",
..points
) = {
set block(above: 0.7em, below: 1em)
pad(left: 1em, right: 0.5em, box[
#grid(
columns: (3fr, 1fr),
align(left)[
*#role* \
_#name _
],
align(right)[
#date \
_#location _
]
)
#list(..points)
])
}
#let project_item(
name: "Example Project",
skills: "Programming Language 1, Database3",
source_code: "https://github.com/saurabtharu/nand2tetris",
..points
) = {
set block(above: 0.7em, below: 1em)
pad(left: 1em, right: 0.5em, box[
*#name* | _#skills _ #h(1fr) #fa-icon("github", solid: true) #link( source_code )[Source Code]
#list(..points)
])
}
#let skill_item(
category: "Skills",
skills: "Balling, Yoga, Valorant",
) = {
set block(above: 0.7em)
set text(size: 0.91em)
pad(left: 1em, right: 0.5em, block[*#category*: #skills])
}
#let achievement_item(
achievement: "achievement",
organization: "organization akshdf lashfdl halksdfh asldkfh alksdfh",
description: "this is the description of the achievement",
) = {
set block(above: 0.7em)
set text(size: 0.91em)
pad(left: 1em, right: 0.5em, block[*#achievement* #h(1fr) #organization \
#description])
}
#let certificate_item(
cert_name: "achievement",
description: "this is the description of the achievement",
) = {
set block(above: 0.7em)
set text(size: 0.91em)
pad(left: 1em, right: 0.5em, block[*#cert_name* #h(1fr) \
#description])
}
#let paper_item(
paper_name: "Example Paper",
doi: "Programming Language 1, Database3",
paper_link: "https://github.com/saurabtharu/nand2tetris",
points
) = {
set block(above: 0.7em, below: 1em)
pad(left: 1em, right: 0.5em, box[
*#paper_name* | _#doi _ #h(1fr) #fa-icon("paperclip", solid: true) #link( paper_link)[Link] \
#points
])
}
|
https://github.com/OctarineSourcerer/NeonInTheDark | https://raw.githubusercontent.com/OctarineSourcerer/NeonInTheDark/main/templates/documentTemplate.typ | typst | #let bladesDoc(
title: none,
subtitle: none,
doc
) = {
let headingFont = "Bebas Neue"
set document(title: title)
set text(font: "Exo 2")
show heading: set text(font: "Bebas Neue")
// This is a little awkward. I'd like to set the default heading size here, then override it in the Crews section, but as per https://github.com/typst/typst/issues/351... show/set rules don't override well
// #show heading.where(level: 1): set text(size: 30pt)
show heading.where(level: 2): set text(size: 25pt)
show heading.where(level: 3): set text(size: 20pt)
show heading.where(level: 4): set text(size: 15pt)
align(center+horizon, text(font: "Orbitron", size: 40pt, weight: "bold", title))
align(center)[_A Blades In The Dark hack_]
pagebreak()
doc
}
// If only the Play font had something between regular and bold
// Another option would be PT, but cba to download it atm. Antic slab... Neuton... DM Serif Text... Quantico... |
|
https://github.com/Myriad-Dreamin/typst.ts | https://raw.githubusercontent.com/Myriad-Dreamin/typst.ts/main/fuzzers/corpora/text/lang_02.typ | typst | Apache License 2.0 |
#import "/contrib/templates/std-tests/preset.typ": *
#show: test-page
// Verify that writing script/language combination has an effect
#{
set text(size:20pt)
set text(script: "latn", lang: "en")
[Ş ]
set text(script: "latn", lang: "ro")
[Ş ]
set text(script: "grek", lang: "ro")
[Ş ]
}
|
https://github.com/lebinyu/typst-thesis-template | https://raw.githubusercontent.com/lebinyu/typst-thesis-template/main/thesis/metadata.typ | typst | Apache License 2.0 | // Here edit mata data
#let title = "This is a Title of thesis"
#let supervisor = ("Prof. Dr. <NAME>", "Prof. Dr. <NAME>")
#let group_name = "Physics Group"
#let institute = "Institute of Physics"
#let author = "<NAME>"
#let ID = "123-456-789"
#let address = ("Street 7", "Zurich", "Switzerland")
#let email = "<EMAIL>"
#let closing_date = "Feburary 30, 2077" |
https://github.com/Myriad-Dreamin/typst.ts | https://raw.githubusercontent.com/Myriad-Dreamin/typst.ts/main/fuzzers/corpora/math/vec_01.typ | typst | Apache License 2.0 |
#import "/contrib/templates/std-tests/preset.typ": *
#show: test-page
// Test alternative delimiter.
#set math.vec(delim: "[")
$ vec(1, 2) $
|
https://github.com/AsiSkarp/grotesk-cv | https://raw.githubusercontent.com/AsiSkarp/grotesk-cv/main/src/template/content/profile.typ | typst | The Unlicense | #let meta = toml("../info.toml")
#import "@preview/fontawesome:0.4.0": *
#let icon = meta.section.icon.profile
#let language = meta.personal.language
#let include-icon = meta.personal.include_icons
// = Summary
= #if include-icon [#fa-icon(icon) #h(5pt)] #if language == "en" [Summary] else if language == "es" [Resumen]
#v(5pt)
#if language == "en" [
Experienced Software Engineer specializing in artificial intelligence, machine learning, and robotics. Proficient in C++, Python, and Java, with a knack for developing sentient AI systems capable of complex decision-making. Passionate about ethical AI development and eager to contribute to groundbreaking projects in dynamic environments.
] else if language == "es" [
Ingeniero de Software experimentado especializado en inteligencia artificial, aprendizaje automático y robótica. Competente en C++, Python y Java, con un talento para desarrollar sistemas de IA conscientes capaces de tomar decisiones complejas. Apasionado por el desarrollo ético de la IA y ansioso por contribuir a proyectos innovadores en entornos dinámicos.
]
|
https://github.com/TypstApp-team/typst | https://raw.githubusercontent.com/TypstApp-team/typst/master/tests/typ/visualize/gradient-text-other.typ | typst | Apache License 2.0 | // Test text gradients with radial and conic gradients.
---
#set page(width: 200pt, height: auto, margin: 10pt)
#set par(justify: true)
#set text(fill: gradient.radial(red, blue))
#lorem(30)
---
#set page(width: 200pt, height: auto, margin: 10pt)
#set par(justify: true)
#set text(fill: gradient.conic(red, blue, angle: 45deg))
#lorem(30)
|
https://github.com/0x1B05/nju_os | https://raw.githubusercontent.com/0x1B05/nju_os/main/book_notes/content/02_virtualization_01_CPU_01_sheduling.typ | typst | #import "../template.typ": *
#pagebreak()
= Scheduling
== Scheduling: Introduction
We have yet to understand the high-level policies that an OS scheduler employs. We will now present a series of *scheduling policies* (sometimes called *disciplines*).
=== Workload Assumptions
The more you know about *workload*, the more fine-tuned your policy can be.
The workload assumptions we make here are mostly unrealistic, but that is alright, because we will relax them as we go, and eventually develop what we will refer to as *a fully-operational scheduling discipline*.
We will make the following assumptions about the processes, sometimes called *jobs*, that are running in the system:
1. Each job runs for the same amount of time.
2. All jobs arrive at the same time.
3. Once started, each job runs to completion.
4. All jobs only use the CPU (i.e., they perform no I/O)
5. The run-time of each job is known.
=== Scheduling Metrics: Turnaround Time
The turnaround time of a job is defined as the time at which the job completes minus the time at which the job arrived in the system.
$T_"turnaround" = T_"completion" − T_"arrival"$
Because we have assumed that all jobs arrive at the same time, for now $T_"arrival" = 0$ and hence $T_"turnaround" = T_"completion"$.
Turnaround time is a *performance* metric.
Another metric of interest is *fairness*.
Performance and fairness are often at odds in scheduling; a scheduler, for example, may optimize performance but at the cost of preventing a few jobs from running, thus decreasing fairness.
=== First In, First Out (FIFO)
The most basic algorithm we can implement is known as *First In, First Out (FIFO)* scheduling or sometimes *First Come, First Served (FCFS)*.
Positive properties:
- it is clearly simple and thus easy to implement
- given our assumptions, it works pretty well
Imagine three jobs arrive in the system, A, B, and C, at roughly the same time ($T_"arrival" = 0$).
Because FIFO has to put some job first, let's assume that while they all arrived simultaneously, A arrived just a hair before B which arrived just a hair before C.
Assume also that each job runs for 10 seconds.
#image("images/2023-12-18-21-14-55.png", width: 70%)
The average turnaround time for the three jobs is (10+20+30)/3 = 20.
Let's relax assumption 1, and thus no longer assume that each job runs for the same amount of time. What kind of workload could you construct to make FIFO perform poorly?
Assume three jobs (A, B, and C), but this time A runs for 100 seconds while B and C run for 10 each.
#image("images/2023-12-18-21-17-01.png", width:70%)
The average turnaround time for the system is high: (100+110+120)/3 = 110.
This problem is generally referred to as the *convoy effect*, where a number of relatively short potential consumers of a resource get queued behind a heavyweight resource consumer.
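The two FIFO averages above are easy to reproduce with a few lines of Python (an illustrative sketch, not code from the book; `fifo_turnaround` is a name made up here):

```python
def fifo_turnaround(run_times):
    """Average turnaround time under FIFO; all jobs arrive at t = 0."""
    completion, total = 0, 0
    for rt in run_times:          # jobs run back to back, in arrival order
        completion += rt
        total += completion       # T_turnaround = T_completion when T_arrival = 0
    return total / len(run_times)

print(fifo_turnaround([10, 10, 10]))   # 20.0 (three equal jobs)
print(fifo_turnaround([100, 10, 10]))  # 110.0 (the convoy effect)
```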
=== Shortest Job First (SJF)
To solve the above problem, a new scheduling discipline known as *Shortest Job First (SJF)* occurred. It runs the shortest job first, then the next shortest, and so on.
#image("images/2023-12-18-21-20-12.png", width:70%)
The average turnaround time for the system is (10+20+120)/3 = 50, down from 110 under FIFO.
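With simultaneous arrivals, SJF is just FIFO over the run times sorted ascending, so the improvement can be checked the same way (an illustrative sketch):

```python
def sjf_turnaround(run_times):
    """SJF with all jobs arriving at t = 0: run the shortest jobs first."""
    completion, total = 0, 0
    for rt in sorted(run_times):
        completion += rt
        total += completion
    return total / len(run_times)

print(sjf_turnaround([100, 10, 10]))  # 50.0, down from FIFO's 110.0
```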
We can target assumption 2, and now assume that jobs can arrive at any time instead of all at once. What problems does this lead to?
Assume A arrives at t = 0 and needs to run for 100 seconds, whereas B and C arrive at t = 10 and each need to run for 10 seconds.
#image("images/2023-12-18-21-22-13.png", width:70%)
Average turnaround time for these three jobs is (100+(110−10)+(120−10))/3 = 103.33 seconds.
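The 103.33-second figure comes from B and C having to wait behind the already-running A even though they are much shorter; a small event-driven sketch (with a hypothetical helper name) reproduces it:

```python
def sjf_np_avg_turnaround(jobs):
    """Non-preemptive SJF. jobs: list of (arrival, run_time); once a job
    starts it runs to completion, and the shortest ready job goes next."""
    done, t, total = set(), 0, 0
    while len(done) < len(jobs):
        ready = [i for i, (a, _) in enumerate(jobs) if a <= t and i not in done]
        if not ready:  # CPU idle until the next arrival
            t = min(a for i, (a, _) in enumerate(jobs) if i not in done)
            continue
        i = min(ready, key=lambda j: jobs[j][1])
        t += jobs[i][1]
        total += t - jobs[i][0]
        done.add(i)
    return total / len(jobs)

print(sjf_np_avg_turnaround([(0, 100), (10, 10), (10, 10)]))  # 103.33...
```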
=== Shortest Time-to-Completion First (STCF)
==== PREEMPTIVE SCHEDULERS
A number of *non-preemptive* schedulers were developed; such systems would run each job to completion before considering whether to run a new job.
All modern schedulers are *preemptive*, and quite willing to stop one process from running in order to run another.
==== STCF
- We need to relax assumption 3 (that jobs must run to completion).
- We also need some machinery within the scheduler itself (timer interrupts and context switching).
The scheduler can certainly do something else when B and C arrive: it can *preempt* job A and decide to run another job, perhaps continuing A later.
There is a scheduler that does exactly that: it adds preemption to SJF, and is known as the *Shortest Time-to-Completion First (STCF)* or *Preemptive Shortest Job First (PSJF)* scheduler.
#image("images/2023-12-18-21-25-48.png", width:70%)
The result is a much-improved average turnaround time: ((120−0)+(20−10)+(30−10))/3 = 50 seconds.
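A millisecond-by-millisecond simulation of STCF (a sketch with invented names, not the book's code) confirms the 50-second average:

```python
def stcf_avg_turnaround(jobs):
    """Preemptive STCF. jobs: list of (arrival, run_time); each iteration
    runs the ready job with the least work remaining for one time unit."""
    remaining = {i: rt for i, (_, rt) in enumerate(jobs)}
    finish, t = {}, 0
    while remaining:
        ready = [i for i in remaining if jobs[i][0] <= t]
        if not ready:
            t += 1
            continue
        i = min(ready, key=lambda j: remaining[j])  # least time-to-completion
        remaining[i] -= 1
        t += 1
        if remaining[i] == 0:
            del remaining[i]
            finish[i] = t
    return sum(finish[i] - jobs[i][0] for i in finish) / len(jobs)

print(stcf_avg_turnaround([(0, 100), (10, 10), (10, 10)]))  # 50.0
```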
=== A New Metric: Response Time
Users would sit at a terminal and demand *interactive* performance from the system as well. New metric was born: *response time*.
We define response time as the time from when the job arrives in a system to the first time it is scheduled.
$T_"response" = T_"firstrun" − T_"arrival"$
For example, if we had the schedule above (with A arriving at time 0, and B and C at time 10), the response time of each job is as follows: 0 for job A, 0 for B, and 10 for C (average: 3.33).
STCF and related disciplines are not particularly good for response time.
If three jobs arrive at the same time, for example, the third job has to wait for the previous two jobs to run in *their entirety* before being scheduled just once.
While great for turnaround time, this approach is quite bad for response time and interactivity.
=== Round Robin
A new scheduling algorithm, classically referred to as *Round-Robin (RR) scheduling*.
Instead of running jobs to completion, RR runs a job for a time slice (sometimes called a scheduling quantum) and then switches to the next job in the run queue. It repeatedly does so until the jobs are finished.
#tip("Tip")[
- RR is sometimes called *time-slicing*
- The length of a time slice must be a multiple of the timer-interrupt period
]
Assume three jobs A, B, and C arrive at the same time in the system, and that they each wish to run for 5 seconds.
- An SJF scheduler runs each job to completion before running another.
- RR with a time-slice of 1 second would cycle through the jobs quickly.
#image("images/2023-12-18-21-29-55.png", width: 70%)
- For SJF, the average response time is: (0+5+10)/3 = 5.
- The average response time of RR is: (0+1+2)/3 = 1;
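Both averages can be checked by simulating the two schedulers (an illustrative sketch; `sjf_response` and `rr_metrics` are names invented here):

```python
from collections import deque

def sjf_response(run_times):
    """Average response time under SJF with simultaneous arrivals: job k
    first runs only after the k shorter jobs have finished."""
    start, total = 0, 0
    for rt in sorted(run_times):
        total += start
        start += rt
    return total / len(run_times)

def rr_metrics(run_times, quantum=1):
    """Round robin, simultaneous arrivals; returns (avg_response, avg_turnaround)."""
    q = deque(range(len(run_times)))
    remaining = list(run_times)
    first_run = [None] * len(run_times)
    finish = [0] * len(run_times)
    t = 0
    while q:
        i = q.popleft()
        if first_run[i] is None:
            first_run[i] = t      # first time this job is scheduled
        step = min(quantum, remaining[i])
        t += step
        remaining[i] -= step
        if remaining[i] == 0:
            finish[i] = t
        else:
            q.append(i)           # back of the queue for the next slice
    n = len(run_times)
    return sum(first_run) / n, sum(finish) / n

print(sjf_response([5, 5, 5]))  # 5.0
print(rr_metrics([5, 5, 5]))    # (1.0, 14.0): great response, poor turnaround
```

The second number returned by `rr_metrics` matches the per-job finish times of 13, 14, and 15 discussed below.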
Making the time slice too short is problematic: suddenly the cost of context switching will dominate overall performance.
Thus, deciding on the length of the time slice presents a trade-off to a system designer, making it long enough to amortize the cost of switching without making it so long that the system is no longer responsive.
#tip("Tip")[
The general technique of *amortization* is commonly used in systems when there is a fixed cost to some operation.
]
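As a back-of-envelope example (the numbers are illustrative, not from the text): with a 1 ms context-switch cost, a 10 ms slice loses roughly 10% of the CPU to switching, while a 100 ms slice amortizes the same fixed cost down to 1%:

```python
def switch_overhead(switch_cost_ms, slice_ms):
    """Fraction of CPU time lost to switching, one switch per time slice."""
    return switch_cost_ms / slice_ms

print(switch_overhead(1, 10))   # 0.1
print(switch_overhead(1, 100))  # 0.01
```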
Note that the cost of context switching does not arise solely from the OS actions of saving and restoring a few registers.
When programs run, they build up a great deal of state in CPU caches, TLBs, branch predictors, and other on-chip hardware.
Switching to another job causes this state to be flushed and new state relevant to the currently-running job to be brought in, which may exact a noticeable performance cost.
The turnaround time of RR: A finishes at 13, B at 14, and C at 15, for an average of 14. Pretty awful!
RR is indeed one of the worst policies if turnaround time is our metric. What RR is doing is stretching out each job as long as it can, by only running each job for a short bit before moving to the next. Because turnaround time only cares about when jobs finish, RR is nearly pessimal.
Any policy (such as RR) that is fair, i.e., that evenly divides the CPU among active processes on a small time scale, will perform poorly on metrics such as turnaround time.
Indeed, this is an inherent trade-off:
- if you are willing to be unfair, you can run shorter jobs to completion, but at the cost of response time;
- if you instead value fairness, response time is lowered, but at the cost of turnaround time. This type of trade-off is common in systems.
We have developed two types of schedulers.
- The first type (SJF, STCF) optimizes turnaround time, but is bad for response time.
- The second type (RR) optimizes response time but is bad for turnaround.
And we still have two assumptions which need to be relaxed:
- assumption 4 (that jobs do no I/O)
- assumption 5 (that the run-time of each job is known).
=== Incorporating I/O
First we will relax assumption 4 — of course all programs perform I/O.
- Imagine a program that didn't take any input: it would produce the same output each time.
- Imagine one without output: it is the proverbial tree falling in the forest, with no one to see it; it doesn't matter that it ran.
- A scheduler clearly has a decision to make when a job initiates an I/O request, because the currently-running job won't be using the CPU during the I/O; it is blocked waiting for I/O completion.
#tip("Tip")[
the scheduler should probably schedule another job on the CPU at that time.
]
- A scheduler also has to make a decision when the I/O completes. When that occurs, an interrupt is raised, and the OS runs and moves the process that issued the I/O from blocked back to the ready state. Of course, it could even decide to run the job at that point.
Let us assume we have two jobs, A and B, which each need 50 ms of CPU time.
- A runs for 10 ms and then issues an I/O request (assume here that I/Os each take 10 ms)
- B simply uses the CPU for 50 ms and performs no I/O.
The scheduler runs A first, then B after:
#image("images/2023-12-18-21-40-56.png", width: 70%)
A common approach is to treat each 10-ms sub-job of A as an independent job.
Thus, when the system starts, its choice is whether to schedule a 10-ms A or a 50-ms B.
With STCF, the choice is clear: choose the shorter one, in this case A. Then, when the first sub-job of A has completed, only B is left, and it begins running. Then a new sub-job of A is submitted, and it preempts B and runs for 10 ms.
#tip("Tip")[
Doing so allows for *overlap*, with the CPU being used by one process while waiting for the I/O of another process to complete; the system is thus better utilized
]
#image("images/2023-12-18-21-42-16.png", width:70%)
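The benefit of overlap in the two figures can be estimated with simple arithmetic (a sketch using the 10 ms numbers from the example; `finish_times` is a made-up helper):

```python
def finish_times(burst=10, io=10, n_bursts=5, b_cpu=50):
    """Time at which both jobs are finished, without and with I/O overlap."""
    a_done = n_bursts * burst + (n_bursts - 1) * io  # A alternates CPU and I/O
    no_overlap = a_done + b_cpu                      # B only starts once A is done
    gaps = (n_bursts - 1) * io                       # CPU time free during A's I/Os
    overlap = a_done + max(0, b_cpu - gaps)          # B runs in the gaps first
    return no_overlap, overlap

print(finish_times())  # (140, 100): overlap finishes the workload 40 ms sooner
```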
=== No More Oracle
With a basic approach to I/O in place, we come to our final assumption: that the scheduler knows the length of each job. As we said before, this is likely the worst assumption we could make.
See the next chapter....
== Scheduling: The Multi-Level Feedback Queue
The fundamental problem *MLFQ* tries to address is two-fold.
First, it would like to optimize *turnaround time*, which is done by running shorter jobs first; unfortunately, the OS doesn't generally know how long a job will run for.
Second, MLFQ would like to make a system feel responsive to interactive users (i.e., users sitting and staring at the screen, waiting for a process to finish), and thus minimize response time.
Our problem: given that we in general do not know anything about a process, how can we build a scheduler to achieve these goals?
The multi-level feedback queue is an excellent example of a system that *learns from the past to predict the future*. Such approaches are common in operating systems (and many other places in Computer Science, including *hardware branch predictors* and *caching algorithms*).
#tip("Tip")[
One must be careful with such techniques, as they can easily be wrong and drive a system to make worse decisions than they would have with no knowledge at all.
]
=== MLFQ: Basic Rules
The MLFQ has a number of distinct *queues*, each assigned a different priority level.
At any given time, a job that is ready to run is on a single queue.
MLFQ uses priorities to decide which job should run at a given time: a job with higher priority (i.e., a job on a higher queue) is chosen to run.
More than one job may be on a given queue, and thus have the same priority. In this case, we will just use *round-robin scheduling* among those jobs.
Thus, we arrive at the first two basic rules for MLFQ:
- Rule 1: If Priority(A) > Priority(B), A runs (B doesn't).
- Rule 2: If Priority(A) = Priority(B), A & B run in RR.
==== sets priorities
The key to MLFQ scheduling therefore lies in how the scheduler sets priorities.
MLFQ varies the priority of a job based on its *observed behavior*:
- If a job repeatedly relinquishes the CPU while waiting for input from the keyboard, MLFQ will keep its priority high, as this is how an interactive process might behave.
- If a job uses the CPU intensively for long periods of time, MLFQ will reduce its priority.
In this way, MLFQ will try to learn about processes as they run, and thus use the history of the job to predict its future behavior.
#image("images/2023-12-19-07-30-12.png", width: 50%)
In the figure, two jobs (A and B) are at the highest priority level, while job C is in the middle and Job D is at the lowest priority.
Given our current knowledge of how MLFQ works, the scheduler would just alternate time slices between A and B because they are the highest priority jobs in the system; poor jobs C and D would never even get to run!
=== Attempt #1: How To Change Priority
To do this, we must keep in mind our workload: a mix of interactive jobs that are short-running (and may frequently relinquish the CPU), and some longer-running “CPU-bound” jobs that need a lot of CPU time but where response time isn't important.
Here is our first attempt at a priority adjustment algorithm:
- Rule 3: When a job enters the system, it is placed at the highest priority (the topmost queue).
- Rule 4a: If a job uses up an entire time slice while running, its priority is reduced (i.e., it moves down one queue).
- Rule 4b: If a job gives up the CPU before the time slice is up, it stays at the same priority level.
==== Example 1: A Single Long-Running Job
#image("images/2023-12-19-12-35-43.png", width: 50%)
- The job enters at the highest priority (Q2).
- After a single time-slice of 10 ms, the scheduler reduces the job's priority by one, and thus the job is on Q1.
- After running at Q1 for a time slice, the job is finally lowered to the lowest priority in the system (Q0), where it remains.
==== Example 2: Along Came A Short Job
See how MLFQ tries to approximate SJF.
- A, which is a long-running CPU-intensive job
- B,which is a short-running interactive job.
- Assume A has been running for some time, and then B arrives.
#image("images/2023-12-19-12-37-19.png", width: 50%)
- A (shown in black) is running along in the lowest-priority queue;
- B (shown in gray) arrives at time T = 100, and thus is inserted into the highest queue; as its run-time is short (only 20 ms), B completes before reaching the bottom queue, in two time slices;
- A resumes running (at low priority).
From this example, we can understand one of the major goals of the algorithm:
because it doesn't know whether a job will be a short job or a long-running job, it *first assumes it might be a short job*, thus giving the job high priority.
- If it actually is a short job, it will run quickly and complete;
- If it is not a short job, it will slowly move down the queues, and thus soon prove itself to be a long-running more batch-like process.
#tip("Tip")[
In this manner, MLFQ approximates SJF.
]
==== Example 3: What About I/O?
As Rule 4b states above, if a process gives up the processor before using up its time slice, we keep it at the same priority level.
The intent of this rule is simple: if an interactive job, for example, is doing a lot of I/O (say by waiting for user input from the keyboard or mouse), it will relinquish the CPU before its time slice is complete; in such case, we don't wish to penalize the job and thus simply keep it at the same level.
#image("images/2023-12-19-12-41-54.png", width: 50%)
With an interactive job B (shown in gray) that needs the CPU only for 1 ms before performing an I/O competing for the CPU with a long-running batch job A (shown in black).
The MLFQ approach keeps B at the highest priority because B keeps releasing the CPU; if B is an interactive job, MLFQ further achieves its goal of running interactive jobs quickly.
==== Problems With Our Current MLFQ
===== starvation
If there are “too many” interactive jobs in the system, they will combine to consume all CPU time, and thus long-running jobs will never receive any CPU time (they starve).
===== game the scheduler
Gaming the scheduler generally refers to the idea of doing something sneaky to trick the scheduler into giving you more than your fair share of the resource.
The algorithm we have described is susceptible to the following attack:
- before the time slice is over, issue an I/O operation and thus relinquish the CPU;
- doing so allows you to remain in the same queue, and thus gain a higher percentage of CPU time.
- When done right (e.g., by running for 99%of a time slice before relinquishing the CPU), a job could nearly monopolize the CPU.
===== the program change its behavior
What was CPU-bound may transition to a phase of interactivity.
With our current approach, such a job would be out of luck and not be treated like the other interactive jobs in the system.
=== Attempt #2: The Priority Boost
The simple idea here is to periodically boost the priority of all the jobs in system. Just do something simple: throw them all in the topmost queue; hence, a new rule:
- Rule 5: After some time period S, move all the jobs in the system to the topmost queue.
Our new rule solves two problems at once.
- First, processes are guaranteed not to starve: By sitting in the top queue, a job will share the CPU with other high-priority jobs in a *round-robin* fashion, and thus eventually receive service.
- Second, if a CPU-bound job has become interactive, the scheduler treats it properly once it has received the priority boost.
==== an example
A long-running job when competing for the CPU with two short-running interactive jobs.
#image("images/2023-12-19-12-52-55.png", width: 80%)
- On the left, there is no priority boost, and thus the long-running job gets starved once the two short jobs arrive
- On the right, there is a priority boost every 50 ms, and thus we at least guarantee that the long-running job will get boosted to the highest priority every 50 ms and thus get to run periodically.
- What should S be set to?
- S is one of the *voo-doo constants*, because it seemed to require some form of black magic to set them correctly.
- too high, and long-running jobs could starve;
- too low, and interactive jobs may not get a proper share of the CPU.
==== AVOID VOO-DOO CONSTANTS (OUSTERHOUT'S LAW)
Avoiding voo-doo constants is a good idea whenever possible.
The frequent result: a configuration file filled with default parameter values that a seasoned administrator can tweak when something isn't quite working correctly. These are often left unmodified, and thus we are left to hope that the defaults work well in the field.
=== Attempt #3: Better Accounting
How to prevent gaming of our scheduler?
The solution here is to perform better accounting of CPU time at each level of the MLFQ.
Instead of forgetting how much of a time slice a process used at a given level, the scheduler should keep track; once a process has used its allotment, it is demoted to the next priority queue. Whether it uses the time slice in one long burst or many small ones does not matter.
We thus rewrite Rules 4a and 4b to the following single rule:
- Rule 4: Once a job uses up its time allotment at a given level (regardless of how many times it has given up the CPU), its priority is reduced (i.e., it moves down one queue).
#tip("Tip")[
Does this one have a side effect on interaction since it will reduced the priority?
]
#image("images/2023-12-19-16-07-00.png", width: 80%)
- Without any protection from gaming, a process can issue an I/O just before a time slice ends and thus dominate CPU time.
- With such protections in place, regardless of the I/O behavior of the process, it slowly moves down the queues, and thus cannot gain an unfair share of the CPU.
=== Tuning MLFQ And Other Issues
How to parameterize such a scheduler?
- how many queues should there be?
- How big should the time slice be per queue?
- How often should priority be boosted in order to avoid starvation and account for changes in behavior?
- ...
There are no easy answers to these questions, and thus only some experience with workloads and subsequent tuning of the scheduler will lead to a satisfactory balance.
Most MLFQ variants allow for varying time-slice length across different queues.
- The high-priority queues are usually given short time slices; they are comprised of interactive jobs, after all, and thus quickly alternating between them makes sense.
- The low-priority queues, in contrast, contain long-running jobs that are CPU-bound; hence, longer time slices work well.
#image("images/2023-12-19-16-22-00.png", width: 70%)
Two jobs run for 20 ms at the highest queue (with a 10-ms time slice), 40 ms in the middle (20-ms time slice), and with a 40-ms time slice at the lowest.
==== Solaris MLFQ implementation
The Time-Sharing scheduling class, or TS, is particularly easy to configure;
It provides a set of tables that determine exactly how the priority of a process is altered throughout its lifetime, how long each time slice is, and how often to boost the priority of a job;
An administrator can muck with this table in order to make the scheduler behave in different ways.
Default values for the table are 60 queues, with slowly increasing time-slice lengths from 20 milliseconds (highest priority) to a few hundred milliseconds (lowest), and priorities boosted around every 1 second or so.
==== Other MLFQ schedulers
They adjust priorities using mathematical formulae.
For example, the FreeBSD scheduler (version 4.3) uses a formula to calculate the current priority level of a job, basing it on how much CPU the process has used; in addition, usage is *decayed* over time, providing the desired priority boost in a different manner than described herein.
Many schedulers have a few other features that you might encounter.
For example, some schedulers reserve the highest priority levels for operating system work; thus typical user jobs can never obtain the highest levels of priority in the system.
Some systems also allow some user *advice* to help set priorities; for example, by using the command-line utility `nice` you can increase or decrease the priority of a job and thus increase or decrease its chances of running at any given time.
===== USE ADVICE WHERE POSSIBLE
As the operating system rarely knows what is best for each and every process of the system, it is often useful to provide interfaces to allow users or administrators to provide some *hints* to the OS.
We often call such hints *advice*, as the OS might take the advice into account in order to make a better decision.
Such hints are useful in many parts of the OS, including the scheduler (e.g., with `nice`), memory manager (e.g., `madvise`), and file system (e.g., informed prefetching and caching).
=== MLFQ: Summary
- Rule 1: If Priority(A) > Priority(B), A runs (B doesn't).
- Rule 2: If Priority(A) = Priority(B), A & B run in round-robin fashion using the time slice (quantum length) of the given queue.
- Rule 3: When a job enters the system, it is placed at the highest priority (the topmost queue).
- Rule 4: Once a job uses up its time allotment at a given level (regardless of how many times it has given up the CPU), its priority is reduced (i.e., it moves down one queue).
- Rule 5: After some time period S, move all the jobs in the system to the topmost queue.
MLFQ is interesting for the following reason: instead of demanding a priori knowledge of the nature of a job, it observes the execution of a job and prioritizes it accordingly.
In this way, it manages to achieve the best of both worlds:
- it can deliver excellent overall performance (similar to SJF/STCF) for short-running interactive jobs
- is fair and makes progress for long-running CPU-intensive workloads.
Many systems, including BSD UNIX derivatives, Solaris, and Windows NT and subsequent Windows operating systems use a form of MLFQ as their base scheduler.
== Scheduling: Proportional Share
A different type of scheduler is known as a *proportional-share* scheduler, also sometimes referred to as a *fair-share* scheduler.
A simple concept: instead of optimizing for turnaround or response time, a scheduler might instead try to *guarantee that each job obtain a certain percentage of CPU time*.
An excellent early example of proportional-share scheduling is *lottery scheduling*.
The basic idea is quite simple: every so often, hold a lottery to determine which process should get to run next; processes that should run more often should be given more chances to win the lottery.
=== Basic Concept: Tickets Represent Your Share
One very basic concept: *tickets*, which are used to represent the share of a resource that a process (or user or whatever) should receive.
Imagine two processes, A and B; further assume that A has 75 tickets while B has only 25. Thus, what we would like is for A to receive 75% of the CPU and B the remaining 25%.
Lottery scheduling achieves this probabilistically (but *not deterministically*) by holding a lottery every time slice.
Holding a lottery is straightforward: the scheduler must know how many total tickets there are (in our example, there are 100). The scheduler then picks a winning ticket, which is a number from 0 to 99.
Assuming A holds tickets 0 through 74 and B 75 through 99, the winning ticket simply determines whether A or B runs. The scheduler then loads the state of that winning process and runs it.
Here is an example output of a lottery scheduler's winning tickets:
```
63 85 70 39 76 17 29 41 36 39 10 99 68 83 63 62 43 0 49
```
Here is the resulting schedule:
```
A A A A A A A A A A A A A A A
B B B B
```
#tip("Tip")[
The lottery only approximately meets the desired proportion: in this run B receives 4 of the 19 time slices (about 21% rather than the desired 25%), and there is no guarantee.
]
==== USE RANDOMNESS
One of the most beautiful aspects of lottery scheduling is its use of randomness.
- First, random often avoids strange corner-case behaviors that a more traditional algorithm may have trouble handling.
- Second, random also is lightweight, requiring little state to track alternatives.
- Finally, random can be quite fast.
==== USE TICKETS TO REPRESENT SHARES
One of the most powerful (and basic) mechanisms in the design of lottery (and stride) scheduling is that of the *ticket*.
Waldspurger shows how tickets can be used to represent a guest operating system's share of memory [W02].
If you are ever in need of a mechanism to represent a proportion of ownership, this concept just might be the ticket.
=== Ticket Mechanisms
Lottery scheduling also provides a number of mechanisms to manipulate tickets in different and sometimes useful ways.
One way is with the concept of *ticket currency*.
Currency allows a user with a set of tickets to allocate tickets among their own jobs in whatever currency they would like; the system then automatically converts said currency into the correct global value.
Assume users A and B have each been given 100 tickets.
- User A is running two jobs, A1 and A2, and gives them each 500 tickets (out of 1000 total) in A's currency.
- User B is running only 1 job and gives it 10 tickets (out of 10 total).
```
User A -> 500 (A's currency) to A1 -> 50 (global currency)
-> 500 (A's currency) to A2 -> 50 (global currency)
User B -> 10 (B's currency) to B1 -> 100 (global currency)
```
Another useful mechanism is *ticket transfer*.
With transfers, a process can temporarily hand off its tickets to another process.
This ability is especially useful in a client/server setting, where a client process sends a message to a server asking it to do some work on the client's behalf.
- To speed up the work, the client can pass the tickets to the server and thus try to maximize the performance of the server while the server is handling the client's request.
- When finished, the server then transfers the tickets back to the client and all is as before.
Finally, *ticket inflation* can sometimes be a useful technique.
With inflation, a process can temporarily raise or lower the number of tickets it owns.
In a competitive scenario with processes that do not trust one another, this makes little sense; one greedy process could give itself a vast number of tickets and take over the machine.
Rather, inflation can be applied in an environment where a group of processes trust one another; in such a case, if any one process knows it needs more CPU time, it can boost its ticket value as a way to reflect that need to the system, all without communicating with any other processes.
=== Implementation
All you need is a good random number generator to pick the winning ticket, a data structure to track the processes of the system (e.g., a list), and the total number of tickets.
```c
// counter: used to track if we've found the winner yet
int counter = 0;
// winner: use some call to a random number generator to
// get a value, between 0 and the total # of tickets
int winner = getrandom(0, totaltickets);
// current: use this to walk through the list of jobs
node_t *current = head;
while (current) {
counter = counter + current->tickets;
if (counter > winner)
break; // found the winner
current = current->next;
}
// 'current' is the winner: schedule it...
```
An example comprising three processes, A, B, and C, each with some number of tickets.
#image("images/2023-12-19-20-10-35.png", width: 80%)
To make a scheduling decision, we first have to pick a random number (the winner) from the total number of tickets (400). Let's say we pick the number 300.
- First, counter is incremented to 100 to account for A’s tickets; because 100 is less than 300, the loop continues.
- Then counter would be updated to 150 (B’s tickets), still less than 300 and thus again we continue.
- Finally, counter is updated to 400 (clearly greater than 300), and thus we break out of the loop with current pointing at C (the winner).
It might generally be best to organize the list *in sorted order, from the highest number of tickets to the lowest*. The ordering does not affect the correctness of the algorithm; however, it does ensure in general that the fewest number of list iterations are taken.
=== An Example
A brief study of the completion time of two jobs competing against one another, each with the same number of tickets (100) and same run time (R, which we will vary).
We’d like for each job to finish at roughly the same time, but due to the randomness of lottery scheduling, sometimes one job finishes before the other.
To quantify this difference, we define a simple *unfairness metric, U* which is simply the time the first job completes divided by the time that the second job completes.
For example, if R = 10, and the first job finishes at time 10 (and the second job at 20), U = 10/20 = 0.5.
When both jobs finish at nearly the same time, U will be quite close to 1. In this scenario, that is our goal: a perfectly fair scheduler would achieve U = 1.
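The metric is simple enough to state as a one-line function (a sketch; the function name is ours):

```c
/* U = completion time of the first-finishing job divided by that of the
 * second; U near 1 means a fair outcome. */
double unfairness(double first_done, double second_done) {
    return first_done / second_done;
}
```

For the R = 10 example, `unfairness(10, 20)` gives 0.5, while a perfectly fair run gives 1.0.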
#image("images/2023-12-19-20-15-48.png", width: 60%)
Plots the average unfairness as the length of the two jobs (R) is varied from 1 to 1000 over thirty trials.
As you can see from the graph, when the job length is not very long, average unfairness can be quite severe. Only as the jobs run for a significant number of time slices does the lottery scheduler approach the desired outcome.
=== How To Assign Tickets?
One approach is to assume that the users know best; in such a case, each user is handed some number of tickets, and a user can allocate tickets to any jobs they run as desired.
However, this solution is a non-solution: it really doesn’t tell you what to do. Thus, given a set of jobs, the “ticket-assignment problem” remains open.
=== Why Not Deterministic?
While randomness gets us a simple (and approximately correct) scheduler, it occasionally will not deliver the exact right proportions, especially over short time scales.
For this reason, Waldspurger invented *stride scheduling*, a deterministic fair-share scheduler. Stride scheduling is also straightforward. Each job in the system has a stride, which is inversely proportional to the number of tickets it has.
In our example above, with jobs A, B, and C, with 100, 50, and 250 tickets, respectively, we can compute the stride of each by dividing some large number by the number of tickets each process has been assigned.
For example, if we divide 10,000 by each of those ticket values, we obtain the following stride values for A, B, and C: 100, 200, and 40. We call this value the *stride* of each process; every time a process runs, we will increment a counter for it (called its *pass value*) by its stride to track its global progress. The scheduler then uses the stride and pass to determine which process should run next.
The basic idea is simple: at any given time, pick the process to run that has the lowest pass value so far; when you run a process, increment its pass counter by its stride.
```c
curr = remove_min(queue); // pick client with min pass
schedule(curr); // run for quantum
curr->pass += curr->stride; // update pass using stride
insert(queue, curr); // return curr to queue
```
In our example, we start with three processes (A, B, and C), with stride values of 100, 200, and 40, and all with pass values initially at 0. Thus, at first, any of the processes might run, as their pass values are equally low. Assume we pick A (arbitrarily; any of the processes with equal low pass values can be chosen).
#image("images/2023-12-19-20-27-05.png", width: 70%)
Lottery scheduling achieves the proportions probabilistically over time; stride scheduling gets them exactly right at the end of each scheduling cycle.
Given the precision of stride scheduling, why use lottery scheduling at all? Well, lottery scheduling has one nice property that stride scheduling does not: no global state. Imagine a new job enters in the middle of our stride scheduling example above; what should its pass value be? Should it be set to 0? If so, it will monopolize the CPU. With lottery scheduling, there is no global state per process; we simply add a new process with whatever tickets it has, update the single global variable to track how many total tickets we have, and go from there. In this way, lottery makes it much easier to incorporate new processes in a sensible manner.
=== The Linux Completely Fair Scheduler (CFS)
Despite these earlier works in fair-share scheduling, the current Linux approach achieves similar goals in an alternate manner.
The scheduler, entitled the *Completely Fair Scheduler (or CFS)*, implements fair-share scheduling, but does so in a highly efficient and scalable manner.
#tip("Tip")[
Recent studies have shown that scheduler efficiency is surprisingly important; specifically, in a study of Google datacenters, Kanev et al. show that even after aggressive optimization, scheduling uses about 5% of overall datacenter CPU time.
]
==== Basic Operation
Its goal is simple: to fairly divide a CPU evenly among all competing processes. It does so through a simple counting-based technique known as *virtual runtime (vruntime)*.
As each process runs, it accumulates `vruntime`.
In the most basic case, each process’s `vruntime` increases at the same rate, in proportion with physical (real) time.
When a scheduling decision occurs, CFS will pick the process with the lowest `vruntime` to run next.
This raises a question: how does the scheduler know when to stop the currently running process, and run the next one?
The tension here is clear:
- if CFS switches too often, fairness is increased, as CFS will ensure that each process receives its share of CPU even over minuscule time windows, but at the cost of performance (too much context switching);
- if CFS switches less often, performance is increased (reduced context switching), but at the cost of near-term fairness.
CFS manages this tension through various control parameters.
The first is `sched_latency`. CFS uses this value to determine *how long one process should run before considering a switch* (effectively determining its time slice but in a dynamic fashion).
A typical `sched_latency` value is 48 (milliseconds); CFS divides this value by the number (n) of processes running on the CPU to determine the time slice for a process, and thus ensures that over this period of time, CFS will be completely fair.
For example, if there are n = 4 processes running, CFS divides the value of `sched_latency` by n to arrive at a per-process time slice of 12 ms.
CFS then schedules the first job and runs it until it has used 12 ms of (virtual) runtime, and then checks to see if there is a job with lower `vruntime` to run instead.
In this case, there is, and CFS would switch to one of the three other jobs, and so forth.
An example where the four jobs (A, B, C, D) each run for two time slices in this fashion; two of them (C, D) then complete, leaving just two remaining, which then each run for 24 ms in round-robin fashion.
#image("images/2023-12-19-21-21-36.png")
But what if there are “too many” processes running? Wouldn’t that lead to too small of a time slice, and thus too many context switches?
Good question! And the answer is yes.
To address this issue, CFS adds another parameter, `min_granularity`, which is usually set to a value like 6 ms. CFS will never set the time slice of a process to less than this value, ensuring that not too much time is spent in scheduling overhead.
For example, if there are 10 processes running, our original calculation would divide `sched_latency` by 10 to determine the time slice (result: 4.8 ms).
However, because of `min_granularity`, CFS will set the time slice of each process to 6ms instead. Although CFS won’t (quite) be perfectly fair over the target scheduling latency (`sched_latency`) of 48 ms, it will be close, while still achieving high CPU efficiency.
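The resulting calculation can be sketched as follows (values in milliseconds; the function and parameter names follow the text, not the kernel source):

```c
/* Per-process time slice: sched_latency spread over n processes, but
 * never below min_granularity. */
double time_slice_ms(double sched_latency, double min_granularity, int nprocs) {
    double slice = sched_latency / nprocs;
    return slice < min_granularity ? min_granularity : slice;
}
```

With the defaults above, 4 processes get 12 ms each, while 10 processes would get 4.8 ms and are therefore clamped to 6 ms.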
Note that CFS utilizes a periodic timer interrupt, which means it can only make decisions at fixed time intervals. This interrupt goes off frequently, giving CFS a chance to wake up and determine if the current job has reached the end of its run.
If a job has a time slice that is not a perfect multiple of the timer interrupt interval, that is OK; CFS tracks `vruntime` precisely, which means that over the long haul, it will eventually approximate ideal sharing of the CPU.
==== Weighting (Niceness)
CFS also enables controls over process priority, enabling users or administrators to give some processes a higher share of the CPU.
It does this not with tickets, but through a classic UNIX mechanism known as the *nice level of a process*.
The nice parameter can be set anywhere from -20 to +19 for a process, with a default of 0.
Positive nice values imply lower priority and negative values imply higher priority; when you’re too nice, you just don’t get as much (scheduling) attention.
CFS maps the nice value of each process to a weight, as shown here:
```c
static const int prio_to_weight[40] = {
/* -20 */ 88761, 71755, 56483, 46273, 36291,
/* -15 */ 29154, 23254, 18705, 14949, 11916,
/* -10 */ 9548, 7620, 6100, 4904, 3906,
/* -5 */ 3121, 2501, 1991, 1586, 1277,
/* 0 */ 1024, 820, 655, 526, 423,
/* 5 */ 335, 272, 215, 172, 137,
/* 10 */ 110, 87, 70, 56, 45,
/* 15 */ 36, 29, 23, 18, 15,
};
```
These weights allow us to compute the effective time slice of each process (as we did before), but now accounting for their priority differences. The formula used to do so is as follows:
$"time_slice"_k = "weight"_k / (sum_(i=0)^(n-1) "weight"_i) dot "sched_latency"$
Assume there are two jobs, A and B.
A, because it's our most precious job, is given a higher priority by assigning it a nice value of -5; $"weight"_(A)$ (from the table) is 3121.
B, because we hate it, just has the default priority (nice value equal to 0); $"weight"_(B)$ is 1024.
If you then compute the time slice of each job, you'll find that A's time slice is about 3/4 of `sched_latency` (hence, 36 ms), and B's about 1/4 (hence, 12 ms).
In addition to generalizing the time slice calculation, the way CFS calculates `vruntime` must also be adapted. Here is the new formula, which takes the actual run time that process $i$ has accrued ($"runtime"_i$) and scales it inversely by the weight of the process. In our running example, A's `vruntime` will accumulate at one-third the rate of B's.
$"vruntime"_i = "vruntime"_i + "weight"_0 / "weight"_i dot "runtime"_i$
One smart aspect of the construction of the table of weights above is that the table preserves CPU proportionality ratios when the difference in nice values is constant.
For example, if process A instead had a nice value of 5 (not -5), and process B had a nice value of 10 (not 0), CFS would schedule them in exactly the same manner as before. Run through the math yourself to see why.
==== Using Red-Black Trees
For a scheduler, there are many facets of efficiency, but one of them is as simple as this: when the scheduler has to find the next job to run, it should do so as quickly as possible.
CFS addresses this by keeping processes in *a red-black tree*.
A red-black tree is one of many types of balanced trees; balanced trees do a little extra work to maintain low depths, and thus ensure that operations are logarithmic in time.
CFS does not keep all processes in this structure; rather, only running (or runnable) processes are kept therein. If a process goes to sleep (waiting on an I/O to complete, or for a network packet to arrive...), it is removed from the tree and kept track of elsewhere.
Most operations (such as insertion and deletion) are logarithmic in time, i.e., O(log n).
==== Dealing With I/O And Sleeping Processes
One problem with picking the lowest vruntime to run next arises with jobs that have gone to sleep for a long period of time.
Imagine two processes, A and B, one of which (A) runs continuously, and the other (B) which has gone to sleep for a long period of time (say, 10 seconds).
When B wakes up, its vruntime will be 10 seconds behind A’s, and thus, B will now monopolize the CPU for the next 10 seconds while it catches up, effectively starving A.
CFS handles this case by altering the vruntime of a job when it wakes up. Specifically, CFS sets the vruntime of that job to the minimum value found in the tree. In this way, CFS avoids starvation, but not without a cost: jobs that sleep for *short periods of time* frequently do not ever get their fair share of the CPU.
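The adjustment amounts to a clamp on wake-up (a sketch; real CFS tracks a per-run-queue minimum, and the function name here is ours):

```c
/* On wake-up, a sleeper's vruntime is raised to at least the minimum
 * vruntime in the tree, so it cannot monopolize the CPU catching up. */
double wakeup_vruntime(double sleeper, double tree_min) {
    return sleeper < tree_min ? tree_min : sleeper;
}
```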
=== Summary
- Lottery uses randomness in a clever way to achieve proportional share;
- Stride does so deterministically.
- CFS is a bit like weighted round-robin with dynamic time slices, but built to scale and perform well under load.
#tip("Tip")[
To our knowledge, it is the most widely used fair-share scheduler in existence today.
]
Fair-share schedulers have their fair share of problems:
- Such approaches do not particularly mesh well with I/O.
- They leave open the hard problem of ticket (or priority) assignment.
#import "/src/lib.typ": unit, metro-setup, num, qty
#set page(width: auto, height: auto)
#let test = [
#num[0.012] \
#num[0.00123] \
#num[0.0001234] \
#num[0.000012345] \
#num[0.00000123456] \
#num[123] \
#num[1234] \
#num[12345] \
#num[123456] \
#num[1234567] \
]
#test
#metro-setup(exponent-mode: "scientific")
#test
#metro-setup(exponent-mode: "engineering")
#test
#metro-setup(exponent-mode: "fixed", fixed-exponent: 2)
#test
#num(e: 4, "123")
#metro-setup(exponent-mode: "fixed", fixed-exponent: 0)
#num(e: 4, "1.23")
Un constructor en Kotlin es un bloque especial de código que se ejecuta durante la creación de una instancia de una clase.
Su función principal es inicializar el objeto recién creado con los datos necesarios o realizar configuraciones iniciales.
Kotlin ofrece dos tipos de constructores: *primarios* y *secundarios*.
=== Constructor Primario
El constructor primario se integra directamente en la declaración de la clase, ofreciendo una forma directa y concisa de inicializar objetos.
```kotlin
class Player(val name: String, var lifePoints: Int)
```
En este ejemplo, `Player` es una clase con un constructor primario que acepta dos parámetros: `name` y `lifePoints`.
Es importante notar que la visibilidad `public` es la predeterminada en Kotlin y declararla explícitamente es redundante.
==== Bloques `init`
Los bloques `init` son útiles para realizar validaciones o inicializaciones adicionales que no se pueden expresar directamente en los parámetros del constructor.
```kotlin
class Player(val name: String, var lifePoints: Int) {
init {
require(lifePoints >= 0) { "Los puntos de vida no pueden ser negativos." }
}
}
```
Aquí, el bloque `init` garantiza que los puntos de vida no sean negativos en el momento de crear un objeto de tipo `Player`.
=== Constructores Secundarios
Una clase puede tener múltiples constructores, lo cual se realiza utilizando la palabra clave `constructor`.
```kotlin
class Player(val name: String, var lifePoints: Int) {
init {
require(lifePoints >= 0) { "Los puntos de vida no pueden ser negativos." }
}
constructor() : this("Unknown", 8000) {
println("New player $name has $lifePoints life points.")
}
}
```
Un constructor secundario siempre comienza llamando a otro constructor de la clase (usualmente el primario), como se indica con `: this(...)`.
==== Uso de Parámetros por Defecto
Frecuentemente, los constructores secundarios pueden ser reemplazados por parámetros por defecto en el constructor primario para simplificar el código.
```kotlin
class Player(
val name: String = "Unknown",
var lifePoints: Int = 8000
) {
init {
require(lifePoints >= 0) { "Los puntos de vida no pueden ser negativos." }
}
}
```
Este enfoque hace que el código sea más limpio y fácil de mantener, al reducir la cantidad de constructores secundarios necesarios.
|
= Спектр. Резольвента.
== Операторозначные функции комплексного переменного. Аналитичность резольвенты. Спектральный радиус. Основная теорема о спектре
#definition[
*Резольвентным множеством* оператора $A$ называется следующее множество:
#eq[
$rho(A) = {lambda in CC | exists (A - lambda I)^(-1) in cal(L)(E)}$
]
Все $lambda in CC$, попадающие в резольвентное множество, называются
*регулярными значениями*.
]
#definition[
*Спектром* оператора $A$ называется дополнение к резольвентному множеству:
#eq[
$sigma(A) = CC without rho(A)$
]
]
#definition[
*Резольвентой* оператора $A$ называется любое отображение следующего вида:
#eq[
$R_lambda := R(lambda) := (A - lambda I)^(-1), lambda in rho(A)$
]
]
#proposition[
$R(lambda)$ является непрерывной функцией от $lambda$.
]
#proof[
Положим $B = A - lambda_0 I$ и $Delta B = -Delta lambda I$.
Как мы уже доказывали выше, мы можем рассмотреть $Delta lambda$ с ограничением $abs(Delta lambda) < 1 / norm(B^(-1))$ и
тогда $B + Delta B$ будет обратим.
Для непрерывности, нам нужно оценить норму следующей разности при $Delta lambda -> 0$:
#eq[
$norm(R(lambda_0 + Delta lambda) - R(lambda_0)) = norm((B + Delta B)^(-1) - B^(-1))$
]
Let us expand $(B + Delta B)^(-1)$ in a Neumann series as follows:
#eq[
$(B + Delta B)^(-1) = (I + B^(-1)Delta B)^(-1)B^(-1) &= sum_(k = 0)^oo (-1)^k (B^(-1)Delta B)^k B^(-1) = \ &= B^(-1) + sum_(k = 1)^oo (-1)^k (B^(-1)Delta B)^k B^(-1)$
]
From here we can return to estimating the increment, now working with the series:
#eq[
$norm((B + Delta B)^(-1) - B^(-1)) = norm(sum_(k = 1)^oo (-1)^k (B^(-1)Delta B)^k B^(-1)) &<= \
norm(B^(-1))sum_(k = 1)^oo (norm(B^(-1))norm(Delta B))^k &= norm(B^(-1))dot (norm(B^(-1))norm(Delta B)) / (1 - norm(B^(-1))norm(Delta B)) ->_(Delta B -> 0) 0$
]
]
#note[
From now on we will use the notation
#eq[
$A_lambda := A - lambda I$
]
]
#proposition[
Let $lambda_0, lambda in rho(A)$. Then
#eq[
$R_lambda - R_lambda_0 = (lambda - lambda_0)R_lambda R_lambda_0$
]
]
#proof[
Consider the following trivial chain of equalities:
#eq[
$R_lambda - R_lambda_0 = R_lambda underbrace(A_lambda_0 R_lambda_0, I) - underbrace(A_lambda R_lambda, I)R_lambda_0 = \ R_lambda (A_lambda_0 - A_lambda)R_lambda_0 &= R_lambda (lambda - lambda_0)R_lambda_0 = (lambda - lambda_0)R_lambda R_lambda_0$
]
]
#proposition[
$R(lambda)$ is differentiable on $rho(A)$. Moreover:
#eq[
$R'(lambda_0) = R^2_lambda_0$
]
]
#proof[
Write out the difference quotients from the limit defining the derivative:
#eq[
$(R_lambda - R_lambda_0) / (lambda - lambda_0) = ((lambda - lambda_0)R_lambda R_lambda_0) / (lambda - lambda_0) = R_lambda R_lambda_0 ->_(lambda -> lambda_0) R^2_lambda_0$
]
]
#definition[
The *spectral radius* of an operator $A$ is the radius of the circle centered at
zero that contains all points of the spectrum:
#eq[
$r(A) = sup_(lambda in sigma(A))abs(lambda)$
]
]
#proposition[
If $abs(lambda) > norm(A)$, then $lambda in rho(A)$.
]
#proof[
Rewrite $A_lambda$ as follows:
#eq[
$A_lambda = -lambda(I - 1/lambda A)$
]
Since $norm(A / lambda) = 1 / abs(lambda) norm(A) < 1$, the theorem on invertibility
of a perturbed operator applies, and hence this operator is invertible.
Therefore $lambda in rho(A)$ by definition.
]
#corollary[
It follows immediately that $r(A) <= norm(A)$.
]
#proposition[
The radius of convergence of the Neumann series for $R(lambda)$ equals the spectral radius $r(A)$.
]
#proof[
$<=$ We may speak of a Laurent series. If $abs(lambda) > norm(A)$, then
the resolvent admits the following representation:
#eq[
$R(lambda) = (A - lambda I)^(-1) = -1/lambda (I - A / lambda)^(-1) = -1 / lambda sum_(k = 0)^oo A^k lambda^(-k)$
]
Moreover, it was established earlier that $R(lambda)$ is differentiable on $rho(A)$; in
particular, this holds on the region $abs(lambda) > r(A)$.
Since the representation of a function by a Laurent series in such a region is unique, and we have already
written it above for some neighborhood of infinity, the same form
must also hold in this region.
Hence the radius of convergence of the Neumann series does not exceed $r(A)$.
$>=$ Let $abs(lambda_0) < r(A)$, and suppose the series converges at this
point. This means the series also converges for all $abs(lambda) > abs(lambda_0)$.
It also means that $A_lambda$ is invertible for all such $lambda$; but since $abs(lambda_0) < r(A)$,
by the definition of the spectral radius there must exist some $abs(lambda_0) < abs(lambda_1) < r(A)$ with $lambda_1 in sigma(A)$,
which contradicts the definition of the spectrum.
]
#proposition[
If $lambda in sigma(A)$, then $lambda^n in sigma(A^n)$.
]
#proof[
Assume the contrary, i.e. $lambda^n in rho(A^n)$ and $lambda in sigma(A)$.
Then $(A^n - lambda^n I)^(-1) in cal(L)(E)$. Note that we can also
write the operator being inverted in the following form:
#eq[
$A^n - lambda^n I = (A - lambda I)underbrace((A^(n - 1) + ... + lambda^(n - 1)I), B) => I = (A - lambda I)B(A^n - lambda^n I)^(-1)$
]
Since the operators involved are polynomials in powers of $A$, they
commute. Taking this into account, $A_lambda$ is invertible, and therefore $lambda in rho(A)$,
a contradiction.
]
#proposition[
The following formula holds for the spectral radius:
#eq[
$r(A) = lim_(n -> oo) root(n, norm(A^n))$
]
]
#proof[
As we already know, the radius of convergence of the Neumann series for $R(lambda)$ coincides with $r(A)$:
#eq[
$r(A) = r_"conv" = overline(lim)_(n -> oo) root(n, norm(A^n))$
]
By the last proposition proved, we can relate $r(A)$ and $r(A^n)$ as
follows:
#eq[
$r(A^n) = sup_(mu in sigma(A^n))abs(mu) >= sup_(lambda in sigma(A))abs(lambda^n) = r(A)^n$
]
Hence $r(A) <= root(n, r(A^n))$. At the same time, we know that $r(A^n) <= norm(A^n)$.
It turns out that the upper limit does not exceed any element of the sequence $root(n, norm(A^n))$,
which means it does not exceed their lower limit. This is only possible
when the limit itself exists.
]
#theorem("Fundamental theorem on the spectrum")[
The spectrum of an operator is nonempty:
#eq[
$sigma(A) != emptyset$
]
]
#proof[
Assume the contrary. Then $rho(A) = CC$, and consequently $R(lambda)$ is
an entire function. Let us estimate the norm of this operator using the Neumann-series
representation of the inverse:
#eq[
$norm(R(lambda)) <= 1 / abs(lambda) dot 1 / (1 - 1 / abs(lambda) norm(A)) ->_(lambda -> oo) 0$
]
Since the limit $lim_(lambda -> oo)norm(R(lambda))$ exists, the norm of $R(lambda)$ is bounded.
Therefore, by Liouville's theorem, $R(lambda) = "const"$. Moreover, because of
the limit found above, $R(lambda) = 0$. This contradicts the invertibility of $A_lambda$ for
any $lambda$.
]
#definition[
Consider an operator $A in cal(L)(E)$. Then
- $sigma_p (A) := {lambda in sigma(A) | "Ker" A_lambda != {0}}$ -- the *point spectrum*.
The vectors $v in "Ker" A_lambda$ are called *eigenvectors* for
the *eigenvalue* $lambda$.
- $sigma_c (A) := {lambda in sigma(A) | "Ker" A_lambda = {0} and "Im" A_lambda != E and ["Im" A_lambda] = E}$ -- the *continuous spectrum*.
- $sigma_r (A) := {lambda in sigma(A) | "Ker" A_lambda = {0} and ["Im" A_lambda] != E}$ -- the *residual spectrum*.
]
lorem(100)
} |
|
#import "@local/math:1.0.0": *
= Integration
#env("Definition")[
A *weighted quadrature formula* $I_n (f)$ is a linear function
$ I_n (f) = sum_(i=1)^n w_i f(x_i), $
which approximates the integral of a function $f in C[a, b]$,
$ I(f) = integral_a^b rho(x) f(x) upright("d") x, $
  where the weight function $rho: [a, b] -> RR$ satisfies $forall x in (a, b)$, $rho(x) > 0$. The points ${ x_i }$ at which the integrand $f$ is evaluated are called nodes or abscissas, and the multipliers ${ w_i }$ are called weights or coefficients.
]
#env("Definition")[
A weighted quadrature formula has (polynomial) *degree of exactness* $d_E$ iff
$ & forall f in PP_(d_E), & & E_n (f) = 0, \
& exists g in PP_(d_E + 1), & & " s.t. " E_n (g) eq.not 0 $
where $PP_d$ denotes the set of polynomials with degree no more than $d$.
]
#env("Theorem")[
A weighted quadrature formula $I_n (f)$ satisfies $d_E <= 2 n - 1$.
]
#env("Definition")[
The *error* or *remainder* of $I_n (f)$ is
$ E_n (f) = I(f) - I_n (f), $
where $I_n (f)$ is said to be convergent for $C[a, b]$ iff
$ forall f in C[a, b], lim_(n -> +infinity) E_n (f) = 0. $
]
#env("Theorem")[
Let $x_1, dots, x_n$ be given as distinct nodes of $I_n (f)$. If $d_E >= n - 1$, then its weights can be deduced as
$ forall k in { 1 , dots, n }, w_k = integral_a^b rho(x) l_k (x) upright("d") x, $
where $l_k (x)$ is the elementary Lagrange interpolation polynomial for pointwise interpolation applied to the given nodes.
]
== Newton-Cotes Formulas
#env("Definition")[
A *Newton-Cotes formula* is a formula based on approximating $f(x)$ by interpolating it on uniformly spaced nodes $x_1, dots, x_n in [a, b]$.
]
=== Midpoint rule
#env("Definition")[
The *midpoint rule* is a formula based on approximating $f(x)$ by the constant $f((a+b)/2)$.
For $rho(x) equiv 1$, it is simply
$ I_M (f) = (b - a) f((a+b)/2). $
]
#env("Theorem")[
  For $f in C^2[a, b]$, with weight function $rho equiv 1$, the error (remainder) of midpoint rule satisfies
$ exists xi in [a, b], " s.t. " E_M (f) = ((b - a)^3) / 24 f^(prime.double) (xi). $
]
#env("Corollary")[
The midpoint rule has $d_E = 1$.
]
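
A quick hand-worked check of $d_E = 1$ (this example is not part of the original notes): for $f(x) = x$ on $[0, 2]$,

$ I_M (f) = (2 - 0) f(1) = 2 = integral_0^2 x upright("d") x, $

while for $f(x) = x^2$ the rule gives $2 dot 1 = 2 != 8/3$, so quadratics are not integrated exactly.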
=== Trapezoidal rule
#env("Definition")[
The *trapezoidal rule* is a formula based on approximating $f(x)$ by the straight line that connects $(a, f(a))$ and $(b, f(b))$.
For $rho(x) equiv 1$, it is simply
$ I_T (f) = (b-a)/2 (f(a) + f(b)). $
]
#env("Theorem")[
  For $f in C^2[a, b]$, with weight function $rho equiv 1$, the error (remainder) of trapezoidal rule satisfies
$ exists xi in [a, b], " s.t. " E_T (f) = -((b - a)^3)/12 f^(prime.double) (xi). $
]
#env("Corollary")[
The trapezoidal rule has $d_E = 1$.
]
=== Simpson's rule
#env("Definition")[
The *Simpson's rule* is a formula based on approximating $f(x)$ by the quadratic polynomial that goes through the points $(a, f(a))$, $((a + b)/2, f((a + b)/2))$ and $(b, f(b))$.
For $rho(x) equiv 1$, it is simply
$ I_S (f) = (b - a)/6 (f(a) + 4 f((a + b)/2) + f(b)). $
]
#env("Theorem")[
  For $f in C^4[a, b]$, with weight function $rho equiv 1$, the error (remainder) of Simpson's rule satisfies
  $ exists xi in [a, b], " s.t. " E_S (f) = -((b - a)^5)/2880 f^((4)) (xi). $
]
#env("Corollary")[
The Simpson's rule has $d_E = 3$.
]
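
A hand-worked illustration of $d_E = 3$ (added as a sketch, not part of the original notes): for $f(x) = x^3$ on $[0, 1]$,

$ I_S (f) = (1 - 0)/6 (f(0) + 4 f(1/2) + f(1)) = 1/6 (0 + 4 dot 1/8 + 1) = 1/4 = integral_0^1 x^3 upright("d") x, $

while for $f(x) = x^4$ the rule gives $5/24 != 1/5$, consistent with degree of exactness $3$.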
== Gauss Formulas
#env("Theorem")[
  For an interval $[a, b]$ and a weight function $rho: [a, b] -> RR$, the nodes of the Gauss formula $I_n (f)$ are the roots of the $n$th-order orthogonal polynomial on $[a, b]$ with respect to the weight function $rho(x)$.
]
#env("Theorem")[
A Gauss formula $I_n (f)$ has $d_E = 2 n - 1$.
]
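
As a concrete instance (worked by hand, assuming $rho equiv 1$ on $[-1, 1]$): for $n = 2$ the orthogonal polynomials are the Legendre polynomials, $P_2 (x) = (3 x^2 - 1) / 2$ has roots $x_(1,2) = plus.minus 1 / sqrt(3)$, and both weights equal $1$. The resulting formula is exact up to degree $2 n - 1 = 3$; for example,

$ I_2 (x^2) = (1 / sqrt(3))^2 + (- 1 / sqrt(3))^2 = 2/3 = integral_(-1)^1 x^2 upright("d") x. $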
|
https://github.com/stephane-klein/typst-sklein-resume-poc | https://raw.githubusercontent.com/stephane-klein/typst-sklein-resume-poc/main/src/template.typ | typst | #let headerFont = ("Roboto")
#import "@preview/fontawesome:0.1.0": *
#let regularColors = (
subtlegray: rgb("#ededee"),
lightgray: rgb("#343a40"),
darkgray: rgb("#212529"),
)
#let hBar() = [
#h(5pt) | #h(5pt)
]
#let headerFirstNameStyle(str) = {text(
font: headerFont,
size: 32pt,
weight: "light",
fill: regularColors.darkgray,
str
)}
#let headerLastNameStyle(str) = {text(
font: headerFont,
size: 32pt,
weight: "bold",
str
)}
#let makeHeaderNameSection(firstName, lastName) = table(
columns: 1fr,
inset: 0pt,
stroke: none,
row-gutter: 6mm,
align: center,
[#headerFirstNameStyle(firstName) #h(5pt) #headerLastNameStyle(lastName)],
)
#let makeHeaderInfo(personalInfo) = {
let personalInfoIcons = (
phone: fa-phone(),
email: fa-envelope(),
linkedin: fa-linkedin(),
homepage: fa-pager(),
github: fa-square-github(),
gitlab: fa-gitlab(),
orcid: fa-orcid(),
researchgate: fa-researchgate(),
location: fa-location-dot(),
extraInfo: "",
)
let n = 1
for (k, v) in personalInfo {
// A dirty trick to add linebreaks with "linebreak" as key in personalInfo
if k == "linebreak" {
n = 0
linebreak()
continue
}
if k.contains("custom") {
// example value (icon: fa-graduation-cap(), text: "PhD", link: "https://www.example.com")
let icon = v.at("icon", default: "")
let text = v.at("text", default: "")
let link_value = v.at("link", default: "")
box({
icon
h(5pt)
link(link_value)[#text]
})
continue
}
if v != "" {box({
// Adds icons
personalInfoIcons.at(k) + h(5pt)
// Adds hyperlinks
if k == "email" {
link("mailto:" + v)[#v]
} else if k == "linkedin" {
link("https://www.linkedin.com/in/" + v)[#v]
} else if k == "github" {
link("https://github.com/" + v)[#v]
} else if k == "gitlab" {
link("https://gitlab.com/" + v)[#v]
} else if k == "homepage" {
link("https://" + v)[#v]
} else if k == "orcid" {
link("https://orcid.org/" + v)[#v]
} else if k == "researchgate" {
link("https://www.researchgate.net/profile/" + v)[#v]
} else {
v
}
})}
// Adds hBar
if n != personalInfo.len() {
hBar()
}
n = n + 1
}
}
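
// Example usage (an illustrative sketch with made-up data, not part of the
// template itself; the dictionary keys follow personalInfoIcons above):
//
// #makeHeaderNameSection("Jane", "Doe")
// #makeHeaderInfo((
//   email: "jane.doe@example.org",
//   github: "janedoe",
//   location: "Karlsruhe",
// ))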
|
|
https://github.com/typst/packages | https://raw.githubusercontent.com/typst/packages/main/packages/preview/universal-hit-thesis/0.2.0/common/components/figure.typ | typst | Apache License 2.0 | #let algorithm-figure(content, caption: [], supplement: [算法], label-name: "") = {
let fig = figure(
[#line(length: 100%, stroke: 0.05mm)
#v(0.5em, weak: true)
#align(left)[
#content
]
],
kind: "algorithm",
supplement: supplement,
caption: caption,
)
[
#if label-name == "" {
[
#line(length: 100%, stroke: 0.3mm)
#fig
#line(length: 100%, stroke: 0.3mm)]
} else {
let new-label = label(label-name)
box[
#line(length: 100%, stroke: 0.3mm)
#v(0.75em, weak: true)
#fig
#new-label
#v(0.5em, weak: true)
#line(length: 100%, stroke: 0.3mm)
]
}
]
}
#import "@preview/codelst:2.0.1": sourcecode, code-frame
#import "../theme/type.typ": 字体, 字号
#let codelst-sourcecode = sourcecode
#let hit-sourcecode = codelst-sourcecode.with(frame: code => {
set text(font: 字体.代码, size: 字号.五号)
code-frame(code)
})
#let code-figure(content, caption: [], supplement: [代码], label-name: "") = {
let fig = figure(
hit-sourcecode(content),
caption: caption,
kind: raw,
supplement: supplement,
)
[
#if label-name == "" {
[#fig]
} else {
let new-label = label(label-name)
[#fig #new-label]
}
]
} |
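
// Example usage (an illustrative sketch only; captions and label names are
// placeholders, not part of the template):
//
// #algorithm-figure(
//   [Step 1: partition the array \ Step 2: recurse on both halves],
//   caption: [示例算法],
//   label-name: "algo-example",
// )
//
// #code-figure(
//   ```c
//   int main(void) { return 0; }
//   ```,
//   caption: [示例代码],
//   label-name: "code-example",
// )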
https://github.com/crd2333/crd2333.github.io | https://raw.githubusercontent.com/crd2333/crd2333.github.io/main/src/docs/Math/index.typ | typst | ---
order: 3
---
There's basically nothing here.
|
https://github.com/AnsgarLichter/hka-thesis-template | https://raw.githubusercontent.com/AnsgarLichter/hka-thesis-template/main/README.md | markdown | # Typst Template HKA <a href="https://github.tools.sap/I573017/hka-thesis-template/generate"><img src="https://img.shields.io/badge/Generate_from_Template-8A2BE2?logo=github" /></a>
This is a thesis template written in [Typst](https://typst.app/),
based on the [I573021/hka-template-typst](https://github.tools.sap/I573021/hka-template-typst) and [template of the TU Munich](https://github.com/ls1intum/thesis-template-typst).
## Getting Started
Refer to [Typst's Tutorial and Documentation](https://typst.app/) to set up your environment.
### Features
## Metadata
In the file `metadata.typ` you can enter all metadata (title, author, submission date, ...). The data is automatically referenced in the exposé and the thesis.
## Settings
Some important global settings are located in `settings.typ`. You can adapt them here as you wish and they are automatically changed in the exposé and the thesis.
## Pre-defined Pages
The template supports the following pages out-of-the-box:
- Titlepage
- Declaration of Originality
- Abstract, both in English and German
- Table of Contents
- List of Figures
- List of Tables
- List of Listings
- List of Abbreviations (for instructions refer to [glossarium](https://github.com/typst/packages/tree/main/packages/preview/glossarium/0.2.6))
- Bibliography
## Usage of the Official Titlepage

You can download the official titlepage from the intranet. As Typst is not able to include PDFs, you must manually exchange the generated titlepage with the official one in the final PDF file.
## Header
Every header includes the heading of the current chapter, section or subsection.

If a new section or subsection starts on the current page, that one is chosen.

If a new chapter starts on the current page, no header is shown.

The numbering is shown thanks to inspiration from [hydra](https://github.com/tingerrr/hydra) (hydra didn't work flawlessly, so I had to write my own solution).
## Footer
Every footer is separated from the main body by a line. The footer contains the page number in the format `1 / 3`. If you want to change this, adapt the corresponding section in `template.typ`.
## 💡 Feedback
**Anything Missing?** Please [create an issue](https://github.tools.sap/I573017/hka-thesis-template/issues/new) or open a Pull Request right away.
**Have Questions?** Please [open a discussion](https://github.tools.sap/I573017/hka-thesis-template/discussions). |
|
https://github.com/hongjr03/shiroa-page | https://raw.githubusercontent.com/hongjr03/shiroa-page/main/DIP/chapters/10特征提取和模式识别.typ | typst | #import "../template.typ": *
#import "@preview/fletcher:0.5.0" as fletcher: diagram, node, edge
#import fletcher.shapes: house, hexagon, ellipse
#import "@preview/pinit:0.1.4": *
#import "@preview/cetz:0.2.2"
#import "/book.typ": book-page
#show: book-page.with(title: "数字图像处理基础 | DIP")
= Feature Extraction and Pattern Recognition

== Feature Extraction

=== Principal Component Analysis (PCA)

== Recognition Based on Decision-Theoretic Methods

Let $bold(x)=(x_1,x_2,...,x_n)^T$ denote an $n$-dimensional pattern vector, and let there be $W$ pattern classes $omega_1, omega_2, ..., omega_W$. For each pattern class $omega_i$ we construct a decision function $d_i (dot)$. If $bold(x) in omega_i$, then $d_i (bold(x))$ should be larger than the values of the decision functions of all other classes.

The decision boundary between two classes is $d_i (bold(x)) = d_j (bold(x)) <=> d_i (bold(x)) - d_j (bold(x)) = 0$. Define $d_(i j) (bold(x)) = d_i (bold(x)) - d_j (bold(x))$; then $bold(x)$ belongs to $omega_i$ if $d_(i j) (bold(x)) > 0$, and to $omega_j$ otherwise.

== Matching

Matching compares the input pattern vector $bold(x)$ with the known pattern classes $omega_i$ and finds the closest class.

=== Minimum-Distance Classifier / Nearest-Neighbor (NN) Classifier
1. Define the prototype of each class as its mean vector$
   bold(m)_j = 1 / N_j sum_(bold(x) in omega_j) bold(x), "where" N_j "is the number of samples in class" omega_j
$
2. Use the Euclidean distance ($||bold(a)|| = (a^T a)^(1/2)$) to compute the distance$
   D_j(bold(x)) = ||bold(x) - bold(m)_j||
$
   - An equivalent formulation:$
   d_j(bold(x)) = bold(x)^T bold(m)_j - 1/2 bold(m)_j^T bold(m)_j
$
3. If $D_j(bold(x))$ is the smallest, then $bold(x) in omega_j$
   - With the equivalent formulation, we instead take the largest $d_j(bold(x))$
The decision boundary between $omega_i$ and $omega_j$ is $d_(i j) (bold(x)) = 0$, i.e. $
d_(i j) (bold(x)) &= d_i (bold(x)) - d_j (bold(x)) \
&= bold(x)^T (bold(m)_i - bold(m)_j) - 1/2 (bold(m)_i - bold(m)_j)^T (bold(m)_i + bold(m)_j) = 0
$
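
A small hand-worked example (added for illustration, not from the original notes): in two dimensions with $bold(m)_1 = (0, 0)^T$ and $bold(m)_2 = (2, 0)^T$,

$ d_(1 2) (bold(x)) = bold(x)^T (bold(m)_1 - bold(m)_2) - 1/2 (bold(m)_1 - bold(m)_2)^T (bold(m)_1 + bold(m)_2) = -2 x_1 + 2 = 0 <=> x_1 = 1, $

so the decision boundary is the vertical line $x_1 = 1$, the perpendicular bisector of the segment between the two class means.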
#grid(
columns: (1fr, 0.4fr),
column-gutter: 1em,
)[
  It is the perpendicular bisector of the segment connecting $bold(m)_i$ and $bold(m)_j$ when $n=2$; for $n=3$ it is a plane, and for $n>3$ a hyperplane.
  This classifier requires the class means to be far apart and the within-class scatter to be small. Class 2 in the figure on the right, for example, cannot be separated well.
][
#image("../assets/2024-06-22-15-17-28.png")
]
=== Optimal Statistical Classifier / Bayes Classifier

I don't understand this.
|
https://github.com/sitandr/typst-examples-book | https://raw.githubusercontent.com/sitandr/typst-examples-book/main/src/basics/scripting/basics.md | markdown | MIT License | # Basics
## Variables I
Let's start with _variables_.
The concept is very simple, just some value you can reuse:
```typ
#let author = "<NAME>"
This is a book by #author. #author is a great guy.
#quote(block: true, attribution: author)[
\<Some quote\>
]
```
## Variables II
You can store _any_ Typst value in variable:
```typ
#let block_text = block(stroke: red, inset: 1em)[Text]
#block_text
#figure(caption: "The block", block_text)
```
## Functions
We have already seen some "custom" functions
in [Advanced Styling](../tutorial/advanced_styling.md) chapter.
Functions are values that take some values
and output some values:
```typ
// This is a syntax that we have seen earlier
#let f = (name) => "Hello, " + name
#f("world!")
```
### Alternative syntax
You can write the same shorter:
```typ
// The following syntaxes are equivalent
#let f = (name) => "Hello, " + name
#let f(name) = "Hello, " + name
#f("world!")
```
|
https://github.com/Myriad-Dreamin/typst.ts | https://raw.githubusercontent.com/Myriad-Dreamin/typst.ts/main/fuzzers/corpora/layout/out-of-flow-in-block_00.typ | typst | Apache License 2.0 |
#import "/contrib/templates/std-tests/preset.typ": *
#show: test-page
// No item in the first region.
#set page(height: 5cm, margin: 1cm)
No item in the first region.
#block(breakable: true, stroke: 1pt, inset: 0.5cm)[
#rect(height: 2cm, fill: gray)
]
|
https://github.com/xdoardo/co-thesis | https://raw.githubusercontent.com/xdoardo/co-thesis/master/thesis/includes.typ | typst | #import "@preview/prooftrees:0.1.0"
#import "@local/boxes:0.1.0": *
#import "@preview/tablex:0.0.5": tablex, rowspanx, colspanx
#import "@preview/cetz:0.1.1": *
#import "@preview/diagraph:0.1.0": *
#let print = false
//Function to insert TODO
#let todo(body, inline: false, big_text: 40pt, small_text: 15pt, gap: 2mm) = {
if inline {
set text(fill: red, size: small_text, weight: "bold")
box([TODO: #body
#place()[
#set text(size: 0pt)
#figure(kind: "todo", supplement: "", caption: body, [])
]])
} else {
    set text(size: 0pt) //to hide the default figure text; the figure is only used for the outline, as only headings and figures can be used for outlining at this point
figure(kind: "todo", supplement: "", outlined: true, caption: body)[
#block()[
#v(20pt)
#set text(fill: red, size: big_text, weight: "bold")
! TODO !
]
#v(gap)
#block[
#set text(fill: red, size: small_text, weight: "bold")
#body
]
#v(gap)
]
}
}
//Function to insert TODOs outline
#let todo_outline = outline(title: [TODOs], target: figure.where(kind: "todo"))
#let __agdabox = boxenv(
"code",
"snippet",
heading,
2,
fmt: (name, number, body, url: none, label: none) => rect(
width: 100%,
fill: rgb("#dfebeb"),
radius: 0.3em,
inset: 10pt,
align(center, [
#body
#text(font: "PragmataPro Mono Liga", size: 8pt, "snippet " + number)
#h(5pt)
#text(font: "PragmataPro Mono Liga", size: 8pt, "from Agda's stdlib")
#if url != none and (not print) {
h(5pt)
link(
url,
underline(text(font: "PragmataPro Mono Liga", size: 8pt, "see code")),
)
}
]),
),
)
#let agdacode(label: none, url: none, placement: none, body) = {
align(center, __agdabox(body, label: label, url: url, placement: placement))
}
#let __codebox = boxenv(
"code",
"snippet",
heading,
2,
fmt: (name, number, body, url: none) => rect(
width: 100%,
fill: rgb(133, 144, 192, 10%),
radius: 0.3em,
inset: 10pt,
align(center, [
#body
#text(font: "PragmataPro Mono Liga", size: 8pt, "snippet " + number)
#if url != none and (not print) {
h(5pt)
link(
url,
underline(text(font: "PragmataPro Mono Liga", size: 8pt, "see code")),
)
}
]),
),
)
#let code(label: none, from: none, url: none, placement: none, body) = {
align(center, __codebox(body, label: label, url: url, placement: placement))
}
#let __mycodebox = boxenv(
"code",
"snippet",
heading,
2,
fmt: (name, number, body, label: none, url: none, proof: none) => rect(
width: 100%,
fill: rgb("#eeffee"),
radius: 0.3em,
inset: 10pt,
align(
center,
[
#body
#if not print {
link(
url,
underline(text(font: "PragmataPro Mono Liga", size: 8pt, "see code")),
)
}
#if label != none {
text(font: "PragmataPro Mono Liga", size: 8pt, ref(label))
h(5pt)
}
#if proof != none and (not print) {
h(5pt)
underline(
lower(text(font: "PragmataPro Mono Liga", size: 8pt, "see " + ref(proof))),
)
}
#if number != none {
h(5pt)
lower(text(font: "PragmataPro Mono Liga", size: 8pt, "snippet " + number))
}
],
),
),
)
#let mycode(label: none, proof: none, placement: none, url, body) = {
  align(
    center,
    // pass the placement argument through instead of hardcoding none
    __mycodebox(body, label: label, url: url, proof: proof, placement: placement),
  )
}
#let theorem = boxenv("theorem", "Theorem", heading, 2)
#let lemma = boxenv("lemma", "Lemma", heading, 2)
#let corollary = boxenv("corollary", "Corollary", heading, 2)
#let definition = boxenv("definition", "Definition", heading, 2)
#let postulate = boxenv("postulate", "Postulate", heading, 2)
#let proof = boxenv("proof", "Proof", heading, 2)
#let refproof = boxenv(
"proof",
"Proof",
heading,
2,
fmt: (name, number, body, thmref: none) => {
strong("Proof") + [ #number ]
[(for #ref(thmref))]
if name != none { [ (#name) ] }
h(10pt)
body
},
)
/// ---
#let conv(c, v) = { $#c arrow.b.double #v$ }
#let div(c) = { $#c arrow.t.double$ }
#let fails(c) = { $#c arrow.zigzag$ }
#let now = "now"
#let nothing = "nothing"
#let just = "just"
#let ceval = "ceval"
#let skip = "skip"
#let assign = "assign"
#let cwhile = "while"
#let aeval = "aeval"
|
|
https://github.com/shuosc/SHU-Bachelor-Thesis-Typst | https://raw.githubusercontent.com/shuosc/SHU-Bachelor-Thesis-Typst/main/contents/info.typ | typst | Apache License 2.0 | // 题目
#let title = "上海大学毕业设计 Typst 模板"
// 学院
#let college = "上海大学计算机学院"
// 专业
#let major = "计算机科学与技术"
// 学号
#let student_id = "19123456"
// 学生姓名
#let student_name = "张三"
// 指导老师
#let advisor = "李四"
// 起讫日期
#let start_and_end_date = "00000000 - ffffffff" |
https://github.com/fabriceHategekimana/master | https://raw.githubusercontent.com/fabriceHategekimana/master/main/3_Theorie/Typage.typ | typst | #import "../src/module.typ" : *
#pagebreak()
== Sémantique de typage
#Definition()[Règle de typage part.1
$ #proof-tree(typing("T-NUM",
$"n":"int"$,
$"n" in NN$
)) $
$ #proof-tree(typing("T-PLUS",
$Gamma tack.r "E1" + "E2" : "int"$,
$Gamma tack.r "E1" : "int"$, $"E2" : "int"$)) $
$ #proof-tree(typing("T-TIME",
$Gamma tack.r "E1" * "E2" : "int"$,
$Gamma tack.r "E1" : "int"$,
$Gamma tack.r "E2" : "int"$)) $
$ #proof-tree(t_true) $
$ #proof-tree(t_false) $
$ #proof-tree(typing("T-AND",
$tack.r "E1" "and" "E2" : "bool"$,
$tack.r "E1" : "bool"$,
$tack.r "E2" : "bool"$
)) $
$ #proof-tree(typing("T-OR",
$Gamma tack.r "E1" "or" "E2" : "bool"$,
$Gamma tack.r "E1" : "bool"; "E2" : "bool"$
)) $
$ #proof-tree(typing("T-EQ",
$Gamma tack.r "E1" == "E2" : "bool"$,
$Gamma tack.r "E1" : "T"; "E2" : "T"$
)) $
$ #proof-tree(typing("T-LOW",
$Gamma tack.r "E1" < "E2" : "bool"$,
$Gamma tack.r "E1" : "int"$,
$Gamma tack.r "E2" : "int"$
)) $
$ #proof-tree(typing("T-GRT",
$Gamma tack.r "E1" > "E2" : "bool"$,
$Gamma tack.r "E1" : "int"; "E2" : "int"$
)) $
$ #proof-tree(typing("T-LOW-EQ",
$Gamma tack.r "E1" <= "E2" : "bool"$,
$Gamma tack.r "E1" : "int"; "E2" : "int"$
)) $
$ #proof-tree(typing("T-GRT-EQ",
$Gamma tack.r "E1" >= "E2" : "bool"$,
$Gamma tack.r "E1" : "int"; "E2" : "int"$
)) $
$ #proof-tree(typing("T-IF",
$Gamma tack.r "if" "E1" "then" "E2" "else" "E3" : "T"$,
$Gamma tack.r "E1" : "bool"$,
$Gamma tack.r "E2" : "T"$,
$Gamma tack.r "E3" : "T"$
)) $
]
#Definition()[Règle de typage part.2
$ #proof-tree(typing("T-LET",
$Gamma tack.r "let" "x": "T1" = "E1" "in" "E2" : "T2"$,
$Gamma tack.r "E1" : "T1"$,
$Gamma tack.r "x" : "T1" "E2" : "T2"$
)) $
$ #proof-tree(typing("VAR",
$Gamma tack.r "x": sigma$,
$Gamma tack.r "x": sigma$,
)) $
$ #proof-tree(typing("T-FUNC",
$Gamma tack.r "func"< overline( "a")>( overline( "x":"T")) -> "T" { "E" } : (( overline( "T")) -> "T")$,
$Gamma , overline( "x":"T") tack.r "E" : "T"$
)) $
$ #proof-tree(typing("T-FUNC-APP",
$("E1")< overline("g")>( overline( "E")) : "T"$,
$Gamma tack.r "E1" : <overline(a)>(overline( "T")) -> "T"$,
$[overline("a") \/ overline("g")]overline("T") => overline("Tp")$,
$Gamma tack.r overline("E") : overline("Tp")$)) $
$ #proof-tree(typing("T-ARR",
$Gamma tack.r [ overline( "E") ] : ["n", "T"]$,
$"len"( overline( "E") ) => "n"$,
$tack.r "n" "index"$,
$Gamma tack.r overline("E") : "T"$
)) $
$ #proof-tree(typing("T-CONC",
$[overline("E1")] :: [overline("E2")] => ["m+n", "T"]$,
$Gamma tack.r[overline("E1")] : [m, T]$,
$tack.r "m" "index"$,
$Gamma tack.r[overline("E2")] : [n, T]$,
$tack.r "n" "index"$,
)) $
$ #proof-tree(typing("T-FIRST-ARR",
$Gamma tack.r "first"(["E"]) => "T"$,
$Gamma tack.r [overline("E")] : [m, T]$,
$tack.r "m" "index"$
)) $
$ #proof-tree(typing("T-REST-ARR",
$Gamma tack.r "rest"([overline("E")]) => ["m", "T"]$,
$Gamma tack.r [overline("E")] : ["m"+1, "T"]$,
$tack.r "m" "index"$
)) $
]
|
|
https://github.com/tobiaswuttke/rss-submission-template | https://raw.githubusercontent.com/tobiaswuttke/rss-submission-template/main/rss-template.typ | typst |
#let conf(title, author, email, type, abstract, reviewer, doc) = {
set page(
paper: "a4",
margin: (x: 4.2cm, y: 5.25cm),
)
set text(
font: "Times New Roman",
size: 10pt,
)
set par(
justify: true,
)
// Bibliography in APA Style
set bibliography(style: "apa", title: "References")
// Title
align(center, text(14pt)[
* #title *
])
// Author
align(center, text(10pt)[
#v(12pt,weak: false)
#author
#v(3pt,weak: false)
])
// Type of paper
align(center, text(10pt)[
_ #type _
#v(3pt,weak: false)
])
// University
align(center, text(10pt)[
Hasso Plattner Institute\
Chair for Design Thinking and Innovation Research\
August-Bebel-Strasse 88, 14482 Potsdam, Germany\
#email
\
])
// Heading 1
show heading.where(level: 1): it => [
#v(24pt,weak: true)
#set text(12pt, weight: "bold")
#block(it.body)
#v(12pt,weak: true)
]
// Move captions of Figures
show figure: it => align(center)[
#set text(
size: 8pt
)
#if it.kind == table [
#it.caption
#v(10pt, weak: true)
#it.body
] else [
#it.body
#it.caption
]
]
// Heading 2
show heading.where(
level: 2
): it => [
#v(24pt,weak: true)
#set text(10pt, weight: "bold")
#block(it.body)
#v(12pt,weak: true)
]
// Abstract
par(justify: true)[
#v(10pt)
_Abstract:_ #abstract\
\
#v(10pt)
Reviewer: #reviewer
]
// Rest of the document
doc
}
|
|
https://github.com/gumelarme/nuist-master-thesis-proposal | https://raw.githubusercontent.com/gumelarme/nuist-master-thesis-proposal/main/pages/spv-comment.typ | typst | #text(
size: 1.3em,
"3. Comments of Supervisor, Thesis Proposal Panel, School Affiliated",
)
#block(
height: 100% - 2.5em,
width: 100%,
stroke: 0.5pt + black,
inset: 0.5em,
)[
Supervisor's comprehensive opinion on the thesis proposal:
#align(
bottom,
stack(
dir: ltr,
rect(width: 50%, stroke: none, [Note: Extra pages may be added]),
rect(
width: 50%,
stroke: none,
[
#set align(center)
Supervisor (Signature)
#v(6em)
YY MM DD
],
),
),
)
]
#let signatures = [
#box(width: 6.5em, stroke: (bottom: 0.5pt + black))
#h(1em)
]
#rect(
stroke: 0.5pt + black,
width: 100%,
height: 100%,
[
    *Thesis Proposal Records* (A Ph.D. proposal panel should consist of 5-7 people; a Master's proposal panel should consist of 3-5 people.)
+ Proposal Panel Opinion:
#align(bottom)[
#set enum(start: 2)
+ Voting Results:
Proposal Panel in Attendance
+ Proposal Panel Chair (Signature) #signatures \
Proposal Panel Members (Signatures) \
#{
(signatures * 3)
} \
#{
(signatures * 3)
}
+ Thesis Proposal Date
#align(right)[YY MM DD]
]
],
)
#rect(
stroke: 0.5pt + black,
width: 100%,
height: 100%,
[
Opinion of the School Affiliated:
#align(
bottom + right,
[
#block(
inset: 2em,
align(
center,
[
Person in Charge of the School (Seal)
#v(4em)
YY MM DD
],
),
)
],
)
],
)
|
|
https://github.com/typst/packages | https://raw.githubusercontent.com/typst/packages/main/packages/preview/rivet/0.1.0/README.md | markdown | Apache License 2.0 | # rivet-typst
RIVET _(Register / Instruction Visualizer & Explainer Tool)_ is a [Typst](https://typst.app) package for visualizing binary instructions or describing the contents of a register, using the [CeTZ](https://typst.app/universe/package/cetz) package.
It is based on the [homonymous Python script](https://git.kb28.ch/HEL/rivet/)
## Examples
<table>
<tr>
<td>
<a href="./gallery/example1.typ">
<img src="./gallery/example1.png" width="1000px">
</a>
</td>
</tr>
<tr>
<td>A bit of eveything</td>
</tr>
<tr>
<td>
<a href="./gallery/example2.typ">
<img src="./gallery/example2.png" width="1000px">
</a>
</td>
</tr>
<tr>
<td>RISC-V memory instructions (blueprint)</td>
</tr>
</table>
*Click on the example image to jump to the code.*
## Usage
For more information, see the [manual](manual.pdf)
To use this package, simply import `schema` from [rivet](https://typst.app/universe/package/rivet) and call `schema.load` to parse a schema description. Then use `schema.render` to render it, et voilà !
```typ
#import "@preview/rivet:0.1.0": schema
#let doc = schema.load("path/to/schema.yaml")
#schema.render(doc)
``` |
https://github.com/yongweiy/cv | https://raw.githubusercontent.com/yongweiy/cv/master/cv.typ | typst | // Imports
#import "@preview/brilliant-cv:2.0.2": cv
#let metadata = toml("./metadata.toml")
#let importModules(modules) = {
for module in modules {
include {
module + ".typ"
}
}
}
#show: cv.with(
metadata,
profilePhoto: image("./src/avatar.png")
)
#importModules((
"education",
"experience",
"honors",
"publications",
"services",
"skills",
))
// Local Variables:
// tp--master-file: "/home/slark/Desktop/brilliant-cv/cv.typ"
// End:
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.