repo | file | language | license | content
---|---|---|---|---|
https://github.com/alberto-lazari/computer-science | https://raw.githubusercontent.com/alberto-lazari/computer-science/main/advanced-topics-cs/quantum-algorithms/chapters/computational-model.typ | typst | #import "/common.typ": *
= Computational model
In order to understand quantum algorithms it is necessary to define the tensor product operation between matrices and to present some primitive constructs.
== Tensor product
Let $A in CC^(m times n), B in CC^(p times q), D in CC^(m p times n q)$
The tensor product $tensor$ is defined as a bilinear operator between vector spaces:
$ tensor : A times B -> D $
In the course the $tensor$ operator always refers to the Kronecker product, a special case of the more generic tensor product.
/ Kronecker product:
$ D := A tensor B = mat(
a_11 B, ..., a_(1 n) B;
a_21 B, ..., a_(2 n) B;
dots.v, , dots.v;
a_(m 1) B, ..., a_(m n) B
)
$
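For example, with values chosen purely for illustration:
$ mat(1, 2; 3, 4) tensor mat(0, 1; 1, 0) = mat(
0, 1, 0, 2;
1, 0, 2, 0;
0, 3, 0, 4;
3, 0, 4, 0;
) $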
== Primitive constructs
/ Bra-kets: Ket $ket(psi)$ denotes a column vector in $CC^n$, while bra $bra(psi)$ denotes a row vector in $(CC^n)^dagger$, such that $bra(psi) = ket(psi)^dagger$
Properties:
- $ket(psi)ket(phi) = ket(psi) tensor ket(phi)$
- $inner(psi, phi) = angle.l psi, phi angle.r$ denotes the inner product
/ Binary strings: $jstr in {0, 1}^q$ denotes a binary string of $q$ digits, such that
$j = display(sum_(k = 1)^q) jstr_k dot 2^(q - k)$
$ket(jstr) = ket(j_1 j_2 ... j_q) =
ket(j_1)ket(j_2)...ket(j_q) =
$ basis vector of $(CC^2)^(tensor q)$ with a $1$ in position $j$
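For example, $jstr = 101$ gives $j = 1 dot 2^2 + 0 dot 2^1 + 1 dot 2^0 = 5$, so $ket(101) = ket(1)ket(0)ket(1)$ is the basis vector of $(CC^2)^(tensor 3)$ with a $1$ in position $5$.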
== Quantum states
A state in a quantum computer coincides with a single (possibly composite) register of qubits. \
A register of $q$ qubits can be represented with a vector $ket(psi)$ in $(CC^2)^(tensor q)$ (which is $subset.eq CC^(2^q)$).
A state $ket(psi)$ can be represented as $display(sum_(jstr in {0, 1}^q)) alpha_j ket(jstr)$, with $alpha_j in CC$ and $display(sum_(jstr in {0, 1}^q)) |alpha_j|^2 = 1$,
which is equivalent to
$display(mat(
alpha_1;
alpha_2;
dots.v;
alpha_(2^q);
))$
/ Basis states / superpositions: A $q$-qubit register $ket(psi)$ represents a basis state if
$ket(psi) = ket(jstr)$, for some $jstr in {0, 1}^q$.
Otherwise, it is a superposition.
/ Product states / entanglements: A $q$-qubit quantum state $ket(psi)$ is a product state if
$ket(psi) = ket(psi_1)ket(psi_2)...ket(psi_q)$, where each $ket(psi_i)$ is a 1-qubit state.
Otherwise, it is an entangled state.
== Operations on qubits
/ Operations: An operation, or gate, is a unitary matrix $U in CC^(2^q times 2^q)$
$==>$ Operations on quantum states are always:
- Linear
- Reversible
/ Measurement: Gate that allows gathering information about the state.
Applying a measurement to a $q$-qubit state $ket(psi) = display(sum_(jstr in {0, 1}^q)) alpha_j ket(jstr)$ returns the binary string $arrow(k) in {0,1}^q$ with probability $|alpha_k|^2$
After applying a measurement gate on a state, the original state is not recoverable.
=== Basic operations
There are some basic gates that are used to create more complex operations:
- $X$ gate: performs a bit flip $=> X#ket(0) = ket(1), quad X#ket(1) = ket(0)$
$ X = mat(0, 1; 1, 0) $
- $H$ (Hadamard) gate: returns uniform superposition of basis states (when run in parallel on all qubits):
$ H^(tensor q)#ket($arrow(0)$) = 1 / (sqrt(2^q)) display(sum_(jstr in {0, 1}^q)) ket(jstr) $
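  For a single qubit ($q = 1$) this reduces to the familiar $H ket(0) = 1 / sqrt(2) (ket(0) + ket(1))$.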
- $"CX"$ (Controlled $X$/NOT) gate: two-qubit gate that can create or destroy entanglement.
$"CX"_12$ flips target qubit (qubit 2), when control qubit (qubit 1) is $ket(1)$, leaves qubit 2 as it is otherwise:
$"CX"_21#ket(01) = ket(01),
"CX"_21#ket(11) = ket(10)
$ \
$"CX"_21$ is the same, but with target qubit 1 and control qubit 2:
$"CX"_21#ket(10) = ket(10),
"CX"_21#ket(11) = ket(01)
$
$ "CX"_12 = mat(
1, 0, 0, 0;
0, 1, 0, 0;
0, 0, 0, 1;
0, 0, 1, 0;
)
#h(4em)
"CX"_21 = mat(
1, 0, 0, 0;
0, 0, 0, 1;
0, 0, 1, 0;
0, 1, 0, 0;
) $
|
|
https://github.com/DieracDelta/presentations | https://raw.githubusercontent.com/DieracDelta/presentations/master/polylux/book/src/utils/fill-remaining.typ | typst | #import "../../../polylux.typ": *
#set page(paper: "presentation-16-9")
#set text(size: 40pt)
#polylux-slide[
#fit-to-height(1fr)[BIG]
]
|
|
https://github.com/TypstApp-team/typst | https://raw.githubusercontent.com/TypstApp-team/typst/master/tests/typ/compiler/show-text.typ | typst | Apache License 2.0 | // Test text replacement show rules.
---
// Test classic example.
#set text(font: "Roboto")
#show "Der Spiegel": smallcaps
Die Zeitung Der Spiegel existiert.
---
// Another classic example.
#show "TeX": [T#h(-0.145em)#box(move(dy: 0.233em)[E])#h(-0.135em)X]
#show regex("(Lua)?(La)?TeX"): name => box(text(font: "New Computer Modern")[#name])
TeX, LaTeX, LuaTeX and LuaLaTeX!
---
// Test that replacements happen exactly once.
#show "A": [BB]
#show "B": [CC]
AA (8)
---
// Test caseless match and word boundaries.
#show regex("(?i)\bworld\b"): [🌍]
Treeworld, the World of worlds, is a world.
---
// Test there is no crashing on empty strings
// Error: 1:7-1:9 text selector is empty
#show "": []
---
// Error: 1:7-1:16 regex selector is empty
#show regex(""): [AA]
---
// Error: 1:7-1:42 regex matches empty text
#show regex("(VAR_GLOBAL|END_VAR||BOOL)") : []
---
// This is a fun one.
#set par(justify: true)
#show regex("\S"): letter => box(stroke: 1pt, inset: 2pt, upper(letter))
#lorem(5)
---
// See also: https://github.com/mTvare6/hello-world.rs
#show regex("(?i)rust"): it => [#it (🚀)]
Rust is memory-safe and blazingly fast. Let's rewrite everything in rust.
---
// Test accessing the string itself.
#show "hello": it => it.text.split("").map(upper).join("|")
Oh, hello there!
---
// Replace worlds but only in lists.
#show list: it => [
#show "World": [🌎]
#it
]
World
- World
---
// Test absolute path in layout phase.
#show "GRAPH": image("/files/graph.png")
The GRAPH has nodes.
|
https://github.com/taylorh140/typst-graphviz-plugin | https://raw.githubusercontent.com/taylorh140/typst-graphviz-plugin/master/README.md | markdown | # typst-graphviz-plugin
A method to compile a small subset of graphviz functions to wasm for use as a typst plugin
It should contain everything needed to compile, but the following tools are required:
- Linux OS (it might work elsewhere, but I didn't try it)
- python3 (for stub replacement)
- wabt (for conversion between wat/wasm)
- emcc (for compiling the plugin)
After cloning the repo (clone recursively to pull in the submodules), running `./Build` should create the wasm file (let me know if there are problems).
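Once built, the plugin can be loaded from Typst with the built-in `plugin()` function. A minimal usage sketch, assuming the build produced a file named `graphviz.wasm` that exports a `render` function taking DOT source as bytes and returning SVG bytes (the file name, function name, and output format are all assumptions here):
```
// All names below are hypothetical; adjust to whatever the build actually produces.
#let graphviz = plugin("graphviz.wasm")
#let dot = "digraph { a -> b -> c }"
// Plugin functions take and return bytes; recent Typst versions can
// render image bytes directly.
#image(graphviz.render(bytes(dot)), format: "svg")
```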
# Work left to do
- Fonts are not appropriately handled and basically cause the plugin to lock up. I think this needs to be figured out, but it is fairly easy to work around.
|
|
https://github.com/Anastasia-Labs/project-close-out-reports | https://raw.githubusercontent.com/Anastasia-Labs/project-close-out-reports/main/README.md | markdown | # project-close-out-report-template
# Typst Project Close-out Template
Project Close-out Report template using Typst
## How To Use
### Web App
- The easiest way to build this template is to create a project on [typst.app](https://typst.app)
- Copy the contents of the [template](https://github.com/Anastasia-Labs/project-close-out-report-template/blob/main/close-out-template.typ) and the corresponding images from the `images` folder into your new Typst project and start editing.
## Resources
### Tutorials
The [tutorial page](https://typst.app/docs/tutorial/) for Typst is already well documented and I would recommend following it; an additional source of information is this [GitHub repo](https://sitandr.github.io/typst-examples-book/book/about.html).
### VSCode Extensions for Local Editing
- [Language Server](https://marketplace.visualstudio.com/items?itemName=nvarner.typst-lsp)
- [Previewer](https://marketplace.visualstudio.com/items?itemName=mgt19937.typst-preview)
## Showcase
[Preview the PDF](https://github.com/Anastasia-Labs/project-close-out-report-template/blob/main/close-out-template.pdf)
|
|
https://github.com/typst/packages | https://raw.githubusercontent.com/typst/packages/main/packages/preview/modernpro-coverletter/0.0.1/modernpro-coverletter.typ | typst | Apache License 2.0 | ///////////////////////////////
// modernpro-coverletter.typ
// A cover letter template with modern Sans font for job applications and other formal letters.
// Copyright (c) 2024
// Author: <NAME>
// License: MIT
// Version: 0.0.1
// Date: 2024-04-19
// Email: <EMAIL>
///////////////////////////////
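// Usage sketch (hypothetical values; parameter names match the `main`
// function defined below — contacts are dictionaries with `text` and
// `link` keys, as consumed by the `display` helper):
//
// #import "modernpro-coverletter.typ": main
// #show: main.with(
//   font-type: "openfont",
//   name: "Jane Doe",
//   address: "123 Example Street, Example City",
//   contacts: (
//     (text: "jane@example.com", link: "mailto:jane@example.com"),
//   ),
//   recipient: (
//     start-title: "Dear Hiring Manager,",
//     cl-title: "Application for Software Engineer",
//     date: "",
//     department: "Engineering",
//     institution: "Example Corp",
//     address: "1 Example Way",
//     postcode: "12345",
//   ),
// )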
#let main(
font-type: "openfont",
name: "",
address: "",
contacts: (),
recipient: (
start-title: "",
cl-title: "",
date: "",
department: "",
institution: "",
address: "",
postcode: "",
),
mainbody,
) = {
let date_colour = rgb("#666666")
let primary-colour = rgb("#2b2b2b")
let headings-colour = rgb("#6A6A6A")
let subheadings-colour = rgb("#333333")
let sectionsep = {
[#v(5pt)]
}
let subsectionsep = {
[#v(2pt)]
}
// Set font type for all text
// #let font-type = "macfont"
let font-head = {
if font-type == "macfont" {
"Helvetica Neue"
} else if font-type == "openfont" {
"PT Sans"
} else {
"Times New Roman"
}
}
let font-term = {
if font-type == "macfont" {
"Heiti TC"
} else if font-type == "openfont" {
"PT Sans"
} else {
"Times New Roman"
}
}
let font-descript = {
if font-type == "macfont" {
"Heiti SC"
} else if font-type == "openfont" {
"PT Sans"
} else {
"Times New Roman"
}
}
let font-info = {
if font-type == "macfont" {
"Helvetica"
} else if font-type == "openfont" {
"PT Sans"
} else {
"Times New Roman"
}
}
let recipient-generate(
start-title,
cl-title,
date,
department,
institution,
address,
postcode,
) = {
align(
left,
{
if department != "" {
text(
10pt,
font: font-info,
fill: subheadings-colour,
weight: "bold",
)[#department]
}
h(1fr)
if date != "" {
text(10pt, font: font-info, fill: primary-colour, weight: "light")[#date\ ]
} else {
text(
10pt,
font: font-info,
fill: primary-colour,
weight: "light",
)[ #datetime.today(offset: auto).display("[day] [month repr:long] [year]")\ ]
}
if institution != "" {
text(
10pt,
font: font-info,
fill: subheadings-colour,
weight: "bold",
)[#institution\ ]
}
if address != "" {
text(
10pt,
font: font-info,
fill: headings-colour,
weight: "light",
)[#address\ ]
}
if postcode != "" {
text(
10pt,
font: font-info,
fill: headings-colour,
weight: "light",
)[#postcode ]
}
},
)
align(left, text(
12pt,
font: font-head,
fill: primary-colour,
weight: "bold",
)[#upper(cl-title)])
v(0.1em)
set text(11pt, font: font-head, fill: primary-colour, weight: "regular")
[#start-title]
}
// show contact details
let display(contacts) = {
set text(
11pt,
font: font-term,
fill: headings-colour,
weight: "medium",
top-edge: "baseline",
bottom-edge: "baseline",
baseline: 2pt,
)
contacts.map(contact =>{
      if contact.link == none {
        contact.text
      } else {
link(contact.link)[#{ contact.text }]
}
}).join(" | ")
}
set page(margin: (left: 2cm, right: 2cm, top: 3.2cm, bottom: 1.5cm), header: {
// Head Name Section
text(
25pt,
font: font-head,
fill: primary-colour,
weight: "light",
top-edge: "baseline",
bottom-edge: "baseline",
baseline: 12pt,
)[#align(center, [#name])]
text(
11pt,
font: font-descript,
fill: headings-colour,
weight: "medium",
top-edge: "baseline",
bottom-edge: "baseline",
)[#align(center, [#address])]
align(center)[#display(contacts)]
line(length: 100%, stroke: 0.5pt + primary-colour)
}, header-ascent: 1em)
// Add recipient details
recipient-generate(
recipient.start-title,
recipient.cl-title,
recipient.date,
recipient.department,
recipient.institution,
recipient.address,
recipient.postcode,
)
set par(justify: true, first-line-indent: 2em)
set text(11pt, font: font-info, fill: primary-colour, weight: "regular")
mainbody
set text(11pt, font: font-info, fill: primary-colour, weight: "regular")
[Sincerely,\ ]
v(1pt)
[*#name*]
} |
https://github.com/Tyrn/wei | https://raw.githubusercontent.com/Tyrn/wei/master/robn.typ | typst | #import "robinson-pip.typ": part, chapter, formatDoc
#show: doc => formatDoc(doc)
#set text(lang: "es")
#set par(first-line-indent: 1em, justify: true)
#set text(hyphenate: true)
#place(center + horizon, dy: -10em)[
#text(size: 25pt)[
The title of our book
]
#text(size: 15pt)[
A subtitle for the book
]
]
#pagebreak()
#[
#set align(center)
#text(size: 25pt)[
Acknowledgments
]
#set align(center)
I just wanted to take a moment to express my deepest gratitude to each and every one of you. Whether you've been by my side through thick and thin or have just recently crossed paths with me, your presence means the world to me.
To my friends, family, and loved ones: Thank you for your unwavering support, love, and understanding. You make every day brighter, and I am endlessly grateful for the joy you bring into my life.
To my mentors and teachers: Thank you for your guidance, wisdom, and encouragement. Your belief in me has helped shape me into the person I am today, and I am forever thankful for your impact on my journey.
To my colleagues and peers: Thank you for your collaboration, camaraderie, and shared experiences. Your dedication and passion inspire me to strive for greatness, and I am honored to work alongside such incredible individuals.
To the strangers who have crossed my path: Thank you for your kindness, compassion, and random acts of generosity. Your unexpected gestures of goodwill remind me of the beauty in humanity, and I am humbled by your warmth and generosity.
And to everyone else: Thank you for being you. Whether we've laughed together, cried together, or simply shared a passing moment, your presence has left an imprint on my heart, and for that, I am eternally grateful.
So, from the bottom of my heart, thank you. Thank you for being a part of my life, for sharing in my journey, and for making each day a little brighter. I am blessed to know each and every one of you, and I look forward to the many adventures that lie ahead.
]
#pagebreak()
// start numbering from this point onwards
#set page(numbering: "1")
#outline()
#pagebreak()
= This heading won't show up
== Neither will this
=== You have to create headings with the provided `makeHeading` function
#part("Part One", "This is some part one stuff")
#lorem(100)
#chapter("Chapter One", "content for first chapter")
#lorem(100)
#chapter("Chapter Two", "content for the second chapter")
#lorem(100)
#chapter("Chapter Three", "Why not add a third chapter while we're at it")
#lorem(100)
#part("Part Two", "Part two is even better than part one")
#lorem(100)
#chapter("Chapter One", "content for first chapter, but this time in part two")
#lorem(100)
#chapter("Chapter Two", "content for the second chapter, but this time in part two")
#lorem(100)
#chapter("Chapter Three", "Why not add a third chapter while we're at it, but this time in part two")
#lorem(100)
#part("Part Three", "Part three is honestly the best thing since sliced bread")
#lorem(100)
#chapter("Chapter One", "content for first chapter, but in in part three. Whaaaaat????")
#lorem(100)
#chapter("Chapter Two", "content for the second chapter, and still in part three")
#lorem(100)
#chapter("Chapter Three")[
There's really no hard limit to how long these openers can be and to be quite honest I'm rather curious to see how far we can go.
<NAME> (1844–1917) started as the Principal of the Groton School in 1864. By 1865, he became second assistant librarian at the New York Mercantile Library; from 1867 to 1869, he was promoted to full librarian. From 1869 to 1873, he worked for Evarts, Southmayd & Choate, a law firm co-founded by <NAME>, <NAME> and <NAME>. He was admitted to the New York State Bar Association in 1873.
Anyway, how's your day been?
]
#lorem(100)
#part("Part Four", "Part four is where things start to get really interesting")
#lorem(100)
#chapter("Chapter One")[
Content for the first chapter in part four.
Already we are boldly launched upon the deep; but soon we shall be lost in its unshored, harbourless immensities. Ere that come to pass; ere the Pequod’s weedy hull rolls side by side with the barnacled hulls of the leviathan; at the outset it is but well to attend to a matter almost indispensable to a thorough appreciative understanding of the more special leviathanic revelations and allusions of all sorts which are to follow.
]
#lorem(100)
#chapter("Chapter Two", "content for the second chapter in part four. Brace yourselves!")
#lorem(100)
#chapter("Chapter Three", "Why not add a third chapter while we're at it, but this time in part four. Get ready!")
#lorem(100)
#part("Part Five", "Part five promises to take the excitement up a notch")
#lorem(100)
#chapter("Chapter One", "content for first chapter, but this time in part five. Can you handle it?")
#lorem(100)
#chapter("Chapter Two", "content for the second chapter, but this time in part five. Prepare for thrills!")
#lorem(100)
#chapter("Chapter Three", "Why not add a third chapter while we're at it, but this time in part five. It's about to get wild!")
#lorem(100)
#part("Part Six", "Part six is the grand finale you've all been waiting for")
#lorem(100)
#chapter("Chapter One", "content for first chapter, but this time in part six. The climax begins!")
#lorem(100)
#chapter("Chapter Two", "content for the second chapter, but this time in part six. Get ready for the big reveal!")
#lorem(100)
#chapter("Chapter Three", "Why not add a third chapter while we're at it, but this time in part six. Hold onto your seats!")
#lorem(100)
|
|
https://github.com/chendaohan/bevy_tutorials_typ | https://raw.githubusercontent.com/chendaohan/bevy_tutorials_typ/main/01_data/data.typ | typst | #set page(fill: rgb(35, 35, 38, 255), height: auto, paper: "a3")
#set text(fill: color.hsv(0deg, 0%, 90%, 100%), size: 22pt, font: "Microsoft YaHei")
#set raw(theme: "themes/Material-Theme.tmTheme")
= 1. Data
The data structure of an ECS is called a World. This is where all data is stored and managed. For advanced scenarios you can have multiple worlds, each of which then behaves as an independent ECS instance. Usually, however, you just use the main World that Bevy sets up for your App.
You can represent data in two different ways: entities/components and resources.
= 2. Entities / Components
Conceptually, you can think of it as a table: components are like the table's "columns" and entities are like its "rows". An Entity is like the row number: an integer index that lets you look up a specific entity.
#table(
columns: (1fr, 1.5fr, 1fr, 1fr, 1fr, 1fr),
align: center + horizon,
stroke: 1pt + white,
table.header("Entity", "Translation", "Player", "Enemy", "Camera", "Health"),
"0", "✓", "✓", "", "", "✓",
"1", "✓", "", "✓", "", "✓",
"2","✓", "", "", "✓", "",
"3", "✓", "", "✓", "", "✓",
)
```rust
#[derive(Component)]
struct Translation { x: f32, y: f32, z: f32 }
#[derive(Component)]
struct Player;
#[derive(Component)]
struct Enemy;
#[derive(Component)]
struct Camera;
#[derive(Component, Debug)] // Debug is needed so `{health:?}` below can format it
struct Health(f32);
fn spawn_entities(mut commands: Commands) {
commands.spawn((
Translation {x: 0., y: 0., z: 0.},
Player,
Health(50.),
));
commands.spawn((
Translation {x: 5., y: 7., z: 0.},
Enemy,
Health(100.),
));
commands.spawn((
Translation {x: 20., y: 13., z: 0.},
Camera,
));
commands.spawn((
Translation {x: 79., y: 43., z: 0.},
Enemy,
Health(250.),
));
}
fn player_infos(health: Query<&Health>) {
for health in &health {
info!("health: {health:?}");
}
}
```
= 3. Bundle
A Bundle is a component set composed of multiple components that are used together.
```rust
#[derive(Bundle)]
struct PlayerBundle {
translation: Translation,
player: Player,
health: Health,
}
fn spawn_player(mut commands: Commands) {
commands.spawn(PlayerBundle {
translation: Translation {x: 4., y: 3., z: 0.},
player: Player,
health: Health(50.),
});
}
```
= 4. Resources
A Resource is a global instance (a singleton); it is standalone and not associated with any other data.
```rust
#[derive(Resource, Debug)] // Debug is needed so `{settings:?}` below can format it
struct GameSettings {
current_level: u32,
difficulty: u32,
max_time_seconds: u32,
}
fn setup_game_settings(mut commands: Commands) {
commands.insert_resource(GameSettings {
current_level: 1,
difficulty: 100,
max_time_seconds: 60,
});
}
fn game_settings_info(settings: Res<GameSettings>) {
info!("game settings: {settings:?}");
}
``` |
|
https://github.com/nixon-voxell/apu_ppf | https://raw.githubusercontent.com/nixon-voxell/apu_ppf/main/ppf.typ | typst | #set text(font: "Times New Roman", size: 12pt, hyphenate: false)
#align(center)[
#set text(size: 18pt)
#underline[*DRAFT PROJECT PROPOSAL FORM*]
#image("assets/apu-logo.png", width: 50%)
]
// #align(right)[
// #move(dx: 20pt, dy: -40pt)[
// #box(width: 80%)[
// #align(left)[
// #grid(
// columns: (1fr, 1fr),
// stroke: black,
// inset: 0.65em,
// [
// *Office Record* \
// Date Received: \
// Received by whom: \
// ],
// [
// *Receipt* \
// Student Name: <NAME> \
// Student Number: TP058994 \
// Received by: \
// Date: \
// ],
// )
// ]
// #align(left + bottom)[
// #stack(
// dir: ltr,
// image("assets/apu-logo.png", width: 30%),
// align(top)[#underline[*DRAFT PROJECT PROPOSAL FORM*]],
// )
// ]
// ]
// ]
// ]
#text(weight: "bold")[
#grid(
columns: (auto, auto, 1fr),
inset: 0.65em,
[Proposal ID], [:], [],
[Supervisor], [:], [],
[Student Name], [:], [<NAME>],
[Student No], [:], [TP058994],
[Email Address], [:], [<EMAIL>],
[Programme Name], [:], [Computer Games Development (CGD)],
[Title of Project],
[:],
[Real-time Global Illumination and Dynamic Compute-Centric Vector Graphics in Games],
)
]
#pagebreak()
#outline()
#pagebreak()
#set heading(numbering: "1.")
#set par(justify: true, leading: 1em)
#set enum(indent: 1em)
#show par: set block(spacing: 2em)
#show heading: set block(height: 1em)
// #show heading: it => [
// #stack(dir: ltr)[
// CHAPTER #counter(heading).display(
// it.numbering
// ) #it.body
// ]
// ]
// #pagebreak()
// = Abstract
// Reflection of what information you have.
// How you do sampling.
// 6 keywords
= Introduction
// Assume the reader has very little knowledge of the subject. Introduce the topic, the sector of business/industry concerned and how the project relates to it. Define the context of the problem and identify the research required to solve it.
Achieving visually rich and interactive content in real-time without compromising performance is a key aspect of immersive gameplay.
This project addresses two major challenges in modern game development: creating dynamic, interactive user experiences and implementing accurate, real-time lighting models.
Tackling these challenges requires three key innovations: a compute-centric vector graphics renderer, a programmable approach for developing interactive content, and a performant global illumination technique.
Vector graphics is a form of computer graphics where visual images are generated from geometric shapes, such as points, lines, curves, and polygons, defined on a Cartesian plane.
Vector graphics are often used in situations where scalability and precision are essential. Common applications include: logos, typography, diagrams, charts, motion graphics, etc.
Examples of software that generates or uses vector graphics content include Adobe Illustrator, Adobe After Effects, Affinity Publisher, Graphite, and many more.
Vector graphics is also used in a wide range of file formats including Scalable Vector Graphics (SVG), Portable Document Format (PDF), TrueType Font (TTF), OpenType Font (OTF), etc.
However, these formats are rarely used in the game industry directly (they are often preprocessed into other formats, e.g. triangulations or signed distance fields [SDF]), as game engines are often only tailored towards rendering triangles and bitmap textures instead of the paths and curves that we see in vector graphics formats.
Markup languages _(e.g. Hypertext Markup Language [HTML], Extensible Markup Language [XML])_ and style sheets _(e.g. Cascading Style Sheets [CSS])_ have dominated the way developers lay out and style content.
Over the years, technologies like the Unity UI Toolkit have evolved in the game industry to adopt the same pattern but with a user-friendly editor, allowing users to lay out content in a drag-and-drop manner while styling it using sliders, color pickers, and input fields @jacobsen2023.
While this improves the user experience of content creation, it lacks the capability of integrating logic and custom content right inside the user interfaces.
These features are often delegated to the programmer, which can lead to unintended miscommunication.
Calculating indirect lighting is extremely computationally expensive, as it requires simulating how light bounces off surfaces and interacts with the environment.
Ray tracing is an algorithm that calculates these light interactions by tracing rays from the camera into the scene, following their paths as they bounce off surfaces and interact with materials.
Each bounce contributes to the final color and lighting of the scene, accounting for reflections, refractions, and scattering.
Unfortunately, ray tracing is too slow for real-time applications, like games.
New techniques like light probes and light baking have been employed to approximate global illumination in modern game engines.
However, the major issue still exists for these techniques --- scalability to larger and more complex scenes.
#pagebreak()
= Problem Statement
// Identify past and current work in the subject area. Outline the key references to other people’s work, indicate for the most pertinent of these how your proposal relates to the ideas they contain.
== Vector Graphics
#figure(caption: [Vector vs Bitmap graphics @arneratermanis2017])[#image("assets/vector-vs-bitmap.png")] <vector-vs-bitmap>
Traditional methods of rendering 2D graphics has always relied on bitmap-based texture mapping @ray2005vector.
While this approach is ubiquitous, it suffers from a major drawback: the _pixelation_ effect when scaled beyond the original resolution @nehab2008random.
Furthermore, creating animations using bitmap graphics can be extremely limited and complex because of the rigid grid-like data structure used to store the data.
Animating bitmap graphics is commonly done through the use of shaders, which directly manipulate the individual pixels, or by relying on image sequences (flipbooks), which produce an illusion of movement.
Unlike raster graphics, which rely on a fixed grid of pixels, vector graphics are resolution-independent.
This means that it can scale without losing quality (shown in @vector-vs-bitmap).
A vector illustration is composed of multiple _paths_ that define _shapes_ to be painted in a given order @ganacim2014massively.
Each of these individual paths can be traced, altered, or even morphed into a completely different shape which allows for a huge variety of animation techniques.
// TODO: use this in limitations?
Lastly, it is crucial to recognize that while vector graphics offer numerous benefits, they are only suitable for representing precise shapes --- such as fonts, logos, and icons.
In contrast, complex images with intricate details, like photographs of a cat are far better represented using bitmap formats.
== Interactive UI/UX
Most game engines in the market like Unity, Godot, Game Maker, and Unreal Engine uses a WYSIWYG _(What You See Is What You Get)_ editor for creating user interfaces.
WYSIWYG editors are visual-centric tools that let users work directly within the presentation form of the content @madje2022programmable.
Users normally lay out their content using a drag-and-drop editor and then style it using a style sheet.
To bind interactions or animations to a piece of content, users need to label it with a unique tag and query it through code.
Complex content and logic wouldn't be possible through a typical WYSIWYG editor.
For instance, it is virtually impossible to author a custom polygon shape in the editor with a custom math-based animation driven by a time value.
This can only be achieved through code, and is often limited by the application programming interface (API) layer provided by the WYSIWYG editor.
This creates a huge distinction between the game logic and the visual representation that is needed to convey the messages.
While hot-reloading is applicable to the layout and styling of content (and, to some extent, simple logic), a WYSIWYG editor is not capable of hot-reloading complex logic, as that can only be achieved in code, which in most cases requires a re-compilation.
This could lead to frustration and loss of creativity due to the slow feedback loop.
In summary, WYSIWYG editors are great for prototyping but struggle with complex animations and interactions.
== Global Illumination
#figure(caption: [Local illumination vs global illumination @jalnionis2022])[#image("assets/global-illumination.png")]
Global illumination has been a notoriously hard problem to solve in computer graphics.
To put things into perspective, global illumination intends to solve the _many to many_ interactions between light, obstacles, and volumes.
In real-time game engines like Unity and Unreal Engine, light probes _(a.k.a radiance probes)_ are placed around the scene to capture lighting information into a cube map, which can be applied to nearby objects.
To smoothen out the transition between probes, objects interpolate between various nearby probes weighted by distance to approximate the global radiance.
Manual placement of probes leads to questions like "how many probes should a scene have?" or "how many probes give a good approximation?".
It ultimately becomes a trade-off between fidelity and performance, with more probes resulting in a better approximation, while fewer probes improve performance.
This paradoxical issue raises the challenge of finding the optimal balance.
This dilemma underscores the need for smarter, adaptive techniques, ensuring both visual fidelity and efficiency.
#pagebreak()
= Project Aim and Objectives
In this project, we aim to empower creators to create rich and visually appealing content in games in an efficient and streamlined workflow.
This allows the creator to have the luxury of focusing most of their time on quality content rather than the technical details needed to achieve the look or feel that they envisioned.
The objectives of this project are:
+ To utilize Vello, a compute-centric vector graphics renderer for rendering animated and dynamic vector graphics content.
+ To create an intuitive and yet powerful (programmable) workflow for generating animated and dynamic content.
+ To streamline the collaboration between UI/UX developers and gameplay programmers.
+ To allow creators to focus on the creative aspects of game development.
+ To implement Radiance Cascades, a technique that provides realistic lighting without sacrificing real-time performance.
#pagebreak()
= Literature Review
== Vector Graphics
Scanline rendering is the process of sweeping rays from one side of the screen to the other, coloring the pixels in between based on collision checks against the paths they cross.
A GPU-based scanline rasterization method has been proposed that parallelizes over _boundary fragments_ while bulk-processing non-boundary fragments as horizontal spans @li2016efficient.
This method allows fully animated vector graphics to be rendered at interactive frame rates.
Apart from scanline rasterization, tessellation can also be used to convert vector graphics into triangles that are then pushed to the GPU for hardware-accelerated rasterization.
#cite(<loop2005resolution>, form: "prose") further improved this method by removing the need to approximate curve segments with lines.
Instead, each curve segment is evaluated in a _fragment shader_ on the GPU.
This allows for extreme zoom levels without sacrificing quality.
Re-tessellation of vector graphics can be computationally expensive, especially since it is inherently a serial algorithm that often has to run on the CPU.
#cite(<kokojima2006resolution>, form: "prose") combines the work of #cite(<loop2005resolution>, form: "prose") with the GPU's stencil buffer, using _triangle fans_ to skip the tessellation process.
This approach, however, does not extend to cubic Bézier segments, as they might not be convex.
#cite(<rueda2008gpu>, form: "prose") addressed this issue by implementing a fragment shader that evaluates the implicit equation of the Bézier curve to discard the pixels that fall outside it.
The two-step "Stencil then Cover" (StC) method builds upon all of this work and unifies path rendering with OpenGL's shading pipeline --- #text(font: "Consolas")[NV_path_rendering] @kilgard2012gpu.
This library was further improved upon by adding support for transparency groups, patterns, gradients, more color spaces, etc. @batra2015accelerating.
It was eventually integrated into Adobe Illustrator.
// TODO: Vector textures
// TODO: Other solutions as well (Skia, Pathfinder, etc.)
/*
Vector Graphics
- Scanline
- Triangulation
- Stencil then Cover (StC)
- Further improved and applied to real world application like Adobe Illustrator
- ^ Composition, Gradients
- Vector texture
- Massively parallel
*/
== Interactive UI/UX
Beneath all graphical interfaces lies the underlying code that structures and renders the visual elements.
The two most notable approaches to creating user interface frameworks are immediate-mode graphical user interfaces (IMGUI) and retained-mode graphical user interfaces (RMGUI).
Some popular IMGUI frameworks include Dear ImGui and egui @imgui @egui, while a popular RMGUI framework is Xilem @xilem.
Although powerful, these UI frameworks rely heavily on hardcoded programming.
Enter the web technologies.
Modern browsers typically render UI elements using markup languages like HTML and SVG for structuring the content and style-sheets like CSS for styling them.
The use of markup structures allows developers to fully separate their UI layout from the codebase, simplifying the identification and management of UI components.
With style sheets, developers can create, share, and reuse templates, enhancing consistency and streamlining the design process throughout the application.
// TODO: explore in more detail on each framework
Notable frameworks that utilize this model include the Unity UI Toolkit, React, Vue, etc. @jacobsen2023 @react @vue.
Markup languages also give rise to many WYSIWYG editors.
These editors let users lay out UI via drag-and-drop actions for quick prototyping, as each component can now be represented using only markup syntax (no code required).
A major limitation of simple markup languages like HTML is that structure changes can only be achieved through code.
For example, if you want a form to disappear after a button press, you would need to alter the HTML via code.
Typst offers an alternative to this problem by introducing programming capabilities directly into the markup.
Typst is a competitor of LaTeX, designed to simplify the typesetting process with a modern and intuitive approach.
Unlike its predecessors, Typst can directly embed logic.
Using the previous example, developers would only need to pass in a boolean value, and Typst would automatically exclude the form from the layout altogether.
This currently works only in theory, as Typst is primarily a document generator without a user-friendly interface for modifying defined variables.
This is where our project comes in: we aim to provide this interface through Velyst, which couples Typst with Vello for rendering dynamic and programmable content in real-time.
/*
Interactive UI/UX
- From code to markup to css styling
- Research on WYSIWYG editors
- Explore competitors: LaTeX
*/
== Global Illumination
Ray tracing is the de facto standard for calculating the light bounces that contribute to global illumination.
Clever methods like backwards ray tracing have been introduced to speed up the algorithm, but it is still nowhere near real-time frame rates @arvo1986backward.
Light baking was introduced to solve this issue; however, it lacks the ability to adapt to runtime scene changes.
Recent studies have shown great results using neural networks to approximate global illumination @choi2024baking.
// TODO: add more sources to support the claim
However, neural-network-based methods tend to suffer from unpredictability, as the output depends heavily on the training data, making them unreliable.
Recent work by #cite(<mcguire2017real>, form: "prose") places light field probes around the scene to encode lighting information from static objects and samples them in real-time.
Dynamic diffuse global illumination (DDGI) further improves this technique by allowing light field probes to update dynamically based on scene changes @majercik2019dynamic.
Radiance cascades improves upon this technique by using a hierarchical structure to place light probes @osborne2024radiance.
This technique is based upon the _penumbra condition_, where closer distances require low angular resolution and high spatial resolution, while farther distances require high angular resolution and low spatial resolution.
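As a rough sketch of the scaling this implies (the exact factors are assumptions here and vary between implementations), each successive cascade level $i$ trades spatial resolution for angular resolution:
$ "probe spacing" prop 2^i, quad "rays per probe" prop 4^i $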
/*
Global illumination
- Backwards Ray Tracing
- Light baking
- Light probes
- Adaptive probes volumes (only capture local volumes)
- Voxell GI
- SDF GI
*/
#pagebreak()
= Deliverables
This project introduces *Velyst*, an innovative approach towards generating interactive content.
It utilizes *Vello*, a compute-centric vector graphics renderer @linebender, and *Typst*, a programmable markup language for typesetting @madje2022programmable.
Velyst provides an extremely streamlined workflow that allows both UI/UX developers and gameplay programmers to easily collaborate in a project. The following are the deliverables that will be implemented in the Velyst library:
+ Allows the user to create interactive content that responds to user inputs in real-time.
+ Allows the user to perform hot-reloading during the development phase.
+ Allows the user to synchronize components with in-app states dynamically.
+ Allows the user to embed logic directly inside Typst scripts for rendering complex scenes.
#figure(caption: "Velyst wave (custom shape + animation) demo")[#image(
"assets/velyst-wave.png",
width: 90%,
)]
#figure(caption: "Velyst user interface demo")[#image(
"assets/velyst-ui.png",
width: 90%,
)]
A library will also be created that utilizes the *Radiance Cascades* technique for rendering real-time 2D global illumination in the Bevy game engine. The following are the deliverables for the Radiance Cascades library:
+ Allows the user to implement global illumination using Radiance Cascades.
+ Allows the user to create adaptive lighting that responds to scene changes in real-time.
+ Allows the user to use arbitrary emissive shapes to illuminate the scene in-game.
#figure(caption: "Radiance cascades demo")[#image(
"assets/radiance-cascades.png",
width: 90%,
)]
// Data Gathering Design:
// 2 techniques
#pagebreak()
= References
#bibliography("citation.bib", style: "apa", title: none)
|
|
https://github.com/tuto193/typst-uos-thesis | https://raw.githubusercontent.com/tuto193/typst-uos-thesis/main/main.typ | typst | MIT License | #import "template.typ": project, gls, cite-et-al, cite-string, fancy-align, fancy-fill, fancy-stroke
#import "glossaries.typ": GLOSSARIES
#import "languages.typ": dict
// If you need to mark stuff and not forget!
// #import "@preview/big-todo:0.2.0": *
// If you need FANCIER tables!
// #import "@preview/tablex:0.0.7": tablex, rowspanx, colspanx, hlinex, vlinex
// Make a simple separator for term lists
#set terms(separator: [: ], tight: false)
// Set the language of the document here!
// #let language = "de"
// #let language = "es"
#let language = "en"
#let figimage = figure.with(supplement: [#dict.at(language).at("fig")], kind: image)
// Use this to cite code in a cool way
#let code(body, caption: "", label: <code>, placement: auto) = {
let content = ()
let i = 1
for item in body.children {
if item.func() == raw {
for line in item.text.split("\n") {
content.push(text(str(i), font: "Fira Code", fill: luma(180), size: 10pt))
content.push(raw(line, lang: item.lang))
i += 1
}
}
}
let bg-fill-1 = luma(230)
let bg-fill-2 = luma(240)
[
#figimage(
block(
// stroke: 1pt + gray,
inset: 0pt,
// fill: rgb(99%, 99%, 99%),
fill: bg-fill-1,
// width: 0.8fr,
radius: 10pt,
)[
#set align(left)
#table(
columns: (auto, 1fr),
inset: 4pt,
stroke: none,
fill: (_, row) => {
if calc.odd(row) {
bg-fill-2
} else {
bg-fill-1
}
},
align: horizon,
..content
)
],
caption: [#caption],
placement: placement,
)
#label
]
}
#let figtable = figure.with(supplement: [#dict.at(language).at("tab")], kind: table)
#show: project.with(
title: "My Very Cool Title",
lang: language,
degree: "master",
author: "<NAME>",
registration-number: "123456",
institute: "Institute of Very Cool People",
logo: "images/logo.png",
bibliography-path: "bibliography.bib",
bibliography-style: "ieee",
city: "Osnabrück",
first-supervisor: "Prof. Dr. <NAME>",
second-supervisor: "Prof. Dr. rer. nat. <NAME>",
// abstract: [
// I am the abstract text.
// ],
glossaries: GLOSSARIES,
double-sided: false, // affects the placements of page numbers
)
= Introduction
<chap-Introduction>
Here we are in @chap-Introduction.
#lorem(90)... see @fig-1
#figimage(
[A simple test figure],
caption: [My very cool caption],
) <fig-1>
== Motivation
<sec-Motivation>
We are in @sec-Motivation and here is a fancy table in @table-1, and we can
do some references. Let's quote something with many authors #cite-et-al("RWS+2023GDAL", form: "prose").
#figtable(
table(
columns: 2,
align: fancy-align,
fill: fancy-fill,
stroke: fancy-stroke,
[My Left], [My Right],
[1], [a],
[2], [b],
[3], [c],
),
caption: "Not so fancy caption",
) <table-1>
There is also some code in @code-1.
#code(
[
```python
my_list: list = [1, 3, 4, 9]
print(my_list)
```
],
caption: [Some code],
label: <code-1>
)
|
https://github.com/bennyhandball/PA1_LoB_Finance | https://raw.githubusercontent.com/bennyhandball/PA1_LoB_Finance/main/PA/supercharged-dhbw/2.1.0/acronym-lib.typ | typst | #let prefix = "acronym-state-"
#let acros = state("acronyms", none)
#let init-acronyms(acronyms) = {
acros.update(acronyms)
}
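// Usage sketch (hypothetical dictionary): definitions are either a plain
// string or an array of one (singular) or two (singular, plural) strings.
//
// #init-acronyms((
//   "GPU": ("Graphics Processing Unit",),
//   "OS": ("Operating System", "Operating Systems"),
// ))
// The first use of #acr("GPU") renders "Graphics Processing Unit (GPU)";
// subsequent uses render just "GPU".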
// Check if an acronym exists
#let is-valid(acr) = {
acros.display(acronyms => {
if acr not in acronyms {
panic(acr + " is not a key in the acronyms dictionary.")
return false
}
})
return true
}
// Display acronym as clickable link
#let display-link(acr, text) = {
if is-valid(acr) {
link(label("acronym-" + acr), text)
}
}
// Display acronym
#let display(acr, text, link: true) = {
if link {
display-link(acr, text)
} else {
text
}
}
// Display acronym in short form
#let acrs(acr, plural: false, link: true) = {
if plural {
display(acr, acr + "s", link: link)
} else {
display(acr, acr, link: link)
}
}
// Display acronym in short plural form
#let acrspl(acr, link: true) = {
acrs(acr, plural: true, link: link)
}
// Display acronym in long form
#let acrl(acr, plural: false, link: true) = {
acros.display(acronyms => {
if is-valid(acr) {
let defs = acronyms.at(acr)
if type(defs) == "string" {
if plural {
display(acr, defs + "s", link: link)
} else {
display(acr, defs, link: link)
}
} else if type(defs) == "array" {
if defs.len() == 0 {
panic("No definitions found for acronym " + acr + ". Make sure it is defined in the dictionary passed to #init-acronyms(dict)")
}
if plural {
if defs.len() == 1 {
display(acr, defs.at(0) + "s", link: link)
} else if defs.len() == 2 {
display(acr, defs.at(1), link: link)
} else {
panic("Definitions should be arrays of one or two strings. Definition of " + acr + " is: " + type(defs))
}
} else {
display(acr, defs.at(0), link: link)
}
} else {
panic("Definitions should be arrays of one or two strings. Definition of " + acr + " is: " + type(defs))
}
}
})
}
// Display acronym in long plural form
#let acrlpl(acr, link: true) = {
acrl(acr, plural: true, link: link)
}
// Display acronym for the first time
#let acrf(acr, plural: false, link: true) = {
if plural {
display(acr, [#acrlpl(acr) (#acr\s)], link: link)
} else {
display(acr, [#acrl(acr) (#acr)], link: link)
}
state(prefix + acr, false).update(true)
}
// Display acronym in plural form for the first time
#let acrfpl(acr, link: true) = {
acrf(acr, plural: true, link: link)
}
// Display acronym. Expands it if used for the first time
#let acr(acr, plural: false, link: true) = {
state(prefix + acr, false).display(seen => {
if seen {
if plural {
acrspl(acr, link: link)
} else {
acrs(acr, link: link)
}
} else {
if plural {
acrfpl(acr, link: link)
} else {
acrf(acr, link: link)
}
}
})
}
// Display acronym in the plural form. Expands it if used for the first time.
#let acrpl(acronym, link: true) = {
acr(acronym, plural: true, link: link)
}
// Print an index of all the acronyms and their definitions.
#let print-acronyms(language, acronym-spacing) = {
heading(level: 1, outlined: false, numbering: none)[#if (language == "de") {
[Abkürzungsverzeichnis]
} else {
[List of Acronyms]
}]
acros.display(acronyms=>{
let acronym-keys = acronyms.keys()
let max-width = 0pt
for acr in acronym-keys {
let result = measure(acr).width
if (result > max-width) {
max-width = result
}
}
let acr-list = acronym-keys.sorted()
for acr in acr-list{
grid(
columns: (max-width + 0.5em, auto),
gutter: acronym-spacing,
[*#acr#label("acronym-" + acr)*], [#acrl(acr, link: false)]
)
}
})
} |
|
https://github.com/coastalwhite/typst-tudelft | https://raw.githubusercontent.com/coastalwhite/typst-tudelft/main/README.md | markdown | # TU Delft Typst Template
This is a very basic Typst template to create slide decks for presentations with
TU Delft affiliation.
## License
Assets from TU Delft are not mine and are taken from the TU Delft website. |
|
https://github.com/peng1999/typst-pyrunner | https://raw.githubusercontent.com/peng1999/typst-pyrunner/main/CHANGELOG.md | markdown | MIT License | # Changelog
All notable changes to this project will be documented in this file.
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.1.0/),
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
## [Unreleased]
## [0.2.0]
### Fixed
- Panic when calling `open`.
- Panic when calling `datetime.now()`
### Improved
- Bundled wasm file gets smaller by removing some useless file from stdlib.
## [0.1.0]
### Added
- `block`, `compile`, and `call` API.
[Unreleased]: https://github.com/peng1999/typst-pyrunner/compare/v0.2.0...HEAD
[0.2.0]: https://github.com/peng1999/typst-pyrunner/compare/v0.1.0...v0.2.0
[0.1.0]: https://github.com/peng1999/typst-pyrunner/releases/tag/v0.1.0
|
https://github.com/protohaven/printed_materials | https://raw.githubusercontent.com/protohaven/printed_materials/main/meta-documentation/README-class_handouts-content.md | markdown | # Protohaven Class Handouts
This repo tracks all of the materials to reproduce, maintain, develop the class handout PDFs:
* Typst sources
* Source images
* Source files for annotated images
* Reference materials
* Notes
For more notes, see the Wiki:
https://protohaven.org/wiki/class_materials
# Notes on Editing the Typst Sources
## Lists
Typst is picky about syntax in a few places. One of those places is lists, which we use to designate authors, instructors, and clearances in the template metadata.
Lists look like this:
```
("thing one", "thing two")
```
...quoted strings, separated by commas, enclosed in parentheses.
When specifying people as authors or instructors, it's best to include name and email address (if available) in the standard format:
```
("<NAME> <<EMAIL>>", "<NAME> <<EMAIL>>")
```
### Lists with one element require an extra comma
For example, we may have two people that we want to cite as an author or instructor:
```
("Ashurbanipal", "Ikunum")
```
...and the above list should work fine (those folks don't have valid email addresses).
If you only want to list one person in the template, however, this won't work:
```
("Ashurbanipal")
```
The problem here is that Typst doesn't think that's a list (instead, it interprets the above as a single string), so the code that automatically processes the instructor entries will fail (because it's expecting a list, not a single string).
(This is a bug in the template; it's being worked on.)
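You can see how Typst interprets an expression with the built-in `type()` function (a quick sanity check, not part of the template):
```
#type(("Ashurbanipal"))   // a string: the parentheses are just grouping
#type(("Ashurbanipal",))  // an array: the trailing comma makes it a list
```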
The workaround to convince Typst that this is really a list (and make it happy) is to add an extra comma after the quoted string:
```
("Ashurbanipal",)
```
...which is conformant syntax for specifying a list of one thing. |
|
https://github.com/kdog3682/2024-typst | https://raw.githubusercontent.com/kdog3682/2024-typst/main/src/ex-graph-of-x2.typ | typst | #import "base-utils.typ": *
#let generate-points(domain, fn, skip: none) = {
let skip-fn(x, y) = {
if skip == none {
return false
}
if is-array(skip) and x in skip {
return true
}
if is-string(skip) {
if skip == "odd" {
return is-odd(x)
}
else if skip == "even" {
return is-even(x)
}
else {
panic("aa")
}
}
return skip(x, y)
}
let store = ()
for i in range(domain.at(0), domain.at(1) + 1) {
let pt = (i, fn(i))
if not skip-fn(..pt) {
store.push(pt)
}
}
return store
}
#let skip = "odd"
// Debug check: #panic aborts compilation, so keep this commented out
// unless you want to inspect the generated points.
// #panic(generate-points((-3, 3), (x) => calc.pow(x, 2), skip: skip))
// v represents the value
// use with y-format: y-formatterr
// currently not used
#let y-formatter(v) = {
if v == 0 {
[#str(v)]
} else {
[#str(v)]
}
}
#let pi-formatter(v) = if v != 0 {$ #{v/calc.pi} pi $} else {$ 0 $}
// formats the x-axis with pi
#let fn-xsquared = (x) => calc.pow(x, 2)
#let fn-xcubed = (x) => calc.pow(x, 3)
#let plot(domain) = {
cetz.canvas({
import cetz.plot
import cetz.draw: *
plot.plot(name: "p",
//x-format: formatter,
// x label is the "x"
grid: "major",
// didnt really do anything
size: (4,4),
axis-style: "school-book",
// looks quite nice
x-minor-tick-step: 1,
// the y tick steps here were undefined variables; literal values substituted
x-tick-step: 1, y-tick-step: 8, y-minor-tick-step: 4, {
plot.add(domain: (0, 2*calc.pi), fn-xsquared)
for i in range(2, 6) {
plot.add-anchor(str(i), (i, fn-xsquared(i)))
// drawing the rectangles can be done in a different way too
}
})
for-each-anchor("p", (x) => {
if x.match(regex("^\\d$")) != none {
circle("p." + x, radius: 2pt, fill: black)
}
})
for i in range(2, 7) {
rect((i, 0), (i + 1, 2))
}
})
}
// i forgot what was supposed to happen ...
#import "@preview/cetz:0.2.0"
#let y-formatter(v) = {
if v == 0 {
[#str(v)]
} else {
[#str(v)]
}
}
#let formatter(v) = if v != 0 {$ #{v/calc.pi} pi $} else {$ 0 $}
#let plot(content) = {
cetz.canvas({
import cetz.plot
import cetz.draw: *
plot.plot(name: "p",
//x-format: formatter,
// x label is the "x"
grid: "major",
y-minor-tick-step: 4,
size: (4,4),
axis-style: "school-book",
x-minor-tick-step: 1,
x-tick-step: 2, y-tick-step: 8, {
let fn = (x) => calc.pow(x, 2)
plot.add(domain: (-5, 5), fn)
for i in range(2, 6) {
plot.add-anchor(str(i), (i, fn(i)))
}
})
for-each-anchor("p", (x) => {
if x.match(regex("^\\d$")) != none {
circle("p." + x, radius: 2pt, fill: black)
}
})
})
}
#plot(1)
/*
#let grapher() = {
cetz.canvas({
import cetz.plot
import cetz.draw: *
plot.plot(size: (4, 4), name: "plot",
data: (x) => calc.pow(x, 2),
x-tick-step: 1, y-tick-step: 1,
axis-style: "school-book", {
plot.add(((0,0), (1,1), (2,1), (4,3)))
plot.add-anchor("pt", (1,1))
})
//circle("plot.pt", radius: 0.5, fill: yellow)
line("plot.pt", ((), "|-", (0, 2)), mark: (fill: black, start: "stealth", ), name: "line")
content("line.end", [$(1,1)$], anchor: "south", padding: .2)
})
}
*/
|
|
https://github.com/TypstApp-team/typst | https://raw.githubusercontent.com/TypstApp-team/typst/master/tests/typ/text/linebreak.typ | typst | Apache License 2.0 | // Test line breaks.
---
// Test overlong word that is not directly after a hard break.
This is a spaceexceedinglylongy.
---
// Test two overlong words in a row.
Supercalifragilisticexpialidocious Expialigoricmetrioxidation.
---
// Test that there are no unwanted line break opportunities on run change.
This is partly emp#emph[has]ized.
---
Hard #linebreak() break.
---
// Test hard break directly after normal break.
Hard break directly after \ normal break.
---
// Test consecutive breaks.
Two consecutive \ \ breaks and three \ \ more.
---
// Test forcing an empty trailing line.
Trailing break \ \
---
// Test justified breaks.
#set par(justify: true)
With a soft #linebreak(justify: true)
break you can force a break without #linebreak(justify: true)
breaking justification. #linebreak(justify: false)
Nice!
---
// Test comments at the end of a line
First part//
Second part
// Test comments at the end of a line with pre-spacing
First part //
Second part
---
// Test linebreak for East Asian languages
ทีวีตรวจทานนอร์ทแฟรีเลคเชอร์โกลด์อัลบัมเชอร์รี่เย้วสโตร์กฤษณ์เคลมเยอบีร่าพ่อค้าบลูเบอร์รี่สหัสวรรษโฮปแคนูโยโย่จูนสตรอว์เบอร์รีซื่อบื้อเยนแบ็กโฮเป็นไงโดนัททอมสเตริโอแคนูวิทย์แดรี่โดนัทวิทย์แอปพริคอทเซอร์ไพรส์ไฮบริดกิฟท์อินเตอร์โซนเซอร์วิสเทียมทานโคโยตี้ม็อบเที่ยงคืนบุญคุณ
|
https://github.com/Jollywatt/typst-wordometer | https://raw.githubusercontent.com/Jollywatt/typst-wordometer/master/tests/text-extraction/test.typ | typst | MIT License | #import "/src/lib.typ": *
#set page(width: 15cm, height: auto)
#let el = [
Hello, _this_ is a bunch of `content`. \ New line.
== What we have
- This and that
- etcetera
#text(red)[what we *need*]
#circle[love]
]
#rect(el)
#raw(extract-text(el))
|
https://github.com/mrtz-j/typst-thesis-template | https://raw.githubusercontent.com/mrtz-j/typst-thesis-template/main/template/utils/feedback.typ | typst | MIT License | #let feedback(
feedback: none,
response: none,
) = {
counter("feedback").step()
rect(
width: 100%,
inset: 10pt,
radius: 3pt,
stroke: 0.5pt + blue,
fill: blue.lighten(75%),
)[
#text(
weight: 700,
counter("feedback").display() + ". Feedback: ",
)
#feedback
#if response != none {
line(length: 100%, stroke: 0.5pt + blue)
[#text(weight: 700, "Response: ") #response]
}
]
}
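// Usage sketch (hypothetical content):
//
// #feedback(
//   feedback: [Please expand the related-work discussion.],
//   response: [Added two paragraphs comparing prior approaches.],
// )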
|
https://github.com/max-niederman/MATH51 | https://raw.githubusercontent.com/max-niederman/MATH51/main/hw/5.typ | typst | #import "../lib.typ": *
#show: homework.with(
title: "Math 51 Homework #5"
)
= 16.1
== a
$x_(n+1)$, the number of customers of $P_1$ in year $n+1$,
is equal to the sum of the $0.85x_n$ customers who stayed with $P_1$ from the previous year
and the $0.08y_n$ who switched from $P_2$.
Written as an equation, this implies:
$ x_(n+1) = 0.85x_n + 0.08y_n $
Similarly for $y$, $y_(n+1)$ is equal to the sum of the $0.15x_n$ customers who switched from $P_1$ and the $0.92y_n$ who stayed with $P_2$:
$ y_(n+1) = 0.15x_n + 0.92y_n $
We can rewrite this system of linear equations as a matrix equation:
$
vname(v)_(n+1) = mat(
0.85, 0.08;
0.15, 0.92;
) vname(v)_n \
M = mat(
0.85, 0.08;
0.15, 0.92;
)
$
== b
The first row of $M^2$ describes the transformation (thought of as a $1 times 2$ matrix)
which takes the initial state vector $vname(v)_n$
to the scalar $x_(n+2)$ representing the number of customers of $P_1$ two years later.
As a scalar equation, it states
$ x_(n+2) = 0.7345x_n + 0.1416y_n $
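(Indeed, the first row of $M^2$ is $(0.85^2 + 0.08 dot 0.15, 0.85 dot 0.08 + 0.08 dot 0.92) = (0.7345, 0.1416)$.)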
== c
The two numbers appearing in the matrix
represent the relative frequency of customers of each of the two phone companies.
= 16.3
== a
First, in spring, $10%$ of the birds on $I_1$ migrate.
This corresponds to the matrix $B$.
Then, in fall, $10%$ of the birds on $I_2$ migrate.
This corresponds to the matrix $D$.
Multiplying these matrices together gives the matrix $M$:
$
M &= D B \
&= mat(
1, 0.1;
0, 0.9;
) mat(
0.9, 0;
0.1, 1;
) \
&= mat(
0.91, 0.1;
0.09, 0.9;
)
$
== b
$N$ is the matrix corresponding to the application of the migration pattern twice.
That is to say, $N = M^2$.
$
N &= M^2 \
&= mat(
0.91, 0.1;
0.09, 0.9;
) mat(
0.91, 0.1;
0.09, 0.9;
) \
&= mat(
0.8371, 0.181;
0.1629, 0.819;
)
$
== c
It means that the eventual populations of birds on $I_1$ and $I_2$
are independent of the initial populations of birds on $I_1$ and $I_2$,
given enough time (large $n$).
= 16.4
== a
$
A = mat(
0.9, 0, 0.05;
0, 0.9, 0.1;
0.1, 0.1, 0.85;
)
$
== b
$M$ corresponds to the application of $A$ ten times, so $M = A^10$.
= 16.6
== c
$
A vname(v)_n
&= mat(
1, -1;
1, 0;
) vec(
a_(n+1),
a_n
) \
&= vec(
a_(n+1) - a_n,
a_(n+1)
) \
&= vec(
a_(n+2),
a_(n+1)
) \
&= vname(v)_(n+1)
$
$
A^6
&= (A^3)^2 \
&= (mat(
1, -1;
1, 0;
) mat(
1, -1;
1, 0;
) mat(
1, -1;
1, 0;
))^2 \
&= (mat(
1, -1;
1, 0;
) mat(
0, -1;
1, -1;
))^2 \
&= mat(
-1, 0;
0, -1;
)^2 \
&= mat(
1, 0;
0, 1;
) \
&= I_2
$
== d
$A$ is the matrix that takes $vname(v)_n$ to $vname(v)_(n+1)$,
so $A^6$ is the matrix which takes $vname(v)_n$ to $vname(v)_(n+6)$.
Then $A^6 = I_2$ implies that $vname(v)_(n+6) = I_2 vname(v)_n = vname(v)_n$.
The second components of these two vectors must therefore be equal,
so $a_(n+6) = a_n$.
We can verify this from the recurrence definition of the sequence:
$
a_(n+6)
&= a_(n+5) - a_(n+4) \
&= (a_(n+4) - a_(n+3)) - (a_(n+3) - a_(n+2)) \
&= a_(n+4) - 2a_(n+3) + a_(n+2) \
&= (a_(n+3) - a_(n+2)) - 2a_(n+3) + a_(n+2) \
&= -a_(n+3) \
&= -(a_(n+2) - a_(n+1)) \
&= -a_(n+2) + a_(n+1) \
&= -(a_(n+1) - a_n) + a_(n+1) \
&= a_n
$
= 16.7
== a
$
M &= mat(
0.93, 0.03;
0.07, 0.97;
) \
M' &= mat(
0.95, 0.04;
0.05, 0.96;
)
$
The first columns of $M$ and $M'$ have two entries representing the probability that
an urban resident will remain in the city or move to the suburbs, respectively.
Similarly, the second columns represent whether suburban residents will move to the city or remain in the suburbs.
These matrices are correct because they correctly correspond to the probabilities given:
- $M e_1 = vec(0.93, 0.07)$, so 7% of urban residents of $C$ move to the suburbs and the rest (93%) stay.
- $M e_2 = vec(0.03, 0.97)$, so 3% of suburban residents of $C$ move to the city and the rest (97%) stay.
- $M' e_1 = vec(0.95, 0.05)$, so 5% of urban residents of $C'$ move to the suburbs and the rest (95%) stay.
- $M' e_2 = vec(0.04, 0.96)$, so 4% of suburban residents of $C'$ move to the city and the rest (96%) stay.
== b
The columns of $M^m$ both being $vec(.3, .7)$ implies that,
regardless of the initial distribution of residents of $C$,
the proportion of urban and suburban residents will eventually converge to $30%$ and $70%$, respectively.
Similarly, the columns of $M'^m$ both being $vec(.444, .556)$ implies that the population of $C'$ will be $44.4%$ urban and $55.6%$ suburban
given enough time.
The size of the long-term effect was surprising to me,
although in retrospect it makes sense that a big proportional difference in small numbers would have a big effect.
= 17.1
$
f(x, y) &= vec(sin(x), e^(y x^2)) \
D(f)(x, y) &= mat(
cos(x), 0;
2 x y e^(y x^2), x^2 e^(y x^2);
) \
\
g(r, s, t) &= r s t \
D(g)(r, s, t) &= mat(s t, r t, r s) \
\
h(v) &= vec(v + sin(v), v + cos(v)) \
D(h)(v) &= mat(1 + cos(v); 1 - sin(v)) \
$
== a
$
D(h compose g)(r, s, t)
&= D(h)(g(r, s, t)) D(g)(r, s, t) \
&= mat(1 + cos(r s t); 1 - sin(r s t)) mat(s t, r t, r s) \
&= mat(
s t (1 + cos(r s t)), r t (1 + cos(r s t)), r s (1 + cos(r s t));
s t (1 - sin(r s t)), r t (1 - sin(r s t)), r s (1 - sin(r s t));
)
$
== b
$
D(f compose h)(vname(0))
&= D(f)(h(vname(0))) D(h)(vname(0)) \
&= D(f)(vec(0, 1)) mat(2; 1) \
&= mat(
1, 0;
0, 0;
) mat(2; 1) \
&= mat(2; 0)
$
== c
First by computing $g(f(x, y), x)$:
$
g(f(x, y), x)
&= g(sin(x), e^(y x^2), x) \
&= x e^(y x^2) sin(x) \
diff / (diff x) g(f(x, y), x)
&= diff / (diff x) x e^(y x^2) sin(x) \
&= e^(y x^2) diff / (diff x) x sin(x) + x sin(x) diff / (diff x) e^(y x^2) \
&= e^(y x^2) (x cos(x) + sin(x)) + x sin(x) (2 y x e^(y x^2)) \
&= x e^(y x^2) cos(x) + e^(y x^2) sin(x) + 2 y x^2 e^(y x^2) sin(x) \
$
Then by using the chain rule:
Let $F : RR^2 -> RR^3$ be defined as $ F(x, y) = vec(f_1(x, y), f_2(x, y), x) = vec(sin(x), e^(y x^2), x) $.
Then
$
g(f(x, y), x)
&= (g compose F)(x, y) \
diff / (diff x) g(f(x, y), x)
&= diff / (diff x) (g compose F)(x, y) \
&= D(g compose F)(x, y) vname(e)_1 \
&= D(g)(F(x, y)) D(F)(x, y) vname(e)_1 \
&= D(g)(vec(sin(x), e^(y x^2), x)) mat(
cos(x), 0;
2 x y e^(y x^2), x^2 e^(y x^2);
1, 0;
) vname(e)_1 \
&= mat(
x e^(y x^2),
x sin(x),
e^(y x^2) sin(x),
) mat(
cos(x), 0;
2 x y e^(y x^2), x^2 e^(y x^2);
1, 0;
) vname(e)_1 \
&= mat(
x e^(y x^2),
x sin(x),
e^(y x^2) sin(x),
) mat(
cos(x);
2 x y e^(y x^2);
1;
) \
&= x e^(y x^2) cos(x) + 2 x^2 y e^(y x^2) sin(x) + e^(y x^2) sin(x)
$
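The agreement of the two computations can also be confirmed symbolically (a SymPy sketch):
```python
import sympy as sp

x, y = sp.symbols('x y')
g_of_f = x * sp.exp(y * x**2) * sp.sin(x)   # g(f(x, y), x)

direct = sp.diff(g_of_f, x)
chain  = (x * sp.exp(y*x**2) * sp.cos(x)
          + 2*x**2*y * sp.exp(y*x**2) * sp.sin(x)
          + sp.exp(y*x**2) * sp.sin(x))
print(sp.simplify(direct - chain))  # 0
```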
= 17.4
== a
Formula (17.1.6) states that
$ (diff f) / (diff x_i) = sum_(k=1)^p (diff f) / (diff y_k) (diff y_k) / (diff x_i) $
Applying this to the given variables, we have
$
(diff f) / (diff r)
&= (diff f) / (diff x) (diff x) / (diff r) + (diff f) / (diff y) (diff y) / (diff r) \
&= (diff f) / (diff x) (2 r) + (diff f) / (diff y) s \
$
== b
$
(diff f) / (diff v)
&= (diff f) / (diff x) (diff x) / (diff v) + (diff f) / (diff y) (diff y) / (diff v) \
&=
(diff f) / (diff x) (
(diff x) / (diff r) (diff r) / (diff v)
+ (diff x) / (diff s) (diff s) / (diff v)
)
+ (diff f) / (diff y) (
(diff y) / (diff r) (diff r) / (diff v)
+ (diff y) / (diff s) (diff s) / (diff v)
) \
&= (diff f) / (diff x) (2 r + 4 s v)
+ (diff f) / (diff y) (s + 2 r v)
$
== c
$
(diff f) / (diff t)
&= (diff f) / (diff x) (diff x) / (diff t) + (diff f) / (diff y) (diff y) / (diff t) \
&= (diff f) / (diff x) (
(diff x) / (diff r) (diff r) / (diff t)
+ (diff x) / (diff s) (diff s) / (diff t)
)
+ (diff f) / (diff y) (
(diff y) / (diff r) (diff r) / (diff t)
+ (diff y) / (diff s) (diff s) / (diff t)
) \
\
(diff r) / (diff t)
&= (diff r) / (diff v) (diff v) / (diff t)
+ (diff r) / (diff w) (diff w) / (diff t)
= 1 dot 2 t + 1 dot 0
= 2 t \
(diff s) / (diff t)
&= (diff s) / (diff v) (diff v) / (diff t)
+ (diff s) / (diff w) (diff w) / (diff t)
= 2 v dot 2 t + 0 dot 0
= 4 v t \
\
(diff f) / (diff t)
&= (diff f) / (diff x) (4 r t + 8 s v t)
+ (diff f) / (diff y) (2 t s + 4 r v t)
$
= 17.5
== a
$
0 = dif / (dif x) F(x, y(x))
&= D(F compose h)(x) \
&= D(F)(x, y) D(h)(x) \
&= mat(F_x (x, y), F_y (x, y)) mat(1; (dif y) / (dif x)) \
&= F_x (x, y) + (dif y) / (dif x) F_y (x, y) \
(dif y) / (dif x) F_y (x, y) &= - F_x (x, y) \
(dif y) / (dif x)
&= - (F_x (x, y)) / (F_y (x, y))
= - (diff F slash diff x) / (diff F slash diff y)
$
This also follows from the fact that the tangent line is orthogonal to the line with direction $nabla F$,
and the slope of a perpendicular line is the negative reciprocal of the slope of the original line.
== b
$
F(x, y) &= 2 x^3 y - y^5 x \
F_x (x, y) &= 6 x^2 y - y^5 \
F_y (x, y) &= 2 x^3 - 5 y^4 x \
\
(dif y) / (dif x)
&= - (F_x (1, 1)) / (F_y (1, 1)) \
&= - (6 - 1) / (2 - 5) \
&= 5 / 3 \
$
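The same value falls out of a short symbolic computation (a SymPy sketch of the formula from part (a)):
```python
import sympy as sp

x, y = sp.symbols('x y')
F = 2*x**3*y - y**5*x

dydx = -sp.diff(F, x) / sp.diff(F, y)
print(dydx.subs({x: 1, y: 1}))  # 5/3
```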
= 17.6
== a
Let $g_z (x, y) = vec(x, y, z(x, y))$, so that $f(x, y, z(x, y)) = (f compose g_z)(x, y)$.
Similarly, let
$g_x (y, z) = vec(x(y, z), y, z)$
and $g_y (x, z) = vec(x, y(x, z), z)$.
Then
$
D(f compose g_z)(x, y)
&= D(f)(g_z (x, y)) D(g_z)(x, y) \
0
&= mat(
(diff f) / (diff x),
(diff f) / (diff y),
(diff f) / (diff z),
) mat(
1, 0;
0, 1;
(diff z) / (diff x), (diff z) / (diff y);
) \
0
&= mat(
(diff f) / (diff x),
(diff f) / (diff y),
(diff f) / (diff z),
) vec(0, 1, (diff z) / (diff y)) \
&= (diff f) / (diff y) + (diff f) / (diff z) (diff z) / (diff y) \
(diff f) / (diff z) (diff z) / (diff y) &= - (diff f) / (diff y) \
(diff z) / (diff y) &= - (diff f slash diff y) / (diff f slash diff z)
$
$
D(f compose g_x)(y, z)
&= D(f)(g_x (y, z)) D(g_x)(y, z) \
0
&= mat(
(diff f) / (diff x),
(diff f) / (diff y),
(diff f) / (diff z),
) mat(
(diff x) / (diff y), (diff x) / (diff z);
1, 0;
0, 1;
) \
0
&= mat(
(diff f) / (diff x),
(diff f) / (diff y),
(diff f) / (diff z),
) vec((diff x) / (diff z), 0, 1) \
&= (diff x) / (diff z) (diff f) / (diff x) + (diff f) / (diff z) \
(diff x) / (diff z) (diff f) / (diff x) &= - (diff f) / (diff z) \
(diff x) / (diff z) &= - (diff f slash diff z) / (diff f slash diff x)
$
$
D(f compose g_y)(x, z)
&= D(f)(g_y (x, z)) D(g_y)(x, z) \
0
&= mat(
(diff f) / (diff x),
(diff f) / (diff y),
(diff f) / (diff z),
) mat(
1, 0;
(diff y) / (diff x), (diff y) / (diff z);
0, 1;
) \
0
&= mat(
(diff f) / (diff x),
(diff f) / (diff y),
(diff f) / (diff z),
) vec(1, (diff y) / (diff x), 0) \
&= (diff f) / (diff x) + (diff f) / (diff y) (diff y) / (diff x) \
(diff f) / (diff y) (diff y) / (diff x) &= - (diff f) / (diff x) \
(diff y) / (diff x) &= - (diff f slash diff x) / (diff f slash diff y)
$
=== $(diff x) / (diff y)$ and $(diff y) / (diff x)$
We know from the third computation that:
$
(diff y) / (diff x)
= - (diff f slash diff x) / (diff f slash diff y)
$
We also know from the second computation that:
$
0
&= mat(
(diff f) / (diff x),
(diff f) / (diff y),
(diff f) / (diff z),
) mat(
(diff x) / (diff y), (diff x) / (diff z);
1, 0;
0, 1;
) \
$
So,
$
0
&= mat(
(diff f) / (diff x),
(diff f) / (diff y),
(diff f) / (diff z),
) vec((diff x) / (diff y), 1, 0) \
&= (diff x) / (diff y) (diff f) / (diff x) + (diff f) / (diff y) \
(diff x) / (diff y) (diff f) / (diff x) &= - (diff f) / (diff y) \
(diff x) / (diff y) &= - (diff f slash diff y) / (diff f slash diff x)
$
And these two fractions are reciprocal:
$
- (diff f slash diff x) / (diff f slash diff y)
&= (- (diff f slash diff y) / (diff f slash diff x))^(-1) \
(diff y) / (diff x)
&= ((diff x) / (diff y))^(-1)
$
== b
We substitute in the computed formulae and cancel:
$
(diff z) / (diff y)
(diff y) / (diff x)
(diff x) / (diff z)
&=
(- (diff f slash diff y) / (diff f slash diff z))
(- (diff f slash diff x) / (diff f slash diff y))
(- (diff f slash diff z) / (diff f slash diff x)) \
&=
(-1)^3
dot (diff f slash diff y) / (diff f slash diff z)
dot (diff f slash diff x) / (diff f slash diff y)
dot (diff f slash diff z) / (diff f slash diff x) \
&=
-1
dot cancel(diff f slash diff y) / cancel(diff f slash diff z)
dot cancel(diff f slash diff x) / cancel(diff f slash diff y)
dot cancel(diff f slash diff z) / cancel(diff f slash diff x) \
&= -1
$
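A concrete instance makes the identity less mysterious. Taking the (arbitrarily chosen) plane $f(x, y, z) = x + 2y + 3z = 0$, each variable is a linear function of the other two, and the three partials multiply to $-1$:
```python
# For f(x, y, z) = x + 2y + 3z = 0 (an arbitrary example):
#   z(x, y) = -(x + 2y)/3  =>  dz/dy = -2/3
#   y(x, z) = -(x + 3z)/2  =>  dy/dx = -1/2
#   x(y, z) = -(2y + 3z)   =>  dx/dz = -3
print((-2/3) * (-1/2) * (-3))  # -1.0
```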
== c
$
T &= (P V) / c \
(diff T) / (diff V) &= P/c \
\
V &= (c T) / P \
(diff V) / (diff P) &= - (c T) / P^2 \
\
P &= (c T) / V \
(diff P) / (diff T) &= c / V \
\
(diff T) / (diff V)
(diff V) / (diff P)
(diff P) / (diff T)
&=
(cancel(P)/cancel(c))
(- (cancel(c) T) / (cancel(P^2)))
(c / V) \
&=
- (T c) / (P V) \
&=
- (T c) / (T c) \
&= -1
$
= 17.7
Let $p : RR^(m + m) -> RR$
and $q : RR^n -> RR^(m + m)$
be defined as
$
p(vname(a), vname(b)) &= vname(a) dot vname(b) \
q(vname(x)) &= vec(f(vname(x)), g(vname(x))) \
$
(I'm being a bit sloppy with notation. To be clear, the $n$- and $m$-vectors are being _concatenated_ rather than structurally composed into a 2-vector.)
Then the derivative matrices of $p$ and $q$ are
$
D(p)(a_1, a_2, ..., a_m, b_1, b_2, ..., b_m) &= mat(
b_1, b_2, ..., b_m,
a_1, a_2, ..., a_m;
) \
D(q)(vname(x)) &= mat(
D(f)(vname(x));
D(g)(vname(x));
)
$
Then we can find the derivative matrix for $h = p compose q$ using the Chain Rule:
$
D(h)(vname(x))
&= D(p)(q(vname(x))) D(q)(vname(x)) \
&= mat(
g_1(vname(x)), g_2(vname(x)), ..., g_m(vname(x)),
f_1(vname(x)), f_2(vname(x)), ..., f_m(vname(x)),
) mat(
D(f)(vname(x));
D(g)(vname(x));
) \
&= mat(
g_1(vname(x)), g_2(vname(x)), ..., g_m(vname(x)),
) D(f)(vname(x))
+ mat(
f_1(vname(x)), f_2(vname(x)), ..., f_m(vname(x)),
) D(g)(vname(x)) \
&= g(vname(x))^T D(f)(vname(x)) + f(vname(x))^T D(g)(vname(x)) \
$
= 17.8
== a
Let $g(t) = vname(a) + t vname(v)$, so that $(D g)(0) = vname(v)$. Then
$
(D_vname(v) f)(vname(a))
&= lr(dif / (dif t) bar)_(t=0) f(vname(a) + t vname(v)) \
&= lr(dif / (dif t) bar)_(t=0) (f compose g)(t) \
&= (D f)(g(0)) (D g)(0) \
&= (D f)(vname(a) + 0 vname(v)) (D g)(0) \
&= (D f)(vname(a)) vname(v) \
$
== b
Let $vname(c) = vec(1 slash sqrt(2), 1 slash sqrt(2))$ be the unit northeastern direction vector.
The slope is $(D_vname(c) f)(vname(a))$, which we can compute using the previous result:
$
(D_vname(c) f)(vname(a))
&= (D f)(vname(a)) vname(c) \
&= (D f)(vec(1, 2)) vname(c) \
&= mat(
2 pi cos(2 pi),
pi cos(2 pi)
) vec(1 slash sqrt(2), 1 slash sqrt(2)) \
&= mat(
2 pi, pi
) vec(1 slash sqrt(2), 1 slash sqrt(2)) \
&= (2 pi) / sqrt(2) + pi / sqrt(2) \
&= (3 pi) / sqrt(2)
$
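As a sanity check, the dot product of the gradient from the text with the unit vector gives the same number (a NumPy sketch):
```python
import numpy as np

grad = np.array([2*np.pi, np.pi])        # (D f)(1, 2), from above
c = np.array([1.0, 1.0]) / np.sqrt(2)    # unit northeastern direction

print(grad @ c)              # 6.664...
print(3*np.pi / np.sqrt(2))  # same value
```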
= 17.12
== a
Let $c : RR^2 -> RR^2$ be the function which takes vectors from polar coordinates to Cartesian coordinates:
$
c(r, theta) &= vec(r cos theta, r sin theta) \
(D c)(r, theta) &= mat(
cos theta, -r sin theta;
sin theta, r cos theta;
)
$
Then, we can use the Chain Rule to compute
the derivative matrix of $h = f compose c$:
$
(D h)(r, theta)
&= (D f)(c(r, theta)) (D c)(r, theta) \
&= mat(
(diff f) / (diff x), (diff f) / (diff y);
) mat(
cos theta, -r sin theta;
sin theta, r cos theta;
) \
mat(
(diff h) / (diff r), (diff h) / (diff theta);
)
&= mat(
cos theta (diff f) / (diff x) + sin theta (diff f) / (diff y),
-r sin theta (diff f) / (diff x) + r cos theta (diff f) / (diff y);
)
$
And separating the columns on both sides gives
$
(diff h) / (diff r) = cos theta (diff f) / (diff x) + sin theta (diff f) / (diff y)
, #h(0.25in)
(diff h) / (diff theta) = -r sin theta (diff f) / (diff x) + r cos theta (diff f) / (diff y)
$
== b
$
1 &= sin^2 theta + cos^2 theta \
(diff f) / (diff y) &= sin^2 theta (diff f) / (diff y) + cos^2 theta (diff f) / (diff y) \
&= sin theta cos theta (diff f) / (diff x)
+ sin^2 theta (diff f) / (diff y)
+ cos^2 theta (diff f) / (diff y)
- sin theta cos theta (diff f) / (diff x) \
&= sin theta (cos theta (diff f) / (diff x) + sin theta (diff f) / (diff y))
+ 1 / r cos theta (r cos theta (diff f) / (diff y) - r sin theta (diff f) / (diff x)) \
&= sin theta (diff h) / (diff r)
+ 1 / r cos theta (diff h) / (diff theta) \
$
So one such pair of functions $g_1, g_2 : RR^2 -> RR$ is
$
g_1 (r, theta) = sin theta
, #h(0.25in)
g_2 (r, theta) = 1 / r cos theta
$
https://github.com/goshakowska/Typstdiff | https://raw.githubusercontent.com/goshakowska/Typstdiff/main/assets/example2/new.typ | typst | = Introduction
Authors using Microsoft Word will first need to save the AIAA Meeting PapersTemplate.dotx file in the “Templates” directory of their hard drive. To do so, simply open the AIAA Meeting Papers Template.dotx file and then click
= Some other headline
All manuscripts are to be submitted electronically to the ScholarOne Abstracts site created for each conference. The manuscript upload will be enabled several weeks after acceptance notices have been sent. Presenting authors of accepted papers will receive an email with instructions when manuscript submission opens. It is *important* that presenting authors keep their email addresses up-to-date so they do not miss this notice.
= General Guidelines
The following section outlines general (nonformatting) guidelines to follow. These guidelines are applicable to all authors (except as noted), and include information on the policies and practices relevant to the publication of your CHANGED.
== Publication by AIAA
Your manuscript cannot be published by AIAA if:
+ It has been published previously or
+ Changed for something new
== Paper Review and Visa Considerations
It is the responsibility of the author to obtain any required government or company reviews for their papers in advance of publication. Start early to determine if the reviews are required; this process can take several weeks.
If you plan to attend an AIAA Forum, technical conference or professional development course held in the United States and you require a visa for travel, it is incumbent upon you to apply for a visa with the U.S. embassy (consular division) or consulate with ample time for processing.
== Control ID Number vs Paper
Your paper was assigned a control ID number at the time you submitted your abstract. It is critical that you reference the tracking number and conference name when contacting AIAA regarding your submission. The control ID number is not the final AIAA paper number. The paper number, which appears in the format AIAA-20XX-XXXX, will be used to refer to your paper in the program and in any publication format. It will not be assigned until shortly before the conference. *Do not include a paper number anywhere on your paper, as this number will be stamped automatically in the top right corner of your paper at the time of processing.*
== Header
New paragraph
- something
- new
== Copyright
Before AIAA can print or publish any paper, the copyright information must be completed in the submission system. Failure to complete the electronic form correctly could result in your paper not being published. The following fields must be completed:
+ Clearance Statement
+ Non-Infringement Statement
+ Publication Status Statement
https://github.com/ern1/typiskt | https://raw.githubusercontent.com/ern1/typiskt/main/templates/resume.typ | typst | #import "colors.typ": *
#let theme = (
/*bg-page: white,
bg-sidebar: gradient.linear(
(md.grey-400, 0%),
(md.grey-400, 65%),
(md.grey-100, 100%),
angle: 30deg,
),*/
bg-page: md.grey-200.lighten(15%),
bg-sidebar: white,
primary: md.grey-800,
primaryLight: md.grey-700,
primaryDark: md.grey-900,
accentPrimary: oggole.blue,
accentSecondary: oggole.green,
)
// Used to collect sidebar articles.
#let articles = state("articles", ())
#let resume(
// Name and descriptive title of ?
firstName: none, //"<NAME>",
lastName: none, //"'nestake",
description: none,
profile-image: none, // path to profile image
body // main content
) = {
// Set document metadata.
set document(
title: "resume",
author: firstName + " " + lastName,
keywords: ("resume, cv, work, developer")
)
// Configure pages. The background parameter is used to
// add the right background to the pages.
set page(
paper: "a4",
margin: (left: 16pt, right: 32pt, top: 24pt, bottom: 25pt),
fill: theme.bg-page,
)
place(left + top,
rect(
fill: theme.bg-sidebar,
//stroke: 1pt,
radius: (left: 15pt, right: 15pt),
height: 100%,
width: 30%,
//outset: (y: 1.2cm),
),
)
// Set the body font.
set text(11pt, font: "Source Sans Pro")
// Configure headings.
show heading: set text(font: "Source Sans Pro")
show heading.where(level: 1): set text(1.1em)
show heading.where(level: 1): set par(leading: 0.4em)
show heading.where(level: 1): set block(below: 0.6em)
show heading: it => {
set text(weight: 600) if it.level > 2
it
}
show link: it => {
set text(weight: "regular", fill: md.blue-grey-700)
it
}
// The document is split in two with a grid. A left column for the main
// flow and a right column for the sidebar articles.
grid(
columns: (30%, auto),
column-gutter: 16pt,
row-gutter: 8pt,
circle(stroke: none, width: 100%, inset: -0pt)[
#set align(center)
#image("../" + profile-image, width: 100%)
],
// Name and description of what it is i am
{
set align(left + horizon)
set text(font: "Roboto", size: 32pt, theme.primaryLight, weight: "light") // firstname
stack(spacing: 10pt,
"", // Todo: remove if not needed
[#firstName #h(2pt) #text(theme.primary, weight: "bold")[#lastName]],
[#h(4pt) #text(size: 16pt)[#description]]
)
},
// The sidebar with articles.
locate(loc => {
set align(left)
//set text(fill: white, weight: "regular")
//v(44pt)
show heading.where(level: 1): set text(size: 1em-1pt, weight: "bold", fill: theme.primaryDark)
set text(weight: "regular", fill: theme.primary)
show terms: set text(weight: "regular", fill: theme.primaryLight)
for element in articles.final(loc) {
//move(dx: 0.4cm, element)
pad(left: 1em, right: 1.1em, element)
v(24pt, weak: true)
}
}),
// The main flow with body and publication info.
{
show heading.where(level: 1): set text(weight: "bold", fill: theme.primaryDark)
//show heading.where(level: 2): set text(size: 10pt, weight: "bold", fill: black)
set par(justify: true)
body
},
)}
#let hBar() = [
#h(2pt) | #h(2pt)
]
#let mainEntry(title: none, society: none, date: none, location: none, description: none) = {
// Add header with name and other info about entry
stack(dir: ltr,
align(left, stack(spacing: 6pt,
text(size: 1em-1pt, weight: "bold", fill: theme.primary, title),
text(size: 1em-2pt, weight: "regular", fill: theme.primary, society)
)),
align(right, stack(spacing: 6pt,
text(size: 1em-3pt, weight: "regular", fill: theme.primary, date),
text(size: 1em-3pt, weight: "regular", fill: theme.primary, location) // 5D5D5D
)),
)
text(size: 1em-2pt, weight: "regular", fill: theme.primaryLight, description)
}
// An article that is displayed in the sidebar. Can be added
// anywhere in the document. All articles are collected automatically.
#let article(body) = articles.update(it => it + (body,))
https://github.com/typst/packages | https://raw.githubusercontent.com/typst/packages/main/packages/preview/ascii-ipa/1.1.0/CHANGELOG.md | markdown | Apache License 2.0 | # Changelog of `ascii-ipa`
follows [semantic versionin][semver]
## 1.1.0
- Translations will now return a [`string`][typst-string] if the font is not overridden
- The library now explicitly exposes functions via a "gateway" entrypoint
- Update internal project structure
- Update package metadata
- Update documentation
## 1.0.0
- Initial release
[semver]: https://semver.org
[typst-string]: https://typst.app/docs/reference/foundations/str/
|
https://github.com/flaribbit/typst-tetris | https://raw.githubusercontent.com/flaribbit/typst-tetris/main/README.md | markdown | # typst-tetris
Yes, you can play tetris in typst!
[typst-preview](https://marketplace.visualstudio.com/items?itemName=mgt19937.typst-preview) vscode extension or [web app](https://typst.app/) recommended for best experience.

Keys:
- <kbd>a</kbd> : move left
- <kbd>d</kbd> : move right
- <kbd>s</kbd> : move down
- <kbd>w</kbd> : drop
- <kbd>j</kbd> : rotate left
- <kbd>k</kbd> : rotate right
|
https://github.com/gywangMtl/typst_vertical | https://raw.githubusercontent.com/gywangMtl/typst_vertical/main/loushiming.typ | typst | MIT License | #set text (
dir:rtl,
font: "SimSun",
size:25pt,
lang: "zh"
)
#let char_per_line = 25 *3 //3 bytes per char
#let vertical_text(content) = {
let a = ()
let l = 0
while l*char_per_line < content.len() {
let cn = char_per_line
if ((l*char_per_line + cn) >content.len()) {
cn = content.len() -l*char_per_line
}
a.push(content.slice(l*char_per_line, count: cn))
l+=1
}
let c = ()
for l in a {
let b = l
c.push(stack(dir:ttb,
spacing: 7pt,
..b.clusters()
))
}
stack(dir:rtl,
spacing: 2pt,
..c
)
}
#show: p=> {
let text_list = ()
if (p.has("children")) {
text_list = p.children
} else {
text_list.push(p)
}
for c in text_list {
if c.has("text") {
vertical_text(c.text)
} else {
if c.has("body") {
vertical_text(c.body.text)
pagebreak()
} else {
pagebreak()
}
}
}
}
山不在高有仙則名水不在深有龍則靈斯是陋室惟吾德馨苔痕上階綠草色入簾青談笑有鴻儒往來無白丁可以調素琴閱金經無絲竹之亂耳無案牘之勞形南陽諸葛廬西蜀子雲亭孔子云何陋之有
|
https://github.com/PA055/5839B-Notebook | https://raw.githubusercontent.com/PA055/5839B-Notebook/main/frontmatter.typ | typst | #import "/utils.typ"
#import "/packages.typ": notebookinator
#import notebookinator: *
#import themes.radial.components: *
#create-frontmatter-entry(title: "Our Team")[
]
#create-frontmatter-entry(title: "Table of Contents")[
#toc()
] |
|
https://github.com/wumin199/wumin199.github.io | https://raw.githubusercontent.com/wumin199/wumin199.github.io/main/source/_posts/2023/target-2023.md | markdown | ---
title: TARGET 2024
date: 2023-03-10 13:39:27
toc: true
password: <PASSWORD>
comment: false
widgets:
- type: toc
position: right
index: true
collapsed: false
depth: 3
---
Life is a self-fulfilling prophecy: if you don't know where you want to go, you can't even take the first step.
<!-- more -->
# 2023
## Career
### Career planning
- [x] Teacher Pan's career fundamentals course
- [x] Teacher Pan's new-manager essentials course
- [ ] Do breakthrough exercises based on the two courses above
- [x] Research and settle on a knowledge-management tool
  - GitPage + Baidu Netdisk + Notion
- [ ] Explore a second career curve
  - [ ] Indie development around a Raspberry Pi (e.g., alarm button for the elderly / English / ChatGPT server / robot learning / math learning / CI/CD / Trello)
  - [ ] [Peanut Shell (花生壳)](https://hsk.oray.com/)
  - [ ] [Trello project management](https://trello.com/zh-Hans) (ruled out)
---
### Professional skills
- [ ] Maintain a Docker environment
  - [x] Maintain a docker image
  - [x] GitPage
  - [ ] test_cpp_template/test_python_template
  - [ ] CLion/VSCode code indexing, debugging in Linux/Windows
  - [ ] CI/CD
  - [ ] k8s
- [ ] Programming
  - [ ] Read the python snap7 source
    - Learn how it wraps C++
  - [ ] Data structures and algorithms in C++
- [ ] Kinematics
  - [ ] wm-world
  - [ ] wm-kdl
  - [ ] ruckig
  - [ ] topp-ra
  - [ ] modern-robotics
  - [ ] robotics-toolbox
- [x] Math
  - [x] Power-of-Matrix
- [x] Productivity
  - [x] ChatGPT
  - [x] ChatGPT Prompt
  - [x] copilot/cursor
  - [x] Maintain a tools page
  - [x] Maintain math notes with Typst
  - [ ] Math tools
    - [ ] GeoGebra
    - [ ] Matlab/simulink/matrix/robotics
---
## Life
- [ ] Mental-health management
  - [ ] Read a classic work of life philosophy
  - [ ] Luo Xiang on the philosophy of life
  - [ ] Harvard's happiness course
- [ ] Physical-health management
  - [ ] Overcome the Advocate type's tendency to "overdraw" itself; make a feasible plan and execute it
  - [ ] Find a sport that suits my condition (e.g., my knees)
---
# 2024
## Career
- [ ] [EECS127](https://eecs127.github.io/) (comprehensive course: linear algebra + machine learning + optimization; homework solutions are available online)
- [ ] [16-745: Optimal Control & Reinforcement Learning](https://optimalcontrol.ri.cmu.edu/) (Julia programming + basic introduction to numerical optimization (merit functions) + control + DP + convex sets and convex optimization)
  - [Course introduction on Zhihu](https://zhuanlan.zhihu.com/p/544119652) (the course mainly covers control based on numerical methods, i.e., turning a control problem into a numerical optimization problem.)
- [ ] wm-world
- [ ] wm-ruckig
- [ ] wm-kinco-test
- [ ] Maintain the English WeChat official account
## Family
- [ ] Prepare for pregnancy
- [ ] Pregnancy
## Health
- [ ] Eat light; cut out spicy food
- [ ] Intermittent fasting
|
|
https://github.com/FrightenedFoxCN/cetz-cd | https://raw.githubusercontent.com/FrightenedFoxCN/cetz-cd/main/tests/render.typ | typst | #import "../src/cetz-cd.typ": *
// #cetz-cd(```
// $A$ ar[r, $g$] ar[rd, $f$, swapped] & $B$ ar[d] ;
// & $D$
// ```)
#cetz-cd(```
$EE$ ar[r, $tau_i$] & $EE_i$ ar[r] & $FF^"alg"$;
$FF[alpha]$ ar[r, $sigma_i$] ar[u] & $FF[sigma_i alpha]$ ar[u] ar[r] & $FF[sigma_i alpha]$ ar[u] ;
$FF$ ar[u] ar[r] & $FF$ ar[u] & $$;
```)
#cetz-cd(```
$EE$ & & $EE'$;
& $FF$ ar[lu] ar[ru] &
```, width: 1, height: 1)
#cetz-cd(```
$R$ & ;
& $S(R, A)$ ar[lu] ;
$A$ ar[ru] ar[uu] &
```, height: 1)
#cetz-cd(```
$ZZ$ ar[r] ar[d] & $QQ$ ar[d];
$?$ ar[r] & $FF$
```, width: 1, height: 1) |
|
https://github.com/MLAkainu/Network-Comuter-Report | https://raw.githubusercontent.com/MLAkainu/Network-Comuter-Report/main/contents/04_socket/index.typ | typst | Apache License 2.0 | = Socket programming with Python
A socket is the gateway between the Application Layer and the Transport Layer.
#figure(caption: [The socket model],
image("../../components/assets/socket.png"))
Python's socket library provides the following APIs for establishing TCP/UDP socket connections:
#block()[
- socket()
- bind()
- listen()
- accept()
- connect()
- connect_ex()
- send()
- recv()
- close()
]
Python's socket library supports both TCP and UDP sockets. In this project, the team plans to use TCP sockets for the implementation.\
The flow of a TCP socket connection is illustrated in the figure below:
#figure(caption: [The lifecycle of a socket connection for sending and receiving data (Source: ResearchGate)],
image("../../components/assets/connsoc.png"))
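A minimal sketch of this flow with Python's socket library (the address, port, and messages here are placeholders, not part of the project's protocol):
```python
import socket

HOST, PORT = "127.0.0.1", 5000  # placeholder address for this sketch

def run_server():
    # socket() -> bind() -> listen() -> accept() -> recv()/send() -> close()
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.bind((HOST, PORT))
        srv.listen()
        conn, addr = srv.accept()
        with conn:
            data = conn.recv(1024)
            conn.send(data)  # echo the message back

def run_client():
    # socket() -> connect() -> send()/recv() -> close()
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
        cli.connect((HOST, PORT))
        cli.send(b"ping")
        print(cli.recv(1024))  # b'ping'
```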
#pagebreak()
= Application design
== Architecture design
#figure(caption: [Overall architecture of the system],
image("../../components/assets/design.png"))
#pagebreak()
== Flow protocol design
=== Register
#figure(caption: [Flow diagram of the register function],
image("../../components/assets/register.png"))
=== Login
#figure(caption: [Flow diagram of the login function],
image("../../components/assets/login.png"))
=== Ping
#figure(caption: [Flow diagram of the ping function],
image("../../components/assets/ping.png"))
=== Discover
#figure(caption: [Flow diagram of the discover function],
image("../../components/assets/discover.png"))
#pagebreak()
=== Publish
#figure(caption: [Flow diagram of the publish function],
image("../../components/assets/publish.png"))
=== Fetch
#figure(caption: [Flow diagram of the fetch function],
image("../../components/assets/fetch.png"))
#pagebreak()
== Activity diagram
=== Register
#figure(caption: [Activity diagram of the register function],
image("../../components/assets/ActivityRe.png"))
=== Login
#figure(caption: [Activity diagram of the login function],
image("../../components/assets/Activity_Lo.png"))
=== Ping
#figure(caption: [Activity diagram of the ping function],
image("../../components/assets/Activity_ping.png"))
#pagebreak()
=== Discover
#figure(caption: [Activity diagram of the discover function],
image("../../components/assets/ActivityDis.png"))
#pagebreak()
=== Publish
#figure(caption: [Activity diagram of the publish function],
image("../../components/assets/ActivityPub.png"))
=== Fetch
#figure(caption: [Activity diagram of the fetch function],
image("../../components/assets/Activity_fe.png"))
#pagebreak()
== Status Code
In addition, the team defines several status codes to support the exchange of information between the server and clients, and vice versa.
#figure(caption: [The defined status codes],
table(
columns: (1fr, 1fr),
align: horizon, [*Status Code*],[*Description*],
"200", "Đăng nhập thành công",
"201", "Đăng ký thành công",
"202", "Publish thành công",
"203", "Server muốn ping client",
"204", "Fetch thành công",
"205", "Download file thành công",
"400", "Yêu cầu không hợp lệ",
"401", "Không tìm thấy tài khoản",
"402", "Sai mật khẩu",
"403", "Tài khoản đã đăng nhập",
"404", "Tài khoản đã tồn tại",
"405", "Clietn thoát khỏi server",
"406", "File không tồn tại"
))
#figure(caption: [Status-code exchange between the client and the server],
image("../../components/assets/exchange_code.png"))
#pagebreak()
|
https://github.com/typst/packages | https://raw.githubusercontent.com/typst/packages/main/packages/preview/fletcher/0.4.0/src/draw.typ | typst | Apache License 2.0 | #import "@preview/cetz:0.1.2" as cetz: vector
#import "utils.typ": *
#import "marks.typ": *
/// Get the point at which a connector should attach to a node from a given
/// angle, taking into account the node's size and shape.
///
/// - node (dictionary): The node to connect to.
/// - θ (angle): The desired angle from the node's center to the connection
/// point.
/// -> point
#let get-node-anchor(node, θ) = {
if node.radius < 1e-3pt { return node.real-pos }
if node.shape == "circle" {
vector.add(
node.real-pos,
vector-polar(node.radius + node.outset, θ),
)
} else if node.shape == "rect" {
let origin = node.real-pos
let μ = calc.pow(node.aspect, node.defocus)
let origin-δ = (
calc.max(0pt, node.size.at(0)/2*(1 - 1/μ))*calc.cos(θ),
calc.max(0pt, node.size.at(1)/2*(1 - μ/1))*calc.sin(θ),
)
let crossing-line = (
vector.add(origin, origin-δ),
vector.add(origin, vector-polar(1e3*node.radius, θ)),
)
intersect-rect-with-crossing-line(node.outer-rect, crossing-line)
} else { panic(node.shape) }
}
/// Get the points where a connector between two nodes should be drawn between,
/// taking into account the nodes' sizes and relative positions.
///
/// - edge (dictionary): The connector whose end points should be determined.
/// - nodes (pair of dictionaries): The start and end nodes of the connector.
/// -> pair of points
#let get-edge-anchors(edge, nodes) = {
assert(nodes.len() == 2)
let center-center-line = nodes.map(node => node.real-pos)
let v = vector.sub(..center-center-line)
let θ = vector-angle(v) // approximate angle of connector
if edge.kind in ("line", "arc") {
let δ = edge.bend
let incident-angles = (θ + δ + 180deg, θ - δ)
let points = zip(nodes, incident-angles).map(((node, θ)) => {
get-node-anchor(node, θ)
})
return points
} else if edge.kind == "corner" {
zip(nodes, (θ + 180deg, θ)).map(((node, θ)) => {
get-node-anchor(node, θ)
})
}
}
#let draw-edge-label(edge, label-pos, options) = {
cetz.draw.content(
label-pos,
box(
fill: edge.crossing-fill,
inset: .2em,
radius: .2em,
stroke: if options.debug >= 2 { DEBUG_COLOR + 0.25pt },
[#edge.label],
),
anchor: if edge.label-anchor != auto { edge.label-anchor },
)
if options.debug >= 2 {
cetz.draw.circle(
label-pos,
radius: 0.75pt,
stroke: none,
fill: DEBUG_COLOR,
)
}
}
// Get the arrow head adjustment for a given extrusion distance
#let cap-offsets(edge, y) = {
(0, 1).map(pos => {
let mark = edge.marks.find(mark => mark.pos == pos)
if mark == none { return 0pt }
let x = cap-offset(mark, (2*pos - 1)*y/edge.stroke.thickness)
if pos == int(mark.rev) { x -= mark.at("inner-len", default: 0) }
if mark.rev { x = -x - mark.at("outer-len", default: 0) }
if pos == 0 { x += mark.at("outer-len", default: 0) }
x*edge.stroke.thickness
})
}
#let draw-edge-line(edge, nodes, options) = {
// Stroke end points, before adjusting for the arrow heads
let (from, to) = get-edge-anchors(edge, nodes)
let θ = vector-angle(vector.sub(to, from))
// Draw line(s), one for each extrusion shift
for shift in edge.extrude {
// let shifted-line-points = (from, to).zip((0pt, 0pt))
let shifted-line-points = (from, to).zip(cap-offsets(edge, shift))
.map(((point, offset)) => vector.add(
point,
vector.add(
// Shift end points lengthways depending on markers
vector-polar(offset, θ),
// Shift line sideways (for double line effects, etc)
vector-polar(shift, θ + 90deg),
)
))
cetz.draw.line(
..shifted-line-points,
stroke: edge.stroke,
)
}
// Draw marks
let curve(t) = vector.lerp(from, to, t)
for mark in edge.marks {
place-arrow-cap(curve, edge.stroke, mark, debug: options.debug >= 4)
}
// Draw label
if edge.label != none {
// Choose label anchor based on connector direction,
// preferring to place labels above the edge
if edge.label-side == auto {
edge.label-side = if calc.abs(θ) > 90deg { left } else { right }
}
let label-dir = if edge.label-side == right { +1 } else { -1 }
if edge.label-anchor == auto {
edge.label-anchor = angle-to-anchor(θ - label-dir*90deg)
}
edge.label-sep = to-abs-length(edge.label-sep, options.em-size)
let label-pos = vector.add(
vector.lerp(from, to, edge.label-pos),
vector-polar(edge.label-sep, θ + label-dir*90deg),
)
draw-edge-label(edge, label-pos, options)
}
}
#let draw-edge-arc(edge, nodes, options) = {
// Stroke end points, before adjusting for the arrow heads
let (from, to) = get-edge-anchors(edge, nodes)
let θ = vector-angle(vector.sub(to, from))
// Determine the arc from the stroke end points and bend angle
let (center, radius, start, stop) = get-arc-connecting-points(from, to, edge.bend)
let bend-dir = if edge.bend > 0deg { +1 } else { -1 }
// Draw arc(s), one for each extrusion shift
for shift in edge.extrude {
// Adjust arc angles to accomodate for cap offsets
let (δ-start, δ-stop) = cap-offsets(edge, shift)
.map(arclen => -bend-dir*arclen/radius*1rad)
cetz.draw.arc(
center,
radius: radius + shift,
start: start + δ-start,
stop: stop + δ-stop,
anchor: "center",
stroke: edge.stroke,
)
}
// Draw marks
let curve(t) = vector.add(center, vector-polar(radius, lerp(start, stop, t)))
for mark in edge.marks {
place-arrow-cap(curve, edge.stroke, mark, debug: options.debug >= 4)
}
// Draw label
if edge.label != none {
if edge.label-side == auto {
edge.label-side = if edge.bend > 0deg { left } else { right }
}
let label-dir = if edge.label-side == left { +1 } else { -1 }
if edge.label-anchor == auto {
// Choose label anchor based on connector direction
edge.label-anchor = angle-to-anchor(θ - label-dir*90deg)
}
edge.label-sep = to-abs-length(edge.label-sep, options.em-size)
let label-pos = vector.add(
center,
vector-polar(
radius + label-dir*bend-dir*edge.label-sep,
lerp(start, stop, edge.label-pos),
)
)
draw-edge-label(edge, label-pos, options)
}
// Draw debug stuff
if options.debug >= 3 {
for (cell, point) in zip(nodes, (from, to)) {
cetz.draw.line(
cell.real-pos,
point,
stroke: DEBUG_COLOR + 0.1pt,
)
}
cetz.draw.arc(
center,
radius: radius,
start: start,
stop: stop,
anchor: "center",
stroke: DEBUG_COLOR + edge.stroke.thickness/4,
)
}
}
#let draw-edge-corner(edge, nodes, options) = {
let θ = vector-angle(vector.sub(nodes.at(1).real-pos, nodes.at(0).real-pos))
let θ-floor = calc.floor(θ/90deg)*90deg
let θ-ceil = calc.ceil(θ/90deg)*90deg
let bend-dir = (
if edge.corner == right { true }
else if edge.corner == left { false }
else { panic("Edge corner option must be left or right.") }
)
// Angles at which arrow heads point, going along the edge
let cap-angles = (
if bend-dir { (θ-ceil, θ-floor) }
else { (θ-floor, θ-ceil) }
)
let cap-points = zip(nodes, cap-angles, (0, 1)).map(((node, θ, dir)) => {
// todo: defocus?
get-node-anchor(node, θ + 180deg*dir)
})
let corner-point = if calc.even(calc.floor(θ/90deg) + int(bend-dir)) {
(cap-points.at(1).at(0), cap-points.at(0).at(1))
} else {
(cap-points.at(0).at(0), cap-points.at(1).at(1))
}
let verts = (
cap-points.at(0),
corner-point,
cap-points.at(1),
)
// Compute the three points of the right angle,
// taking into account extrusions and mark offsets
let get-vertices(shift) = {
// normal vectors to the (first, second) segment
let (a, b) = cap-angles.map(θ => vector-polar(shift, θ + 90deg))
// apply extrusions
let verts = verts.zip((a, vector.add(a, b), b))
.map(((v, o)) => vector.add(v, o))
// apply mark offsets
let offsets = cap-offsets(edge, shift).zip(cap-angles, (1, 0))
.map(((offset, θ, dir)) => vector-polar(offset, θ + 180deg*dir))
(
vector.sub(verts.at(0), offsets.at(0)),
verts.at(1),
vector.add(verts.at(2), offsets.at(1)),
)
}
// Draw right-angled line(s), one for each extrusion shift
for shift in edge.extrude {
cetz.draw.line(
..get-vertices(shift),
stroke: edge.stroke,
)
}
// Draw mark(s)
let curve(t) = {
let i = int(t >= 0.5)
vector.lerp(verts.at(i), verts.at(i + 1), 2*t - i)
}
for mark in edge.marks {
place-arrow-cap(curve, edge.stroke, mark, debug: options.debug >= 4)
}
// Draw label
if edge.label != none {
if edge.label-side == auto { edge.label-side = edge.corner }
let label-dir = if edge.label-side == right { +1 } else { -1 }
if edge.label-anchor == auto {
// Choose label anchor based on connector direction
edge.label-anchor = angle-to-anchor(θ - label-dir*90deg)
}
let v = get-vertices(label-dir*edge.label-sep)
let label-pos = zip(..v).map(coord => {
lerp-at(coord, 2*edge.label-pos)
})
draw-edge-label(edge, label-pos, options)
}
}
#let draw-edge(edge, nodes, options) = {
edge.marks = interpret-marks(edge.marks)
if edge.kind == "line" { draw-edge-line(edge, nodes, options) }
else if edge.kind == "arc" { draw-edge-arc(edge, nodes, options) }
else if edge.kind == "corner" { draw-edge-corner(edge, nodes, options) }
else { panic(edge.kind) }
}
#let draw-node(node, options) = {
if node.stroke != none or node.fill != none {
for (i, offset) in node.extrude.enumerate() {
let fill = if i == 0 { node.fill } else { none }
if node.shape == "rect" {
cetz.draw.content(
node.real-pos,
rect(
width: node.size.at(0) + 2*offset,
height: node.size.at(1) + 2*offset,
stroke: node.stroke,
fill: fill,
radius: node.corner-radius,
)
)
}
if node.shape == "circle" {
cetz.draw.content(
node.real-pos,
circle(
radius: node.radius + offset,
stroke: node.stroke,
fill: fill,
)
)
}
}
}
if node.label != none {
cetz.draw.content(node.real-pos, node.label, anchor: "center")
}
// Draw debug stuff
if options.debug >= 1 {
// dot at node anchor
cetz.draw.circle(
node.real-pos,
radius: 0.5pt,
fill: DEBUG_COLOR,
stroke: none,
)
}
// node bounding shapes
if options.debug >= 3 and node.shape == "rect" {
cetz.draw.rect(
..node.rect,
stroke: DEBUG_COLOR + 0.25pt,
)
}
if options.debug >= 3 and node.shape == "circle" {
cetz.draw.circle(
node.real-pos,
radius: node.radius,
stroke: DEBUG_COLOR + 0.25pt,
)
}
}
#let find-node-at(nodes, pos) = {
nodes.filter(node => node.pos == pos)
.sorted(key: node => node.radius).last()
}
#let draw-diagram(
grid,
nodes,
edges,
options,
) = {
// cetz.draw.scale((
// x: if options.axes.at(0) == ltr { +1 } else if options.axes.at(0) == rtl { -1 },
// y: if options.axes.at(1) == btt { +1 } else if options.axes.at(1) == ttb { -1 },
// ))
for node in nodes {
draw-node(node, options)
}
for edge in edges {
let nodes = edge.points.map(find-node-at.with(nodes))
let intersection-stroke = if options.debug >= 2 {
(paint: DEBUG_COLOR, thickness: 0.25pt)
}
// assert(edge.marks.all(mark => type(mark) == dictionary), message: repr(edge))
draw-edge(edge, nodes, options)
}
// draw axes
if options.debug >= 1 {
cetz.draw.scale(
(
x: grid.scale.at(0),
y: grid.scale.at(1),
),
)
(
{
cetz.draw.rect(
(0,0),
grid.bounding-size,
stroke: DEBUG_COLOR + 0.25pt
)
for (axis, coord) in ((0, (x,y) => (x,y)), (1, (y,x) => (x,y))) {
for (i, x) in grid.centers.at(axis).enumerate() {
let size = grid.sizes.at(axis).at(i)
// coord = (x, y) => coord(..(x, y).zip(grid.scale).map(((a, b)) => a*b))
// coord = (x, y) => element-wise-mul((x, y), grid.scale)
// coordinate label
cetz.draw.content(
coord(x, -0.4em),
text(fill: DEBUG_COLOR, size: .75em)[#(grid.origin.at(axis) + i)],
anchor: if axis == 0 { "top" } else { "right" }
)
// size bracket
cetz.draw.line(
..(+1, -1).map(dir => coord(x + dir*max(size, 1e-6pt)/2, 0)),
stroke: DEBUG_COLOR + .75pt,
mark: (start: "|", end: "|")
)
// gridline
cetz.draw.line(
coord(x, 0),
coord(x, grid.bounding-size.at(1 - axis)),
stroke: (
paint: DEBUG_COLOR,
thickness: .3pt,
dash: "densely-dotted",
),
)
}
}
}
)
}
}
|
https://github.com/typst/packages | https://raw.githubusercontent.com/typst/packages/main/packages/preview/fletcher/0.1.1/src/draw.typ | typst | Apache License 2.0 | #import "@preview/cetz:0.1.2" as cetz: vector
#import "utils.typ": *
#import "marks.typ": *
/// Get the point at which a connector should attach to a node from a given
/// angle, taking into account the node's size and shape.
///
/// - node (dictionary): The node to connect to.
/// - θ (angle): The desired angle from the node's center to the connection
/// point.
/// -> point
#let get-node-anchor(node, θ) = {
if node.radius < 1e-3pt { return node.real-pos }
if node.shape == "circle" {
vector.add(
node.real-pos,
vector-polar(node.radius + node.outset, θ),
)
} else if node.shape == "rect" {
let origin = node.real-pos
let μ = calc.pow(node.aspect, node.defocus)
let origin-δ = (
calc.max(0pt, node.size.at(0)/2*(1 - 1/μ))*calc.cos(θ),
calc.max(0pt, node.size.at(1)/2*(1 - μ/1))*calc.sin(θ),
)
let crossing-line = (
vector.add(origin, origin-δ),
vector.add(origin, vector-polar(1e3*node.radius, θ)),
)
intersect-rect-with-crossing-line(node.outer-rect, crossing-line)
} else { panic(node.shape) }
}
/// Get the points where a connector between two nodes should be drawn between,
/// taking into account the nodes' sizes and relative positions.
///
/// - conn (dictionary): The connector whose end points should be determined.
/// - nodes (pair of dictionaries): The start and end nodes of the connector.
/// -> pair of points
#let get-conn-anchors(conn, nodes) = {
let center-center-line = nodes.map(node => node.real-pos)
let v = vector.sub(..center-center-line)
let θ = vector-angle(v) // approximate angle of connector
let δ = if conn.mode == "arc" { conn.bend } else { 0deg }
let incident-angles = (θ + δ + 180deg, θ - δ)
let points = zip(nodes, incident-angles).map(((node, θ)) => {
get-node-anchor(node, θ)
})
points
}
#let draw-connector(conn, nodes, options) = {
// Stroke end points, before adjusting for the arrow heads
let cap-points = get-conn-anchors(conn, nodes)
let θ = vector-angle(vector.sub(..cap-points))
// Get the arrow head adjustment for a given extrusion distance
let cap-offsets(y) = zip(conn.marks, (+1, -1))
.map(((mark, dir)) => {
if mark == none or mark not in CAP_OFFSETS {
0pt
} else {
dir*CAP_OFFSETS.at(mark)(y)*conn.stroke.thickness
}
})
let cap-angles
let label-pos
if conn.mode == "line" {
cap-angles = (θ, θ + 180deg)
for shift in conn.extrude {
let d = shift*conn.stroke.thickness
let shifted-line-points = cap-points
.zip(cap-offsets(shift))
.map(((point, offset)) => vector.add(
point,
vector.add(
// Shift end points lengthways depending on markers
vector-polar(offset, θ),
// Shift line sideways (for double line effects, etc)
vector-polar(d, θ + 90deg),
)
))
cetz.draw.line(
..shifted-line-points,
stroke: conn.stroke,
)
}
// Choose label anchor based on connector direction
if conn.label-side == auto {
conn.label-side = if calc.abs(θ) > 90deg { left } else { right }
}
let label-dir = if conn.label-side == right { +1 } else { -1 }
if conn.label-anchor == auto {
conn.label-anchor = angle-to-anchor(θ - label-dir*90deg)
}
conn.label-sep = to-abs-length(conn.label-sep, options.em-size)
label-pos = vector.add(
vector.lerp(..cap-points, conn.label-pos),
vector-polar(conn.label-sep, θ + label-dir*90deg),
)
} else if conn.mode == "arc" {
let (center, radius, start, stop) = get-arc-connecting-points(..cap-points, conn.bend)
let bend-dir = if conn.bend > 0deg { +1 } else { -1 }
let δ = bend-dir*90deg
cap-angles = (start + δ, stop - δ)
for shift in conn.extrude {
let (start, stop) = (start, stop)
.zip(cap-offsets(shift))
.map(((θ, arclen)) => θ + bend-dir*arclen/radius*1rad)
cetz.draw.arc(
center,
radius: radius + shift*conn.stroke.thickness,
start: start,
stop: stop,
anchor: "center",
stroke: conn.stroke,
)
}
if conn.label-side == auto {
conn.label-side = if conn.bend > 0deg { left } else { right }
}
let label-dir = if conn.label-side == left { +1 } else { -1 }
if conn.label-anchor == auto {
// Choose label anchor based on connector direction
conn.label-anchor = angle-to-anchor(θ + label-dir*90deg)
}
conn.label-sep = to-abs-length(conn.label-sep, options.em-size)
label-pos = vector.add(
center,
vector-polar(
radius + label-dir*bend-dir*conn.label-sep,
lerp(start, stop, conn.label-pos),
)
)
} else { panic(conn.mode) }
for (mark, pt, θ) in zip(conn.marks, cap-points, cap-angles) {
if mark == none { continue }
draw-arrow-cap(pt, θ, conn.stroke, mark)
}
if conn.label != none {
cetz.draw.content(
label-pos,
box(
fill: options.crossing-fill,
inset: .2em,
radius: .2em,
stroke: if options.debug >= 2 { DEBUG_COLOR + 0.25pt },
$ #conn.label $,
),
anchor: conn.label-anchor,
)
if options.debug >= 2 {
cetz.draw.circle(
label-pos,
radius: conn.stroke.thickness,
stroke: none,
fill: DEBUG_COLOR,
)
}
}
if options.debug >= 3 {
for (cell, point) in zip(nodes, cap-points) {
cetz.draw.line(
cell.real-pos,
point,
stroke: DEBUG_COLOR + 0.1pt,
)
}
}
}
#let find-node-at(nodes, pos) = {
nodes.filter(node => node.pos == pos)
.sorted(key: node => node.radius).last()
}
#let draw-diagram(
grid,
nodes,
arrows,
options,
) = {
for (i, node) in nodes.enumerate() {
if node.label == none { continue }
if node.stroke != none or node.fill != none {
if node.shape == "rect" {
cetz.draw.rect(
..node.rect,
stroke: node.stroke,
fill: node.fill,
)
}
if node.shape == "circle" {
cetz.draw.circle(
node.real-pos,
radius: node.radius,
stroke: node.stroke,
fill: node.fill,
)
}
}
cetz.draw.content(node.real-pos, node.label, anchor: "center")
if options.debug >= 1 {
cetz.draw.circle(
node.real-pos,
radius: 1pt,
fill: DEBUG_COLOR,
stroke: none,
)
}
if options.debug >= 2 {
if options.debug >= 3 or node.shape == "rect" {
cetz.draw.rect(
..node.rect,
stroke: DEBUG_COLOR + 0.25pt,
)
}
if options.debug >= 3 or node.shape == "circle" {
cetz.draw.circle(
node.real-pos,
radius: node.radius,
stroke: DEBUG_COLOR + 0.25pt,
)
}
}
}
for arrow in arrows {
let nodes = arrow.points.map(find-node-at.with(nodes))
let intersection-stroke = if options.debug >= 2 {
(paint: DEBUG_COLOR, thickness: 0.25pt)
}
draw-connector(arrow, nodes, options)
}
// draw axes
if options.debug >= 1 {
cetz.draw.rect(
(0,0),
grid.bounding-size,
stroke: DEBUG_COLOR + 0.25pt
)
for (axis, coord) in ((0, (x,y) => (x,y)), (1, (y,x) => (x,y))) {
for (i, x) in grid.centers.at(axis).enumerate() {
let size = grid.sizes.at(axis).at(i)
// coordinate label
cetz.draw.content(
coord(x, -.4em),
text(fill: DEBUG_COLOR, size: .75em)[#(grid.origin.at(axis) + i)],
anchor: if axis == 0 { "top" } else { "right" }
)
// size bracket
cetz.draw.line(
..(+1, -1).map(dir => coord(x + dir*max(size, 1e-6pt)/2, 0)),
stroke: DEBUG_COLOR + .75pt,
mark: (start: "|", end: "|")
)
// gridline
cetz.draw.line(
coord(x, 0),
coord(x, grid.bounding-size.at(1 - axis)),
stroke: (
paint: DEBUG_COLOR,
thickness: .3pt,
dash: "densely-dotted",
),
)
}
}
}
}
|
https://github.com/7sDream/fonts-and-layout-zhCN | https://raw.githubusercontent.com/7sDream/fonts-and-layout-zhCN/master/chapters/08-adv-features/adv-features.typ | typst | Other | #import "/template/template.typ": web-page-template
#import "/template/heading.typ": chapter
#import "/template/components.typ": note, cross-ref
#import "/lib/glossary.typ": tr
#show: web-page-template
#chapter(meta: (
footnote: [译注:作者尚未开始编写本章,原文亦只有小节标题。#cross-ref(<heading:not-finish-point>, web-path: "/chapters/09-layout/other-dir.typ", web-content: [第9.2节])及之后也是此情况,不注。]
))[
// Advanced Feature Programming
高级特性编程
]
|
https://github.com/polarkac/MTG-Stories | https://raw.githubusercontent.com/polarkac/MTG-Stories/master/stories/016_Fate%20Reforged.typ | typst | #import "@local/mtgset:0.1.0": conf
#show: doc => conf("Fate Reforged", doc)
#include "./016 - Fate Reforged/001_A New Tarkir of Old.typ"
#include "./016 - Fate Reforged/002_Unwritten.typ"
#include "./016 - Fate Reforged/003_The Reforged Chain.typ"
#include "./016 - Fate Reforged/004_The Truth of Names.typ"
#include "./016 - Fate Reforged/005_The Doom of the Golden Fang.typ"
#include "./016 - Fate Reforged/006_No End and No Beginning.typ"
#include "./016 - Fate Reforged/007_Khanfall.typ"
|
https://github.com/alberto-lazari/computer-science | https://raw.githubusercontent.com/alberto-lazari/computer-science/main/advanced-topics-cs/quantum-algorithms/chapters/gradient.typ | typst | #import "/common.typ": *
= Gradient estimation
Classical computers are able to estimate the gradient of a $d$-dimension function $f : [0, 1]^d -> RR_+$ at the origin by evaluating $f$ in $O(d)$ time.
Quantum computers are able to do it by evaluating $f$ once, a constant amount of time.
== Jordan's quantum gradient algorithm
Given a linear $d$-dimension function $f(x) = a^top x + b, quad a in RR^d$ the algorithm returns
$a = nabla f(0)$.
A binary oracle $U_f$ of the function is needed: a matrix such that
$ U_f#ket($arrow(x)$)ket(arrow(y)) =
ket(arrow(x))ket(arrow(y) plus.square arrow(f(x)))
$
Where $ket(arrow(x))$ is a register composed of $d$ $q$-qubit registers,
representing the arguments $x_1, x_2, ..., x_d$ of $f ==>
ket(arrow(x)) = ket(arrow(x_1))ket(arrow(x_2))...ket(arrow(x_d))
$
In particular, when $ket(arrow(y))$ is the initial quantum state
$ U_f#ket($arrow(x)$)ket(arrow(0)) =
ket(arrow(x))ket(arrow(f(x)))
$
The algorithm returns $a$, which is $nabla f(0)$, the gradient of $f$ at the origin, by applying the $H$ gate on $ket(arrow(x))$, QFT on $ket(arrow(y))$, then $U_f$ on the result.
In order to measure $a$, $"QFT"^(-1)$ is applied.
== Nonlinear functions
If $f$ is nonlinear, an approximation can be constructed as $g(x) = nabla f(0)^top x$.
Then Jordan's algorithm can be applied to $g$, in order to estimate $nabla f(0)$.
$g(x)$ can be expressed (for $m$ large enough) as
$ sum_(k = -m)^m a_n f(k x) $
The complexity is $O(m)$.
|
|
https://github.com/ntjess/typst-nsf-templates | https://raw.githubusercontent.com/ntjess/typst-nsf-templates/main/main-nsf-general.typ | typst | MIT License | #import "nsf-proposal.typ": *
#show: template
#show regex("(?i:\bxxx\b)"): highlight[
(UPDATE ME)
]
#let note(..args) = {
set par(justify: false)
set text(size: 8pt, fill: blue)
[~]
drafting.margin-note(..args)
}
#title[Project Title XXX]
#text(red)[
#set par(justify: false)
Instructions pulled from:
- https://www.nsf.gov/pubs/policydocs/pappg17_1/pappg_2.jsp
- https://new.nsf.gov/policies/pappg/23-1/ch-2-proposal-preparation
- https://www.lsu.edu/osp/files/NSF_StandardProposalOutline.pdf
Hide them by either deleting the `#instruction` blocks or uncommenting the specified
line below.
]
// ----------------------------------
// UNCOMMENT ME TO HIDE INSTRUCTIONS
// ----------------------------------
// #let instructions(..args) = {}
= Cover Sheet
== Awardee Organization/Primary Place of Performance
#instructions[
The information on the Awardee Organization is prefilled on the Cover Sheet based on the login information entered. If the project will be performed at the awardee organization, check the designated box. If the project, however, will be performed at a location other than the awardee, provide the following information (where applicable):
Organization Name (identify the organization name of the primary site where the work will be performed, if different than the awardee); Street; City; State; 9-digit Zip Code.
]
== Program Announcement/Solicitation/Program Description Number
#instructions[
Proposers are required to select the applicable funding opportunity (i.e., program announcement, solicitation or description.) If the proposal is not submitted in response to a specific funding opportunity, proposers should select "Proposal and Award Policies and Procedures Guide." Proposers are advised to select "No Closing Date" when the proposal is not submitted in response to any relevant NSF funding opportunity.
]
== NSF Unit of Consideration
#instructions[
Proposers are required to select the NSF Unit of Consideration from the picklist. If the proposal is not submitted in response to a specific funding opportunity, proposers should select "Not Applicable."
]
== Remainder of the Cover Sheet
#instructions[
The remainder of the Cover Sheet is prefilled based on the information provided by the proposer. The information is pulled from the proposer's profile. The information is editable by the proposer. The following information is required:
#columns(2)[
- Project Title;
- Budget;
- Announcement and Consideration Information
- PI & Co-PI Information;
- Previous NSF Award Information;
#colbreak()
- Other Federal Agencies Information;
- Awardee Organization Information;
- Primary Place of Performance
- Other Information: beginning investigator, lobbying activities, ...
]
]
= Project Summary (1 page)
#instructions[
The Project Summary should be written in the third person, informative to other persons working in the same or related fields, and, insofar as possible, understandable to a scientifically or technically literate lay reader. It should not be an abstract of the proposal. The summary must include three separate sections, each of which is input into a text box in FastLane:
- *Overview*: Describe the activity that would result if the proposal were funded and state the objectives and methods to be employed.
- *Intellectual Merit*: Describe the potential of the proposed activity to advance knowledge.
- *Broader Impacts*: Describe the potential of the proposed activity to benefit society and contribute to the achievement of specific, desired societal outcomes.
]
== Overview
#instructions[
The overview includes a description of the activity that would result if the proposal were funded and a statement of objectives and methods to be employed. The overview should be written in the third person and be informative to other persons working in the same or related fields. It should not be an abstract of the proposal.
]
== Intellectual Merit
#instructions[
The Intellectual Merit criterion encompasses the potential to advance knowledge. It includes the potential to: (i) enhance understanding of the subject under study; (ii) advance the concepts, theories, methodologies, instrumentation, or interventions employed; (iii) contribute to improved research capacity, infrastructure, or capability; (iv) enhance the understanding of the participating students/trainees as well as the intellectual merit of their thesis research; and (v) advance the mission of the program to broaden participation of underrepresented groups.
]
== Broader Impacts
#instructions[
The Broader Impacts criterion encompasses the potential to benefit society and contribute to the achievement of specific, desired societal outcomes. Examples illustrating activities likely to demonstrate broader impacts are available electronically on the NSF website. The project activities may be based on previously established and/or innovative methods and approaches, but in either case must be well justified. These issues apply to both the technical aspects of the proposal and the way in which the project may make broader contributions.
]
= Table of Contents (automatically generated)
// #outline(depth: 1)
#instructions[
The Table of Contents is automatically generated for the proposal. The proposer cannot edit this form.
]
= Project Description (15 pages)
#instructions[
The Project Description should provide a clear statement of the work to be undertaken and must include the objectives for the period of the proposed work and expected significance; the relationship of this work to the present state of knowledge in the field, as well as to work in progress by the PI under other support. URLs must not be used. Visual materials, including charts, graphs, maps, photographs and other pictorial presentations are included in the 15-page limitation. Should this project involve collaboration with other institutions/organizations, describe the roles to be played by the other entities, specify the managerial arrangements, and explain the advantages of the multi-organizational effort.
The Project Description should outline the general plan of work, including the broad design of activities to be undertaken, and, where appropriate, provide a clear description of experimental methods and procedures. Proposers should address what they want to do, why they want to do it, how they plan to do it, how they will know if they succeed, and what benefits could accrue if the project is successful. The project activities may be based on previously established and/or innovative methods and approaches, but in either case must be well justified. These issues apply to both the technical aspects of the proposal and the way in which the project may make broader contributions.
*The Project Description also must contain, as a separate section within the narrative, a section labeled "Broader Impacts"* (see below).
]
== Broader Impacts
#instructions[
This section should provide a discussion of the broader impacts of the proposed activities. Broader impacts may be accomplished through the research itself, through the activities that are directly related to specific research projects, or through activities that are supported by, but are complementary to the project. NSF values the advancement of scientific knowledge and activities that contribute to the achievement of societally relevant outcomes. Such outcomes include, but are not limited to: full participation of women, persons with disabilities, and underrepresented minorities in science, technology, engineering, and mathematics (STEM); improved STEM education and educator development at any level; increased public scientific literacy and public engagement with science and technology; improved well-being of individuals in society; development of a diverse, globally competitive STEM workforce; increased partnerships between academia, industry, and others; improved national security; increased economic competitiveness of the U.S.; use of science and technology to inform public policy; and enhanced infrastructure for research and education. These examples of societally relevant outcomes should not be considered either comprehensive or prescriptive. Proposers may include appropriate outcomes not covered by these examples.
Plans for data management and sharing of the products of research, including preservation, documentation, and sharing of data, samples, physical collections, curriculum materials and other related research and education products should be described in the Special Information and Supplementary Documentation section of the proposal (see Chapter II.D.2.i(ii) for additional instructions for preparation of this section).
For proposals that include funding to an International Branch Campus of a U.S. IHE or to a foreign organization or foreign individual (including through use of a subaward or consultant arrangement), the proposer must provide the requisite explanation/justification in the project description. See Chapter I.E for additional information on the content requirements.
]
== Results from Prior NSF Support (up to 5 pages)
#instructions[
The purpose of this section is to assist reviewers in assessing the quality of prior work conducted with prior or current NSF funding. If any PI or co-PI identified on the proposal has received prior NSF support including:
- an award with an end date in the past five years; or
- any current funding, including any no cost extensions.
Information on the award is required for each PI and co-PI, regardless of whether the support was directly related to the proposal or not. In cases where the PI or any co-PI has received more than one award (excluding amendments to existing awards), they need only report on the one award that is most closely related to the proposal. Support means salary support, as well as any other funding awarded by NSF, including research, Graduate Research Fellowship, Major Research Instrumentation, conference, equipment, travel, and center awards, etc.
The following information must be provided:
+ The NSF award number, amount and period of support;
+ The title of the project;
+ A summary of the results of the completed work, including accomplishments, supported by the award. The results must be separately described under two distinct headings: Intellectual Merit and Broader Impacts;
+ A listing of the publications resulting from the NSF award (a complete bibliographic citation for each publication must be provided either in this section or in the References Cited section of the proposal); if none, state “No publications were produced under this award.”
+ Evidence of research products and their availability, including, but not limited to: data, publications, samples, physical collections, software, and models, as described in any Data Management Plan; and
+ If the proposal is for renewed support, a description of the relation of the completed work to the proposed work.
If the project was recently awarded and therefore no new results exist, describe the major goals and broader impacts of the project. Note that the proposal may contain up to five pages to describe the results. Results may be summarized in fewer than five pages, which would give the balance of the 15 pages for the Project Description.
]
== Unfunded Collaborations
#instructions[
Any substantial collaboration with individuals not included in the budget should be described in the Facilities, Equipment and Other Resources section of the proposal (see Chapter II.D.2.g) and documented in a letter of collaboration from each collaborator. Such letters should be provided in the supplementary documentation section of Research.gov and follow the format instructions specified in Chapter II.D.2.i. Collaborative activities that are identified in the budget should follow the instructions in Chapter II.E.3.
]
== Group Proposals
#instructions[
NSF encourages submission of proposals by groups of investigators; often these are submitted to carry out interdisciplinary projects. Unless stipulated in a specific program solicitation, however, such proposals will be subject to the 15-page Project Description limitation established in Section (ii) above. PIs who wish to exceed the established page limitations for the Project Description must request and receive a deviation in advance of proposal submission. (Chapter II.A contains information on deviations.)
]
== Proposals for Renewed Support
#instructions[
A proposal for renewed support may be either a "traditional" proposal in which the proposed work would be accomplished within a five-year period, or a "continuing grant" proposal, which is a request for a commitment for support beyond five years that is predicated upon the achievement of specific goals and objectives described in the proposal. In either case, the proposal must describe the results of the prior NSF support and indicate how the new request builds upon or deviates from work supported by the NSF award. If the proposal is for renewed support for a continuing grant, the proposer must include a summary of progress toward the specific goals and objectives established for the current award. The proposal also must describe the relation of the completed work to the proposed work. The proposal must contain a separate section within the Project Description that provides results from the prior NSF support.
]
= References Cited (no page limit)
#instructions[
This section should contain a list of bibliographic citations relevant to the proposal. While there is no page limit, this section must include bibliographic citations only and must not be used to provide parenthetical information outside of the 15-page Project Description. Note that:
- Every reference must include the names of all authors (in the same sequence in which they appear in the publication), the article and journal title, book title, volume number, page numbers, and year of publication.
- If the document is available electronically, the website address also should be identified. Inclusion of a website address is optional.
- Proposers must be especially careful to follow accepted scholarly practices in providing citations for source materials relied upon when preparing any section of the proposal.
]
= Bio Sketch(es) (2 pages per person)
== Senior Personnel
#instructions[
A biographical sketch (limited to two pages) is required for each individual identified as senior personnel. (See #link("https://www.nsf.gov/pubs/policydocs/pappg17_1/pappg_2.jsp#2ex7")[Exhibit II-7] for the definitions of Senior Personnel.) Proposers may elect to use third-party solutions, such as NIH's SciENcv to develop and maintain their biographical sketch. However, proposers are advised that they are still responsible for ensuring that biographical sketches created using third-party solutions are compliant with NSF proposal preparation requirements.
The following information must be provided in the order and format specified below. Inclusion of additional information beyond that specified below may result in the proposal being returned without review.
Do not submit any personal information in the biographical sketch. This includes items such as: home address; home telephone, fax, or cell phone numbers; home e-mail address; driver's license number; marital status; personal hobbies; and the like. Such personal information is not appropriate for the biographical sketch and is not relevant to the merits of the proposal. NSF is not responsible or in any way liable for the release of such material. (See also #link("https://www.nsf.gov/pubs/policydocs/pappg17_1/pappg_3.jsp#IIIH")[Chapter III.H]).
]
=== Professional Preparation
#instructions[
A list of the individual's undergraduate and graduate education and postdoctoral training (including location) as indicated below
]
#let prep-table = ProfessionalPreparationTable()
#let add-info = prep-table.add-info
#add-info(
is-first: true,
[Undergraduate Institution(s)],
institutions: [University of xxx],
locations: [City, State],
majors: [Major],
degrees: [Degree, Year]
)
// Also easy to indicate multiple institutions
#add-info(
[Graduate Institution(s)],
institutions: [
- University of xxx
- xxx University
],
locations: [
- City, State
- City, State
],
majors: [
- Major 1
- Major 2
],
degrees: [
- Degree, Year
- Degree, Year
]
)
#add-info(
[Postdoctoral Institution(s)],
institutions: [University of xxx],
locations: [City, State],
majors: [Major],
degrees: [Degree, Year]
)
#prep-table.table
=== Appointments
#instructions[
A list, in reverse chronological order, of all the individual's academic/professional appointments beginning with the current appointment.
]
=== Products
#instructions[
A list of: (i) up to five products most closely related to the proposed project; and (ii) up to five other significant products, whether or not related to the proposed project. Acceptable products must be citable and accessible including but not limited to publications, data sets, software, patents, and copyrights. Unacceptable products are unpublished documents not yet submitted for publication, invited lectures, and additional lists of products. Only the list of ten will be used in the review of the proposal.
Each product must include full citation information including (where applicable and practicable) names of all authors, date of publication or release, title, title of enclosing work such as journal or book, volume, issue, pages, website and URL or other Persistent Identifier.
If only publications are included, the heading "Publications" may be used for this section of the Biographical Sketch.
]
=== Synergistic Activities
#instructions[
A list of up to five examples that demonstrate the broader impact of the individual's professional and scholarly activities that focuses on the integration and transfer of knowledge as well as its creation. Examples could include, among others: innovations in teaching and training (e.g., development of curricular materials and pedagogical methods); contributions to the science of learning; development and/or refinement of research tools; computation methodologies and algorithms for problem-solving; development of databases to support research and education; broadening the participation of groups underrepresented in STEM; and service to the scientific and engineering community outside of the individual's immediate organization.
]
#instructions[
*In FastLane, Biographical sketches for all senior project personnel must be uploaded as a single PDF file associated with that individual.*
]
== Other Personnel
#instructions[
For the personnel categories listed below, the proposal also may include information on exceptional qualifications that merit consideration in the evaluation of the proposal. Such information should be clearly identified as "Other Personnel" biographical information and uploaded as a single PDF file associated with the Other Personnel. The following information must be provided:
+ Postdoctoral associates
+ Other professionals
+ Students (research assistants)
]
== Equipment Proposals
#instructions[
For equipment proposals, the following must be provided for each auxiliary user:
+ A short biographical sketch (limited to two pages)
+ A list of up to five publications most closely related to the proposed acquisition.
Such information should be clearly identified as "Equipment Proposal" biographical information and uploaded as a single PDF file in the Other Supplementary Documents section of the proposal.
]
= Budget and Budget Justification (budget justification limited to 3 pages)
#instructions[
Each proposal must contain a budget for each year of support requested. The budget justification must be no more than three pages per proposal. The amounts for each budget line item requested must be documented and justified in the budget justification as specified below. For proposals that contain a subaward(s), *each subaward* must include a separate budget justification of no more than three pages. See #link("https://www.nsf.gov/pubs/policydocs/pappg17_1/pappg_2.jsp#IIC2gvie")[Chapter II.C.2.g.(vi)(e)] for further instructions on proposals that contain subawards.
The proposal may request funds under any of the categories listed so long as the item and amount are considered necessary, reasonable, allocable, and allowable under 2 CFR200, Subpart E, NSF policy, and/or the program solicitation. For-profit entities are subject to the cost principles contained in the Federal Acquisition Regulation, Part 31. Amounts and expenses budgeted also must be consistent with the proposing organization's policies and procedures and cost accounting practices used in accumulating and reporting costs.
Proposals for large facilities also should consult NSF's Large Facilities Manual for additional budgetary preparation guidelines.
]
== Salaries and Wages (Lines A and B on the Proposal Budget)
=== Senior Personnel Salaries & Wages Policy
#instructions[
NSF regards research as one of the normal functions of faculty members at institutions of higher education. Compensation for time normally spent on research within the term of appointment is deemed to be included within the faculty member's regular organizational salary.
As a general policy, NSF limits the salary compensation requested in the proposal budget for senior personnel to no more than two months of their regular salary in any one year. (See Exhibit II-3 for the definitions of Senior Personnel.) It is the organization's responsibility to define and consistently apply the term “year”, and to specify this definition in the budget justification. This limit includes salary compensation received from all NSF-funded grants. This effort must be documented in accordance with 2 CFR §200, Subpart E, including 2 CFR §200.430(i). If anticipated, any compensation for such personnel in excess of two months must be disclosed in the proposal budget, justified in the budget justification, and must be specifically approved by NSF in the award notice budget.[14]
Under normal rebudgeting authority, as described in Chapters VII and X, a recipient can internally approve an increase or decrease in person months devoted to the project after an award is made, even if doing so results in salary support for senior personnel exceeding the two-month salary policy. No prior approval from NSF is necessary unless the rebudgeting would cause the objectives or scope of the project to change. NSF prior approval is necessary if the objectives or scope of the project change.
These same general principles apply to other types of non-academic organizations.
]
=== Administrative and Clerical Salaries & Wages Policy
#instructions[
In accordance with 2 CFR §200.413, the salaries of administrative and clerical staff should normally be treated as indirect costs (F&A). Direct charging of these costs may be appropriate only if all the conditions identified below are met:
(i) Administrative or clerical services are integral to a project or activity;
(ii) Individuals involved can be specifically identified with the project or activity;
(iii) Such costs are explicitly included in the approved budget or have the prior written approval of the cognizant NSF Grants and Agreements Officer; and
(iv) The costs are not also recovered as indirect costs.
Conditions (i), (ii) and (iv) above are particularly relevant for consideration at the budget preparation stage.
]
=== Procedures
#instructions[
The names of the PI(s), faculty, and other senior personnel and the estimated number of full-time-equivalent person-months for which NSF funding is requested, and the total amount of salaries requested per year, must be listed. For consistency with the NSF cost sharing policy, if person months will be requested for senior personnel, a corresponding salary amount must be entered on the budget. If salary and person months are not being requested for an individual designated as senior personnel, they should be removed from Section A of the budget. Their name(s) will remain on the Cover Sheet and the individual(s) role on the project should be described in the Facilities, Equipment and Other Resources section of the proposal.
For postdoctoral associates and other professionals, the total number of persons for each position must be listed, with the number of full-time-equivalent person-months and total amount of salaries requested per year. For graduate and undergraduate students, secretarial, clerical, technical, etc., whose time will be charged directly to the project, only the total number of persons and total amount of salaries requested per year in each category is required. Compensation classified as salary payments must be requested in the salaries and wages category. Salaries requested must be consistent with the organization's regular practices. The budget justification should detail the rates of pay by individual for senior personnel, postdoctoral associates, and other professionals.
]
=== Confidential Budgetary Information
#instructions[
The proposing organization may request that salary data on senior personnel not be released to persons outside the Government during the review process. In such cases, the item for senior personnel salaries in the proposal may appear as a single figure and the person-months represented by that amount omitted. If this option is exercised, senior personnel salaries and person-months must be itemized in a separate statement and forwarded to NSF in accordance with the instructions specified in Chapter II.E.1. This statement must include all of the information requested on the proposal budget for each person involved. NSF will not forward the detailed information to reviewers and will hold it privileged to the extent permitted by law. The information on senior personnel salaries will be used as the basis for determining the salary amounts shown in the budget. The box for "Proprietary or Privileged Information" must be checked on the Cover Sheet when the proposal contains confidential budgetary information.[15]
]
== Fringe Benefits (Line C on the Proposal Budget)
#instructions[
If the proposer's usual accounting practices provide that its contributions to employee benefits (leave, employee insurance, social security, retirement, other payroll-related taxes, etc.) be treated as direct costs, NSF award funds may be requested to fund fringe benefits as a direct cost. These are typically determined by application of a calculated fringe benefit rate for a particular class of employee (full time or part-time) applied to the salaries and wages requested. They also may be paid based on actual costs for individual employees if that institutional policy has been approved by the cognizant Federal agency. See 2 CFR §200.431 for the definition and allowability of inclusion of fringe benefits on a proposal budget.
]
== Equipment (Line D on the Proposal Budget)
#instructions[
Equipment is defined as tangible personal property (including information technology systems) having a useful life of more than one year and a per-unit acquisition cost which equals or exceeds the lesser of the capitalization level established by the proposer for financial statement purposes, or \$5,000. It is important to note that the acquisition cost of equipment includes modifications, attachments, and accessories necessary to make an item of equipment usable for the purpose for which it will be purchased. Items of needed equipment must be adequately justified, listed individually by description and estimated cost.
Allowable items ordinarily will be limited to research equipment and apparatus not already available for the conduct of the work. General purpose equipment such as office equipment and furnishings, and information technology equipment and systems are typically not eligible for direct cost support. Special purpose or scientific use computers or associated hardware and software, however, may be requested as items of equipment when necessary to accomplish the project objectives and not otherwise reasonably available. Any request to support such items must be clearly disclosed in the proposal budget, justified in the budget justification, and be included in the NSF award budget. See 2 CFR §200.313 and 200.439 for additional information.
]
== Travel (Line E on the Proposal Budget)
=== General
#instructions[
When anticipated, travel and its relation to the proposed activities must be specified, itemized, and justified by destination and cost. Funds may be requested for field work, attendance at meetings and conferences, and other travel associated with the proposed work, including subsistence. To qualify for support, however, attendance at meetings or conferences must be necessary to accomplish proposal objectives or disseminate research results. Travel support for dependents of key project personnel may be requested only when the travel is for a duration of six months or more either by inclusion in the approved budget or with the prior written approval of the cognizant NSF Grants and Agreements Officer. Temporary dependent care costs above and beyond regular dependent care that directly result from travel to conferences are allowable costs provided that the conditions established in 2 CFR §200.475 are met.
Allowance for air travel normally will not exceed the cost of round-trip, economy airfares. Persons traveling under NSF awards must travel by U.S.-Flag Air carriers, if available.
]
=== Domestic Travel
#instructions[
Domestic travel includes travel within and between the U.S., its territories, and possessions.[16] Travel, meal, and hotel expenses of recipient employees who are not on travel status are unallowable. Costs of employees on travel status are limited to those specifically authorized by 2 CFR §200.475.
]
=== Foreign Travel
#instructions[
Travel outside the areas specified above is considered foreign travel. When anticipated, the proposer must enter the names of countries and dates of visit on the proposal budget, if known.
]
== Participant Support (Line F on the Proposal Budget)
#instructions[
This budget category refers to direct costs for items such as stipends or subsistence allowances, travel allowances, and registration fees paid to or on behalf of participants or trainees (but not employees) in connection with NSF-sponsored conferences or training projects. Any additional categories of participant support costs other than those described in 2 CFR §200.1 (such as incentives, gifts, souvenirs, t-shirts, and memorabilia), must be justified in the budget justification, and such costs will be closely scrutinized by NSF. (See also Chapter II.F.7.) Speakers and trainers generally are not considered participants and should not be included in this section of the budget. However, if the primary purpose of the individual's attendance at the conference is learning and receiving training as a participant, then the costs may be included under participant support. If the primary purpose is to speak or assist with management of the conference, then such costs should be budgeted in appropriate categories other than participant support.
For some educational projects conducted at local school districts, the participants being trained are employees. In such cases, the costs must be classified as participant support if payment is made through a stipend or training allowance method. The school district must have an accounting mechanism in place (i.e., sub-account code) to differentiate between regular salary and stipend payments.
To help defray the costs of participating in a conference or training activity, funds may be proposed for payment of stipends, per diem or subsistence allowances, based on the type and duration of the activity. Such allowances must be reasonable, in conformance with the policy of the proposing organization and limited to the days of attendance at the conference plus the actual travel time required to reach the conference location. Where meals or lodgings are furnished without charge or at a nominal cost (e.g., as part of the registration fee), the per diem or subsistence allowance should be correspondingly reduced. Although local participants may participate in conference meals and coffee breaks, funds may not be proposed to pay per diem or similar expenses for local participants in the conference. Costs related to an NSF-sponsored conference (e.g., venue rental fees, catering costs, supplies, etc.) that will be secured through a service agreement/contract should be budgeted on line G.6., “Other Direct Costs” to ensure appropriate allocation of indirect costs.
This section of the budget also may not be used for incentive payments to research subjects. Human subject payments should be included on line G.6. of the NSF budget under “Other Direct Costs,” and any applicable indirect costs should be calculated on the payments in accordance with the organization's Federally negotiated indirect cost rate.
Funds may be requested for the travel costs of participants. If so, the restrictions regarding class of accommodations and use of U.S.-Flag air carriers are applicable.[17] In training activities that involve off-site field work, costs of transportation of participants are allowable. The number of participants to be supported must be entered in the parentheses on the proposal budget. Participant support costs must be specified, itemized, and justified in the budget justification section of the proposal. Indirect costs (F&A) are not usually allowed on costs budgeted as participant support unless the recipient's current, Federally approved indirect cost rate agreement provides for allocation of F&A to participant support costs. Participant support costs must be accounted for separately should an award be made.
]
== Other Direct Costs (Lines G1 through G6 on the Proposal Budget)
#instructions[
Any costs proposed to an NSF project must be allowable, reasonable, and directly allocable to the supported activity. When anticipated, the budget must identify and itemize other anticipated direct costs not included under the headings above, including materials and supplies, publication costs, and computer and vendor services. Examples include aircraft rental, space rental at research establishments away from the proposing organization, minor building alterations, payments to human subjects, and service charges. Reference books and periodicals only may be included on the proposal budget if they are specifically allocable to the project being supported by NSF.
]
=== Materials and Supplies (including Costs of Computing Devices) (Line G1 on the Proposal Budget)
#instructions[
When anticipated, the proposal budget justification must indicate the general types of expendable materials and supplies required. Supplies are defined as all tangible personal property other than those described in paragraph (d)(iii) above. A computing device is considered a supply if the acquisition cost is less than the lesser of the capitalization level established by the proposer or \$5,000, regardless of the length of its useful life. In the specific case of computing devices, charging as a direct cost is allowable for devices that are essential and allocable, but not solely dedicated, to the performance of the NSF project. Details and justification must be included for items requested to support the project.
]
=== Publication/Documentation/Dissemination (Line G2 on the Proposal Budget)
#instructions[
The proposal budget may request funds for the costs of documenting, preparing, publishing or otherwise making available to others the findings and products of the work to be conducted under the award. This generally includes the following types of activities: reports, reprints, page charges or other journal costs (except costs for prior or early publication); necessary illustrations; cleanup, documentation, storage and indexing of data and databases; development, documentation and debugging of software; and storage, preservation, documentation, indexing, etc., of physical specimens, collections, or fabricated items. Line G.2. of the proposal budget also may be used to request funding for data deposit and data curation costs.[18]
]
=== Consultant Services (also referred to as Professional Service Costs) (Line G3 on the Proposal Budget)[19]
#instructions[
The proposal budget may request costs for professional and consultant services. Professional and consultant services are services rendered by persons who are members of a particular profession or possess a special skill, and who are not officers or employees of the proposing organization. Costs of professional and consultant services are allowable when reasonable in relation to the services rendered and when not contingent upon recovery of costs from the Federal government. Anticipated services must be justified, and information furnished on each individual's expertise, primary organizational affiliation, normal daily compensation rate, and number of days of expected service. Consultants' travel costs, including subsistence, may be included. If requested, the proposer must be able to justify that the proposed rate of pay is reasonable. Additional information on the allowability of consultant or professional service costs is available in 2 CFR §200.459. In addition to other provisions required by the proposing organization, all contracts made under the NSF award must contain the applicable provisions identified in 2 CFR §200 Appendix II.
]
=== Computer Services (Line G4 on the Proposal Budget)
#instructions[
The cost of computer services, including computer-based retrieval of scientific, technical, and educational information, may be requested only where it is institutional policy to charge such costs as direct charges. A justification based on the established computer service rates at the proposing organization must be included. The proposal budget also may request costs for leasing of computer equipment.
]
=== Subawards (Line G5 on the Proposal Budget)[20][21]
#instructions[
Except for the purpose of obtaining goods and services for the proposer's own use which creates a procurement relationship with a contractor, no portion of the proposed activity may be subawarded or transferred to another organization without prior written NSF authorization. Such authorization must be provided either through approval specifically granted in the award notice or by receiving written prior approval from the cognizant NSF Grants and Agreements Officer after an award is issued. If the subaward organization is changed, prior approval of the new subaward organization must be obtained from the cognizant NSF Grants and Agreements Officer.
If known at the time of proposal submission, the intent to enter into such arrangements must be disclosed in the proposal. A separate budget and a budget justification of no more than five pages must be provided for each subrecipient, if already identified. The description of the work to be performed must be included in the project description.
All proposing organizations are required to make a case-by-case determination regarding the role of a subrecipient versus contractor for each agreement it makes. 2 CFR §200.331 provides characteristics of each type of arrangement to assist proposing organizations in making that determination. However, inclusion of a subaward or contract in the proposal budget or submission of a request after issuance of an NSF award to add a subaward or contract will document the required organizational determination.
NSF does not negotiate rates for organizations that are not direct recipients of NSF funding (e.g., subrecipients). Consistent with 2 CFR §200.332, NSF recipients must use the domestic subrecipient's applicable U.S. Federally negotiated indirect cost rate(s). If no such rate exists, the NSF recipient must determine the appropriate rate in collaboration with the subrecipient. The appropriate rate will be: a negotiated rate between the NSF recipient and the subrecipient; a prior rate negotiated between a different pass-through entity and the same subrecipient, or the de minimis indirect cost recovery rate of 10% of modified total direct costs.
It is also NSF's expectation that NSF recipients will use foreign subrecipients' applicable U.S. Federally negotiated indirect cost rate(s). However, if no such rate exists, the NSF recipient will fund the foreign subrecipient using the de minimis indirect cost rate recovery of 10% of modified total direct costs. See also Chapter I.E.2. for additional requirements on issuance of a subaward or consultant arrangement to a foreign organization or a foreign individual.
Proposers are responsible for ensuring that proposed subrecipient costs, including indirect costs, are reasonable and appropriate.
]
=== Other (Line G6 on the Proposal Budget)[22]
#instructions[
Any other direct costs not specified in Lines G.1. through G.5. must be identified on Line G.6. Such costs must be itemized and detailed in the budget justification. Examples include:
- Contracts for the purpose of obtaining goods and services for the proposer's own use (see 2 CFR §200.331 for additional information); and
- Incentive payments, for example, payments to human subjects or incentives to promote completion of a survey, should be included on line G.6. of the NSF budget. Incentive payments should be proposed in accordance with organizational policies and procedures. Indirect costs should be calculated on incentive payments in accordance with the organization's approved U.S. Federally negotiated indirect cost rate(s). Performance based incentive payments to employees as described in 2 CFR §200.430(f) should not be included in this section of the budget.
]
== Total Direct Costs (Line H on the Proposal Budget)
#instructions[
The total amount of direct costs requested in the budget, to include Lines A through G, must be entered on Line H.
]
== Indirect Costs (also known as Facilities and Administrative Costs (F&A) for Colleges and Universities) (Line I on the Proposal Budget)
#instructions[
Except where specifically identified in an NSF program solicitation, the applicable U.S. Federally negotiated indirect cost rate(s) must be used in computing indirect costs (F&A) for a proposal. Use of an indirect cost rate lower than the organization's current negotiated indirect cost rate is considered a violation of NSF's cost sharing policy. See section (xii) below. The amount for indirect costs must be calculated by applying the current negotiated indirect cost rate(s) to the approved base(s), and such amounts must be specified in the budget justification. Indirect cost recovery for IHEs is additionally restricted by 2 CFR §200, Appendix III, paragraph C.7. which specifies Federal agencies are required to use the negotiated F&A rate that is in effect at the time of the initial award throughout the life of the sponsored agreement. Additional information on the charging of indirect costs to an NSF award is available in Chapter X.D.
Domestic proposing organizations that do not have a current negotiated rate agreement with a cognizant Federal agency may choose to apply the de minimis rate of 10% to a base of modified total direct costs (MTDC) as authorized by 2 CFR §200.414(f). No supporting documentation is required for proposed rates of 10% or less of modified total direct costs. Organizations without a current negotiated indirect cost rate agreement and that wish to request indirect cost rate recovery above 10% should prepare an indirect cost proposal based on expenditures for its most recently ended fiscal year. Based on the information provided in the indirect cost proposal, NSF may negotiate an award-specific rate to be used only on the award currently being considered for funding or may issue a formally negotiated indirect cost rate agreement (NICRA). The contents and financial data included in indirect cost proposals vary according to the make-up of the proposing organization. Instructions for preparing an indirect cost rate proposal can be found on the NSF website. NSF formally negotiates indirect cost rates for the organizations for which NSF has rate cognizance. NSF does not negotiate rates for entities that do not yet hold direct NSF funding, nor does NSF negotiate rates for subrecipients.
Foreign organizations that do not have a current U.S. Federally negotiated indirect cost rate(s) are limited to a de minimis indirect cost rate recovery of 10% of modified total direct costs. Foreign recipients that have a U.S. Federally negotiated indirect cost rate(s) may recover indirect costs at the current negotiated rate.
]
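_Illustrative example (numbers invented; not part of the official PAPPG text):_ an organization without a negotiated rate agreement that budgets \$200,000 of modified total direct costs may apply the de minimis rate and recover 10% of \$200,000 = \$20,000 in indirect costs on Line I.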
== Total Direct and Indirect Costs (F&A) (Line J on the Proposal Budget)
#instructions[
The total amount of direct and indirect costs (F&A) (sum of Lines H and I) must be entered on Line J.
]
== Fees (Line K on the Proposal Budget)
#instructions[
This line is available for use only by the SBIR/STTR programs and Major Facilities programs when specified in the solicitation.
]
== Amount of This Request (Line L on the Proposal Budget)
#instructions[
The total amount of funds requested by the proposer.
]
== Cost Sharing (Line M on the Proposal Budget)
#instructions[
The National Science Board issued a report entitled "Investing in the Future: NSF Cost Sharing Policies for a Robust Federal Research Enterprise" (NSB 09-20, August 3, 2009), which contained eight recommendations for NSF regarding cost sharing. In implementation of the Board's recommendation, NSF's guidance[23] is as follows:
*Voluntary Committed and Uncommitted Cost Sharing*
As stipulated in 2 CFR §200.1, “Voluntary committed cost sharing means cost sharing specifically pledged on a voluntary basis in the proposal's budget or the Federal award on the part of the non-Federal entity and that becomes a binding requirement of Federal award.” As such, to be considered voluntary committed cost sharing, the amount must appear on the NSF proposal budget and be specifically identified in the approved NSF budget.[24] Unless required by NSF (see the section on Mandatory Cost Sharing below), inclusion of voluntary committed cost sharing is prohibited and Line M on the proposal budget will not be available for use by the proposer. NSF Program Officers are not authorized to impose or encourage mandatory cost sharing unless such requirements are explicitly included in the program solicitation.
In order for NSF, and its reviewers, to assess the scope of a proposed project, all organizational resources necessary for, and available to, a project must be described in the Facilities, Equipment and Other Resources section of the proposal (see Chapter II.D.2.g for further information). While not required by NSF, the recipient may, at its own discretion, continue to contribute voluntary uncommitted cost sharing to NSF-sponsored projects. As noted above, however, these resources are not auditable by NSF and should not be included in the proposal budget or budget justification.
*Mandatory Cost Sharing*
Mandatory cost sharing will only be required for NSF programs when explicitly authorized by the NSF Director, the NSB, or legislation. A complete listing of NSF programs that require cost sharing is available on the NSF website. In these programs, cost sharing requirements will be clearly identified in the solicitation and must be included on Line M of the proposal budget. For purposes of budget preparation, the cumulative cost sharing amount must be entered on Line M of the first year's budget. Should an award be made, the organization's cost sharing commitment, as specified on the first year's approved budget, must be met prior to the award end date.
Such cost sharing will be considered as an eligibility, rather than a review criterion. Proposers are advised not to exceed the mandatory cost sharing level or amount specified in the solicitation.[25]
When mandatory cost sharing is included on Line M, and accepted by the Foundation, the commitment of funds becomes legally binding and is subject to audit. When applicable, the estimated value of any in-kind contributions also should be included on Line M. An explanation of the source, nature, amount, and availability of any proposed cost sharing must be provided in the budget justification[26]. It should be noted that contributions derived from other Federal funds or counted as cost sharing toward projects of another Federal agency must not be counted towards meeting the specific cost sharing requirements of the NSF award.
Failure to provide the level of cost sharing required by the NSF solicitation and reflected in the NSF award budget may result in termination of the NSF award, disallowance of award costs and/or refund of award funds to NSF by the recipient.
]
== Allowable Costs
#instructions[
2 CFR §200, Subpart E provides comprehensive information regarding costs allowable under Federal awards.
]
= Current and Pending Support (no page limit)
#instructions[
This section of the proposal calls for required information on all current and pending support for ongoing projects and proposals, including this project, and any subsequent funding in the case of continuing grants. All current project support from whatever source (e.g., Federal, State, local or foreign government agencies, public or private foundations, industrial or other commercial organizations, or internal funds allocated toward specific projects) must be listed. The proposed project and all other projects or activities requiring a portion of time of the PI and other senior personnel must be included, even if they receive no salary support from the project(s). The total award amount for the entire award period covered (including indirect costs) must be shown as well as the number of person-months per year to be devoted to the project, regardless of source of support.
Similar information must be provided for all proposals already submitted or submitted concurrently to other possible sponsors, including NSF. Concurrent submission of a proposal to other organizations will not prejudice its review by NSF. Note, however, that if a proposal (or a duplicate of a proposal) is submitted to more than one division within NSF, review costs will be charged to the first unit receiving the proposal.
Again, the proposal must contain all the information requested by the Current and Pending Support form. The following information must be provided for this project and for any ongoing projects or proposals in which the PI and other senior personnel involved with this proposal are key personnel:
]
= Facilities, Equipment and Other Resources (no page limit)
#instructions[
This section of the proposal is used to assess the adequacy of the resources available to perform the effort proposed to satisfy both the Intellectual Merit and Broader Impacts review criteria. Proposers should describe only those resources that are directly applicable. Proposers should include an aggregated description of the internal and external resources (both physical and personnel) that the organization and its collaborators, and subawardees will provide to the project, should it be funded. Such information must be provided in this section, in lieu of other parts of the proposal (e.g., Budget Justification, Project Description). The description should be narrative in nature and must not include any quantifiable financial information. Reviewers will evaluate the information during the merit review process and the cognizant NSF Program Officer will review it for programmatic and technical sufficiency.
Although these resources are not considered voluntary committed cost sharing as defined in 2 CFR §200.1, the Foundation does expect that the resources identified in the Facilities, Equipment and Other Resources section will be provided, or made available, should the proposal be funded. Chapter VII.B.1 specifies procedures for use by the recipient when there are postaward changes to objectives, scope, or methods/procedures.
]
= Senior Personnel and Other Personnel
== Biographical Sketch(es) (2-page limit per person)
#instructions[
See #link("https://new.nsf.gov/policies/pappg/23-1/ch-2-proposal-preparation#2D2h")[NSF guidelines] for information on formatting
]
=== Senior Personnel
=== Other Personnel
=== Equipment Proposals
== Current and Pending Support
#instructions[
See #link("https://new.nsf.gov/policies/pappg/23-1/ch-2-proposal-preparation#2D2i")[NSF guidelines] for information on formatting
]
== Collaborators & Other Affiliations Information
= Special Information and Supplementary Documentation
#instructions[
Except as specified below, Special Information and Supplementary Documentation must be included as part of the Project Description (or part of the Budget Justification), if it is relevant to determining the quality of the proposed work. Information submitted in the following areas is not considered part of the 15-page Project Description limitation. This Special Information and Supplementary Documentation section also is not considered an appendix. Specific guidance on the need for additional documentation may be obtained from the organization’s SPO or in the references cited below.
]
== Postdoctoral Researcher Mentoring Plan
#instructions[
Each proposal[38] that requests funding to support postdoctoral researchers[39] must upload under "Mentoring Plan" in the supplementary documentation section of Research.gov, a description of the mentoring activities that will be provided for such individuals. In no more than one page, the mentoring plan must describe the mentoring that will be provided to all postdoctoral researchers supported by the project, regardless of whether they reside at the submitting organization, any subrecipient organization, or at any organization participating in a simultaneously submitted collaborative proposal. Proposers are advised that the mentoring plan must not be used to circumvent the 15-page Project Description limitation. See Chapter II.E.3 for additional information on collaborative proposals. Mentoring activities provided to postdoctoral researchers supported on the project will be evaluated under the Broader Impacts review criterion.
Examples of mentoring activities include but are not limited to: career counseling; training in preparation of proposals, publications and presentations; guidance on ways to improve teaching and mentoring skills; guidance on how to effectively collaborate with researchers from diverse backgrounds and disciplinary areas; and training in responsible professional practices.
]
== Plans for Data Management and Sharing of the Products of Research
#instructions[
Proposals must include a document of no more than two pages uploaded under “Data Management Plan” in the supplementary documentation section of Research.gov. This supplementary document should describe how the proposal will conform to NSF policy on the dissemination and sharing of research results (see Chapter XI.D.4), and may include:
- the types of data, samples, physical collections, software, curriculum materials, and other materials to be produced in the course of the project;
- the standards to be used for data and metadata format and content (where existing standards are absent or deemed inadequate, this should be documented along with any proposed solutions or remedies);
- policies for access and sharing including provisions for appropriate protection of privacy, confidentiality, security, intellectual property, or other rights or requirements;
- policies and provisions for re-use, re-distribution, and the production of derivatives; and
- plans for archiving data, samples, and other research products, and for preservation of access to them.
Data management requirements and plans specific to the Directorate, Office, Division, Program, or other NSF unit, relevant to a proposal are available on the NSF website. If guidance specific to the program is not available, then the requirements established in this section apply.
Simultaneously submitted collaborative proposals and proposals that include subawards are a single unified project and should include only one supplemental combined Data Management Plan, regardless of the number of non-lead collaborative proposals or subawards included. In such collaborative proposals, the data management plan should discuss the relevant data issues in the context of the collaboration.
A valid Data Management Plan may include only the statement that no detailed plan is needed, as long as the statement is accompanied by a clear justification. Proposers who feel that the plan cannot fit within the limit of two pages may use part of the 15-page Project Description for additional data management information. Proposers are advised that the Data Management Plan must not be used to circumvent the 15-page Project Description limitation. The Data Management Plan will be reviewed as an integral part of the proposal, considered under Intellectual Merit or Broader Impacts or both, as appropriate for the scientific community of relevance.
]
== Rationale for Performance of All or Part of the Project Off-campus or Away from Organizational Headquarters
== Documentation of Collaborative Arrangements of Significance to the Proposal through Letters of Collaboration
#instructions[
(See Chapter II.D.2.d.(iv).) Letters of collaboration should be limited to stating the intent to collaborate and should not contain endorsements or evaluation of the proposed project. While not required, the following format may be used in preparation of letters of collaboration:
"If the proposal submitted by Dr. [insert the full name of the Principal Investigator] entitled [insert the proposal title] is selected for funding by NSF, it is my intent to collaborate and/or commit resources as detailed in the Project Description or the Facilities, Equipment and Other Resources section of the proposal."
While letters of collaboration are permitted, unless required by a specific program solicitation, letters of support should not be submitted as they are not a standard component of an NSF proposal. A letter of support is typically from a key stakeholder such as an organization, collaborator, or Congressional Representative, and is used to convey a sense of enthusiasm for the project and/or to highlight the qualifications of the PI or co-PI. A letter of support submitted in response to a program solicitation requirement must be unique to the specific proposal submitted and cannot be altered without the author’s explicit prior approval. Proposals that contain letters of support not authorized by the program solicitation may be returned without review.
]
== Federal Environmental Statutes
#instructions[
In order for NSF to comply with Federal Environmental Statutes (including, but not limited to, the National Environmental Policy Act (42 USC §4321, et seq.), the National Historic Preservation Act (54 USC §306108 [previously codified at 16 USC §470, et seq.]), and the Endangered Species Act (16 USC §1531, et seq.)), the proposer may be requested to submit supplemental post-proposal submission information to NSF in order that a reasonable and accurate assessment of environmental impacts by NSF may be made. Supplemental information also may be requested to assist NSF in complying with Executive Orders and Presidential memoranda directing Federal agencies to carry out their obligations to engage in Tribal consultation when Tribal interests may be impacted. The types of information that may be requested are shown in the Organization Environmental Impacts Checklist.
]
== Antarctic Proposals
#instructions[
Antarctic proposals to any NSF program require “Logistical Requirements and Field Plan” supplementary documents to be submitted with the proposal. See “proposal with fieldwork” in Chapter V.A of the Antarctic Research solicitation. Special budgetary considerations also apply. See also Chapter V.B of the Antarctic Research solicitation.
]
== Research in a Location Designated, or Eligible to be Designated, a Registered Historic Place
#instructions[
(See Chapter XI.J). Where applicable, the box for "Historic Places" must be checked on the Cover Sheet.
]
== Research Involving Field Experiments with Genetically Engineered Organisms
== Documentation Regarding Research Involving the Use of Human Subjects, Hazardous Materials, Live Vertebrate Animals, or Endangered Species
== Special Components in New Proposals or in Requests for Supplemental Funding
#instructions[
This includes, for example, Facilitation Awards for Scientists and Engineers with Disabilities (FASED), Research Opportunity Awards (ROAs), Research Experiences for Undergraduates (REUs), Facilitating Research at Primarily Undergraduate Institutions (RUIs), Research Experiences for Veterans (REV), and Research Experiences for Teachers, and Research Experiences for Graduates. See Chapter II.F.7 for information on FASED, and, for the other programs identified, consult the relevant solicitation or Dear Colleague Letter.
In addition, the supplementary documentation section should alert NSF officials to unusual circumstances that require special handling, including, for example, proprietary or other privileged information in the proposal, matters affecting individual privacy, required intergovernmental review under E.O. 12372 (Intergovernmental Review of Federal Programs) for activities that directly affect State or local governments, or possible national security implications.
] |
https://github.com/grnin/Zusammenfassungen | https://raw.githubusercontent.com/grnin/Zusammenfassungen/main/AIAp/AIAp_Zusammenfassung_FS24_NG_JT.typ | typst | // Compiled with Typst 0.11.1
#import "../template_zusammenf.typ": *
#import "@preview/wrap-it:0.1.0": wrap-content
#show: project.with(
authors: ("<NAME>", "<NAME>"),
fach: "AIAp",
fach-long: "AI Applications",
semester: "FS24",
language: "en",
font-size: 10pt,
tableofcontents: (enabled: true, depth: 3, columns: 2),
)
= Artificial Neural Network (ANN)
An ANN is a machine learning model inspired by the structure and function of animal brains.
It consists of _neurons_ organized in _layers_. Neurons receive, process and send signals to other
neurons in the next layer. The output is computed by some non-linear function of the sum of its
inputs, the _activation function_. The strength of the signal at each connection is determined
by a _weight_, which adjusts during the learning process. Signals travel from the _input layer_
to the _output layer_, while possibly passing through multiple intermediate layers, the _hidden
layers_.
#image("img/aiap_1.png", width: 75%)
#tcolor("rot", "3")-dimensional input $x_n$, hidden layer with #tcolor("grün", "2") neurons,
output layer with #tcolor("orange", "1") neuron. Each input is multiplied by its weight $W_n$;
the weighted sum is added to a bias $b$ before it is passed to the corresponding neuron.\
_Trainable parameters:_ $fxcolor("grün", 2) dot (fxcolor("rot", 3) "weights" + 1 "bias") +
fxcolor("orange", 1) dot (fxcolor("grün", 2) "weights" + 1 "bias") = 11 "parameters"$
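A minimal NumPy sketch of this network #hinweis[(weights are random placeholders, `tanh` is chosen arbitrarily as activation function)] reproduces the forward pass and the parameter count:
```python
import numpy as np

# Shapes match the 3-2-1 network above; the values are random placeholders.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(2, 3)), np.zeros(2)  # hidden layer: 2*3 weights + 2 biases
W2, b2 = rng.normal(size=(1, 2)), np.zeros(1)  # output layer: 1*2 weights + 1 bias

def forward(x):
    h = np.tanh(W1 @ x + b1)  # weighted sum plus bias, then the activation function
    return W2 @ h + b2

print(sum(p.size for p in (W1, b1, W2, b2)))  # -> 11 trainable parameters
print(forward(np.array([1.0, 0.5, -0.2])))    # output of the single output neuron
```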
= Large Language Model (LLM)
An LLM is a _huge ANN_ with a _particular structure_ that is much more complicated.
An LLM generates text _sequentially_ #hinweis[(word-by-word or token-by-token)].
The output is not a word, but a _probability distribution over a dictionary_.
The next word is sampled according to this distribution and is the most "plausible" according to
the model. _Auto-regressive:_ each output is added to the input context.
So the generation of each next word takes into account the entire sequence generated so far.\
The _knowledge of an LLM_ is represented in the _weights_ and the _network structure_.
Data is not stored explicitly.
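A toy version of this auto-regressive loop #hinweis[(the distribution function is a random stand-in for the actual network)]:
```python
import numpy as np

vocab = ["the", "cat", "sat", "on", "mat", "."]

def next_token_distribution(context):
    # Stand-in for the network: returns a probability distribution over the dictionary.
    logits = np.random.default_rng(len(context)).normal(size=len(vocab))
    e = np.exp(logits - logits.max())
    return e / e.sum()  # softmax turns raw scores into probabilities

rng = np.random.default_rng(42)
context = ["the"]
for _ in range(5):
    p = next_token_distribution(context)
    token = rng.choice(vocab, p=p)  # sample the next token from the distribution
    context.append(str(token))     # auto-regressive: the output extends the context
print(" ".join(context))
```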
== Prompt Engineering
An LLM performs better when it has access to _more context_. A general, open-ended prompt will
produce a worse output than one that provides information about the _starting point_ and the
_desired result_, often divided in the categories
_Role_ #hinweis[(who the LLM should be, i.e a cook)],
_Context_ #hinweis[(what it has access to, i.e. what ingredients)],
_Constraints_ #hinweis[(what it should pay attention to i.e. a healthy, low fat diet)] and
_Instruction_ #hinweis[(what output it should produce, i.e. a recipe)].
This is called _prompt engineering_. By providing information not present at training time,
the LLM will generate a _more specific/reliable output_. An LLM's process of responding to
prompts using provided data or context is called _augmented generation_.
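For illustration, a prompt following this pattern might look as follows #hinweis[(wording invented for the cook example above)]:
```python
# Invented example prompt, structured into the four categories.
prompt = (
    "Role: You are an experienced cook.\n"
    "Context: Available ingredients: rice, broccoli, chicken breast, soy sauce.\n"
    "Constraints: The dish must fit a healthy, low-fat diet and take under 30 minutes.\n"
    "Instruction: Write a step-by-step recipe for two portions."
)
```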
*Retrieval Augmented Generation (RAG):*
Let the LLM _access a data source_ by uploading files or by writing prompts that lead the LLM to
send a query to a search engine. Also called _context-aware LLM_. Use cases: searching internal,
new #hinweis[(post-training)] or domain-specific data. A popular RAG framework is LlamaIndex.
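A minimal RAG sketch #hinweis[(assumptions: `embed()` stands in for a real embedding model and `llm()` for the generation call; neither is a real API)]:
```python
import numpy as np

docs = ["Internal guideline A: ...", "News from after training: ...", "Domain spec: ..."]

def embed(text):
    # Random stand-in: a real system would call an embedding model here.
    v = np.random.default_rng(abs(hash(text)) % 2**32).normal(size=16)
    return v / np.linalg.norm(v)

def retrieve(query, k=2):
    scores = [embed(d) @ embed(query) for d in docs]  # cosine similarity of unit vectors
    return [docs[i] for i in np.argsort(scores)[-k:]]

question = "What does guideline A say?"
prompt = ("Answer using only this context:\n"
          + "\n".join(retrieve(question))
          + "\n\nQuestion: " + question)
# llm(prompt)  # hypothetical generation call: the retrieved text augments the prompt
```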
== Open Source LLMs
Many open source models are available. Two popular models are _LLaMA2_ by Meta and _Mistral_.
Models are basically defined by two files:
_model.py_ #hinweis[(small code file that defines the structure of the LLM and runs it)] and
_weights.pkl_ #hinweis[(potentially huge file with the parameters. Sometimes called a "checkpoint")].
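Conceptually, the two-file split looks like this #hinweis[(invented placeholder code, not the actual LLaMA2/Mistral implementation)]:
```python
import pickle

# model.py part: a small class defining the structure and how to run it.
class Model:
    def __init__(self, params):
        self.params = params  # the "knowledge" lives in these weights

    def __call__(self, tokens):
        # Placeholder forward pass; a real LLM runs tokens through many layers.
        return sum(self.params.get(t, 0.0) for t in tokens)

# weights.pkl part: a (potentially huge) file holding only the parameters.
with open("weights.pkl", "wb") as f:
    pickle.dump({"hello": 0.3, "world": 0.7}, f)
with open("weights.pkl", "rb") as f:
    model = Model(pickle.load(f))
print(model(["hello", "world"]))  # -> 1.0
```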
== Working with your own LLM
Enhances _security_, protects private/internal data.
Depending on the use case: _lower costs_, or the ability to develop a _domain-expert model_ on your own data.
You can do this by _fine-tuning_ a pre-trained model #hinweis[(expensive training, new data
requires new fine-tuning)] or with _RAG_.
== Multimodal AI
_Modality:_ The way something is expressed or perceived #hinweis[(human senses, images, videos, sound)].
_Multimodal:_ involving several ways of operating or dealing with something.
A multimodal AI integrates different modes of data. Internally, different modes are mapped onto
similar representations #hinweis[(image, text, voice representing a tree)].
#pagebreak()
= Technology Ethics
*Problems of AI:*
Energy consumption, Few players controlling huge market, Conflict of Interest, Deskilling of moral decisions\
*DIKW Pyramid:*
_Data_ #hinweis[(Discrete, objective facts about an event)],
_Information_ #hinweis[(Data with analyzed relationships and connections)],
_Knowledge_ #hinweis[(Contextualized Information)],
_Wisdom_ #hinweis[(Knowledge applied with judgement; the top of the pyramid)].
These four layers lead to real-world decision making.
== Technology Ethics
_Application of ethical thinking to the practical concerns of technology._
Take actions that are voluntarily constrained by our judgement -- our ethics.
=== 16 Challenges and Opportunities
#grid(
columns: (1fr, 1fr, 1fr),
gutter: 1em,
[
+ Technical Safety
+ Transparency and Privacy
+ Beneficial Use & Capacity for Good
+ Malicious Use & Capacity for Evil
+ Bias in Data, Training Sets, etc.
+ Unemployment / Lack of Purpose
],
[
7. Growing Inequality
+ Environmental Effects
+ Automating Ethics
+ Moral Deskilling & Debility
+ AI Consciousness, "Robot Rights"
],
[
12. AGI and Superintelligence
+ Dependency on AI
+ AI-powered Addiction
+ Isolation and Loneliness
+ Effects on the Human Spirit
],
)
== 6 Ethical Lenses
_The Rights Lens_ #hinweis[(Focuses on protecting individual moral rights based on human dignity and autonomy)],
_The Justice Lens_ #hinweis[(Ensures fair and equal treatment based on merit or need, addressing various types of justice.)],
_The Utilitarian Lens_ #hinweis[(Evaluates actions by their consequences, aiming for the greatest good for the most people)],
_The Common Good Lens_ #hinweis[(Promotes actions that contribute to societal welfare and the well-being of all community members)],
_The Virtue Lens_ #hinweis[(Emphasizes actions aligned with ideal virtues fostering personal development and character)] and
_The Care Ethics Lens_ #hinweis[(Prioritizes empathy and compassion in relationships, considering stakeholders' feelings and contexts)].
/*
#grid(
columns: (1fr, 1fr),
gutter: 1em,
[
=== The Rights Lens
The rights lens focuses on _protecting and respecting the moral rights of individuals_,
which are grounded in _human dignity and autonomy_. It emphasizes treating people as ends in
themselves, not merely as means to an end. _Moral rights_ include making one's own life
choices, being told the truth, and having privacy.
=== The Justice Lens
The justice lens is about giving each person their due based on _fair and equal treatment_,
which can vary by merit or need. Different types of justice include social, distributive,
corrective, retributive, and restorative justice, addressing various contexts of _fairness_.
It aims to ensure everyone receives what they are rightfully entitled to.
=== The Utilitarian Lens
Utilitarianism evaluates the ethicality of actions based on their _consequences_, aiming to
produce the _greatest balance of good over harm_ for the _most people_. It involves careful
analysis of the impacts on all stakeholders, such as customers, employees, and the community.
This lens often uses _cost/benefit analysis_ to guide decisions.
],
[
=== The Common Good Lens
The common good lens asserts that ethical actions contribute to the _welfare and interlocking
relationships of society_. It stresses mutual concern for shared interests and the importance
of common conditions like clean air, public health, and education. This approach values
_community life and the well-being of all members_.
=== The Virtue Lens
Virtue ethics emphasizes actions consistent with ideal virtues that foster human development,
such as _honesty, courage, and integrity_. It asks what kind of person one will become by
performing certain actions, focusing on _acting according to the highest potential_ of one's
character. This lens is about _aligning actions with virtuous traits_.
=== The Care Ethics Lens
Care ethics prioritizes _relationships_ and responding to individuals' specific circumstances
with _empathy_ and _compassion_. It values interdependence and seeks to understand the
_feelings and viewpoints_ of all stakeholders. This lens advocates for resolutions that
consider the relational context and societal duties, promoting a holistic approach to ethical conflicts.
],
)
*/
*Problems in using the lenses:*
We may _not agree_ on the _content_ of these lenses; for example, not everyone has the same set
of civil rights. _Different lenses_ may lead to _different answers_ to the question
"What is ethical?".\
*Framework for Ethical Decision Making:*
_Identify the Ethical Issues:_ #hinweis[(Determine if the decision could harm or unfairly
benefit someone or a group)],
_Get the Facts_ #hinweis[(relevant information, stakeholders, explore possible actions)],
_Evaluate Alternative Actions_ #hinweis[(Use the lenses for this)],
_Choose an option for action and test it_ #hinweis[(select best option, plan carefully)] and finally
_Implement your decision and reflect on the outcome_ #hinweis[(review results, identify
follow-up actions)]
/*
== Framework for Ethical Decision Making
+ _Identify the Ethical Issues:_ Determine if the decision could harm or unfairly benefit
someone or a group, Consider if the issue is beyond legal or efficiency concerns, involving moral choices.
+ _Get the Facts:_ Gather all relevant information, and identify any unknowns.
Recognize the stakeholders and assess whose concerns are most important.
Explore all possible actions and consult with relevant parties to find creative options.
+ _Evaluate Alternative Actions:_ Use the lenses.
*Rights Lens:* Which option respects everyone's rights?
*Justice Lens:* Which option is the fairest?
*Utilitarian Lens:* Which option produces the most good and least harm?
*Common Good Lens:* Which option benefits the community as a whole?
*Virtue Lens:* Which option aligns with the virtues I want to embody?
*Care Ethics Lens:* Which option considers relationships and stakeholder concerns?
+ _Choose an Option for Action and Test it:_ Select the best option after thorough evaluation.
Consider the perspective of a respected person or public audience on your choice.
Plan the implementation carefully, ensuring stakeholder concerns are addressed.
+ _Implement Your Decision and Reflect on the Outcome:_ Execute the decision and review its
results. Reflect on what was learned and identify any necessary follow-up actions.
*/
= Image classification
Image classification is one of the main tasks of _supervised learning_: the model is trained on
data consisting of images, each labeled with its content.
The trained model can then predict the most likely label when given an image.
It requires 4 things: _Data_, a _loss function_ #hinweis[(typically a probabilistic loss like
`SparseCategoricalCrossentropy`)], a _model_ and an _optimizer_ #hinweis[(`Adam`, an improved
version of Stochastic Gradient Descent)]
== Example Architecture
- _Input:_ The pixel values of a 2D image are "flattened" into a column vector
  #hinweis[($n times 1$ vector)] and fed to the hidden layers of an ANN.
- _Output:_ The ANN has one output neuron per class. Each neuron "votes" for its class.
The more active a neuron is, the more likely the input belongs to this class.
#image("img/aiap_3.png", width: 70%)
== Terminology
- _The Convolution Operation:_
Involves applying a filter (kernel) to input data to create a feature map,
detecting patterns like edges in images.
- _Logits:_
A vector of non-normalized predictions that a classification model generates.
Typically passed to a softmax function to generate percentages.
- _Kernel, Filter, Receptive Field:_
Kernels are small matrices used to apply filters across the input.
The receptive field refers to the area of the input that influences a particular feature map value.
- _Feature Map:_
Result of applying filters to the input, highlighting specific features such as edges or textures.
- _Tensor, Rank, Dimension, Size:_
Tensors are multi-dimensional arrays. Rank is the number of dimensions,
size refers to the total number of elements, and dimensions specify the shape.
- _Fully Connected Layer (Dense Layer):_
Every neuron in one layer connects to every neuron in the next,
crucial for decision-making based on extracted features.
== Implementation
We do not implement a complex ANN from scratch. Instead we use _TensorFlow_,
an open source platform for machine learning, developed and maintained by Google.
We do not use TF directly; instead we use _Keras_, which provides a higher-level Python API that
hides the complexity of TF.
=== A fully connected ANN with 1 hidden layer
Layers extract representations from the data fed into them.
```py
model = tf.keras.Sequential([
tf.keras.layers.Flatten(input_shape=(28,28)),
  tf.keras.layers.Dense(128, activation='relu'), # Rectified Linear Unit sets negative values to 0
tf.keras.layers.Dense(10)
])
```
- _First Layer: `tf.keras.layers.Flatten`:_
Transforms images from 2D (28x28 pixels) to 1D (784 pixels),
unstacks rows of pixels and lines them up. No parameters, only reformats data.
- _Subsequent Layers: `tf.keras.layers.Dense`:_
*First Dense Layer*: Contains 128 fully connected neurons.\
*Second Dense Layer*: Returns a logits array with a length of 10. Each node indicates the
score for one of the 10 classes.
==== Compiling
```py
model.compile(optimizer='adam',
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=['accuracy'])
```
- _Optimizer:_ This is how the model is updated based on the data it sees and its loss function.
- _Loss function:_ This measures how accurate the model is during training.
You want to minimize this function to "steer" the model in the right direction.
- _Metrics:_ Used to monitor the training and testing steps.
The example uses accuracy, the fraction of the images that are correctly classified.
==== Training
```py
model.fit(train_images, train_labels, epochs=10)
```
Training the neural network model requires the following steps:
+ _Feed the training data to the model._
In this example, the training data is in the `train_images` and `train_labels` arrays.
+ _The model learns_ to associate images and labels.
+ You ask the model to _make predictions_ about a test set —
in this example, the `test_images` array.
+ _Verify_ that the predictions match the labels from the `test_labels` array.
==== Evaluation
```py test_loss, test_acc = model.evaluate(test_images, test_labels, verbose=2)```\
Compare how the model performs on the test dataset.
==== Make predictions
With the model trained, you can use it to _make predictions_ about some images.
Attach a _softmax layer_ to convert the model's linear outputs (logits) to probabilities,
which should be easier to interpret.
```py
probability_model = tf.keras.Sequential([model, tf.keras.layers.Softmax()]);
predictions = probability_model.predict(test_images)
```
== Tensor Terminology
- _Tensor:_ A multi-dimensional array with a uniform data type.
- _Rank:_ The number of dimensions #hinweis[(axes)] in a tensor.
- _Dimension:_ A particular dimension of a tensor, such as rows or columns.
- _Shape:_ The size of each dimension in a tensor.
- _Size:_ The total number of elements in a tensor, product of the shape's elements.
- _Indexing:_ Accessing tensor elements using their indices.
- _Reshaping:_ Changing the shape of a tensor without altering its data.
- _Broadcasting:_ Extending tensors of different shapes for arithmetic operations #hinweis[(scalar product)].
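These terms can be illustrated with a few TensorFlow one-liners #hinweis[(a minimal sketch,
values chosen arbitrarily)]:
```py
import tensorflow as tf

t = tf.constant([[1, 2, 3], [4, 5, 6]])  # rank 2 tensor, shape (2, 3), size 6
tf.rank(t)             # rank: 2
t.shape                # shape: (2, 3)
tf.size(t)             # size: 6
t[0, 2]                # indexing: 3
tf.reshape(t, (3, 2))  # reshaping: same 6 elements, new shape
t * 2                  # broadcasting: the scalar is "stretched" to shape (2, 3)
```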
= Convolutional Neural Network (CNN)
A CNN is an ANN with _multiple convolutional layers stacked_
(and usually combined with fully connected layers). It is used to process and classify images.\
To flatten an image into a long vector is not optimal for processing, because
_neighborhood information_ is lost. Instead, we can apply something called _convolution_.\
In theory, a fully connected network could also learn to classify images and outperform a CNN.
In practice it rarely does: a _dense network_ has a _lower bias but higher variance_.
The pre-wired structure of the convolution imposes a _bias_ on the model.
This _helps_ the model to learn _faster_. A model that integrates some domain knowledge into
its pre-wired structure requires less data to be trained. This form of bias is called _inductive bias_.
== Feature Detector
For example, one part of the brain detects faces, another objects, another shapes and another lines.
Together, these pieces of information form an image.
=== Mathematical Model
- _The Input:_ An image #hinweis[(width #sym.times height (#sym.times 3 if RGB))] and
a filter #hinweis[(kernel)], which consists of an $m times n$ matrix in the simplest case
#hinweis[(1 channel, gray scale)] or an $m times n times d$ "stack of matrices" where the
depth of the kernel is equal to the number of input channels #hinweis[(3 for RGB)].
- _The operation:_ Convolution of the image with the kernel. The kernel "slides" over the image
and on each slide, a matrix multiplication is performed. This can be done in parallel.
  Multiple kernels are trained to extract different features.
  For multi-channel inputs, each kernel has as many channels as the input
  #hinweis[(the per-channel results are summed into a single feature map)].
- _The output:_ This function returns a _feature map_. This is a "map" #hinweis[(like a gray
scale image)] that shows the locations where a feature is present. One convolution produces
one feature map. Even if the input and the filter have multiple channels, the output of the
convolution has one channel.
== Kernels
Kernels #hinweis[(or filters)] are _small matrices_. They slide across the input data and
perform element-wise multiplication. This _produces a feature map_ that _highlights specific
patterns_ or features #hinweis[(like vertical edges)]. The kernel weights are initialized with
random data. In the optimization process, they are adjusted so that the loss is minimized.
The number of pixels the kernel is shifted by with each slide is called the _stride_.
It is usually the same in both dimensions, but can differ. Edges, corners and strides that
don't cover the entire input can be handled with _padding_: _"valid"_ #hinweis[(No padding)] or
_"same"_ #hinweis[(evenly padded with zeroes horizontally & vertically)].
== Feeding Feature Maps into the next Conv-Layer
The output of a convolutional layer looks like a _new image_. We can _apply multiple kernels_
to the same input. This produces _multiple feature maps_ which can be _stacked_ #hinweis[(like
an RGB image is stacked from 3 Channels)].
This output can be fed into the next convolutional layer to _find more complex patterns_.
== Pooling
Instead of directly feeding the next layer, a _pooling layer_ is usually used.
In TF, the default pool size is 2x2. _Max pooling_ only keeps the largest value in the pool.
_Average Pooling_ keeps the average value within the pool. Comes with a loss of information
#hinweis[(loses details, but keeps important features)]. These layers don't have trainable
parameters. _Advantages:_ Reduces the amount of data to process by the pool size
#hinweis[(2x2 pool #sym.arrow 4x less data)],
Field-of-view increases #hinweis[(pooled matrix contains information from a wider input)].
== CNN in Keras
```py
model = Sequential([
layers.Rescaling(1./255, input_shape=(img_height, img_width, 3)), # Normalize pixel values
layers.Conv2D(16, 3, padding='same', activation='relu'), # 16 neurons, 3x3 kernel, ReLU
layers.MaxPooling2D(), # Max pooling layer to reduce spatial dimensions
layers.Conv2D(32, 3, padding='same', activation='relu'), # 32 neurons, 3x3 kernel, ReLU
layers.MaxPooling2D(), # Max pooling layer to reduce spatial dimensions
layers.Conv2D(64, 3, padding='same', activation='relu'), # 64 neurons, 3x3 kernel, ReLU
layers.MaxPooling2D(), # Max pooling layer to reduce spatial dimensions
layers.Flatten(), # Flatten the tensor to a 1D vector
layers.Dense(128, activation='relu'), # Fully connected layer with 128 neurons, ReLU activation
layers.Dense(num_classes) # Output layer with units equal to the number of classes
])
```
#pagebreak()
== Calculation Example
#image("img/aiap_4.png", width: 80%)
#wrap-content(
image("img/aiap_5.png"),
align: top + right,
columns: (65%, 35%),
)[
=== Convolve the input image with kernel 1
Horizontal stride = 1, vertical stride = 2, Padding: none (valid), Pool: 2x2\
#hinweis[*Stride* by how many pixels the kernel is shifted.]\
$23 dot 1 + 255 dot 0 + 21 dot 0 + 34 dot 1 = 57$ #hinweis[(Shift kernel 1 to the right)]\
$255 dot 1 + 40 dot 0 + 34 dot 0 + 200 dot 1 = 455$\
... #hinweis[(Shift 2 down when kernel has reached the right edge)]\
_Size of the resulting feature map:_ $2 times 4$\
If a kernel has _multiple channels_, the channels need to be calculated separately and
then added together with the bias to get the final value.\
*Example:* Channel 1 returns $158$, Channel 2 returns $hyph.minus 14$, Channel 3 $653$.
Bias is $1$. $=> 158 + hyph.minus 14 + 653 + 1 = underline(798)$
]
#wrap-content(
image("img/aiap_6.png"),
align: top + right,
columns: (80%, 20%),
)[
=== Apply the max pooling operation
Using a $2 times 2$ kernel-size, stride = kernel-size (default).
This means get the _max value_ of every $2 times 2$ square.\
_Size of the feature map after pooling:_ $1 times 2$
]
#wrap-content(
image("img/aiap_7.png"),
align: top + right,
columns: (80%, 20%),
)[
=== Shape of output of one Conv-Layer
After all Kernels are applied, what will be the shape of the output of one Conv-Layer with
these four kernels, followed by max pooling?
_Output tensor of rank=3 with shape= (1,2,4)_
]
#wrap-content(
image("img/aiap_8.png"),
align: top + right,
columns: (80%, 20%),
)[
=== Observations
- Applying a threshold _highlights the structure_ of the "image".
_The different kernels detect different structures_. For example, the "long diagonal"
#hinweis[(255, 200, 190, 240)] is captured by the first kernel.
The diagonal 190, 200 is *not* detected by kernel 2 because of the stride of 2.
- By applying _max pooling_ on these small feature maps, we _lose many details_.
- Kernel 3 is more specific than kernel 4. _Kernel 3 is a "corner detector"._
All other kernels, which have 0s, strongly respond to the block in the lower left corner.
]
=== Calculation of trainable parameters (degrees of freedom)
```py
model = models.Sequential()
model.add(layers.Conv2D(32, (3, 3), activation='relu', input_shape=(32, 32, 3)))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(64, (3, 3), strides=2, padding='same', activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Flatten())
model.add(layers.Dense(10))
```
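The per-layer output shapes and parameter counts derived in the following subsections can be
verified directly in Keras:
```py
model.summary()  # prints every layer's output shape and number of parameters
```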
==== Layer 1, Conv2D
_The number of weights/trainable parameters_ in this layer is calculated as\
$(fxcolor("grün", "kernel size") dot fxcolor("orange", "input channels") + fxcolor("rot", "bias"))
dot fxcolor("gelb", "amount of kernels in this layer") =>
(fxcolor("grün", (3 dot 3)) dot fxcolor("orange", 3) + fxcolor("rot", 1)) dot fxcolor("gelb", 32) = 896$\
#hinweis[Note that the number of parameters in a ConvLayer does *not* depend on
the width and height of the input]
_Output shape:_
$(fxcolor("grün", "input_width") - fxcolor("orange", "kernel_width")) / fxcolor("rot", "stride") + 1 =>
(fxcolor("grün", 32) - fxcolor("orange", 3)) / fxcolor("rot", 1) + 1 = 30$\
Without padding, we can apply the kernel $30$ times over a width of 32px
#hinweis[(same for height)]. Each of the $32$ kernels outputs one $30 times 30$ feature map.
Therefore, the output has the shape $underline(30 times 30 times 32)$.\
#hinweis[Default values are `strides=(1,1), padding="valid"`]
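The output-size formula can be wrapped in a small helper #hinweis[(a sketch; `padding` is the
number of zero rows/columns added per side, integer division mimics the flooring behaviour)]:
```py
def out_size(width, kernel, stride=1, padding=0):
    # (W + 2P - K) // S + 1
    return (width + 2 * padding - kernel) // stride + 1

out_size(32, 3)                       # 30 -> Layer 1 ('valid', stride 1)
out_size(30, 2, stride=2)             # 15 -> Layer 2 (max pooling)
out_size(15, 3, stride=2, padding=1)  # 8  -> Layer 3 ('same', stride 2)
```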
==== Layer 2, MaxPooling
MaxPooling Layers do not have weights and therefore no trainable parameters.\
_Output shape:_
$(fxcolor("grün", "input_width") - fxcolor("orange", "pool_width")) /
fxcolor("rot", "stride") + 1 =>
(fxcolor("grün", 30) - fxcolor("orange", 2)) / fxcolor("rot", 2) + 1 = underline(15)$,
shape is therefore $underline(15 times 15 times 32)$\
#hinweis[Default values are `strides=None, padding="valid"`.
When `strides=None`, it will default to `pool_size`.]
==== Layer 3, Conv2D
Number of _trainable parameters:_
$(fxcolor("grün", (3 dot 3)) dot fxcolor("orange", 32) + fxcolor("rot", 1)) dot
fxcolor("gelb", 64) = underline(18'496)$
_Output shape:_
Note the `padding='same'` hyper-parameter. This adds rows and columns of 0s to
the input, evenly to the left/right or up/down when `'same'` so that the output feature map
will have the _same_ dimensions as the input tensor.\
*Calculation:*
The input size is _increased by 1_ at the left and at the right:
#fxcolor("hellblau", $+(2 dot 1)$)\
$(fxcolor("grün", 15) + fxcolor("hellblau", 2 dot 1) - fxcolor("orange", 3)) \/
fxcolor("rot", 2) + 1 = underline(8)$.
So the resulting shape is $underline(8 times 8 times 64)$
==== Layer 4, MaxPooling
No trainable parameters.\
_Output shape:_
$(fxcolor("grün", 8) - fxcolor("orange", 2)) \/ fxcolor("rot", 2) + 1
= underline(4)$,
Shape is $underline(4 times 4 times 64)$\
==== Layer 5, Flatten
Does not have any trainable parameters.\
_Output shape:_
Flatten takes the $4 times 4 times 64$ input tensor and reshapes it into a
"flat" $1024 times 1$ vector.
==== Layer 6, Dense
_Trainable weights:_
Each of the $10$ neurons is fully connected, plus $1$ bias:
$10 dot (1024 + 1) = underline(10'250)$\
_Output shape:_
$10 times 1$ or simply $10$.
==== How many trainable parameters does the network have?
$896 + 18'496 + 10'250 = underline(29'642)$ trainable parameters.
==== What would happen if the input image is 2-times wider?
The number of trainable parameters in the ConvLayers does not change, but the output feature
maps are larger. The Dense Layer would have $approx 2 times$ more parameters.
= Deep learning architectures
A network is typically called a _deep neural network_ if it has at least 2 hidden layers.\
*AlexNet:* competed in the ImageNet Large Scale Visual Recognition Challenge in 2012 and
_massively outperformed_ its competition. The original paper's primary result was that
_using Dropout_ in the fully-connected layers was _very effective_. Even though the model was
_computationally expensive_, it was made feasible due to the utilization of GPUs during training.
#image("img/aiap_11.png")
#pagebreak()
== Softmax Function
Softmax _turns logits_ (numeric output of the last layer) _into probabilities_ by normalizing
each number so the entire output vector adds up to one. A softmax layer is often appended to
the last layer of a CNN. Softmax is _differentiable_ because we need gradient descent to train
the model.
$ S(y)_i = exp(y_i) / (sum_(j=1)^n exp(y_j)) $
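A direct NumPy translation of this formula #hinweis[(subtracting the maximum is a common
numerical-stability trick that does not change the result)]:
```py
import numpy as np

def softmax(y):
    e = np.exp(y - np.max(y))  # shift for numerical stability
    return e / e.sum()

softmax(np.array([2.0, 1.0, 0.1]))  # [0.659, 0.242, 0.099], sums to 1
```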
=== Sparse categorical crossentropy Loss
Sparse Categorical Cross Entropy is a loss function that is commonly used. It is an extension
of the Cross Entropy loss function that is used for binary classification problems.
Sparse categorical cross-entropy is used in cases where the output labels are given as
integer indices instead of one-hot vectors.
- _Forward Pass (Inference):_ Input passes through the network, the network produces logits
which are converted to probabilities. The true label is provided and the loss is calculated.
- _Backward Pass (Learning):_ Using gradient descent, the network updates its weights in
reverse order. The gradients of the loss with respect to each weight are computed.
The weights are adjusted to minimize the loss.
This _iterative process_ improves the _model's accuracy_ over time.
== Dropout Layer
A dropout layer is a regularization method used to _prevent overfitting_.
In the dropout layer, a random subset of the current neurons is dropped.
```py tf.keras.layers.Dropout(0.1)```\
The data is often better interpretable _with_ dropout layers in between because it _reduces
noise_ and allows rapid testing of different "versions" of a model where some neurons are
missing. This forces the network to break up situations where neurons have adapted to mistakes
from previous layers and thus makes the model _more robust_ and _increases its generalization_.
= Autoencoder
#wrap-content(
image("img/aiap_9.png"),
align: top + right,
columns: (65%, 35%),
)[
An Autoencoder is an unsupervised artificial neural network that learns how to efficiently
compress and encode data, then learns how to reconstruct the data back _from_ the reduced
encoded representation _to_ a representation that is as close to the original input as
possible. In other words, it _reproduces input_. It uses mean squared error as its loss
function #hinweis[(distance between the RGB values of the input & output pixel, sum up all
squared distances and divide by number of pixels)].
]
It consists of 4 main parts:
- _Encoder:_ The model learns how to reduce the input and compress the data into an encoded representation.
- _Bottleneck:_ The layer that contains the compressed representation of the input data, the _latent space_
- _Decoder:_ The model learns how to reconstruct the data from the encoded representation.
- _Reconstruction Loss:_ Method that measures how well the decoder is performing.
== Applications
- _Compression:_ Encoding can group pixels together and therefore be used for compression.
- _Denoising:_ The bottleneck layer learns useful features which are shared across different
images. Isolated/Random pixels (noise) are not representable in the code. Therefore the
reconstruction is a composition of the most relevant features, a "denoised" image.
== Deconvolution
The _opposite_ process of _convolution_, also known as _transposed convolution_.
In deconvolution, a _feature map gets converted back into an image_.
== Mel-Spectrogram
We can also classify audio with an image classifier by transforming the audio into a mel-scaled
spectrogram, which shows the _power of different frequencies over time_.
This transformation can be done with the short-time Fourier transform.
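A minimal sketch using the `librosa` library #hinweis[(not covered in the lecture;
`audio.wav` is a placeholder file name)]:
```py
import numpy as np
import librosa

y, sr = librosa.load("audio.wav")               # waveform and sample rate
S = librosa.feature.melspectrogram(y=y, sr=sr)  # mel-scaled spectrogram
S_db = librosa.power_to_db(S, ref=np.max)       # log scale (dB), ready to plot
```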
/*
== Basic Autoencoder
=== Compression
```py
class Autoencoder(Model):
def __init__(self, latent_dim, shape):
super(Autoencoder, self).__init__()
self.latent_dim = latent_dim # dimensionality of bottleneck
self.shape = shape # shape of input, like 28x28px
self.encoder = tf.keras.Sequential([
layers.Flatten(), # matrix to vector, is then fed into the next layer
layers.Dense(latent_dim, activation='relu'),
]) # encoder part
self.decoder = tf.keras.Sequential([
layers.Dense(tf.math.reduce_prod(shape).numpy(), activation='sigmoid'),
layers.Reshape(shape) # inverse of flatten
]) # decoder part
def call(self, x): # this is the function that gets executed
encoded = self.encoder(x)
decoded = self.decoder(encoded)
return decoded
shape = x_test.shape[1:]
latent_dim = 64 # the bigger this number, the more details can be represented.
# Compression Factor: go down from 28x28 = 784 to 64 -> ~factor 12
autoencoder = Autoencoder(latent_dim, shape)
autoencoder.compile(optimizer='adam', loss=losses.MeanSquaredError())
autoencoder.fit(x_train, x_train, epochs=10, shuffle=True, validation_data=(x_test, x_test))
encoded_imgs = autoencoder.encoder(x_test).numpy() # apply encoder
decoded_imgs = autoencoder.decoder(encoded_imgs).numpy() # apply decoder
```
=== Denoising
```py
class Denoise(Model):
def __init__(self):
super(Denoise, self).__init__()
self.encoder = tf.keras.Sequential([
layers.Input(shape=(28, 28, 1)), # 1 grayscale channel, images of 28x28px
layers.Conv2D(16, (3, 3), activation='relu', padding='same', strides=2), # 16 3x3 kernels
layers.Conv2D(8, (3, 3), activation='relu', padding='same', strides=2)]) #8 3x3 kernels
# dimensionality of bottleneck (Output of encoder): 7x7x8
# (Input 28x28, after first layer because of stride=2: 14x14x16, output of second layer because stride=2: 7x7x8
self.decoder = tf.keras.Sequential([
layers.Conv2DTranspose(8, kernel_size=3, strides=2, activation='relu', padding='same'),
layers.Conv2DTranspose(16, kernel_size=3, strides=2, activation='relu', padding='same'),
layers.Conv2D(1, kernel_size=(3, 3), activation='sigmoid', padding='same')])
def call(self, x):
encoded = self.encoder(x)
decoded = self.decoder(encoded)
return decoded
autoencoder = Denoise()
```
*/
#pagebreak()
= Representations
== Expression Trees
#image("img/aiap_10.png", width: 80%)
There is not always just one solution, there might be different trees for the same expression.
== Backpropagation
Backpropagation is a supervised learning technique to _adjust the weights of the neurons to
minimize the error_. It calculates the gradient of the loss function with respect to each
weight in the network. This gradient is then used to update the weights in the opposite
direction of the gradient, which in turn minimizes the loss function.
The algorithm works by computing the error between the predicted output and the actual output
for each example and then _propagating_ this error back through the network to adjust the weights.
This is repeated multiple times until the weights converge to a point where the error is minimized.
*Steps of the algorithm:*
#columns(2)[
+ _Initialize the weights_ of the network _randomly_
+ _Forward propagate an input_ through the network to get the predicted output.
+ _Compute the error_
#colbreak()
4. _Backward propagate the error_ through the network to compute the gradient
+ _Update the weights_ in the opposite direction of the gradient using SGD or something similar
+ _Repeat_ steps 2-5 for multiple iterations
]
== Gradient Descent
MSE loss function for a quadratic model and a single data point $x,y$:\
$L("values the function depends on") = ("loss function" - "datapoint i")^2
=> L(x,y; a,b) = (\ax^2 + \bx + c - y)^2$
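A minimal gradient-descent sketch for this loss #hinweis[(single data point, hand-derived
partial derivatives via the chain rule; learning rate chosen arbitrarily)]:
```py
x, y = 2.0, 5.0          # single data point
a, b, c = 0.0, 0.0, 0.0  # parameters of f(x) = a*x^2 + b*x + c
lr = 0.01                # learning rate

for _ in range(1000):
    err = a * x**2 + b * x + c - y  # inner term of L = (...)^2
    a -= lr * 2 * err * x**2        # dL/da = 2 * err * x^2
    b -= lr * 2 * err * x           # dL/db = 2 * err * x
    c -= lr * 2 * err               # dL/dc = 2 * err

# loss is now ~0: the fitted parabola passes through (2, 5)
```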
= Advanced Techniques in Keras
== Data Augmentation
The _convolution_ is _translation-invariant_
#hinweis[(Object still identifiable if shifted along x or y axis)] but not
_rotation-invariant_ #hinweis[(CNN fails to classify a rotated image)].
It is also in general not _scale-invariant_ #hinweis[(a scaled object does not get recognized)].
A classifier can only classify data that is _similar to the training data_. If we do not have
enough training data, we need to artificially create more by applying different transformations
#hinweis[(De-texturized, de-colorized, edge enhanced, salient edge map, flip, rotate, ...)].
```py
layers.Resizing(IMG_SIZE, IMG_SIZE), layers.Rescaling(1./255),
layers.RandomFlip("horizontal_and_vertical"), layers.RandomRotation(0.2),
skimage.color.rgb2gray(imgs)
```
== Batch Normalization
Batch normalization applies a transformation that maintains the _mean output close to 0 and the
standard deviation close to 1_. _Advantages:_ We can train deeper networks and increase the
learning rate. This is added as a layer inside the model.\
```py keras.layers.BatchNormalization( ... )```
== Learning Rate Scheduling
A constant learning rate is often not optimal. It is often necessary to try different learning
rates and schedulers. _Annealing_ is one example of Learning Rate Scheduling.\
With a scheduler, we can have _faster training_ #hinweis[(higher learning rate in the beginning)]
and _better convergence into a (local) minimum_.
```py keras.optimizers.schedules.CosineDecay( ... )```
== Accuracy vs. Loss
*Accuracy:*
_Counting the correct samples._ All samples contribute the same amount to this value,
they are either correct or incorrect. This _cannot be optimized_ with a function.\
*Loss:*
The loss is a _differentiable_ function which _can be optimized_.
Very confidently wrong predictions make the loss explode.
== U-Net Architecture
Is a "classic" architecture for image segmentation that splits the image into fore- and background.
#image("img/aiap_12.png", width: 78%)
=== Complex Network Topologies
*Skip Connections:* Keep a pointer to the output of a layer.
Pass that data unchanged to a deeper layer in the network.
```py
skip1 = layers.Conv2D(1, kernel_size=1, activation=None, padding='same', strides=1)(x)
... # potentially many more layers here
x = layers.concatenate([x, skip1])
```
*Multiple Outputs:* An ANN can have multiple output layers.
Each output layer can have its own Loss Function.
= Reinforcement Learning (RL)
*Behavioral Learning:* Learning is the _change in behavior_ that occurs as a _result of experience_.\
In RL, an _agent interacts with its environment_. The agent _selects from available actions_.
The agent's goal is to select those actions which maximize the (long term) reward.
In the beginning, the agent selects _random actions_. Over time, the agent _learns_ to _prefer_
those actions that yield _higher_ (long term) _rewards_: _trial and error_.\
RL does _not learn from "pre-collected" labelled / unlabeled data_.
It learns from _interaction_ with the environment.\
*Fields of Application:*
Healthcare, Education, Transportation, Energy, Business Management, Science, NLP,
Computer vision, robotics, games, computer systems, finance, ....\
*Limitations:*
_Sample efficiency_ #hinweis[(No real progress in the first couple million trials)],
_Difficult_ to define the _reward function_.
== Basic Concepts
An RL agent tries to solve an RL-problem by learning the actions that maximize reward.\
- _Environment:_ The Environment the agent is in. Contains states and actions.
- _Agent:_ Knows its state #hinweis[(but not the entire environment, only "sensor data")],
observes the states, takes actions and learns optimal behavior to maximize reward
#hinweis[(hedonistic behavior)].
- _State:_ The agent arrives in a new state after taking an action #hinweis[(`S0`, `S1`, `S2` etc.)].
- _Terminal State:_ A state where there aren't any further actions that can be taken by the
agent #hinweis[(Terminates an episode)].
- _Episode:_ One "run" of the agent through the environment until it lands in a terminal state.
- _Actions:_ The agent takes an action to get from one state to another #hinweis[(`a0`, `a1`)].
- _Rewards:_ The agent gets a positive or negative reward when it arrives in a new state.
Through this, the agent learns what "good" and "bad" actions are.
- _Policy $bold(pi)$:_ Fully defines the behavior of the agent. Maps states to action-selection
probabilities. Normally an agent starts with a random policy and then tries to improve it.
#hinweis[("At state S53, go left with probability 76% and right with 24%" is noted as
$pi(s=53, a="left") = 0.76$)]
== Markov Decision Process (MDP)
Composed of 4 entities:
_S_ #hinweis[(set of states {`S0`, `S1`, ... `S7`})],
_A_ #hinweis[(set of actions {`a0`, `a1`})],
_R_ #hinweis[(positive or negative rewards)] and
_P_ #hinweis[(Transitions, visualized with arrows)].
MDPs are a very general mathematical framework. They are used to describe and study a large
variety of problems.
#wrap-content(
image("img/aiap_13.png"),
align: top + right,
columns: (65%, 35%),
)[
== General RL framework
An agent _decides_ which _action_ to take _according to its policy $bold(pi)$_.
The action has an _effect on the environment_. As a result, the environment _transitions to
the next state_ and _returns a reward_. This _loop_ continues _infinitely_ or until a
_terminal state_ is reached.
]
=== Goal of an RL agent
"Find the optimal policy $pi^*$" by trial-and-error. $pi^*$ is: At each state $S_t$, take the
action $A_t$ which returns the _largest sum of (discounted) rewards_.
=== Different policies, different sum of rewards <rl-map>
#hinweis[Without discounting]
#grid(
columns: (1fr, 1fr, 2fr),
gutter: 1em,
[
*Optimal policy $bold(pi^*)$*
#table(
columns: (1fr, 1fr, 1fr),
table.header([State], [`a0`], [`a1`]),
[_`S0`_], [1], [0],
[_`S1`_], [0], [1],
[_`S2`_], [1], [0],
[_`S3`_], [-], [-],
[_`S4`_], [0.5], [0.5],
[_`S5`_], [-], [-],
[_`S6`_], [-], [-],
[_`S7`_], [-], [-],
)
],
[
*Random policy $bold(pi^"random")$*
#table(
columns: (1fr, 1fr, 1fr),
table.header([State], [`a0`], [`a1`]),
[_`S0`_], [0.5], [0.5],
[_`S1`_], [0.5], [0.5],
[_`S2`_], [0.5], [0.5],
[_`S3`_], [-], [-],
[_`S4`_], [0.5], [0.5],
[_`S5`_], [-], [-],
[_`S6`_], [-], [-],
[_`S7`_], [-], [-],
)
],
[
#image("img/aiap_14.png")
],
)
#grid(
columns: (1fr, 2fr),
gutter: 1em,
[
*Assume the agent starts in state `x` and follows $bold(pi^*)$. Total reward?*
#table(
columns: (auto, 1fr),
table.header([State], [Total Reward]),
[_`S0`_], [$V^pi^*("S0") = hyph.minus 1 + 4 + 5 = 8$],
[_`S1`_], [$V^pi^*("S1") = 4 + 5 = 9$],
[_`S2`_], [$V^pi^*("S2") = 7$],
[_`S3`_], [$V^pi^*("S3") = 0$],
)
],
[
*Same for Random Policy. What does the agent collect on average? Work backwards.*
#table(
columns: (auto, 1fr),
table.header([State], [Total Reward]),
      [_`S7`_], [$V^pi ("S7") = 0$],
      [_`S4`_], [$V^pi ("S4") = pi("S4", "a0") dot 5 + pi("S4", "a1") dot 5 = 0.5 dot 5 + 0.5 dot 5 = 5$],
      [_`S1`_], [$V^pi ("S1") = 0.5 dot hyph.minus 10 + 0.5 dot (4+5) = hyph.minus 5 + 4.5 = hyph.minus 0.5$],
      [_`S0`_], [$V^pi ("S0") = 0.5 dot (hyph.minus 1 + hyph.minus 0.5) + 0.5 dot (5 + 0) = 1.75$],
)
],
)
This is the function used for this table:
$V("S0") = pi("S0", "a0") dot (R_("a0") + V("S1")) + pi("S0", "a1") dot (R_("a1") + V("S2"))$\
$R_a$: Reward of action $a$, $V(S')$: V-value of the state the action leads to.
If this is a terminal state, it is zero.
=== Discounting
A _far away reward is less attractive_. In a discrete-time RL, it is common to apply a
_constant discount factor $bold(gamma)$_ at each time-step to steer the agent away from far
away rewards towards nearer ones with the same or a similar reward. The value of $gamma$
depends on the problem, but typical values are 0.95, 0.98, 0.99, 0.999.
Discounting is a simple and efficient strategy to lump together all kinds of risks.
#image("img/aiap_15.png", width: 80%)
Discounting is only applied on _future rewards_. In each state, the agent receives the actual
reward $r$. When in state $S_t$, a reward 3 steps into the future is discounted by the power of 2:
$gamma^2 R_(t+3)$. But when actually moving there, the agent receives $r_(t+3)$, not $gamma^2 R_(t+3)$.
#pagebreak()
*Exercise*\
While in state `S0`, evaluate the discounted reward of the two paths
in the image in @rl-map for $fxcolor("grün", gamma = 0.95)$.\
$tau_1$: `S0-S1-S4-S7` $= hyph.minus 1 dot fxcolor("grün", (0.95)^0)
+ 4 dot fxcolor("grün", (0.95)^1) + 5 dot fxcolor("grün", (0.95)^2) = 7.3$ \
$tau_2$: `S0-S2-S6` $= 0 dot fxcolor("grün", (0.95)^0) + 7 dot fxcolor("grün", (0.95)^1)
= 6.65$\
The trajectory $tau_1$ has a _higher discounted reward_ and gets therefore chosen.
*Reward $bold(R)$* is the _quantity an agent receives immediately_ when landing in a state.
Usually described with $r_t$\
#hinweis[($r_t$ is used for the actual value that has been received by the agent,
$R_t$ is used for the random variable before observing the actual outcome)]\
*Return $bold(G)$:* is the _discounted sum of future rewards_. Is a sum of random variables and
therefore a random variable itself. Usually the symbol $G_t$ is used.\
#block($
G_t eq.def R_(t+1) + gamma R_(t+2) + gamma^2 R_(t+3) + ... = sum_(k=0)^infinity gamma^k R_(t+k+1)
$)
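Computing $G_t$ for a finite trajectory is a one-liner when working backwards #hinweis[(sketch;
reproduces the $tau_1$ value from the discounting exercise above)]:
```py
def discounted_return(rewards, gamma=0.95):
    g = 0.0
    for r in reversed(rewards):  # G_t = R_{t+1} + gamma * G_{t+1}
        g = r + gamma * g
    return g

discounted_return([-1, 4, 5])  # 7.3125, as in trajectory tau_1
```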
=== State-action value $bold(Q(s,a))$
The state-action value $Q^pi (s,a)$ is defined as the expected Return $G$, when starting in
state $s$, taking action $a$, and following the policy $pi$ thereafter.
#block($
q_pi (s,a) eq.def EE_pi [#fxcolor("grün", $G_t$) | S_t = s, A_t = a] =
#fxcolor("orange", $EE_pi$) [sum_(k=0)^infinity gamma^k R_(t+k+1) #fxcolor("rot", "|") S_t =s, A_t = a]
$)
#hinweis[#tcolor("grün", "expected Return G"),
#tcolor("orange", "Expected return when following policy " + $pi$),
#tcolor("rot", "Reward we get when we are in this state")]
The goal of many RL algorithms is to estimate these $q$-values. $Q$-values are expectations and
depend on the policy and on $gamma$. To calculate a Q-value, you only need the next state/action values.
A simple method to approximate expectations is _sampling:_ simulate many trajectories,
observe the rewards and calculate the mean reward. _The sample mean is an approximation of the expectation._
This is called _Monte-Carlo_ Method.
== Temporal Difference Learning
In state $s_t$, taking action $a_t$, the agent expects the future reward $q(s_t, a_t)$.
After taking a step, the agent observes the next state and reward:
$q(s_t, a_t) eq.quest r_(t+1) + gamma q(s_(t+1), a_(t+1))$
#hinweis[($q(s_t, a_t)$ is the expected future reward before taking a step,
$r$ is an actual observation and $q(s_(t+1), a_(t+1))$ is the expected return from here on)]\
This equation _does not hold_.
We need to add a _difference $bold(delta_t)$_ to make both sides equal:
$q(s_t, a_t) fxcolor("rot", + delta_t) = r_(t+1) + gamma q(s_(t+1), a_(t+1))$
$delta$ is needed because of the difference between the expected value and the actual value.
#hinweis[(i.e. the average expected value of a dice roll is 3.5, but the actual value can never be 3.5)]
We can now start with any random initial guess for the $q$-values and then use the
_temporal difference error / reward prediction error (RPE) $bold(delta_t)$_ to _improve_ our guess.
Over time, the $q$-values will _converge_ towards the "true" expected values and $delta = 0$
#hinweis[(on average)]. Positive RPE means we got more than predicted.
*The TD-Learning update rule:*\
$"RPE" = r_(t+1) + gamma q(s_(t+1), a_(t+1)) - q(s_t, a_t)$\
$q(s_t, a_t) arrow.l q(s_t, a_t) + alpha dot "RPE"$ #hinweis[(updated guess overwrites the old guess)]\
_$bold(alpha)$_ is the _learning rate_ or _step size_ #hinweis[(typically between 0.000001 and 0.1)].
=== State-Action-Reward-State-Action (SARSA)
Unlike Monte-Carlo, SARSA integrates _new information_ at _every step_, not just at the end of
an episode. The agent in state $S$ chooses action $A$, lands in state $S'$ and gets reward $R$.
It then chooses its next action $A'$ based on its policy #hinweis[(derived from the $Q$-values)].
The initial guess $Q(S, A)$ gets updated based on the agent's observation of $S$, $A$, $R$, $S'$
and its choice of $A'$. The RPE is the difference between the predicted $Q(S, A)$ and the
observed target $R + gamma Q(S', A')$.\
$fxcolor("grün", Q(S,A)) arrow.l fxcolor("grün", Q(S,A)) + fxcolor("orange", alpha)
[fxcolor("hellgrün", R) + fxcolor("rot", gamma) fxcolor("dunkelblau" , Q(S', A')) - fxcolor("grün", Q(S,A))]$\
$arrow.l.r.double "Immediate reward" + "discounted" ["prediction in" S', A' - Q"-value of prediction when we started"]$
#let salsatable = table(
columns: (1fr, 1fr, 1fr, 1fr),
table.header([], [`S1`], [`S2`], [`S3`]),
[_`a1`_], [$1.1$], [$hyph.minus 0.9$], [$1.3$],
[_`a2`_], [$0$], [$1.1$], [$1.2$],
[_`a3`_], [$fxcolor("grün", 2.1)$], [$1.5$], [$fxcolor("dunkelblau" , hyph.minus 1.4)$],
)
#grid(
columns: (7fr, 2fr),
gutter: 1em,
[
Assume $fxcolor("orange", alpha = 0.1)$ and $fxcolor("rot", gamma = 0.9)$.
An agent performs the following state-action sequence:\
`S1 - a3 - S3 - a3`.
*Which entry of the Q-Table gets updated?*
The $fxcolor("grün", 2.1)$\
*When landing in `S3`, it receives a $fxcolor("hellgrün", bold(+4))$ reward.
Calculate the updated $Q$-value with SARSA:*\
$"RPE" = fxcolor("hellgrün", 4) + fxcolor("rot", 0.9)
dot (fxcolor("dunkelblau" , hyph.minus 1.4)) - fxcolor("grün", 2.1) = underline(0.64)$\
$fxcolor("grün", 2.1) arrow.l fxcolor("grün", 2.1) + fxcolor("orange", 0.1) dot 0.64
= underline(2.164)$
],
[#salsatable],
)
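The SARSA update as code #hinweis[(minimal sketch with a plain dict as Q-table; reproduces the
worked example above)]:
```py
def sarsa_update(Q, s, a, r, s_next, a_next, alpha=0.1, gamma=0.9):
    rpe = r + gamma * Q[(s_next, a_next)] - Q[(s, a)]  # reward prediction error
    Q[(s, a)] += alpha * rpe
    return rpe

Q = {("S1", "a3"): 2.1, ("S3", "a3"): -1.4}
sarsa_update(Q, "S1", "a3", 4, "S3", "a3")  # RPE = 0.64
Q[("S1", "a3")]                             # 2.164
```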
*Q-Learning:*
A variant of SARSA. The update of $Q(s,a)$ is based on the agent's observation of $r$ and $s'$,
and _the $Q$-value of "best" action in $S'$_:
$fxcolor("grün", Q(S,A)) arrow.l fxcolor("grün", Q(S,A)) + fxcolor("orange", alpha)
[fxcolor("hellgrün", R) + fxcolor("rot", gamma) max_a Q(S', a) - fxcolor("grün", Q(S,A))]$
The next action $a'$ can differ from the best action. That means, in the next step, the agent
can take one action $a'$, but use another action $max_a$ for the Q-Update.
Algorithms that learn from actions that differ from the actually taken action are called
_off-policy_. SARSA = on-policy #hinweis[(updates with the action it actually takes next)],
Q-learning = off-policy #hinweis[(updates with the best action by Q-value, regardless of the
action actually taken next)].
=== SARSA and Q-learning Calculations
An agent has interacted with the Treasure-Map environment and approximated the
following Q-Table #hinweis[(state $times$ action)]:
#wrap-content(
image("img/aiap_19.png"),
align: top + right,
columns: (89%, 11%),
)[
*SARSA:*\
$"RPE"_t = r_(t+1) + gamma Q^pi (s_(t+1), a_(t+1)) - Q^pi (s_t, a_t), space.quad
Q^pi (s_t, a_t) arrow.l Q^pi (s,a) + alpha "RPE"_t$ \
*Q-learning:* \
$"RPE"_t = r_(t+1) + gamma max_(tilde(a)) Q^pi (s_(t+1), tilde(a)) - Q^pi (s_t, a_t), space.quad
Q^pi (s_t, a_t) arrow.l Q^pi (s,a) + alpha "RPE"_t$ \
  *When moving from `S0` to `S2`, the RPE is always 0 because when landing in state `S2` the reward is 0:*\
  This is wrong because the RPE depends not only on the immediate reward.\
*The agent is in state `S0` and follows the trajectory $tau =$ `S0 - a0 - S1 - a0 - S3`.
Assume $bold(#fxcolor("hellgrün", $gamma = 0.95$))$ and
$bold(#fxcolor("dunkelblau", $alpha = 0.05$))$*
]
*Calculate the RPE and the updated $bold(#fxcolor("grün", $Q("s0","a0")$))$:*\
$#fxcolor("grün", $Q("s0","a0")$) = #fxcolor("grün", $4$), space.quad #fxcolor("gelb", $r = -1$)$
#hinweis[(see image in @rl-map)]\
$#fxcolor("orange", $Q("s1","a0")$) = #fxcolor("orange", $hyph.minus 8.5$)
arrow.double #fxcolor("hellgrün", $gamma$) #fxcolor("orange", $Q("s1","a0")$)
= #fxcolor("hellgrün", $0.95$) dot #fxcolor("orange", $hyph.minus 8.5$)
= hyph.minus 8.075 arrow.double #fxcolor("gelb", $r$)
+ #fxcolor("hellgrün", $gamma$) #fxcolor("orange", $Q("s1","a0")$)
= #fxcolor("gelb", $hyph.minus 1$) + hyph.minus 8.075
= hyph.minus 9.075$ \
*RPE SARSA:*
$#fxcolor("gelb", $r$) + #fxcolor("hellgrün", $gamma$) #fxcolor("orange", $Q("s1","a0")$)
- #fxcolor("grün", $Q("s0","a0")$)
= #fxcolor("gelb", $hyph.minus 1$) + hyph.minus 8.075 - #fxcolor("grün", $4$)
= hyph.minus 13.075$ \
*$bold(#fxcolor("grün", $Q("s0","a0")$))$ after SARSA update:*
$#fxcolor("grün", $Q("s0","a0")$) + #fxcolor("dunkelblau", $alpha$) ("RPE")
= #fxcolor("grün", $4$) + #fxcolor("dunkelblau", $0.05$) dot hyph.minus 13.075
= underline(3.34625)$\
$#fxcolor("rot", $max_tilde(a) Q("s1", "a")$)
= #fxcolor("rot", $6.5$) arrow.double #fxcolor("hellgrün", $gamma$)
#fxcolor("rot", $max_tilde(a) Q("s1", tilde(a))$)
= #fxcolor("hellgrün", $0.95$) dot #fxcolor("rot", $6.5$) = 6.175\
arrow.double #fxcolor("gelb", $r$) + #fxcolor("hellgrün", $gamma$)
#fxcolor("rot", $max_tilde(a) Q("s1", tilde(a))$)
= #fxcolor("gelb", $hyph.minus 1$) + 6.175 = 5.175$\
*RPE Q:*
$#fxcolor("gelb", $r$) + #fxcolor("hellgrün", $gamma$) #fxcolor("rot", $max_tilde(a)
Q("s1", tilde(a))$) - #fxcolor("grün", $Q("s0","a0")$)
= #fxcolor("gelb", $hyph.minus 1$) + 6.175 - #fxcolor("grün", $4$) = 1.175$\
*$bold(#fxcolor("grün", $Q("s0","a0")$))$ after Q-Value update:*
$#fxcolor("grün", $Q("s0","a0")$) + #fxcolor("dunkelblau", $alpha$) ("RPE")
= #fxcolor("grün", $4$) + #fxcolor("dunkelblau", $0.05$) dot 1.175
= underline(4.05875)$\
=== Epsilon-Greedy-Policy
In order to discover the best available options, an agent needs to _explore_ the _entire_
state-action-space. In contrast, in order to _maximize the return_, the agent wants to follow
the _best trajectory_. Therefore, we need a way to describe a _flexible behavior_.\
If the agent just follows the _most promising path_ based on its learned knowledge,
it _stops exploring_ and _starts exploiting_, thus always taking the "best" known action:
a _greedy policy_. This way it might _miss the best trajectory_.
We need to _balance_ between exploration and exploitation with the epsilon-greedy-policy:\
`at each state:
with probability 1-`#math.epsilon`: Take the action that has the highest value (greedy exploit)
with probability` #math.epsilon`: Take a random action (explore)`\
_Finding a good value for #math.epsilon is key. Possible strategies:_
fixed at for example $0.1$ or starting at $0.95$ #hinweis[(high exploration)] and reducing/
_annealing_ it over time to $0.05$ #hinweis[(greedy). $epsilon = 1$ means random behavior.]
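A minimal sketch of epsilon-greedy action selection #hinweis[(assuming a dict `q_values`
mapping actions to Q-values for the current state)]:
```py
import random

def epsilon_greedy(q_values, epsilon=0.1):
    if random.random() < epsilon:
        return random.choice(list(q_values))  # explore: random action
    return max(q_values, key=q_values.get)    # exploit: best known action

epsilon_greedy({"a0": 4.0, "a1": -8.5})  # returns "a0" in ~95% of calls
```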
= Deep Reinforcement Learning
*Disadvantages of Q-tables:* The Q-Table maps $(s,a)$ #hinweis[(state-action pairs)] to
$Q(s,a)$ #hinweis[(expected reward of (s,a))]. Since it is a table, it does _not scale well_.
It is _limited to discrete states and discrete actions_. There is also _lack of
generalization_, there is no relation to neighboring states and actions.
In real life, nearby/similar states often have similar Q-Values and it is reasonable to apply
similar actions. While the input is often _high-dimensional_, the relevant features are often
not #hinweis[(background vs. relevant part of an image)]. A tabular-RL would classify each
changed pixel as a completely independent state #hinweis[(senseless!)]. \
Since $Q$ is a function, we can _approximate_ it. In DRL, the $Q$-values are approximated using
a Deep Neural Network.
_Input:_ the state $s$ #hinweis[(e.g. RGB image, state of a chess board)],
_Output:_ one neuron per action $a$. The neurons activity is $approx Q(s,a)$.
== Deep Q-Network (DQN)
*Replay buffer:*
The transitions that an agent performs during an episode are stored in the replay buffer as an _experience_.
Randomly pick a few experiences, do a forward- and backward-pass to _train_ the
network on this _batch_ of samples.\
*Target Network:*
This is a network that is updated _more slowly_ compared to the Q-Network, which is used to
define the optimal policy based on the value estimation. The target network is _not_ updated
every iteration but once in a while to _stabilize_ the learning process.\
*Epsilon-greedy Policy:*
Given $Q$-values, which actions should be taken? Epsilon-greedy is simple to implement and
often works well enough for a DQN.
If two actions are _almost equally good_, it is better to use Softmax.
#pagebreak(weak: true)
*Softmax:*
T or $tau$ is the _temperature_. It gives us control over the _exploration/exploitation_ balance.
_High $bold(tau)$:_ all actions have almost the same probability
#hinweis[(Even with big differences between the Q-Values, they have a similar probability)].
_Low $bold(tau)$:_ for $tau arrow 0$, the policy becomes greedy.
Often, $tau$ is annealed #hinweis[(becomes more greedy over time)].
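A sketch of a softmax policy with temperature $tau$ #hinweis[(Q-values chosen arbitrarily to
show the effect)]:
```py
import numpy as np

def softmax_policy(q_values, tau=1.0):
    prefs = np.asarray(q_values) / tau
    e = np.exp(prefs - prefs.max())  # stabilized softmax
    return e / e.sum()               # action probabilities

softmax_policy([1.0, 1.2], tau=10.0)  # ~[0.50, 0.50] -> explore
softmax_policy([1.0, 1.2], tau=0.05)  # ~[0.02, 0.98] -> nearly greedy
```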
= Sequential and Time Series Data
*Sequential Data:*
Words, sentences, streams of text, etc.\
*Time-Series data:*
Is a _subset_ of sequential data where the x-axis is time
#hinweis[(Stock option pricing, Sensor Data, etc.)].\
Sequential Data is everywhere. It is a tremendous source of information.
It is also a _indispensable tool in science_ and there are a lot of _business opportunities_
and _applications_ #hinweis[(Predictive maintenance, portfolio management, sales, politics,
weather prediction, forecasting traffic, text generation, anomaly detection, ...)]. \
*Model-based Prediction:*
Based on physics and historical data, _mathematical models_ are developed.
The sequential data provide the initial conditions. Then the models are used to calculate the
evolution. The _quality_ depends on many factors #hinweis[(quality of measurements, quality of
the model, nature of the dynamic system)]. For most time-series, we _do not have a causal
model_. Therefore, predictions need to be made _based on historical data_.
This is done by looking for _structure/patterns_ #hinweis[(Seasonality, Trends)].\
*Feature Engineering:*
Assume some functional form and optimize its free parameters.\
*Learning from Data:*
Make very few assumptions about the structure (low bias), machine learning algorithms
_discover_ structure from the data. This can _outperform_ model-based predictions, for example
in the weather forecast.
= Recurrent Neural Network (RNN)
== Models
_One-to-one_ #hinweis[(Sequence has only one time step (static).
One input is fed and one output is generated. This is the case in traditional ANNs.)],\
_Many-to-one_ #hinweis[(sentiment classification - positive/negative sentence)],
_One-to-Many_ #hinweis[(Image Captioning)],
_Many-to-many_ #hinweis[(Machine Translation)]\
_ANN_ and _FFNN_ #hinweis[(Feed Forward Neural Networks)] are not a good match for sequential data
because they are _unable_ to _capture the temporal order_ of a time series, since they treat
each input _independently_ #hinweis[(results aren't based on past data)].
Ignoring the temporal order limits performance. A _CNN_ is partially suited for this task: you
can use _filters_ and flatten the input features to 1 dimension (time).
If the forecast depends on _detecting specific patterns_, then _CNNs are a good fit_.
However, convolutions and pooling operations _lose information about the local order of words_.
The meaning could be misinterpreted or the grammar could be incorrect if sequential information is not used.
== RNN Concept
RNNs introduce a _recurrent connection_ which allows information to flow from one time-step to
the next. This allows the RNNs to _maintain internal memory_ and utilize information from the
previous step for the current step and therefore _learn temporal dependencies_. \
== Architecture
#image("img/aiap_16.png", width: 89%)
#hinweis[Note, this is the same cell, but unfolded through time.]\
*Input:*
$x_t$, could be a one-hot vector corresponding to a word.\
*Hidden state:*
$h_t$ represents a hidden state at a time $t$ and acts as "memory" of the network.
It is calculated based on the current input and the previous time step's hidden state:
$h_t = f(x_t, h_(t-1))$. $f$ is parameterized by _weights and bias_ which need to be _learned_.
These are _shared_ across all timestamps.\
*Weights:*
The inputs $x_t$ and $h_t$ are multiplied with their weights $W^T_(h x)$ and $W^T_(h h)$ respectively.\
*Bias:*
The bias $b_h$ is added to the contributions of $h_(t-1)$ and $x_t$ after their respective weight multiplications.\
*Activation function:*
The biased value is put through $g$, a nonlinear activation function
$h_t = g(W^T_(h h) h_(t-1) + W^T_(h x)x_t + b_h)$\
*Output:* The model returns $hat(y)$, the prediction for this timestep.
The hidden state $h_t$ is fed back into the network at the next time step.
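The hidden-state update as a NumPy sketch #hinweis[(random weights stand in for learned ones;
1 input feature, hidden size 3)]:
```py
import numpy as np

def rnn_step(x_t, h_prev, W_hx, W_hh, b_h):
    # h_t = g(W_hh^T h_{t-1} + W_hx^T x_t + b_h), with g = tanh
    return np.tanh(W_hh.T @ h_prev + W_hx.T @ x_t + b_h)

rng = np.random.default_rng(0)
W_hx, W_hh, b_h = rng.normal(size=(1, 3)), rng.normal(size=(3, 3)), np.zeros(3)
h = np.zeros(3)               # initial "memory"
for x_t in [0.5, -0.1, 0.3]:  # 3 time steps, shared weights
    h = rnn_step(np.array([x_t]), h, W_hx, W_hh, b_h)
```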
== Training
*Backpropagation:*
Initialize weights, and repeat until convergence: Forward pass, calculate loss,
calculate gradient, update weights in backward pass.\
*Backpropagation through time (BPTT):*
The error is propagated backward through time until the initial timestep.\
*Loss Function:*
_Binary cross entropy_ for binary classification, _categorical cross entropy_ for
multi-class classification. For regression, use _RMSE_.\
*Keras:* ```py model = Sequential() # 3 timesteps, 1 feature, 32 neurons```\
```py model.add(SimpleRNN(units=32, input_shape=(3,1), activation = "tanh"))```
== Limitations
*Exploding Gradient:*
As the backpropagation algorithm advances backwards, the gradient can get _larger and larger_
#hinweis[(huge weight changes)] and therefore _never converges_ on the optimum.
_Gradient Clipping_ and _proper weight initialization_ can help.\
*Vanishing Gradient:*
As the algorithm advances backwards, the gradient can get _smaller and smaller_ and approach
zero, therefore _never converges_ on the optimum. It is harder and harder to propagate errors
from the loss back to distant past. As a result, it might just learn _short term dependencies_.
Fix with _batch normalization_, _Long short-term memory_, _weight initialization_.
= Transformers and the attention mechanism
Transformers are the _most important deep learning architecture_ for sequence modeling.
They make use of a powerful building block: _the attention mechanism_.\
*Limitations of (simple) RNNs:*
All information about past patterns in the sequence must be conveyed through the hidden signal
$h$ #hinweis[(context vector)]. The longer the sequence, the more information is "faded out".
$h$ is a _bottleneck_.\
*Techniques to overcome these limitations:*
_Long Short-Term Memory (LSTM)_ has an additional "conveyor belt" signal $c$.
During training, the LSTM learns when to keep / use / modify this signal.
This makes an LSTM a _trainable memory unit_. LSTM still fail to learn from very long sequences.\
We can't feed an _entire sequence_ into a fully connected network:
Variable length, input is treated _independently_, _exploding_ no. of parameters.
We need to give this "clueless" network a clue: _Inductive Bias_ aka _attention_.
== Attention
Attention assigns varying _levels of importance_ to different words in a sentence
by calculating "soft" weights for each token #hinweis[(its embedding)] within a specific section
of the sentence called the _context window_.
Attention "looks" at all words #hinweis[(all tokens)] in the input sequence.
Each attention head has its own weights $W_q$, $W_k$, $W_v$.
Therefore, different heads look for different patterns, but all heads look at all words.
== Preprocessing
*Tokenization:*
Long text strings are broken down into _short chunks_ #hinweis[(1 token = 1 word or a part of a word)].
The _output_ is a _sequence of integers_.\
*Embedding:*
The word (or token) indices are then _mapped_ onto a _vector_ with a _position encoding_.
In Keras, this is implemented using a special type of layer: `keras.layers.Embedding`\
*Positional Encoding:*
For text processing, the _relative position_ of each word _within the sequence is relevant_
#hinweis[(sentence structure)]. For each word, a _"location vector"_ is calculated and added
(or concatenated) to its embedding vector before it is fed into the encoder.
A common technique is based on _trigonometric functions_ sine and cosine using _different frequencies_.
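The classic sinusoidal encoding can be sketched in a few lines #hinweis[(NumPy sketch of the sine/cosine scheme; other encodings exist)]:
```py
import numpy as np
def positional_encoding(seq_len, d_model):
    pos = np.arange(seq_len)[:, None]  # token positions
    i = np.arange(d_model)[None, :]    # embedding dimensions
    angle = pos / np.power(10000, (2 * (i // 2)) / d_model)  # one frequency per dimension pair
    return np.where(i % 2 == 0, np.sin(angle), np.cos(angle))  # shape (seq_len, d_model)
```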
== The two components of Transformers
*Encoders:*
A stack of _Encoders_ maps the input onto _embedding vectors_. Encoders can process the entire
input sequence in parallel. The sequence of input vectors is passed through a _self-attention
layer_. For each vector in the input, it produces a _weighted_ linear combination of all input
vectors. The same (fully connected) _feed forward network_ is then applied to each vector.
This layer performs a nonlinear transformation. The output is fed into the next encoder layer.
*Decoders:*
A stack of _Decoders_ maps the embeddings to the _desired target sequence_.
Output generation is (usually) word-by-word. Each decoder consists of a _self-attention
layer_, an _encoder-decoder attention layer_ which combines the vectors from the decoder below
with the embeddings from the encoder, and a _feed forward network_.
== Calculation of Attention
The _attention head_ consists of the matrices _query $Q$_, _key $K$_ and _value $V$_.
The goal of _attention_ is to look up the _value $V$_ of those "neighbors" which strongly
influence the current token #hinweis[(words with similar meanings)].
An _attention-score_ is calculated between each pair of the embedding vectors.
The attention mechanism is a "softmax-version" of a dictionary:
The _attention score_ is a _similarity measure_ between a _key $K$_ and a _query $Q$_.
Both are vectors, the dot-product is used to calculate how strongly they match.
_Strong match_: #hinweis[(cosine similarity close to 1)] "most" of the value $V$ is returned
#hinweis[($V *$ cosine similarity)]. _No strong match:_ a close to 0 vector is returned.
_The result of this softmax-lookup is the sum of all values, weighted according to the key/value similarity._
#image("img/aiap_18.png", width: 80%)
More compact representation: $"Attention"(Q,K,V) = "softmax"((Q K^T) / sqrt(d_k)) V$
#hinweis[(The $sqrt(d_k)$ stabilizes the calculation)]
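The formula translates almost directly into code #hinweis[(NumPy sketch of a single head, no masking)]:
```py
import numpy as np
def attention(Q, K, V):
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)  # similarity of every query with every key
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # row-wise softmax
    return weights @ V  # weighted sum of the values
```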
= Transfer learning, foundation models
*Transfer Learning:*
Train a complex model on Task A, then freeze part of the model and _reuse it for a new/related task B_.
Useful when you have only a few labels / data for your actual task or if you just don't want to
do all of the training by yourself.\
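A minimal Keras sketch of freezing a pretrained base #hinweis[(MobileNetV2 and the binary head are just example choices)]:
```py
from tensorflow import keras
base = keras.applications.MobileNetV2(include_top=False, pooling="avg")
base.trainable = False  # freeze the weights learned on Task A
model = keras.Sequential([base, keras.layers.Dense(1, activation="sigmoid")])  # new head for Task B
```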
*Foundation Model:*
A large model trained on a vast quantity of data at scale, resulting in a model that can be
_adapted_ to a _wide range of downstream tasks_.\
*In-Context Learning:*
The LM performs a task just by _conditioning on input-output examples_, without optimizing any parameters.
It has _no training phase_, the examples provided in the augmented prompt are _directly
influencing_ the generative process.
= Machine Learning Operations (ML-OPS)
*Components of real-world AI projects:*
Data, Versioning, CI/CD, Monitoring, Reproducibility, Time/Budget constraints, ...\
*CI / CD / CD:*
Continuous integration, Continuous delivery, Continuous Deployment.\
*Training pipeline:*
_Data Collection_ #hinweis[(Provided by customer, generated using models or collected from company experts)],
_Data Preparation_ #hinweis[(augmentation, sampling)],
_Model (Re-)Training_ #hinweis[(infrastructure, consistent versioning, optimization)],
_Model Evaluation_ #hinweis[(acceptance testing, performance evaluation, reproducibility)],
_Model Deployment_ #hinweis[(manage rollout of different networks, track and document)],
_Model Monitoring_ #hinweis[(monitor performance, identify problematic cases, collect feedback and re-train)].\
*Data Engineer:*
Understands _general workflows_ and data flows with all specific requirements.\
*DevOps:*
Has a _technical skill set_: Operate and maintain processing resources, network infrastructure.\
*Data Analyst, ML Engineer:*
Has _in-depth knowledge_ of underlying _algorithm_ and processes.
== Problems with Data
_Biased data_ #hinweis[(unrepresentative of all data)],
too many _similar/redundant samples_ #hinweis[(car with a road)],
_missing edge cases_ #hinweis[(pedestrian coming out of nowhere)],
_bad quality_ #hinweis[(pictures during a rainy day)]\
This can lead to _bad models_. To improve we need to _collect more data_ in _"missing areas"_
or _balance/remove redundant data_.
The last step can be achieved via _self-supervised learning_ where similar/redundant pictures are
automatically categorized as less relevant. This can decrease dataset size and costs.\
When the training data is _modified_, it should also be _versioned_ to measure performance
across different iterations and to create _reproducible models_. Can be done via Git.
|
|
https://github.com/drunkenQCat/typst-arknights-simulator | https://raw.githubusercontent.com/drunkenQCat/typst-arknights-simulator/master/template.typ | typst | #import "@preview/grayness:0.1.0": grayscale-image
#let background_pic = "pics/Avg_bg_bg_indoor_2.png"
#let portrait_pic = "pics/portrait.png"
#let char_name = "丰川祥子"
#let char_scripts = "你这家伙,满脑子都是自己呢。"
#let name_grid(name) = text(
text(
name,
size: 1.3em,
tracking: 0.01em,
),
font: "HarmonyOS Sans SC",
weight: "medium",
fill: luma(50%)
)
#let script_grid(script) = text(
[
#script],
font: "HarmonyOS Sans SC",
fill: luma(206)
)
#let bottom_dialog(name, script) = table(
columns: (1fr, 4fr , 15fr, 1fr),
gutter: 2cm,
inset: 10pt,
align: (right, top + right, top + left),
stroke: rgb(100, 100, 100, 0),
rows: (118pt),
box(), name_grid(name), script_grid(script)
)
#let portrait_table(portrait1, portrait2, focus: 0) = table(
  columns: (5fr, 5fr), // the image columns on both sides and the blank space in the middle
  rows: auto, // row height adjusts automatically to the content
  align: (right, left), // center-align the content
  stroke: rgb(100, 100, 100, 0), // transparent border
place(
right,
dx: 3cm,
if focus == 2{
      let data = read(portrait1.path, encoding: none) // left portrait
grayscale-image(data, height: portrait1.height)
}else{
      portrait1 // left portrait
},
),
place(
left,
dx: -3cm,
if focus == 1{
      let data = read(portrait2.path, encoding: none) // right portrait
grayscale-image(data, height: portrait2.height)
}else{
      portrait2 // right portrait
},
),
)
#let arknights_sim(name, script, portrait, bg, focus: 0) = page(
height: 810pt,
width: 1440pt,
margin: (
bottom: -0.01cm,
left: 0cm,
right: 0cm,
top: 0cm
),
)[
#set text(size: 2.7em)
#place(
top,
bg
)
#place(
center,
[
#box(height: 40pt),
#if focus == -1 {
let data = read(portrait.path, encoding: none)
grayscale-image(data, height: portrait.height)
} else {
portrait
}
]
)
#place(bottom,
image(
"pics/dialog_background.png",
fit: "cover",
width: 100%
)
)
#place(
bottom,
bottom_dialog(name, script)
)
]
#let arknights_sim_2p(name, script, portrait1, portrait2, bg, focus: 0) = page(
height: 810pt,
width: 1440pt,
margin: (
bottom: -0.01cm,
left: 0cm,
right: 0cm,
top: 0cm
),
)[
#set text(size: 2.7em)
#place(
top,
bg
)
#place(
center,
[
#box(height: 40pt),
#portrait_table(portrait1, portrait2, focus: focus)
]
)
#place(bottom,
image(
"pics/dialog_background.png",
fit: "cover",
width: 100%
)
)
#place(
bottom,
bottom_dialog(name, script)
)
]
// Examples
#arknights_sim("长崎素世", "求你了,如果没有祥子你们的话,瓦塔西!", image("pics/sayo_portrait.png",height: 80%), image(background_pic, width: 100%), focus: -1)
#arknights_sim(
char_name,
char_scripts,
image(
portrait_pic,
height: 150%
),
image(
background_pic,
width: 120%),
)
#arknights_sim_2p(
char_name,
char_scripts,
image(
portrait_pic,
height: 150%
),
image("pics/sayo_portrait.png",height: 80%),
image(
background_pic,
width: 120%
),
focus: 2
)
#portrait_table(
image(
portrait_pic,
width: 100%
),
image("pics/sayo_portrait.png",width: 30%),
) |
|
https://github.com/jgm/typst-hs | https://raw.githubusercontent.com/jgm/typst-hs/main/test/typ/compute/data-09.typ | typst | Other | // Error: 7-18 failed to parse toml file: expected `.`, `=`, index 15-15
#toml("test/assets/files/bad.toml")
|
https://github.com/sitandr/mdbook-typst-highlight | https://raw.githubusercontent.com/sitandr/mdbook-typst-highlight/main/example-book/src/indented/indented.md | markdown | MIT License | # Indented
```typst
#block(fill: red.lighten(70%), stroke: red, inset: 1em)[
This is a block
]
```
```typ
Page break:
#pagebreak()
#set page(header: "HEADER")
Some text
``` |
https://github.com/typst/packages | https://raw.githubusercontent.com/typst/packages/main/packages/preview/lovelace/0.2.0/examples/list-label.typ | typst | Apache License 2.0 | #set page(width: auto, height: auto, margin: 1em)
#import "../lib.typ": *
#show: setup-lovelace
#pseudocode-list[
- *input:* number $n in NN$
- *output:* zero
+ *while* $n > 0$
+ $n <- n - 1$ #line-label(<line:decr>)
+ *end*
+ *return* $n$
]
In @line:decr, we decrease $n$.
|
https://github.com/binhtran432k/ungrammar-docs | https://raw.githubusercontent.com/binhtran432k/ungrammar-docs/main/contents/system-implementation/playground.typ | typst | === Ungrammar Online Demonstration Playground <subsec-impl-playground>
The Ungrammar Online Demonstration Playground offers a convenient and
accessible platform for users to quickly explore and experience the
capabilities of the Ungrammar language ecosystem without requiring any
installation.
Key Features:
- *Effortless Access*: Provides a user-friendly web-based interface for
immediate interaction.
- *Monaco Editor Integration*: Leverages the Monaco editor and Ungrammar Monaco
Extension (@subsec-impl-monaco) to deliver a robust and feature-rich code
editing experience.
- *Full-Featured LSP Support*: Offers a comprehensive set of Language Server
Protocol (LSP) features, including syntax highlighting, code completion,
diagnostics, and navigation.
- *Client-Side Implementation*: Operates entirely within the browser,
eliminating the need for a separate server installation.
Benefits:
- *Rapid Evaluation*: Allows users to quickly assess the capabilities of the
Ungrammar language ecosystem without the overhead of installation.
- *Accessibility*: Accessible from any device with a web browser, making it
convenient for users to explore and experiment.
- *Seamless Integration*: Leverages the Monaco editor for a familiar and
intuitive coding experience.
- *Comprehensive Feature Set*: Provides a wide range of LSP features to enhance
productivity and streamline development workflows.
By providing an online demonstration playground, we aim to make the Ungrammar
language ecosystem easily accessible and approachable for developers of all
levels.
==== Implementation Detail
The Ungrammar Online Demonstration Playground is a user-friendly web-based
platform designed to showcase the capabilities of the Ungrammar language
ecosystem. Powered by the Monaco editor and leveraging the Ungrammar Monaco
extension, the playground provides a robust and interactive code editing
environment.
Built on the Astro framework, the playground benefits from efficient static
site generation (SSG), ensuring fast load times and optimal performance.
Additionally, the Preact library is used for efficient component rendering,
contributing to the playground's responsiveness and user experience.
For a visually appealing and customizable interface, the playground utilizes
Daisy UI and Tailwind CSS, providing a wide range of styling options.
Overall, the Ungrammar Online Demonstration Playground offers a comprehensive
and engaging platform for users to explore the features and benefits of the
Ungrammar language ecosystem.
==== Deployment to the Cloud
The Ungrammar Online Demonstration Playground is now accessible at
#link("https://binhtran432k.com/ungrammar-language-features"). This
user-friendly platform offers a seamless gateway into the world of Ungrammar,
allowing users to explore, experiment, and harness the power of the language
ecosystem without the need for installation.
#figure(
image("/assets/playground.jpg", width: 90%),
caption: [Ungrammar Playground - Home Page],
)
#figure(
image("/assets/playground-dark.jpg", width: 90%),
caption: [Ungrammar Playground - Home Page (Dark Mode)],
)
#figure(
image("/assets/playground-popup.jpg", width: 90%),
caption: [Ungrammar Playground - Security Pop-up],
)
|
|
https://github.com/TGM-HIT/typst-diploma-thesis | https://raw.githubusercontent.com/TGM-HIT/typst-diploma-thesis/main/template/chapters/studie.typ | typst | MIT License | #import "../lib.typ": *
= Study
After defining the problem statements and goals, research should be conducted into how they can be achieved or solved. This study deals with possible solutions and technologies and analyzes their properties in order to identify concrete advantages and disadvantages. The section closes with a conclusion.
== Conclusion
The conclusion should show how the project can be carried out. |
https://github.com/Error-418-SWE/Documenti | https://raw.githubusercontent.com/Error-418-SWE/Documenti/src/1%20-%20Candidatura/Preventivo%20dei%20costi/Preventivo%20dei%20costi.typ | typst | #import "template.typ": *
// Take a look at the file `template.typ` in the file panel
// to customize this template and discover how it works.
#show: project.with(
  title: "Cost Estimate",
  subTitle: "Time organization and commitment",
  docType: "doc",
  // Name // Role Duration Presence
  partecipants: (
    (name: "<NAME>", role: "Editor"),
    (name: "<NAME>", role: "Verifier"),
    (name: "<NAME>", role: "Verifier"),
    (name: "<NAME>", role: "Administrator"),
    (name: "<NAME>", role: "Administrator"),
    (name: "<NAME>", role: "Editor"),
    (name: "<NAME>", role: "Verifier"),
),
showLog: false,
showIndex: true,
)
#set text(font:"New Computer Modern", size: 1.3em)
#set align(left)
= Preliminary Analysis
Following as accurate an analysis as possible of the workload and of the number of hours needed to carry it out, the Error_418 group has identified three fundamental development periods:
- requirements gathering and analysis period;
- Requirements and Technologies Baseline development period;
- Minimum Viable Product (MVP) development period.
== Requirements gathering and analysis period
In this period, the central role is played by the analysts who, through a constant and productive relationship with the company, will outline the requirements the final product must possess, focusing on capturing their details, boundaries, and acceptance standards.
== Requirements and Technologies Baseline development period
Following the result produced in the previous requirements analysis phase, ample
space will be dedicated to the design and practical use of the technologies needed
for development, demonstrating their validity and functionality.
The final product of this phase will be the Proof of Concept, demonstrating that the technologies can concretely be made to work together.
== Minimum Viable Product (MVP) development period
Once the concrete feasibility of the implementation has been demonstrated through the Proof of Concept, the group
will invest its resources in the actual realization of a software product that satisfies at least the minimum acceptance requirements, providing all the functionality
required to be considered valid, paying attention not only to the functional side
but also, and above all, to the implementation side.
= Role Analysis
== Manager
The manager is a role present throughout the entire project. Their task is to manage the team and make sure that development proceeds according to the predefined schedule and the client's expectations. They must also draft and enforce the Project Standards, as well as the development guidelines the whole team must follow. Since this is the most expensive role, the number of hours was chosen to favor efficiency and not weigh excessively on the final cost.
== Administrator
The administrator is a role present throughout the entire project, as they set up and check the correct use of the procedures and tools defined in the Project Standards, also managing and implementing automations, thereby improving the team's efficiency. The chosen number of hours is limited because, this being a control role, a large number of hours is not required.
== Analyst
The analyst is present exclusively in the initial phases of the project, since they deal with gathering and processing the requirements. This entails a reduced number of hours, distributed only over the first weeks of the project.
== Designer
The designer has the task of outlining and documenting the product architecture in order to:
- satisfy the requirements gathered in the previous phases;
- help the development team with clear and exhaustive documentation.
In establishing the architecture, they must therefore also indicate which technologies will be used for its implementation.
Since we consider this a time-demanding role, the number of hours is higher than for the previous roles.
== Programmer
The programmer has the task of translating the architecture produced by the designer into executable code. The role requires a very high number of hours because we consider this work the most time-consuming compared to that of the other professional figures.
== Verifier
The verifier is a role present throughout the entire project and is responsible for maintaining quality standards on the team's work: they must verify the correctness, completeness, and consistency of all documents, and in the coding phase they will be the one checking the quality of the produced software. Precisely for this reason, the total number of hours is the second highest after the programmer's.
= Cost Estimate
#table(align: center,
  columns: (2fr, 1fr, 1fr, 1fr, 1fr, 1fr, 1fr, 1fr),
  [*Member*],[*Mgr.*],[*Admin.*],[*Analyst*],[*Design.*],[*Program.*],[*Verif.*],[*Total*],
  [Alessio], [13], [10], [10], [14], [30], [18], [95],
  [Antonio], [13], [10], [10], [14], [30], [18], [95],
  [Giovanni], [13], [10], [10], [14], [30], [18], [95],
  [Mattia], [13], [10], [10], [14], [30], [18], [95],
  [Riccardo], [13], [10], [10], [14], [30], [18], [95],
  [Rosario], [13], [10], [10], [14], [30], [18], [95],
  [Silvio], [13], [10], [10], [14], [30], [18], [95],
  [Total Hours], [91], [70], [70], [98], [210], [126], [665],
  [<NAME> (€)], [30,00], [20,00], [25,00], [25,00], [15,00], [15,00], [],
  [Role Cost (€)], [2.730], [1.400], [1.750], [2.450], [3.150], [1.890], [],
)
#align(center, "Total Estimated Cost: € 13.370,00") |
|
https://github.com/ludovic-esperce/afpa-typst-template | https://raw.githubusercontent.com/ludovic-esperce/afpa-typst-template/main/lib.typ | typst | #let afpa(title, subtitle, date, author, doc) = {
  // Document metadata settings
set document(title: title, author: author, date: date)
let afpa_color = rgb("#86bc24")
let grey_color = rgb("#5A5A5A")
  // Style settings shared by all pages
set par(justify: true)
set text(font: "Liberation Sans", size: 12pt)
show link: underline
  // List settings
set list(indent: 2em)
set enum(indent: 2em)
  // Heading settings
set heading(numbering: "I.1.")
show heading : set text(font: "Rockwell")
  // "strong" settings
show strong: set text(afpa_color)
  // Code style settings
  // set the default language so that inline code gets the correct
  // syntax highlighting
  // Possible values: "java", "rust", "cs", "cpp"...
  // full list of available languages: https://github.com/jincheng9/markdown_supported_languages
let default_code_lang = "java";
set raw(theme: "assets/halcyon.tmTheme", lang: "java")
let code_background_fill = rgb("#1d2433")
let code_text_fill = rgb("#a2aabc")
  // Settings for code fenced with ```
  show raw.where(block: true): r => block(
    fill: code_background_fill,
    inset: 8pt,
    radius: 5pt,
    text(fill: code_text_fill, r)
)
  // Settings for code wrapped in `
  show raw.where(block: false): r => box(block(
    fill: code_background_fill,
    inset: 8pt,
    radius: 5pt,
    text(fill: code_text_fill, r)
))
  // First-page settings
set page(
footer:
context[
#set text(10pt)
#align(left)[ #text(fill: afpa_color)[Afpa •] #text(fill: grey_color)[#author • #date.display("[day]/[month]/[year]")]]
#align(right)[ *#text(fill: grey_color)[#counter(page).display("1")]*]
],
background: image("assets/first-page-background.svg")
)
  // First page
page([
#set text(size: 34pt, font: "Rockwell")
#set par(justify: false)
#show par: set block(spacing: 0.6em)
#align(start + horizon, text(weight: "bold")[#title])
#set text(size: 18pt, font: "Rockwell")
#subtitle
])
  // Page settings from the second page onward
set page(
header: align(left)[ #text(fill: rgb(90, 90, 90))[#title]],
footer:
context[
#set text(10pt)
#align(left)[ #text(fill: afpa_color)[Afpa •] #text(fill: rgb(90, 90, 90))[#author • #date.display("[day]/[month]/[year]")]]
#align(right)[ *#text(fill: grey_color)[#counter(page).display("1",)]*]
],
background: image("assets/page-background.svg"),
margin: (top: 3.5cm)
)
  // insert the rest of the document
doc
}
|
|
https://github.com/typst/packages | https://raw.githubusercontent.com/typst/packages/main/packages/preview/unichar/0.1.0/ucd/block-10F70.typ | typst | Apache License 2.0 | #let data = (
("OLD UYGHUR LETTER ALEPH", "Lo", 0),
("OLD UYGHUR LETTER BETH", "Lo", 0),
("OLD UYGHUR LETTER GIMEL-HETH", "Lo", 0),
("OLD UYGHUR LETTER WAW", "Lo", 0),
("OLD UYGHUR LETTER ZAYIN", "Lo", 0),
("OLD UYGHUR LETTER FINAL HETH", "Lo", 0),
("OLD UYGHUR LETTER YODH", "Lo", 0),
("OLD UYGHUR LETTER KAPH", "Lo", 0),
("OLD UYGHUR LETTER LAMEDH", "Lo", 0),
("OLD UYGHUR LETTER MEM", "Lo", 0),
("OLD UYGHUR LETTER NUN", "Lo", 0),
("OLD UYGHUR LETTER SAMEKH", "Lo", 0),
("OLD UYGHUR LETTER PE", "Lo", 0),
("OLD UYGHUR LETTER SADHE", "Lo", 0),
("OLD UYGHUR LETTER RESH", "Lo", 0),
("OLD UYGHUR LETTER SHIN", "Lo", 0),
("OLD UYGHUR LETTER TAW", "Lo", 0),
("OLD UYGHUR LETTER LESH", "Lo", 0),
("OLD UYGHUR COMBINING DOT ABOVE", "Mn", 230),
("OLD UYGHUR COMBINING DOT BELOW", "Mn", 220),
("OLD UYGHUR COMBINING TWO DOTS ABOVE", "Mn", 230),
("OLD UYGHUR COMBINING TWO DOTS BELOW", "Mn", 220),
("OLD UYGHUR PUNCTUATION BAR", "Po", 0),
("OLD UYGHUR PUNCTUATION TWO BARS", "Po", 0),
("OLD UYGHUR PUNCTUATION TWO DOTS", "Po", 0),
("OLD UYGHUR PUNCTUATION FOUR DOTS", "Po", 0),
)
|
https://github.com/r8vnhill/apunte-bibliotecas-de-software | https://raw.githubusercontent.com/r8vnhill/apunte-bibliotecas-de-software/main/Unit2/data_classes.typ | typst | == Data Classes
Data classes in Kotlin are designed primarily to hold data.
These classes automatically provide standard functionality such as `toString()`, `equals()`, `hashCode()`, and `copy()` without the need to implement them manually.
=== Characteristics of Data Classes
- *Automatic functionality:* Data classes automatically generate methods such as `toString()`, `equals()`, `hashCode()`, and `copy()`.
- *Not inheritable:* They cannot be subclassed, but they can be extended through extension functions.
=== Example of a Data Class
```kotlin
data class Person(val name: String, val age: Int)
```
In this example, `Person` is a data class with two properties: `name` and `age`.
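A quick illustration of the generated methods (a minimal sketch; the printed results are shown as comments):
```kotlin
fun main() {
    val a = Person("Alice", 29)
    val b = Person("Alice", 29)

    println(a)            // Person(name=Alice, age=29), from the generated toString()
    println(a == b)       // true, the generated equals() compares all properties
    println(a.hashCode() == b.hashCode()) // true, hashCode() is consistent with equals()
}
```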
=== Primary Constructor
Data classes must have a primary constructor.
Although `var` can be used, it is recommended to use `val` for immutable properties and the `copy` method to create new instances with modifications.
```kotlin
val person1 = Person("Alice", 29)
val person2 = person1.copy(age = 30)
```
In the example above, `person2` is a copy of `person1` but with a modified age.
=== Destructuring Data Classes
Data classes can be decomposed into their individual components using destructuring syntax.
```kotlin
fun main() {
val (name, age) = Person("Alice", 29)
println("name = $name, age = $age")
}
```
In this example, the data class `Person` is destructured into the variables `name` and `age`.
|
|
https://github.com/HollowNumber/DDU-Rapport-Template | https://raw.githubusercontent.com/HollowNumber/DDU-Rapport-Template/main/src/chapters/projektbeskrivelse.typ | typst |
#include "./Projektbeskrivelse/indledning.typ"
#include "./Projektbeskrivelse/analyse.typ"
#include "./Projektbeskrivelse/kravspecifikationer.typ"
#include "./Projektbeskrivelse/projektafgraesning.typ" |
|
https://github.com/typst/packages | https://raw.githubusercontent.com/typst/packages/main/packages/preview/cheda-seu-thesis/0.2.0/seu-thesis/parts/main-body-degree-fn.typ | typst | Apache License 2.0 | #import "../utils/states.typ": *
#import "../utils/fonts.typ": 字体, 字号
#let main-body-bachelor-conf(thesisname: [], doc) = {
set page(
header: {
set align(center)
set text(font: 字体.宋体, size: 字号.小五, lang: "zh")
set par(first-line-indent: 0pt, leading: 16pt, justify: true)
show par: set block(spacing: 16pt)
locate(loc => {
let next-heading = query(selector(<__heading__>).after(loc), loc)
if next-heading != () and next-heading.first().location().page() == loc.page() and chapter-level-state.at(loc) == 1 {
[]
} else {
if calc.even(loc.page()) {
thesisname.heading
} else {
let cl1nss = chapter-l1-numbering-show-state.at(loc)
if not cl1nss in (none, "", [], [ ]){
cl1nss
h(0.3em)
}
chapter-l1-name-str-state.at(loc)
}
v(-1em)
line(length: 100%, stroke: (thickness: 0.5pt))
}})
counter(footnote).update(0)
},
numbering: "1",
header-ascent: 10%,
footer-descent: 10%
)
pagebreak(weak: false)
counter(page).update(1)
counter(heading.where(level: 1)).update(0)
part-state.update("正文")
doc
} |
https://github.com/LDemetrios/Typst4k | https://raw.githubusercontent.com/LDemetrios/Typst4k/master/src/test/resources/suite/styling/show-where.typ | typst | --- show-where-optional-field-raw ---
// Test that where selectors also trigger on set rule fields.
#show raw.where(block: false): box.with(
fill: luma(220),
inset: (x: 3pt, y: 0pt),
outset: (y: 3pt),
radius: 2pt,
)
This is #raw("fn main() {}") some text.
--- show-where-optional-field-text ---
// Note: This show rule is horribly inefficient because it triggers for
// every individual text element. But it should still work.
#show text.where(lang: "de"): set text(red)
#set text(lang: "es")
Hola, mundo!
#set text(lang: "de")
Hallo Welt!
#set text(lang: "en")
Hello World!
--- show-where-folding-text-size ---
// Test that folding is taken into account.
#set text(5pt)
#set text(2em)
#[
#show text.where(size: 2em): set text(blue)
2em not blue
]
#[
#show text.where(size: 10pt): set text(blue)
10pt blue
]
--- show-where-folding-stroke ---
// Test again that folding is taken into account.
#set rect(width: 40pt, height: 10pt)
#set rect(stroke: blue)
#set rect(stroke: 2pt)
#{
show rect.where(stroke: blue): "Not Triggered"
rect()
}
#{
show rect.where(stroke: 2pt): "Not Triggered"
rect()
}
#{
show rect.where(stroke: 2pt + blue): "Triggered"
rect()
}
--- show-where-resolving-length ---
// Test that resolving is *not* taken into account.
#set line(start: (1em, 1em + 2pt))
#{
show line.where(start: (1em, 1em + 2pt)): "Triggered"
line()
}
#{
show line.where(start: (10pt, 12pt)): "Not Triggered"
line()
}
--- show-where-resolving-hyphenate ---
// Test again that resolving is *not* taken into account.
#set text(hyphenate: auto)
#[
#show text.where(hyphenate: auto): underline
Auto
]
#[
#show text.where(hyphenate: true): underline
True
]
#[
#show text.where(hyphenate: false): underline
False
]
|
|
https://github.com/typst/packages | https://raw.githubusercontent.com/typst/packages/main/packages/preview/scienceicons/0.0.6/README.md | markdown | Apache License 2.0 | # Open-science icons for Typst
SVG icons for open-science articles
## Usage
```typst
#import "@preview/scienceicons:0.0.6": open-access-icon
This article is Open Access #open-access-icon(color: orange, height: 1.1em, baseline: 20%)
```

## Arguments
The arguments for each icon are:
- `color`: A typst color, `red`, `red.darken(20%)`, `color(#FF0000)`, etc. Default is `black`.
- `height`: The height of the icon, by default this is slightly larger than the text height at `1.1em`
- `baseline`: Change the baseline of the box surrounding the icon, moving the icon up and down. Default is `13.5%`.
Additionally the raw SVG text for each icon can be found by replacing `Icon` with `Svg`.
## List of Icons
- arxiv-icon
- cc-by-icon
- cc-nc-icon
- cc-nd-icon
- cc-sa-icon
- cc-zero-icon
- cc-icon
- curvenote-icon
- discord-icon
- email-icon
- github-icon
- jupyter-book-icon
- jupyter-text-icon
- jupyter-icon
- linkedin-icon
- mastodon-icon
- myst-icon
- open-access-icon
- orcid-icon
- osi-icon
- ror-icon
- slack-icon
- twitter-icon
- website-icon
- youtube-icon
## See All Icons
You can browse and see all icons here:
https://github.com/curvenote/scienceicons/tree/main/typst/docs/scienceicons.pdf

## Contributing
To add or request an icon to be added to this package see: \
https://github.com/curvenote/scienceicons
|
https://github.com/Quaternijkon/notebook | https://raw.githubusercontent.com/Quaternijkon/notebook/main/content/数据结构与算法/.chapter-算法/滑动窗口与双指针/有序数组两数之和.typ | typst | #import "../../../../lib.typ":*
=== #Title(
  title: [Two Sum in a Sorted Array],
reflink: "https://leetcode.cn/problems/two-sum-ii-input-array-is-sorted/description/",
level: 2,
)<有序数组两数之和>
#note(
title: [
    Two Sum II - Input Array Is Sorted
],
description: [
    You are given a 1-indexed integer array `numbers`, already sorted in non-decreasing order. Find two numbers in the array that add up to the target number `target`. If these two numbers are `numbers[index1]` and `numbers[index2]`, then `1 <= index1 < index2 <= numbers.length`.

    Return the indices of the two numbers, `index1` and `index2`, as an integer array `[index1, index2]` of length 2.

    You may assume that #BlueText[each input corresponds to exactly one answer], and you may not use the same element twice.

    Your solution must use #BlueText[only constant extra space].
],
examples: ([
    Input: numbers = [2,7,11,15], target = 9
    Output: [1,2]
    Explanation: The sum of 2 and 7 equals the target 9. Therefore index1 = 1, index2 = 2. We return [1, 2].
  ],[
    Input: numbers = [2,3,4], target = 6
    Output: [1,3]
    Explanation: The sum of 2 and 4 equals the target 6. Therefore index1 = 1, index2 = 3. We return [1, 3].
  ],[
    Input: numbers = [-1,0], target = -1
    Output: [1,2]
    Explanation: The sum of -1 and 0 equals the target -1. Therefore index1 = 1, index2 = 2. We return [1, 2].
]
),
tips: [
    $2 <= "numbers.length" <= 3 * 10^4$

    $-1000 <= "numbers[i]" <= 1000$

    numbers is sorted in non-decreasing order

    $-1000 <= "target" <= 1000$

    Only one valid answer exists
],
solutions: (
    ( name:[Binary Search],
      text:[
        To find two numbers in the array whose sum equals the target, we can first fix the first number and then search for the second one, which equals the target minus the first number. Using the sorted order of the array, the second number can be found with binary search. To avoid using the same element twice, we only search to the right of the first number.
],code:[
```cpp
class Solution {
public:
vector<int> twoSum(vector<int>& numbers, int target) {
for (int i = 0; i < numbers.size(); ++i) {
int low = i + 1, high = numbers.size() - 1;
while (low <= high) {
int mid = (high - low) / 2 + low;
if (numbers[mid] == target - numbers[i]) {
return {i + 1, mid + 1};
} else if (numbers[mid] > target - numbers[i]) {
high = mid - 1;
} else {
low = mid + 1;
}
}
}
return {-1, -1};
}
};
```
]),(
      name:[Two Pointers],
      text:[Initially, the two pointers point to the first and the last element of the array. At each step, compute the sum of the two elements being pointed at and compare it with the target. If the sum equals the target, we have found the unique solution. If the sum is less than the target, move the left pointer one position to the right. If the sum is greater than the target, move the right pointer one position to the left. After moving a pointer, repeat the above steps until the answer is found.

The essence of the two-pointer technique is shrinking the search range. Could it filter out a valid solution? No. Suppose `numbers[i]+numbers[j]=target` is the unique solution, where $0 <= i < j <= "numbers.length" - 1$. Initially the two pointers point to index 0 and index `numbers.length - 1`; the left pointer is at an index at most i and the right pointer at an index at least j. Unless the pointers already start at indices i and j, either the left pointer reaches index i first or the right pointer reaches index j first.

If the left pointer reaches index i first, the right pointer is still to the right of j, so `sum>target`, and therefore the right pointer must move left; the left pointer can never move to the right of i.

If the right pointer reaches index j first, the left pointer is still to the left of i, so `sum<target`, and therefore the left pointer must move right; the right pointer can never move to the left of j.

Hence, during the whole process the left pointer never moves to the right of i and the right pointer never moves to the left of j, so no valid solution is filtered out. Since the problem guarantees a unique answer, the two-pointer approach is certain to find it.],
code:[
```cpp
class Solution {
public:
vector<int> twoSum(vector<int>& numbers, int target) {
int low = 0, high = numbers.size() - 1;
while (low < high) {
int sum = numbers[low] + numbers[high];
if (sum == target) {
return {low + 1, high + 1};
} else if (sum < target) {
++low;
} else {
--high;
}
}
return {-1, -1};
}
};
```
]
)
),
gain:none,
) |
|
https://github.com/GolemT/BA-Template | https://raw.githubusercontent.com/GolemT/BA-Template/main/subtext.typ | typst | #import "template.typ": *
#lorem(60) |
|
https://github.com/kazewong/lecture-notes | https://raw.githubusercontent.com/kazewong/lecture-notes/main/Engineering/SoftwareEngineeringForDataScience/lab/jax.typ | typst | #set page(
paper: "us-letter",
header: align(center, text(17pt)[
*Introduction to Jax*
]),
numbering: "1",
)
#import "./style.typ": style_template
#show: doc => style_template(doc,)
= Foreword
My first encounter with `Jax` was in 2018. I was told that `jax` could be very useful for large scale computation, and I was trying to write a numerical relativity simulation in `jax` so that it could leverage modern accelerators to scale the compute. At the time the development experience was horrible: a lot of the documentation was pretty arcane and it was quite hard for me to debug the code. In the end the code was made, but it was nothing more than a fun side project for me to try out `jax`. Over the years, `jax` has seen a lot of community effort in improving its usability and expanding its capability. There is a growing community interested in adding `jax` to their scientific computing workflow, and I basically use `jax` for most of my deep learning and scientific computing projects. In this tutorial, hopefully I will be able to convert you to `jax`, since I think the switch is strictly beneficial.
= What is Jax
== python on steroids
`jax` is a code transformation library that can do a lot of magic behind the scenes, and we will get to that very soon. Syntax-wise `jax` is `python`: even though there are patterns that are encouraged and discouraged in `jax` that require a different mindset compared to normal `python` code, you are allowed to write `numpy`-like code and enjoy all the benefits `jax` has to offer. This is a huge advantage compared to other options such as writing `CUDA` code, since it basically does not require you to learn a new language.
== XLA
One of the original advantages of `jax` is that it is co-developed with `XLA`, which stands for accelerated linear algebra. `XLA` is a compiler that is designed to compile efficient code for accelerators, such as GPUs and TPUs. The idea is to take high level code, then compile it into low level code that can be executed on the accelerator. It provides almost hand-optimized `CUDA` code performance in a majority of cases, while not requiring the user to learn how to program in `CUDA`. These days `XLA` has moved on to a standalone project which in principle can be integrated with `PyTorch`, but I think the `jax` integration is still the tightest.
= Core features
== jit
The first feature you may want to use in `jax` is its just-in-time compilation feature. Let's look at an example first:
```python
import numpy as np
import jax
import jax.numpy as jnp
def selu(x, alpha=1.67, lambda_=1.05):
return lambda_ * jnp.where(x > 0, x, alpha * jnp.exp(x) - alpha)
def selu_numpy(x, alpha=1.67, lambda_=1.05):
return lambda_ * np.where(x > 0, x, alpha * np.exp(x) - alpha)
selu_jit = jax.jit(selu)
x = jnp.arange(1000000)
%timeit selu(x).block_until_ready()
%timeit selu_jit(x).block_until_ready()
y = np.arange(1000000)
%timeit selu_numpy(y)
```
Here we create three versions of the `selu` function, which is a commonly used activation function in deep learning. The first one is the `jax` version without any optimization, the second one is the `jax` version with `jit` optimization, and the last one is the `numpy` version. Similar to `julia`, the first call to a jitted function incurs the compilation overhead. Unlike `julia`, you can choose whether to compile a function or not, whereas in `julia`, everything is compiled.
Benchmarking the three functions, we can see that the `jit` version is faster than the non-optimized version, and the `numpy` version is slower than the non-optimized version. Why is this the case? There are mainly two reasons. The first is that `jax` can interface seamlessly with `XLA`, which is a compiler designed to produce efficient code for accelerators. This compilation produces more optimized code by fusing operations such that the execution time and memory footprint are reduced. The second is that, without changing the source code, your program can run on a GPU or TPU, which makes large and dense computation much faster.
While jitting generally provides a performance boost, it is not always the case. For example, if the function is very simple, the communication overhead between the CPU and the accelerator may outweigh the performance gain from the compilation. But this is usually a corner case that is not very practical, so while keeping that in mind, try jitting whenever you can.
== grad
Another awesome feature jax provides is automatic differentiation. Again, let's first look at how it works:
```python
import jax
import jax.numpy as jnp
def tanh(x):
return (jnp.exp(x) - jnp.exp(-x)) / (jnp.exp(x) + jnp.exp(-x))
d_tanh = jax.grad(tanh)
assert jnp.isclose(d_tanh(1.57), 1 / jnp.cosh(1.57)**2)  # d/dx tanh(x) = 1 / cosh(x)^2
```
Given a function `f` that is compatible with `jax`, `jax.grad(f)` returns a function that computes the gradient of `f` with respect to its arguments. If this is not magical enough, the cost of evaluating the gradient is usually only a small constant multiple of the cost of evaluating the function itself, unlike finite differencing whose cost grows with the number of inputs, and the accuracy is usually as good as the function itself! How is this even possible?
The idea behind autodiff is basically just the chain rule. Say I have a function `sin(x)`: we know its derivative exactly, which is `cos(x)`. Now if I build on top of that function and I want the gradient with respect to its input, then I just need to apply the chain rule and multiply the Jacobians I get at each stage. There is a lot of infrastructure behind the scenes to make this efficient, but this is the crux of autodiff.
== vmap
The third vanilla feature we will learn here is `vmap`, and here is what it looks like:
```python
def my_function(x, y):
return x@y
x = jnp.arange(100)
y = jnp.eye(100)
my_function(x, y)
x_batch = x[None].repeat(10, axis=0)
y_batch = y[None].repeat(10, axis=0)
jax.vmap(my_function)(x_batch, y_batch)
```
`jax.vmap` takes a function as an argument, then returns a function that adds a batch dimension to all of the function's arguments. If you want to map over different axes of your input, you can use the `in_axes` argument of `vmap`. You can find more detail in the #link("https://jax.readthedocs.io/en/latest/_autosummary/jax.vmap.html")[link].
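For example, a minimal sketch of `in_axes`, mapping over only the first argument (the shapes here are arbitrary):
```python
import jax
import jax.numpy as jnp

def scale(x, w):
    return x * w

xs = jnp.ones((10, 3))  # a batch of 10 inputs
w = jnp.arange(3.0)     # shared weights, not batched

# map over axis 0 of `x` only; `w` is reused for every element of the batch
batched_scale = jax.vmap(scale, in_axes=(0, None))
print(batched_scale(xs, w).shape)  # (10, 3)
```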
If you have ever come across the function `np.vectorize`, you can read from its documentation that `np.vectorize` is actually just a for-loop. While `vmap` achieves the same thing in terms of function signature, the underlying computation is completely different. Since `np.vectorize` is essentially a for-loop, the runtime of the vectorized function is linear in the batch dimension. However, `vmap` adds extra annotations to the function before dispatching it to `XLA`, telling the runtime that the newly added batch axis can be parallelized, so a vmapped function can be executed just as fast as the original function in most cases, as long as you are not memory-bound. This means we can achieve much higher throughput while not paying extra time cost. The cost we have to be mindful of here is memory: since you are essentially parallelizing over one axis, you are computing more things at a time, meaning you will need more memory to store the inputs and the results.
= Sharp bits
While `jax` offers a lot of good features that you should use, it does have some sharp bits that can be a bit unintuitive if you have not worked with this kind of code transformation library.
== Jax only
Now you may be wondering how the compiler knows the gradient of the function you have just coded up, while you are pretty sure this is a completely original piece of code. The answer lies in the fact that, ultimately, the functions or "primitives" you are allowed to use to construct your original function must already be known to `jax`. For example, if you have a function that uses `np.sin`, you must replace it with something whose internals `jax` knows, in this case `jnp.sin`. If `jax` does not know the internals of a function, it will not be able to transform that function automatically.
The implication of this is that any code that needs to be transformed has to be written in `jax`. If you find a package online that is pretty neat, but it does not use `jax` as a dependency, you either need to expose the internals of the code to `jax`, or just rewrite the thing in `jax`.
== Control flows
Writing control flows in `jax` can be a bit tricky as well. Some control flow, such as a Python `if` on traced values, is not allowed if you want to `jit` a function. You may want to consult #link("https://jax.readthedocs.io/en/latest/notebooks/Common_Gotchas_in_JAX.html#control-flow")[this page] if you want to add control flows to your function. In general, you should compose control flows using the primitives provided #link("https://jax.readthedocs.io/en/latest/jax.lax.html#control-flow-operators")[here].
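For example, a branch that would fail under `jit` as a Python `if` can be expressed with `jax.lax.cond` (a minimal sketch):
```python
import jax

@jax.jit
def abs_diff(x, y):
    # a Python `if x > y:` on traced values would raise an error under jit;
    # lax.cond traces both branches and selects one at runtime
    return jax.lax.cond(x > y,
                        lambda operands: operands[0] - operands[1],
                        lambda operands: operands[1] - operands[0],
                        (x, y))

print(abs_diff(3.0, 5.0))  # 2.0
```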
== Dynamic allocation is prohibited
In order to work efficiently with accelerators, `jax` aggressively hates any dynamic allocation, i.e. functions that do not know how much memory they will need at compile time. This means that, in your function, you are almost never allowed to create something that changes in shape if you want to `jit` it. For example, say you want to sample from a distribution using MCMC, you may be tempted to write something along these lines:
```python
def step(x, n):
for i in jnp.arange(n):
x += i
return x
```
This will fail to compile if you try to `jit` it: the loop length `n` is a traced value, so the shape of `jnp.arange(n)` is not known at compile time, and the compiler cannot allocate memory for it. The way to fix this is to use `jax.lax.scan`, of which we will see an example later.
== Long compilation with for loop
While `jax` can provide really good run time performance, and most deep learning practitioners will not be acutely aware of the compilation overhead, the compilation overhead can be pretty annoying when it comes to scientific computing. The difference between deep learning and scientific computing is that, despite being compute heavy, deep neural networks are very simple programs. The core of many neural networks is just matrix multiplication, so there are not many lines to the program. On the other hand, scientific programs often come in a wide variety of forms, and there can be many interdependent modules, hence often way more lines of code involved. At its core, `jax`'s `jit` system is still running on `python`, and overhead related to `python` can make compilation slow. To make it worse, for loops are quite common in scientific computing, and if one naively writes a for loop and tries to compile it in `jax`, `jax` will unroll the whole for loop and inline everything. This means the compilation time of your program scales linearly with the number of loop iterations. One example is when I first tried `jax` in 2018: I was writing a numerical relativity simulation in `jax`, hoping to capitalize on the use of accelerators from `jax`. While the simulation runtime was much faster than alternatives (30 minutes vs a couple of hours), the compilation time was about 2 hours long, which completely defeats the point of having fast code. In a later section, we will see the recommended way to write such operations.
= Ecosystem
== jaxtyping
`jaxtyping` is a package that adds type hints for `jax` arrays and `pytree`s. Instead of annotating an array as `jnp.ndarray`, you can write something like `Float[Array, "n m"]`, which is a type hint for a 2D array with `n` rows and `m` columns. This is very useful since shape mismatches are among the most common errors in deep learning workflows, and having a type hint for the shape of the array can help you catch the error early. You can find more information about `jaxtyping` #link("https://github.com/patrick-kidger/jaxtyping")[here].
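For example, a minimal sketch (the shape names are arbitrary):
```python
import jax.numpy as jnp
from jaxtyping import Array, Float

def affine(w: Float[Array, "out in"], x: Float[Array, " in"]) -> Float[Array, " out"]:
    # the annotations document the expected shapes (and can be enforced
    # at runtime with a checker such as beartype)
    return w @ x

affine(jnp.ones((5, 10)), jnp.ones(10))
```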
== Equinox
`Equinox` is a library that allows you to build neural networks like you would in `PyTorch`. Before I discovered `Equinox`, I was using `Flax`, which is not very intuitive to me, since model parameters are external to the model, and calling functions from a particular layer is not very ergonomic. `Equinox` mirrors the `PyTorch` interface quite well, while preserving some of the perks one may want to use in `jax`. For example:
```python
import equinox as eqx
import jax
import jax.numpy as jnp
from jaxtyping import Array, Float

class MyLayer(eqx.Module):
    weights: Float[Array, "n_out n_in"]
    biases: Float[Array, " n_out"]

    def __init__(
        self,
        weights: Float[Array, "n_out n_in"],
        biases: Float[Array, " n_out"],
    ):
        self.weights = weights
        self.biases = biases

    def __call__(self, x: Float[Array, " n_in"]):
        return self.weights @ x + self.biases

weights = jax.random.uniform(jax.random.PRNGKey(0), shape=(5, 10))
biases = jax.random.uniform(jax.random.PRNGKey(1), shape=(5,))
layer = MyLayer(weights, biases)

test_input = jnp.arange(10)
layer(test_input)
```
This is a very simple example of coding your own layer, and here the variable `layer` is fully compatible with `jit`, `grad` and `vmap`. Further down the road, if you continue to use `jax`, your model is also fully compatible with all the distributed training tools that `jax` provides.
== optax
`optax` is a library that provides a lot of optimization algorithms for `jax`. You will find most of the common optimization algorithms such as `adam` in `optax`. Using optax is slightly different from `PyTorch`, in the sense that the optimizer state needs to be initialized and passed around explicitly, and you have to do something like `params = optax.apply_updates(params, updates)` to change the parameters. We will see `optax` again in the next section.
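In the meantime, here is a minimal sketch of the `optax` workflow on a toy linear fit (the names here are mine, not from the lab template):
```python
import jax
import jax.numpy as jnp
import optax

def loss_fn(params, x, y):
    pred = params["w"] * x + params["b"]
    return jnp.mean((pred - y) ** 2)

params = {"w": jnp.array(0.0), "b": jnp.array(0.0)}
optimizer = optax.adam(learning_rate=1e-2)
opt_state = optimizer.init(params)  # the optimizer state is explicit

x = jnp.linspace(0.0, 1.0, 32)
y = 3.0 * x + 1.0

for _ in range(500):
    grads = jax.grad(loss_fn)(params, x, y)
    updates, opt_state = optimizer.update(grads, opt_state)
    params = optax.apply_updates(params, updates)
```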
== diffrax
`diffrax` is a library for solving differential equations in `jax`. There are a bunch of cool features and solvers in `diffrax`. And here is the coolest thing about having `jax` libraries: most of the code is composable with all the `jax` features we have seen. This means your _solvers_ and _optimizers_ are vmappable, jittable, and probably differentiable. This implies you can train multiple neural networks at the same time on a single GPU, and many more cool things.
= Modeling a pendulum
Modeling dynamical systems with deep neural networks is getting a lot of attention because of its implications. Many pieces of infrastructure in our society are designed with very principled models, from turbines to electrical transformers (not to be confused with the deep learning architecture). If deep learning can improve these models' performance, it will lead to better infrastructure and hence has huge societal impact.
In this section, we are going to attempt to solve one of the simplest dynamics problems: forward modeling a pendulum system. Now, instead of providing the position and velocity of the pendulum to you, let's say your lab mate is also very deep-learning-brained, and provides you with videos of the pendulum instead. We are going to simulate a dataset of images of the pendulum, then model the system with a neural network. We are going to build a basic CNN to emulate the system. There is also a note on a more involved solution that combines a VAE and a neural ODE to model the system, but I figure we will not have enough time in the class so I am leaving it as optional.
== Step 1: Solving the dynamics
A pendulum system can be modeled as the following second order ODE:
$ frac(d^2 theta, d t ^2) = -frac(g,l)sin theta $
where $g$ is the acceleration due to gravity, $l$ is the length of the pendulum, and $theta$ is the angle of the pendulum. When implementing an ODE solver, we often favor solving a system of coupled first order ODEs instead of a second order ODE, so we are going to rewrite the equation as:
$ frac(d theta, d t) &= omega \
frac(d omega, d t) &= -frac(g,l)sin theta
$
We are now ready to solve this system forward in time with `diffrax`. Take a look at the example #link("https://github.com/patrick-kidger/diffrax?tab=readme-ov-file#quick-example")[here], and implement the `simulate_pendulum` function in `generate_data.py` in your template code.
The function signature of `simulate_pendulum` is given to you, and your task is to create the relevant objects in the function and fill out the line
```python
sol = dx.diffeqsolve(
...
)
```
Make sure to include `args` such as the length of the pendulum and gravity, and remember to create a `dx.SaveAt` object to save the trajectory of the pendulum.
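For reference, here is a minimal sketch of the pieces involved (the time grid, initial condition, and the values of gravity and length are assumptions; the template's exact signature may differ):
```python
import diffrax as dx
import jax.numpy as jnp

def vector_field(t, state, args):
    theta, omega = state
    g, l = args
    return jnp.array([omega, -(g / l) * jnp.sin(theta)])

ts = jnp.linspace(0.0, 10.0, 100)
sol = dx.diffeqsolve(
    dx.ODETerm(vector_field),
    dx.Tsit5(),
    t0=ts[0], t1=ts[-1], dt0=0.01,
    y0=jnp.array([jnp.pi / 4, 0.0]),  # initial angle and angular velocity
    args=(9.81, 1.0),                 # gravity and pendulum length
    saveat=dx.SaveAt(ts=ts),
)
# sol.ys has shape (100, 2): theta and omega at each saved time
```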
== Step 2: Rendering an animation
The next step is to render the simulation into images which we will use as our data. Instead of actually generating the images and saving them as pngs, we are only going to save the output of the simulation as a `jax.numpy` array. This allows us to `vmap` over the rendering function later. You are required to fill out the `render_pendulum` function in the `generate_data.py` file.
Once again the function signature of `render_pendulum` is given to you. Here is the pseudo code of the function:
+ Generate a grid denoting the position of each pixel with `jnp.meshgrid`
+ Given the angle of the pendulum, compute the coordinate of the pendulum with respect to the center of the image
+ Compute the distance from each pixel to the pendulum with `jnp.linalg.norm`, and set the pixel's value to be 1 if the distance is less than the radius of the pendulum, and 0 otherwise.
+ (Optional) Plot the image to see whether the pendulum is rendered correctly.
== Step 3: Creating our data
Now that we have the simulation code and rendering code, we simply have to loop over them to generate a dataset for our machine learning model. You are required to fill out the `generate_dataset` function in the `generate_data.py` file. You may notice the function signature is a bit odd, in that we require two input frames and only predict one output frame (as seen in the channel dimension). Think a bit about why this is the case; you are also welcome to modify the function to take one input frame and output one frame and see what happens.
The thing you have to pay attention to in this function is that you should use `vmap` to render the pendulum frames instead of a for loop. Another thing is that, since we don't necessarily have to jit this complete function #footnote[You are welcome to jit the whole thing, but that is a bit more involved so we are not going to do it here.], you can use non-`jax` objects such as a list.
Finally, to ensure reproducibility, note that the random number system in `jax` works a bit differently than `numpy`'s. You should use `jax.random.PRNGKey` to generate random numbers in `jax`. Unlike `np.random.uniform`, you have to provide an additional key to the random generator, which can be created using `jax.random.PRNGKey(your_integer)` and acts as a seed. And instead of manually inputting all the keys, you can use `jax.random.split` to generate a list of keys, then use those keys in your function. You can find more information about the random number system in `jax` #link("https://jax.readthedocs.io/en/latest/random-numbers.html")[here].
= Building an emulator with `Equinox`
Now that we have our data from the previous section, let's build an emulator to model this problem. Our task is defined as: given a snapshot of the current system, predict how it will look at the coming time step. Since our data is in the form of images, let's try to build a convolutional neural network to model our dataset.
== Step 1: Building the model
Building a model in `Equinox` is very similar to building a model in `PyTorch`. You should find the starter code in `models.py`. Let's see what the essential components of our model are:
+ You should see the variable `layers` as a class variable. This is where you can define relevant variables to use in your class. For this tutorial, `layers` should be sufficient.
+ In the `__init__` function, you should put the code which creates the layers of your model. For this specific tutorial, we will use the `eqx.nn.Conv2d` layer and `jnp.tanh` as our activation. Checkout the #link("https://docs.kidger.site/equinox/api/nn/conv/")[documentation] for more information. You may also need #link("https://docs.kidger.site/equinox/api/nn/sequential/#equinox.nn.Lambda")[Lambda]
After that, you just need to implement the `__call__` function, which is the forward pass of your model. Since we have access to `jit`, it is actually okay to just loop through the layers.
== Step 2: Writing the training loop
You should be able to find the template code in the `train_models.py` file. There are a couple of things you have to implement for the training loop to work:
+ Implement a mean squared error as your loss function in `loss_fn`. There is a tricky bit related to how you should handle a batch of data. It is not too difficult to figure out so I will leave that as a puzzle.
+ In the main train function, you need to define an optimizer. We will use `optax` to define our optimizer, you can find information #link("https://optax.readthedocs.io/en/latest/getting_started.html#basic-usage-of-optax")[here].
+ You will have to implement the `make_step` function. This could be a bit involved, so I leave a lot of hints in the body of the function. The idea here is that this function takes your model, optimization state, and a batch of data as input, then spits out a new model, a new optimization state, and the loss (a minimal sketch follows after this list).
+ Finally, you have to loop over a certain number of epochs to train your model.
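A minimal sketch of what `make_step` might look like (the `loss_fn` signature and the optimizer choice are assumptions; follow the hints in the template):
```python
import equinox as eqx
import optax

optimizer = optax.adam(1e-3)
# opt_state = optimizer.init(eqx.filter(model, eqx.is_array))

@eqx.filter_jit
def make_step(model, opt_state, batch):
    inputs, targets = batch
    # differentiate the loss with respect to the array leaves of the model
    loss, grads = eqx.filter_value_and_grad(loss_fn)(model, inputs, targets)
    updates, opt_state = optimizer.update(grads, opt_state)
    model = eqx.apply_updates(model, updates)
    return model, opt_state, loss
```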
== Step 3: Training and evaluation
Once you have completed the training loop, it is time to generate the data and train your model. Generate a small amount of training data using the functions you have defined in `generate_data.py`, then train your model with the training loop. After you have trained your model, try making a simulation of the pendulum and compare it with a true simulation.
// = (Optional) Combining VAE and ODE for better Modeling
// Now we have seen what a black box model is capable of, let's see if we can do better if we combine machine learning and more traditional modeling tools such as an ODE. There is two part to this problem: first, since the dimensionality of the our data is very high (the number of pixel), having an ODE to model this is going to be pretty nightmarish. So we are going to use a VAE to compress the dimensionality of the data#footnote[If I am given this problem and I need to solve it like my life depends on it, I would rather use a pose estimation model like YOLO instead of modeling in the latent space. But this is a good example to show how we can combine different libraries in `jax` to come up with a good solution.]. The next thing we are going to do is to solve a neural ode in the latent space, then decode back into the image space.
// == Step 1: Building a VAE
// == Step 2: Modeling the latent space with neural ODE
// == Step 3: Projecting back into the image space.
== A slight rant
Since I have a physics background, and I have worked on some simulations before, I have a stroke every time some obviously oblivious "machine learning experts" claim their network can accelerate simulations by orders of magnitude. I once believed in that, only to later find out the field of deep learning is filled with bogus claims and these experts have no idea what they are talking about. The setting they present usually follows the pattern shown here:
+ There are some benchmark datasets they either download from somewhere or generate from some code.
+ The claimed SOTA code is slow.
+ They use some neural networks to model the simulation end-to-end, i.e. emulating the simulations.
+ Loss goes down, and they show some pictures of the simulations saying they look the same.
+ Claim victory.
If you are coming from a machine learning background, this is basically standard practice. Propose some new architecture, show improvement on benchmark metrics, then you can publish in a big conference. As long as you have enough buzzwords in your paper, you should make it to a big conference with relative ease.
And as a data-science-aligned person, you may have the impulse to slap a deep learning model onto any problem, be it a CNN, a graph network, a normalizing flow, or an LLM. After all, that's what you learn in school. Unfortunately, despite all the hype you have heard about deep learning, as of September 2024, there are almost no deep learning solutions that are state-of-the-art, and in fact, most ML solutions are far from SOTA #footnote[For interested students, there is a fun recent paper about deep learning in CFD.].
The problem is, these days training neural networks has become so fast and easy that people forget that neural networks have the following disadvantages:
+ They are not accurate, especially in a small data regime.
+ They are expensive. A GPU is often needed to iterate through a dataset.
+ They are often uncontrollable. I have full control and guarantees over an RK4 solver, but I cannot say the same about even an MLP.
These properties are often overlooked when an ML team is given a problem. We have raised a whole generation of students and engineers that can only solve problems through neural networks, GPUs, and more data. But real life problems come with constraints that may not allow you to do so. GPUs cost money, electricity costs money, collecting data costs money, and ML engineers are massively overpriced. Let's go back to the pendulum system for a second: the inaccuracy in ML makes the CNN solution unreliable. You may argue you just need more data and maybe data augmentation to make the CNN work better, but that costs money and time. A competent engineer who actually cares about solving the problem should think about how to combine all the tools one has access to and solve the problem in the fastest and cheapest way possible. In this case, running the experiment is very cheap: instead of collecting images, I would rather just measure the position and velocity of the pendulum, probably by attaching a string to a readout through a smooth pulley. Once I have the data, I don't even need a computer to create a model that will certainly beat any of the machine learning models here; it's called physics, buddy.
= Best practices
== Think like a GPU
If you intend to run something on the GPU, it is helpful to think like a GPU. The way GPUs (or other typical accelerators) work is by "packing" compute, meaning they usually manipulate a large batch of data at once. The clock speed of a GPU core is usually slower than a CPU core's, but the GPU has many more cores. So when you write a program in `jax`, you should think about how to execute as many units of compute in parallel as possible. Say your GPU can do 100 units of compute per cycle: if you do not saturate that bandwidth, the extra capacity is simply "wasted". So when you write a `jax` program, you should always ask yourself whether the way you write it is designed for a GPU.
== Avoiding in-place mutation
Speaking of thinking like a GPU, you should avoid in-place mutation. In-place mutation is when you change the value of a variable without creating a new one, which usually happens when you mutate values within an array, i.e. `x[0] = 1`. You can see how this is not very GPU friendly, since it requires the computer to go to a very specific location and operate locally, instead of applying a lot of changes everywhere. In `jax`, in-place mutation is not allowed, and if you try to do so, you will get an error. If you have to mutate some values, instead of `x[0] = 1`, you have to write `x = x.at[0].set(1)`.
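As a small sketch (the array values are arbitrary), the functional update pattern looks like this; under `jit`, XLA can often turn such updates back into true in-place operations:

```python
import jax.numpy as jnp

x = jnp.zeros(5)
# x[0] = 1.0          # not allowed: jax arrays are immutable
x = x.at[0].set(1.0)    # functional update: returns a new array
x = x.at[1:3].add(2.0)  # other update ops exist too, e.g. .add / .mul
```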
== Scan your for loop
We have mentioned that writing a for-loop can result in long compilation times when `jit`-ing the function. One way to avoid this is to use `jax.lax.scan`, a function that allows you to write a for-loop in a way that is friendlier to the compiler. Essentially it tells the compiler that it does not have to unroll the loop and can just compile the body of the loop once. The syntax of `jax.lax.scan` is a bit weird. While the official documentation is #link("https://jax.readthedocs.io/en/latest/_autosummary/jax.lax.scan.html")[here], here is an example of how you can use it:
```python
import jax
import jax.numpy as jnp
def body(carry, x):
    carry = carry + x
    return carry, carry
def scan_example(x):
return jax.lax.scan(body, 0, x)
scan_example(jnp.arange(10))
```
The output of the function is `(45, array([0, 1, 3, 6, 10, 15, 21, 28, 36, 45]))`. The first element of the tuple is the final carry (the aggregated value of the loop), and the second element stacks the value emitted by the body at each step.
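For intuition, here is a plain-Python sketch of the loop that this `scan` replaces (the `scan` version avoids unrolling the loop at trace time):

```python
def manual_scan(xs):
    carry, ys = 0, []
    for x in xs:
        carry = carry + x
        ys.append(carry)
    return carry, ys  # same (final carry, per-step outputs) structure
```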
== `jit` at the top level
When you transform a function with `jit`, it looks through your code and tries to optimize it before compiling. So if you `jit` at the highest level possible, the compiler has more information and can emit more efficient code. Because of this, I usually save the `jit` for the very last moment.
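A minimal sketch of what this means in practice (the function names here are made up): decorate the outermost function instead of the inner pieces, so the compiler sees, and can fuse, the whole computation:

```python
import jax
import jax.numpy as jnp

def scale(x):
    return 2.0 * x

def shift(x):
    return x + 1.0

@jax.jit  # jit once, at the top level
def pipeline(x):
    return shift(scale(x))  # both steps are compiled (and fused) together

pipeline(jnp.arange(4.0))
```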
== Check your gradient
Sometimes you may get a wacky result in a script even though you have checked everything you could think of. You may have printed the value of the function and seen no problem at all. The thing is, sometimes the function itself can be well defined while its gradient is `nan`. For example, if you use the function `nansum`, which sums an array while ignoring `nan`, the gradient of the final function can be broken and may not be as useful as intended.
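A minimal sketch of such a check, using the `nansum` example (the gradient at the `nan` entry comes out as `nan` even though the forward value looks fine):

```python
import jax
import jax.numpy as jnp

def loss(x):
    return jnp.nansum(x ** 2)  # the forward pass happily ignores the nan

x = jnp.array([1.0, jnp.nan, 3.0])
print(loss(x))                 # 10.0 -- looks perfectly fine
g = jax.grad(loss)(x)
print(g, jnp.isnan(g).any())   # the gradient still contains nan
```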
|
|
https://github.com/topdeoo/Course-Slides | https://raw.githubusercontent.com/topdeoo/Course-Slides/master/Courses/Method2CS/main.typ | typst | #import "../../theme/iTalk.typ": *
#show: nenu-theme.with(
short-title: "PyTorch",
short-date: "23-12-19",
short-author: "Virgil"
)
#let pinit-highlight-equation-from(height: 2em, pos: bottom, fill: red.darken(5%), highlight-pins, point-pin, body) = {
pinit-point-from(
fill: fill,
pin-dx: -0.6em,
pin-dy: if pos == bottom { 0.8em } else { -0.6em },
body-dx: 0pt,
body-dy: if pos == bottom { -1.7em } else { -1.6em },
offset-dx: -0.6em,
offset-dy: if pos == bottom { 0.8em + height } else { -0.6em - height },
point-pin,
rect(
inset: .5em,
stroke: (bottom: 0.12em + fill),
{
set text(fill: fill)
body
}
)
)
}
#title-slide(
title: "A Diving Odyssey through PyTorch",
authors: (name: "凌典", email: "<EMAIL>"),
logo: image("fig/nenu-logo-title.png", width: 30%),
institution: "Northeast Normal University",
date: "2023-12-19"
)
#slide(
session: "PyTorch",
title: "What is PyTorch"
)[
#only((beginning: 1, until: 4))[
== Pytorch is a AI Framework
#parbreak()
#v(.2em)
    + PyTorch defines a grammar to describe the network
    + We use that grammar to build the network
]
#only((beginning: 2, until: 4))[
== PyTorch is a #pin(1)Compiler#pin(2)
#pinit-highlight(1, 2)
#parbreak()
#only((beginning: 3, until: 4))[
#pinit-point-from(
fill: black,
pin-dx: -5pt,
pin-dy: 5pt,
body-dx: -10em,
body-dy: 8pt,
offset-dx: -45pt,
offset-dy: 40pt,
2
)[
#text(fill: black)[How to understand the \
compiler here?]
]
]
#only(4)[
#pinit-point-from(fill: red.darken(10%) ,2)[
#text(fill: red)[
          As there are many programming languages and\ hardware targets, we may develop an all-in-one compiler
]
]
]
]
]
#slide(
session: "PyTorch",
title: "What is Machine Learning Compiler"
)[
#only((beginning: 1, until: 2))[
#figure(
placement: top,
image("fig/trad-compiler.png", width: 60%),
caption: [#pin(1)Traditional Compiler#pin(2)]
)
#only(1)[
#pinit-highlight(1, 2)
#pinit-point-from(2)[
So many architectures \
        we need an all-in-one \
compiler
]
]
]
#only(2)[
    In a traditional compiler, we can define an IR to describe the computation process.
What about ML Compiler?
]
#only(3)[
// #figure(
// image("fig/ML-compiler.png", width: 60%),
// caption: "ML Compiler"
// )
#figure(
image("fig/ComputationGraph-example.png", width: 80%),
caption: "Computation Graph"
)
Here is a computation graph, which computes $d = (a * w_1 + w_2) * (a * w_1)$
]
#only((beginning: 4, until: 6))[
#image("fig/ML-compiler.png")
]
#only(4)[
#simple-arrow(
fill: red,
stroke: 0pt,
start: (50pt, 0pt),
end: (280pt, -180pt),
thickness: 2pt,
arrow-width: 4,
arrow-height: 4,
inset: 0.5,
tail: (),
)
    #text(fill: red)[Model here can be seen as a programming\
language]
]
#only(5)[
#simple-arrow(
fill: red,
stroke: 0pt,
start: (100pt, 0pt),
end: (500pt, -220pt),
thickness: 2pt,
arrow-width: 4,
arrow-height: 4,
inset: 0.5,
tail: (),
)
#text(fill: red)[a.k.a computation graph]
]
#only(6)[
#simple-arrow(
fill: red,
stroke: 0pt,
start: (100pt, 0pt),
end: (550pt, -80pt),
thickness: 2pt,
arrow-width: 4,
arrow-height: 4,
inset: 0.5,
tail: (),
)
#text(fill: red)[A backend to translate the Graph IR to \
executable machine code on different device]
]
]
#slide(
session: "PyTorch",
title: "Structure of PyTorch"
)[
#only(1)[
As we know, a compiler can be decomposed into 2 parts:
+ front end
+ back end
Then what about PyTorch?
]
#only(2)[
#figure(
image("fig/pytorch_compiler.drawio.png", height: 90%),
caption: "PyTorch Structure"
)
]
]
#slide(
session: "Front End",
title: "FX Graph"
)[
#only((until: 1))[
    As we have seen, PyTorch uses a graph to describe the computation process, which is called the `Computation Graph`
    But how does PyTorch describe the computation graph, and how does it capture the graph?
]
#only(2)[
    Before PyTorch 2.0, PyTorch used an FX Graph to represent the computation graph; example code:
]
#only(2)[
#grid(
columns: (100%),
rows: (5fr, 10fr)
)[
```python
class MyModule(nn.Module):
      def __init__(self):
super().__init__()
self.param = \
torch.nn.Parameter(torch.rand(3, 4))
self.linear = torch.nn.Linear(4, 5)
def forward(self, x):
return self.linear( \
x + self.param \
).clamp(min=0.0, max=1.0)
```
#only(2)[
#set align(right)
#image("fig/FX_Graph_example.png", fit: "contain")
]
]
]
]
#slide(
session: "Front End",
title: "How to capture the graph"
)[
#only(1)[
#set align(center + horizon)
#text(size: 1.5em)[
We use a front end module called #pin(3) `dynamo` #pin(4) to capture the Computation Graph.
#pinit-highlight(3, 4)
]
// #image("fig/dynamo_.png", fit: "contain", width: 80%, height: 85%)
]
// #only(2)[
// #set align(center + horizon)
// #text(size: 1.5em)[
// Before we diving into it
// we need to know some basic concepts
// ]
// + How does Python execute
// + What is Frame
// ]
]
#slide(
session: "Front End",
title: "torch._dynamo (high level) intro"
)[
#only((until: 2))[
    Why is it called `dynamo`?
]
#only(2)[
Historical records and official documentation indicate that the name `dynamo` can be traced back to a 1998 technical report by HP, where the term `dynamic optimization` was used.
]
#only((beginning: 3))[
#v(1em)
#set align(center + horizon)
#text(size: 1.5em)[dynamo is a Python-level #strong[Just-in-Time] (JIT) compiler]
#v(2em)
#text(size: 1.2em)[Before diving into dynamo, we need to know some basic concepts]
]
]
#focus-slide()[
#set align(center)
#strong[How does Python execute?]
]
#slide(
session: "CPython",
title: "Intro"
)[
#only(1)[
    The reference Python interpreter is written in C and is called CPython.
    Python code runs in CPython like this:
#grid(
columns: (60%, 40%),
image("fig/CPython-workflow.png", height: 70%, fit: "contain"),
text[
        + `.py` source code is translated into bytecode
        + The CPython VM loops over and executes the bytecode
]
)
]
#only(2)[
    An example looks like this:
#grid(
columns: (40%, 10%, 50%),
text[
```py
def add(x, y):
return x + y
```
],
simple-arrow(
fill: red,
start: (-50pt, 50pt),
end: (50pt, 50pt),
thickness: 4pt,
arrow-width: 4,
arrow-height: 4,
inset: 0,
tail: (),
),
text[
2 #h(1em) LOAD_FAST #h(5em) 0 (x)
4 #h(1em) LOAD_FAST #h(5em) 1 (y)
6 #h(1em) BINARY_OP #h(5.2em) 0 (+)
10 #h(.5em) RETURN_VALUE
]
)
    The bytecode of the function `add` is on the right
The VM will execute the bytecode line by line
]
]
// #slide(
// session: "CPython",
// title: "Frame and Frame Evaluation"
// )[
// #only(1)[
// A `Frame` is a bit different from `Stack Frame` in C, it is a data structure to store the local variables and some other information.
// We can describe a `Frame` like this:
// #grid(
// columns: (70%, 30%),
// image("fig/CPython-Frame-example.png", width: 80%, height: 60%, fit: "contain"),
// text[
// The call stack of a function is essentially the process of recursively creating #highlight[`Frame`] and executing them.
// ]
// )
// ]
// #only((2, 3))[
// #grid(
// columns: (80%, 20%),
// text[
// ```py
// def foo():
// frame = inspect.currentframe()
// cur_func_frame = frame
// print(f'code name of current frame is {cur_func_name.f_code.co_name}')
// prev_func_frame = frame.f_back
// print(f'current code name of previous frame is {prev_func_frame.f_code.co_name}')
// def bar(a=1):
// foo()
// if __name__ == '__main__':
// bar()
// ```
// ],
// text[
// #only(2)[#highlight(fill: rgb("#f2a1cc"))[result is: ]]
// code name of current frame is #highlight[foo]
// current code name of previous frame is #highlight[bar]
// #only(3)[
// so we can get all previous frame info in current frame
// ]
// ]
// )
// ]
// #only((4, 5))[
// #set align(center + horizon)
// #text(size: 1.5em)[
// How about stealing the future?
// ]
// #set align(left + horizon)
// If we know the function frame before we execute it, then we can do some awesome things to the frame, like:
// #v(.5em)
// - inject some code into the frame
// - changing the execution order of the code
// - etc.
// ]
// #only(5)[
// #set align(center + horizon)
// #text(size: 1.2em)[
// Python Enhancement Proposal 523 (PEP 523) is a proposal to add a new API to the Python interpreter to allow #strong[stealing the future].
// ]
// ]
// ]
// #focus-slide()[
// #set align(center + horizon)
// So, what does dynamo do?
// ]
// #slide(
// session: "Front End",
// title: "Frame Evaluation"
// )[
// #only(1)[
// #set align(center + horizon)
// #image("fig/PEP-523.png", fit: "stretch")
// ]
// #only((2, 3))[
// #grid(
// columns: (40%, 60%),
// column-gutter: 3em,
// image("fig/PEP-523-dynamo.png", width: 120%, fit: "stretch"),
// text[
// `dynamo` change the `_PyEval_EvalFrameDefault` function to its self-defined function
// `dynamo` makes a clever choice by performing bytecode parsing at the Python layer and \
// passing it as a callback function to a frame evaluation function.
// #only(3)[
// When we invoke `optimizer('inductor')(fn)`, `dynamo` replaces the frame evaluation function \
// of `fn` with its own custom one, and passes the callback function as an argument.
// ]
// ]
// )
// ]
// ]
#slide(
session: "Front End",
title: "Captures Computation Graph"
)[
#only(1)[
#set align(center + horizon)
#text(size: 1.2em)[
`dynamo` captures the computation graph by #strong[stealing the future]
      But we will not discuss the details of this; we only care that `dynamo` captures the computation graph before the frame's bytecode is executed.
]
]
#only(2)[
#text(size: 1.2em)[
Still remember FX Graph?
]
In general, we could simply map each statement to a node in the graph like this:
#grid(
columns: 2,
column-gutter: 3mm,
text[
```py
def forward(self, x):
return self.linear(
x + self.param
).clamp(min=0.0, max=1.0)
```],
image("fig/FX_Graph_example.png", fit: "contain")
)
    But this is not enough; we also need to capture control flow, like `if` statements.
]
#only((3, 4))[
    Let us see an example: when we have control flow like this:
```py
def toy_example(x):
a = nn.Linear(1, 1)(x)
b = nn.Linear(1, 1)(x)
if x.sum() < 0:
return a + b
return a - b
```
    We cannot capture the control flow in an FX Graph,
    because FX Graph is a `symbolic trace`, i.e. it will not do any computation, it just records the computation.
]
#only(4)[
Then what does `dynamo` do?
]
// #only((5, 6))[
// Let's see:
// #grid(
// columns: 2,
// column-gutter: 5em,
// text[
// ```python
// def forward(x):
// if x.sum() < 0:
// return x + 1
// else:
// return x - 1
// ```
// ],
// grid(
// columns: 1,
// rows: 2,
// row-gutter: 2em,
// rect[
// #only((5, 6))[
// ```py
// def forward(self, x: torch.Tensor):
// sum_1 = x.sum(); x = None
// lt = sum_1 < 0; sum_1 = None
// return (lt, )
// ```
// ]
// ],
// rect[
// #only((5, 6))[
// ```py
// def forward(self, x: torch.Tensor):
// add = x + 1; x = None
// return (add, )
// ```
// ]
// ]
// )
// )
// #only(6)[
// #text(size: 1.2em)[
// ...wait, Why there are only 2 computation graph?\
// Where is the `else` branch?
// ]
// ]
// ]
]
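// A minimal capture sketch (not shown on the slides): `torch.fx.symbolic_trace`
// is the classic entry point for obtaining an FX Graph of a module.
//
//   import torch
//
//   gm = torch.fx.symbolic_trace(MyModule())  # MyModule as defined earlier
//   print(gm.graph)  # the captured FX Graph
//   print(gm.code)   # python code regenerated from the graph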
#slide(
session: "Front End",
title: "Graph Break"
)[
// #only(1)[
// #grid(
// columns: 2,
// column-gutter: 10mm,
// text[
// ```py
// def toy_example(x):
// a = nn.Linear(1, 1)(x)
// b = nn.Linear(1, 1)(x)
// if x.sum() < 0:
// return a + b
// return a - b
// ```
// ],
// text[
// With `Guard`, `dynamo` can guarantee that if the tensor does not change, the computation graph will not change, so we do not need to compiler the `else` branch.
// But... what if the tensor changes?
// ]
// ),
// ]
#only(1)[
      Let's see this example:
]
#only((beginning:1, until: 6))[
#grid(
columns: 2,
column-gutter: 12mm,
[
#only((1, 2))[
```py
def toy_example(x):
a = nn.Linear(1, 1)(x)
b = nn.Linear(1, 1)(x)
if x.sum() < 0:
return a + b
return a - b
```
]
#only((3, 4, 5, 6))[
```py
def compiled_toy_example(x):
a, b, lt = __compiled_fn_0(x)
if lt:
return __resume_at_30_1(b, x)
else:
return __resume_at_38_2(a, x)
```
]
],
[
#only(1)[
      `dynamo` here takes an action called `Graph Break`
      `toy_example()` would be compiled into 3 computation graphs.
]
#only(2)[
```py
def compiled_toy_example(x):
a, b, lt = __compiled_fn_0(x)
if lt:
return __resume_at_30_1(b, x)
else:
return __resume_at_38_2(a, x)
```
      the three functions are shown below:
]
#only(3)[
      Before the `if` statement:
```py
def __compiled_fn_0(x):
a, b = nn.Linear(1, 1)(x), nn.Linear(1, 1)(x)
  return a, b, x.sum() < 0
```
]
#only(4)[
`if` branch:
```py
def __resume_at_30_1(x):
goto if_next
a, b = nn.Linear(1, 1)(x), nn.Linear(1, 1)(x)
if x.sum() < 0:
label if_next
return a + b
return a - b
```
]
#only(5)[
`else` branch:
```py
def __resume_at_38_2(x):
goto if_jump
a, b = nn.Linear(1, 1)(x), nn.Linear(1, 1)(x)
if x.sum() < 0:
b = a + b
label if_jump
return a - b
```
]
#only(6)[
      First, the function `__compiled_fn_0()` will be compiled, and we obtain the `lt` flag
      Then `dynamo` will compile the `if` branch #highlight(fill: rgb("#f2f1cf"))[*or*] the `else` branch according to the `lt` flag.
]
]
)
]
]
#slide(
session: "Dynamic Compile",
title: "Dynamic Compile"
)[
Let's consider a example:
#grid(
columns: (50%, 50%),
column-gutter: 1.5em,
[
A simple graph:
```py
def forward(x, y):
return (x + y) * y
```
      Here $x$ and $y$ are tensors, which means each has its own shape, dtype, device, etc.
],
[
#only((2, 3))[
Let's consider that somehow the $x$ tensor changes, such as $x arrow.r x^T$
Then the computation graph will change, and we need to re-compile the graph.
]
]
)
#only(3)[
The question is: how does `dynamo` know the tensor changes?
(i.e. how does `dynamo` guard the tensor?)
]
]
#slide(
session: "Front End",
title: "Guard"
)[
#only(1)[
What is Guard?
    As we have said, PyTorch is a #pin(1)JIT#pin(2) compiler.
#pinit-highlight(1, 2)
It means PyTorch will #pin(5)compile the code when it is running#pin(6).
#pinit-highlight(5, 6)
But we do not want to compile the code every time we run it.
So we need to #pin(3)guard the code#pin(4), dynamically compile the code when it is running for the first time, and cache the compiled code for later use.
#pinit-highlight(3, 4)
]
#only((beginning: 2, until: 4))[
#grid(
columns: (45%, 55%),
column-gutter: 3mm,
text[
```py
class MyModule(nn.Module):
def __init__():
super().__init__()
self.linear = torch.nn.Linear(4, 5)
def forward(self, x):
return self.linear( \
x + self.param \
).clamp(min=0.0, max=1.0)
```
],
[
#only(2)[
            We need a `Guard` to protect the computation graph, so we avoid recompiling the code every time we run it.
]
#only(3)[
```py
local 'self' NN_MODULE
{
'guard_types': ['ID_MATCH'],
'code': ['___check_obj_id(self, 140260021010384)'],
'obj_weakref': <weakref at 0x7f8f9864a110; to 'Model' at 0x7f90d4ba7fd0>
'guarded_class': <weakref at 0x7f90d4bc71a0; to 'type' at 0x705e0f0 (Model)>
}
```
]
#only(4)[
```py
local 'x' TENSOR_MATCH
{
'guard_types': ['TENSOR_MATCH'],
'code': None,
'obj_weakref': <weakref at 0x7f9035c8bce0; to 'Tensor' at 0x7f8f95bddd00>
'guarded_class': <weakref at 0x7f8f98cf9440; to 'torch._C._TensorMeta' at 0x57f3e10 (Tensor)>
}
```
]
]
)
]
#only((beginning: 5, until: 7))[
#grid(
columns: (55%, 45%),
[
#only((beginning: 5, until: 7))[
```py
local 'x' TENSOR_MATCH
{
'guard_types': ['TENSOR_MATCH'],
'code': None,
'obj_weakref': <weakref at 0x7f9035c8bce0; to 'Tensor' at 0x7f8f95bddd00>
'guarded_class': <weakref at 0x7f8f98cf9440; to 'torch._C._TensorMeta' at 0x57f3e10 (Tensor)>
}
```
]
],
[
#only((5, 6))[
          The `Guard` has a few ways to check whether a variable has changed.
]
#only(6)[
+ `check_obj_id`: i.e. `ID_MATCH`
+ `check_tensor`: i.e. `TENSOR_MATCH`
]
#only(7)[
          When checking a `Tensor`, `Guard` will check the `Tensor`'s:
1. `dtype`: `fp32` to `fp16`, etc.
2. `device`: `GPU0` to `GPU1`, etc.
3. `shape`: `(1, 2)` to `(2, 1)`, etc.
4. `requires_grad`: `True` to `False`, etc.
5. `alignment`: `32` to `64`, etc.
          It will re-compile when any of these checks fails.
]
]
)
]
]
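// A minimal guard illustration (not shown on the slides): a failed
// TENSOR_MATCH, e.g. a dtype change, makes `dynamo` recompile the function.
//
//   import torch
//
//   @torch.compile
//   def f(x):
//       return x * 2
//
//   f(torch.randn(4))                        # compile #1 (fp32 guards)
//   f(torch.randn(4, dtype=torch.float64))   # guard fails -> recompile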
#slide(
session: "Front End",
title: "Loop Unrolling"
)[
#only(1)[
#set align(center + horizon)
#text(size: 1.2em)[
      Control flow also includes `for` and `while` statements; how does `dynamo` handle them?
]
]
#only((2, 3))[
#only(2)[Let's see an example:]
#grid(
columns: 2,
column-gutter: 20mm,
[
```py
@torch.compile
def toy_example(x, n):
for i in range(1, n + 1):
x = x * i
return x
def test():
x = torch.randn(10)
toy_example(x, 4)
```
],
[
#only(3)[
The Graph of `toy_example` is like this:
```py
def forward(self, x : torch.Tensor):
mul = x * 1; x = None
mul_1 = mul * 2; mul = None
mul_2 = mul_1 * 3; mul_1 = None
mul_3 = mul_2 * 4; mul_2 = None
return (mul_3,)
```
]
]
)
]
]
#slide(
session: "Front End",
title: "dynamo"
)[
We can summarize dynamo like this:
#image("fig/dynamo_.png", fit: "contain", width: 100%, height: 85%)
]
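// A minimal usage sketch (not shown on the slides): from the user's side, all
// of the dynamo machinery above is reached through `torch.compile`.
//
//   import torch
//
//   @torch.compile
//   def toy(x):
//       return torch.sin(x) + 1
//
//   toy(torch.randn(8))  # first call: trace, guard, compile
//   toy(torch.randn(8))  # later calls: guards pass, cached code runs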
#slide(
session: "Front End",
title: "AOTAutograd"
)[
#only((1, 2, 3))[
    Before `PyTorch` 2.0, we could capture the forward computation graph by `symbolic trace`
    What about the backward pass?
]
#only((2, 3))[
    Before `PyTorch` 2.0, each operator had its forward and backward implementation
#only(3)[
      ...But we #highlight(fill: rgb("#f2f1cf"))[cannot do any optimization on the backward computation graph]
]
]
#only(4)[
    So we need a function to capture the backward computation graph in a form that can be optimized
We call it `AOTAutograd` in `PyTorch` 2.0
]
#only((5, 6))[
#text(size: 1.3em)[What can `AOTAutograd` do?]
Ahead Of Time Auto Gradient (i.e. `AOTAutograd`) can do:
+ capture the forward & backward computation graph
    + #highlight[compile] the forward & backward computation graph with different backends
+ do optimization on the forward & backward computation graph
#only(6)[
The `compiler` here means:
      Through the `AOTAutograd` interface, the operators in the computation graph are lowered from the `torch` level to the `ATen & Prim` level.
      (e.g. `torch.sigmoid` $arrow.r$ `torch.aten.ops.sigmoid.default()`, which is implemented in `C++`)
]
]
#only(7)[
#text(size: 1.2em)[Why is it called AOTAutograd? ]
AOTAutograd traces both forward and backward propagation in an Ahead-of-Time fashion.
    It captures the forward & backward propagation graph before the function is actually executed.
]
]
#slide(
session: "Front End",
title: "What does AOTAutograd do?"
)[
We can summarize `AOTAutograd` like this:
#algo(
title: "AOTAutograd",
parameters: ("FXGraph",),
radius: 2pt,
row-gutter: 1.2em
)[
#comment(inline: true)[
Generate the forward & backward computation graph
]
$"joint_graph" arrow.l "torch.__dispatch__"("FXGraph")$\
#comment(inline: true)[
Partition the forward & backward computation graph
]
$"forward, backward" arrow.l "partition_fn"("joinit_graph")$\
#comment(inline: true)[
      Decompose the operators into more fine-grained operators
]
$"forward, backward" arrow.l "decompositions"("forward, backward")$\
$"torch.autograd.Function" arrow.l "_compiler"("forward, backward")$
]
]
#slide(
session: "AOTAutograd",
title: "dispatch"
)[
#text(size: 1.2em)[
All we need to know is that:
    `__dispatch__` maps a high-level operator to a low-level implementation like this:
#grid(
rows: 2,
row-gutter: .5em,
code(
row-gutter: 2pt,
line-numbers: false
)[
```py
Tensor dot(const Tensor &self, const Tensor &other)
```
],
code(
line-numbers: false
)[
```yaml
- func: dot(Tensor self, Tensor tensor) -> Tensor
variants: function, method
dispatch:
CPU: dot
CUDA: dot_cuda
MPS: dot_mps
```
]
)
]
]
#slide(
session: "AOTAutograd",
title: "partition graph"
)[
#only(1)[
With `__dispatch__`, we can trace a joint graph (forward & backward graph):
]
#only((1, 2, 3, 4))[
#image("fig/joint-graph.png", fit: "contain", width: 100%, height: 80%)
]
#only(2)[
    Now, we want to partition the joint graph into forward and backward graphs.
]
#only(3)[
How?
]
#only(4)[
    The general principle adopted is to minimize the memory requirement; we model it as a #pin(1)Max-Flow/Min-Cut problem #pin(2)
#pinit-highlight(1, 2)
]
]
#slide(
session: "Front End",
title: "Summary"
)[
#only(1)[#image("fig/Front-End.png", fit: "contain")]
#only(2)[
#grid(
columns: (35%, 70%),
[
```py
model = nn.Sequential(
nn.Conv2d(16, 32, 3),
nn.BatchNorm2d(32),
nn.ReLU(),
).cuda()
```
],
[
#image("fig/demo.png", fit: "contain")
]
)
]
]
#focus-slide()[
#set align(center + horizon)
#strong[Q & A]
]
// #slide(
// session: "Back End",
// title: "Backend Intro"
// )[
// ]
|
|
https://github.com/Vanille-N/mpri2-edt | https://raw.githubusercontent.com/Vanille-N/mpri2-edt/master/typtyp.typ | typst | // TypTyp
// Type assertions library for Typst
//
// by <NAME> <EMAIL>
//
// The purpose of this library is to provide easy type annotations
// (or to be more precise: field requirements) for Typst objects.
//
// Example usage:
//
// ```
// #import "typtyp.typ"
// #let tt = typtyp
//
// #let Person = tt.typedef(tt.struct(
// name: tt.str,
// age: tt.int,
// ))
//
// #let jack = tt.ret(Person, ( name: "Jack", age: 31 ))
// #let jane = ( name: "Jill", age: 22, gender: "W" )
// #tt.is(Person, jane) // assertion error: field 'gender' not part of 'Person'
// ```
//
// The general way that this is implemented for easy composition is that a
// type can be seen a function `any -> bool` that returns `true` iff the object
// has the right type.
// In reality, mostly for diagnostics purposes
// - instead of a boolean, the output is either `()` or an `( err: "Error message" )`
// - a type isn't actually just a `any -> bool` but a `( fn: any -> bool, label: str )`
//
// Note that typechecking is naive and may be costly on large datasets.
// Improvements will be designed if necessary.
#let has_type_of(t) = (obj) => {
if type(t) == type(obj) {
()
} else {
(
err: "Object " + repr(obj) + " does not have the right type: expected " + str(type(t)) + ", got " + str(type(obj)) + ".",
)
}
}
#let result_join_array(arr, fn) = {
let curr = ()
for new in arr {
if curr != () { return curr }
curr = fn(new)
}
curr
}
#let result_join_dict(map, fn) = {
let curr = ()
for (k, v) in map {
if curr != () { return curr }
curr = fn(k, v)
}
curr
}
#let typing_assert(r) = { if r != () { panic(r.err) } }
#let verify(t, o) = { typing_assert((t.fn)(o)); [ ok #o \ ] }
#let falsify(t, o) = { assert((t.fn)(o) != ()); [ #{ (t.fn)(o).err } \ ] }
#let typedef(name, t) = {
if type((:)) == type(t) {
( label: name, fn: t.fn )
} else {
( label: name, fn: t )
}
}
#let is(t, o) = typing_assert((t.fn)(o))
#let ret(t, o) = { is(t, o); o }
// Basic types
#let any = typedef("any", (_) => ())
#let never = typedef("never", (_) => ( err: "There are no instances of the never type" ))
#let bool = typedef("bool", has_type_of(true))
#let int = typedef("int", has_type_of(0))
#let color = typedef("color", has_type_of(rgb(0, 0, 0)))
#let str = typedef("str", has_type_of(""))
#let content = typedef("content", has_type_of([]))
#let null = typedef("none", has_type_of(none))
#verify(any, 1)
#verify(any, true)
#falsify(never, true)
#verify(color, rgb(1, 1, 1))
#verify(bool, false)
#falsify(bool, "")
#verify(null, none)
// Composition mechanisms
// Union types
#let union(..ts) = typedef("union { ... }", obj => {
let ts = ts.pos()
ts.fold(
( err: "None of " + ts.map(t => t.label).join(", ") + " match " + repr(obj) ),
(old, add) =>
if old == () or (add.fn)(obj) == () {
()
} else {
old
},
)
})
#verify(union(bool, int), 1)
#verify(union(bool, int), true)
#falsify(union(str, int), true)
#let option(t) = typedef(t.label + "?", union(t, null))
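// Quick sanity checks for `option`, in the same style as the tests above
// (extra illustrative assertions, exercising the union-with-null semantics):
#verify(option(bool), none)
#verify(option(bool), true)
#falsify(option(bool), 1)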
// Products
#let array(t) = typedef("array { ... }", arr => {
let old = has_type_of(())(arr)
if old != () { old } else { result_join_array(arr, t.fn) }
})
#verify(array(bool), (true, false, true))
#falsify(array(bool), (true, 1, true))
#verify(array(union(int, bool)), (true, 1, true))
#falsify(array(union(int, bool)), (true, 1, "foo"))
#falsify(array(array(bool)), (((true,), (true)), ((true,), (true))))
#falsify(array(array(array(bool))), (((true,), (true)), ((true,), (true))))
#verify(array(array(array(bool))), (((true,), (true,)), ((true,), (true,))))
// Test nesting
// We don't have much room in terms of maximum recursion limit, so ensuring
// that our typechecking assertions don't consume too much function depth
// is actually relevant.
#let nested(n, node, leaf) = {
if n <= 0 { leaf } else { node(nested(n - 1, node, leaf)) }
}
#{ let n = 30; verify(nested(n, array, bool), nested(n, x => (x,), true)) }
#let contains_field(map, field) = { map.at(field, default: none) == map.at(field, default: 1) }
#let struct(..args) = typedef("struct { ... }", obj => {
// Needs to be positional XOR named arguments
let map = args.named()
let tup = args.pos()
if type((:)) == type(obj) {
    // Check that the type is a dictionary with the right fields
    // This simply consists of
    // - checking that all keys in the type exist in the object and that these recursively match
// - checking that all keys in the object are declared in the type
let pre = result_join_dict(
map, (field, ft) => {
if not contains_field(obj, field) {
( err: "Should have field " + repr(field) )
} else {
(ft.fn)(obj.at(field))
}
}
)
if pre != () { return pre }
result_join_dict(
obj, (field, _) => {
if not contains_field(map, field) {
( err: "No such field " + repr(field) )
} else {
()
}
},
)
} else if type(()) == type(obj) {
// Check that the type is a tuple with the right fields
// I.e. lengths match, and the types/values match 1-1 (zip them together)
if tup.len() == obj.len() {
result_join_array(
tup.zip(obj), (vs) => {
let (t, v) = vs;
(t.fn)(v)
}
)
} else { ( err: "Mismatched lengths" ) }
} else { ( err: "Object is not a dictionnary or an array" ) }
})
#verify(struct(foo: str, bar: int, baz: bool), (foo: "abc", bar: 1, baz: false))
#falsify(struct(foo: str, bar: int, baz: bool), (foo: "abc", bar: 1))
#falsify(struct(foo: str, baz: bool), (foo: "abc", bar: 1, baz: false))
#verify(struct(int, bool, str), (1, true, "foo"))
#{ let n = 20; verify(nested(n, x => struct(foo: x), bool), nested(n, x => (foo: x), true)) }
// Demo
#let Person = typedef("Person", struct(age: int, name: str))
#let jack = ( age: 31, name: "Jack" )
#is(Person, jack)
#let Pair(T, U) = typedef("Pair", struct(T, U))
#let swap = (T, U) => (tup) => {
is(Pair(T, U), tup)
let (t, u) = tup
ret(Pair(U, T), (u, t))
}
#let thing = (42, "foobar")
#is(Pair(int, str), thing)
#is(Pair(str, int), swap(int, str)(thing))
|
|
https://github.com/Tran-Thu-Le/typst-collection | https://raw.githubusercontent.com/Tran-Thu-Le/typst-collection/main/multi-files-reference/multi-ref.typ | typst | #let section-refs = state("section-refs", ())
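// `section-refs` accumulates the targets of references that do not resolve
// in the current file (`it.element == none`); the level-1 heading rule
// below then resets the list at the start of each section.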
#show ref: it => {
it
if it.element != none {
return
}
section-refs.update(old => {
old.push(it.target)
old
})
}
#show heading.where(level: 1): it => {
section-refs.update(())
it
}
|
|
https://github.com/lucannez64/Notes | https://raw.githubusercontent.com/lucannez64/Notes/master/Maths_Expertes_Ex_14_05_2024.typ | typst | #import "@preview/bubble:0.1.0": *
#import "@preview/fletcher:0.4.3" as fletcher: diagram, node, edge
#import "@preview/cetz:0.2.2": canvas, draw, tree
#import "@preview/cheq:0.1.0": checklist
#import "@preview/typpuccino:0.1.0": macchiato
#import "@preview/wordometer:0.1.1": *
#import "@preview/tablem:0.1.0": tablem
#show: bubble.with(
title: "Maths Expertes",
subtitle: "Pour le 15/05/2024",
author: "<NAME>",
affiliation: "LHB",
year: "2023/2024",
class: "101",
logo: image("JOJO_magazine_Spring_2022_cover-min-modified.png"),
)
#set page(footer: context [
#set text(8pt)
#set align(center)
#text("page "+ counter(page).display())
]
)
#set heading(numbering: "1.1")
#show: checklist.with(fill: luma(95%), stroke: blue, radius: .2em)
= Exercices
== Numéro 27 page 196
===
En calculant on trouve :
$A^2 = mat(
-2, 3, -3;
-9, 10, -9;
-3, 3, -2;
)$
Therefore $ A^2-3A+2I = mat(
-2, 3, -3;
-9, 10, -9;
-3, 3, -2;
) + mat(
0, -3, 3;
9, -12, 9;
3, -3, 0;
) + mat(
2, 0, 0;
0, 2, 0;
0, 0, 2;) $
$ = mat(
-2+2, -3+3, -3+3;
-9+9, 10+2-12, -9+9;
-3+3, 3-3, -2+2;
) = mat(
0,0,0;
0,0,0;
0,0,0;
) = O $
2. $A$ is invertible if and only if $det(A)!= 0$
Now $ det(A) = mat(delim: "|", 0,1,-1; -3, 4, -3; -1, 1, 0;) $ $ = 0 times 4 times 0 + 1 times (-3) times (-1) + (-1) times (-3) times (1) - (-1) times 4 times (-1) - 1 times (-3) times 0 - 0 times (-3) times 1 = 2 $
So $A$ is an invertible matrix and its inverse is $A^(-1)$
The equation $A X=I$ admits a solution with $X = A^(-1)$
$ X= mat(3/2, -1/2, 1/2;
3/2, -1/2, 3/2;
1/2, -1/2, 3/2;) $
$ A^T = mat(0, -3, -1; 1, 4 , 1; -1, -3 ,0 ;) $
$ a d j(A) = mat(3, -1, 1; 3, -1 ,3; 1, -1, 3;) $
$ A^(-1) = 1/det(A) times a d j(A) = mat(3/2, -1/2, 1/2; 3/2, -1/2, 3/2; 1/2, -1/2, 3/2;) $
Alternatively, $ 1/2(3A-A^2) = I $
$ A((3/2)I-A/2) = I $
So $A$ is invertible and its inverse is $ A^(-1) = 3/2I-1/2 A $
== Ex 30
===
$ A^(-1) = 1/(4 times 2-1 times (-1)) mat( 2, -1; 1, 4;) $
$ = mat(2/9, -1/9; 1/9, 4/9;) $
$ B^(-1) = 1/(5 times 2 - 2 times 0) mat( 2, -2; 0, 5;) $
$ B^(-1) = mat(1/5, -1/5; 0, 1/2;) $
===
$ (A B)^(-1) = mat(20, 10; -5, 2;)^(-1) = mat(1/45, -1/9; 1/18, 2/9;) $
$ A^(-1)B^(-1) = mat(2/45, -1/10; 1/45, 1/5;) $
We notice that the results differ, so the two operations are not equivalent in general
===
$ B^(-1)A^(-1) = mat(1/45, -1/9; 1/18, 2/9;) $
We can conjecture that for all invertible matrices $A$ and $B$, $(A B)^(-1) = B^(-1)A^(-1)$
===
$ A^(-1) B^(-1)= mat(1/8, 3/4; 1/32, -1/16;) $
$ (A B)^(-1) = mat(1/4, -1/8; 1/8, -3/16;)$
$ B^(-1) A^(-1) = mat(1/4, -1/8; 1/8, -3/16;) $
== Ex 50
===
$A B = mat(0, 0; 0, 0) = O$
$B A = mat(22, -44; 11, -22;)$
===
Matrix multiplication is not commutative in general.
===
$det(A) = 2 times 6 - ((-4) times (-3)) = 0$
and $det(B) = 2 times (-3) - (-6 times 1) = 0$
So $A$ and $B$ are not invertible.
===
The zero matrix works, but there is no uniqueness, since the matrix $mat(0, 3; 0,1)$ works too. Moreover, by the course material, in this case there are either infinitely many solutions or none; since $O$ is a solution, there are infinitely many solutions, and hence no uniqueness.
===
Let
$C = mat(a, b; c, d;)$
Then $B C = O$ if and only if $ cases(2a-6c = 0, a-3c=0, 2b-6d=0, b-3d= 0 ) $
$ arrow.l.r.double.long cases( a= 3c, b=3d ) $
Thus the set of matrices $C$ such that $B C = O$ is the set $S = {mat(3c, 3d; c, d) text(" ") c,d in RR}$
== Ex 54
===
$det(P) = 1 times 2 - (-1 times -1) = 1 eq.not 0$
so $P$ is invertible
$ P^(-1) = 1/(det(P)) mat(2, 1;1,1) $
$ P^(-1) = mat(2, 1;1,1) $
===
$ P^(-1)A P = mat(2, 0; 0, -1;) $
So $D = mat(2, 0; 0, -1;)$
===
Base case:
for $n=0$
we have $A^0=I$
and $P D^0 P^(-1) = P I P^(-1) = P P^(-1) = I$
So the property holds at $n = 0$.
Inductive step:
Suppose that for some natural number $k$ we have $A^k = P D^k P^(-1)$
Then $ A^(k+1) = A^k times A $
$ A^(k+1) = P D^k P^(-1) times P D P^(-1) $
$ A^(k+1) = P D^(k) D P^(-1) $
$ A^(k+1) = P D^(k+1) P^(-1) $
So the property is hereditary.
Conclusion:
Since the property holds at the base case and is hereditary, by the principle of induction, for every natural number $n$ we have $A^n = P D^n P^(-1)$
===
We find $D^n = mat(2^n , 0; 0, (-1)^n;)$
Therefore $A^n = mat(2^(n+1) + (-1)^(n+1), 2^n + (-1)^(n+1); -2^(n+1) + 2(-1)^n, -2^n + 2(-1)^n;)$
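As a quick check, with $P = mat(1, -1; -1, 2)$ (the inverse of the $P^(-1)$ above), taking $n = 1$ in the formula gives $mat(5, 3; -6, -4)$, which is exactly $P D P^(-1) = A$.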
|
|
https://github.com/lucannez64/Notes | https://raw.githubusercontent.com/lucannez64/Notes/master/Calculus.typ | typst | #import "template.typ": *
// Take a look at the file `template.typ` in the file panel
// to customize this template and discover how it works.
#show: project.with(
title: "Calculus",
authors: (
"<NAME>",
),
date: "30 Octobre, 2023",
)
#set heading(numbering: "1.1.")
#link("Derivative.pdf")[Derivative] #link("Integral.pdf")[Integrale]
#link("Multivariable Function.pdf")[Multivariable Function]
#link("Vector-Valued Functions.pdf")[Vector-Valued Functions]
#link("Multiple Integrations.pdf")[Multiple Integrations]
#link("Vectors Formula.pdf")[Vectors Formula]
#link("Maths.pdf")[Maths]
|
|
https://github.com/MobtgZhang/sues-thesis-typst | https://raw.githubusercontent.com/MobtgZhang/sues-thesis-typst/main/paper/paper.typ | typst | MIT License | #import "thesis.typ":*
#show: doc => sues_thesis_master(
outlinedepth:3,
blind: true,
doc
)
// Chinese and English covers
#include("covers/cover.typ")
// Originality statement
#include("covers/origin.typ")
// Authorization letter
#include("covers/authorization.typ")
#frontmatter()
// Chinese and English abstracts
#display_abstract("chapters/abstract-cn.typ",language:"中文")
#display_abstract("chapters/abstract-en.typ",language:"英文")
// Table of contents
// Insert the table of contents
#locate(loc => {
chinese_outline(
title: "目录",
depth: 3,
indent: true,
)
})
// List of symbols
#include("chapters/symbols.typ")
#mainmatter()
// 正文
#include("chapters/ch01.typ")
#include("chapters/ch02.typ")
#include("chapters/ch03.typ")
#include("chapters/ch04.typ")
#include("chapters/ch05.typ")
#include("chapters/ch06.typ")
#backmatter()
// References
// Appendix
#include("chapters/appendix.typ")
// Achievements obtained during the master's program
#include("chapters/achivement.typ")
// Acknowledgements
#include("chapters/thanks.typ")
|
https://github.com/typst/packages | https://raw.githubusercontent.com/typst/packages/main/packages/preview/bookletic/0.2.0/README.md | markdown | Apache License 2.0 | # Bookletic :book:
Create beautiful booklets with ease.
The current version of this library (0.2.0) contains a single function that takes in an array of content blocks and orders them into a ready-to-print booklet, bulletin, etc. No need to fight with printer settings or document converters.
### Example Output
Here is an example of the output generated by the `sig` function (short for a book's signature) with default parameters and some sample content:

Here is an example with some customization applied:

## `sig` Function
The `sig` function is used to create a signature (booklet) layout from provided content. It takes various parameters to automatically configure the layout.
### Parameters
- `signature_paper`: The paper size for the booklet. Currently supports `"us-letter"` and `"us-legal"`.
- `page_margin_top`: The top margin for each page in the booklet.
- `page_margin_bottom`: The bottom margin for each page in the booklet.
- `page_margin_binding`: The binding margin for each page in the booklet (space between pages).
- `page_margin_edge`: The edge margin for each page in the booklet.
- `page_border`: Takes a color space value to draw a border around each page. If set to none no border will be drawn.
- `draft`: A boolean value indicating whether to output an unordered draft or final layout.
- `p-num-layout`: A configuration for page numbering styles, allowing multiple layouts that apply to specified page ranges. Each layout can be provided as a dictionary specifying the following options:
- `p-num-start`: Starting page number for this layout
- `p-num-alt-start`: Alternative starting page number (e.g., for chapters)
- `p-num-pattern`: Pattern for page numbering (e.g., `"1"`, `"i"`, `"a"`, `"A"`)
- `p-num-placment`: Placement of page numbers (`top` or `bottom`)
- `p-num-align-horizontal`: Horizontal alignment of page numbers (`left`, `center`, or `right`)
- `p-num-align-vertical`: Vertical alignment of page numbers (`top`, `horizon`, or `bottom`)
- `p-num-pad-left`: Extra padding added to the left of the page number
- `p-num-pad-horizontal`: Horizontal padding for page numbers
- `p-num-size`: Size of page numbers
- `p-num-border`: The border color for the page numbers. If set to none no border will be drawn.
- `pad_content`: The padding around the page content.
- `contents`: The content to be laid out in the booklet. This should be an array of blocks.
### Usage
To use the `sig` function, simply call it with the desired parameters and provide the content to be laid out in the booklet:
```typst
#sig(
signature_paper: "us-letter",
contents: [
["Page 1 content"],
["Page 2 content"],
["Page 3 content"],
["Page 4 content"],
],
)
```
This will create a signature layout with the provided content, using the default values for the other parameters.
You can customize the layout by passing different values for the various parameters. For example:
```typst
#sig(
signature_paper: "us-legal",
page_margin_top: 0.5in,
page_margin_bottom: 0.5in,
page_margin_binding: 0.5in,
page_margin_edge: 0.5in,
page_border: none,
draft: true,
p-num-layout: (
(
p-num-start: 1,
p-num-alt-start: none,
p-num-pattern: "~ 1 ~",
p-num-placment: bottom,
p-num-align-horizontal: right,
p-num-align-vertical: horizon,
p-num-pad-left: -5pt,
p-num-pad-horizontal: 0pt,
p-num-size: 16pt,
p-num-border: rgb("#ff4136"),
),
),
pad_content: 10pt,
contents: (
["Page 1 content"],
["Page 2 content"],
["Page 3 content"],
["Page 4 content"],
),
)
```
This will create an unordered draft signature layout with US Legal paper size, larger margins, no page borders, page numbers at the bottom right corner with a red border, and more padding around the content.
### Notes
- The `sig` function is currently hardcoded to only handle two-page single-fold signatures. Other more complicated signatures may be supported in the future.
- Paper size is also currently hardcoded to only handle US Letter and US Legal.
- The `booklet` function is a placeholder for automatically breaking a single content block into pages dynamically. It is not implemented yet but will be added in coming versions.
## Collaboration
I would love to see this package eventually turn into a community effort. So any interest in collaboration is very welcome! You can find the github repository for this library here: [Bookletic Repo](https://github.com/harrellbm/Bookletic). Feel free to file an issue, pull request, or start a discussion.
## Changlog
#### 0.2.0
- Handle odd number of pages by inserting a blank back cover
- Implements page number layouts to allow defining different page numbers for different page ranges.
- Add various other page number options
#### 0.1.0
Initial Commit
|
https://github.com/sbleblanc/typst-templates | https://raw.githubusercontent.com/sbleblanc/typst-templates/main/resume/template.typ | typst | #import "@preview/fontawesome:0.2.0": *
#let build_header(personal_infos, header_bg, use_full_address) = {
let name_content = {
set text(weight: "bold", size: 2em)
smallcaps([#personal_infos.first_name\ #personal_infos.last_name])
}
let contact_content = grid(
columns: (1.8em, auto, 1.8em, auto),
align: (top + left, left + top, center + horizon, left + horizon),
row-gutter: 0.3em,
[#grid.cell(rowspan: 3)[#fa-icon("location-dot", solid: true)]],
grid.cell(rowspan: 3)[
#if use_full_address [
#personal_infos.street_no #personal_infos.street\
]
#personal_infos.city\ #personal_infos.country, #personal_infos.province, #personal_infos.postal_code
],
[#fa-icon("square-phone-flip", solid: true)],
[#personal_infos.phone],
[#fa-icon("square-envelope", solid: true)],
[#link("mailto:" + personal_infos.email, [#personal_infos.email])],
[#fa-linkedin()],
[#link("https://www.linkedin.com/in/" + personal_infos.linkedin_profile, [#personal_infos.linkedin_profile])]
)
let inner_content = grid(
columns: (1fr, auto),
[#align(left + bottom, name_content)],
[#align(right + bottom, contact_content)]
)
rect(
width: 100%,
height: 100%,
outset: (x: 100%),
inset: 0pt,
fill: header_bg,
stroke: (bottom: 1pt + black),
align(horizon, inner_content)
)
}
#let label_box(content) = {
set text(size: 0.7em, fill: white)
move(dy: -3pt)[
#rect(fill: black, stroke: none, radius: 5pt, inset: 3pt)[*#content*]
]
}
#let build_timeline(name, events) = {
let cells = ()
for e in events {
cells.push([
#grid.cell(inset: (right: 10pt))[#e.start - #e.end]
])
cells.push([
#let left_margin = 10pt
#let bottom_margin = 1em
#let circle_width = 0.7em
#rect(
width: 100%,
stroke: (left: 1pt),
inset: (left: left_margin, top: 0pt, bottom: bottom_margin),
)[
#rect(
stroke: (bottom: 1pt + black, left: 1pt + black),
fill: rgb("FFBE98"),
radius: (top-right: 2pt),
inset: (x: 0pt, y: 3pt),
outset: (left: left_margin),
width: 100%
)[
#set text(fill: black)
== #e.title
_ #e.organization.name, #e.organization.city, #e.organization.province _
]
#place(top + left, dx: {-left_margin - circle_width/2}, dy: 0pt)[
#circle(width: circle_width, fill: white, stroke: 1pt + black)
]
#e.description
]
])
if "specific_events" in e {
for se in e.specific_events {
cells.push(
grid.cell(inset: (right: 10pt), align: horizon)[#label_box(se.year)]
)
cells.push(
grid.cell(align: horizon)[
#rect(
width: 100%,
stroke: (left: 1pt),
inset: (left: 10pt, top: 0pt, bottom: 8pt)
)[
#se.name
]
]
)
}
}
}
set par(justify: true)
heading(name)
grid(
columns: (7em, auto),
align: (right, left),
..cells
)
}
#let build_skills_interest_content(categorized_data, header_bg, content_bg) = {
for (cat, levels) in categorized_data.pairs() {
block(breakable: false,
stack(
dir: ttb,
rect(width: 100%, radius: (top: 5pt), fill: header_bg)[
== #cat
],
rect(width: 100%, radius: (bottom: 5pt), fill: content_bg, {
if type(levels) != dictionary {
levels
} else {
for (level_name, level_contents) in levels.pairs() {
heading(level: 3, level_name)
align(center)[#level_contents.join(" • ")]
parbreak()
}
}
})
)
)
}
}
#let resume(
personal_infos,
work_experiences,
education_experiences,
skills,
interests,
use_full_address: false,
header_bg: rgb("F6995C"),
skill_interests_header_bg: rgb("FFBE98"),
skill_interests_content_bg: rgb("FEECE2"),
) = {
set text(font: "Noto Serif", size: 10pt)
let header_content = build_header(personal_infos, header_bg, use_full_address)
set page(header: header_content, header-ascent: 10%, margin: (top: 70pt, bottom: 30pt, x: 0.7cm),fill: white, numbering: "1 / 1")
show heading.where(level: 1): h => {
set text(size: 1.2em)
rect(
width: 100%,
inset: (bottom: 0.3em, rest: 1pt),
stroke: (bottom: 1pt + black),
h.body
)
}
show link: l => {
set text(fill: rgb(13, 0 , 110))
underline(l)
}
grid(
columns: (7fr, 3fr),
column-gutter: 1em,
{
build_timeline("Work Experience", work_experiences)
build_timeline("Education", education_experiences)
},
grid.cell()[
#show heading.where(level: 1): set align(center)
#show heading.where(level: 2): h => {
set align(center + horizon)
block(h.body)
}
#show heading.where(level: 3): h => {
set align(center + horizon)
block(underline[#h.body])
}
= Skills
#build_skills_interest_content(skills, skill_interests_header_bg, skill_interests_content_bg)
= Interests
#build_skills_interest_content(interests, skill_interests_header_bg, skill_interests_content_bg)
]
)
} |
|
https://github.com/jdupak/lang-talk-borrow-checker | https://raw.githubusercontent.com/jdupak/lang-talk-borrow-checker/master/theme/ctu.typ | typst | #import "@preview/polylux:0.3.1": *
#let theme = (
accent: rgb("#F74C00"),
text: rgb("#000000"),
)
#let ctu-theme(
aspect-ratio: "4-3",
footer: [],
background: white,
foreground: black,
body
) = {
set page(
paper: "presentation-" + aspect-ratio,
margin: 1em,
header: none,
footer: none,
fill: background,
)
set text(
fill: foreground,
size: 24pt,
font: "Fira Sans",
weight: 400,
)
show footnote.entry: set text(size: .6em)
show heading.where(level: 2): set block(below: 2em)
show heading.where(level: 1): set block(below: 1em)
show heading: set text(fill: theme.accent)
set outline(target: heading.where(level: 1), title: none, fill: none)
show outline.entry: it => it.body
show outline: it => block(inset: (x: 1em), it)
set page(footer:
utils.polylux-progress( p => {
box(fill: gradient.linear(rgb("#F58600"), theme.accent), width: p * 100%, height: 1em, outset: (left: 24pt, right: 24pt), align(left, move(dy: -2em, dx: -.7em, image("../media/ferris.svg", height: 2.5em, width: 4em)))
)
}))
body
}
#let master-slide(body) = {
polylux-slide({
body
})
}
#let centered-slide(body) = {
master-slide(align(center + horizon, body))
}
#let title-slide(body) = {
set heading(outlined: false)
show heading: set block(above: 3em)
set text(fill: theme.accent)
centered-slide(body)
}
#let slide(body) = {
master-slide({
block(inset: (top: 2em, x: 2em), width: 100%, height: 80%, body)
})
}
#let notes(body) = { pdfpc.speaker-note(body) }
#let code(lines, block) = {
show raw: it => stack(..it.lines.map(line =>
box(
width: 100%,
height: 1.25em,
inset: 0.25em,
align(horizon, stack(if lines.contains(line.number) { line.body } else { strike(stroke: rgb(255, 255, 255, 70%) + 1.25em, line.body) }
)))))
block
} |
|
https://github.com/Jollywatt/typst-fletcher | https://raw.githubusercontent.com/Jollywatt/typst-fletcher/master/docs/readme-examples/feynman-diagram.typ | typst | MIT License | #diagram(/*<*/edge-stroke: fg,/*>*/$
e^- edge("rd", "-<|-") & & & edge("ld", "-|>-") e^+ \
& edge(gamma, "wave") \
e^+ edge("ru", "-|>-") & & & edge("lu", "-<|-") e^- \
$) |
https://github.com/seven-mile/gatsby-transformer-typst | https://raw.githubusercontent.com/seven-mile/gatsby-transformer-typst/master/README.md | markdown |
# `gatsby-transformer-typst`
A simple transformer plugin for Gatsby that converts `typ` / `typst` `File` nodes into `Typst` nodes.
## Node structure
Check `gatsby-node.js` for the node structure.
## How to use
This plugin depends on [`typst.ts`](https://github.com/Myriad-Dreamin/typst.ts). These packages are not published to npm, so you need to link them manually.
```sh
yarn link @myriaddreamin/typst.ts @myriaddreamin/typst-ts-renderer @myriaddreamin/typst-ts-node-compiler
yarn build
yarn link
cd $GATSBY_PROJECT_ROOT
yarn link gatsby-transformer-typst
```
Then include this plugin in your `gatsby-config.js`:
```js
module.exports = {
plugins: [
/// ...
`gatsby-transformer-typst`,
],
}
```
If you need to tweak the scale of typst content, you can pass in a `domScale` option:
```js
module.exports = {
plugins: [
/// ...
{
resolve: `gatsby-transformer-typst`,
options: {
domScale: 1.17,
}
},
],
}
```
`domScale` aligns the font size in the typst content (like `16pt`) with the one in HTML/CSS. Trying several values interactively should work.
## Example
Check out my [`blog-ng`](https://github.com/seven-mile/blog-ng) for an example of integrating it into a gatsby.js blog.
[This workflow](https://github.com/seven-mile/blog-ng/blob/main/.github/workflows/deploy.yml) should be helpful.
Use [this React component](https://github.com/seven-mile/blog-ng/blob/main/src/components/typst-doc.tsx) to render the `Typst` artifacts.
|
|
https://github.com/RolfBremer/in-dexter | https://raw.githubusercontent.com/RolfBremer/in-dexter/main/tests/DisplayTest.typ | typst | Apache License 2.0 | #import "../in-dexter.typ": *
#set page("a6", flipped: true, numbering: "1")
_A Test for the Display parameter.
The order of marked entries specifies, which display is used in the reference on the index page._
In this sample, the first entry for Gaga hides the displays of the other entries for Gaga.
#index[Gaga]
#index(display: "GagaDisplay")[Gaga]
#index(display: "GagaDisplay2")[Gaga]
== Index
#columns(2)[
#make-index(
use-bang-grouping: true,
use-page-counter: true,
sort-order: upper,
)
]
|
https://github.com/kdog3682/mathematical | https://raw.githubusercontent.com/kdog3682/mathematical/main/0.1.0/src/tests/convex-hull.typ | typst | #import "../algorithms/convex-hull.typ": convex-hull
#let points = (
(0, 0),
(1, 1),
(2, 2),
(3, 1),
(4, 0),
(3, 3),
(2, 4),
(1, 3),
(2, 1),
(10, 10),
)
#{
convex-hull(points)
} |
|
https://github.com/RandomcodeDev/FalseKing-Design | https://raw.githubusercontent.com/RandomcodeDev/FalseKing-Design/main/engine/components.typ | typst | = Engine components
The engine will be made of these pieces:
#table(
columns: 4,
[*Component*], [*Components needed*], [*Functionality*], [*Available in tools builds*],
[`Base`], [none], [containers, basic algorithms, strings, data manipulation and serialization, Unicode handling,
startup, shutdown, threading, synchronization, screen output, system information, basic file system functions,
input, debugging features], [yes],
[`Math`], [none], [handles higher level math related things than `rtm`], [yes],
[`Utility`], [`Base`, `Platform`], [localisation, thread pools (maybe), configuration, logging], [yes],
[`Texture`], [`Base`, `Platform`], [texture format], [yes],
[`Mesh`], [`Base`, `Platform`], [mesh format], [yes],
[`Pack`], [`Base`, `Platform`], [package file format], [yes],
[`Launcher`], [`Base`, `Platform`, `Utility`], [loading an application DLL and the components it needs], [yes],
[`Engine`], [`Base`, `Platform`, `Utility`], [cameras, scene management, entity component system, commonly used components (for entities), system management], [no],
[`RenderSystem`], [`Base`, `Math`, `Platform`], [rendering scenes, UIs, anything else], [no],
[`InputSystem`], [`Base`, `Platform`], [user input], [no],
[`UiSystem`], [`Base`, `InputSystem`, `Math`, `Platform`, `RenderSystem`], [user interfaces], [no],
[`PhysicsSystem`], [`Base`, `Math`], [simulates mechanical physics], [no],
[`AnimationSystem`], [`Base`, `Math`], [controls skeletal animation], [no],
[`AudioSystem`], [`Base`, `Math`], [handles audio], [no],
[`Game`], [`AnimationSystem`, `Base`, `Engine`, `Utility`], [game functionality common between client and server, such as prediction and data parsing], [no],
[`GameServer`], [`Base`, `Engine`, `PhysicsSystem`, `Utility`], [game functionality that happens on the server, such as simulation, player management, etc], [no],
[`GameClient`], [`Base`, `Engine`, `InputSystem`, `RenderSystem`, `UiSystem`, `Utility`], [game functionality that happens on the client, such as rendering, player input, and possibly prediction], [no],
)
|
|
https://github.com/davidedomini/DTM-2425-Crash-Course | https://raw.githubusercontent.com/davidedomini/DTM-2425-Crash-Course/master/slides/intro.typ | typst | Apache License 2.0 | #import "@preview/polylux:0.3.1": *
#import "@preview/fontawesome:0.1.0": *
#import themes.metropolis: *
#show: metropolis-theme.with(
aspect-ratio: "16-9",
// footer: [Optional Footnote]
)
#set text(font: "Fira Sans", weight: 350, size: 20pt)
#show math.equation: set text(font: "Fira Math")
#set strong(delta: 200)
#set par(justify: true)
#set quote(block: true)
#show quote: set align(left)
#show quote: set pad(x: 2em, y: -0.8em)
#set raw(tab-size: 4)
#show raw.where(block: true): block.with(
fill: luma(240),
inset: 1em,
radius: 0.7em,
width: 100%,
)
#show bibliography: set text(size: 0.8em)
#show footnote.entry: it => {
block(inset: (x: 2em, y: 0.1em))[#text(size: 0.75em)[#it.note.body]]
}
#let fcite(clabel) = {
footnote(cite(form: "full", label(clabel)))
}
#let author = block(inset: 0.1em)[
#table(inset: 0.5em, stroke: none, columns: (auto, 4fr), align: (left, left),
[#alert[*Davide Domini*]], [`<EMAIL>`],
// [Author 2], [`<EMAIL>`],
// [Author 3], [`<EMAIL>`],
)
#place(right, dy:-1.5em)[
#figure(image("images/disi.svg", width:40%))
]
]
#title-slide(
title: "Crash Course on Software Development and Computer Architectures",
subtitle: "Digital Transformation Management @ 2024",
author: author,
// date: datetime.today().display("[day] [month repr:long] [year]"),
)
// #new-section-slide("Slide section 1")
#slide(title: "Who am I?")[
- Bachelor and Master Degree in #alert[Computer Science and Engineering] at University of Bologna
- Currently #alert[PhD Student] in Computer Science and Engineering at University of Bologna
- #alert[Teaching tutor] for Software Engineering stuff in the DTM master course
  - If you have any questions, write me at: `<EMAIL>`
]
#slide(title: "Why this course?")[
- In the first year you'll have a lot of #alert[CS courses] (programming and computer architectures, OS, cybersecurity, DB, ...)
- This crash course #alert[is NOT a replacement] for those courses
  - This crash course has been conceived (mainly) for students with a business background
- It is also a good refresher for those with a technological background
]
#slide(title: "Outline (tentative)")[
- #alert[Lesson 01]: Computer Architectures and Operating Systems
  - #alert[Lesson 02]: Intro to Programming Languages and Software Development
  - #alert[Lesson 03]: Tools for Software Development
]
#slide(title: "When and where?")[
- Tuesday 24/09, Room 3.10, Time: 9-12
- Tuesday 01/10, Room 3.10, Time: 9-12
- Tuesday 15/10, Room 3.10, Time: 9-12
]
#slide(title: "Where can I find the slides?")[
You can find #alert[all the slides and the code] on #alert[GitHub] (of course we will see what GitHub is; in the meantime think of it as one big shared folder).
#figure(image("images/qr.svg", width: 30%))
] |
https://github.com/jgm/typst-hs | https://raw.githubusercontent.com/jgm/typst-hs/main/test/typ/compiler/construct-04.typ | typst | Other | // The constructor property should still work
// when there are recursive show rules.
#show enum: set text(blue)
#enum(numbering: "(a)", [A], enum[B])
|
https://github.com/Myriad-Dreamin/typst.ts | https://raw.githubusercontent.com/Myriad-Dreamin/typst.ts/main/fuzzers/corpora/layout/list_01.typ | typst | Apache License 2.0 |
#import "/contrib/templates/std-tests/preset.typ": *
#show: test-page
- First level.
- Second level.
There are multiple paragraphs.
- Third level.
Still the same bullet point.
- Still level 2.
- At the top.
|
https://github.com/kotfind/hse-se-2-notes | https://raw.githubusercontent.com/kotfind/hse-se-2-notes/master/edu/seminars/2024-09-25.typ | typst | = Разработка этапов актуализации знаний и первичного получения новых знаний
== Этап актуализации знаний
Актуализация знаний --- тип занятий, который
- готовит к восприятию нового материала
- напоминает изученные темы
- этап "пробуждения"
Форматы проведения актуализации:
- "Шкатулка идей, понятий, имен":
Вместе создаем "шкатулку" понятий
- "За-Против" ("Pro et Contra"):
Варианты:
- Перекрестные дебаты
- Аргументы по поднятию руки
- Метод "Углов". Школьники расходятся по углам по мнениям
и называют свои мнения
- "Аргументация"
- "Интересный факт":
Интересный факт "вкидывается" учителем, а потом развивается до полноценного
сюжета
- "Решение ситуационных задач проблемного характера"
- "Согласен -- Не согласен" ("Case study")
- "Визуальный ряд"
- "Ассоциативный ряд"
- "Задания -- загадки"
- "Викторина"
== Комплексный подход
- Новые знания должны быть связаны со старыми, встраиваться
в сеть уже освоенного материала
- Развитие знаний должно быть по таксономии Блума (есть теории отрицающие это)
- Знания должны быть живыми и многоуровневыми, применимыми в реальных ситуациях
== Базовая структура занятия
+ Мотивирование на учебную деятельность
+ Актуализация знаний --- проверка, что запомнили с прошлого урока
+ Постановка проблемы
+ *TODO*
== Этап первичного получения новых знаний
Урок начинается с объяснения, зачем изучать данную тему.
Форматы:
- Ассоциативные карты
- Mind Map
- Подводящий (сократический) диалог
- Ролевая игра
- Решение кейсов
- Работа с новым материалом в малых группах
- "Интервьюирование" преподавателя
|
|
https://github.com/typst/packages | https://raw.githubusercontent.com/typst/packages/main/packages/preview/cetz/0.0.1/canvas.typ | typst | Apache License 2.0 | #import "matrix.typ"
#import "vector.typ"
#import "draw.typ"
#import "cmd.typ"
#import "util.typ"
#import "coordinate.typ"
#import "styles.typ"
// Compute bounding box of points
#let bounding-box(pts, init: none) = {
let bounds = init
if type(pts) == "array" {
for (i, pt) in pts.enumerate() {
if init == none and i == 0 {
bounds = (l: pt.at(0), r: pt.at(0), t: pt.at(1), b: pt.at(1))
}
bounds.l = calc.min(bounds.l, pt.at(0))
bounds.r = calc.max(bounds.r, pt.at(0))
bounds.t = calc.min(bounds.t, pt.at(1))
bounds.b = calc.max(bounds.b, pt.at(1))
}
} else if type(pts) == "dictionary" {
if init == none {
bounds = pts
} else {
bounds.l = calc.min(bounds.l, pts.l)
bounds.r = calc.max(bounds.r, pts.r)
bounds.t = calc.min(bounds.t, pts.t)
bounds.b = calc.max(bounds.b, pts.b)
}
} else {
panic("Expected array of vectors or bbox dictionary, got: " + repr(pts))
}
return bounds
}
// Recursively processes one element: takes the current ctx and returns the updated ctx together with the element's bounds and drawables (so the root scope can update its own locals)
#let process-element(element, ctx) = {
if element == none { return }
let drawables = ()
let bounds = none
let anchors = (:)
// Allow to modify the context
if "before" in element {
ctx = (element.before)(ctx)
}
if "style" in element {
ctx.style = styles.resolve(
ctx.style,
if type(element.style) == "function" {
(element.style)(ctx)
} else {
element.style
}
)
}
if "push-transform" in element {
ctx.transform = matrix.mul-mat(
if type(element.push-transform) == "function" { (element.push-transform)(ctx) } else { element.push-transform },
ctx.transform
)
}
// Render children
if "children" in element {
let child-drawables = ()
let children = if type(element.children) == "function" {
(element.children)(ctx)
} else {
element.children
}
for child in children {
let r = process-element(child, ctx)
if r != none {
if r.bounds != none {
bounds = bounding-box(r.bounds, init: bounds)
}
ctx = r.ctx
child-drawables += r.drawables
}
}
if "finalize-children" in element {
drawables += (element.finalize-children)(ctx, child-drawables)
} else {
drawables += child-drawables
}
}
// Query element for points
let coordinates = ()
if "coordinates" in element {
for c in element.coordinates {
c = coordinate.resolve(ctx, c)
// if the first element is `false` don't update the previous point
if c.first() == false {
// the format here is `(false, x, y, z)` so get rid of the boolean
c = c.slice(1)
} else {
ctx.prev.pt = c
}
coordinates.push(c)
}
}
// Render element
if "render" in element {
for drawable in (element.render)(ctx, ..coordinates) {
// Transform position to absolute
drawable.segments = drawable.segments.map(s => {
return (s.at(0),) + s.slice(1).map(util.apply-transform.with(ctx.transform))
})
if "bounds" not in drawable {
drawable.bounds = drawable.segments.map(s => s.slice(1)).flatten()
} else {
drawable.bounds = drawable.bounds.map(util.apply-transform.with(ctx.transform));
}
bounds = bounding-box(drawable.bounds, init: bounds)
// Push draw command
drawables.push(drawable)
}
}
// Add default anchors
if bounds != none {
let mid-x = (bounds.l + bounds.r) / 2
let mid-y = (bounds.t + bounds.b) / 2
anchors += (
center: (mid-x, mid-y, 0),
left: (bounds.l, mid-y, 0),
right: (bounds.r, mid-y, 0),
top: (mid-x, bounds.t, 0),
bottom: (mid-x, bounds.b, 0),
top-left: (bounds.l, bounds.t, 0),
top-right: (bounds.r, bounds.t, 0),
bottom-left: (bounds.l, bounds.b, 0),
bottom-right: (bounds.r, bounds.b, 0),
)
// Add alternate names
anchors.above = anchors.top
anchors.below = anchors.bottom
}
// Query element for (relative) anchors
let custom-anchors = if "custom-anchors-ctx" in element {
(element.custom-anchors-ctx)(ctx, ..coordinates)
} else if "custom-anchors" in element {
(element.custom-anchors)(..coordinates)
}
if custom-anchors != none {
for (k, a) in custom-anchors {
anchors.insert(k, util.apply-transform(ctx.transform, a)) // Anchors are absolute!
}
}
// Query (already absolute) anchors depending on drawable
if "custom-anchors-drawables" in element {
for (k, a) in (element.custom-anchors-drawables)(drawables) {
anchors.insert(k, a)
}
}
if "default" not in anchors {
anchors.default = if "default-anchor" in element {
anchors.at(element.default-anchor)
} else if "center" in anchors {
anchors.center
}
}
if "anchor" in element and element.anchor != none {
assert(element.anchor in anchors,
message: "Anchor '" + element.anchor + "' not found in " + repr(anchors))
let translate = vector.sub(anchors.default, anchors.at(element.anchor))
for (i, d) in drawables.enumerate() {
drawables.at(i).segments = d.segments.map(
s => (s.at(0),) + s.slice(1).map(c => vector.add(translate, c)))
}
for (k, a) in anchors {
anchors.insert(k, vector.add(translate, a))
}
bounds = bounding-box(
(
vector.add(
translate,
(bounds.l, bounds.t)
),
vector.add(
translate,
(bounds.r, bounds.b)
)
),
)
}
if "name" in element and type(element.name) == "string" {
ctx.nodes.insert(
element.name,
(
anchors: anchors,
// Part of intersections
// paths: for drawable in drawables {
// if drawable.type == "path" {
// (drawable.coordinates + if drawable.close {(drawable.coordinates.first(),)},)
// }
// }
)
)
}
if ctx.debug and bounds != none {
drawables.push(
cmd.path(
ctx,
stroke: red,
fill: none,
close: true,
("line", (bounds.l, bounds.t),
(bounds.r, bounds.t),
(bounds.r, bounds.b),
(bounds.l, bounds.b))
).first()
)
}
if "after" in element {
ctx = (element.after)(ctx, ..coordinates)
}
return (bounds: bounds, ctx: ctx, drawables: drawables)
}
#let canvas(length: 1cm, /* Length of 1.0 canvas units */
background: none, /* Background paint */
debug: false, body) = layout(ly => style(st => {
if body == none {
return []
}
let length = length
assert(type(length) in ("length", "ratio"),
message: "length: Expected length, got " + type(length) + ".")
if type(length) == "ratio" {
// NOTE: Ratio length is based on width!
length = ly.width * length
}
// Canvas bounds
let bounds = none
// Canvas context object
let ctx = (
typst-style: st,
length: length,
debug: debug,
// Previous element position & bbox
prev: (pt: (0, 0, 0)),
// Current content padding size (added around content boxes)
content-padding: 0em,
em-size: measure(box(width: 1em, height: 1em), st),
style: styles.default,
// Current transform
transform: matrix.mul-mat(
matrix.transform-shear-z(.5),
matrix.transform-scale((x: 1, y: -1, z: 1)),
),
// Nodes, stores anchors and paths
nodes: (:),
// group stack
groups: (),
)
let draw-cmds = ()
for element in body {
let r = process-element(element, ctx)
if r != none {
if r.bounds != none {
bounds = bounding-box(r.bounds, init: bounds)
}
ctx = r.ctx
draw-cmds += r.drawables
}
}
if bounds == none {
return []
}
// Final canvas size
let width = calc.abs(bounds.r - bounds.l) * length
let height = calc.abs(bounds.t - bounds.b) * length
// Offset all element by canvas grow to the bottom/left
let transform = matrix.transform-translate(
-bounds.l,
-bounds.t,
0
)
box(stroke: if debug {green}, width: width, height: height, fill: background, {
for d in draw-cmds {
d.segments = d.segments.map(s => {
return (s.at(0),) + s.slice(1).map(v => {
return util.apply-transform(transform, v)
.slice(0,2).map(x => ctx.length * x)
})
})
(d.draw)(d)
}
})
}))
|
https://github.com/TGM-HIT/typst-protocol | https://raw.githubusercontent.com/TGM-HIT/typst-protocol/main/template/assets/mod.typ | typst | MIT License | #let just-do-it-logo = image.with("just-do-it.png")
|
https://github.com/Wuvist/lcpc | https://raw.githubusercontent.com/Wuvist/lcpc/main/1st_progress.typ | typst | = Living Cost Misery Index
=== ECON6017 Paper 1st progress - <NAME>
== Motivation
There is no existing economic index that reflects the cost of living of local people in different cities around the world, nor one that supports comparison across cities.
Existing relevant indexes have limitations:
- Worldwide Cost of Living by EIU - Economist Intelligence Unit
  - https://www.eiu.com/n/campaigns/worldwide-cost-of-living-2023/
  - Intended mainly as a reference for expats, e.g. managers working for MNCs
  - Does not reflect the living cost borne by the locals
  - SG was ranked 1st
- CPI
  - Reflects inflation, not necessarily living cost -> living cost & income may increase at the same time
- GDP per capita using PPP
  - A PPP-adjusted index of purchasing power
  - But GDP may not reflect local income -> much of it comes from foreign trade
  - Could be misleading: SG was ranked 1st
== Living Cost Misery Index
LCMI - the Living Cost Misery Index - could be very simple:
$ "LCMI" = "Essential Living Expanse"/"Income" "(Monthly)" $
- Essential Living Expense could consist of:
- Housing (Rental / Mortgage)
- Food
- Transportation
- Communication
- Clothing
- Health
- Education
- Entertainment
- Income is relatively simple:
- Payroll
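For a quick illustration with purely hypothetical monthly figures (rent 1500, food 600, transport 150, other essentials 650; income 4500, all in local currency):
$ "LCMI" = (1500 + 600 + 150 + 650) / 4500 approx 0.64 $
i.e. this hypothetical household spends about 64% of its income on essential living.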
=== Typical Groups
Living cost can differ greatly between groups of people depending on their lifestyle, especially in SG:
- HDB vs Condo
- Hawker Center/Food Court vs Restaurant
- Public transport vs Grab/Self driving
- Public school vs Private school
- etc
Estimate (or survey) the cost of each category at different levels, and combine category levels to form typical groups, such as:
- High: Expat
- Medium-High Income: PMET worker
- Medium: Worker
- Medium-Low
- Low
Or group by housing type? (The SG government likes to use housing type to separate residents into income groups.)
== LCMI curve
Calculate the LCMI for each group in a city to form an LCMI curve:
- Will the middle class be the most miserable?
LCMI curves of different cities could be used for comparison:
- No need to adjust for exchange rates, since each LCMI is a ratio within the local currency
- Could serve as a reference for people considering relocation
  - Just substitute an estimate for the denominator: income
  - Works even for a foreign student with no income
- Could also serve as an alternative to the Gini coefficient
== Factors impacting LCMI
We can regress on different price factors to show their impact on LCMI:
- Inflation
- Increase in GST
  - How effective are government subsidies?
- Increase in interest rate/mortgage rate
- Increase in rental
  - In SG's case, it does not necessarily impact LCMI negatively
- High housing ownership
  - An increase in rental could mean an increase in income for the landlords, i.e. the locals
- etc
== Conclusion
The LCMI curve serves as a framework for finding out, when a certain price changes, how each typical group in society is affected.
Expense & income categories could be expanded and fine-tuned, and linkages could be built between them.
PS: As a software engineer, I would also build a website/app that lets consumers adjust arbitrary price factors to forecast the impact. Perhaps it could serve as a tool for estimating the impact of new government policy? (I suppose the SG government already has this kind of benchmark tool? Is it publicly available?)
= My questions
- How to set a clear scope
  - Cover LCMI for SG only, or include other cities like HK/Shanghai/New York?
  - Shall we collect historical data to see how LCMI changes?
    - good to have
- How many groups should the LCMI curve have?
  - which group has changed the most?
  - get the curve first!
- Is there really no existing index that solves this problem?
- How many price factors to evaluate? And how to evaluate them?
|
|
https://github.com/Me0wzz/study-note | https://raw.githubusercontent.com/Me0wzz/study-note/main/Electromagnetics/vector.typ | typst | = Divergence theorem
$
integral_V nabla dot bold(A) d v = integral.cont_S bold(A) dot d s
$
- Volume integral (triple integral) <--> surface integral (double integral)
- Adjacent elements in the interior cancel each other out, so only the edge (outermost) part remains
- Inside the volume the divergence cancels and vanishes; only the divergence on the outer boundary, where no adjacent infinitesimal volume is defined, remains
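As a quick sanity check of the theorem: take $bold(A) = (x, y, z)$ over the unit cube $[0, 1]^3$. Then $nabla dot bold(A) = 3$, so the volume integral equals $3$; summing the outward flux over the six faces gives $1 + 0 + 1 + 0 + 1 + 0 = 3$ as well.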
= Example 2-9: incorrect solution
- A factor of $1/e$ is missing; check this later
== Exam question
Engineering mathematics reference: Green's theorem in the plane S (XY plane)
#image("assets/2024-04-12-12-08-05.png")
- A special case of Stokes's theorem
- Stokes's theorem is the generalization of Green's theorem
- It is Stokes's theorem applied in two dimensions
- A very typical special case of Stokes's theorem
|
https://github.com/kotfind/hse-se-2-notes | https://raw.githubusercontent.com/kotfind/hse-se-2-notes/master/prob/lectures/2024-10-18.typ | typst | #import "/utils/math.typ": *
#blk[
Suppose we have a generator of a random variable $xi ~ R(0; 1)$.
We want to obtain a random variable $eta ~ F_eta (y)$:
$ eta = F_eta^(-1) (xi) $
The inverse function always exists, since $F$ is increasing.
]
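For example, for the exponential distribution considered below, $F(x) = 1 - e^(-lambda x)$ for $x >= 0$, so
$ eta = F^(-1) (xi) = - ln(1 - xi) / lambda $
is distributed as $E(lambda)$.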
=== The exponential distribution
$ xi ~ E(lambda), lambda > 0 $
$ f(x) = cases(
lambda e^(-lambda x) "," & x >= 0,
0 "," & x < 0
) $
$ E xi = integral_0^oo x lambda e^(-lambda x) d x =
= { "integration by parts" } =\
= underbrace(-x e^(-lambda x) |_0^oo, 0)
+ 1/lambda underbrace(integral_0^(+oo) lambda e^(-lambda x) d x, = 1 "(by the normalization condition)")
= 1/lambda $
$ D xi = E xi^2 - 1/lambda^2
= integral_0^oo x^2 lambda e^(-lambda x) d x - 1/lambda^2
= ... = 2/lambda^2 - 1/lambda^2 = 1/lambda^2 $
$ F(x) = cases(
0 "," & x <= 0,
integral_0^x lambda e^(-lambda t) d t
= -e^(-lambda t) |_0^x = 1 - e^(-lambda x) "," & x >= 0
) $
#blk(title: [The characteristic (memorylessness) property of the exponential distribution])[
Suppose:
$ xi ~ E(lambda) $
Then:
$ forall t > 0 forall tau > 0: P(xi > t + tau | xi > t) = P(xi > tau) $
#proof[
$ P(xi > t + tau | xi > t) = P(xi > t + tau) / P(xi > t)
= (1 - F(t + tau)) / (1 - F(t)) =\
= (e^(-lambda (t + tau)))/(e^(-lambda t)) = e^(-lambda tau)
= 1 - F(tau) = P(xi > tau) $
]
Among continuous distributions, only the exponential one has this property.
Among discrete ones --- only the geometric.
]
An application example: queueing theory (e.g., serving customers or handling internet requests)
=== The Gaussian (normal) distribution
$ xi ~ N(m, sigma^2) $
$m$ --- the expectation
$sigma^2$ --- the variance
$ f(x) = 1/(sqrt(2 pi) sigma) exp(- (x-m)^2 / (2 sigma^2)) $
The density is symmetric about the line $x = m$.
$ f_max = f(m) = 1/(sqrt(2 pi) sigma) $
$ lim_(x -> plus.minus oo) f(x) = 0 $
As $sigma$ grows, the graph becomes wider but lower.
The integral of $f(x)$ has no closed form, so $F(x)$ cannot be written down explicitly; tables are used instead.
$ E xi = integral_(-oo)^(+oo) x f(x) d x
= integral_(-oo)^(+oo) x/(sqrt(2 pi) sigma) exp(- (x-m)^2 / (2 sigma^2)) d x
= integral_(-oo)^(+oo) (x - m + m)/(sqrt(2 pi) sigma) exp(- (x-m)^2 / (2 sigma^2)) d x =\
= integral_(-oo)^(+oo) underbrace((x - m)/(sqrt(2 pi) sigma) exp(- (x-m)^2 / (2 sigma^2)), "odd function") d x
+ m underbrace(integral_(-oo)^oo f(x) d x, = 1)
= m $
$ D xi = integral_(-oo)^(+oo) (x - m)^2/(sqrt(2 pi) sigma) exp(- (x-m)^2 / (2 sigma^2)) d x
= {y = (x - m) / sigma} =\
= 2 sigma^2 integral_0^(+oo) y^2 / sqrt(2 pi) exp(- y^2/2) d y
= 2 sigma^2 (
underbrace(- y / sqrt(2 pi) exp(- y^2 / 2) |_0^(+oo), = 0)
+ underbrace(integral_0^oo 1/sqrt(2 pi) exp(- y^2 / 2) d y, = 1/2)
) = sigma^2 $
=== The standard Gaussian distribution
To be able to use the tables, we work with the *standard* Gaussian distribution.
$ xi ~ N(0; 1) $
$ f(x) = 1/sqrt(2 pi) exp(- x^2 / 2) $
#figure(
caption: [A banknote depicting the Gaussian distribution],
image("deutsche_mark_gauss.jpg", height: 5cm)
)
==== The Laplace function
$ Phi_0 (x) = integral_0^x exp(-t^2/2)/sqrt(2 pi) d t $
The integrand decays very quickly (at $x = 5$ it is already zero to 7 decimal places).
$ Phi_0 (+oo) = 1/2 $
$ Phi_0 (-x) = - Phi_0 (x) $
$ -1/2 <= Phi_0 (x) <= 1/2 $
$ Phi(x) := F(x) = Phi_0 (x) + 1/2 $
==== The probability of falling into a given interval
We want to compute the probability that $xi$ lands in the interval $(alpha, beta)$.
The general case:
$ P(alpha < xi < beta) = {y = (x - m)/sigma; d y = 1/sigma d x }
= integral_((alpha - m) / sigma)^((beta - m)/sigma) exp(- y^2 / 2) / sqrt(2 pi) d y
= Phi_0((beta - m) / sigma) - Phi_0((alpha - m) / sigma) $
If the interval is symmetric about $m$:
$ P(abs(xi - m) < delta) = Phi_0 (delta / sigma) - Phi_0 (- delta / sigma)
= 2 Phi_0 (delta / sigma) $
The three-sigma rule:
$ P(abs(xi - m) < 3 sigma) = 2 Phi_0 ((3 sigma) / sigma)
= 2 Phi_0 (3) approx 0.997 $
That is, almost all values lie in the interval $(m - 3 sigma, m + 3 sigma)$.
|
|
https://github.com/OrangeX4/typst-talk | https://raw.githubusercontent.com/OrangeX4/typst-talk/main/main.typ | typst | #import "@preview/touying:0.5.2": *
#import themes.university: *
#import "@preview/a2c-nums:0.0.1": int-to-cn-ancient-num
#import "utils.typ": *
// global styles
#set text(font: ("IBM Plex Serif", "Source Han Serif SC", "Noto Serif CJK SC"), lang: "zh", region: "cn")
#show heading.where(level: 1): set heading(numbering: "1.")
#set text(weight: "medium")
#set par(justify: true)
#set raw(lang: "typ")
#set underline(stroke: .05em, offset: .25em)
#show raw: set text(font: ("IBM Plex Mono", "Source Han Sans SC", "Noto Sans CJK SC"))
#show raw.where(block: false): box.with(
fill: luma(240),
inset: (x: .3em, y: 0em),
outset: (x: 0em, y: .3em),
radius: .2em,
)
#show raw.where(block: true): set par(justify: false)
#show: university-theme.with(
aspect-ratio: "16-9",
footer-a: self => self.info.subtitle,
footer-c: self => (
h(1fr) + utils.display-info-date(self) + h(1fr) + context utils.slide-counter.display(int-to-cn-ancient-num) + h(1fr)
),
config-common(
handout: true,
    datetime-format: "[year]-[month]-[day]",
),
config-info(
    title: [A Not-So-Complicated Typst Talk],
subtitle: [Typst is Simple],
author: [OrangeX4],
date: datetime(year: 2024, month: 3, day: 17),
    institution: [Nanjing University],
logo: emoji.school,
),
// hack for hiding list markers
config-methods(cover: (self: none, body) => box(scale(x: 0%, body))),
config-page(margin: (top: 2.4em))
)
#title-slide()
== Contents <touying:hidden>
#align(horizon, components.adaptive-columns(outline(title: none, indent: 1em, depth: 1)))
= Introduction
== What is Typst?
- *Introduction:*
  - #Typst is a markup-based typesetting system born for writing. #Typst aims to be a powerful typesetting tool that is also a pleasure to use. #pause
- *In short:*
  - #Typst = #LaTeX's typesetting power + #Markdown's concise syntax + a powerful, modern scripting language #pause
- *Runtimes:* Web Wasm / CLI / LSP Language Server
- *Editors:* Web App #linkto("https://typst.app/") / VS Code / Neovim / Emacs
#empty-slide[
#set align(center + horizon)
#image("images/typst-introduction.png")
#place(top + right, dx: -.5em, dy: .5em)[#linkto(icon: "mark-github", "https://github.com/typst/typst")]
]
== Typst at a Glance
#slide[
#set text(.5em)
```typ
#set page(width: 10cm, height: auto)
#set heading(numbering: "1.")
= Fibonacci sequence
The Fibonacci sequence is defined through the recurrence relation $F_n = F_(n-1) + F_(n-2)$.
It can also be expressed in _closed form:_
$ F_n = round(1 / sqrt(5) phi.alt^n),quad phi.alt = (1 + sqrt(5)) / 2 $
#let count = 8
#let nums = range(1, count + 1)
#let fib(n) = (
if n <= 2 { 1 }
else { fib(n - 1) + fib(n - 2) }
)
The first #count numbers of the sequence are:
#align(center, table(
columns: count,
..nums.map(n => $F_#n$),
..nums.map(n => str(fib(n))),
))
```
  Source: the official Typst repo #linkto("https://github.com/typst/typst")
][
#set align(center + horizon)
#image("examples/fibonacci.png")
]
== Typst's Advantages
- *Concise syntax:* the learning curve is comparable to #Markdown, and the source text stays highly readable. #pause
- *Fast compilation:*
  - Typst is written in Rust, i.e. `typ(esetting+ru)st`.
  - Incremental compilation usually takes *a few milliseconds* to *a few tens of milliseconds*. #pause
- *Easy setup:* unlike #LaTeX, whose installation is full of obstacles, #Typst natively supports CJK and other non-Latin scripts, and both the official Web App and local VS Code work *out of the box*. #pause
- *A modern scripting language:*
  - Variables, functions, closures and error checking + the pure-function mindset of functional programming
  - Nestable `[markup mode]`, `{script mode}` and `$math mode$` #strike[isn't that just JSX]
  - Unified package management, with WASM plugin imports and on-demand automatic installation of third-party packages
== Typst vs. Other Typesetting Systems
#slide[
#set text(.7em)
  #let 难 = text(fill: rgb("#aa0000"), weight: "bold", "Hard")
  #let 易 = text(fill: rgb("#007700"), weight: "bold", "Easy")
  #let 多 = text(fill: rgb("#aa0000"), weight: "bold", "Many")
  #let 少 = text(fill: rgb("#007700"), weight: "bold", "Few")
  #let 慢 = text(fill: rgb("#aa0000"), weight: "bold", "Slow")
  #let 快 = text(fill: rgb("#007700"), weight: "bold", "Fast")
  #let 弱 = text(fill: rgb("#aa0000"), weight: "bold", "Weak")
  #let 强 = text(fill: rgb("#007700"), weight: "bold", "Strong")
  #let 较强 = text(fill: rgb("#007700"), weight: "bold", "Fairly strong")
  #let 中 = text(fill: blue, weight: "bold", "Medium")
#let cell(top, bottom) = stack(spacing: .2em, top, block(height: 2em, text(size: .7em, bottom)))
#v(1em)
#figure(
table(
columns: 8,
stroke: none,
align: center + horizon,
inset: .5em,
table.hline(stroke: 2pt),
      [Typesetting system], [Install difficulty], [Syntax difficulty], [Compile speed], [Typesetting power], [Templating], [Programmability], [Dialect count],
table.hline(stroke: 1pt),
      [LaTeX], cell[#难][many options + large install + complex toolchain], cell[#难][verbose syntax + deep nesting + hard to debug], cell[#慢][macro-language compilation\ is extremely slow], cell[#强][the largest historical\ accumulation], cell[#强][a wealth of templates\ and developers], cell[#中][Turing-complete,\ but only a macro language], cell[#中][many formats,\ engines and distributions],
      [#Markdown], cell[#易][supported by most\ editors by default], cell[#易][entry-level syntax is very simple], cell[#快][simple syntax,\ fairly fast compilation], cell[#弱][HTML-based, weak typesetting], cell[#中][simple syntax,\ easy to swap templates], cell[#弱][not Turing-complete,\ needs external scripts], cell[#多][many dialects,\ hard to unify],
      [Word], cell[#易][installed by default], cell[#易][WYSIWYG], cell[#中][real-time editing,\ lags on large files], cell[#强][general-purpose software\ from a major vendor], cell[#弱][binary format,\ hard to automate], cell[#弱][very weak programmability], cell[#少][a unified standard and file format],
      [#Typst], cell[#易][simple install,\ works out of the box], cell[#中][basic syntax is simple,\ advanced use slightly harder], cell[#快][incremental compile and render,\ the fastest], cell[#较强][already covers everyday\ typesetting needs], cell[#强][templates are easy\ to make and use], cell[#强][Turing-complete,\ a modern language], cell[#少][one syntax,\ one compiler],
table.hline(stroke: 2pt),
),
)
]
#slide[
#set align(center + horizon)
#v(-1.5em)
#image("images/meme.png")
#v(-1.5em)
  From reddit r/LaTeX #linkto("https://www.reddit.com/r/LaTeX/comments/z2ifki/latex_vs_word_vs_pandoc_markdown/") and modified by OrangeX4
]
= Installation
== Cloud Usage
- The official Web App can be used directly in the browser #linkto("https://typst.app/") #pause
- *Pros:*
  - Ready to use immediately, nothing to install.
  - Similar to Overleaf for #LaTeX: edit, compile and share documents directly.
  - Supports *real-time collaboration*, so several people can edit together. #pause
- *Cons:*
  - Few Chinese fonts; font files often have to be uploaded manually, subject to an upload size limit.
  - No version control, and currently no integration with GitHub or other code-hosting platforms.
== Local Use (Recommended)
- *VS Code setup (recommended)*
  - Install the "Tinymist Typst" and "Typst Preview" extensions from the marketplace.
  - Create a `.typ` file, then press #keydown[Ctrl] + #keydown[K] #keydown[V] for a live preview.
  - *No further configuration is needed*; for example, we do not need to install the Typst CLI from the command line. #pause
- *Neovim / Emacs setup*
  - Configure the corresponding LSP and preview plugins. #pause
- *CLI setup:* `typst compile --root <DIR> <INPUT_FILE>`
  - Windows: `winget install --id Typst.Typst`
  - macOS: `brew install typst`
  - Linux: see Typst on Repology #linkto("https://repology.org/project/typst/versions")
= Quick Start
== Hello World
#slide[
#set text(.5em)
````typ
#set page(width: 20em, height: auto)
#show heading.where(level: 1): set align(center)
#show "Typst": set text(fill: blue, weight: "bold")
#show "LaTeX": set text(fill: red, weight: "bold")
#show "Markdown": set text(fill: purple, weight: "bold")
= Typst Talk
Typst is a typesetting system based on _markup_, born for *academic writing*.
Typst = LaTeX's typesetting power + Markdown's concise syntax + a modern scripting language
#underline[This talk] covers the following:
+ A quick start with Typst
+ Writing all kinds of templates in Typst
  - Notes, papers, résumés and slides
+ Advanced Typst features
  - Scripting, styling and package management
+ The experience of developing in the Typst ecosystem
  - Pinit, MiTeX, Touying and VS Code extensions
```py
print('Hello Typst!')
```
````
][
#set align(center + horizon)
#v(-1em)
#image("examples/poster.png")
]
== Markup Is Just Syntactic Sugar
#slide[
````typ
= Level-1 heading
== Level-2 heading
A simple paragraph, with *bold* and _emphasis_.
- Unordered list
+ Ordered list
/ Term: term list
```py
print('Hello Typst!')
```
````
][
````typ
#heading(level: 1, [Level-1 heading])
#heading(level: 2, [Level-2 heading])
A simple paragraph, with #strong[bold] and #emph[emphasis].
#list.item[Unordered list]
#enum.item[Ordered list]
#terms.item[Term][term list]
#raw(lang: "py", block: true, "print('Hello Typst!')")
````
]
== Core Concepts
- *Simplicity through Consistency*
  - #Markdown-like special markup syntax, achieving the *separation of content and format*.
  - `= Level-1 heading` is just *syntactic sugar* for `#heading[Level-1 heading]`. #pause
- *Markup mode and script mode*
  - In markup mode, a hash `#` enters script mode, e.g. `#strong[bold]`.
  - In script mode no extra hash is needed, e.g. `#heading(strong([bold]))`
  - Larger pieces of script can use curly braces `{}`, e.g. `#{1 + 1}`. #pause
  - In script mode, square brackets `[]` enter markup mode; this is called a *content block*.
  - #Typst is a strongly typed language with the usual data types, such as `int` and `str`.
  - The content-block type `content` is #Typst's core type, and blocks can be nested.
  - `#fn(..)[XXX][YYY]` is *syntactic sugar* for `#fn(.., [XXX], [YYY])`.
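- *A tiny sketch of mixing the two modes (snippet is illustrative, not from the official docs):*
  ```typ
  // script mode defines a function whose body is a markup content block
  #let greet(name) = [Hello, *#name*!]
  #greet("Typst") // renders as: Hello, Typst!
  ```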
== Set/Show Rules
- *Set rules configure styles: the ability to "set default argument values for a function".*
  - For example, `#set heading(numbering: "1.")` configures heading numbering.
  - It turns `#heading[Title]` into `#heading(numbering: "1.", [Title])`. #pause
- *Show rules perform global substitution: the ability to do "macro programming" on the syntax tree.*
  - For example, `#show "LaTeX": "Typst"` replaces the word `LaTeX` with `Typst`.
  - For example, to center level-1 headings you can use an *arrow function*:
- #block(
width: 100%,
fill: luma(240),
inset: (x: .3em, y: 0em),
outset: (x: 0em, y: .3em),
radius: .2em,
```typ
#show heading.where(level: 1): body => {
set align(center)
body
}
```,
)
  - which simplifies to `#show heading.where(level: 1): set align(center)`
== Math Equations
- `$x$` is an inline equation, while `$ x^2 + y^2 = 1 $ <circle>` is a display equation. #pause
- Differences from #LaTeX:
  - `(x + 1) / x >= 1 => 1/x >= 0`
  - #raw(lang: "latex", `\frac{x + 1}{x} \ge 1 \Rightarrow \frac{1}{x} \ge 0`.text) #pause
- *"But I want to write LaTeX syntax!"* #linkto("https://github.com/mitex-rs/mitex")
```typ
#import "@preview/mitex:0.2.2": *
Write inline equations like #mi("x") or #mi[y].
#mitex(`
\frac{x + 1}{x} \ge 1 \Rightarrow \frac{1}{x} \ge 0
`)
```
= Making Templates
== Making a Paper Template
Now we want to build a template for a conference. The *requirements* are:
+ The *font* should be an 11pt serif;
+ The *title* should be 17pt bold, centered;
+ The paper has a *single-column abstract* and a *two-column body*;
+ The *abstract* should be centered;
+ The *body* should be justified;
+ *Level-1 section headings* should be 13pt, centered, and rendered in small caps;
+ *Level-2 headings* are short run-in headings, italic, at the same size as the body text;
+ Finally, the *page size* should be US letter, with the page number centered in the footer and the paper's title in the top-left corner of every page.
#slide[
#set text(.65em)
#show: columns.with(2)
#show raw.where(block: true): block.with(width: 100%, fill: luma(240), outset: .7em, radius: .2em)
```typ
#let conf(
title: none,
authors: (),
abstract: [],
doc,
) = {
  // The font should be an 11pt serif
set text(font: "Linux Libertine", 11pt)
  // The body should be justified
set par(justify: true)
  // Page size US letter, page number centered in the footer,
  // paper title in the top-left corner of every page
set page(
"us-letter",
margin: auto,
header: align(
right + horizon,
title
),
numbering: "1",
)
  // Level-1 section headings: 13pt, centered, small caps
show heading.where(
level: 1
): it => block(
align(center,
text(
13pt,
weight: "regular",
smallcaps(it.body),
)
),
)
  // Level-2 headings are short run-in headings, italic, same size as body text
show heading.where(
level: 2
): it => box(
text(
11pt,
weight: "regular",
style: "italic",
it.body + [.],
)
)
  // The title: 17pt bold, centered
set align(center)
text(17pt, title)
  // Add the author list, in up to 3 columns
let count = calc.min(authors.len(), 3)
grid(
columns: (1fr,) * count,
row-gutter: 24pt,
..authors.map(author => [
#author.name \
#author.affiliation \
#link("mailto:" + author.email)
]),
)
  // Single-column abstract, centered
par(justify: false)[
*Abstract* \
#abstract
]
  // Two-column body
set align(left)
columns(2, doc)
}
```
  Source: the official Typst documentation #linkto("https://typst.app/docs/tutorial/making-a-template/")
]
#slide[
#set text(.5em)
#v(-1em)
#block(breakable: false)[
```typ
#show: doc => conf(
title: [
Towards Improved Modelling
],
authors: (
(
name: "<NAME>",
affiliation: "Artos Institute",
email: "<EMAIL>",
),
(
name: "<NAME>",
affiliation: "Honduras State",
email: "<EMAIL>",
),
),
abstract: lorem(80),
doc,
)
= Introduction
#lorem(90)
== Motivation
#lorem(140)
== Problem Statement
#lorem(50)
= Related Work
#lorem(200)
```
]
][
#set align(right + horizon)
#show: rect.with(stroke: .5pt)
#image("examples/conference.png")
]
== Making a Résumé Template
#slide(composer: (1fr, auto))[
  - Word / HTML résumé templates?
    - *Not good-looking enough* #pause
  - #LaTeX résumé templates?
    - *Complicated environment setup*
    - *Hard to customize yourself* #pause
  - #Typst résumé templates?
    - *An area of absolute strength*
    - *Chinese-Resume-in-Typst* #linkto("https://github.com/OrangeX4/Chinese-Resume-in-Typst")
  #meanwhile
][
#set align(center + horizon)
#show: rect.with(stroke: .5pt)
#image("images/resume.png")
]
#slide(composer: (1fr, auto))[
#set text(.5em)
#show: columns.with(2)
#raw(lang: "typ", block: true, read("examples/chicv.typ"))
#set text(1.5em)
  Source: chicv #linkto("https://github.com/skyzh/chicv")
][
#set align(center + horizon)
#show: rect.with(stroke: .5pt)
#image("examples/chicv.png")
]
== The Nanjing University Thesis
#slide(composer: (1fr, auto))[
  - *nju-thesis-typst* #linkto("https://github.com/nju-lug/nju-thesis-typst")
  - Total development time: *one week*
    - Concise syntax, fast compilation
    - Global configuration saved and passed around via *closures*
  - *Undergraduate + graduate templates*
#set text(.65em)
#show: columns.with(3)
#show raw.where(block: true): block.with(width: 100%, fill: luma(240), outset: .5em, radius: .2em)
```typ
// Document settings
#show: doc
// Cover page
#cover()
// Declaration page
#decl-page()
// Front matter
#show: preface
// Chinese abstract
#abstract(
  keywords: ("just", "some", "test", "keywords")
)[
  The Chinese abstract
]
// Table of contents
#outline-page()
// List of figures
#list-of-figures()
// List of tables
#list-of-tables()
// Main matter
#show: mainmatter
= Basic Features
== Footnotes
We can add a footnote.#footnote[Footnote content]
```
][
#set align(center + horizon)
#show: rect.with(stroke: .5pt)
#image("images/nju-thesis.png")
]
= Making Slides
== Touying
- #Touying is a slides package developed for Typst, analogous to #LaTeX's Beamer.
  - The name is Chinese for *"projection"*, while Beamer is German for *"projector"*. #linkto("https://touying-typ.github.io/touying/zh/") #pause
- *Basic framework:*
  - A global singleton object `s` stores the title, author, date and similar info.
  - Use `= section`, `== subsection` and `=== title` to structure the slides.
  - Use `#slide[..]` blocks for more elegant and fine-grained control. #pause
- *Using a theme:* `#let s = themes.university.register()` #pause
- *Animation:*
  - The `#pause` and `#meanwhile` markers.
  - `#only("2-")[]`, `#uncover("2-")[]` and `#alternatives[][]`; see the sketch below.
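- *A minimal animation sketch (assumption: placed inside a Touying slide):*
  ```typ
  Shown from the first subslide. #pause
  Shown from subslide 2 onwards. #meanwhile
  #only("2-")[Appears from subslide 2 as well, in parallel.]
  ```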
#slide(composer: (1fr, auto))[
#set text(.5em)
#show: columns.with(2, gutter: 3em)
```typ
#import "@preview/touying:0.5.2": *
#import themes.aqua: *
#show: aqua-theme.with(
aspect-ratio: "16-9",
config-info(
title: [Title],
subtitle: [Subtitle],
author: [Authors],
date: datetime.today(),
institution: [Institution],
),
)
#title-slide()
#outline-slide()
= The Section
== Slide Title
#lorem(40)
#focus-slide[
Another variant with primary color in background...
]
== Summary
#slide(self => [
#align(center + horizon)[
#set text(size: 3em, weight: "bold", fill: self.colors.primary)
THANKS FOR ALL
]
])
```
#set text(1.2em)
  Source: the Touying documentation #linkto("https://touying-typ.github.io/zh/docs/themes/aqua")
][
#set align(center + horizon)
#show: pad.with(right: -1.5em)
#image(height: 90%, "examples/touying.png")
]
== Pinit
- The *Pinit* package provides relative positioning based on *pins*.
- It makes *pointing arrows* and *callout explanations* easy to build.
- *A simple example:*
#grid(columns: 2, gutter: 1em)[
#set text(.85em)
#show raw.where(block: true): block.with(width: 100%, fill: luma(240), outset: .7em, radius: .2em)
```typ
#import "@preview/pinit:0.2.0": *
#set text(size: 24pt)
A simple #pin(1)highlighted text#pin(2).
#pinit-highlight(1, 2)
#pinit-point-from(2)[It is simple.]
```
][
#show: align.with(center + horizon)
#show: block.with(breakable: false)
#v(-2em)
#image("images/pinit-1.png")
#image("images/pinit-2.png")
]
#slide[
#set align(center + horizon)
#image(height: 115%, "images/pinit-3.png")
#set text(.8em)
  #place(top + left, dy: -.5em)[Recreating an algorithms-course slide with #Typst and *Pinit*; style from #linkto("https://chaodong.me/")]
#place(
top + right,
dx: 1.5em,
dy: -.5em,
  )[*Example code* #linkto(icon: "mark-github", "https://touying-typ.github.io/touying/zh/docs/integration/pinit")]
]
== Touying vs. Other Slides Solutions
#slide[
#set text(.7em)
  #let 难 = text(fill: rgb("#aa0000"), weight: "bold", "Hard")
  #let 易 = text(fill: rgb("#007700"), weight: "bold", "Easy")
  #let 慢 = text(fill: rgb("#aa0000"), weight: "bold", "Slow")
  #let 快 = text(fill: rgb("#007700"), weight: "bold", "Fast")
  #let 弱 = text(fill: rgb("#aa0000"), weight: "bold", "Weak")
  #let 强 = text(fill: rgb("#007700"), weight: "bold", "Strong")
  #let 中 = text(fill: blue, weight: "bold", "Medium")
#let cell(top, bottom) = stack(spacing: .2em, top, block(height: 2em, text(size: .7em, bottom)))
#v(1em)
#figure(
table(
columns: 8,
stroke: none,
align: center + horizon,
inset: .5em,
table.hline(stroke: 2pt),
      [Solution], [Syntax difficulty], [Compile speed], [Typesetting power], [Templating], [Programmability], [Animation], [Code & math],
table.hline(stroke: 1pt),
      [*PowerPoint*], cell[#易][WYSIWYG], cell[#快][real-time editing], cell[#强][general-purpose software\ from a major vendor], cell[#强][the most templates,\ easy to make your own], cell[#弱][very weak programmability,\ hard to show progress], cell[#强][many animation effects,\ but complex to use], cell[#难][hard to insert code and math #strike[paste screenshots]],
      [Beamer], cell[#难][verbose syntax + deep nesting + hard to debug], cell[#慢][macro-language compilation\ is extremely slow], cell[#弱][layout is hard to change\ once a template is applied], cell[#中][quite a few templates,\ but hard to develop new ones], cell[#中][Turing-complete,\ but only a macro language], cell[#中][simple animations are easy,\ no transition animations], cell[#易][supported out of the box],
      [#Markdown], cell[#易][entry-level syntax is very simple], cell[#快][simple syntax,\ fairly fast compilation], cell[#弱][syntax limits\ typesetting power], cell[#弱][hard to make templates,\ built-in ones only], cell[#弱][not Turing-complete,\ needs external scripts], cell[#中][animation depends on\ what the tool provides], cell[#易][supported out of the box],
      [#Touying], cell[#易][simple syntax,\ convenient to use], cell[#快][incremental compile and render,\ the fastest], cell[#中][covers everyday academic\ slide needs], cell[#强][templates are easy\ to make and use], cell[#强][Turing-complete,\ a modern language], cell[#中][simple animations are easy,\ no transition animations], cell[#易][supported by default\ via the MiTeX package],
table.hline(stroke: 2pt),
),
)
]
== Some Common Questions about Slides
- *Can I insert LaTeX equations?*
  - Yes, just use the MiTeX package. #linkto("https://github.com/mitex-rs/mitex") #pause
- *Can I add animated GIFs or videos?*
  - Animated GIFs work, but you need the *Typst Preview* extension's slide mode.
  - This is because the *Typst Preview* extension is *SVG-based*. #pause
- *Is inserting images convenient?*
  - Yes; this talk's slides, for example, contain a pile of images.
  - You can use a *grid layout*.
  - You can also use the *Pinit* package's *pin* feature.
= Package Management
== Typst Package Management
- #Typst already has a simple but powerful package-management scheme.
- Packages are imported via `#import "@preview/pkg:1.0.0"` (example below).
  - *Third-party packages are downloaded and imported automatically, on demand.*
  - So there is no need for a TexLive-style full install that fills up your disk.
- The `@preview` namespace is used.
  - The version number is required, to keep document sources reproducible.
- Packages currently live in a single unified GitHub repo. #linkto("https://github.com/typst/packages")
  - A package can be a *Package* or a *Template*.
  - Packages can also be stored locally and imported globally.
- #Typst has a *Typst Universe* for browsing existing packages. #linkto("https://typst.app/universe")
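- *For instance, the first compile of a document with a not-yet-installed package fetches it automatically (package name and version here are illustrative):*
  ```typ
  #import "@preview/pkg:1.0.0": *
  ```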
== WASM Plugins
- *WASM* is a cross-platform assembly representation born on the *Web*.
- #Typst has a *WASM plugin* feature, which means:
  - A #Typst package does not have to be pure Typst code.
  - #Typst packages can be written in essentially any language, e.g. *Rust* and *JS*. #pause
- Some examples of WASM packages:
  - *jogs:* wraps *QuickJS* to run *JavaScript* code inside #Typst.
  - *pyrunner:* runs *Python* code inside #Typst.
  - *tiaoma:* wraps *Zint* to generate barcodes and QR codes.
  - *diagraph:* use *Graphviz* inside #Typst.
= Experience Developing in the Typst Ecosystem
== Projects I Helped Develop
- *Touying:* #Touying is a slides package developed for Typst. #linkto("https://github.com/touying-typ/touying")
- *MiTeX:* a *transpiler* written in Rust for rendering *LaTeX math* quickly. #linkto("https://github.com/mitex-rs/mitex")
- *Pinit:* relative positioning based on *pins*. #linkto("https://github.com/OrangeX4/typst-pinit")
- *nju-thesis-typst:* the Nanjing University thesis template in #Typst. #linkto("https://github.com/nju-lug/nju-thesis-typst")
- *Chinese-Resume-in-Typst:* a good-looking Chinese résumé in #Typst. #linkto("https://github.com/OrangeX4/Chinese-Resume-in-Typst")
- *Tablem:* #Markdown-style tables in #Typst. #linkto("https://github.com/OrangeX4/typst-tablem")
- *Typst Sympy Calculator:* symbolic scientific computation in *VS Code*. #linkto("https://github.com/OrangeX4/vscode-typst-sympy-calculator")
- *Typst Sync:* a *VS Code* extension for cloud-syncing local packages. #linkto("https://github.com/OrangeX4/vscode-typst-sync")
== Development Experience
- The state of the #Typst ecosystem: #strike[*thriving, everything racing to grow*] #pause
  - Simple syntax and strong typing make development and debugging easy.
  - Writing DSLs is convenient too, e.g. *MiTeX*, #Touying and *Tablem*. #pause
  - Plenty of features remain to be built, #strike[such as re-implementing every #LaTeX package]. #pause
- *Some examples:*
  - The *Tinymist* and *Typst Preview* extensions, developed by Chinese contributors.
  - *Pandoc* support and *Quarto* support.
  - Running #Typst on the web: typst.ts and typst-book. #linkto("https://myriad-dreamin.github.io/typst-book/")
  - The *Typst Math* extension, which renders math symbols right inside the *VS Code* editor.
= Closing
== References and Acknowledgements
#slide[
  #set enum(numbering: "[1]")
  + The official #Typst documentation #linkto("https://typst.app/docs")
  + *The Modern #LaTeX Introduction talk* #linkto("https://github.com/stone-zeng/latex-talk")
  + *The #Typst Chinese tutorial* #linkto("https://github.com/typst-doc-cn/tutorial")
  + *Unofficial Typst Chinese chat group* 793548390
  + *Nanjing University Typst chat group* 943622984
]
== About
#slide[
  *These slides:* https://github.com/OrangeX4/typst-talk #linkto("https://github.com/OrangeX4/typst-talk")
  *Last updated:* #datetime.today().display()
  *License:* CC BY-SA 4.0
  *Author:* OrangeX4 #linkto("https://github.com/OrangeX4")
]
#focus-slide[
#set align(center + horizon)
\#thanks
]
|
|
https://github.com/typst/packages | https://raw.githubusercontent.com/typst/packages/main/packages/preview/unichar/0.1.0/ucd/block-11660.typ | typst | Apache License 2.0 | #let data = (
("MONGOLIAN BIRGA WITH ORNAMENT", "Po", 0),
("MONGOLIAN ROTATED BIRGA", "Po", 0),
("MONGOLIAN DOUBLE BIRGA WITH ORNAMENT", "Po", 0),
("MONGOLIAN TRIPLE BIRGA WITH ORNAMENT", "Po", 0),
("MONGOLIAN BIRGA WITH DOUBLE ORNAMENT", "Po", 0),
("MONGOLIAN ROTATED BIRGA WITH ORNAMENT", "Po", 0),
("MONGOLIAN ROTATED BIRGA WITH DOUBLE ORNAMENT", "Po", 0),
("MONGOLIAN INVERTED BIRGA", "Po", 0),
("MONGOLIAN INVERTED BIRGA WITH DOUBLE ORNAMENT", "Po", 0),
("MONGOLIAN SWIRL BIRGA", "Po", 0),
("MONGOLIAN SWIRL BIRGA WITH ORNAMENT", "Po", 0),
("MONGOLIAN SWIRL BIRGA WITH DOUBLE ORNAMENT", "Po", 0),
("MONGOLIAN TURNED SWIRL BIRGA WITH DOUBLE ORNAMENT", "Po", 0),
)
|
https://github.com/platformer/typst-algorithms | https://raw.githubusercontent.com/platformer/typst-algorithms/main/test/test.typ | typst | MIT License | #import "../algo.typ": algo, i, d, comment, code
== Plain `algo` and `code`
#table(
columns: 2,
stroke: none,
align: (x, _) => (right, left).at(x),
"title:", "\"Floyd-Warshall\" (algo only)",
"parameters:", "(\"V\", \"E\", \"w\") (algo only)"
)
#algo(
title: "Floyd-Warshall",
parameters: ("V", "E", "w"),
)[
Let $"dist"[u,v] <- infinity$ for $u,v$ in $V$\
For $(u,v)$ in $E$:#i\
$"dist"[u,v] <- w(u,v)$ #comment[edge weights] #d\
For $v$ in $V$:#i\
$"dist"[v,v] <- 0$ #comment[base case] #d\
\
For $k <- 1$ to $|V|$:#i\
For $i <- 1$ to $|V|$:#i\
For $j <- 1$ to $|V|$:#i\
#comment(inline: true)[if new path is shorter, reduce distance]\
If $"dist"[i,j] > "dist"[i,k] + "dist"[k,j]$:#i\
$"dist"[i,j] <- "dist"[i,k] + "dist"[k,j]$#d#d#d#d\
\
Return $"dist"$
]
#code()[
```py
def floyd_warshall(G):
# let G be an adjacency matrix
dist = G
for k in range(len(G)):
for i in range(len(G)):
for j in range(len(G)):
if dist[i][j] > dist[i][k] + dist[k][j]:
dist[i][j] = dist[i][k] + dist[k][j]
return dist
```
]
#show heading: it => {
pagebreak()
it
}
== Basic styling parameters
#table(
columns: 2,
stroke: none,
align: (x, _) => (right, left).at(x),
"fill:", "none",
"stroke:", "2pt + black",
"radius:", "10pt",
"row-gutter:", "8pt",
"column-gutter:", "8pt",
"inset:", "15pt",
"indent-size:", "12pt (algo only)",
"indent-guides:", "1pt + gray",
"indent-guides-offset:", "4pt",
"comment-prefix:", "[#sym.triangle ] (algo only)"
)
#algo(
title: "Floyd-Warshall",
parameters: ("V", "E", "w"),
fill: none,
stroke: 2pt + black,
radius: 10pt,
row-gutter: 8pt,
column-gutter: 8pt,
inset: 15pt,
indent-size: 12pt,
indent-guides: 1pt + gray,
indent-guides-offset: 4pt,
comment-prefix: [#sym.triangle ]
)[
Let $"dist"[u,v] <- infinity$ for $u,v$ in $V$\
For $(u,v)$ in $E$:#i\
$"dist"[u,v] <- w(u,v)$ #comment[edge weights] #d\
For $v$ in $V$:#i\
$"dist"[v,v] <- 0$ #comment[base case] #d\
\
For $k <- 1$ to $|V|$:#i\
For $i <- 1$ to $|V|$:#i\
For $j <- 1$ to $|V|$:#i\
#comment(inline: true)[if new path is shorter, reduce distance]\
If $"dist"[i,j] > "dist"[i,k] + "dist"[k,j]$:#i\
$"dist"[i,j] <- "dist"[i,k] + "dist"[k,j]$#d#d#d#d\
\
Return $"dist"$
]
#code(
fill: none,
stroke: 2pt + black,
radius: 10pt,
row-gutter: 8pt,
column-gutter: 8pt,
inset: 15pt,
indent-guides: 1pt + gray,
indent-guides-offset: 4pt,
)[
```py
def floyd_warshall(G):
# let G be an adjacency matrix
dist = G
for k in range(len(G)):
for i in range(len(G)):
for j in range(len(G)):
if dist[i][j] > dist[i][k] + dist[k][j]:
dist[i][j] = dist[i][k] + dist[k][j]
return dist
```
]
== Empty bodies
#linebreak()
#algo()[]
#code()[]
== `code` with empty raw text
#linebreak()
#code()[#raw("")]
== `code` with empty raw block
#linebreak()
#code()[#raw("", block: true)]
== `code` with non-sequence raw block
#linebreak()
#code()[```py
def floyd_warshall(G):
# let G be an adjacency matrix
dist = G
for k in range(len(G)):
for i in range(len(G)):
for j in range(len(G)):
if dist[i][j] > dist[i][k] + dist[k][j]:
dist[i][j] = dist[i][k] + dist[k][j]
return dist
```]
== Indent guides with line wrapping
#table(
columns: 2,
stroke: none,
align: (x, _) => (right, left).at(x),
"indent-guides:", "1pt + black",
)
#algo(
title: "Floyd-Warshall",
parameters: ("V", "E", "w"),
indent-guides: 1pt + black,
)[
Let $"dist"[u,v] <- infinity$ for $u,v$ in $V$\
For $(u,v)$ in $E$:#i\
$"dist"[u,v] <- w(u,v)$ #comment[edge weights] #d\
For $v$ in $V$:#i\
$"dist"[v,v] <- 0$ #comment[base case] #d\
\
For $k <- 1$ to $|V|$:#i\
For $i <- 1$ to $|V|$:#i\
For $j <- 1$ to $|V|$:#i\
#comment(inline: true)[if new path is shorter, reduce distance]\
If $"dist"[i,j] > "dist"[i,k] + "dist"[k,j]$:#i\
$"dist"[i,j] <- "dist"[i,k] + "dist"[k,j]$\
blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah#d#d#d#d\
\
Return $"dist"$
]
#code(
indent-guides: 1pt + black
)[
```py
def floyd_warshall(G):
# let G be an adjacency matrix
dist = G
for k in range(len(G)):
for i in range(len(G)):
for j in range(len(G)):
if dist[i][j] > dist[i][k] + dist[k][j]:
dist[i][j] = dist[i][k] + dist[k][j]
blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah
return dist
```
]
== `code` indent guides with custom tab size
#table(
columns: 2,
stroke: none,
align: (x, _) => (right, left).at(x),
"indent-guides:", "1pt + black",
"tab-size:", "2"
)
#code(
indent-guides: 1pt + black,
tab-size: 2,
)[
```py
def floyd_warshall(
G
):
# let G be an adjacency matrix
dist = G
for k in range(len(G)):
for i in range(len(G)):
for j in range(len(G)):
if dist[i][j] > dist[i][k] + dist[k][j]:
dist[i][j] = dist[i][k] + dist[k][j]
return dist
```
]
== No line numbers
#table(
columns: 2,
stroke: none,
align: (x, _) => (right, left).at(x),
"line-numbers:", "false"
)
#algo(
title: "Floyd-Warshall",
parameters: ("V", "E", "w"),
line-numbers: false,
)[
Let $"dist"[u,v] <- infinity$ for $u,v$ in $V$\
For $(u,v)$ in $E$:#i\
$"dist"[u,v] <- w(u,v)$ #comment[edge weights] #d\
For $v$ in $V$:#i\
$"dist"[v,v] <- 0$ #comment[base case] #d\
\
For $k <- 1$ to $|V|$:#i\
For $i <- 1$ to $|V|$:#i\
For $j <- 1$ to $|V|$:#i\
#comment(inline: true)[if new path is shorter, reduce distance]\
If $"dist"[i,j] > "dist"[i,k] + "dist"[k,j]$:#i\
$"dist"[i,j] <- "dist"[i,k] + "dist"[k,j]$#d#d#d#d\
\
Return $"dist"$
]
#code(
line-numbers: false,
)[
```py
def floyd_warshall(G):
# let G be an adjacency matrix
dist = G
for k in range(len(G)):
for i in range(len(G)):
for j in range(len(G)):
if dist[i][j] > dist[i][k] + dist[k][j]:
dist[i][j] = dist[i][k] + dist[k][j]
return dist
```
]
== `algo` without keywords
#table(
columns: 2,
stroke: none,
align: (x, _) => (right, left).at(x),
"strong-keywords:", "false"
)
#algo(
title: "Floyd-Warshall",
parameters: ("V", "E", "w"),
strong-keywords: false,
)[
Let $"dist"[u,v] <- infinity$ for $u,v$ in $V$\
For $(u,v)$ in $E$:#i\
$"dist"[u,v] <- w(u,v)$ #comment[edge weights] #d\
For $v$ in $V$:#i\
$"dist"[v,v] <- 0$ #comment[base case] #d\
\
For $k <- 1$ to $|V|$:#i\
For $i <- 1$ to $|V|$:#i\
For $j <- 1$ to $|V|$:#i\
#comment(inline: true)[if new path is shorter, reduce distance]\
If $"dist"[i,j] > "dist"[i,k] + "dist"[k,j]$:#i\
$"dist"[i,j] <- "dist"[i,k] + "dist"[k,j]$#d#d#d#d\
\
Return $"dist"$
]
== `algo` with custom keywords
#table(
columns: 2,
stroke: none,
align: (x, _) => (right, left).at(x),
"keywords:", "(\"in\", \"to\", \"hello world\")"
)
#algo(
title: "Floyd-Warshall",
parameters: ("V", "E", "w"),
keywords: ("in", "to", "hello world")
)[
Let $"dist"[u,v] <- infinity$ for $u,v$ in $V$\
For $(u,v)$ in $E$:#i\
$"dist"[u,v] <- w(u,v)$ #comment[edge weights] #d\
For $v$ in $V$:#i\
$"dist"[v,v] <- 0$ #comment[base case] #d\
\
For $k <- 1$ to $|V|$:#i\
For $i <- 1$ to $|V|$:#i\
For $j <- 1$ to $|V|$:#i\
#comment(inline: true)[if new path is shorter, reduce distance]\
If $"dist"[i,j] > "dist"[i,k] + "dist"[k,j]$:#i\
$"dist"[i,j] <- "dist"[i,k] + "dist"[k,j]$#d#d#d#d\
\
blah blah hello world blah blah\
Return $"dist"$
]
== `algo` without title
#table(
columns: 2,
stroke: none,
align: (x, _) => (right, left).at(x),
"title:", "none"
)
#algo(
parameters: ("V", "E", "w"),
)[
Let $"dist"[u,v] <- infinity$ for $u,v$ in $V$\
For $(u,v)$ in $E$:#i\
$"dist"[u,v] <- w(u,v)$ #comment[edge weights] #d\
For $v$ in $V$:#i\
$"dist"[v,v] <- 0$ #comment[base case] #d\
\
For $k <- 1$ to $|V|$:#i\
For $i <- 1$ to $|V|$:#i\
For $j <- 1$ to $|V|$:#i\
#comment(inline: true)[if new path is shorter, reduce distance]\
If $"dist"[i,j] > "dist"[i,k] + "dist"[k,j]$:#i\
$"dist"[i,j] <- "dist"[i,k] + "dist"[k,j]$#d#d#d#d\
\
Return $"dist"$
]
== `algo` without parameters
#table(
columns: 2,
stroke: none,
align: (x, _) => (right, left).at(x),
"parameters:", "()"
)
#algo(
title: "Floyd-Warshall",
)[
Let $"dist"[u,v] <- infinity$ for $u,v$ in $V$\
For $(u,v)$ in $E$:#i\
$"dist"[u,v] <- w(u,v)$ #comment[edge weights] #d\
For $v$ in $V$:#i\
$"dist"[v,v] <- 0$ #comment[base case] #d\
\
For $k <- 1$ to $|V|$:#i\
For $i <- 1$ to $|V|$:#i\
For $j <- 1$ to $|V|$:#i\
#comment(inline: true)[if new path is shorter, reduce distance]\
If $"dist"[i,j] > "dist"[i,k] + "dist"[k,j]$:#i\
$"dist"[i,j] <- "dist"[i,k] + "dist"[k,j]$#d#d#d#d\
\
Return $"dist"$
]
== `algo` without header
#table(
columns: 2,
stroke: none,
align: (x, _) => (right, left).at(x),
"title:", "none",
"parameters:", "()"
)
#algo()[
Let $"dist"[u,v] <- infinity$ for $u,v$ in $V$\
For $(u,v)$ in $E$:#i\
$"dist"[u,v] <- w(u,v)$ #comment[edge weights] #d\
For $v$ in $V$:#i\
$"dist"[v,v] <- 0$ #comment[base case] #d\
\
For $k <- 1$ to $|V|$:#i\
For $i <- 1$ to $|V|$:#i\
For $j <- 1$ to $|V|$:#i\
#comment(inline: true)[if new path is shorter, reduce distance]\
If $"dist"[i,j] > "dist"[i,k] + "dist"[k,j]$:#i\
$"dist"[i,j] <- "dist"[i,k] + "dist"[k,j]$#d#d#d#d\
\
Return $"dist"$
]
== `algo` with content-type parameters
#table(
columns: 2,
stroke: none,
align: (x, _) => (right, left).at(x),
"parameters:", "([#text(blue, [V])], [#text(red, [E])], [#text(green, [w])])"
)
#algo(
title: "Floyd-Warshall",
parameters: ([#text(blue, [V])], [#text(red, [E])], [#text(green, [w])]),
)[
Let $"dist"[u,v] <- infinity$ for $u,v$ in $V$\
For $(u,v)$ in $E$:#i\
$"dist"[u,v] <- w(u,v)$ #comment[edge weights] #d\
For $v$ in $V$:#i\
$"dist"[v,v] <- 0$ #comment[base case] #d\
\
For $k <- 1$ to $|V|$:#i\
For $i <- 1$ to $|V|$:#i\
For $j <- 1$ to $|V|$:#i\
#comment(inline: true)[if new path is shorter, reduce distance]\
If $"dist"[i,j] > "dist"[i,k] + "dist"[k,j]$:#i\
$"dist"[i,j] <- "dist"[i,k] + "dist"[k,j]$#d#d#d#d\
\
Return $"dist"$
]
== `algo` with content-type title
#table(
columns: 2,
stroke: none,
align: (x, _) => (right, left).at(x),
"title:", "[#set text(red);Floyd-Warshall]"
)
#algo(
title: [#set text(red);Floyd-Warshall]
)[
Let $"dist"[u,v] <- infinity$ for $u,v$ in $V$\
For $(u,v)$ in $E$:#i\
$"dist"[u,v] <- w(u,v)$ #comment[edge weights] #d\
For $v$ in $V$:#i\
$"dist"[v,v] <- 0$ #comment[base case] #d\
\
For $k <- 1$ to $|V|$:#i\
For $i <- 1$ to $|V|$:#i\
For $j <- 1$ to $|V|$:#i\
#comment(inline: true)[if new path is shorter, reduce distance]\
If $"dist"[i,j] > "dist"[i,k] + "dist"[k,j]$:#i\
$"dist"[i,j] <- "dist"[i,k] + "dist"[k,j]$#d#d#d#d\
\
Return $"dist"$
]
== `algo` with custom header
#linebreak()
#algo(
header: {
strong("Floyd-Warshall Algorithm")
move(dx: 18pt, table(
columns: 2,
align: (x, _) => (right, left).at(x),
stroke: none,
inset: 0pt,
row-gutter: 10pt,
column-gutter: 10pt,
strong("Inputs:"),
[
graph $G=(V,E)$\
weight function $w: E -> RR$
],
strong("Outputs:"),
[distance matrix $"dist"$]
))
align(center, line(length: 330pt))
}
)[
Let $"dist"[u,v] <- infinity$ for $u,v$ in $V$\
For $(u,v)$ in $E$:#i\
$"dist"[u,v] <- w(u,v)$ #comment[edge weights] #d\
For $v$ in $V$:#i\
$"dist"[v,v] <- 0$ #comment[base case] #d\
\
For $k <- 1$ to $|V|$:#i\
For $i <- 1$ to $|V|$:#i\
For $j <- 1$ to $|V|$:#i\
#comment(inline: true)[if new path is shorter, reduce distance]\
If $"dist"[i,j] > "dist"[i,k] + "dist"[k,j]$:#i\
$"dist"[i,j] <- "dist"[i,k] + "dist"[k,j]$#d#d#d#d\
\
Return $"dist"$
]
== Text styling
#table(
columns: 2,
stroke: none,
align: (x, _) => (right, left).at(x),
"main-text-styles:", "(fill: green)",
"line-number-styles:", "(fill: red)",
"comment-styles:", "(fill: blue) (algo only)"
)
#algo(
title: "Floyd-Warshall",
parameters: ("V", "E", "w"),
main-text-styles: (fill: green),
line-number-styles: (fill: red),
comment-styles: (fill: blue)
)[
Let $"dist"[u,v] <- infinity$ for $u,v$ in $V$\
For $(u,v)$ in $E$:#i\
$"dist"[u,v] <- w(u,v)$ #comment[edge weights] #d\
For $v$ in $V$:#i\
$"dist"[v,v] <- 0$ #comment[base case] #d\
\
For $k <- 1$ to $|V|$:#i\
For $i <- 1$ to $|V|$:#i\
For $j <- 1$ to $|V|$:#i\
#comment(inline: true)[if new path is shorter, reduce distance]\
If $"dist"[i,j] > "dist"[i,k] + "dist"[k,j]$:#i\
$"dist"[i,j] <- "dist"[i,k] + "dist"[k,j]$#d#d#d#d\
\
Return $"dist"$
]
#code(
main-text-styles: (fill: green),
line-number-styles: (fill: red)
)[
```py
def floyd_warshall(G):
# let G be an adjacency matrix
dist = G
for k in range(len(G)):
for i in range(len(G)):
for j in range(len(G)):
if dist[i][j] > dist[i][k] + dist[k][j]:
dist[i][j] = dist[i][k] + dist[k][j]
return dist
```
]
== Indent guides with big main text
#table(
columns: 2,
stroke: none,
align: (x, _) => (right, left).at(x),
"indent-guides:", "1pt + black",
"main-text-styles:", "(size: 15pt)"
)
#algo(
title: "Floyd-Warshall",
parameters: ("V", "E", "w"),
indent-guides: 1pt + black,
main-text-styles: (size: 15pt),
)[
Let $"dist"[u,v] <- infinity$ for $u,v$ in $V$\
For $(u,v)$ in $E$:#i\
$"dist"[u,v] <- w(u,v)$ #comment[edge weights] #d\
For $v$ in $V$:#i\
$"dist"[v,v] <- 0$ #comment[base case] #d\
\
For $k <- 1$ to $|V|$:#i\
For $i <- 1$ to $|V|$:#i\
For $j <- 1$ to $|V|$:#i\
#comment(inline: true)[if new path is shorter, reduce distance]\
If $"dist"[i,j] > "dist"[i,k] + "dist"[k,j]$:#i\
$"dist"[i,j] <- "dist"[i,k] + "dist"[k,j]$#d#d#d#d\
\
Return $"dist"$
]
#code(
indent-guides: 1pt + black,
main-text-styles: (size: 15pt)
)[
```py
def floyd_warshall(G):
# let G be an adjacency matrix
dist = G
for k in range(len(G)):
for i in range(len(G)):
for j in range(len(G)):
if dist[i][j] > dist[i][k] + dist[k][j]:
dist[i][j] = dist[i][k] + dist[k][j]
return dist
```
]
== Indent guides with big line numbers
#table(
columns: 2,
stroke: none,
align: (x, _) => (right, left).at(x),
"indent-guides:", "1pt + black",
"line-number-styles:", "(size: 15pt)"
)
#algo(
title: "Floyd-Warshall",
parameters: ("V", "E", "w"),
indent-guides: 1pt + black,
line-number-styles: (size: 15pt),
)[
Let $"dist"[u,v] <- infinity$ for $u,v$ in $V$\
For $(u,v)$ in $E$:#i\
$"dist"[u,v] <- w(u,v)$ #comment[edge weights] #d\
For $v$ in $V$:#i\
$"dist"[v,v] <- 0$ #comment[base case] #d\
\
For $k <- 1$ to $|V|$:#i\
For $i <- 1$ to $|V|$:#i\
For $j <- 1$ to $|V|$:#i\
#comment(inline: true)[if new path is shorter, reduce distance]\
If $"dist"[i,j] > "dist"[i,k] + "dist"[k,j]$:#i\
$"dist"[i,j] <- "dist"[i,k] + "dist"[k,j]$#d#d#d#d\
\
Return $"dist"$
]
#code(
indent-guides: 1pt + black,
line-number-styles: (size: 15pt)
)[
```py
def floyd_warshall(G):
# let G be an adjacency matrix
dist = G
for k in range(len(G)):
for i in range(len(G)):
for j in range(len(G)):
if dist[i][j] > dist[i][k] + dist[k][j]:
dist[i][j] = dist[i][k] + dist[k][j]
return dist
```
]
== `algo` indent guides with big comments
#table(
columns: 2,
stroke: none,
align: (x, _) => (right, left).at(x),
"indent-guides:", "1pt + black",
"comment-styles:", "(size: 15pt)"
)
#algo(
title: "Floyd-Warshall",
parameters: ("V", "E", "w"),
indent-guides: 1pt + black,
comment-styles: (size: 15pt),
)[
Let $"dist"[u,v] <- infinity$ for $u,v$ in $V$\
For $(u,v)$ in $E$:#i\
$"dist"[u,v] <- w(u,v)$ #comment[edge weights] #d\
For $v$ in $V$:#i\
$"dist"[v,v] <- 0$ #comment[base case] #d\
\
For $k <- 1$ to $|V|$:#i\
For $i <- 1$ to $|V|$:#i\
For $j <- 1$ to $|V|$:#i\
#comment(inline: true)[if new path is shorter, reduce distance]\
If $"dist"[i,j] > "dist"[i,k] + "dist"[k,j]$:#i\
$"dist"[i,j] <- "dist"[i,k] + "dist"[k,j]$#d#d#d#d\
\
Return $"dist"$
]
== Alignment
#table(
columns: 2,
stroke: none,
align: (x, _) => (right, left).at(x),
"indent-guides:", "1pt + black",
"block-align:", "bottom + right"
)
#algo(
title: "Floyd-Warshall",
parameters: ("V", "E", "w"),
indent-guides: 1pt + black,
block-align: bottom + right,
)[
Let $"dist"[u,v] <- infinity$ for $u,v$ in $V$\
For $(u,v)$ in $E$:#i\
$"dist"[u,v] <- w(u,v)$ #comment[edge weights] #d\
For $v$ in $V$:#i\
$"dist"[v,v] <- 0$ #comment[base case] #d\
\
For $k <- 1$ to $|V|$:#i\
For $i <- 1$ to $|V|$:#i\
For $j <- 1$ to $|V|$:#i\
#comment(inline: true)[if new path is shorter, reduce distance]\
If $"dist"[i,j] > "dist"[i,k] + "dist"[k,j]$:#i\
$"dist"[i,j] <- "dist"[i,k] + "dist"[k,j]$#d#d#d#d\
\
Return $"dist"$
]
#pagebreak()
#code(
indent-guides: 1pt + black,
block-align: bottom + right,
)[
```py
def floyd_warshall(G):
# let G be an adjacency matrix
dist = G
for k in range(len(G)):
for i in range(len(G)):
for j in range(len(G)):
if dist[i][j] > dist[i][k] + dist[k][j]:
dist[i][j] = dist[i][k] + dist[k][j]
return dist
```
]
== Breakable
#table(
columns: 2,
stroke: none,
align: (x, _) => (right, left).at(x),
"indent-guides:", "1pt + black",
"breakable:", "true"
)
#v(450pt)
#algo(
title: "Floyd-Warshall",
parameters: ("V", "E", "w"),
indent-guides: 1pt + black,
breakable: true,
)[
Let $"dist"[u,v] <- infinity$ for $u,v$ in $V$\
For $(u,v)$ in $E$:#i\
$"dist"[u,v] <- w(u,v)$ #comment[edge weights] #d\
For $v$ in $V$:#i\
$"dist"[v,v] <- 0$ #comment[base case] #d\
\
For $k <- 1$ to $|V|$:#i\
For $i <- 1$ to $|V|$:#i\
For $j <- 1$ to $|V|$:#i\
#comment(inline: true)[if new path is shorter, reduce distance]\
If $"dist"[i,j] > "dist"[i,k] + "dist"[k,j]$:#i\
$"dist"[i,j] <- "dist"[i,k] + "dist"[k,j]$#d#d#d#d\
\
Return $"dist"$
]
#v(460pt)
#code(
indent-guides: 1pt + black,
breakable: true,
)[
```py
def floyd_warshall(G):
# let G be an adjacency matrix
dist = G
for k in range(len(G)):
for i in range(len(G)):
for j in range(len(G)):
if dist[i][j] > dist[i][k] + dist[k][j]:
dist[i][j] = dist[i][k] + dist[k][j]
return dist
```
]
== Broken indent guides with small inset
#table(
columns: 2,
stroke: none,
align: (x, _) => (right, left).at(x),
"row-gutter:", "15pt",
"inset:", "3pt",
"indent-guides:", "1pt + black",
"breakable:", "true"
)
#v(380pt)
#algo(
title: "Floyd-Warshall",
parameters: ("V", "E", "w"),
row-gutter: 15pt,
inset: 3pt,
indent-guides: 1pt + black,
breakable: true,
)[
Let $"dist"[u,v] <- infinity$ for $u,v$ in $V$\
For $(u,v)$ in $E$:#i\
$"dist"[u,v] <- w(u,v)$ #comment[edge weights] #d\
For $v$ in $V$:#i\
$"dist"[v,v] <- 0$ #comment[base case] #d\
\
For $k <- 1$ to $|V|$:#i\
For $i <- 1$ to $|V|$:#i\
For $j <- 1$ to $|V|$:#i\
#comment(inline: true)[if new path is shorter, reduce distance]\
If $"dist"[i,j] > "dist"[i,k] + "dist"[k,j]$:#i\
$"dist"[i,j] <- "dist"[i,k] + "dist"[k,j]$#d#d#d#d\
\
Return $"dist"$
]
#v(420pt)
#code(
row-gutter: 15pt,
inset: 3pt,
indent-guides: 1pt + black,
breakable: true,
)[
```py
def floyd_warshall(G):
# let G be an adjacency matrix
dist = G
for k in range(len(G)):
for i in range(len(G)):
for j in range(len(G)):
if dist[i][j] > dist[i][k] + dist[k][j]:
dist[i][j] = dist[i][k] + dist[k][j]
return dist
```
]
|
https://github.com/rabotaem-incorporated/algebra-conspect-1course | https://raw.githubusercontent.com/rabotaem-incorporated/algebra-conspect-1course/master/sections/01-number-theory/!sec.typ | typst | Other | #import "../../config.typ"
#if config.enable-chapters-from-sem1 [
= Number Theory
#include "01-divisibility.typ"
#include "02-equiv-relation.typ"
#include "03-modulo-eq.typ"
#include "04-multiplicative-rings.typ"
#include "05-gcd.typ"
#include "06-coprimes.typ"
#include "07-diophantine-equations.typ"
#include "08-primes.typ"
#include "09-fundamental-th-of-arithm.typ"
#include "10-chinese-rem-th.typ"
#include "11-euler-fn.typ"
] |
https://github.com/noahjutz/AD | https://raw.githubusercontent.com/noahjutz/AD/main/notizen/sortieralgorithmen/insertion_sort/insertion_sort.typ | typst | #import "/components/num_row.typ": num_row
#import "/config.typ": theme
#let row_push_key(nums, i, key) = num_row(
nums,
hl_primary: i,
prefix: (str(key),),
arrow: (
from: i,
to: -1,
color: theme.fg_light
)
)
#let row_shift_forward(nums, i, j, key) = num_row(
nums,
hl_primary: i,
hl_secondary: j,
prefix: (str(key),),
arrow: (
from: j,
to: j+1
)
)
#let row_pop_key(nums, i, j, key) = num_row(
nums,
hl_primary: i,
hl_secondary: j,
prefix: (str(key),),
arrow: (
from: -1,
to: j+1,
color: theme.fg_light
)
)
#let row_complete(nums, i, key) = num_row(
nums,
prefix: (str(key),),
hl_success: range(i+1),
)
#let rows(nums) = {
for i in range(1, nums.len()) {
let key = nums.at(i)
row_push_key(nums, i, key)
let j = i - 1
while j >= 0 and nums.at(j) > key {
row_shift_forward(nums, i, j, key)
nums.at(j+1) = nums.at(j)
j -= 1
}
row_pop_key(nums, i, j, key)
nums.at(j+1) = key
row_complete(nums, i, key)
}
}
#let insertion_sort(..nums) = table(
columns: (auto,) + (1fr,) * (nums.pos().len()+1),
align: center,
..rows(nums.pos())
) |
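// Usage sketch (hypothetical, assuming this file is imported from a document):
// #import "insertion_sort.typ": insertion_sort
// #insertion_sort(5, 2, 4, 6, 1, 3)
// This renders one table row per step, highlighting the current key
// (hl_primary) and the compared/shifted cell (hl_secondary), and marking
// the sorted prefix (hl_success) when each insertion completes.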
|
https://github.com/jamesrswift/chemicoms-paper | https://raw.githubusercontent.com/jamesrswift/chemicoms-paper/main/README.md | markdown | # chemicoms-paper
This is a Typst template for a two-column paper in a style similar to that of the Royal Society of Chemistry.
## Usage
You can use this template in the Typst web app by clicking "Start from template" on the dashboard and searching for `chemicoms-paper`.
Alternatively, you can use the CLI to kick this project off using the command
```
typst init @preview/chemicoms-paper
```
## Configuration
This template exports the `template` function with the following named arguments:
- `title` (optional, content)
- `subtitle` (optional, content)
- `short-title` (optional, string)
- `author(s)` (optional, (array or singular) dictionary or string)
- `name` (required, string, inferred)
- `url` (optional, string)
- `phone` (optional, string)
- `fax` (optional, string)
- `orcid` (optional, string)
- `note` (optional, string)
- `email` (optional, string)
- `corresponding` (optional, boolean, default true if email set)
- `equal-contributor` (optional, boolean)
- `deceased` (optional, boolean)
- `roles` (optional, (array or singular) string)
- `affiliation(s)` (optional, (array or singular) dictionary or string)
- either: (string) or (number)
- `abstract(s)` (optional, (array or singular) dictionary or content)
- `title` (default: "Abstract")
- `content` (required, content, inferred)
- `open-access` (optional, boolean)
- `venue` (optional, content)
- `doi` (optional, string)
- `keywords` (optional, array of strings)
- `dates` (optional, (array or singular) dictionary or date)
- `type` (optional, content)
- `date` (required, date or string, inferred)
The function also accepts a single, positional argument for the body of the paper.
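
For example, a minimal document might look like this (the package version and all argument values below are illustrative; see the list above for the full argument set):

```typst
#import "@preview/chemicoms-paper:0.1.0": template

#show: template.with(
  title: [Towards Improved Modelling],
  authors: (
    (name: "A. Author", affiliations: ("Artos Institute",)),
  ),
  abstracts: (
    (title: "Abstract", content: lorem(50)),
  ),
  keywords: ("chemistry", "typesetting"),
)

= Introduction
#lorem(30)
```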
## Media
<p align="center">
<img alt="Light" src="./thumbnails/1.png" width="45%">
<img alt="Dark" src="./thumbnails/2.png" width="45%">
</p> |
|
https://github.com/coastalwhite/typst-tudelft | https://raw.githubusercontent.com/coastalwhite/typst-tudelft/main/tudelft-template.typ | typst | #let COLORS = (
primary: rgb("#00A6D6"),
secondary: rgb("#99D28C"),
)
#let BULLETS_FN = (level) => {
set text(fill: COLORS.primary)
([•], [‣], [◦]).at(level)
}
#let HEADER_FONT = "Roboto Slab"
#let TEXT_FONT = "Noto Sans"
#let SLIDE_INSET = 2cm
#let SPACING = 1cm
#let PROGRESS_BAR_HEIGHT = 0.5cm
#let CONTENT_SLIDE = (
title_font_size: 32pt,
text_font_size: 24pt,
numbering_font_size: 20pt,
)
#let TOPIC_SLIDE = (
title_font_size: 32pt,
subtitle_font_size: 24pt,
)
#let slide_counter = counter("slide-counter")
#let conf(doc) = {
set page(
paper: "presentation-16-9",
margin: (x: 0cm, y: 0cm),
)
show list: p => [ #set list(marker: BULLETS_FN); #p ]
show figure.caption: p => [ #set text(size: 14pt); #p ]
set text(font: TEXT_FONT)
doc
}
#let show_heading(
body,
fill: COLORS.primary,
font: HEADER_FONT,
weight: "medium",
) = [
#show heading: it => [
#set text(
fill: fill,
font: font,
weight: weight,
)
#it
]
#body
]
#let bare-slide(body) = [
#body
#pagebreak(weak: true)
]
#let numbered-slide(body) = bare-slide({
slide_counter.step()
locate(loc => grid(
rows: (PROGRESS_BAR_HEIGHT, 1fr),
rect(
stroke: none,
fill: COLORS.primary,
width: 100% * (slide_counter.at(loc).at(0) / slide_counter.final(loc).at(0)),
),
box(
inset: (
top: SLIDE_INSET - PROGRESS_BAR_HEIGHT,
left: SLIDE_INSET,
right: SLIDE_INSET,
bottom: 0cm
),
grid(
rows: (1fr, 3cm),
row-gutter: SPACING,
body,
grid(
columns: (1fr, 1fr),
image("assets/logo/TUDelft_logo_rgb.svg", height: 100%),
align(right + horizon, {
set text(size: CONTENT_SLIDE.numbering_font_size)
slide_counter.display()
}),
)
)
)
)
)
})
#let title-slide(title, author: none) = {
bare-slide(
align(center + horizon, show_heading[
#heading(level: 1, outlined: false, title)
#if author != none [
#heading(level: 2, outlined: false, author)
]
])
)
}
#let topic-slide(title, subtitle: none) = {
bare-slide(
rect(width: 100%, inset: (x: SLIDE_INSET, y: 0cm), fill: COLORS.primary, stroke: none,
grid(
rows: (20%, 60%, 20%),
[],
align(left + horizon, show_heading(fill: white)[
#show heading: it => {
set text(size: if it.level == 1 {
TOPIC_SLIDE.title_font_size
} else {
TOPIC_SLIDE.subtitle_font_size
})
it
}
= #title
#if subtitle != none [
#heading(level: 2, outlined: false, subtitle)
]
]),
align(left + horizon, [
#image("./assets/logo/TUDelft_logo_white.svg", height: 100%)
]), [],
)
)
)
}
#let slide(body, rhs_figure: none, title: "Placeholder Title") = {
numbered-slide(
grid(
rows: (32pt, 1fr),
row-gutter: SPACING,
show_heading[
#set text(size: CONTENT_SLIDE.title_font_size)
#heading(level: 6, title)
],
[
#set text(
font: TEXT_FONT,
size: CONTENT_SLIDE.text_font_size,
)
#if rhs_figure == none [
#body
] else [
#grid(
columns: (1fr, 1fr),
gutter: 1cm,
[ #body ],
align(center + horizon, rhs_figure)
)
]
],
)
)
}
#let outline-slide(title: "Table of Contents") = slide(
title: title,
locate(loc => {
let headers = query(heading, loc).filter(it => it.outlined)
list(..headers.map(h => list.item(h.body)))
}),
)
#let bare-bg-slide(body) = bare-slide[
#set page(background: rect(fill: COLORS.primary, width: 100%, height: 100%))
#image("./assets/background.svg", width: 100%)
#place(dy: -100% - 1em)[
#body
]
]
#let title-bg-slide(
title: "Title Placeholder",
subtitle: "Subtitle Placeholder"
) = bare-bg-slide[
#box(
inset: 2cm,
width: 100%,
height: 100%,
show_heading(
fill: white,
align(right + bottom, [
#show heading.where(level: 1): it => [
#set text(size: 36pt)
#it
]
#show heading.where(level: 2): it => [
#set text(size: 24pt)
#it
]
#stack(
dir: ttb,
spacing: 1cm,
[ = #title ],
[ == #subtitle ]
)
])
)
)
] |
|
https://github.com/Kasci/LiturgicalBooks | https://raw.githubusercontent.com/Kasci/LiturgicalBooks/master/CS_SK_pascha_test/Pascha.typ | typst | #import "/CU_json/all.typ": *
#show: book
#show "(ímja rek)": sText
#show "(povie meno)": sText
#let sk = json("./pascha_sk.json")
#let csl = json("./pascha_csl.json")
#let generateTable(tbl) = [
#table(
columns: (auto, 20pt, auto),
column-gutter: 0pt,
stroke: none,
// align: (a,b,c,d,e) => (center, left, center, center, left).at(a),
..tbl
)
]
#set text(font: "Monomakh Unicode", lang: "cs", fill: black)
#show heading: it => {
// if it.location().position().y > (page.height * 0.5).to-absolute() [
// #pagebreak()
// ]
align(center, block(inset: (y: 1em), text(hyphenate: false, it)))
}
#show heading.where(level:1): it => align(center, block(inset: (y: 2em), text(hyphenate: false, it)))
#let line = table.hline(stroke: (paint: red, dash: "dashed"))
#let note(txt) = table.cell(colspan: 3, pText(txt))
#let D(sk, cs) = (
gText(sk),
table.cell(align: center, pText("D")),
gText(cs))
#let K(sk, cs) = (
gText(sk),
table.cell(align: center, pText("K")),
gText(cs))
#let L(sk, cs, note: "") = (
sk + if note.len() > 0 {sText(" " + note)} else {""},
table.cell(align: center, pText("L")),
cs + if note.len() > 0 {sText(" " + note)} else {""}
)
#let N(sk, cs, mid) = (
cText(sk),
table.cell(align: center, pText(mid)),
cText(cs)
)
#let X(sk, cs, mid) = (
gText(sk),
table.cell(align: center, pText(mid)),
gText(cs)
)
#let skChV = "Kristus slávne vstal z mŕtvych, * smrťou smrť premohol * a tým, čo sú v hroboch, * život daroval."
#let cslChV = "Christós voskrése iz mértvych, * smértiju smerť popráv * i súščym vo hrobích živót darováv."
#let paschalnyUvod() = {
generateTable((
K("Sláva svätej, jednopodstatnej, životodarnej a nedeliteľnej Trojici v každom čase, teraz i vždycky, i na veky vekov.", "<NAME>, i Jedinosúščňij, i Životvorjáščij, i Nerazďilímij Tróici, vsehdá, nýňi i prísno, i vo víki vikóv."),
line,
note("Ak niet kňaza:"),
L("Pane Ježišu Kriste, Bože náš, pre modlitby našich svätých otcov, zmiluj sa nad nami.", "Molítvami svjatých Otéc, Hóspodi Isúse Chríste, Bóže náš, pomíluj nás."),
line,
L("Amen.","Amíň."),
K(skChV, cslChV),
L(skChV, cslChV, note: "(2x)"),
).flatten())
}
#let verse() = {
let cnt = 1
let vers = sk.CH_S.zip(csl.CH_S)
let ret = (
table.cell(colspan: 3, inset: 1em, [== Verše]),
)
for it in vers {
ret.push(N(it.at(0), it.at(1), numbering("i", cnt)))
ret.push(L(skChV, cslChV))
cnt += 1
}
generateTable(ret.flatten())
}
#let velkaEktenia() = {
let sk = (
"Za pokoj zhora a za spásu našich duší modlime sa k Pánovi.",
"Za mier na celom svete, za blaho svätých Božích cirkví a za zjednotenie všetkých modlime sa k Pánovi.",
"Za tento svätý chrám a za tých, čo doň vstupujú s vierou, nábožnosťou a s Božou bázňou, modlime sa k Pánovi.",
"Za veľkňaza všeobecnej Cirkvi, nášho Svätého Otca (povie meno), rímskeho pápeža, modlime sa k Pánovi.",
"Za nášho najosvietenejšieho otca arcibiskupa a metropolitu (povie meno), za nášho bohumilého otca biskupa (povie meno), za ctihodných kňazov a diakonov v Kristovi, za všetko duchovenstvo a ľud modlime sa k Pánovi.",
"Za tých, čo spravujú a ochraňujú našu krajinu, modlime sa k Pánovi.",
"Za toto mesto (alebo Za túto obec alebo Za tento svätý dom), za všetky mestá, obce, krajiny a za tých, ktorí v nich podľa viery žijú, modlime sa k Pánovi.",
"Za priaznivé počasie, hojnosť plodov zeme a za pokojné časy modlime sa k Pánovi.",
"Za cestujúcich, chorých, trpiacich, zajatých a za ich záchranu modlime sa k Pánovi.",
"Za oslobodenie od všetkého nášho zármutku, hnevu a núdze modlime sa k Pánovi.",
"Zastaň sa a spas nás, zmiluj sa a zachráň nás, Bože, svojou milosťou."
)
let csl = (
"O svýšňim míri, i spaséniji dúš nášich, Hóspodu pomólimsja.",
"O míri vsehó míra, blahostojániji svjatých Bóžijich cerkvéj, i sojedinéniji vsích, Hóspodu pomólimsja.",
"O svjaťím chrámi sém, i s víroju, blahohovínijem, i stráchom bóžijim vchoďáščich v óň, Hóspodu pomólimsja.",
"O svjaťíjšem vselénsťim Archijeréji nášem (ímja rek), Pápi Rímsťim, Hóspodu pomólimsja.",
"O preosvjaščénňijšem Archijepískopi i Mitropolíťi nášem Kýr (ímja rék), i o bohoľubívim Jepískopi nášem Kýr (ímja rék), čéstňim presvýterstvi, vo Chrisťí dijákonstvi, o vsém príčťi i ľúdech, Hóspodu pomólimsja.",
"O bohochranímim naróďi nášem, o prederžáščich vlastéch nášich, i o vsém vójinstvi, Hóspodu pomólimsja.",
"O hráďi sém (ilí O vési séj, ilí O svjaťíj obíteli séj), vsjákom hráďi, straňí, i víroju živúščich v ních, Hóspodu pomólimsja.",
"O blahorastvoréniji vozdúchov, o izobíliji plodóv zemných, i vrémeňich mírnych, Hóspodu pomólimsja.",
"O plávajuščich, putešéstvujuščich, nedúhujuščich, strážduščich, pľinénnych, i o spaséniji ích, Hóspodu pomólimsja.",
"O izbávitisja nám ot vsjákija skórbi, hňíva i núždy, Hóspodu pomólimsja.",
"Zastupí, spasí, pomíluj, i sochraní nás, Bóže, Tvojéju blahodátiju."
)
let idx = 1
let mix = sk.zip(csl)
let all = (
)
for (s, c) in mix {
all.push(X(s,c,numbering("i", idx)))
idx += 1
}
let first_sk = "Modlime sa v pokoji k Pánovi."
let first_csl = "Mírom Hóspodu pomólimsja."
let last_sk = "Presvätú, prečistú, preblahoslavenú a slávnu Vládkyňu našu, Bohorodičku Máriu, vždy Pannu, i všetkých svätých spomínajúc, sami seba, druh druha i celý náš život Kristu Bohu oddajme."
let last_csl = "Presvjatúju, prečístuju, preblahoslovénnuju, slávnuju Vladýčicu nášu Bohoródicu i prisnoďívu Maríju, so vsími svjatými pomjanúvše, sámi sebé i drúh drúha, i vés živót náš Christú Bóhu predadím."
let zvolanie_sk = "Lebo tebe patrí všetka sláva, česť a poklona, Otcu i Synu, i Svätému Duchu, teraz i vždycky, i na veky vekov."
let zvolanie_csl = "Jáko podobájet Tebí vsjákaja sláva, čésť i poklonénije, Otcú, i Sýnu, i svjatómu Dúchu, nýňi i prísno i vo víki vikóv."
[== Ekténia]
generateTable((
line,
note("Ak je prítomný kňaz, alebo diakon, prednáša veľkú ekténiu. Ak nie, berieme:"),
L("Pane, zmiluj sa.", "Hóspodi, pomíluj.", note: "(12x)"),
L("Sláva, i teraz:", "Sláva, i nýňi:"),
line,
all,
D(first_sk, first_csl),
L("Pane, zmiluj sa.", "Hóspodi, pomíluj.", note: "(po každej prosbe)"),
D(last_sk, last_csl),
L("Tebe, Pane.", "Tebí, Hóspodi."),
K(zvolanie_sk, zvolanie_csl),
L("Amen.", "Amíň.")
).flatten())
}
#let piesen(idx) = {
let s = sk.K.at(str(idx))
let c = csl.K.at(str(idx))
let s_k = sk.K.at(str(idx)+"_K")
let c_k = csl.K.at(str(idx)+"_K")
let mix = s.zip(c)
let i = 1
let all = mix.slice(1).map(it => ((
if "TITLE" in it.at(0) and it.at(0).TITLE == "Troičen" {
N("Svät<NAME>, Bože náš, sláva tebe.", "Presvjatája Trojce, Bóže náš, sláva Tebí.", "")
} else {
N("Kristus slávne vstal z mŕtvych.", "Christós voskrése iz mértvych.", "")
}
),(
jObj(it.at(0)),
pText(numbering("i", mix.position(x => x == it))),
jObj(it.at(1))
)))
[=== Pieseň #idx]
generateTable((
jObj(mix.at(0).at(0)), "", jObj(mix.at(0).at(1)),
all,
(jObj(s_k), "", jObj(c_k))
).flatten())
}
#let mala_ektenia(zvolanie_sk, zvolanie_csl) = {
let last_sk = "Presvätú, prečistú, preblahoslavenú a slávnu Vládkyňu našu, Bohorodičku Máriu, vždy Pannu, i všetkých svätých spomínajúc, sami seba, druh druha i celý náš život Kristu Bohu oddajme."
let last_csl = "Presvjatúju, prečístuju, preblahoslovénnuju, slávnuju Vladýčicu nášu Bohoródicu i prisnoďívu Maríju, so vsími svjatými pomjanúvše, sámi sebé i drúh drúha, i vés živót náš Christú Bóhu predadím."
[=== Malá ekténia]
generateTable((
line,
note("Ak je prítomný kňaz, alebo diakon, prednáša malú ekténiu. Ak nie, berieme:"),
L("Pane, zmiluj sa.", "Hóspodi, pomíluj.", note: "(3x)"),
L("Sláva, i teraz: Amen.", "Sláva, i nýňi: Amíň."),
line,
D("Znova a znova modlime sa v pokoji k Pánovi.", "Páki i páki, mírom Hóspodu pomólimsja."),
L("Pane, zmiluj sa.", "Hóspodi, pomíluj."),
D("Zastaň sa a spas nás, zmiluj sa a zachráň nás, Bože, svojou milosťou.","Zastupí, spasí, pomíluj, i sochraní nás, Bóže, Tvojéju blahodátiju."),
L("Pane, zmiluj sa.", "Hóspodi, pomíluj."),
D(last_sk, last_csl),
L("Tebe, Pane.", "Tebí, Hóspodi."),
K(zvolanie_sk, zvolanie_csl),
L("Amen.", "Amíň.")
).flatten())
}
#let kanon() = {
[== Kánon]
align(center, pText("Hlas " + str(sk.K.H)))
piesen(1)
mala_ektenia("Lebo ty vládneš a tvoje je kráľovstvo a moc i sláva, Otca i Syna i Svätého Ducha, teraz i vždycky i na veky vekov.", "Jáko Tvojá deržáva, i Tvojé jésť cárstvo i síla, i sláva Otcá i Sýna i svjatáho Dúcha, nýňi i prísno i vo víki vikóv.")
piesen(3)
mala_ektenia("Lebo ty si náš Boh a my ti vzdávame slávu, Otcu i Synu, i Svätému Duchu, teraz i vždycky, i na veky vekov.", "Jako Ty jesí Boh naš, i Tebí slávu vozsylájem, Otcú, i Sýnu, i Svjatómu Dúchu, nýňi i prísno, i vo víki vikóv.")
// ypakoj
piesen(4)
mala_ektenia("Lebo ty si Boh dobrý a láskavý k ľuďom a my ti vzdávame slávu, Otcu i Synu i Svätému Duchu, teraz i vždycky i na veky vekov.", "Jako Blah i Čelovikoľúbec Boh jesí, i Tebí slávu vozsylájem, Otcú, i Sýnu, i Svjatómu Dúchu, nýňi i prísno, i vo víki vikóv.")
piesen(5)
mala_ektenia("Lebo sa posväcuje a oslavuje tvoje vznešené a veľkolepé meno, Otca i Syna i Svätého Ducha, teraz i vždycky i na veky vekov.", "Jako svjatísja i proslávisja prečestnóe i velikoľípoje ímja Tvojé, Otcá, i Sýna, i Svjatáho Dúcha, nýňi i prísno, i vo víki vikóv.")
piesen(6)
mala_ektenia("Lebo ty si Kráľ pokoja a Spasiteľ našich duší, a my ti vzdávame slávu, Otcu i Synu, i Svätému Duchu, teraz i vždycky, i na veky vekov.", "Ty bo jesí Car míra, i Spas duš nášich, i Tebí slávu vozsylájem, Otcú, i Sýnu, i Svjatómu Dúchu, nýňi i prísno, i vo víki vikóv.")
// kondak, ikos, videli sme..., stichira
piesen(7)
mala_ektenia("Nech je vždy oslavovaná a zvelebovaná vláda kráľovstva Otca i Syna i Svätého Ducha, teraz i vždycky i na veky vekov.", "Búdi deržáva Cárstvija Tvojehó blahoslovénna i preproslávlenna, Otcá, i Sýna, i Svjatáho Dúcha, nýňi i prísno, i vo víki vikóv.")
piesen(8)
mala_ektenia("Lebo sa požehnáva tvoje meno a oslavuje sa tvoje kráľovstvo, Otca i Syna, i Svätého Ducha, teraz i vždycky, i na veky vekov.", "Jako blahoslovísja ímja Tvojé, i proslávisja Cárstvo Tvojé, Otcá, i Sýna, i Svjatáho Dúcha, nýňi i prísno, i vo víki vikóv.")
// velebenie
piesen(9)
mala_ektenia("Lebo teba chvália všetky nebeské mocnosti a my ti vzdávame slávu, Otcu i Synu, i Svätému Duchu, teraz i vždycky, i na veky vekov.", "Jáko tebé chváľat vsjá síly nebésnyja, i tebí slávu vozsylájem Otcú i Sýnu i svjatómu Dúchu, nýňi i prísno i vo víki vikóv.")
}
= Utiereň svetlého týždňa
#paschalnyUvod()
#verse()
#velkaEktenia()
#kanon()
// svitilen
// chvaly
// ektenia
// prepustenie
|
|
https://github.com/HiiGHoVuTi/requin | https://raw.githubusercontent.com/HiiGHoVuTi/requin/main/math/green.typ | typst | #import "../lib.typ": *
#show heading: heading_fct
We say that $(M,times)$, where $M$ is a set, is a monoid if it satisfies 3 axioms:
- $times$ is an internal composition law, i.e. $(times) : M^2 --> M$
- $times$ is associative, i.e. $forall a,b,c in M, (a times b) times c = a times (b times c)$
- there is an identity element, i.e. $exists e in M, forall x in M, x times e = e times x = x$
$(T, times_M)$ is a _submonoid_ of $(M, times_M)$ if $T subset.eq M$, if $times_M (T^2) subset.eq T$, and if $T$ has an identity element.
#question(0)[Show that $(Sigma^*, dot)$ is a monoid, for $Sigma$ a finite non-empty alphabet and $(dot)$ concatenation.]
#correct([
We check the axioms: the identity element is $epsilon$, and indeed $u dot (v dot w) = (u dot v) dot w$.
])
#question(1)[Show that $({0,...,N}, max)$ is a monoid. Conclude whether a submonoid of a monoid $M$ must always have the same identity element as $M$.]
#correct([
We check the axioms: the identity element is $0$, and indeed $max (x, max(y, z)) = max(max(x, y), z)$.
Note that ${1,...,N}$ is a submonoid whose identity element is $1$, so a submonoid does not necessarily have the same identity element.
])
We say that $x in M$ is idempotent if $x = x^2$. From now on we fix a finite monoid $M$.
#question(2)[Show that $exists N in NN, forall x in M, x^N$ is idempotent.]
#correct([
For this we take $N = |M|!$. Since $M$ is finite, consider for $x in M$ the sequence
$ x=x^1 --> x^2 --> x^3 --> ... --> x^i --> ... --> x^j $
with $x^i = x^j$. We have $j <= |M| + 1$ and $j-i <= |M|$, so $j-i$ divides $|M|! = N$; setting $k = N/(j-i)$, and since $N >= i$, we get $x^N = x^N.x^((j-i)k) = x^(2N)$.
])
=== Green's relations
The goal of this khôlle is to show that inside any finite monoid one can find a group for the same law as that of the monoid.
We define 4 preorders on $(M,.)$ :
- $x <=_J y$ iff $exists a,b in M, a x b = y $, the infix relation.
- $x <=_L y$ iff $exists b in M, x b = y $, the prefix relation (L as in Left).
- $x <=_R y$ iff $exists a in M, a x = y $, the postfix relation (R as in Right).
- $x <=_H y$ iff $x <=_L y and x <=_R y$, the intersection of the prefix and postfix relations.
For each preorder ($forall alpha in {J,L,R,H}$) we define an equivalence relation by:
$ x equiv_alpha y "iff" x <=_alpha y and y <=_alpha x $
The _$alpha$-classes_ are the equivalence classes of $equiv_alpha$, for $alpha in {J,L,R,H}$.
#question(2)[Give the partition into $alpha$-classes of the monoid $({0,1,2}^{0,1,2}, compose )$ for every $alpha in {J,L,R,H}$]
#question(2)[Show that every L-class (respectively R-class) is contained in a J-class.]
#question(2)[Show that the $alpha$-classes are ordered among themselves, for every $alpha in {J,L,R,H}$.]
#question(3)[Show that if $X$ is a J-class, then the $L$-classes of $X$ (respectively the $R$-classes) are incomparable, i.e. if $x,y in X$ and $x <=_L y$, then $x equiv_L y$ ]
#correct([
Let $x,y in X$. Then $x equiv_J y$, so we can write $x = a y b$. Moreover we assume $x <=_L y$, so we also have $x c = y$.
Combining the two, we get $x = a y b = a x c b$. Iterating $N$ times (the $N$ of question 3), it follows that $x = a^N x (c b)^N = a^N x (c b)^(2N)$. We now peel this back (since $a x c b = x$) and obtain $x = x (c b)^(N) = y b (c b)^(N-1)$, so $y <=_L x$, hence $x equiv_L y$.
A similar proof works for $<=_R$.
])
#question(3)[Show that if $X$ is a J-class, every $L$-class of $X$ intersects every R-class of $X$]
#correct([
Let $x,y in X$. Write $x = a y b$. Then $x = (a y) b$, so $a y <=_L x$, but also $y <=_R a y$; hence by question 7 we get $x equiv_L a y equiv_R y$, so the element $a y$ lies in both classes, whose intersection is therefore non-empty.
])
#question(3)[Show that if $X$ is a maximal J-class and $H$ is an H-class contained in $X$, then the product is stable in $X$]
// TODO(Coda): correct
#question(2)[Show that if $X$ is a maximal J-class, then every H-class contained in $X$ is a group]
#correct([
We have stability under the product; the identity element and the inverses are still missing.
//TODO(Coda): continue correct
])
|
https://github.com/Myriad-Dreamin/tinymist | https://raw.githubusercontent.com/Myriad-Dreamin/tinymist/main/crates/tinymist-query/src/fixtures/references/cross_module.typ | typst | Apache License 2.0 | #import "base.typ": *
#x
-----
// path: base.typ
#let /* ident after */ x = 1; |
https://github.com/Enter-tainer/typstyle | https://raw.githubusercontent.com/Enter-tainer/typstyle/master/tests/assets/simple-paper.typ | typst | Apache License 2.0 | #let project(
title: "",
authors: (),
abstract: none,
keywords: (),
body
) = {
let song = "FZShuSong-Z01"
let hei = "FZHei-B01"
let kai = "FZKai-Z03"
let xbsong = "FZXiaoBiaoSong-B05"
let code = "DejaVu Sans Mono"
// Modify the following to change the font.
let title-font = hei
let author-font = kai
let body-font = song
let heading-font = xbsong
let caption-font = kai
let header-font = kai
let strong-font = hei
let emph-font = kai
let raw-font = code
set document(author: authors, title: title)
set page(numbering: "1", number-align: center, header: align(left)[
#set text(font: header-font)
#title
])
set heading(numbering: "1.1")
set text(font: body-font, lang: "zh", region: "cn")
set bibliography(style: "gb-7114-2015-numeric")
show heading: it => box(width: 100%)[
#v(0.50em)
#set text(font: heading-font)
#if it.numbering != none { counter(heading).display() }
#h(0.75em)
#it.body
]
show heading.where(
level: 1
): it => box(width: 100%)[
#v(0.5em)
#set align(center)
#set heading(numbering: "一")
#it
#v(0.75em)
]
// Title
align(center)[
#block(text(font: title-font, weight: 700, 1.75em, title))
#v(0.5em)
]
// Author information.
pad(
top: 0.5em,
bottom: 0.5em,
x: 2em,
grid(
columns: (1fr,) * calc.min(3, authors.len()),
gutter: 1em,
..authors.map(author => align(center, text(font: author-font, author))),
),
)
// Main body
set par(first-line-indent: 2em)
set enum(indent: 2em)
set figure(gap: 0.8cm)
show figure: it => [
#v(12pt)
#set text(font: caption-font)
#it
#par()[#text(size: 0.0em)[#h(0.0em)]]
#v(12pt)
]
show image: it => [
#it
#par()[#text(size: 0.0em)[#h(0.0em)]]
]
show table: it => [
#set text(font: body-font)
#it
]
show strong: set text(font: strong-font)
show emph: set text(font: emph-font)
show ref: set text(red)
show raw.where(block: true): block.with(
width: 100%,
fill: luma(240),
inset: 10pt,
)
show raw: set text(font: (raw-font, hei), size: 10pt)
show link: underline
show link: set text(blue)
if abstract != none [
#v(2pt)
#h(2em) *摘要:* #abstract
#if keywords != () [
*关键字:* #keywords.join(";")
]
#v(2pt)
]
body
}
#let problem-counter = counter("problem")
#problem-counter.step()
#let problem(body) = {
problem-counter.step()
set enum(numbering: "(1)")
block(
fill: rgb(241, 241, 255),
inset: 8pt,
radius: 2pt,
width: 100%,
)[*题目 #problem-counter.display().* #h(0.75em) #body]
}
#let solution(body) = {
set enum(numbering: "(1)")
block(
inset: 8pt,
width: 100%
)[*解答.* #h(0.75em) #body]
}
|
https://github.com/jgm/typst-hs | https://raw.githubusercontent.com/jgm/typst-hs/main/test/typ/compiler/let-28.typ | typst | Other | #let _ = 4
#for _ in range(2) []
// Error: 2-3 unexpected underscore
#_
// Error: 8-9 unexpected underscore
#lorem(_)
// Error: 3-4 expected expression, found underscore
#(_,)
// Error: 3-4 expected expression, found underscore
#{_}
// Error: 8-9 expected expression, found underscore
#{ 1 + _ }
|
https://github.com/fiit-community/base-typst-template | https://raw.githubusercontent.com/fiit-community/base-typst-template/main/template/assets.typ | typst | MIT License | #import "assets/mod.typ": *
|
https://github.com/hewliyang/fyp-typst | https://raw.githubusercontent.com/hewliyang/fyp-typst/main/abstract.typ | typst | #align(
center + horizon,
[
= Abstract
#v(3em)
The evaluation of synthetic speech presents unique challenges in today's rapidly evolving technological landscape. While the Mean Opinion Score (MOS) remains the predominant metric for assessing speech quality, its inherent subjectivity raises concerns about reliability. This study examines the effectiveness of supervised neural models—specifically NISQA and MOSNet—in predicting MOS scores for speech naturalness.
To facilitate this investigation, we compiled an extensive dataset from the Blizzard Challenge and Voice Conversion Challenge, incorporating MOS labels and commercial use permissions. These datasets are now publicly accessible via the HuggingFace Hub. Our experimental framework focused on reproducing and analyzing NISQA's performance, incorporating both pre-training and transfer learning approaches.
Our results indicate that supervised MOS predictors face significant challenges in out-of-domain evaluations, primarily due to training data bias. Even with expanded datasets and pre-training strategies, the models demonstrated limited generalization capabilities. These findings suggest the need for fundamental changes in how we evaluate synthetic speech.
We explore promising alternatives to traditional MOS evaluation, including SpeechLMScore—an unsupervised metric leveraging speech language models—and UTMOS, which employs ensemble learning techniques. These approaches show potential in addressing MOS limitations and providing more reliable quality assessments. Our research highlights the critical need for evaluation frameworks that can adapt to advancing speech synthesis technologies while maintaining objectivity and reliability.
],
) |
|
https://github.com/SWATEngineering/Docs | https://raw.githubusercontent.com/SWATEngineering/Docs/main/src/2_RTB/VerbaliEsterni/VerbaleEsterno_231110/signature.typ | typst | MIT License | #place(
bottom + left,
grid(
columns: (1fr),
row-gutter: 20pt,
[Luogo: #line(length: 60%)],
[Data: #line(length: 60%)],
[Firma: #line(length: 60%)]
)
) |
https://github.com/mariunaise/HDA-Thesis | https://raw.githubusercontent.com/mariunaise/HDA-Thesis/master/graphics/plots/optimal_BACH.typ | typst | #import "@preview/cetz:0.2.2": *
#let data = csv("../../data/z_distribution/z_distribution.csv")
#let data = data.map(value => value.map(v => float(v)))
#let line_style = (stroke: (paint: black, thickness: 2pt))
#let dashed = (stroke: (dash: "dashed"))
#canvas({
plot.plot(size: (8,3),
legend : "legend.south",
legend-style: (orientation: ltr, item: (spacing: 0.5)),
x-tick-step: none,
x-ticks: ((0, [0]), (100, [0])),
y-label: $cal(Q)(1, z), abs(f_"Z" (z))$,
x-label: $z$,
y-tick-step: none,
y-ticks: ((0, [0]), (0.6, [1])),
axis-style: "left",
x-min: -5,
x-max: 5,
y-min: 0,
y-max: 0.6,{
plot.add((data), style: (stroke: (paint: red, thickness: 2pt)), label: [Optimized PDF])
plot.add(((-5, 0), (0, 0), (0, 0.6), (5, 0.6)), style: line_style, label: [Quantizer])
})
})
|
|
https://github.com/Area-53-Robotics/53B-Notebook-Over-Under-2023-2024 | https://raw.githubusercontent.com/Area-53-Robotics/53B-Notebook-Over-Under-2023-2024/master/entries/pre_building/game_reaveal.typ | typst | Creative Commons Attribution Share Alike 4.0 International | #import "/templates/entries.typ": *
#import "/templates/headers.typ": *
#import "/templates/text.typ": *
#create_default_entry(
title: [First Meeting After Reveal],
date: [April 28th, 2023],
design: [],
witness: [],
content:[
#box_header(
title: [Gabriel],
color: blue.lighten(60%)
) \
#entry_text()
After the reveal was released, I instantly thought of the importance of a small drivetrain, i.e. 25-30 holes long, so that you can easily maneuver under the objects even if someone were to block the path. To add on to the drivetrain, I think it's important to be able to drive over the middle bar, so 4-inch wheels should be used this season. As for the RPM of the drivetrain, I think it should be circa 350 rpm. Also, since more of the game objects are match load-ins, it is important to quickly place and shoot from the match load area. The way I think this is possible is by using a puncher that will be ratcheted to the intake for fast cycle times. For the endgame phase of the match, I think a piston that attaches to the pole and a motor connected to wheels to move up the bar will maximize points. Another good idea could be to pick up your alliance partner and carry them, since the points are relative to the highest point. Also, the newly introduced watt limit is interesting.
#box_header(
title: [Deb],
color: purple.lighten(60%)
) \
#entry_text()
After watching the reveal, I immediately thought of a bot that would push the acorns into the back of the goal. This robot would have a puncher to do that, and it would also be able to launch the acorns to the other side of the field. I also thought about making a piston that would lift the front wheels, which would allow us to go over the three-inch pipe in the middle. Overall, my first impressions of this game are rather positive, and I think that it looks like fun.
#box_header(
title: [Jin],
color: yellow.lighten(60%)
) \
#entry_text()
Something in the video that surprised me was the hanging aspect of the game. I never thought we would need to make a robot capable of holding onto a bar. One of the biggest concerns was what would happen after the game is over and all the motors go into coast mode. My idea for this problem is to make a four-bar linkage, which I believe could be strong enough to hold the weight of the bot. We would open the clamp with motors, so when the motors die at the end of the game, we will remain hanging.
#box_header(
title: [Immad],
color: red.lighten(60%)
) \
#entry_text()
When I watched the video I was intrigued by the grabbing aspect of the game because it would lead to a lot of different methods of picking up the acorns. Some interesting ideas I had are a claw bot, which would have a lot of maneuverability, or a bot that scoops up the triballs and punches them into the goals. As a new driver, this game seems harder than last year's Spin Up since it seems to focus more on a skill-oriented driving style instead of an aggressive style.
#box_header(
title: [Juan],
color: green.lighten(60%)
) \
#entry_text()
After watching the reveal I was surprised to learn about the pole-climbing aspect of the game. A problem we may need to focus on is how to get the bot off the pole safely. We could make something that clamps onto the pole, using motors to “drive” up the pole. Overall, I believe this will be an interesting season allowing for creative and innovative ideas.
]
) |
https://github.com/IllustratedMan-code/nixconfig-wsl | https://raw.githubusercontent.com/IllustratedMan-code/nixconfig-wsl/main/README.md | markdown | # A simple flake based config for WSL
I'm currently using a Windows computer. Since I can't possibly live without NixOS, I've
created this simple config that has most things I need for daily life.
Interestingly enough, it comes with a template for `typst`, but does not come with `typst`.
Try to figure that one out, why don't ya.
|
|
https://github.com/clementjumel/resume-template | https://raw.githubusercontent.com/clementjumel/resume-template/main/README.md | markdown | # Resume Template
My personal resume's _template_, written with [typst](https://github.com/typst/typst) and initially based on the
[basic-resume template](https://typst.app/universe/package/basic-resume).
This template contains the main files needed to compile my resume, except that it is filled with fake data instead of my
own.
## Usage
To compile a resume to PDF, you need to install `typst` first (for instance with `brew install typst`), and then run
`typst compile` on a main `typst` file. For instance, run:
```shell
typst compile ./resume-english.typ
```
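
While iterating on a resume, `typst watch` can be used instead; it recompiles the PDF automatically whenever the file changes:

```shell
typst watch ./resume-english.typ
```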
|
|
https://github.com/Shuenhoy/modern-zju-thesis | https://raw.githubusercontent.com/Shuenhoy/modern-zju-thesis/master/utils/supplement.typ | typst | MIT License | #let show-set-supplement(s) = {
show heading.where(level: 1): set heading(supplement: [章])
show heading.where(level: 2): set heading(supplement: [节])
show heading.where(level: 3): set heading(supplement: [小节])
show heading.where(level: 4): set heading(supplement: [小小节])
s
} |
https://github.com/eratio08/learn-typst | https://raw.githubusercontent.com/eratio08/learn-typst/main/latex.typ | typst | #set page(margin: 1.75in)
#set par(leading: 0.55em, first-line-indent: 1.8em, justify: true)
#set text(font: "New Computer Modern")
#show raw: set text(font: "New Computer Modern Mono")
#show par: set block(spacing: 0.55em)
#show heading: set block(above: 1.4em, below: 1em)
|
|
https://github.com/piepert/grape-suite | https://raw.githubusercontent.com/piepert/grape-suite/main/src/german-dates.typ | typst | MIT License | #let semester(short: false, date) = {
let wise = (
long: "Wintersemester",
short: "WiSe"
).at(if short { "short" } else { "long" })
let sose = (
long: "Sommersemester",
short: "SoSe"
).at(if short { "short" } else { "long" })
let sem = wise
  // The winter semester spans the turn of the year: from October to December
  // it begins in the current calendar year, from January to March it began in
  // the previous one.
  let year = if date.month() < 4 {
    date.year() - 1
  } else if date.month() > 9 {
    date.year()
  } else {
    sem = sose
    date.year()
  }
[#sem ]
if sem == wise {
[#year/#(year+1)]
} else {
[#year]
}
}
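// Examples (dates are illustrative):
//   semester(datetime(year: 2024, month: 1, day: 15)) -> "Wintersemester 2023/2024"
//   semester(short: true, datetime(year: 2024, month: 5, day: 1)) -> "SoSe 2024"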
#let days = (
"Montag",
"Dienstag",
"Mittwoch",
"Donnerstag",
"Freitag",
"Samstag",
"Sonntag"
)
#let weekday(short: false, daynr) = {
let day = days.at(daynr - 1)
if short {
day = day.slice(0, 2)
}
day
} |
https://github.com/Myriad-Dreamin/typst.ts | https://raw.githubusercontent.com/Myriad-Dreamin/typst.ts/main/fuzzers/corpora/text/quote_04.typ | typst | Apache License 2.0 |
#import "/contrib/templates/std-tests/preset.typ": *
#show: test-page
// Citation-format: label or numeric
#set text(8pt)
#set quote(block: true)
#quote(attribution: <tolkien54>)[In a hole in the ground there lived a hobbit.]
#set text(0pt)
#bibliography("/assets/files/works.bib", style: "ieee")
|
https://github.com/jgm/typst-hs | https://raw.githubusercontent.com/jgm/typst-hs/main/test/typ/text/linebreak-04.typ | typst | Other | // Test hard break directly after normal break.
Hard break directly after \ normal break.
|
https://github.com/sitandr/typst-examples-book | https://raw.githubusercontent.com/sitandr/typst-examples-book/main/src/basics/states/query.md | markdown | MIT License | # Query
<div class="warning">This section is outdated. It may be still useful, but it is strongly recommended to study new context system (using the reference).</div>
> Link [there](https://typst.app/docs/reference/meta/query/)
Query allows you to get the locations of elements by a _selector_ (the same kind of selector we used in show rules).
That enables "time travel": getting information about the document from its other parts, and so on. _That is a way to violate Typst's purity._
It is one of the _darkest magics currently existing in Typst_. It gives you great power, but with great power comes great responsibility.
## Time travel
```typ
#let s = state("x", 0)
#let compute(expr) = [
#s.update(x =>
eval(expr.replace("x", str(x)))
)
New value is #s.display().
]
Value at `<here>` is
#context s.at(
query(<here>)
.first()
.location()
)
#compute("10") \
#compute("x + 3") \
*Here.* <here> \
#compute("x * 2") \
#compute("x - 5")
```
## Getting nearest chapter
```typ
#set page(header: context {
let elems = query(
selector(heading).before(here()),
here(),
)
let academy = smallcaps[
Typst Academy
]
if elems == () {
align(right, academy)
} else {
let body = elems.last().body
academy + h(1fr) + emph(body)
}
})
= Introduction
#lorem(23)
= Background
#lorem(30)
= Analysis
#lorem(15)
```
|