repo | file | language | license | content
---|---|---|---|---
https://github.com/royfrancis/quarto-typst-certificate | https://raw.githubusercontent.com/royfrancis/quarto-typst-certificate/main/README.md | markdown |

# quarto-typst-certificate
A Quarto template that uses Typst to create a PDF certificate
<img src="preview.jpg" width="400px">
## Installation
:warning: Requires quarto 1.4.
```
quarto use template royfrancis/quarto-typst-certificate
```
## Render
```
quarto render index.qmd
```
To modify parameters:
```
quarto render index.qmd --metadata participant:"<NAME>"
```
---
2024 • <NAME>
https://github.com/TOD-theses/paper-T-RACE | https://raw.githubusercontent.com/TOD-theses/paper-T-RACE/main/front-matter.typ | typst |

#import "utils.typ": *
= Acknowledgements
I want to express gratitude towards all the companions I have had so far. Fooling around with siblings, hiking towards wiggly stones, sharing meals in the kitchen with flatmates, enjoying movies together, playing games, learning new languages, doing uni projects, or being at random parties. This work would, and should, not exist without these precious moments.
#linebreak()Merci! Mulțumesc! Grazie! Gracias! Teşekkürler! جهانی سپاس! Danke! Thank you!
#v(2em)
I also want to thank <NAME> and <NAME> for their continuous support. With regular meetings, their input, and plenty of constructive feedback, they helped me to steadily progress and improve this work.
= Kurzfassung
Ethereum speichert einen Zustand ab, der mittels Transaktionen verändert wird. Die Reihenfolge, in der Transaktionen ausgeführt werden, kann einen Einfluss auf diese Zustandsänderungen haben (transaction order dependency; TOD). Man kann analysieren, ob zwei Transaktionen TOD sind, indem man diese in zwei verschiedenen Reihenfolgen ausführt und die Ergebniszustände miteinander vergleicht. Jedoch können Transaktionen, die zwischen den beiden analysierten Transaktionen ausgeführt wurden, diese Analyse beeinflussen. Dies kann eine fokussierte Analyse von zwei Transaktionen auf TOD verhindern.
In dieser Diplomarbeit entwerfen wir eine Methode, um verschiedene Reihenfolgen von Transaktionen zu simulieren und auf TOD zu analysieren. Wir verwenden Zustandsänderungen von Transaktionen, um Zustände zu berechnen, mit denen wir verschiedene Reihenfolgen von Transaktionen simulieren. Diese Berechnungen können Zustandsänderungen von dazwischenliegenden Transaktionen inkludieren, ohne diese während der Simulation ausführen zu müssen. Weiters ermöglicht es, nur jene Änderungen am Zu-#linebreak()stand vorzunehmen, die von den Transaktionen verursacht wurden, die wir analysieren. Wir verwenden diese Simulation zum Feststellen, ob Transaktionen TOD sind und ob sie Eigenschaften eines Angriffes haben.
Außerdem erläutern wir Umstände, in denen es trotz dieser Methodik zu Beeinflussungen der Analyse durch dazwischenliegende Transaktionen kommen kann.
Weiters durchsuchen wir Transaktionen, die in Ethereum ausgeführt wurden, nach Transaktionspaaren, welche potentiell TOD sind. Wir paaren Transaktionen anhand ihrer Zustandsabfragen und -änderungen. Unsere Analyse zeigt, dass nur Änderungen von Kontoständen und Kontospeichern relevant für Angriffe sind, daher verwerfen wir Paare ohne solche Änderungen. Weiters filtern wir Paare, bei welchen potentielle Störfaktoren die Simulation beeinflussen. Schließlich reduzieren wir die Anzahl an Paaren, welche ähnliche Zustandsabfragen und -änderungen haben.
Wir evaluieren unsere Methoden anhand eines existierenden Datensatzes, welcher 5.601 Angriffe aus einer Analyse von 175.552 Transaktionen enthält. Unsere Suche nach potentieller TOD markiert 5.600 Transaktionspaare als potentiell TOD. Nachdem wir diese filtern, verbleiben wir mit 115 Paaren. Wir zeigen, dass diese 115 Paare ähnlich zu 703 der gefilterten Angriffe sind. Weiters evaluieren wir unsere Simulationsmethode an allen 5.601 Angriffen und stellen bei 86% davon TOD fest und bei 81% Angriffseigenschaften. Wir analysieren die Unterschiede zwischen unseren Ergebnissen und dem Angriffsdatensatz, und führen für 60 Angriffe eine manuelle Untersuchung durch.
= Abstract
In Ethereum, the order in which two transactions are executed can influence the changes they perform on the world state. One method to analyze such transaction order dependencies (TOD) is to execute the transactions in two orders and compare their behaviors. However, when simulating a reordering of two transactions, the transactions that occurred between the two transactions can influence the analysis. This influence can prevent an isolated analysis of two transactions.
To address this issue, this thesis proposes a new method to simulate transaction orders for TOD analysis. We use the state changes of transactions to compute world states with which we simulate transaction execution in different orders. This computation removes the need to execute intermediary transactions during the simulation and allows applying only the state changes of the transactions we want to analyze. We then use our simulation method to detect whether transactions are TOD and whether they exhibit attack characteristics. We discuss cases where, despite using this method, intermediary transactions can interfere with TOD analysis.
Furthermore, we use state changes to detect transaction pairs on the blockchain that are potentially TOD. We match transactions based on the state they access and modify. By enumerating and analyzing the causes of TOD, we show that only TODs related to the storage and balance of accounts are relevant attack vectors. With this insight, we can remove matches that are irrelevant to an attack analysis. Additionally, we filter out transaction pairs where intermediary transactions may interfere with the TOD simulation. Finally, we also reduce the number of transaction pairs with similar state accesses and modifications.
For the evaluation, we use a dataset from a previous study as a ground truth, which contains 5,601 attacks obtained from analyzing 175,552 transactions. Our method to detect potential TODs finds 5,600 of the attacks. After applying the filters, only 115 of the attacks remain for further analysis. We show that these are similar to at least 703 of the removed attacks. We apply our simulation method to all 5,601 attacks and verify that 86% of them are TOD and 81% fulfill the attack characteristic used by the ground truth. We analyze the cases where our results differ from the ground truth, including a manual analysis of 60 attacks.
https://github.com/Treeniks/bachelor-thesis-isabelle-vscode | https://raw.githubusercontent.com/Treeniks/bachelor-thesis-isabelle-vscode/master/chapters/05-main-enhancements/symbol-handling.typ | typst |

#import "/utils/todo.typ": TODO
#import "/utils/isabelle.typ": *
== Symbol Handling <symbol-handling>
As described in @background:isabelle-symbols, Isabelle utilizes its own #utf8isa encoding to deal with Isabelle Symbols. It is important to distinguish between three different domains:
/ Physical: This is the contents of the file: essentially a list of bytes that need to be interpreted. Certainly, a theory file contains text, meaning the bytes represent some list of symbols. However, even then, the exact interpretation of the bytes can vary depending on the encoding used. For example, the two consecutive bytes `0xC2` and `0xA5` represent different symbols under different encodings. If the encoding is #emph(utf8), these two bytes encode the Japanese yen sign #isabelle(`¥`). If, however, the encoding is #box[_ISO-8859-1 (Western Europe)_], they are interpreted as the two symbols #isabelle(`Â¥`). The file itself does not record its intended encoding; without knowing the encoding, the meaning of a file's contents may be lost.
/ Isabelle System: This is where the language server lives. Here, an Isabelle symbol is simply an instance of an internal struct whose layout is outlined in @symbol-data-example.
/ Editor: This is where the language client lives. When opening a file in a code editor, it gets loaded into some internal structure the editor uses for its text buffers. During this loading, the editor will need to know the encoding to use, which will also affect what bytes the editor will write back to disk.
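The byte-level ambiguity in the physical domain is easy to reproduce. The following snippet (Python is used purely for illustration here; Isabelle itself is implemented in Scala) decodes the same two bytes under both encodings:

```python
# The same two bytes yield different text depending on the assumed encoding.
data = bytes([0xC2, 0xA5])

as_utf8 = data.decode("utf-8")         # one character: the yen sign
as_latin1 = data.decode("iso-8859-1")  # two characters

print(as_utf8)    # ¥
print(as_latin1)  # Â¥
```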
When using #jedit and loading a theory with the #utf8isa encoding, the bytes of the file will be interpreted as #utf8. Additionally, ASCII representations of symbols will be interpreted as their #utf8 counterparts. When writing back to disk, this conversion is done in reverse; thus, if all symbols within a theory are valid Isabelle symbols, which all have ASCII representations, a file saved with the #utf8isa encoding can be viewed as plain ASCII.
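The round-trip described above can be sketched in a few lines. This is a minimal, hypothetical illustration: the two-entry symbol table stands in for the full Isabelle symbol set, and the function names are invented, not part of any Isabelle API.

```python
# Hypothetical two-entry stand-in for the full Isabelle symbol table.
SYMBOLS = {
    r"\<Longrightarrow>": "⟹",
    r"\<And>": "⋀",
}

def decode_isa(text: str) -> str:
    """ASCII representations -> Unicode, as done when loading a file."""
    for ascii_repr, unicode_repr in SYMBOLS.items():
        text = text.replace(ascii_repr, unicode_repr)
    return text

def encode_isa(text: str) -> str:
    """Unicode -> ASCII representations, as done when saving to disk."""
    for ascii_repr, unicode_repr in SYMBOLS.items():
        text = text.replace(unicode_repr, ascii_repr)
    return text

source = r"A \<Longrightarrow> B"
assert encode_isa(decode_isa(source)) == source  # lossless round-trip
```

Because every Isabelle symbol has an ASCII representation, the conversion is lossless in both directions, which is what makes the "plain ASCII on disk" view possible.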
With #vscode, we get the additional problem of the Isabelle system not having direct access to our editor's buffer. As mentioned in @background:isabelle-vscode, Isabelle patches VSCodium to include a new #utf8isa encoding, so loading the file works virtually the same as in #jedit.
// #footnote[One particular difference between #vscode['s] and #jedit['s] implementation of the #utf8isa encoding is that the set of Isabelle symbols that #vscode understands is static. It is possible to extend this set and #jedit can deal with newly defined symbols while #vscode can not, although this is rarely a problem in practice.]
However, the language server must still obtain the contents of the file.
Recall from @didchange that the LSP specification defines multiple notifications for text document synchronization, like the #box[`textDocument/didOpen`] and #box[`textDocument/didChange`] notifications, both containing data that informs the language server about the contents of a file. We will focus on #box[`textDocument/didOpen`] for now. This notification's `params` field contains a `TextDocumentItem` instance, whose interface definition is seen in @text-document-item.
#figure(
box(width: 90%)[
```typescript
interface TextDocumentItem {
uri: DocumentUri;
languageId: string;
version: integer;
text: string;
}
```
],
caption: [`TextDocumentItem` interface definition @lsp-spec.],
kind: raw,
// placement: auto,
) <text-document-item>
The most relevant data is the `text` field, which contains the content of the entire text document that was opened. Aside from the header, which is plain ASCII, the JSON data sent between client and server is interpreted as #utf8\; thus, the `text` string is also interpreted as #utf8 content. The exact content of this string depends on the text editor. In #vscode, thanks to the custom #utf8isa encoding, the language server will receive the full #utf8 encoded content of the file (i.e. #isabelle(`⟹`) instead of #isabelle(`\<Longrightarrow>`)). However, this may not be the case for another editor. Thankfully, the Isabelle system internally deals with all types of Isabelle Symbol representations equally, so the editor is free to mix and match whichever representation is most convenient for it.
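For illustration, a #box[`textDocument/didOpen`] notification for a small theory file might look as follows. The message shape follows the LSP specification; the URI, the `languageId` value, and the file content are invented for this example.

```json
{
  "jsonrpc": "2.0",
  "method": "textDocument/didOpen",
  "params": {
    "textDocument": {
      "uri": "file:///home/user/Test.thy",
      "languageId": "isabelle",
      "version": 0,
      "text": "theory Test imports Main begin\nlemma \"A ⟹ A\" by simp\nend\n"
    }
  }
}
```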
Every code editor may handle Isabelle symbols differently. Some editors may be able to add custom encodings; others may not. For example, in the Neovim code editor, it is possible to programmatically change how certain character sequences are displayed to the user using a feature called _conceal_. #footnote[https://neovim.io/doc/user/options.html#'conceallevel'] Through this feature, Neovim can have the ASCII representation (#isabelle(`\<Longrightarrow>`)) within the file and buffer and still display the Unicode representation (#isabelle(`⟹`)) to the user without the need for a custom encoding. All in all, the language server should not make assumptions about the implementation details of Isabelle symbols in the language client.
=== Symbol Options <symbol-options>
Many messages sent from the server to the client carry content that can include Isabelle symbols: #box[`window/showMessage`] notifications asking the client to display a particular message, text edits sent for completions, text inserts sent for code actions, content sent for output and state panels, and many more.
Previously, there was a single Isabelle option called `vscode_unicode_symbols`, which was supposed to control whether the server sends Isabelle symbols in their Unicode or ASCII representation in these messages. However, this option only affected a few messages (like hover information and diagnostics). Other things, like completions, were hard-coded to always use Unicode, as that is what #vscode requires.
When viewing #vscode in its entirety, this is not a problem: if the VSCode Isabelle client expects Unicode symbols in certain scenarios and the language server is hard-coded to send them, it works for #vscode. However, this limitation becomes problematic once one moves to a different client. In the case of Neovim's _conceal_ feature, it would be desirable to have messages sent by the server use ASCII for consistency.
Another important consideration is that even if Neovim wants ASCII representations of symbols within the theory file, this may not necessarily be true for output and state panels. While the server sends many different types of content, they can generally be grouped into two categories: content that is only meant to be _displayed_, and content that is meant to be _placed_ within the theory file.
To this end, we replaced the original `vscode_unicode_symbols` option with two new options: `vscode_unicode_symbols_output` for _displayed_ content and `vscode_unicode_symbols_edits` for _placed_ content. Additionally, we made use of these new options in the respective places within the language server code base, removing the previously hard-coded values.
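A sketch of how such a split could be dispatched on the server side: the two option names are the real ones introduced above, but the symbol table, helper functions, and dispatch logic are invented for illustration and do not mirror the actual Scala code base.

```python
# Hypothetical two-entry symbol table; the real set is much larger.
SYMBOLS = {r"\<Longrightarrow>": "⟹"}

def to_unicode(text: str) -> str:
    for ascii_repr, unicode_repr in SYMBOLS.items():
        text = text.replace(ascii_repr, unicode_repr)
    return text

def to_ascii(text: str) -> str:
    for ascii_repr, unicode_repr in SYMBOLS.items():
        text = text.replace(unicode_repr, ascii_repr)
    return text

# The two real option names; values as a client like Neovim might set them.
OPTIONS = {
    "vscode_unicode_symbols_output": True,   # displayed content (panels, messages)
    "vscode_unicode_symbols_edits": False,   # placed content (completions, code actions)
}

def prepare(text: str, kind: str) -> str:
    """Pick the symbol representation based on where the content ends up."""
    use_unicode = OPTIONS[f"vscode_unicode_symbols_{kind}"]
    return to_unicode(text) if use_unicode else to_ascii(text)
```

With this split, a client can ask for Unicode in panels while keeping ASCII in everything that lands in the theory file.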
// #TODO[
// - difference between ASCII representation of Symbols and Unicode representation of Symbols
// - not always the case
// - Isabelle internally makes no difference
// - add Encoding graphic and explanation here
// - before: `vscode_unicode_symbols`
// - inconsistently used throughout the server codebase
// - badly defined which symbols this affects
// - now: two settings `vscode_unicode_symbols_output` and `vscode_unicode_symbols_edits`
// - output for all output (e.g. panels, messages)
// - edits for all edits (TextEdit objects, e.g. for autocompletion/code actions)
// ]
// #{
// import "@preview/fletcher:0.5.1" as fletcher: diagram, node, edge
//
// let h = 20mm
//
// diagram(
// spacing: (10mm, 5mm), // wide columns, narrow rows
// node-stroke: 1pt, // outline node shapes
// edge-stroke: 1pt, // make lines thicker
// mark-scale: 60%, // make arrowheads smaller
// node-corner-radius: 5pt,
// debug: true,
//
// node((0, 0), height: h, align(top + center)[Physical]),
// node((-1, 0), height: h, align(top + center)[Language Server]),
//
// node((1, 0), height: h, align(top + center)[Editor Logical], name: <elogic>),
// node((2, 0), height: h, align(top + center)[Editor Display], name: <edisplay>),
//
// node(align(top + center)[Editor], enclose: (<elogic>, <edisplay>)),
// edge((-2,0), "r,u,r", "-|>", $f$, label-side: left),
// edge((-2,0), "r,d,r", "..|>", $g$),
// node((0,-1), $F(s)$),
// node((0,+1), $G(s)$),
// node(enclose: ((0,-1), (0,+1)), stroke: teal, inset: 10pt, snap: false), // prevent edges snapping to this node
// edge((0,+1), (1,0), "..|>", corner: left),
// edge((0,-1), (1,0), "-|>", corner: right),
// node((1,0), text(white, $ plus.circle $), inset: 2pt, fill: black),
// edge("-|>"),
// )
// }
=== Symbols Request
The list of existing Isabelle symbols is not static; a user may augment this list in a #box[`$ISABELLE_HOME_USER/etc/symbols`] file~@manual-jedit[§2.2]. The issue is that previously, there was no way for a language client to get information about which symbols exist. So, even if it is possible to hard-code the _default_ set of Isabelle symbols into an Isabelle language extension, that would not be correct in light of user additions.
#vscode uses such a hard-coded list of symbols. This list is added to the custom #utf8isa encoding while building the patched VSCodium. It also only includes the default set of symbols Isabelle offers out of the box; it does not include custom user additions. As this list is hard-coded, any change in the list of symbols would also require recompiling #vscode. This is different from #jedit, where the code of the #utf8isa encoding exists within #scala and, therefore, has access to the complete list of symbols.
In order to eliminate the need for hard-coded lists of symbols for language clients, we added a #box[`PIDE/symbols_request`] request. When this request is sent, the language server responds with a list of all defined symbols. Note that, at the time of writing, this list _also_ only includes the default set without user additions in order to be in line with the set that is used by #vscode. This may be worth changing in the future, which we will discuss in @future-work.
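As a purely illustrative sketch, such an exchange could look as follows. The request method name is the one introduced above; the layout and field names of the response payload are assumptions, as the exact shape is defined by the language server.

```json
{"jsonrpc": "2.0", "id": 57, "method": "PIDE/symbols_request"}

{"jsonrpc": "2.0", "id": 57,
 "result": {
   "symbols": [
     {"symbol": "\\<Longrightarrow>", "name": "Longrightarrow", "code": 10233}
   ]
 }}
```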
=== Symbols Conversion
Another issue is that different language clients may want different symbol representations within files. While the typical way of handling symbols in Isabelle is to have symbols in their ASCII representation within files, some editors may want Unicode representations instead. In order for the client to freely choose which of the two it wants to use, it would be helpful if there were some way for it to convert the symbols from one representation into the other. Within #scala, this is easily done with the help of the internal `Symbol` module, and to pass this functionality to the language client, we added a new #box[`PIDE/symbols_convert_request`] request.
The request carries the string to convert, together with a flag indicating whether its symbols should be converted into Unicode or ASCII representations. The language server converts the symbols and sends the converted string back in its response. An example conversion request and response can be seen in @list:symbols-convert-request.
#figure(
{
import "@preview/codly:1.0.0": *
show raw: set text(size: 10pt, font: "Isabelle DejaVu Sans Mono")
show raw: it => block(width: 100%, it)
table(
columns: (1.8fr, 1fr),
stroke: none,
inset: (x: 0pt, y: 5pt),
table.header([*Request*], [*Response*]),
local(
lang-format: none,
```json
"jsonrpc": "2.0",
"id": 58,
"method": "PIDE/symbols_convert_request",
"params": {
"text": "A \<Longrightarrow> B",
"unicode": true
}
```
),
local(
number-format: none,
```json
"jsonrpc": "2.0",
"id": 58,
"result": {
"text": "A ⟹ B"
}
```
),
)
},
kind: raw,
caption: [`PIDE/symbols_convert_request` example request and response.],
// placement: auto,
) <list:symbols-convert-request>
Allowing the client to request the conversion for any string enables it to offer more flexible functionality. For example, an Isabelle language extension may allow the user to select an area of the text and only convert the selected area instead of the whole file.
Neither the `PIDE/symbols_request` nor the `PIDE/symbols_convert_request` request is currently used by #vscode. They are offered by the language server for other language clients and have already seen use there. For example, our current Neovim Isabelle client prototype supports a `SymbolsConvert` command to convert the symbols in the current buffer.
// #TODO[
// - currently client was expected to just know what symbols are available, but this is dynamic
// - now client can request a list of all symbols from server
// - gives the same list used by VSCode during compilation, meaning dynamic symbol additions still don't work (Future Work)
// ]
//
// #TODO[
// - flush_edits used to automatically convert symbols based on `vscode_unicode_symbols`
// - but now the code for it was just unused, so it was removed
// - now symbol conversion is a request
// - client can easily convert whole document to unicode with that
// ]
https://github.com/sebaseb98/clean-math-thesis | https://raw.githubusercontent.com/sebaseb98/clean-math-thesis/main/main.typ | typst | MIT License |

// global
#import "@preview/great-theorems:0.1.1": *
#import "@preview/hydra:0.5.1": hydra
//local
#import "meta.typ"
#import "customization/colors.typ": *
// ------------------- settings -------------------
#set heading(numbering: "1.1") // Heading numbering
#set enum(numbering: "(i)") // Enumerated lists
#set cite(style: meta.citation-style) // citation style
// ------------------- Math equation settings -------------------
// only labeled equations get a number
#show math.equation: it => {
if it.has("label") {
math.equation(block: true, numbering: "(1)", it)
} else {
it
}
}
#show ref: it => {
let el = it.element
if el != none and el.func() == math.equation {
link(el.location(), numbering(
"(1)",
counter(math.equation).at(el.location()).at(0) + 1
))
} else {
it
}
}
#show math.equation: box // no line breaks in inline math
#show: great-theorems-init // show rules for theorems
// ------------------- Settings for Chapter headings -------------------
#show heading.where(level: 1): set heading(supplement: [Chapter])
#show heading.where(
level: 1,
): it => {
if it.numbering != none {
block(width: 100%)[
#line(length: 100%, stroke: 0.6pt + color2)
#v(0.1cm)
#set align(left)
#set text(22pt)
#text(color2)[Chapter
#counter(heading).display(
"1:" + it.numbering
)]
#it.body
#v(-0.5cm)
#line(length: 100%, stroke: 0.6pt + color2)
]
}
else {
block(width: 100%)[
#line(length: 100%, stroke: 0.6pt + color2)
#v(0.1cm)
#set align(left)
#set text(22pt)
#it.body
#v(-0.5cm)
#line(length: 100%, stroke: 0.6pt + color2)
]
}
}
// only valid for abstract and declaration
#show heading.where(
outlined: false,
level: 2
): it => {
set align(center)
set text(18pt)
it.body
v(0.5cm, weak: true)
}
// Settings for sub-sub-sub-sections e.g. section 1.1.1.1
#show heading.where(
level: 4
): it => {
it.body
linebreak()
}
// same for level 5 headings
#show heading.where(
level: 5
): it => {
it.body
linebreak()
}
// ------------------- other settings -------------------
// Settings for Chapter in the outline
#show outline.entry.where(
level: 1
): it => {
v(14.75pt, weak: true)
strong(it)
}
// table label on top and not below the table
#show figure.where(
kind: table
): set figure.caption(position: top)
// ------------------- Cover -------------------
#v(5mm)
//logos
#grid(
columns: (1fr, 1fr),
rows: (auto),
column-gutter: 100pt,
row-gutter: 7pt,
grid.cell(
colspan: 1,
align: center,
image(meta.uni_logo_path, width: 50%),
),
grid.cell(
colspan: 1,
align: center,
image(meta.institute_logo_path, width: 50%),
),
grid.cell(
colspan: 1,
align: center,
text(font: meta.sans-font, 1.5em, weight: 700, meta.university)
),
grid.cell(
colspan: 1,
align: center,
text(font: meta.sans-font, 1.5em, weight: 700, meta.institute)
)
)
#v(4.5cm)
//title
#line(length: 100%, stroke: color1)
#align(center, text(font: meta.sans-font, 3em, weight: 700, meta.title))
#line(start: (10%,0pt), length: 80%, stroke: color1)
#v(5mm)
//author
#align(center, text(font: meta.sans-font, 1.5em, weight: 500, meta.degree + " Thesis by " + meta.author))
//study program
#align(center, text(font: meta.sans-font, 1.3em, weight: 100, "Study Programme: " + meta.program))
//date
#align(center, text(font: meta.sans-font, 1.3em, weight: 100, meta.deadline))
// supervisors
#align(center + bottom, text(font: meta.sans-font, 1.3em, weight: 100, " supervised by" + linebreak() + meta.supervisor1 + linebreak() + meta.supervisor2))
#v(-3mm)
#pagebreak()
#include "chapter/abstract.typ"
#set page(
numbering: "1",
number-align: center,
header: context {
align(center, emph(hydra(1)))
v(0.2cm)
},
) // Page numbering after cover & abstract => they have no page number
#pagebreak()
// ------------------- Tables of ... -------------------
// Table of contents
#outline(depth: 3, indent: 1em, fill: line(length: 100%, stroke: (thickness: 1pt, dash: "loosely-dotted")))
#pagebreak()
// List of figures
#outline(
title: [List of Figures],
target: figure.where(kind: image),
fill: line(length: 100%, stroke: (thickness: 1pt, dash: "loosely-dotted"))
)
#pagebreak()
// List of Tables
#outline(
title: [List of Tables],
target: figure.where(kind: table),
fill: line(length: 100%, stroke: (thickness: 1pt, dash: "loosely-dotted"))
)
#pagebreak()
// ------------------- Chapters -------------------
#include "chapter/introduction.typ"
#pagebreak()
#include "chapter/example_chapter.typ"
#pagebreak()
#include "chapter/conclusions_outlook.typ"
#pagebreak()
#include "chapter/appendix.typ"
#pagebreak()
#bibliography("References.bib")<sec:References>
#pagebreak()
#set page(
numbering: none,
header: none,
) // turn off the header for the declaration
#include "chapter/declaration.typ"
https://github.com/WawooGithub/nuist-thesis-typst | https://raw.githubusercontent.com/WawooGithub/nuist-thesis-typst/main/thesis.typ | typst | MIT License |

#import "nju-thesis/template.typ": documentclass, tablex, fig, tlt, indent
#import "nju-thesis/utils/style.typ": 字号, 字体
#import "nju-thesis/utils/indent.typ": fake-par
#import "nju-thesis/utils/invisible-heading.typ": invisible-heading
// Two-sided mode: inserts blank pages for easier printing
#let twoside = false
// #let twoside = true
#let (
doc, preface, mainmatter, mainmatter-end, appendix,
fonts-display-page, cover, decl-page, abstract, abstract-en, outline-page, list-of-figures, list-of-tables, notation, acknowledgement,
) = documentclass(
// type: "bachelor", // "bachelor" | "master" | "doctor" | "postdoc"; document type, defaults to bachelor
// degree: "academic", // "academic" | "professional"; degree type, defaults to academic
// anonymous: true, // blind-review mode
twoside: twoside, // two-sided mode: inserts blank pages for easier printing
// Custom fonts: English font first, then the Chinese font; keys are 宋体, 黑体, 楷体, 仿宋, 等宽
// fonts: (楷体: ("Times New Roman", "FZKai-Z03S")),
info: (
title: "基于 Typst 的 NUIST 本科学位论文",
title-en: "My Title in English",
grade: "20XX",
student-id: "1234567890",
author: "张三",
author-en: "<NAME>",
department: "计算机院",
department-en: "School of Computing",
major: "计算机科学与技术",
major-en: "Computer Science",
supervisor: ("李四", "教授"),
supervisor-en: "Professor My Supervisor",
// supervisor-ii: ("王五", "副教授"),
// supervisor-ii-en: "Professor My Supervisor",
submit-date: "二零二三 年 五 月 六 日",
super-title: "本科生毕业设计"
),
)
// Document settings
#show: doc
// Font display test page
// #fonts-display-page()
// Cover page
#cover()
// Front matter
#show: preface
// Table of contents
#outline-page()
// Chinese abstract
#abstract(
keywords: ("我", "就是", "测试用", "关键词")
)[ 欢迎阅读 Typst 的中文文档!Typst 是为科学写作而诞生的基于标记的排版系统。 它被设计之初就是作为一种替代品,用于替代像 LaTeX 这样的高级工具,又或者是像 Word 和 Google Docs 这样的简单工具。 我们对 Typst 的目标是构建一个功能强大的排版工具,并且让用户可以愉快地使用它。
]
// English abstract
#abstract-en(
keywords: ("Dummy", "Keywords", "Here", "It Is")
)[
English abstract
]
#pagebreak()
// Main text
#show: mainmatter
= 基本功能
== 脚注
我们可以添加一个脚注。#footnote[脚注内容]
== 列表
=== 无序列表
- 无序列表项一
- 无序列表项二
- 无序子列表项一
- 无序子列表项二
=== 有序列表
+ 有序列表项一
+ 有序列表项二
+ 有序子列表项一
+ 有序子列表项二
=== 术语列表
/ 术语一: 术语解释
/ 术语二: 术语解释
== 图表
引用@tbl:timing,引用@tbl:timing-tlt,以及@fig:nju-logo。引用图表时,表格和图片分别需要加上 `tbl:`和`fig:` 前缀才能正常显示编号。以及这里使用 `fig` 函数替代原生 `figure` 函数以支持将 `tablex` 作为表格来识别。
#fig(
tablex(
align: center + horizon,
columns: 4,
[t], [1], [2], [3],
[y], [0.3s], [0.4s], [0.8s],
),
caption: [常规表],
) <timing1>
#fig(
tlt(
columns: 4,
[t], [1], [2], [3],
[y], [0.3s], [0.4s], [0.8s],
),
caption: [三线表],
) <timing-tlt1>
#align(center, (stack(dir: ltr)[
#fig(
tablex(
align: center + horizon,
columns: 4,
[t], [1], [2], [3],
[y], [0.3s], [0.4s], [0.8s],
),
caption: [堆叠常规表],
) <timing>
][
#h(50pt)
][
#fig(
tlt(
columns: 4,
[t], [1], [2], [3],
[y], [0.3s], [0.4s], [0.8s],
),
caption: [堆叠三线表],
) <timing-tlt>
]))
#fig(
image("nju-thesis/assets/vi/nju-emblem.svg", width: 20%),
caption: [图片测试],
) <nju-logo>
== 数学公式
可以像 Markdown 一样写行内公式 $x + y$,以及带编号的行间公式:
$ phi.alt := (1 + sqrt(5)) / 2 $ <ratio>
引用数学公式需要加上 `eqt:` 前缀,则由@eqt:ratio,我们有:
$ F_n = floor(1 / sqrt(5) phi.alt^n) $
#indent 图表和公式后的段落要用 `#indent` 手动缩进。同时,我们也可以通过 `<->` 标签来标识该行间公式不需要编号
$ y = integral_1^2 x^2 dif x $ <->
而后续数学公式仍然能正常编号。
$ F_n = floor(1 / sqrt(5) phi.alt^n) $
== 参考文献
可以像这样引用参考文献:@wang2010guide 和 @kopka2004guide。
== 代码块
代码块支持语法高亮。引用时需要加上 `lst:` @lst:code
#fig(
```py
def add(x, y):
return x + y
```, caption:[代码块]) <code>
= 正文
== 正文子标题
=== 正文子子标题
正文内容
// Manual page break
#if (twoside) {
pagebreak() + " "
}
#v(字号.五号)
// Bibliography
#[
#set text(font: 字体.黑体, size: 字号.四号)
#set par(first-line-indent: 0em)
#heading(numbering: none, level: 1)[参考文献]
#fake-par
#set text(font: 字体.宋体, size: 字号.小五)
#bibliography(("bibs/ex01.bib", "bibs/ex02.bib"),
style: "gb-7714-2005-numeric",
title: none
)
]
// Acknowledgements
#acknowledgement[
感谢 NJU-LUG,感谢 NJUThesis LaTeX 模板。
]
// Manual page break
#if (twoside) {
pagebreak() + " "
}
// End-of-mainmatter marker, must not be omitted
#mainmatter-end()
// Appendix
#show: appendix
= 附录
== 附录子标题
=== 附录子子标题
附录内容,这里也可以加入图片,例如@fig:appendix-img。
#fig(
image("nju-thesis/assets/vi/nju-emblem.svg", width: 20%),
caption: [图片测试],
) <appendix-img>
https://github.com/SeniorMars/tree-sitter-typst | https://raw.githubusercontent.com/SeniorMars/tree-sitter-typst/main/examples/meta/ref.typ | typst | MIT License |

// Test references.
---
#set heading(numbering: "1.")
= Introduction <intro>
See @setup.
== Setup <setup>
As seen in @intro, we proceed.
---
// Error: 1-5 label does not exist in the document
@foo
---
= First <foo>
= Second <foo>
// Error: 1-5 label occurs multiple times in the document
@foo
https://github.com/Narcha/typst-flagtastic | https://raw.githubusercontent.com/Narcha/typst-flagtastic/main/example.typ | typst | MIT License |

#import "lib.typ": *
= Parameters and Examples
#let examples = (
"#flag(\"UA\") Some text // ISO 3166-1 alpha-2 code",
"#flag(\"UKR\") Some text // ISO 3166-1 alpha-3 code",
"#flag(\"Ukraine\") Some text // English name",
"#flag(\"UA\", shape: \"circle\") Some text",
"#flag(\"UA\", shape: \"rounded\") Some text",
"#flag(\"UA\", shape: \"rounded\", radius: 30%) Some text",
"#flag(\"UA\", height: 2em) Some text",
"#flag(\"UA\", height: 2em, baseline: 0.5em) Some text",
)
#table(
columns: (1fr, auto),
rows: 1.3cm,
inset: 1.5mm,
align: horizon,
..examples.map((example)=>{
(
raw(lang: "typ", example),
align(center, eval(example, mode: "markup", scope: (flag: flag))),
)
}).flatten(),
)
#let languages = (
("DE", "Deutsch"),
("FR", "Français"),
("GB-UKM", "English"),
("US", "English (simplified)"),
("ES", "Español"),
("NL", "Nederlands"),
("SE", "Svenska"),
("NO", "Norsk"),
)
#grid(columns: (1fr, 1fr, 1fr))[
== Rectangular
#for (code, language) in languages [
#flag(code) #language \
]
][
== Rounded
#for (code, language) in languages [
#flag(code, shape: "rounded") #language \
]
][
== Circular
#for (code, language) in languages [
#flag(code, shape: "circle") #language \
]
]
https://github.com/XcantloadX/TypstMomoTalk | https://raw.githubusercontent.com/XcantloadX/TypstMomoTalk/main/examples/introduction.typ | typst |

#import "/momotalk/momotalk.typ": *
#import "/momotalk/files.typ": *
#import "/momotalk/characters.typ" : *
#show: doc => chat(
doc,
title: "MomoTalk 演示",
author: "@XcantloadX",
)
// Introduction
#yuuka((
"欢迎使用新的 MomoTalk “编辑器”。实际上,这是一个 Typst 模板,并不是一个独立的编辑器。",
quote(attribution: "Typst 非官方中文文档", block: true)[
Typst 是为科学写作而诞生的基于标记的排版系统。
它被设计之初就是作为一种替代品,用于替代像 LaTeX 这样的高级工具,又或者是像 Word 和 Google Docs 这样的简单工具。
我们对 Typst 的目标是构建一个功能强大的排版工具,并且让用户可以愉快地使用它。
]
))
#yuuka((
[
得益于 Typst 的专业性,你可以在消息里插入任何东西,例如:
*格式化* #underline[文本,]
#text(fill: red)[彩色文本,]
#text(fill: gradient.linear(..color.map.rainbow))[甚至是渐变色文本。]
#align(center)[让文本居中,] \
#align(right)[和居右。] \
(PS:由于字体原因,粗体和中文斜体无法显示。)
],
[当然,你也可以插入图片:#image("/examples/azusa.jpg", width: 30%)],
// arguments(no_box: true)[#image("/examples/azusa.jpg", width: 30%)],
[
以及标题:
= 标题
== 小标题
=== 小小标题
],
[
以及公式:
$v := vec(x_1, x_2, x_3)$
$ 7.32 beta +
sum_(i=0)^nabla
(Q_i (a_i - epsilon)) / 2 $
],
[
以及代码块:
```c
#include <stdio.h>
int main()
{
print("Hello World!");
return 0;
}
```
]
))
// Custom student demo
#nonomi[你可以选择不同的学生发送消息]
#nonomi[当然也可以自定义学生]
#let stuName = messages.with("꒰ঌ( ᗜ`v´ᗜ )໒꒱", "/examples/azusa.jpg")
#stuName[就像这样]
// Right-side message bubble demo
#sensei[sensei 可以发送消息]
#yuuka(direction: "right")[学生也可以在右边发送消息!]
// Other BA message demos
#system[这是一条系统消息]
#system[
*一样的*,也可以放其他内容 \
#box(image("azusa.jpg"), width: 10%) \
]
#hr
#azusa_swimsuit[(你就不能换张图片吗)]
#sensei[(不能)]
#azusa_swimsuit[#image("azusa.jpg", width: 30%)]
#hr
#yuuka[下面是预设卡片展示。]
#story_card("优香")
#reply_card(
none,
([选项 1], [选项 2])
)
#reply_card(
none,
([你没得选择])
)
#reply_card(
[回复卡可以有内容描述,也可以没有选项。],
none
)
#yuuka[下面是自定义卡片展示。]
#card(
"卡片标题",
"卡片内容",
("选项1", "选项2"),
background_color: rgb("#f2fff4"),
title_color: rgb("#21c236"),
option_fore_color: rgb("#00FF00"),
option_back_color: rgb("#000")
)
#yuuka[下面是扩展消息展示。]
#yuuka[语音消息:]
#yuuka((voice(5), voice(10), voice(60)))
#sensei(voice(20))
#yuuka[语音电话:]
#yuuka((
voice_call[],
voice_call[通话结束 01:45],
))
#sensei(voice_call[对方已拒绝])
#yuuka[行动消息:]
#action[这是一条行动消息 #emoji.thumb]
#time[15:22:41]
#unsend[优香]
#unsend[优香]
#time[以下是新消息]
#yuuka[文件消息:]
#yuuka((
file("RABBIT dance.mp3", "3.2 MB"),
file("a.mp4", "4.7 GB"),
file("本月开支.xlsx", "2.7 MB", footer: [MomoTalk 电脑版])
))
#sensei(file("检讨.docx", "6.7 MB"))
#yuuka[更多消息类型正在开发中!] |
|
https://github.com/htlwienwest/da-vorlage-typst | https://raw.githubusercontent.com/htlwienwest/da-vorlage-typst/main/lib.typ | typst | MIT License | #import "lib/template.typ" as template
// Reexport of template functions
#let diplomarbeit = template.diplomarbeit
#let autor = template.autor |
https://github.com/dashuai009/dashuai009.github.io | https://raw.githubusercontent.com/dashuai009/dashuai009.github.io/main/src/content/blog/026.typ | typst | #let date = datetime(
year: 2022,
month: 5,
day: 25,
)
#metadata((
title: "核酸检测管里的红色液体是什么?为啥是红色的?",
subtitle: [核酸检测],
author: "dashuai009",
description: "喜提管长几次,刷知乎遇到这个问题,感觉之前从来没想过这个问题。液体是病毒保存液,红色是酚红。",
pubDate: date.display(),
))<frontmatter>
#import "../__template/style.typ": conf
#show: conf
Reposted from Zhihu!
#link("https://www.zhihu.com/question/532237774/answer/2489720812")[Zhihu author: 比克曼生物]
#link("https://www.zhihu.com/question/532237774/answer/2480186488")[Zhihu author: <NAME>]
The red liquid is a viral transport medium, which comes in inactivated and non-inactivated types.
The inactivated type contains a #strong[highly effective virus lysis salt];, which inactivates the virus after sampling, effectively preventing the risk of secondary infection and protecting transport and testing personnel. It is suitable for the collection, preservation and transport of samples of common viruses such as the novel coronavirus, influenza viruses and hand-foot-and-mouth disease viruses. It can be used for throat swabs, nasal swabs or tissue samples from specific sites, and the stored samples can be used for subsequent nucleic acid extraction, purification and other clinical tests.
The non-inactivated type of sampling tube protects the virus's proteins and nucleic acids, keeping the sampled virus in its original state, with better integrity and a higher detection rate. Besides nucleic acid testing, the samples can also be used for other virus research experiments.
The transport medium appears red because the solution contains phenol red, which serves as a pH indicator: it appears yellow under acidic conditions, red under neutral conditions and purple under alkaline conditions.
Diethyl pyrocarbonate
#figure(
image("026/400px-Diethyl_pyrocarbonate.svg.png"),
caption: [
Diethyl pyrocarbonate
],
)
== Postscript (2024)
This post was written in 2022. The summary contains the phrase "喜提管长几次" (roughly, "had the luck of being tube captain a few times"). Reading it back now, I spent a long while puzzling over what "tube captain" (管长) meant, and even wondered whether the sentence was garbled text.
Tube captain: during mass nucleic acid testing, people are grouped in tens. Each person first queues to receive a swab and then gets their throat swabbed in turn. In each group of ten, one person receives a tube containing the red liquid (the substance discussed above), and the swabs of all ten people go into that tube. That person is called the tube captain.
|
https://github.com/tfachmann/presentation-optimal-music-recognition-typst | https://raw.githubusercontent.com/tfachmann/presentation-optimal-music-recognition-typst/main/main.typ | typst | #import "@preview/polylux:0.3.1": *
#import "tfachmann-theme.typ": *
// rgb("#FF5D05"),
#let c_primary = rgb("#E65100");
#let c_secondary = rgb("#0078b8");
#let c_background = rgb("#212121");
#let c_background_light = rgb("A2AABC")
#let c_white = rgb("#CFD8DC");
#set page(paper: "presentation-16-9")
#set line(stroke: 1pt + c_primary)
#set raw(theme: "halycon.tmTheme")
#show raw: it => block(
fill: rgb("#2b303b"),
inset: 1em,
radius: 0.8em,
text(fill: c_background_light, it)
)
#show heading: set text(c_primary)
#show: simple-theme.with(
background: c_background,
foreground: c_white,
footer: [Optical Music Recognition - From Dataset Creation to Inference],
)
#title-slide[
= #text(c_white, "Optical Music Recognition")
From Dataset Creation to Inference
#line(length: 100%)
<NAME>\
#place(
top + right,
text(size: 20pt, [January 12th 2024])
)
#place(
top + left,
text(
c_secondary,
size: 20pt,
link("https://tfachmann.com/music-recognition/", [
https:/\/#text(weight: "bold", [tfachmann])\.com
])
)
)
]
// #centered-slide[
// = Act 1
// == Writing a Music Renderer From Scratch
// ]
#centered-slide[
= Act 1 -- 3
== Music Rendering and Dataset Creation
]
#slide[
= Act 1 - Writing a Music Renderer From Scratch
#v(1em)
#set text(size: 22pt)
#side-by-side[
#v(1em)
- Very complex and opinionated rules
- *Not* a complete positioning engine
- Render *already engraved* music
- MusicXML
- PDF (images)
#v(2em)
#uncover(2)[#h(4em)... still needs some]
][
#only(1)[
#v(2em)
#figure(
image("./images/music_recognition.svg", width: 100%),
supplement: "Fig",
)
]
#only(2)[
#figure(
image("./images/hierarchy_drawing.svg", width: 99%),
supplement: "Fig",
)
]
]
]
//
// #centered-slide[
// = Act 2
// == Filetype Agnostic Music Rendering
// ]
#slide[
= Act 2 - Filetype Agnostic Music Rendering
#set text(size: 22pt)
#v(2.5em)
- Challenge: Output SVGs, PNGs, Segmentation Masks, Bounding Boxes, ...
- --> Rendering engine #text(c_primary, [independent]) of filetypes
#place(bottom + left, [
#figure(
image("./images/img.png", width: 32%),
supplement: "Fig",
)
])
#place(bottom + center, [
#figure(
image("./images/seg.png", width: 32%),
supplement: "Fig",
)
])
#place(bottom + right, [
#figure(
image("./images/bbox.png", width: 32%),
supplement: "Fig",
)
])
]
// #centered-slide[
// = Act 3
// == Creating Simple Datasets for OMR
// ]
#slide[
= Act 3 - Creating Simple Datasets for OMR
#set text(size: 22pt)
#v(2.5em)
#place(right, [#align(left,
[
- Samples have to be IID
- Provide intuitive API
- Create ground truths
])
])
#place(bottom + right, [
#figure(
image("./images/data_v03_overview.png", width: 32%),
supplement: "Fig",
)
])
#place(bottom + left, [
#set text(size: 14pt)
```rust
// initialize a RandomConfig
let random_config = RandomConfig::builder()
.octaves(vec![4, 5])
.note_types(vec![NoteType::Quarter])
.stems(vec![Stem::Up, Stem::Down])
.accidentals_maybe_of(vec![Accidental::Flat, Accidental::Sharp]);
// create 500 samples and store them to disk
(0..500).map(|_| {
let score = Score::builder()
.add_note(Note::note().random(&mut rng, &random_config)))
.add_note(Note::note().random(&mut rng, &random_config)));
let model = RenderModel::from_score(score, MAX_WIDTH);
model.bitmap();
model.segmentation_mask(&element_classes);
model.bounding_boxes(&element_classes);
// store to disk...
}
```
])
]
#centered-slide[
= Act 4
== Three Machine Learning Approaches
]
#slide[
= Act 4 - Three Machine Learning Approaches
#v(2.5em)
#set text(size: 20.0pt)
#align(center, [
#side-by-side[
1. Semantic segmentation
#figure(
image("./images/seg.png", width: 100%),
supplement: "Fig",
)
][
2. Simple regression
// #v(1.3em)
#figure(
image("./images/img.png", width: 100%),
supplement: "Fig",
)
][
3. Bounding-box regression
#figure(
image("./images/bbox.png", width: 100%),
supplement: "Fig",
)
]
])
]
#slide[
= Act 4 - Semantic Segmentation
#v(1em)
#set text(size: 21pt)
#align(center, [
#side-by-side[
Same distribution as training
#figure(
image("./images/seg_v04.png", width: 90%),
supplement: "Fig",
)
][
Transfer learning -- different dist.
#figure(
image("./images/seg_v04_transfer_01.png", width: 90%),
supplement: "Fig",
)
]
])
]
#slide[
= Act 4 - Simple Regression
#set text(size: 21pt)
#v(1em)
#side-by-side[
#v(2.5em)
- Predict data representation directly
#set text(size: 16pt)
```
Ground Truth: [0 1 0 1 0 0 0 0 0 0 0 0 ...]
Prediction: [0 1 0 0 1 0 0 0 0 0 0 0 ...]
```
#set text(size: 21pt)
- hard to #text(c_primary)[learn]
- hard to #text(c_primary)[generalize]
- Got stuck in #text(c_primary)[local minima] :(
- Still promising in real example
][
#v(1em)
#figure(
image("./images/reg_v04.png", width: 99%),
supplement: "Fig",
)
]
]
#slide[
= Act 4 - Bounding-box Regression
#v(1em)
#set text(size: 21pt)
#align(center, [
#side-by-side[
DummyOMRv01
#figure(
image("./images/data_v04_overview.png", width: 69%),
supplement: "Fig",
)
][
DummyOMRv02
#figure(
image("./images/data_v09_overview.png", width: 60%),
supplement: "Fig",
)
]
])
]
#slide[
= Act 4 - Bounding-box Regression
#v(1em)
#set text(size: 21pt)
#grid(
columns: (1fr, 1fr, 1fr),
rows: (auto, auto),
row-gutter: 0.5em,
column-gutter: -18.5em,
[#h(11.5em) v01],
[#h(10em) v01 + Aug],
[#h(10em) v02 + Aug],
figure(
image("./images/bbox_v05.png", width: 19%),
supplement: "Fig",
),
figure(
image("./images/bbox_v05_aug.png", width: 19.5%),
supplement: "Fig",
),
figure(
image("./images/bbox_v08_p.png", width: 20%),
supplement: "Fig",
),
)
]
#slide[
= Act 4 - Bounding-box Regression
#v(1em)
#set text(size: 21pt)
#grid(
columns: (1fr, 1fr, 1fr),
rows: (auto, auto),
row-gutter: 0.5em,
column-gutter: -18.5em,
[#h(11.5em) v01],
[#h(10em) v01 + Aug],
[#h(10em) v02 + Aug],
figure(
image("./images/bbox_v05_transfer_01.png", width: 19%),
supplement: "Fig",
),
figure(
image("./images/bbox_v05_aug_transfer_01.png", width: 19%),
supplement: "Fig",
),
figure(
image("./images/bbox_v08_transfer_01.png", width: 19.6%),
supplement: "Fig",
),
)
]
#slide[
= Act 4 - Bounding-box Regression
#v(1em)
#set text(size: 21pt)
#grid(
columns: (1fr, 1fr, 1fr),
rows: (auto, auto),
row-gutter: 0.5em,
column-gutter: -15.5em,
[#h(10.5em) v01],
[#h(9em) v01 + Aug],
[#h(9em) v02 + Aug],
figure(
image("./images/bbox_v05_transfer_02.png", width: 29.2%),
supplement: "Fig",
),
figure(
image("./images/bbox_v05_aug_transfer_02.png", width: 29%),
supplement: "Fig",
),
figure(
image("./images/bbox_v08_transfer_02.png", width: 29%),
supplement: "Fig",
),
)
]
#slide[
= Act 4 - Bounding-box Regression
#v(1em)
#set text(size: 21pt)
#side-by-side[
- Data augmentation is very important
- Transfer learning possible
- Bboxes good compromise between
#h(3em) Generalizable \<--> Parseable
#v(1em)
=== Next Steps
- Combine networks
- Use state-of-the art End2end detection
- Phrase as seq2seq problem
][
#figure(
image("./images/bbox_v08.png", width: 68%),
supplement: "Fig",
)
]
]
#centered-slide[
= Thank you for listening!
#v(2em)
Read the whole story at:\
#text(c_secondary, [
#link(
"https://tfachmann.com/music-recognition/",
[
https:/\/#text(weight: "bold", [tfachmann])\.com/music-recognition
])
])
]
|
|
https://github.com/arambaroudi/tp4 | https://raw.githubusercontent.com/arambaroudi/tp4/main/Lab-4%20(1).typ | typst | #import "Class.typ": *
#show: ieee.with(
title: [#text(smallcaps("Lab #4: ROS2 using RCLPY in Julia"))],
/*
abstract: [
#lorem(10).
],
*/
authors:
(
(
name: "<NAME>",
department: [Senior-lecturer, Dept. of EE],
organization: [ISET Bizerte --- Tunisia],
profile: "a-mhamdi",
),
/*
(
name: "<NAME>",
department: [Dept. of EE],
organization: [ISET Bizerte --- Tunisia],
profile: "abc",
),
(
name: "<NAME>",
department: [Dept. of EE],
organization: [ISET Bizerte --- Tunisia],
profile: "abc",
),
(
name: "<NAME>",
department: [Dept. of EE],
organization: [ISET Bizerte --- Tunisia],
profile: "abc",
)
*/
)
// index-terms: (""),
// bibliography-file: "Biblio.bib",
)
You are required to carry out this lab using the REPL as in @fig:repl.
#figure(
image("Images/REPL.png", width: 100%, fit: "contain"),
caption: "Julia REPL"
) <fig:repl>
#exo[Minimal Publisher/Subscriber Setup][The combination of Julia and rclpy opens up opportunities for developing efficient and performant robotics applications with the benefits of ROS2's ecosystem.]
#test[Make sure to read the instructions thoroughly, follow each step precisely, and ask for clarification if needed.]
We begin by sourcing our ROS2 installation as follows:
```zsh
source /opt/ros/humble/setup.zsh
```
#solution[Always start by sourcing ROS2 installation in any newly opened terminal.]
Open a _tmux_ session and write the instructions provided at your Julia REPL.
#let publisher=read("../Codes/ros2/publisher.jl")
#let subscriber=read("../Codes/ros2/subscriber.jl")
#raw(publisher, lang: "julia")
In a newly opened terminal, we need to set up a subscriber that listens to the messages being broadcast by our previous publisher#footnote[Remember to source the ROS2 installation before using it with Julia].
#raw(subscriber, lang: "julia")
The graphical tool *rqt_graph* in @fig:rqt_graph displays the flow of data between our nodes, #emph[my_publisher] and #emph[my_subscriber], through the topic we defined, _infodev_. //It can be launched using the following commands:
```zsh
source /opt/ros/humble/setup.zsh
rqt_graph
```
#figure(
image("Images/rqt_graph.png", width: 100%),
caption: "rqt_graph",
) <fig:rqt_graph>
@fig:pub-sub depicts the publication and reception of the message _"Hello, ROS2 from Julia!"_ in a terminal. The left part of the terminal showcases the message being published, while the right part demonstrates how the message is being received and heard.
#figure(
image("Images/pub-sub.png", width: 100%),
caption: "Minimal publisher/subscriber in ROS2",
) <fig:pub-sub>
@fig:topic-list shows the currently active topics, along with their corresponding interfaces.
/*
```zsh
source /opt/ros/humble/setup.zsh
ros2 topic list -t
```
*/
#figure(
image("Images/topic-list.png", width: 100%),
caption: "List of topics",
) <fig:topic-list>
//#test[Some test]
|
|
https://github.com/bionick7/ME46060_Optimization | https://raw.githubusercontent.com/bionick7/ME46060_Optimization/main/report/report.typ | typst | #show figure.where(kind: table): set figure.caption(position: top)
#let vv(x) = $arrow(#x)$
#set math.vec(delim: "[")
#set math.mat(delim: "[")
#set heading(numbering: "1.1")
// TODO: title
#set align(center)
#text(size: 32pt)[
Optimal placement of reaction control thrusters conforming to a spacecraft's surface
]
#set align(left)
#pagebreak()
#set page(numbering: "1/1", number-align: right, header: grid(columns: (100%-3em, auto), [<NAME>], [5223679]))
#counter(page).update(1)
#set text(size: 11pt)
#outline(
title: [Table of contents],
indent: auto
)
#heading(level: 1, outlined: false)[Table of contents]
#figure(table(columns: 2, stroke: none, align: left,
$vv(M)$, [Moments exserted by the thruster system],
$vv(F)$, [Forces exserted by the thruster system],
$vv(T)$, [Thrusts of each thruster (can be negative)],
$vv(r)_i$, [Position of thruster $i$],
$vv(d)_i$, [Normalized direction of thruster $i$],
$vv(u)$, [x-position of the thrusters on the 2d map],
$vv(v)$, [y-position of the thrusters on the 2d map],
$vv(a)$, [Angle of the thrusters on the surface],
$A$, [Thruster matrix],
$vv(x)$, [Design variables],
$f(vv(x))$, [Objective function],
$g(vv(x))$, [Inequality constraint],
$vv(phi.alt)(x)$, [2D Path representing the surface],
))
= Introduction
In order to control attitude and translation, spacecraft can make use of reaction control thrusters. Reaction control thrusters are small thrusters that expel gas to impart a controlled force in a specific direction at a specific point of the vehicle. Ideally, a combination of a sufficient number of these thrusters at specific points of the craft can provide any combination of torque and force up to a certain magnitude. Some thruster arrangements are more efficient at converting their thrust into certain torques or moments than others.
The purpose of this report is to find the optimal arrangement of thrusters on a certain spacecraft. The spacecraft in question is shown in @fig:spacecraft-render. This is a spacecraft design I came up with casually some years ago. It is chosen as it provides a geometry which is not too complex, but where the optimal solution is not immediately obvious. The geometry also contains holes and some sharp angles, which might make the optimization more interesting. The thruster design used is shown in @fig:thruster-design. The reason for this design is the bi-directional thrust, which allows the thrust to be negative in the mathematical model. This significantly simplifies the calculations. The challenge of the optimization lies mostly in the restrictions on thruster placement.
The thrusters need to lie on the surface of the vehicle and the thrust direction needs to be parallel to the surface.
A more rigorous formulation of the optimization problem and the model is given in @ch:problem-formulation. Some of the mathematical properties of the objective function, like bounds, continuity and noise, are discussed in @ch:initial-problem-investigation. The optimization of a simplified 2D problem is discussed in @ch:simp-problem, while the actual optimization is discussed in @ch:true-problem. In @ch:results, the results of the optimization are commented on and their sensitivity is analyzed. The final conclusion is presented in @ch:conclusion.
#grid(columns: (30%, auto), align: bottom,
[#figure(image("thruster.png", width: 100%), caption: [The concept of the thruster design, thrust can be applied in the positive or negative x-direction])<fig:thruster-design>],
[#figure(image("spacecraft_render.png", width: 100%), caption: [The spacecraft in question. Only the surface where thruster placement is allowable is visible])<fig:spacecraft-render>]
)
= Problem formulation <ch:problem-formulation>
The moment ($vv(M)_i$) and force ($vv(F)_i$) a single thruster imparts on the spacecraft can be written as:
$
vv(M)_i &= T_i (vv(r)_i times vv(d)_i) \
vv(F)_i &= T_i vv(d)_i
$
where $vv(r)_i$ is the position of the thruster, $vv(d)_i$ is the direction and $T_i$ the thrust that thruster exerts.
The final forces and moments are the sum of the forces and moments from the individual thrusters.
$
vec(vv(M), vv(F)) &= mat(
vv(r)_1 times vv(d)_1, dots.c, vv(r)_n times vv(d)_n;
vv(d)_1, dots.c, vv(d)_n
) vec(T_1, dots.v, T_n) \
&= A vv(T)
$
If $n = 6$, $A$ is a square matrix and, if $A$ is invertible, the thrust of the thrusters can be calculated as
$
vv(T) = A^(-1) vec(vv(M), vv(F))
$
If $A$ is not invertible, there are forces/moments that the design cannot provide.
The goal of this project is to find a combination of thrusters on a specific spacecraft that most 'efficiently' provides the thrusts for certain input forces/torques, i.e. where the average magnitude of $vv(T)$ is minimal.
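As an illustrative sketch of this model, the snippet below assembles $A$ for a hypothetical layout of six thrusters (three mirrored pairs on the coordinate axes; these are placeholder values, not the actual spacecraft geometry) and solves for the thrusts realizing a requested wrench:

```python
import numpy as np

# Hypothetical thruster layout: three mirrored pairs on the coordinate
# axes. The real r_i and d_i come from the spacecraft surface; these
# values are placeholders for illustration only.
r = np.array([[1, 0, 0], [-1, 0, 0], [0, 1, 0],
              [0, -1, 0], [0, 0, 1], [0, 0, -1]], dtype=float)
d = np.array([[0, 1, 0], [0, 1, 0], [0, 0, 1],
              [0, 0, 1], [1, 0, 0], [1, 0, 0]], dtype=float)

def thruster_matrix(r, d):
    """Stack each thruster's moment contribution r_i x d_i on top of
    its force contribution d_i, giving the 6x6 matrix A."""
    return np.vstack([np.cross(r, d).T, d.T])

A = thruster_matrix(r, d)

# For an invertible A, the thrusts realizing a requested wrench [M; F]
# follow from T = A^-1 [M; F].
wrench = np.array([0.1, 0.0, 0.0, 0.0, 0.0, 1.0])
T = np.linalg.solve(A, wrench)
```

With this particular layout, $A$ is invertible and each axis pair decouples into a small 2x2 system.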
== Simplifications
Several assumptions are made in the problem statement; they are stated below.
- The geometry of the spacecraft does not influence the thrust of the thrusters. Thrusters are considered to always give constant thrust no matter where they are placed
- Thrusters can be placed anywhere on the surface. Some of the surface has been omitted from the model to indicate places where no placement is possible. The remaining surface is considered equally valid everywhere. In reality, the placement of some thrusters might restrict the valid placement of others.
- All forces and torques are weighted equally. For the purpose of this optimization, one unit of force has the same utility as one unit of torque. Since all points in this problem lie within $[-1, 1]^3$, force and torque have similar scales. The importance of different forces/moments can easily be weighted by scaling the rows of the matrix $A$ to account for e.g. the mass moment of inertia.
== Optimization problem
// (also present mathematical problem statement)
The initial attempt to represent the 'average magnitude of $vv(T)$' mathematically was to solve $vv(T)$ for multiple predefined values of $[vv(M), vv(F)]^T$ and to take the (smooth) maximum absolute value across all solutions as the value to minimize. However, this approach had multiple problems:
- Only a finite subset of all possible inputs can ever be represented
- The maximum thrust is considered, where in practice the average thrust is more important as it relates to fuel consumption
- The function evaluation involved solving multiple 6x6 systems of equations, which is rather slow.
- There are many cases where $A$ is not invertible.
Instead, I settled on minimizing $abs(det(A^(-1)))$, since the determinant is representative of how much a vector is scaled. Minimizing $abs(det(A^(-1)))$ is equivalent to maximizing $abs(det(A))$, since $det(A^(-1)) = det(A)^(-1)$. To achieve a smoother version of the absolute value, the final objective function is
$
f(x) = -sqrt(det(A)^2 + 0.1^2) \
A = mat(
vv(r)_1 times vv(d)_1, dots.c, vv(r)_6 times vv(d)_6;
vv(d)_1, dots.c, vv(d)_6
)
$
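As a sketch, this objective is a direct one-line transcription of the formula (written here with NumPy; the matrix $A$ is assumed to be assembled elsewhere):

```python
import numpy as np

def objective(A):
    """Smoothed negative |det(A)|: minimizing this maximizes |det(A)|,
    and the 0.1 offset keeps the square root smooth where det(A) = 0."""
    return -np.sqrt(np.linalg.det(A) ** 2 + 0.1 ** 2)
```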
== Modelling aspects
// (incl. discussion of simplification choices)
Since the matrix $A$ is constructed from the vectors $vv(r)_1 ... vv(r)_6$ and $vv(d)_1 ... vv(d)_6$, it is tempting to model the problem as a 36-dimensional problem with equality constraints that force $vv(r)_i$ to conform to the surface of the vehicle and $vv(d)_i$ to be normalized and tangent to the surface (i.e. perpendicular to the surface normal). However, there are two ways to reduce the dimensionality of the problem while simultaneously reducing the number of constraints.
Firstly, since each thruster has to be confined to the surface, the position can be defined by a 2d vector, which has the components $u_i$ and $v_i$. To retrieve the 3d position from a 2d vector, an image as shown in @fig:position-map can be used. Each point on the image represents a vector on the surface of the object. This is a technique common in computer graphics, called UV-mapping /*TODO: source*/. The red, green and blue components of the color in the image correspond to the x, y and z components in 3d space.
A similar mapping process can be done with the surface normal vector. This map is split up into the positive and negative components of the normal vector (@fig:pos-normal-map and @fig:neg-normal-map), because of the limitations of the images and the tools to create them. Subtracting @fig:neg-normal-map from @fig:pos-normal-map gives the true normal vector. Since the direction is orthogonal to the surface normal, its rotation can now be expressed by a single variable, called $a_i$. The direction is retrieved by building an orthonormal basis around the normal vector via the Gram-Schmidt process, where $pi dot a_i$ defines a rotation in the plane orthogonal to the normal vector.
These two simplifications reduce the problem from 6 variables per thruster to 3, yielding 18 variables in total. In addition, the large number of complicated constraints has been reduced to a single constraint: the 2d-coordinates of all thrusters must fall within the area for which a surface point is defined. For this, a signed distance field is defined, as seen in @fig:sdf-map. The signed distance field gives the distance of each point to the surface /*Source*/, with points within the object being negative. Therefore the signed distance field can be used directly as a constraint for a single thruster. The final inequality constraint is the maximum sdf over all thrusters.
$
g(vv(u), vv(v), vv(a)) = limits("Max")_i ("sdf"(u_i, v_i)) <= 0
$
$vv(u)$, $vv(v)$ and $vv(a)$ can take any value, but repeat outside of the (0, 1) domain.
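A minimal sketch of this machinery is given below: bilinear sampling of a map with wrap-around outside (0, 1), and the constraint as the maximum SDF value over all thrusters. The function names are illustrative and not taken from the actual implementation:

```python
import numpy as np

def sample_bilinear(img, u, v):
    """Sample a (H, W) map at texture coordinates (u, v), wrapping
    around outside [0, 1) as the design variables do, with bilinear
    interpolation between the four nearest pixels."""
    h, w = img.shape
    x, y = (u % 1.0) * (w - 1), (v % 1.0) * (h - 1)
    x0, y0 = int(x), int(y)
    x1, y1 = min(x0 + 1, w - 1), min(y0 + 1, h - 1)
    fx, fy = x - x0, y - y0
    top = (1 - fx) * img[y0, x0] + fx * img[y0, x1]
    bot = (1 - fx) * img[y1, x0] + fx * img[y1, x1]
    return (1 - fy) * top + fy * bot

def constraint(sdf_map, us, vs):
    """g = max_i sdf(u_i, v_i); a placement is feasible when g <= 0."""
    return max(sample_bilinear(sdf_map, u, v) for u, v in zip(us, vs))
```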
#let grid-width = 90%
#grid(columns: 2, row-gutter: 1em,
[#figure(image("../models/textures/ih/positions.png", width: grid-width), caption: [Position map. Image is remapped from (0, 1) to (-1, 1)])<fig:position-map>],
[#figure(
box(fill: black, inset:0pt, image("../models/textures/ih/sdf.png", width: grid-width)),
caption: [Inverse SDF map. Image is remapped from (0, 1) to (1, -1) before use])<fig:sdf-map>],
[#figure(image("../models/textures/ih/normals_pos.png", width: grid-width), caption: [Positive normal map])<fig:pos-normal-map>],
[#figure(image("../models/textures/ih/normals_neg.png", width: grid-width), caption: [Negative normal map])<fig:neg-normal-map>],
)
Lastly, the symmetry of the spacecraft can be exploited to place pairs of thrusters instead of individual thrusters and maintain the symmetrical design. This has the risk of eliminating potential optimal solutions that are either non-symmetrical or have thrusters at the boundary, but reduces the number of design variables to 9. This is done by representing the symmetry of the 3d model in the symmetry of the UV-map and setting the state of the last 3 thrusters as written below.
$
u_(4,5,6) &= 1 - u_(1,2,3) \
v_(4,5,6) &= v_(1,2,3) \
a_(4,5,6) &= -a_(1,2,3) \
$
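A sketch of this symmetry expansion, assuming the reduced design vector is laid out as $(u_1, u_2, u_3, v_1, v_2, v_3, a_1, a_2, a_3)$ (the ordering in the actual implementation may differ):

```python
import numpy as np

def expand_symmetric(x9):
    """Expand the 9 design variables into all six thrusters, assuming
    the layout [u1..u3, v1..v3, a1..a3]; the mirrored thrusters use the
    UV-map symmetry u -> 1 - u, v -> v, a -> -a."""
    u, v, a = np.asarray(x9, dtype=float).reshape(3, 3)
    return (np.concatenate([u, 1.0 - u]),
            np.concatenate([v, v]),
            np.concatenate([a, -a]))
```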
= Initial problem investigation <ch:initial-problem-investigation>
There is no reason to assume that the objective function is either monotonic or convex, and it will become clear in the simplified problem that it is neither. It is also clear from @fig:position-map that the domain is not convex either.
== Bounds
The optimization function is
$
f(vv(x)) = -sqrt(det(A)^2 + 0.1^2)
$
Its upper bound occurs at $det(A) = 0$, so $f(vv(x)) <= -0.1$. To find the lower bound, the bounds of $abs(det(A))$ need to be known. Hadamard's inequality states that $abs(det(A)) <= product_(i=1)^n norm(vv(w)_i)$, where $vv(w)_i$ are the column vectors of $A$ @hadamard. In this case,
$
vv(w)_i &= vec(vv(r)_i times vv(d)_i, vv(d)_i) \
<=> norm(vv(w)_i) &= sqrt(norm(vv(r)_i times vv(d)_i)^2 + norm(vv(d)_i)^2) \
&<= sqrt(norm(vv(r)_i)^2 + 1) \
$
The maximum value of $norm(vv(r)_i)$ across the whole surface is around 1.087. Therefore:
$
abs(det(A)) <= (1.087^2 + 1)^3 tilde.equiv 10.383 \
-10.383 <= f(vv(x)) <= -0.1
$
This gives a lower bound for the optimum.
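Hadamard's inequality itself can be spot-checked numerically; the helper below is only a sanity check of the bound, not part of the optimization:

```python
import numpy as np

def hadamard_holds(A, tol=1e-9):
    """Hadamard's inequality: |det(A)| never exceeds the product of
    the column norms. This is the inequality behind the bound above."""
    col_norm_product = np.prod(np.linalg.norm(A, axis=0))
    return abs(np.linalg.det(A)) <= col_norm_product * (1 + tol)
```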
== Smoothness & Continuity
The determinant varies smoothly with respect to the components of the matrix, so $f(vv(x))$ is smooth with respect to the positions and directions of the thrusters $vv(r)_i$ and $vv(d)_i$. The direction also varies smoothly with the angle $a_i$. Thus the question of smoothness and continuity comes down to the smoothness of the position/normal maps. The position map is continuous because of the way it is generated: adjacent, valid points on the 2d map are also adjacent in 3d space. Since the surface normal is related to the gradient of the surface, the normal map represents the derivatives of the position map. Therefore the normal map is discontinuous if and only if the position map is non-smooth. This corresponds to sharp angles in the geometry of the mesh. Even though the shape is mostly smooth, there are some sharp corners in the mesh and some very tight bevels (only a few pixels wide), which are represented as very steep gradients in the normal map. Therefore there are some discontinuities or near-discontinuities in the objective function.
== Noise and precision
Storing data in images comes with two fundamental restrictions in resolution. Both the resolution of the data itself and the spatial resolution are limited by the image format. The position and normal maps are 2048$times$2048 pixel maps with a color depth of 16 bits per channel. The image is sampled by linearly interpolating between the closest pixels and can be considered, on a small scale, to consist of facets of size $5 dot 10^(-4)$. This can influence finite difference calculations, especially around sharp corners. The 16 bit color depth makes the normals accurate to the order of $~ 1.5 dot 10^(-5)$ and positions accurate to the order of $~ 3 dot 10^(-5)$. This is sufficient for the problem, and a much less dominant restriction than the image resolution.
I could not get an SDF image with 16 bits per channel color depth. The SDF is therefore also limited to 1024$times$1024 resolution, as an SDF that decreases by less than one unit per pixel can lead to problems. This means that the constraint has a significantly lower resolution than the optimization function. Because of the type of optimizer chosen and the way the constraint is implemented, however, the noise in the constraint is much less relevant.
// TODO: Smoothness drives optimization conditions
= Initial optimization on simplified problem <ch:simp-problem>
== Simplified problem model
Before solving the full problem, a simplified version of the problem is investigated in order to gauge the complexity and nature of the optimization function. An easy way to do this is to look at the 2-dimensional equivalent of the original problem.
In the 2-dimensional case, there are 2 degrees of translational motion and a single degree of rotational motion. Therefore only 3 thrusters are necessary. Instead of defining the position of the thrusters as a 2d point that maps to a 3d surface, it can be defined as a single value that maps to a 2d position by evaluating a path representing the surface. The constraint that the thrust direction needs to be parallel to the surface makes it possible to obtain the direction directly from the position, as the normalized derivative of the path. This reduces the number of variables in the 2D case to only 3. Similarly to the actual problem, the use of symmetry allows the problem to be reduced to 2 variables by assuming that $x_3 = 1 - x_1$.
The path $vv(phi.alt)(x)$ is represented by interpolating between 20 points with a piecewise cubic Hermite spline, as seen in @fig:path-2d. This way, some of the flat surfaces and sharp corners of the shape are preserved while still offering smooth interpolation in other places. The path starts at (0.5, 0), reaches (0.5, 1) at $x = 0.5$ and returns to (0.5, 0) at $x = 1$. It is symmetric about $x = 0.5$.
$
vv(r)_(i, "2D") = vv(phi.alt)(x_i) \
vv(d)_(i, "2D") = (dif vv(phi.alt)(x_i)) / (dif x_i) dot norm((dif vv(phi.alt)(x_i)) / (dif x_i))^(-1)
$
$
A_"2D" = mat(
mat(delim: "|", vv(r)_(1, "2D"), vv(d)_(1, "2D")), mat(delim: "|", vv(r)_(2, "2D"), vv(d)_(2, "2D")), mat(delim: "|", vv(r)_(3, "2D"), vv(d)_(3, "2D"));
vv(d)_(1, "2D"), vv(d)_(2, "2D"), vv(d)_(3, "2D")
)
$
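The construction of $A_"2D"$ can be sketched as follows. For brevity, a piecewise-linear path through a toy hexagonal outline stands in for the Hermite spline and the real cross-section, and the tangent is taken by central finite differences:

```python
import numpy as np

def path(x, pts):
    """Piecewise-linear stand-in for the Hermite-interpolated path
    phi(x); pts are control points, x in [0, 1]."""
    t = np.clip(x, 0.0, 1.0) * (len(pts) - 1)
    i = min(int(t), len(pts) - 2)
    return (1 - (t - i)) * pts[i] + (t - i) * pts[i + 1]

def thruster_matrix_2d(xs, pts, h=1e-4):
    """Assemble the 3x3 matrix A_2D: the 2D cross product r x d gives
    the moment row, the tangent d the two force rows."""
    cols = []
    for x in xs:
        r = path(x, pts)
        d = path(x + h, pts) - path(x - h, pts)  # tangent, central diff.
        d = d / np.linalg.norm(d)
        cols.append([r[0] * d[1] - r[1] * d[0], d[0], d[1]])
    return np.array(cols).T

# A hexagon-like closed outline as a toy stand-in for the 2D hull.
pts = np.array([[0.5, 0.0], [1.0, 0.25], [1.0, 0.75], [0.5, 1.0],
                [0.0, 0.75], [0.0, 0.25], [0.5, 0.0]])
A2 = thruster_matrix_2d([0.15, 0.5, 0.85], pts)
```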
== Algorithm selection
The reduction to two variables makes it possible to plot the function across the whole domain at once, as seen in @fig:plot-f-2d. Furthermore, a cross-section along $x_1 = 0.5$ is shown in @fig:cross-section-2d. These two figures confirm two of the difficulties that were suspected earlier: the objective function has many local optima and it contains a lot of plateaux and discontinuities.
#figure(image("contour_plot_2d.png"), caption: [Plot of $f(vv(x))$ for the simplified problem over its entire domain]) <fig:plot-f-2d>
#figure(image("func_cross_section_2d.png"), caption: [Plot of a cross-section of $f(vv(x))$ at $x_1$ = 0.5 for the simplified problem]) <fig:cross-section-2d>
This leads to the following criteria to choose an optimization algorithm:
+ The optimizer must be able to escape local optima.
+ The optimizer must not rely on the first/second derivative.
The last condition eliminates almost all methods that are not direct search methods. Direct search methods include stochastic methods like simulated annealing or genetic algorithms, as well as methods like cyclic coordinate search and the Nelder-Mead simplex method. /*TODO: Sources perhaps?*/ My initial thought was to use a stochastic method for its ability to escape local minima. I chose simulated annealing for its simplicity of implementation. Indeed, for this relatively simple problem, simulated annealing performs well and finds the global optimum in most runs. There are multiple global optima because of the symmetry in the objective function, but one solution is shown in @fig:path-2d.
The thrusters are as far as possible from each other to maximize the moment they can provide and are 'balancing' on the corners to be able to provide thrust in both the x- and y-directions. Although this makes the solution technically very sensitive to the inputs, it does not make for an unreasonable design. This will be discussed more in the sensitivity analysis of the main problem.
#figure(image("solution_2d.png"), caption: [Plot of the path representing the 2d shape in red. Position of the thrusters in the solution plotted in blue.]) <fig:path-2d>
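The simulated annealing used here is the textbook variant; the sketch below shows the essential loop (random perturbation with wrap-around, downhill moves always accepted, uphill moves accepted with probability $e^(-Delta\/T)$, geometric cooling). Step size and cooling schedule are illustrative choices, not the tuned values:

```python
import math
import random

def simulated_annealing(f, x0, step=0.05, t0=1.0, cooling=0.995,
                        iters=4000, seed=0):
    """Bare-bones simulated annealing as applied to the 2D toy problem."""
    rng = random.Random(seed)
    x, fx = list(x0), f(x0)
    best, f_best = list(x), fx
    temp = t0
    for _ in range(iters):
        # Random perturbation, wrapping around the unit domain.
        cand = [(xi + rng.uniform(-step, step)) % 1.0 for xi in x]
        fc = f(cand)
        # Accept downhill moves always, uphill moves with exp(-delta/T).
        if fc < fx or rng.random() < math.exp(-(fc - fx) / max(temp, 1e-12)):
            x, fx = cand, fc
            if fx < f_best:
                best, f_best = list(x), fx
        temp *= cooling  # geometric cooling schedule
    return best, f_best
```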
// - Analytical (expected) solution
// - Optimizer must be 0th order
// - Apply simulated annealing
// - Arrives at solution (sometimes)
// Motivation of optimization approach, choices
// Investigation of obtained optimum
// Observations, interpretation of results, conclusions
= Optimization of the actual problem <ch:true-problem>
The actual 3D problem has 9 design variables, so it is not possible to construct a plot like @fig:plot-f-2d, but similar properties can be assumed, since flat surfaces and tight corners are also present in the 3D case. In this more complicated case, simulated annealing turned out to be very unreliable and often converged to results which were obviously not optimal.
The reason for this might be how easily simulated annealing gets trapped, at low temperatures, in shallow plateaux that are local optima. This can be mitigated by 'reheating' the solution after it has been stuck for too long, but the optimizer is still very inefficient at finding the right solution. Since there are a lot of variations of simulated annealing and a lot of parameters that can be adjusted, there might be a more efficient version of simulated annealing that can deal with this problem, but there is also a simpler way.
An efficient algorithm, robust to both un-smooth functions and plateaux, can be run on many random points on the domain in hopes that it finds a good global optimum in the process. After some testing, Powell's conjugate directions (PCD) method was chosen @powell. Since PCD is based on line-searches that use golden-section search or quadratic interpolation internally @fminbnd, it does not rely on derivatives and is pretty robust.
A major benefit of PCD in this case is that the search radius can be defined exactly with the bounds of the line-searches. The optimization performs better and converges faster on a smaller search radius, centered around 0. By limiting the search radius of the algorithm, the optimizer needs to deal with multiple optima less often and is more likely to find the closest local optimum quickly. It is also less likely to cycle between different optima without converging, as less of the function is visible at a time. This comes with the disadvantage that less of the solution space is explored at a time, so more cycles might be needed to find the local minimum. A good compromise is to set the search radius to 0.25.
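The multi-start strategy can be sketched as follows. For illustration only, bounded golden-section line searches along the coordinate axes stand in for Powell's conjugate-direction updates; the function names, the test function, and the parameter values are assumptions, not taken from the actual MATLAB implementation.

```python
import random

def golden_section(f, a, b, tol=1e-5):
    """Minimize a unimodal 1-D function on [a, b] by golden-section search."""
    phi = (5 ** 0.5 - 1) / 2
    c, d = b - phi * (b - a), a + phi * (b - a)
    while abs(b - a) > tol:
        if f(c) < f(d):
            b, d = d, c
            c = b - phi * (b - a)
        else:
            a, c = c, d
            d = a + phi * (b - a)
    return (a + b) / 2

def local_search(f, x0, radius=0.25, cycles=20):
    """Cyclic coordinate search; each line search is limited to +/- radius."""
    x = list(x0)
    for _ in range(cycles):
        for i in range(len(x)):
            def along(t, i=i):
                y = list(x)
                y[i] += t
                return f(y)
            x[i] += golden_section(along, -radius, radius)
    return x

def multistart(f, sample, n_starts=5, rng=None):
    """Run the local search from random starting points and keep the best result."""
    rng = rng or random.Random(0)
    best, fbest = None, float("inf")
    for _ in range(n_starts):
        x = local_search(f, sample(rng))
        fx = f(x)
        if fx < fbest:
            best, fbest = x, fx
    return best, fbest

# Demo on a smooth bowl with known minimum at (0.3, -0.2).
def bowl(x):
    return (x[0] - 0.3) ** 2 + (x[1] + 0.2) ** 2

best, fbest = multistart(bowl, lambda rng: [rng.uniform(-1, 1), rng.uniform(-1, 1)])
```

Note how the search radius appears directly as the bounds of each line search, which is the property exploited above.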
Since PCD, like most direct search methods, does not explicitly account for constraints, the objective function needs to be transformed to give a penalty outside the feasible domain, such that the optimal solution cannot lie outside the domain. The following transformation is used:
$
tilde(f) = f + p(max(0, g))^2 #v(2em) p in RR^+
$
where $p$ determines the magnitude of the penalty. $p$ can be set quite high right away, since the optimizer needs to deal with very steep gradients (almost discontinuities) anyway. The value used is $p = 1000$.
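In code, this transformation is a small wrapper. The Python sketch below (the function names and the demo constraint are mine, not from the MATLAB implementation) shows the idea:

```python
def penalized(f, g, p=1000.0):
    """Exterior quadratic penalty: points with g(x) <= 0 are unchanged,
    infeasible points pay p * max(0, g(x))^2 on top of f(x)."""
    return lambda x: f(x) + p * max(0.0, g(x)) ** 2

# Example: objective f(x) = x_0 with the constraint x_0 >= 0, i.e. g(x) = -x_0.
f_tilde = penalized(lambda x: x[0], lambda x: -x[0])
```

Feasible points keep their original objective value, while infeasible points are penalized quadratically in the constraint violation, so the penalized function stays continuous at the boundary.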
To generate an initial value for the PCD, a random vector is drawn from a uniform distribution on the domain $[0, 1]^9$ and re-drawn until it satisfies the inequality constraint. After some initial tests, it was found that all the most promising results occur with 2 sets of thrusters at $v_i >= 0.5$ (the upper half of the UV map, corresponding to the rear of the spacecraft) and one set at $v_i <= 0.5$. This makes physical sense, as the thrusters benefit from being evenly distributed over the spacecraft to gain the maximum moment arm, with the rear of the spacecraft being wider and better for roll control. Ultimately, the domain for the initial value generation was constrained to accommodate these preferences and make the total search space of initial guesses smaller. Along with the restrictions in $v_i_0$, $u_i_0 < 0.5$ can be enforced, as $f(x)$ is symmetric with respect to $u_i$.
/*$
v_(1,2)_0 in [0.5, 1]\
v_(3)_0 in [0, 0.5]\
vv(u)_0 in [0, 0.5]^3\
$*/
This reduces the hypervolume of the search space by a factor of $2^6 = 64$.
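The initial-guess generation described above amounts to rejection sampling over a restricted box. A Python sketch follows; the variable ordering, the exact bounds, and the dummy constraint are illustrative assumptions, not the actual setup.

```python
import random

def sample_initial(g, bounds, rng=None):
    """Rejection sampling: draw uniformly inside the per-variable bounds and
    redraw until the inequality constraint g(x) <= 0 holds."""
    rng = rng or random.Random(0)
    while True:
        x = [rng.uniform(lo, hi) for lo, hi in bounds]
        if g(x) <= 0.0:
            return x

# One possible ordering of the 9 variables (u_1..u_3, v_1..v_3, a_1..a_3):
# u_i in [0, 0.5]; v_1, v_2 in [0.5, 1], v_3 in [0, 0.5]; a_i in [0, 1].
bounds = [(0.0, 0.5)] * 3 + [(0.5, 1.0), (0.5, 1.0), (0.0, 0.5)] + [(0.0, 1.0)] * 3
x0 = sample_initial(lambda x: sum(x) - 4.0, bounds)  # dummy constraint for the demo
```

Rejection sampling is practical here because the single constraint is not very restrictive; a highly restrictive constraint would make the redraw loop expensive.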
The interval of convergence for the PCD is set to a rather large number, 0.01. The idea is to have an algorithm that converges quickly, such that it can be run many times to find a good candidate for the optimal solution, then find the precise optimum with that candidate as a starting point in a second step.
// - Apply simulated annealing
// - Problems
// - Apply Powell's conj. directions on 100 randomly sampled points
// - Constrain domain
// (Same points as above)
// (possibly including different variations of model, problem formulation, optimization approach)
= Interpretation of results <ch:results>
After running the program multiple times with different RNG states, most runs come up with similar solutions. This is a good sign, as it is likely that this family of solutions includes the global optimum. It is impossible to say, though, whether any solution is the true global minimum unless a better one is found. The result shown here is the best one found, with a utility function value of $f(x) = -2.8347$. The values in terms of map coordinates ($u$, $v$) and $a$ are presented in @fig:solution-uva, while the solution in terms of position ($vv(r)$) and direction ($vv(d)$) is presented in @fig:solution-rd. A 3d visualization of the solution is also visible in @fig:solution-3d.
// Solution makes sense physically
#figure(table(columns: 4,
$i$, $1$, $2$, $3$,
$u$, $0.2355$, $0.4431$, $0.2622$,
$v$, $0.1255$, $0.9431$, $0.1081$,
$a$, $0.7742$, $0.0647$, $0.1260$,
), caption: [Solution in terms of map coordinates of the first 3 thrusters]) <fig:solution-uva>
#figure(table(columns: 7,
$i$, $1$, $2$, $3$, $4$, $5$, $6$,
$r_x$, $-0.3320$, $-0.0473$, $-0.3587$, $ 0.3327$, $ 0.0473$, $ 0.3584$,
$r_y$, $ 0.8432$, $-0.9758$, $ 0.8344$, $ 0.8431$, $-0.9763$, $ 0.8342$,
$r_z$, $ 0.0342$, $ 0.0036$, $-0.0050$, $ 0.0336$, $ 0.0033$, $-0.0060$,
$d_x$, $ 0.6213$, $-0.6935$, $ 0.2543$, $ 0.6184$, $-0.5928$, $ 0.3233$,
$d_y$, $ 0.6593$, $ 0.0775$, $ 0.3084$, $-0.6590$, $-0.0769$, $-0.3085$,
$d_z$, $ 0.4234$, $-0.7032$, $-0.9165$, $-0.4281$, $ 0.7854$, $ 0.8945$,
), caption: [Solution in terms of 3d positions and directions]) <fig:solution-rd>
//#figure(image("solution_map.png"), caption: [Solution in map coordinates, overlaid over the (positive) normal map])
#figure(image("solution_3d.png"), caption: [Solution rendered in 3d]) <fig:solution-3d>
It can be seen from @fig:solution-3d that the thrusters in the solution form 3 pairs, one at each extremity of the spacecraft. Each pair forms a perpendicular cross. This solution maximizes the distance of the thrusters from each other while making sure there is minimal alignment between any two thrusters. It makes sense physically that this is one of the best solutions that can be achieved.
== Sensitivity analysis
// 2 kinds of sensitivity: w.r.t. 3d positions and w.r.t variables
// Since there are no active constraints, response of f to x is either 0 or discontinuous.
// 2nd point is discontinuous in u and v
// Takes advantage of corners, so theoretically very sensitive
// Not an issue in practice.
For this problem, a sensitivity analysis comes in two forms. One is the sensitivity with respect to the design parameters that have been optimized, which is what is typically done. These parameters, however, have been chosen to reduce the dimensionality of the problem and do not correspond to any physical dimensions. Therefore, the sensitivity with respect to the positions and directions is also calculated.
The single constraint was not active in the solution, so the only relevant response is that of the objective function itself. Since the relevant point is a local optimum that does not lie on any constraint, the derivatives at that point are either zero, or the function is not smooth (discontinuous) there. Discontinuities can only come from the position/normal maps themselves. After looking at the discrete gradient of the normal map, it is found that points 2 and 5 (the pair in the front of the spacecraft) lie on a crease, so they have discontinuous derivatives and very large sensitivity. The other points lie on a smoother surface, so the derivative with respect to them is zero.
The thruster pair is, in a way, exploiting the interpolation between pixels in the normal map to find a very narrow solution that results in a right angle for the thrusters.
This makes the solution very sensitive with respect to the placement of thruster pair 2/5 on the 2d map, but this sensitivity does not have any physical meaning.
/*#figure(table(columns: 4,
$i$, $1$, $2$, $3$,
$u$, $0$, $infinity$, $0$,
$v$, $0$, $infinity$, $0$,
$a$, $0$, $0$, $0$,
), caption: [Solution in terms of map coordinates of first 3 thrusters]) <fig:sensitivities-uva>*/
The sensitivity with respect to the positions and directions of the thrusters has a more direct physical meaning. It represents how much the objective function changes with manufacturing defects, design adjustments and physical deformation (due to e.g. temperature or stress).
The derivative with respect to each position $vv(r)_i$ and direction $vv(d)_i$ can be calculated analytically.
$
(dif f) / (dif mat(vv(r), vv(d))^T) = (dif f) / (dif det(A))
(dif det(A)) / (dif A)
(dif A) / (dif mat(vv(r), vv(d))^T)
$
$
(dif f) / (dif det(A)) = det(A) / f approx -1
$
For the second factor, Jacobi's formula can be used
$
(dif det(A)) / (dif A) = "adj"(A) = det(A) A^(-1)
$
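Jacobi's formula is easy to sanity-check numerically. The Python snippet below (an illustration, not part of the MATLAB implementation) does so for a 2×2 matrix; in component form the check uses the transposed index pair, i.e. the derivative of the determinant with respect to entry $(i, j)$ equals entry $(j, i)$ of the adjugate, which is the same statement up to layout convention.

```python
def det2(A):
    return A[0][0] * A[1][1] - A[0][1] * A[1][0]

def adj2(A):
    # Adjugate of a 2x2 matrix: swap the diagonal, negate the off-diagonal.
    return [[A[1][1], -A[0][1]], [-A[1][0], A[0][0]]]

A = [[2.0, 0.5], [1.0, 3.0]]
h = 1e-6
for i in range(2):
    for j in range(2):
        Ah = [row[:] for row in A]
        Ah[i][j] += h
        fd = (det2(Ah) - det2(A)) / h          # finite-difference d det / d A_ij
        assert abs(fd - adj2(A)[j][i]) < 1e-4  # Jacobi: equals adj(A)_ji
```

The same check scales to larger matrices, which is useful for validating the full sensitivity chain against finite differences.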
To get $(diff A) / (diff mat(vv(r), vv(d))^T)$, both matrices can be written in vector form.
$
diff vec(A_11, A_21, dots.v, A_56, A_66) = mat(
Omega_r_1, Omega_d_1, "", dots.c, 0;
"", I, "", "", dots.v;
"", "", dots.down, "", "";
dots.v, "", "", Omega_r_6, Omega_d_6;
0, dots.c, "", "", I;
) diff vec(vv(r)_1, vv(d)_1, dots.v, vv(r)_6, vv(d)_6)
$
where
$
vv(r)_i times vv(d)_i = Omega_r_i vv(r)_i = Omega_d_i vv(d)_i
$
Finally, it needs to be considered that the direction vectors $vv(d)_i$ are always normalized. Therefore, a change in one component of $vv(d)$ induces indirect changes, as the other components adjust to keep the vector normalized.
Since, for any pair of vector components
$
&d_i^2 + d_j^2 + ... = 1 \
<=>& (diff d_i) / (diff d_j) = - d_j / d_i
$
The total derivative can be calculated as:
$
(dif f) / (dif d_i) &= (diff f) / (diff d_i) + (diff f) / (diff d_j) (diff d_j) / (diff d_i) + (diff f) / (diff d_k) (diff d_k) / (diff d_i) \
&= (diff f) / (diff d_i) - (diff f) / (diff d_j) d_i / d_j - (diff f) / (diff d_k) d_i / d_k\
(dif f) / (dif vv(d)) &= (2 I - vv(d) (1/vv(d))^T) (diff f) / (diff vv(d)) #h(2em) 1/vv(d) " refers here to the \n element-wise inversion"
$
The final sensitivities are presented in @tab:sensitivity-rd. Since both position and direction have a similar scaling (from -1 ... 1), it makes sense to present the sensitivity in linear derivatives, instead of logarithmic ones.
The highest sensitivity is in $r_x_3$ with an absolute value of 3.2. For reference: if the vehicle has a length of 10 m, then a displacement of $r_x_3$ of 10 mm in the most sensitive direction results in a change in the utility function of 0.2 %, which I consider sufficiently stable.
Furthermore, it can be seen that some thrusters are more sensitive to design changes than others. Thruster pair 2/5, for example, is much less sensitive than pair 3/6. The dimension in which the highest sensitivity for a thruster occurs can also give insight into the purpose of the unit. Since thrusters 2/5 are most sensitive to changes in $r_y$ and $d_z$, it can be deduced that they are most important in pitch maneuvers (torque around the x-axis). With this information, the solution can be altered manually to better integrate it with the rest of the design, while maintaining its performance as much as possible.
//$
// (dif mat(vv(a), vv(u), vv(v))^T) / (dif vv(x)) = mat(I_(9 times 9), -I_(9 times 9))
//$
/*
#figure(table(columns: 4,
$i$, $1$, $2$, $3$,
$u$, $ 0.1407$, $-13.4541$, $-0.3078$,
$v$, $-0.1660$, $ 11.0299$, $-0.1425$,
$a$, $-0.1663$, $ -0.9259$, $-0.0001$
))
*/
#let log_sens = (
(-1.2418, -0.0696, -3.1924, 1.2660, 0.0599, 3.1779),
( 0.6157, -1.3783, 0.7497, 0.6488, -1.4026, 0.7667),
( 0.8635, -0.0832, -0.6336, 0.8300, -0.0922, -0.8844),
(-1.9138, -0.0374, -0.6434, -2.0050, 0.0287, -1.0214),
(-1.2718, -0.3768, -0.8854, 1.4866, 0.3834, 0.7276),
(-1.2939, 1.1169, 3.0216, 1.1591, -1.3929, -2.3653),
)
#figure(
// Written for maximum obfuscation
table(columns: 7, fill: (i, j) => if (i*j == 0) {white} else { gradient.linear(green, orange, space: oklab).sample(calc.abs(log_sens.at(j - 1).at(i - 1)) / 2.0 * 100%)}, $i$, ..range(1,7).map(x => $#x$), ..($r_x$, $r_y$, $r_z$, $d_x$, $d_y$, $d_z$).zip(log_sens).map(x => (x.at(0), ) + x.at(1).map(x => $#x$)).flatten(),),
caption: [The sensitivity with respect to different $vv(r)_i$ and $vv(d)_i$, colored by magnitude]
) <tab:sensitivity-rd>
= Conclusion & Recommendations <ch:conclusion>
The objective of this optimization problem was to find the optimal thruster placement for a specific spacecraft. Nominally, the 6 necessary thrusters define a 36-dimensional design space with many quite complex constraints, as the thrusters need to conform to the surface of the spacecraft. With the usage of position/normal maps, however, the problem can be restated with 18 design variables and a single (non-active) constraint. By using symmetry, the number of design variables can be further reduced to 9. This restatement comes at the cost of introducing noise, discontinuities and plateaux into the objective function, which limits the choice of optimization method. Ultimately, running Powell's conjugate direction method with a limited search radius from multiple randomly chosen starting points yields a good result most of the time.
The best solution found is presented in @fig:solution-3d. The sensitivity within the restated problem is very high, as one thruster lies on a sharp edge, corresponding to a near discontinuity in the objective function. The solution is much less sensitive in the original problem, which is much more representative of the physical reality.
The success of this methodology relies heavily on the geometry and the UV-map that maps the 3d surface to a 2d plane. Geometry with fewer sharp corners and fewer flat planes will be easier to optimize. The choice of mapping also strongly affects the objective function. The reduction of design variables by employing symmetry required a specific mapping to work. For some geometries, the original 36-dimensional problem formulation might be more advantageous, since its objective function is much smoother; this could be looked into in future research. The main issue with that approach is defining the problem constraints mathematically.
#bibliography("bibliography.bib")
#pagebreak()
#set heading(numbering: "A")
#counter(heading).update(0)
= Acknowledgments
Normal/position maps (@fig:position-map, @fig:pos-normal-map, @fig:neg-normal-map), as well as the 3D renders, were created in Blender (https://www.blender.org/download/)\
The signed distance field from @fig:sdf-map was generated in SDF Maker by Job Talle (https://jobtalle.com/SDFMaker/) \
All computations were made in MATLAB
= Source code
Source code can be found on github (https://github.com/bionick7/ME46060_Optimization)
// (preferably, include this digitally)
//== Additional graphs/data
// when applicable
https://github.com/vEnhance/1802 | https://raw.githubusercontent.com/vEnhance/1802/main/r05.typ | typst | MIT License

#import "@local/evan:1.0.0":*
#show: evan.with(
title: [Notes for 18.02 Recitation 5],
subtitle: [18.02 Recitation MW9],
author: "<NAME>",
date: [18 September 2024],
)
#quote(attribution: [Petey])[The most competent people, with weak processes, will screw up.]
This handout (and any other DLC's I write) are posted at
#url("https://web.evanchen.cc/1802.html").
Also, I'm going to start updating the online PDF's with *post-recitation notes*
based on what we talk about during the recitation.
So check the PDF later. (They're up already for R03 and R04.)
= How to find eigen-things of an $n times n$ matrix $A$
This is covered in full detail in Poonen section 6.
Here is a tl;dr, but please read Poonen.
(Unlike Poonen, I prefer $det(A - lambda I)$ to $det(lambda I - A)$ due to fewer minus signs.
This makes no difference.)
1. To find the *eigenvalues*, solve the equation $det(A - lambda I) = 0$ for $lambda$.
- For $n = 2$ it turns out that this equation is actually just
$ lambda^2 - (op("Tr") A) lambda + det A = 0 $
so you can take a shortcut specifically when $n = 2$.
- In real life, half the time this quadratic equation will have non-real roots.
I hope this doesn't happen in 18.02, though,
but I'll give you one example in a moment for your curiosity.
2. To find the *eigenvectors*, for _each_ eigenvalue $lambda$,
manually solve the system of $n$ equations
$ (A - lambda I) vec(x_1, dots.v, x_n) = 0 $
for $x_1$, ..., $x_n$. (For 18.02, usually $n <= 3$.)
- By the last step, $A - lambda I$ is _not_ invertible, (i.e. $det(A - lambda I) = 0$)
so this system of $n$ equations is always degenerate.
It will always have infinitely many solutions, i.e. at least one redundant equation.
(To tie into last recitation: it has at least one obvious solution, $x_1 = ... = x_n = 0$,
and because the matrix of coefficients $A - lambda I$ is not invertible, it thus has infinitely many.)
== Example with non-real eigenvalues (not for exam)
Let $M = mat(cos(60 degree), -sin(60 degree); sin(60 degree), cos(60 degree))$
be the matrix corresponding to rotation by $60$ degrees.
(Feel free to replace $60$ by a different number.)
I claim that $M$ has no real eigenvalues or eigenvectors.
Indeed, if $bf(v) in RR^2$ was an eigenvector,
then $M bf(v)$ needs to point in the same direction as $bf(v)$, by definition.
But that can never happen: $M$ is rotation by $60 degree$,
so $M bf(v)$ and $bf(v)$ necessarily point in different directions --- $60$ degrees apart.
If you want, you can compute the characteristic polynomial for $M$.
Indeed, $op("Tr")(M) = 2 cos(60 degree) = 1$
and $det(M) = cos^2(60 degree) + sin^2(60 degree) = 1$.
So the characteristic polynomial is $lambda^2 - lambda + 1 = 0$,
which has two nonreal roots $(1 pm sqrt(-3))/2$.
= An example showing a use for eigenvectors (not for exam)
#problem[
Let $M$ be the matrix $mat(2,1;0,3)$.
Calculate $M^100$.
]
This looks impossible at first glance without a computer.
I'll show you how to do it with eigenvectors.
#solution[
In R03 and R04, I repeatedly stress the idea that if you have a linear operator $T$,
and you know the outputs of $T$ on _any_ basis, then that tells you all the outputs of $T$.
Idea: we'll find the eigenvectors for $M$, and pray they're independent (they usually are).
Since $det M = 5$ and $op("Tr") M = 6$,
the characteristic polynomial is $lambda^2 - 5lambda + 6 = 0$,
so the eigenvalues are $2$ and $3$.
If you grind out the corresponding system, you'll find these eigenvectors:
- The vector $vec(1,0)$ is an eigenvector with eigenvalue $2$,
because $M vec(1,0) = vec(2,0) = 2 vec(1,0)$.
- The vector $vec(1,1)$ is an eigenvector with eigenvalue $3$,
because $M vec(1,1) = vec(3,3) = 3 vec(1,1)$.
Then it's really easy to apply $M^100$ because it's just multiplication by a constant:
$
M^(100) vec(1,0) = 2^(100) vec(1,0); #h(2em)
M^(100) vec(1,1) = 3^(100) vec(1,1).
$
So now we know the outputs of $M^100$ at two linearly independent vectors.
We can now rewrite this as
$
M^(100) vec(1,0) = vec(2^100, 0); #h(2em)
M^(100) vec(0,1) = M^100 vec(1,1) - M^100 vec(1,0) = vec(3^100 - 2^100, 3^100).
$
Thus encoding $M$ as in R03 gives the answer: $M^100 = mat(2^100, 3^100-2^100; 0, 3^100)$.
]
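Since Python integers have arbitrary precision, you can verify this claim directly by computer. (This check is a post-recitation aside of mine, not something you need for 18.02.)

```python
def matmul2(X, Y):
    """Multiply two 2x2 matrices (exact, since Python ints are arbitrary precision)."""
    return [
        [X[0][0] * Y[0][0] + X[0][1] * Y[1][0], X[0][0] * Y[0][1] + X[0][1] * Y[1][1]],
        [X[1][0] * Y[0][0] + X[1][1] * Y[1][0], X[1][0] * Y[0][1] + X[1][1] * Y[1][1]],
    ]

M = [[2, 1], [0, 3]]
P = [[1, 0], [0, 1]]  # identity
for _ in range(100):
    P = matmul2(P, M)  # after the loop, P = M^100

assert P == [[2 ** 100, 3 ** 100 - 2 ** 100], [0, 3 ** 100]]
```

The brute-force loop does 100 matrix multiplications, while the eigenvector argument gives the answer with essentially no computation — that's the point.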
#remark[
A natural question might be whether every $2 times 2$ matrix (or any $n times n$ matrix)
has such an basis consisting of only eigenvectors,
i.e. whether you could do this trick with any matrix.
It turns out (I won't prove this) that it's always possible for $n times n$ matrices
as long as the characteristic polynomial has no repeated roots,
although the eigenvalues might end up being complex numbers.
]
= Recitation problems from official course
$ A = mat(1,2;2,1), #h(1em) B = mat(1,2;-1,-2), #h(1em) C = mat(-1,0,1;3,4,1;0,0,2). $
/ 1: Find the eigenvalues and eigenvectors of $A$ and $B$.
/ 2: Find the eigenvalues and eigenvectors of $C$.
/ 3: For any real numbers $a$ and $b$, find the eigenvalues and eigenvectors of
$A = mat(a,0;0,b)$ and $B = mat(a,1;0,a)$.
If $a = b$, how do these answers differ?
= Post-recitation notes: let's redo everything
Based on my pulse today, I want to redo the whole lecture from first principles.
Therefore, the discussion in this section is supposed to stand by itself,
even if you missed both the lecture and recitation and didn't read any other material.
I will try to explain, from first principles, how to find eigenvectors and eigenvalues.
However, I _will_ assume you've read `r03.pdf` and `r04.pdf` from my website,
so if you haven't read those yet, now's a good time to do so.
Math builds on itself, so you need to have those down to make further progress here.
== Terminology
First, the relevant definition:
#defn[
Given a square matrix $A : RR^n -> RR^n$, suppose that
$ A bf(v) = lambda bf(v) $
for some nonzero vector $bf(v)$.
Then we say that $bf(v)$ is an _eigenvector_ for $A$,
and $lambda$ is the _eigenvalue_.
]
The prefix "eigen-" means self or own.
It means that $A$ maps $bf(v)$ to a multiple of itself.
The basic problem is:
#prob[
If we have a matrix $A$, can we find any and all the eigen-things for it?
]
=== Type signatures
I forgot to harp about types as I did in R01.
- The type of "eigenvector" is a _vector_.
- The type of "eigenvalue" is a _scalar_ (i.e. it's a real number).
== Story of how to derive the eigen-things
Let's pretend we didn't go to lecture or recitation,
and see how to solve this problem.
For this story, our protagonist will be the matrix
$ A = mat(5, -2; 3, 10). $
Phrased another way, the problem of finding eigenvectors is, by definition,
looking for $lambda$, $x$, $y$ such that
$
A vec(x,y) = lambda vec(x, y)
<==>
cases(
5 x - 2 y = lambda x,
3 x + 10 y = lambda y
).
$
Smart-alecks will say $x=y=0$ always works for every $lambda$.
_Are there other solutions?_
=== Why guessing the eigenvalues is ill-fated
As an example, let's see if there are any eigenvectors $vec(x,y)$ with eigenvalue $100$.
In other words, let's solve
$ mat(5, -2; 3, 10) vec(x, y) = 100 vec(x, y). $
If we solve the system of equations, we get
$
cases(
5 x - 2 y = 100 x,
3 x + 10 y = 100 y
)
==>
cases(
-95x - 2y = 0,
3x - 90y = 0,
)
==> x = y = 0.
$
Well, that's boring. In this system of equations, the only solution is $x = y = 0$.
We can try a different guess: maybe we use $1000$ instead of $100$.
An eigenvector with eigenvalue $1000$ ought to be correspond to
$ mat(5, -2; 3, 10) vec(x, y) = 1000 vec(x, y). $
If we solve the system of equations, we get
$
cases(
5 x - 2 y = 1000 x,
3 x + 10 y = 1000 y
)
==>
cases(
-995x - 2y = 0,
3x - 990y = 0,
)
==> x = y = 0
$
which... isn't any better. We still don't get any solutions besides $x = y = 0$.
At this point, you should be remembering something I told you in R04:
a "random" system of equations and variables usually only has a unique solution.
So if I keep picking numbers out of a hat like $100$, $1000$, etc.,
then I'm unlikely to find anything interesting.
In order to get a system that doesn't just solve to $x=y=0$,
I'm going to need to cherry-pick my number $lambda$.
=== Cherry-picking $lambda$
Let's try to figure out what value of $lambda$ would make the system more interesting.
If we copy what we did above, we see that the general process is:
$
cases(
5 x - 2 y = lambda x,
3 x + 10 y = lambda y
)
==>
cases(
(5-lambda)x - 2y = 0,
3x + (10-lambda)y = 0,
)
$
We need to cherry-pick $lambda$ to make sure that the system doesn't just
solve to $x = y = 0$ like the examples we tried with $100$ and $1000$.
But we learned how to do this in R04:
in order to get a degenerate system you need to make sure that
$
0 = det mat(5-lambda, -2; 3, 10-lambda).
$
#remark[
At this point, you might notice that this is secretly an explanation
of why $A - lambda I$ keeps showing up on your formula sheet.
Writing $A bf(v) = lambda bf(v)$
is the same as $(A - lambda I) bf(v) = 0$, just more opaquely.
]
Expanding the determinant on the left-hand side gives
$
0 = det mat(5-lambda, -2; 3, 10-lambda)
= (5-lambda)(10-lambda) + 6 = lambda^2 - 15 lambda + 56 = (lambda - 7)(lambda - 8).
$
Great! So we expect that if we choose either $lambda = 7$ and $lambda = 8$,
then we will get a degenerate system, and we won't just get $x = y = 0$.
Indeed, let's check this:
- When $lambda = 7$, our system is
$
cases(
5 x - 2 y = 7 x,
3 x + 10 y = 7 y
)
==>
cases(
-2x - 2y = 0,
3x + 3y = 0,
)
==> x = -y.
$
So for example, $vec(-13,13)$ and $vec(37,-37)$ will be eigenvectors with eigenvalue $7$:
$ A vec(-13, 13) = mat(5,-2;3,10) vec(-13, 13) = vec(-91, 91) = 7 vec(-13, 13). $
On exam, you probably answer
"the eigenvectors with eigenvalue $7$ are the multiples of $vec(-1,1)$".
Or you can say
"the eigenvectors with eigenvalue $7$ are the multiples of $vec(1,-1)$" if you want;
these are the same thing.
You could even say
"the eigenvectors with eigenvalue $7$ are the multiples of $vec(100,-100)$"
and still get credit, but that's silly.
- When $lambda = 8$, our system is
$
cases(
5 x - 2 y = 8 x,
3 x + 10 y = 8 y
)
==>
cases(
-3x - 2y = 0,
3x + 2y = 0,
)
==> x = -2/3 y.
$
So for example, $vec(-20, 30)$ is an eigenvector with eigenvalue $8$:
$ A vec(-20, 30) = mat(5,-2;3,10) vec(-20, 30) = vec(-160, 240) = 8vec(-20, 30). $
On exam, you should answer
"the eigenvectors with eigenvalue $8$ are the multiples of $vec(-2,3)$".
Or you can say
"the eigenvectors with eigenvalue $7$ are the multiples of $vec(2,-3)$" if you want;
these are the same thing.
You could even say
"the eigenvectors with eigenvalue $7$ are the multiples of $vec(200,-300)$"
and still get credit, but that's silly.
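As a quick computer sanity check (again a post-recitation aside, not part of the exam material), you can verify both eigenpairs in a few lines of Python:

```python
def apply2(A, v):
    """Apply a 2x2 matrix to a vector."""
    return [A[0][0] * v[0] + A[0][1] * v[1], A[1][0] * v[0] + A[1][1] * v[1]]

A = [[5, -2], [3, 10]]
assert apply2(A, [-1, 1]) == [7 * -1, 7 * 1]   # eigenvalue 7
assert apply2(A, [-2, 3]) == [8 * -2, 8 * 3]   # eigenvalue 8
```

Each assertion checks exactly the defining equation $A bf(v) = lambda bf(v)$ for one eigenpair.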
== Summary
To summarize the story above:
- We had the matrix $A = mat(5, -2; 3, 10)$ and wanted to find $lambda$'s for which
the equation
$ mat(5, -2; 3, 10) vec(x,y) = lambda vec(x,y) $
had solutions other than $x=y=0$.
- We realized that guessing $lambda$ was never going to fly,
so we went out of our way to cherry-pick $lambda$ to make sure the system was degenerate.
The buzzwords for this are "find the roots of the characteristic polynomial",
but I wanted to show that it flows naturally from the end goal.
- For the two values of $lambda$ we cherry-picked,
we know the system of equations is degenerate.
So we solve the two degenerate systems and see what happens.
In lectures and notes, the last two bullets are separated as two different steps,
to make it into a recipe.
But don't lose sight of how they're connected!
I would rather call it the following interlocked thing:
#align(center)[
We cherry-pick $lambda$ to make sure the system doesn't just solve to $x=y=0$.
To do the cherry-picking, ensure $det(A - lambda I) = 0$
so that our system is degenerate.
]
=== Degenerate systems of two equations all look stupid
This is worth repeating:
*degenerate systems of two equations all look stupid*.
Earlier on, we saw the two systems
$
cases(
-2x - 2y = 0,
3x + 3y = 0,
)
#h(2em)
" and "
#h(2em)
cases(
-3x - 2y = 0,
3x + 2y = 0,
).
$
Both look moronic to the eye, because in each system,
the two equations say the same thing.
This is by design: when you're solving the eigenvector problem,
_you're going out of your way to find degenerate systems_
so that there will actually be solutions besides $x = y = 0$.
In particular: if you do all the steps right,
*you should never wind up with $x = y = 0$ as your only solution*.
That means you either didn't do the cherry-picking step correctly,
or something went wrong when you were solving the system.
If that happens, check your work!
=== Degenerate systems of three equations may not look stupid, but they are
When you have three or more equations instead, they don't necessarily look as stupid.
To reuse the example I mentioned from R04, we have
$
x + 10 y - 9 z &= 0 \
3 x + y + 10 z &= 0 \
4 x + 11 y + z &= 0
$
which doesn't look stupid.
But again, if you check the determinant, you find out
$ det mat(1,10,-9;3,1,10;4,11,1) = 0. $
So you know _a priori_ that there will be solutions besides $x=y=z=0$.
I think 18.02 won't have too many situations where you need to solve a degenerate
three-variable system of equations, because it's generally annoying to do by hand.
But if it happens, you should fall back on your high school algebra
and solve the system however you learned it in 9th or 10th grade.
The good news is that one of the three equations is always redundant,
so you can just throw one away and solve for the other two.
For example, in this case we would solve
$
x + 10 y &= 9z \
3 x + y &= -10z
$
for $x$ and $y$, as a function of $z$.
I think this particular example works out to $x = -109/29 z$, $y = 37/29 z$.
And it indeed fits the third equation too.
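Exact rational arithmetic confirms this in a few lines (once more a computer aside, not something you need to do by hand):

```python
from fractions import Fraction

z = Fraction(1)  # the free variable; any value works by linearity
x = Fraction(-109, 29) * z
y = Fraction(37, 29) * z

# All three original equations are satisfied:
assert x + 10 * y - 9 * z == 0
assert 3 * x + y + 10 * z == 0
assert 4 * x + 11 * y + z == 0
```

Using `Fraction` avoids floating-point round-off, so the assertions check the equations exactly.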
== Good news for people who don't like eigenthings
If you don't like eigen-things, I have some good news for you:
according to what I'm hearing, they won't actually be used much later on this course.
https://github.com/TheTacBanana/make_deedpoll | https://raw.githubusercontent.com/TheTacBanana/make_deedpoll/master/README.md | markdown

# Deed Poll
A Typst template for creating a [Deed Poll](https://www.gov.uk/change-name-deed-poll),
used to change your name in the UK.
It is recommended to create several copies of a deed poll so that
you have backups in case an organisation decides they would like to keep it.
https://github.com/Myriad-Dreamin/typst.ts | https://raw.githubusercontent.com/Myriad-Dreamin/typst.ts/main/packages/typst.node/npm/freebsd-x64/README.md | markdown | Apache License 2.0

# `@myriaddreamin/typst-ts-node-compiler-freebsd-x64`
This is the **x86_64-unknown-freebsd** binary for `@myriaddreamin/typst-ts-node-compiler`
https://github.com/sa-concept-refactoring/doc | https://raw.githubusercontent.com/sa-concept-refactoring/doc/main/chapters/introduction.typ | typst

// - Description of the initial situation
// - Description of the task
// - General conditions
// - Preliminary work
// - Overview of the following chapters
= Introduction
// COR Not to dissuade, but the intro would be slightly more compelling if there was a refactoring (idea) that supported the transformation from <C++20 code to constrained code/concepts.
Writing clean and readable code is getting more important as programming languages are growing and evolving.
This is also the case for C++; therefore, refactoring already written code is becoming more important as well.
#set quote(block: true)
#quote(attribution: [<NAME> @tips_for_cpp_refactoring])[Aside from the problems that could affect any language, C++ developers find code refactoring more challenging, in part because the language is complex. This complexity is compounded with language-extending functionality such as macros and templates, not to mention that the language is large, to begin with, and the syntax is difficult to process.]
The task of the project involves implementing and contributing new refactoring features to the LLVM project in order to assist the C++ community with their refactoring tasks.
== Initial Situation
The C++ programming language is constantly developed further by the standard committee @cpp_standard_committee.
With C++20, template parameter constraints were introduced which allow specification of requirements for template arguments.
Concepts are a way of abstracting these constraints. @constraints_and_concepts
Refactoring is a common technique to resolve code smells and improve the internal structure of code without changing its external behavior. @refactoring
Automated tests often ensure that the correct functionality is retained.
Older versions of integrated development environments (IDEs) were implementing support for code analysis and tools like symbol lookup and refactorings themselves.
This led to the problem that new languages only slowly gained adoption, one editor at a time, each of them having to spend the effort to implement support for it.
The goal of the Language Server Protocol was to address this and have the compiler or an adjacent tool implement the logic of these IDE features independently of a specific editor in something called a Language Server.
Editors then only need to know how to communicate with this Server and they gain support for a wide range of languages. @lsp_bit_services
The new constructs of C++20 concepts provide the potential to apply established refactorings, and there is also the possibility of developing new refactorings.
The LLVM project @llvm_github is an open source project, whose source code is available on GitHub.
It contains the source code for LLVM, a toolkit for the construction of highly optimized compilers, optimizers, and run-time environments.
Clangd is a language server which lives within the LLVM project.
It understands C++ code and provides smart features like code completion, compile errors, and go-to-definition.
#pagebreak()
== Problem Description
When developing in any programming language features like code refactorings are a helpful tool.
They can help to restructure and simplify source code.
To make these features available to as many IDEs as possible the language server protocol can be used.
One language server for C++ is clangd, which unfortunately does not have many refactorings available, especially not for features introduced with C++20.
Therefore, it would be nice to have some support for new language features like concepts.
It would make development much more convenient and make the developer aware of other ways of writing code using the newly added features.
== Project Goal
This section describes the goals of this project according to the task assignment @assignment.
Additionally, parts were added to give the project more structure, as this project is more explorative than usual.
The goal of this semester thesis is to come up with new ideas for refactoring operations specific to parameter type constraints and to implement some of them.
It should be checked if currently existing refactorings can be applied to concepts.
This may already be implemented in the currently available tooling.
Ideally, new refactorings should be submitted upstream as a pull request to clangd.
This is done to support the C++ community as well as helping the LLVM project grow.
In addition to this, research will also be carried out to determine how the clangd language server communicates with the development tools.
This also includes documenting the basic knowledge needed to understand it.
For the implementation itself, it needs to be clear where the code needs to be added, how it should be tested, what the coding guidelines are, and how it can be contributed.
Each implemented refactoring feature should be documented, including its usage and how it transforms the source code.
#pagebreak()
== Structure of This Report
This report encompasses the analysis, elaboration, and implementation of the project's work. It is structured into the following sections:
*@analysis:* Captures the findings from the research conducted on the foundational principles of the language server protocol and clangd in particular.
*@refactoring_ideas:* Lists the collected ideas for potential refactorings.
*@inline_concept_requirement:* Describes the implementation process and result of the refactoring "Inline Concept Requirement".
*@abbreviate_function_template:* Describes the implementation process and result of the refactoring "Convert Function Template to Abbreviated Form".
*@development_process:* Gives insight about how the development environment was set up and which steps were needed to make the LLVM project compile locally.
*@project_management:* Outlines how the project was approached and explains the project plan and time tracking.
*@conclusion:* Summarizes key findings, insights, and implications of the project.
|
|
https://github.com/kimushun1101/typst-jp-conf-template | https://raw.githubusercontent.com/kimushun1101/typst-jp-conf-template/main/docs/setting-fonts.md | markdown | MIT No Attribution | # Font Settings
Fonts are configured in lib.typ.
For RSJ, for example, the setting is in [libs/rsj-conf/lib.typ](../libs/rsj-conf/lib.typ#L5-L7).
Basically, MS fonts should be selected automatically on Windows, and Hiragino fonts on Mac.
On Linux these fonts are not installed, so you may need to install them yourself.
On Ubuntu, for example,
```
sudo apt-get update
sudo apt-get install -y fonts-noto fonts-noto-cjk fonts-noto-cjk-extra
```
installs the Noto fonts.
The BIZ UD fonts can be installed with
```
# Install BIZUDGothic
curl -L https://github.com/googlefonts/morisawa-biz-ud-gothic/releases/latest/download/morisawa-biz-ud-gothic-fonts.zip -o morisawa-biz-ud-gothic-fonts.zip
unzip morisawa-biz-ud-gothic-fonts.zip
sudo mkdir /usr/local/share/fonts/BIZUDGothic
sudo mv morisawa-biz-ud-gothic-fonts/fonts/ttf/*.ttf /usr/local/share/fonts/BIZUDGothic/
rm -rf morisawa-biz-ud-gothic-fonts*
# Install BIZUDMincho
curl -L https://github.com/googlefonts/morisawa-biz-ud-mincho/releases/latest/download/morisawa-biz-ud-mincho-fonts.zip -o morisawa-biz-ud-mincho-fonts.zip
unzip morisawa-biz-ud-mincho-fonts.zip
sudo mkdir /usr/local/share/fonts/BIZUDMincho
sudo mv morisawa-biz-ud-mincho-fonts/fonts/ttf/*.ttf /usr/local/share/fonts/BIZUDMincho/
rm -rf morisawa-biz-ud-mincho-fonts*
# Update fonts cache
fc-cache -vf
```
Incidentally, the PDF generated automatically by GitHub Actions applies the settings above in [.github/workflows/gh-pages.yml](../.github/workflows/gh-pages.yml#L22-L23).
The following is quoted from the main text:
> Here, the gothic face is whichever of "MS PGothic", "Hiragino Kaku Gothic Pro", "IPAexGothic", or "Noto Sans CJK JP" is found, and the mincho face is whichever of "MS PMincho", "Hiragino Mincho Pro", "IPAexMincho", or "Noto Serif CJK JP" is found.
> If these fonts are not on your computer, install them, or if there is another font you would like to use instead, add it to the source code.
> You can check the list of available fonts with the following command.
> ```sh
> typst fonts
> ```
## Rationale for the Font Choice
1. Installed by default as far as possible
2. Easy to install if not present
3. Proportional fonts are preferable
4. The appearance should look similar on every OS
|
https://github.com/majst01/thesis | https://raw.githubusercontent.com/majst01/thesis/main/thesis.typ | typst | = Title of the Thesis
#set page(margin: (left: 3cm, right: 2cm))
#set par(leading: 1.5em)
#set text(font: "Times New Roman")
#outline()
== Introduction
+ Numbered list entry
+ Second item
- bullet list entry
#figure(caption: [Some description to the image], image("images/glacier.jpg"))
== Content
Lorem ipsum dolor sit amet, consetetur sadipscing elitr, sed diam nonumy eirmod tempor invidunt ut labore et dolore magna aliquyam erat, sed diam voluptua. At vero eos et accusam et justo duo dolores et ea rebum. Stet clita kasd gubergren, no sea takimata sanctus est Lorem ipsum dolor sit amet. Lorem ipsum dolor sit amet, consetetur sadipscing elitr, sed diam nonumy eirmod tempor invidunt ut labore et dolore magna aliquyam erat, sed diam voluptua. At vero eos et accusam et justo duo dolores et ea rebum. Stet clita kasd gubergren, no sea takimata sanctus est Lorem ipsum dolor sit amet. As mentioned in @aksin
#pagebreak()
Lorem ipsum dolor sit amet, consetetur sadipscing elitr, sed diam nonumy eirmod tempor invidunt ut labore et dolore magna aliquyam erat, sed diam voluptua. At vero eos et accusam et justo duo dolores et ea rebum. Stet clita kasd gubergren, no sea takimata sanctus est Lorem ipsum dolor sit amet. Lorem ipsum dolor sit amet, consetetur sadipscing elitr, sed diam nonumy eirmod tempor invidunt ut labore et dolore magna aliquyam erat, sed diam voluptua. At vero eos et accusam et justo duo dolores et ea rebum. Stet clita kasd gubergren, no sea takimata sanctus est Lorem ipsum dolor sit amet. As mentioned in @Hajshahvaladi2022-ma
== Final Thoughts
This was awesome.
#bibliography(
style: "american-psychological-association",
"biblatex-examples.bib"
) |
|
https://github.com/0x1B05/nju_os | https://raw.githubusercontent.com/0x1B05/nju_os/main/book_notes/content/04_persistence_02_filesystem.typ | typst | #import "../template.typ": *
#pagebreak()
= File System
== Files and Directory
== File System Implementation
This chapter is an introduction to *vsfs* (the *Very Simple File System*).
#tip("Tip")[
The file system is pure software.
]
=== The Way To Think
- *data structure*
- *access methods*
- how does it map the syscalls(like `open`, `read`..) onto its stuctures
#tip("Tip")[
Mental models are what you are really trying to develop when learning about systems.
- For file systems, your mental model should eventually include answers to questions like:
- what on-disk structures store the file system’s data and metadata?
- What happens when a process opens a file?
- Which on-disk structures are accessed during a read or write?
]
=== Overall Organization
- First, *divide the disk into blocks* (a commonly-used size is 4 KB)
What we need to store in these blocks to build a file system?
- *user data* stored in *data region*
- fs has to track info about each file => *metadata* stored in *inode table*
- fs has to track whether inodes or data blocks are free or allocated => *allocation structure*(here are *data bitmap* and *inode bitmap*)
- *superblock*(contains info about the fs, like a magic number, the number of data blocks and inods, where the inode table begins, and so forth)
#example("Example")[
#image("./images/vsfs-example.png")
- Assume a really small disk with just 64 blocks
- 56 blocks for the *data region*
- 5 blocks for the *inode table*
  - 256 bytes per inode => a block can hold 16 inodes (our file system contains 80 total inodes, which is the max number of files we can have in the fs)
- 1 block for *data bitmap* and 1 block for *inode bitmap*
#tip("Tip")[
A little bit overkilled, but just for simplicity.
]
- 1 block for *superblock*
]
=== Inode(Index Node)
Each *inode* is implicitly referred to by *a number* (called the i-number). Given an i-number, you should *directly be able to calculate where on the disk* the corresponding inode is located.
#image("./images/vsfs-inode-table.png")
*The sector address sector of the inode block* can be calculated as follows:
```
blk = (inumber * sizeof(inode_t)) / blockSize;
sector = ((blk * blockSize) + inodeStartAddr) / sectorSize;
```
#example("Example")[
Read inode number 32:
```
blockSize = 4 KB
sectorSize = 512 Byte
inodeStartAddr = 12 KB
blk = 32*sizeof(inode)/blockSize = 32*256/(4*1024) = 2
sector = ((blk * blockSize) + inodeStartAddr) / sectorSize = ((2*4*1024)+12*1024)/512 = 40
```
]
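The same arithmetic can be checked with a short script. The constants below (4 KB blocks, 256-byte inodes, inode table starting at 12 KB, 512-byte sectors) are the layout of the vsfs example, and `inode_sector` is just an illustrative helper, not part of any real file system API.

```python
INODE_SIZE = 256         # bytes per inode
BLOCK_SIZE = 4 * 1024    # bytes per block
SECTOR_SIZE = 512        # bytes per disk sector
INODE_START = 12 * 1024  # byte offset where the inode table begins

def inode_sector(inumber):
    # which block of the inode table holds this inode
    blk = (inumber * INODE_SIZE) // BLOCK_SIZE
    # byte offset of that block, translated into a sector address
    return (blk * BLOCK_SIZE + INODE_START) // SECTOR_SIZE

print(inode_sector(32))  # 40, matching the worked example
```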
Inside each inode is virtually all of the info you need about a file(referred to as *metadata*):
- its type (e.g., regular file, directory, etc.)
- its size
- the number of blocks allocated to it
- protection information
- some time info
- ...
#image("./images/Simplified-Ext2-Inode.png")
- How inode refers to where data blocks are?
- One simple approach would be to have one or more *direct pointers* (disk addresses) inside the inode; each pointer refers to one disk block that belongs to the file.
==== The Multi-Level Index
To support bigger files, file system designers have had to introduce different structures within inodes. One common idea is to have a special pointer known as an *indirect pointer*.
#definition("Definition")[
*indirect pointer* points to _a block that contains more pointers_, each of which point to user data
]
#example("Example")[
An inode may have *some fixed number of direct pointers* (e.g., 12), and *a single indirect pointer*.
Assuming 4-KB blocks and 4-byte disk addresses, that adds another 1024 pointers; the file can grow to be (12 + 1024) \* 4K or 4144KB.
- *double indirect pointer*
- A double indirect block thus adds the possibility to grow files with an additional 1024 · 1024 or 1-million 4KB blocks, in other words supporting files that are over 4GB in size.((12 + 1024 + 1024^2 ) × 4 KB)
- *triple indirect pointer*
...
]
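A few lines of arithmetic confirm the size limits quoted above, assuming 4 KB blocks, 4-byte block pointers, and 12 direct pointers in the inode (all taken from the example):

```python
BLOCK = 4 * 1024
PTRS_PER_BLOCK = BLOCK // 4   # 1024 pointers fit in one indirect block
DIRECT = 12

def max_blocks(levels):
    # levels of indirection: 1 = single indirect, 2 = double, 3 = triple
    return DIRECT + sum(PTRS_PER_BLOCK ** k for k in range(1, levels + 1))

print(max_blocks(1) * BLOCK // 1024)        # 4144 KB with a single indirect pointer
print(max_blocks(2) * BLOCK // (1024**3))   # 4 (GB, rounded down) with a double indirect pointer
```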
- Why use an imbalanced tree like this? Why not a different approach?
- Many researchers have studied file systems and how they are used, and virtually every time they find certain “truths” that hold across the decades.
#three-line-table[
| truth | explanation |
| - | - |
| Most files are small| ˜2K is the most common size|
|Average file size is growing| Almost 200K is the average|
|Most bytes are stored in large files|A few big files use most of space|
|File systems contains lots of files|Almost 100K on average|
|File systems are roughly half full|Even as disks grow, file systems remain ˜50% full|
|Directories are typically small|Many have few entries; most have 20 or fewer|
]
#tip("Tip")[
A different approach is to use *extents* instead of pointers.
#definition("Definition")[
*An extent* is simply a disk pointer plus a length (in blocks); all one needs is *a pointer and a length* to specify the on-disk location of a file.
]
Just *a single extent is limiting*, as one may *have trouble finding a contiguous chunk of on-disk free space when allocating a file*.
Thus, extent-based file systems often allow for more than one extent, thus giving more freedom to the file system during file allocation.
- *Pointer-based approaches* are the most *flexible* but use a large amount of metadata per file
- Extent-based approaches are less flexible but more *compact*
]
=== Directory
A directory basically just contains *a list of (entry name, inode num- ber) pairs*.
#example("Example")[
Assume a directory dir (inode number 5) has three files in it (`foo`, `bar`, and `foobar_is_a_pretty_longname`).
#tablem[
|inum | reclen | strlen | name|
| -| - | - | - |
|5| 12| 2| .|
|2 |12| 3 |..|
|12 |12| 4 |foo|
|13 |12| 4 |bar|
|24 |36| 28| foobar_is_a_pretty_longname|
]
each entry has
- an inode number
- record length (the total bytes for the name plus any left over space)
- string length (the actual length of the name)
- the name of the entry.
]
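The on-disk encoding of such entries might be sketched as below. The 8-byte header (4-byte inum, 2-byte reclen, 2-byte strlen) and the 4-byte name alignment are assumptions chosen so that the record lengths come out as in the table; real file systems each pick their own layout.

```python
import struct

def pack_entry(inum, name):
    strlen = len(name) + 1              # count the NUL terminator
    padded = (strlen + 3) // 4 * 4      # keep entries 4-byte aligned
    reclen = 8 + padded                 # 8-byte header + padded name bytes
    header = struct.pack("<IHH", inum, reclen, strlen)
    return header + name.encode() + b"\x00" * (padded - len(name))

# reproduces the (inum, reclen, strlen, name) rows of the table above
for inum, name in [(5, "."), (2, ".."), (12, "foo"),
                   (13, "bar"), (24, "foobar_is_a_pretty_longname")]:
    rec = pack_entry(inum, name)
    print(inum, len(rec), len(name) + 1, name)
```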
|
|
https://github.com/7sDream/fonts-and-layout-zhCN | https://raw.githubusercontent.com/7sDream/fonts-and-layout-zhCN/master/chapters/04-opentype/opentype.typ | typst | Other | #import "/template/template.typ": web-page-template
#import "/template/heading.typ": chapter
#import "/template/components.typ": note
#import "/template/lang.typ": tibetan, khmer
#import "/lib/glossary.typ": tr
#show: web-page-template
#chapter[
// How OpenType Works
OpenType的工作原理
]
// They say that if you enjoy sausages, you shouldn't look too carefully into how they are made, and the same is true of OpenType fonts.
俗话说,如果你喜欢吃香肠,那就别关心它是怎么制作的。对于OpenType字体来说也一样。
// In this chapter we're going to look in some depth about the OpenType font format: how it actually converts the outlines, metrics and advanced typographic features of a font into a file that a computer can interpret. In an ideal world, this would be information that programmers of layout systems and font handling libraries would need, but implementation details that font designers could safely ignore.
在本章中,我们会深入OpenType字体的内部格式,了解它是如何将字体的#tr[outline]信息、#tr[metrics]信息和其他各种#tr[typography]相关的特性封装成一个计算机可读的文件的。在理想的世界中,只有开发电子排版系统或字体加载库的程序员们需要了解这方面的信息,字体设计师完全不需要懂这些。
// But we are not in an ideal world, and as we will see when we start discussing the metrics tables, the implementation details matter for font designers too - different operating systems, browsers and applications will potentially interpret the information contained within a font file in different ways leading to different layout.
但我们这个世界并不理想,所以你经常能听到字体设计师也在讨论#tr[metrics]之类的实现层面的细节。不同的操作系统、浏览器、应用程序会用不同的方式来理解和使用字体文件中的信息,也会产生不同的显示效果。
// So put on your overalls, grab your bucket, and let's take a look inside the font sausage factory.
现在穿上工作服,拿上装备,我们就要亲自深入香肠工厂好好看看了。
|
https://github.com/gongke6642/tuling | https://raw.githubusercontent.com/gongke6642/tuling/main/布局/layout/layout.typ | typst | #image("屏幕截图 2024-04-16 155513.png")
#image("屏幕截图 2024-04-16 155621.png") |
|
https://github.com/swaits/typst-collection | https://raw.githubusercontent.com/swaits/typst-collection/main/pro-letter/0.1.0/README.md | markdown | MIT License | # Business Letter Typst Template
This Typst template lets you create professional business letters effortlessly.
## Features
- **Flexible Styling**: Adjust fonts, sizes, and emphasis according to your preferences.
- **Customizable Sender and Recipient Details**: Easily configure names,
addresses, and contact information.
- **Date and Subject Line**: Clearly define the date and subject of your letter.
- **Attachment Listings**: Describe any attachments that accompany your letter.
- **Notary Section**: Include an optional notary acknowledgment page.
## Usage
To use this template, import it and configure the parameters as shown:
```typst
#import "@preview/pro-letter:0.1.0": pro-letter
#show: pro-letter.with(
sender: (
name: "<NAME>",
street: "123 Blueberry Lane",
city: "Wonderland",
state: "NA",
zip: "56789",
phone: "+1-555-987-6543",
email: "<EMAIL>",
),
recipient: (
company: "Fantasy Finance Faucets",
attention: "Treasury Team",
street: "456 Dreamscape Ave",
city: "Fabletown",
state: "IM",
zip: "12345",
),
date: "January 15, 2025",
subject: "Account Closure Request",
signer: "<NAME>",
attachments: "Fae Council Closure Order.",
)
I am writing to formally request the closure of the enchanted vault at Fantasy
Finance Faucets held in my name, <NAME>.
Attached is the official Fae Council Closure Order for your verification and
records.
The account is identified by the vault number: *12345FAE*.
As the rightful owner, I _authorize the closure of the aforementioned vault_ and
_request that all enchanted funds be redirected to the Fae Council Reserve_.
Please find the necessary details for the transfer enclosed.
Thank you for your prompt attention to this magical matter.
```
## Parameters
### Address Information
Both `sender` and `recipient` parameters accept the following optional fields in
the form of a dictionary. Include only the fields necessary for your letter:
- **`name`**: Full name of the person.
- **`company`**: Company name.
- **`attention`**: Department or individual to address within the company.
- **`street`**: Street address.
- **`city`**: City.
- **`state`**: State or region.
- **`zip`**: ZIP or postal code.
- **`phone`**: Phone number.
- **`email`**: Email address.
### Letter Details
- **`date`**: The date of the letter. Optional; defaults to none.
- **`subject`**: The subject line of the letter. Optional; defaults to none.
- **`salutation`**: The greeting in the letter. Optional; defaults to "To whom
it may concern,".
- **`closing`**: The closing line of the letter. Required; defaults to "Sincerely,".
- **`signer`**: The name of the person signing the letter. Required.
### Additional Features
- **`attachments`**: Description of any attachments accompanying the letter.
Optional; defaults to none.
- **`notary-page`**: Boolean flag to include a notary public acknowledgment
page. Defaults to false.
### Text and Style Settings
- **`font`**: The typeface to use for the letter. Defaults to "Linux Libertine".
- **`size`**: Font size. Defaults to 11pt.
- **`weight`**: Font weight. Defaults to "light".
- **`strong-delta`**: Additional weight for bold text. Defaults to 300.
- **`lang`**: Language for the document. Defaults to "en".
### Page Settings
- **`paper`**: Paper size for the document. Defaults to "us-letter".
- **`margin`**: Margin size around the edges of the page. Defaults to "1in".
## License
This work is licensed under the MIT License.
|
https://github.com/Mc-Zen/tidy | https://raw.githubusercontent.com/Mc-Zen/tidy/main/examples/funny-math/funny-math.typ | typst | MIT License | #import "funny-math-complex.typ": *
/// This function computes the sine $sin(x)$ of $x$.
///
/// See also @@funny-sqrt() for computing square roots.
///
/// - phi (float): Angle for the sine function.
/// -> float
#let funny-sin(phi) = { calc.sqrt(phi) }
/// This function computes the square root $sqrt(x)$ of its argument.
///
///
/// === Example
/// #example(`funny-math.funny-sqrt(12)`)
///
///
/// - x (float, int): Argument to take the square root of. For $x=0$, the result is $0$:
/// #example(`funny-math.funny-sqrt(0)`)
/// -> float
#let funny-sqrt(x) = { calc.sqrt(x) }
|
https://github.com/typst/packages | https://raw.githubusercontent.com/typst/packages/main/packages/preview/yagenda/0.1.0/agenda.typ | typst | Apache License 2.0 | #let light-gray = luma(100)
#let ultralight-gray = luma(240)
#let mainfont = "Source Sans Pro"
// #let mainfont = "Linux Biolinum Keyboard"
#let entries(..args) = {
grid(
stroke: (bottom: (paint: ultralight-gray, dash: "dotted")),
inset: (left: 0pt, rest: 4pt),
..args.pos().map(((label, value)) => [#smallcaps(lower(label)) #value]),
..args.named()
)
}
#let meeting-header(name: none, date: none, time: none, location: none, invited: none) = grid(
columns:(auto, 1fr, auto),
[#text(size:14pt, weight: "bold")[Agenda: #name]
#entries(
(text(fill: light-gray)[Invited:], invited),
)],[],entries(
(text(fill: light-gray)[Date:], date),
(text(fill: light-gray)[Time:], time),
(text(fill: light-gray)[Location:], location),
))
#let meeting-topic(topic, time: none, purpose: none, lead: none) = entries(
(none, strong(topic)),
(text(fill: light-gray)[Time:], time),
(text(fill: light-gray)[Lead:], lead),
(text(fill: light-gray)[Purpose:], purpose),
stroke: (_, y) => if y == 0 { (bottom: (paint: ultralight-gray, dash: "dotted")) } else { (bottom: (paint: ultralight-gray, dash: "dotted")) }
)
#let agenda-row(index, topic: none, time: none, purpose: none, lead: none, preparation: none, process: none) = {
(
text(fill: light-gray)[*#index* |],
[
#pad(left: -10pt, top: -0.4em, meeting-topic(topic, time: time, purpose: purpose, lead: lead))
],
preparation,
process
)
}
#let agenda-table(data) = {
let data-rows = data.pairs().map((item) => {
let contents = item.at(1)
(
topic: [#eval(contents.Topic, mode: "markup")],
time: [#eval(contents.Time, mode: "markup")],
purpose: [#eval(contents.Purpose, mode: "markup")],
lead: [#eval(contents.Lead, mode: "markup")],
preparation: [
#eval(contents.Preparation, mode: "markup")
],
process: [
#eval(contents.Process, mode: "markup")
]
)
})
table(
columns: (auto,auto,auto, 1fr) ,
align: (right,left, left, left),
stroke: (top: none, bottom: black + 0.6pt, right: (paint: gray, dash: "dotted", thickness: 0.4pt)),
inset: (x, _) => if x == 0 { (left: -25pt, rest: 10pt) } else { 10pt },
table.vline(stroke: none),
table.header(
[],
pad(left:-10pt)[*Topic*],
[*Preparation*],
[*Proposed process*]
),
table.vline(stroke: none),
..data-rows.enumerate().map(((index, row)) => agenda-row(index + 1, ..row)).join(),
table.hline(stroke: none),
)
}
#let agenda(name: none,
date: none,
time: none,
location: none,
invited: none,
doc,
) = {
set text(font: mainfont, size: 10pt, weight: "regular")
set page("a4",
margin:(top: 2cm,
bottom: 2cm,
left: 2.5cm, right: 2cm), flipped: true)
set page(footer: context [
#set text(size: 8pt)
#line(length: 100%,stroke: 0.2pt + gray)
#name meeting — #date
#h(1fr)
#counter(page).display(
"1/1",
both: true,
)
])
meeting-header(name: name, date: date, time: time, location: location, invited: invited)
set text(font: mainfont, size: 10pt, weight: "regular")
doc
}
|
https://github.com/jgm/typst-hs | https://raw.githubusercontent.com/jgm/typst-hs/main/test/typ/compiler/string-02.typ | typst | Other | // Error: 2-12 string is empty
#"".first()
|
https://github.com/0xPARC/0xparc-intro-book | https://raw.githubusercontent.com/0xPARC/0xparc-intro-book/main/src/groth16.typ | typst | #import "preamble.typ":*
= Groth16, another zkSNARK protocol <groth16>
Like PLONK, Groth16 is a protocol for quadratic equations,
as we described in @arith-intro.
For Groth16, the format of the equations is in so-called _R1CS format_.
== Input format
To describe the technical specification of the input format,
we call the variables $a_0 = 1$, $a_1$, ..., $a_n$.
The $q$'th equation takes the form
$ (sum_(i=0)^n u_(i,q) a_i) (sum_(i=0)^n v_(i,q) a_i)
= (sum_(i=0)^n a_i w_(i,q)) $
for $1 <= q <= m$, where $m$ is the number of equations.
(In other words, we require the quadratic part of each equation to factor.
The $a_0 = 1$ term is a dummy variable that simplifies notation
so that we don't need to write the constant terms separately.)
The inputs are divided into two categories:
- ($a_0$), $a_1$, ..., $a_ell$ are _public inputs_; and
- $a_(ell+1)$, $a_1$, ..., $a_n$ are _private inputs_.
== Interpolation
The basic idea of Groth16 is to interpolate polynomials
through the coefficients of the $m$ equations,
then work with KZG commitments to these polynomials.
(This is sort of the opposite of the interpolation in PLONK (@plonk),
where we interpolate
a polynomial through Peggy's solution $(a_0, ..., a_n)$.
Philosophically, you might also think of this as verifying a
random linear combination of the $m$ equations --
where the coefficients of the random linear combination
are determined by the unknown secret $s$ from the KZG protocol.)
Interpolate polynomials $U_i, V_i, W_i$ such that
$
U_i (q) & = u_(i, q) \
V_i (q) & = v_(i, q) \
W_i (q) & = w_(i, q)
$
for $1 lt.eq q lt.eq m$.
In this notation, we want to show that
$ (sum_(i=0)^n a_i U_i (q)) (sum_(i=0)^n a_i V_i(q))
= (sum_(i=0)^n a_i W_i(q)), $
for $q = 1, dots, m$.
This is the same as showing that there exists some polynomial $H$
such that
$ (sum_(i=0)^n a_i U_i (X)) (sum_(i=0)^n a_i V_i (X))
= (sum_(i=0)^n a_i W_i (X)) + H(X)T(X), $
where
$ T(X) = (X-1)(X-2) dots (X-m). $
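Before moving on, it may help to see this identity concretely. The sketch below builds a toy R1CS over a small prime field (the field $p = 97$, the system, and the witness are arbitrary choices for illustration), interpolates $U_i, V_i, W_i$, and checks that $A(X)B(X) - C(X)$ is divisible by $T(X)$ -- no elliptic curves or commitments involved.

```python
# Toy check of (sum a_i U_i)(sum a_i V_i) = (sum a_i W_i) + H*T over F_p,
# for the R1CS with m = 2 equations: a_1*a_1 = a_2 and a_2*a_2 = a_3.
p = 97

def padd(f, g):
    n = max(len(f), len(g))
    f, g = f + [0] * (n - len(f)), g + [0] * (n - len(g))
    return [(x + y) % p for x, y in zip(f, g)]

def pmul(f, g):
    out = [0] * (len(f) + len(g) - 1)
    for i, x in enumerate(f):
        for j, y in enumerate(g):
            out[i + j] = (out[i + j] + x * y) % p
    return out

def pdivmod(f, g):
    # polynomial long division over F_p
    f = f[:]
    q = [0] * max(1, len(f) - len(g) + 1)
    inv = pow(g[-1], p - 2, p)
    for k in range(len(f) - len(g), -1, -1):
        c = f[k + len(g) - 1] * inv % p
        q[k] = c
        for j in range(len(g)):
            f[k + j] = (f[k + j] - c * g[j]) % p
    while len(f) > 1 and f[-1] == 0:
        f.pop()
    return q, f

def interp(vals):
    # Lagrange interpolation through the points (q, vals[q-1]), q = 1..m
    acc, m = [0], len(vals)
    for q in range(1, m + 1):
        num, den = [1], 1
        for s in range(1, m + 1):
            if s != q:
                num = pmul(num, [(-s) % p, 1])
                den = den * (q - s) % p
        coef = vals[q - 1] * pow(den, p - 2, p) % p
        acc = padd(acc, [c * coef % p for c in num])
    return acc

# rows: i = 0..3; columns: u_{i,q} (resp. v, w) for q = 1, 2
U = [[0, 0], [1, 0], [0, 1], [0, 0]]
V = [[0, 0], [1, 0], [0, 1], [0, 0]]
W = [[0, 0], [0, 0], [1, 0], [0, 1]]
a = [1, 3, 9, 81]  # a valid witness: 3*3 = 9, 9*9 = 81

def combine(M):
    acc = [0]
    for ai, row in zip(a, M):
        acc = padd(acc, [c * ai % p for c in interp(row)])
    return acc

A, B, C = combine(U), combine(V), combine(W)
T = pmul([(-1) % p, 1], [(-2) % p, 1])        # T(X) = (X-1)(X-2)
P = padd(pmul(A, B), [(-c) % p for c in C])   # A*B - C
H, R = pdivmod(P, T)
print(H, R)  # remainder [0] means a valid H exists
```

Here the zero remainder is exactly the statement that a polynomial $H$ as above exists for this witness.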
The proof that Peggy sends to Victor will take the form of
a handful of KZG commitments.
As a first idea (we'll have to build on this afterwards),
let's have Peggy send KZG commitments
$ Com( sum_(i=0)^n a_i U_i ), #h(1em) Com( sum_(i=0)^n a_i V_i ), #h(1em) Com( sum_(i=0)^n a_i W_i ), #h(1em) Com( H T ). $
Recall from @kzg that the Kate commitment $Com( F )$ to a polynomial
$F$ is just the elliptic curve point $[F(s)]$.
Here $s$ is some field element whose value nobody knows,
but a handful of small powers $[1], [s], [s^2], dots,$
are known from the trusted setup.
The problem here is for Peggy to convince Victor that these four group elements,
supposedly $ Com( sum_(i=0)^n a_i U_i ) $ and so forth, are well-formed.
For example, Peggy needs to show that $ Com( sum_(i=0)^n a_i U_i ) $
is a linear combination of the KZG commitments $Com(U_i)$,
that $ Com( sum_(i=0)^n a_i V_i ) $
is a linear combination of the KZG commitments $Com(V_i)$,
and that the two linear combinations use the same coefficients $a_i$.
How can Peggy prove this sort of claim?
== Proving claims about linear combinations
We've already come across this sort of challenge
in the setting of IPA (@ipa),
but Groth16 uses a different approach,
so let's get back to a simple toy example.
#example[
Suppose there are publicly known group elements
$ Com(U_1), Com(U_2), dots, Com(U_n). $
Suppose Peggy has another group element $g$,
and she wants to show that $g$ has the form
$ g = a_0 + sum_(i=1)^n a_i Com(U_i), $
where the $a_i$'s are constants Peggy knows.
Groth's solution to this problem
uses a _trusted setup_ phase, as follows.
Before the protocol runs, Trent (our trusted setup agent)
chooses a nonzero field element $delta$ at random
and publishes:
$ [delta], Com(U_1), Com(delta U_1), Com(U_2), Com(delta U_2),
dots, Com(U_n), Com(delta U_n). $
Trent then throws away $delta$.
Peggy now sends to Victor
$ g = [a_0] + sum_(i=1)^n a_i Com(U_i) $
and
$ h = delta g = a_0 [delta] + sum_(i=1)^n a_i Com(delta U_i). $
Victor can verify that
$ pair(g, [delta]) = pair( h, [1] ), $
which shows that the element Peggy said was $delta g$
is in fact $delta$ times the element $g$.
But Peggy does not know $delta$!
So (assuming, as usual, that the discrete logarithm problem is hard),
the only way Peggy can find elements
$g$ and $h$ such that $h = delta g$
is to use the commitments Trent released in trusted setup.
In other words,
$g$ must be a linear combination of
the elements $[1], Com(U_1), dots, Com(U_n),$
which Peggy knows how to multiply by $delta$,
and $h$ must be the same linear combination
of $[delta], Com(delta U_1), dots, Com(delta U_n)$.
] <groth-motiv-1>
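To see the algebra of this check without any real cryptography, one can model the group additively as integers mod a prime and the pairing as multiplication of discrete logs. This toy model leaks every secret, so it has no security whatsoever -- it only demonstrates why Victor's pairing equation holds; all the numbers below are arbitrary.

```python
r = 10007                      # toy "group order"
delta = 4242                   # Trent's secret scalar
logs_U = [123, 456, 789]       # stand-ins for the discrete logs of Com(U_i)

# Trent publishes [delta] and the pairs (Com(U_i), Com(delta U_i))
setup = [(u % r, u * delta % r) for u in logs_U]

# Peggy's claimed linear combination
a0, coeffs = 7, [3, 1, 4]
g = (a0 + sum(c * cu for c, (cu, _) in zip(coeffs, setup))) % r
h = (a0 * delta + sum(c * cdu for c, (_, cdu) in zip(coeffs, setup))) % r

def pair(x, y):
    # in the toy model, pair([x], [y]) multiplies the discrete logs
    return x * y % r

# Victor's check: pair(g, [delta]) == pair(h, [1])
print(pair(g, delta % r) == pair(h, 1))  # True
```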
#example[
Here's a more complicated challenge,
on the way to building up the Groth16 protocol.
Suppose there are publicly known group elements
$ Com(U_1), Com(U_2), dots, Com(U_n) $
and
$ Com(V_1), Com(V_2), dots, Com(V_n). $
Peggy wants to publish
$ g_1 = sum_(i=1)^n a_i Com(U_i) $
and
$ g_2 = sum_(i=1)^n a_i Com(V_i), $
and prove to Victor that these two group elements
have the desired form
(in particular, with the same coefficients $a_i$
used for both).
To do this, Trent does the same trusted setup thing.
Trent chooses two constants $alpha$ and $beta$
and publishes $[alpha]$, $[beta]$, and
$ alpha Com(U_i) + beta Com(V_i), $
for $1 lt.eq i lt.eq n$.
In addition to $g_1$ and $g_2$,
Peggy now also publishes
$ h = sum_(i=1)^n a_i (alpha Com(U_i) + beta Com(V_i) ). $
Victor needs to verify that
$ h = alpha g_1 + beta g_2; $
if this equality holds, then $g_1$ and $g_2$ must
have the correct form,
just like in @groth-motiv-1.
So Victor checks the equality of pairings
$ pair(h, [1]) = pair(g_1, [alpha]) + pair(g_2, [beta]), $
and the proof is complete.
] <groth-motiv-2>
== The protocol
Armed with @groth-motiv-1 and @groth-motiv-2,
it's not hard to turn our vague idea from earlier
into a full protocol.
This protocol won't be zero-knowledge --
to make it zero-knowledge, we would have to throw in
an extra "blinding" term,
which just adds an additional layer of complication
on top of the whole thing.
If you want to see the full ZK version,
check out #link("https://eprint.iacr.org/2016/260.pdf")[Groth's original paper].
=== Trusted setup
We start with the same secret setup as the KZG commitment scheme.
That is, we have a fixed pairing $"pair" : E times E -> ZZ slash N ZZ$
and a secret scalar $s in FF_p$.
The trusted setup is as before: $s$ is kept secret from everyone,
but the group elements $[1]$, $[s]$, ..., up to $[s^(2m)]$ are published.
However, this protocol requires additional setup
that actually depends on the system of equations (unlike in the KZG situation in
@kzg, in which trusted setup is done for the curve $E$ itself
and afterwards can be freely reused for any situation needing a KZG commitment.)
Specifically, let's interpolate polynomials $U$, $V$, $W$
through the coefficients of our R1CS system;
that is we have $U_i (X), V_i (X), W_i (X) in FF_p [X]$ such that
$ U_i (q) = u_(i,q), #h(1em) V_i (q) = v_(i,q), #h(1em) W_i (q) = w_(i,q). $
So far we have ignored the issue of public inputs.
The values $a_0, a_1, dots, a_ell$ will be public inputs to the circuit,
so both Peggy and Victor know their values,
and Victor has to be able to verify that they were assigned correctly.
The remaining values $a_(ell+1), dots, a_n$
will be private.
Trent (who is doing the trusted setup) then selects secrets
$alpha, beta, delta, epsilon in FF_p$
and publishes all of the following points on $E$:
$ [alpha], [beta], [delta], [epsilon], \
  #h(1em) [beta U_i (s) + alpha V_i (s) + W_i (s)] text("for") 0 lt.eq i lt.eq ell, \
  #h(1em) [(beta U_i (s) + alpha V_i (s) + W_i (s)) / delta] text("for") ell lt i lt.eq n,
  #h(1em) [(s^i T(s)) / epsilon] text("for") 0 lt.eq i lt.eq n-2.
$
Note that this means this setup needs to be done _for each system of equations_.
That is, if you are running Groth16 and you change the system,
the trusted setup with $delta$ needs to be redone.
This might make the protocol seem limited.
On the other hand, for practical purposes,
one can imagine that Peggy has
a really general system of equations
that she wants to prove many solutions for.
In this case, Trent can run the trusted setup just once,
and once the setup is done there is no additional cost.
#example[
In practice, one often wants to prove a computation of a hash function:
$ op("sha")(M) = H. $
When you convert this into a system of quadratic equations
for PLONK or Groth16,
both $M$ and $H$ will be public inputs to the system.
The equations themselves will depend only on the
details of the hash function $op("sha")$.
In this case, a single trusted setup can be used
to prove the hash of any message.
]
=== The protocol (not optimized)
+ Peggy now sends to Victor:
$ A = [sum_(i=0)^n a_i U_i (s)], #h(1em) B = [sum_(i=0)^n a_i V_i (s)], #h(1em) C = [sum_(i=0)^n a_i W_i (s)], \
D = [sum_(i=ell+1)^n a_i (beta U_i (s) + alpha V_i (s) + W_i (s)) / delta], \
E = [H(s)], #h(1em) F = [H(s) T(s) / epsilon]. $
+ Victor additionally computes
  $ D_0 = [sum_(i=0)^ell a_i (beta U_i (s) + alpha V_i (s) + W_i (s))] $
and
$ G = [T(s)] $
based on publicly known information.
+ Victor verifies the pairings
$ pair( [delta], D ) + pair( [1], D_0 ) = pair( [beta], A ) + pair( [alpha], B ) + pair( [1], C ). $
This pairing shows that $ delta D + D_0 = beta A + alpha B + C. $
Now just like in @groth-motiv-1,
the only way that Peggy could possibly find two group elements $g$ and $h$
such that $delta g = h$
is if $g$ is a linear combination of terms
$[(beta U_i (s) + alpha V_i (s) + W_i (s)) / delta]$.
So we have verified that
$
D = [sum_(i=ell+1)^n a_i (beta U_i (s) + alpha V_i (s) + W_i (s)) / delta]
$
for some constants $a_i$, which implies
$
beta A + alpha B + C = [sum_(i=0)^n a_i (beta U_i (s) + alpha V_i (s) + W_i (s))].
$
And just like in @groth-motiv-2,
since $alpha$ and $beta$ are unknown,
the only way an equality like this can hold is if
$ A = [sum_(i=0)^n a_i U_i (s)], #h(1em) B = [sum_(i=0)^n a_i V_i (s)], #h(1em) C = [sum_(i=0)^n a_i W_i (s)], $
where $a_i$ is equal to the public input for $i lt.eq ell$
(because Victor computed $D_0$ himself!)
and $a_i$ is equal to some fixed unknown value for $i gt ell$.
+ Victor verifies that
$ pair( [epsilon], F ) = pair( E, G ). $
Again like in @groth-motiv-1, since $epsilon$ is unknown,
this shows that $F$ has the form
$ [(H(s) T(s)) / epsilon], $
where $H$ is a polynomial of degree at most $n-2$.
Since $G = [T(s)] $ (Victor knows this because he computed it himself),
we learn that $E = [H(s)]$ is a KZG commitment to a polynomial
whose coefficients Peggy knows.
+ Finally, Victor verifies that
$ pair(A, B) = pair( [1], C ) + pair(E, G). $
At this point, Victor already knows that $A$, $B$, $C$, $E$, $H$
have the correct form, so this last pairing check
convinces Victor of the key equality,
$ (sum_(i=0)^n a_i U_i (X)) (sum_(i=0)^n a_i V_i(X))
= (sum_(i=0)^n a_i W_i(X)) + H(X)T(X). $
The proof is complete.
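As a quick sanity check of the bookkeeping (a toy model only: group elements and pairings are replaced by arithmetic modulo a small prime, and every secret scalar is in the clear, so this is not cryptography), one can verify numerically that honestly computed values satisfy the first identity $delta D + D_0 = beta A + alpha B + C$:

```python
# Toy check of delta*D + D_0 = beta*A + alpha*B + C over Z_p.
# All trapdoor values (alpha, beta, delta) are visible here; the point is
# only the bookkeeping of the Groth16-style sums, not security.
import random

p = 2**31 - 1          # a prime playing the role of the group order (toy choice)
n, ell = 6, 2          # a_0..a_ell are public inputs, a_(ell+1)..a_n are private

rng = random.Random(0)
alpha, beta, delta = (rng.randrange(1, p) for _ in range(3))
a = [rng.randrange(p) for _ in range(n + 1)]   # witness values a_i
U = [rng.randrange(p) for _ in range(n + 1)]   # stand-ins for U_i(s)
V = [rng.randrange(p) for _ in range(n + 1)]   # stand-ins for V_i(s)
W = [rng.randrange(p) for _ in range(n + 1)]   # stand-ins for W_i(s)

A = sum(a[i] * U[i] for i in range(n + 1)) % p
B = sum(a[i] * V[i] for i in range(n + 1)) % p
C = sum(a[i] * W[i] for i in range(n + 1)) % p
inv_delta = pow(delta, -1, p)
D = sum(a[i] * (beta * U[i] + alpha * V[i] + W[i]) * inv_delta
        for i in range(ell + 1, n + 1)) % p
D0 = sum(a[i] * (beta * U[i] + alpha * V[i] + W[i])
         for i in range(ell + 1)) % p

assert (delta * D + D0) % p == (beta * A + alpha * B + C) % p
```

The same style of check works for the other two verification equations.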
=== Optimizing the protocol
The protocol above can be optimized further.
We didn't optimize it because we wanted it to be easier to understand.
In our protocol, the proof length is 6 group elements
(Peggy sends Victor $A, B, C, D, E, F$),
and Victor has to compute 8 elliptic curve pairings to verify the proof.
Additionally, Victor has to do $O(ell)$ group operations
to compute $D_0$ depending on the public input.
It turns out that, by cleverly combining multiple verifications into one,
you can get away with a proof length of just 3 group elements,
and verifier work of just 3 elliptic curve pairings
(plus the same $O(ell)$ group operations).
Additionally, we didn't make the protocol zero-knowledge.
This requires the addition of a blinding factor.
Incredibly, Groth manages to take care of the blinding factor
in the 3-element proof as well.
The fully optimized protocol
is in #link("https://eprint.iacr.org/2016/260.pdf")[Groth's paper].
|
|
https://github.com/DanielOaks/tme | https://raw.githubusercontent.com/DanielOaks/tme/main/elements.typ | typst | #let sheet(
title: "",
paper: "a4",
content
) = {
set text(
font: (
"Helvetica",
"Arial",
"sans-serif",
),
size: 13pt,
)
set document(
title: title,
)
set page(
paper: paper,
margin: (
top: 1.5cm,
bottom: 1.5cm,
x: 1.5cm,
),
footer: [
#align(center,
text(
size: 8pt,
title
)
)
]
)
content
}
#let titleEntry(content) = {
v(2pt)
text(
size: 11pt,
content + ":"
)
v(-7pt)
line(
length: 100%,
stroke: luma(20%),
)
}
#let smallTitleEntry(content) = {
v(-1pt)
text(
size: 10pt,
content + ":"
)
v(-7pt)
line(
length: 100%,
stroke: luma(20%),
)
}
#let attribute(content, die: "") = {
grid(
columns: 2,
gutter: .33cm,
box(
height: 1.1cm,
width: 1.55cm,
stroke: luma(60%),
radius: .43cm,
pad(
x: 7pt,
y: 5pt,
stack(
dir: ltr,
align(left + horizon)[
#text(
luma(40%),
{
v(3pt)
"d"
}
)
],
align(left + horizon)[
#text(
18pt,
{
h(4.5pt)
die
}
)
]
)
)
),
text(
size: 14pt,
align(start + horizon, content)
)
)
}
#let emptyLines(count) = {
let n = 0
while n < count {
n += 1
v(11pt)
line(
length: 100%,
stroke: luma(60%)
)
}
}
#let avatar_box_size = (
x: 5.5cm,
y: 5cm,
)
#let characterSheetTop() = [
// header
#grid(
columns: (avatar_box_size.x, auto),
gutter: .75cm,
// avatar box
rect(
width: avatar_box_size.x,
height: avatar_box_size.y,
stroke: luma(30%),
radius: .4cm,
),
// top matter
{
align(
end,
image(
"./tme_v2_-_light_bg.svg",
width: 35%,
)
)
v(5pt)
titleEntry("Name")
grid(
columns: (1.5fr, 2fr),
gutter: .7cm,
titleEntry("Family"),
titleEntry("Magic Affinity"),
titleEntry("Vibe"),
titleEntry("Magic Focus"),
)
},
)
// main content
#grid(
columns: (4cm, 3.8cm, auto),
gutter: .75cm,
// attributes
{
attribute("Strength")
attribute("Speed")
attribute("Style")
},
{
attribute("Smarts")
attribute("Guts")
attribute("Magic", die: "6")
},
// space for notes
{
v(8pt)
emptyLines(5)
}
)
]
#let extraCharacterBox(
lines: 3,
) = {
box(
width: 100%,
stroke: luma(30%),
radius: .5cm,
{
v(3.5cm)
line(
length: 100%,
stroke: luma(30%),
)
box(
inset: .4cm,
{
v(-10pt)
emptyLines(lines)
v(3pt)
}
)
}
)
}
|
|
https://github.com/typst/packages | https://raw.githubusercontent.com/typst/packages/main/packages/preview/based/0.1.0/test/test.typ | typst | Apache License 2.0 | #import "../src/lib.typ": *
// Test cases from: https://www.rfc-editor.org/rfc/rfc4648#section-10
#{
// Test encode
assert.eq(encode64(""), "")
assert.eq(encode64("f"), "Zg==")
assert.eq(encode64("fo"), "Zm8=")
assert.eq(encode64("foo"), "Zm9v")
assert.eq(encode64("foob"), "Zm9vYg==")
assert.eq(encode64("fooba"), "Zm9vYmE=")
assert.eq(encode64("foobar"), "Zm9vYmFy")
assert.eq(encode32(""), "")
assert.eq(encode32("f"), "MY======")
assert.eq(encode32("fo"), "MZXQ====")
assert.eq(encode32("foo"), "MZXW6===")
assert.eq(encode32("foob"), "MZXW6YQ=")
assert.eq(encode32("fooba"), "MZXW6YTB")
assert.eq(encode32("foobar"), "MZXW6YTBOI======")
assert.eq(encode32(hex: true, ""), "")
assert.eq(encode32(hex: true, "f"), "CO======")
assert.eq(encode32(hex: true, "fo"), "CPNG====")
assert.eq(encode32(hex: true, "foo"), "CPNMU===")
assert.eq(encode32(hex: true, "foob"), "CPNMUOG=")
assert.eq(encode32(hex: true, "fooba"), "CPNMUOJ1")
assert.eq(encode32(hex: true, "foobar"), "CPNMUOJ1E8======")
assert.eq(encode16(""), "")
assert.eq(encode16("f"), "66")
assert.eq(encode16("fo"), "666f")
assert.eq(encode16("foo"), "666f6f")
assert.eq(encode16("foob"), "666f6f62")
assert.eq(encode16("fooba"), "666f6f6261")
assert.eq(encode16("foobar"), "666f6f626172")
// Test decode
assert.eq(str(decode64("")), "")
assert.eq(str(decode64("Zg")), "f")
assert.eq(str(decode64("Zm8=")), "fo")
assert.eq(str(decode64("Zm9v")), "foo")
assert.eq(str(decode64("Zm9vYg==")), "foob")
assert.eq(str(decode64("Zm9vYmE")), "fooba")
assert.eq(str(decode64("Zm9vYmFy")), "foobar")
assert.eq(str(decode32("")), "")
assert.eq(str(decode32("MY======")), "f")
assert.eq(str(decode32("MZXQ")), "fo")
assert.eq(str(decode32("MZXW6===")), "foo")
assert.eq(str(decode32("MZXW6YQ=")), "foob")
assert.eq(str(decode32("MZXW6YTB")), "fooba")
assert.eq(str(decode32("MZXW6YTBOI")), "foobar")
assert.eq(str(decode32(hex: true, "")), "")
assert.eq(str(decode32(hex: true, "CO======")), "f")
assert.eq(str(decode32(hex: true, "CPNG")), "fo")
assert.eq(str(decode32(hex: true, "CPNMU===")), "foo")
assert.eq(str(decode32(hex: true, "CPNMUOG")), "foob")
assert.eq(str(decode32(hex: true, "CPNMUOJ1")), "fooba")
assert.eq(str(decode32(hex: true, "CPNMUOJ1E8======")), "foobar")
assert.eq(str(decode16("")), "")
assert.eq(str(decode16("66")), "f")
assert.eq(str(decode16("666f")), "fo")
assert.eq(str(decode16("666F6F")), "foo")
assert.eq(str(decode16("666f6f62")), "foob")
assert.eq(str(decode16("666F6F6261")), "fooba")
assert.eq(str(decode16("666F6f626172")), "foobar")
}
|
https://github.com/Myriad-Dreamin/typst.ts | https://raw.githubusercontent.com/Myriad-Dreamin/typst.ts/main/fuzzers/corpora/math/matrix_09.typ | typst | Apache License 2.0 |
#import "/contrib/templates/std-tests/preset.typ": *
#show: test-page
//
// // Error: 3-37 cannot draw a vertical line after column 3 of a matrix with 3 columns
// $ mat(1, 0, 0; 0, 1, 1; augment: #3) $, |
https://github.com/mem-courses/linear-algebra | https://raw.githubusercontent.com/mem-courses/linear-algebra/main/note/7.欧氏空间.typ | typst | #import "../template.typ": *
#show: project.with(
title: "Linear Algebra #7",
authors: (
(name: "<NAME>", email: "<EMAIL>", phone: "3230104585"),
),
date: "December 17, 2023",
)
#let alpha = math.bold(math.alpha)
#let beta = math.bold(math.beta)
#let gamma = math.bold(math.gamma)
#let theta = math.bold(math.theta)
#let eta = math.bold(math.eta)
#let nu = math.bold(math.nu)
#let xi = math.bold(math.xi)
#let ds = math.dots.c
#let TT = math.upright("T")
= 欧式空间
在这章中,我们总假设 $V$ 是 $RR$ 上的一个线性空间.
== 内积
若 $phi$ 是 $V times V$ 到 $RR$ 上的一个对应规则,记 $(alpha,beta) = phi(alpha,beta) sp (alpha,beta in V)$.使得 $forall alpha,beta in V$,存在唯一确定实数 $phi(alpha,beta)$ 与之对应.同时该规则满足:
(1) 对称性:$(alpha,beta) = (beta,alpha)$.\
(2) 正定性:$(alpha,alpha)>=0$ 且 $(alpha,alpha) = 0 <=> alpha = theta$.\
(3) 双线性:$(k alpha, beta) = k (alpha, beta) = (alpha, k beta)$;$(alpha+beta, gamma) = (alpha,gamma) + (beta, gamma)$.\
称该规则为 *内积*,定义有内积的实的线性空间为 *欧氏空间*.
#warn[
注意正定性中的 $(alpha,alpha) = 0 <=> alpha = theta$ 这条,要证明既是充分条件又是必要条件.
]
#def[推论1] $(alpha,theta) = 0$.
#def[推论2] $display((sum_(i=1)^m k_i alpha_i , sum_(j=1)^n t_j beta_j) = sum_(i=1)^m sum_(j=1)^n k_i t_j (alpha_i, beta_j))$.
#def[定理]任何一个实数域上的有限维线性空间一定可以定义适当的内积成为欧氏空间.
#prof[
#def[证明]设 $dim V=n$,取 $seqn(xi,n)$ 为 $V$ 的一组基,对于 $(alpha,beta)$,设
$
alpha &= x_1 xi_1 + x_2 xi_2 + dots.c + x_n xi_n\
beta &= y_1 xi_1 + y_2 xi_2 + dots.c + y_n xi_n\
$
取 $(alpha,beta) = x_1 y_1 + x_2 y_2 + dots.c + x_n y_n$ 即可.
]
== 度量矩阵
设 $bold(A) = ((xi_i,xi_j))_(n times n) = (a_(i j))_(n times n)$ 为基 $seqn(xi,n)$ 的 *度量矩阵*.
对于任意 $alpha,beta$,设 $alpha = x_1 xi_1 + x_2 xi_2 + dots.c + x_n xi_n$,$beta = y_1 xi_1 + y_2 xi_2 + dots.c + y_n xi_n$,有:
$
(alpha,beta)
= (sum_(i=1)^n x_i xi_i, sum_(j=1)^n y_j xi_j)
= sum_(i=1)^n sum_(j=1)^n x_i y_j (xi_i, xi_j)
= sum_(i=1)^n sum_(j=1)^n a_(i j) x_i y_j
= bold(X)^TT bold(A) bold(Y)
$
在知道 $V$ 的一组基后,任意两个向量的基都可以通过度量矩阵进行计算.
== 向量的长度(范数)
令 $||alpha|| = sqrt((alpha,alpha))$,称为向量的长度或模.根据内积的正定性,$(alpha,alpha)>=0$,故这个式子总是有意义的.
#def[定理](Cauchy-schwarz 不等式)$forall alpha,beta in V$,有 $|(alpha,beta)| <= ||alpha|| dot ||beta||$;且 $|(alpha,beta)| = ||alpha|| dot ||beta|| <=> alpha,beta$ 线性相关.
#prof[
#def[证明]
1. 先证 $alpha,beta$ 线性相关 $=>$ $|(alpha,beta)| = ||alpha|| dot ||beta||$.
设 $alpha=k beta$,则 $|(alpha,beta)| = |(k beta, beta)| = |k| (beta,beta) = |k| ||beta||^2 = ||k beta|| dot ||beta|| = ||alpha|| dot ||beta||$.
2. 再证 $alpha,beta$ 线性无关 $=>$ $|(alpha,beta)| < ||alpha|| dot ||beta||$.
由于 $alpha,beta$ 线性无关,$forall k in RR$,有 $alpha+k beta!=0$.根据正定性,有
$
(alpha+k beta, alpha+k beta)
= ||alpha||^2 + 2k ||alpha|| dot ||beta|| + k^2 ||beta||
$
     Regarded as a quadratic function of $k$ whose graph never meets the horizontal axis, its discriminant must be negative:
$
     Delta < 0 => 4 ((alpha,beta))^2 - 4 ||alpha||^2 dot ||beta||^2 < 0 => |(alpha,beta)| < ||alpha|| dot ||beta||
$
  3. Finally, suppose $|(alpha,beta)| = ||alpha|| dot ||beta||$; we show $alpha,beta$ are linearly dependent.
     Assume, for contradiction, that $alpha,beta$ are linearly independent. By step 2, $|(alpha,beta)| < ||alpha|| dot ||beta||$, contradicting the assumption.
]
== Angle Between Vectors
Since $|(alpha,beta)| <= ||alpha|| dot ||beta||$, when $alpha,beta$ are nonzero we have $display(-1 <= ((alpha,beta))/(||alpha|| dot ||beta||) <= 1)$.
Define $angle.l alpha,beta angle.r = display(arccos ((alpha,beta))/(||alpha|| dot ||beta||))$ to be the *angle between the vectors*.
When $(alpha,beta) = 0$, that is, when $angle.l alpha,beta angle.r = pi/2$, the two vectors $alpha$ and $beta$ are said to be orthogonal.
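The angle formula above can be illustrated numerically (hypothetical code, with the standard dot product on $RR^n$ as the inner product):

```python
# Angle between two nonzero vectors under the standard dot product.
import math

def angle(u, v):
    """arccos of (u, v) / (||u|| * ||v||), in radians."""
    dot = sum(x * y for x, y in zip(u, v))
    norm_u = math.sqrt(sum(x * x for x in u))
    norm_v = math.sqrt(sum(x * x for x in v))
    return math.acos(dot / (norm_u * norm_v))

print(angle([1.0, 0.0], [0.0, 2.0]))  # pi/2: the vectors are orthogonal
print(angle([1.0, 0.0], [1.0, 1.0]))  # pi/4
```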
#def[Property 1] (Triangle inequality) $||alpha + beta|| <= ||alpha|| + ||beta||$.
#prof[
#def[证明]
$
||alpha + beta||^2
&= (alpha+beta,alpha+beta)
= (alpha,alpha) + 2 (alpha,beta) + (beta,beta)\
&<= ||alpha||^2 + 2 ||alpha|| dot ||beta|| + ||beta||^2
= (||alpha|| + ||beta||)^2
$
]
#def[Property 2] (Pythagorean theorem) If $alpha$ and $beta$ are orthogonal, then $||alpha + beta||^2 = ||alpha||^2 + ||beta||^2$.
== Orthogonal Systems of Vectors
A system of vectors in $V$ whose members are *nonzero* and pairwise orthogonal is called an *orthogonal system* of $V$.
#def[Property 1] An orthogonal system is necessarily linearly independent; the converse fails, however: a linearly independent system need not be orthogonal.
#prof[
  #def[Proof] Let $seqn(alpha,m)$ be an orthogonal system and consider $k_1 alpha_1+k_2 alpha_2 + dots.c + k_m alpha_m = theta quad (*)$.
  We first show that $(*)$ forces $k_1 = 0$:
$
because 0 &= (alpha_1,theta) = (alpha_1,k_1 alpha_1 + k_2 alpha_2 + dots.c k_m alpha_m)\
&= (alpha_1, k_1 alpha_1) + (alpha_1, k_2 alpha_2) + dots.c + (alpha_1, k_m alpha_m) = k_1 ||alpha_1||^2
$
  Since $||alpha_1|| != 0$, we get $k_1=0$; the same argument shows that every $k_i$ is zero. Hence $seqn(alpha,m)$ is linearly independent.
  Conversely, take $alpha_1=display(mat(1,0))^TT, alpha_2=display(mat(1,1))^TT$. Clearly $alpha_1$ and $alpha_2$ are linearly independent, yet $(alpha_1,alpha_2)=1 != 0$, so $alpha_1$ and $alpha_2$ are not orthogonal.
]
#def[Property 2] If $alpha$ is orthogonal to each of $seqn(beta,m)$, then $alpha$ is orthogonal to every vector of $L(seqn(beta,m))$.
== Orthonormal Bases
If $dim V = n$, then a system of $n$ pairwise orthogonal vectors of $V$ is called an *orthogonal basis* of $V$.
If every vector of an orthogonal basis of $V$ has length $1$, the basis is called an *orthonormal basis* of $V$.
#def[Property] The Gram matrix of an orthonormal basis is the identity matrix.
#def[Theorem] (1) The transition matrix between any two orthonormal bases of $V$ is necessarily an orthogonal matrix.
#deft[Theorem] (2) If the transition matrix from an orthonormal basis (I) of $V$ to another basis (II) is an orthogonal matrix, then (II) is also an orthonormal basis.
// #prof[12月11日习题课 25:48]
=== Uses of Orthonormal Bases
Let $xi_1,xi_2,dots,xi_n$ be an orthonormal basis of the $n$-dimensional Euclidean space $V$. For all $alpha,beta in V$, write $alpha = (xi_1,xi_2,dots,xi_n) X$ and $beta = (xi_1,xi_2,dots,xi_n) Y$, where $X = (x_1,x_2,dots,x_n)^TT$, $Y = (y_1,y_2,dots,y_n)^TT in R^n$. Then:
(1) $(alpha,beta) = (X,Y) = X^T Y = Y^T X$;\
(2) $||alpha|| = ||X|| = sqrt(x_1^2 + x_2^2 + dots.c + x_n^2)$;\
(3) $alpha = (xi_1,xi_2,dots,xi_n) X = x_1 xi_1 + x_2 xi_2 + dots.c + x_n xi_n => x_i = (alpha,xi_i), i=1 dots.c n$.
=== Orthogonal Matrices (Important)
Let $bold(U) in RR^(n times n)$. If $bold(U) bold(U)^TT= bold(U)^TT bold(U) = bold(E)$, that is, $bold(U)$ is invertible with inverse $bold(U)^TT$, then $bold(U)$ is called an orthogonal matrix of order $n$.
#def[Theorem] Let $bold(U) = (u_(i j))_(n times n) = display(mat(eta_1,eta_2, dots.c , eta_n))$. Then $bold(U)$ is an orthogonal matrix $<=> bold(U)^TT = bold(U)^(-1)$ $<=>$ $eta_1,eta_2,dots.c,eta_n$ form an orthonormal basis of $R^n$ $<=>$ $display((eta_i,eta_j) = cases(1\,quad i=j, 0\,quad i!=j))$.
#prof[
  #def[Proof] (illustrated for the case $n=3$)
$
bold(U)^TT bold(U)
= mat(eta_1^TT; eta_2^TT; eta_3^TT) mat(eta_1, eta_2, eta_3)
= mat(eta_1^TT eta_1, eta_1^TT eta_2, eta_1^TT eta_3;
eta_2^TT eta_1, eta_2^TT eta_2, eta_2^TT eta_3;
eta_3^TT eta_1, eta_3^TT eta_2, eta_3^TT eta_3)
= mat(1,0,0;0,1,0;0,0,1)
$
]
== Schmidt Orthogonalization
Any linearly independent system $seqn(alpha,s)$ of an $n$-dimensional Euclidean space can be *transformed* into an orthogonal (or orthonormal) system $seqn(eta,s)$ with $L(seqn(alpha,j)) = L(seqn(eta,j)) sp (j=1,2,dots.c,s)$.
Transformation step 1, orthogonalization:
Set $beta_1 = alpha_1$. For $j>1$, let
$
beta_j = alpha_j - sum_(i=1)^(j-1) ((alpha_j, beta_i))/((beta_i, beta_i)) beta_i
$
Then $seqn(beta,s)$ is an orthogonal system of $V$ (an orthogonal basis when $s = n$).
Transformation step 2, normalization:
Let $eta_j = display((beta_j)/(||beta_j||)) sp (j=1,2,dots.c,s)$. Then $seqn(eta,s)$ is an orthonormal system of $V$ (an orthonormal basis when $s = n$).
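A small numeric sketch of the two steps above (hypothetical code, using the standard dot product on $RR^3$ as the inner product):

```python
# Gram-Schmidt on a linearly independent list of vectors in R^n,
# with the standard dot product as the inner product.

def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

def gram_schmidt(vectors):
    # Step 1: orthogonalize (beta_j = alpha_j minus its projections).
    betas = []
    for a in vectors:
        b = list(a)
        for prev in betas:
            coeff = dot(a, prev) / dot(prev, prev)
            b = [x - coeff * y for x, y in zip(b, prev)]
        betas.append(b)
    # Step 2: normalize (eta_j = beta_j / ||beta_j||).
    return [[x / dot(b, b) ** 0.5 for x in b] for b in betas]

etas = gram_schmidt([[1.0, 1.0, 0.0], [1.0, 0.0, 1.0], [0.0, 1.0, 1.0]])
# The result is orthonormal: dot(etas[i], etas[j]) is 1 if i == j, else 0.
```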
// #warn[
// 例题 TBD
// 01:55:00
// https://classroom.zju.edu.cn/livingroom?course_id=56584&sub_id=1036865&room_id=316&tenant_code=112&sub_public=1
// ] |
|
https://github.com/saurabtharu/CV | https://raw.githubusercontent.com/saurabtharu/CV/main/cover_letter.typ | typst | Dear Hiring Manager,
I’m excited about the opportunity to apply for the Junior DevOps Engineer position at Nex Digital Studio. With a strong foundation in DevOps practices and hands-on experience gained through both academic projects and a recent internship, I am confident that I can contribute effectively to your team.
During my internship at F1Soft International Pvt. Ltd., I worked extensively with Docker and Kubernetes, deploying containerized applications. My experience with tools like Jenkins, Ansible, and Terraform has given me a solid understanding of Infrastructure as Code and automation processes.
I have attached my resume for your consideration, which further highlights my skills, coursework, and academic projects. I would be delighted to discuss my qualifications further in an interview. I look forward to hearing from you. Thank you for considering my application.
Sincerely,
<NAME>
|
|
https://github.com/protohaven/printed_materials | https://raw.githubusercontent.com/protohaven/printed_materials/main/common-tools/planer.typ | typst | #import "/meta-environments/env-features.typ": *
= Planer
The _planer_ (or _thickness planer_) removes material from the top face of a board to make it flat. The resulting surface will be parallel to the opposite face.
== Notes
=== Safety
- Never attempt to plane engineered products like MDF, plywood, particleboard or
chipboard, or any workpieces with loose knots or splintered wood.
They will splinter and the resulting shrapnel can injure you and damage the machine.
- Do not attempt to plane stock shorter in length than the distance between the feed rollers.
=== Care
- Do not over-tax the motor.
- Turn the crank no more than a quarter turn between cuts.
- If the table was lowered after stopping the motor, reset the cut depth for its thickest point and set the table height for that thickness.
- For flattening or reducing the thickness of thin stock, a drum sander may be a better choice.
== Parts of the Planer
#figure(
image("images/planer-front_quarter-annotated.png", width: 100%),
caption: [
An annotated front quarter view of the planer.
],
)
=== On/Off Switches
Use the *green button* to turn on the machine.
Use the *red button* to turn off the machine.
There is no emergency stop button on the thickness planer.
=== Bed
The bed of the planer supports the workpiece as it moves through the planer.
=== Lock Knobs
The two lock knobs on the left side of the table lock the position of the bed on the screw posts.
The lock knobs must be loosened before adjusting the bed height, and should be tightened while cutting to keep the table from drifting during an operation and to help prevent chatter.
In daily use these are often not locked (against manufacturer's instructions) when making a series of cuts at different thicknesses to the same workpiece, but should definitely be locked if feeding multiple pieces through that should end up being the same size.
=== Height Adjustment Crank
The height adjustment crank raises and lowers the bed.
A quarter of a crank is the maximum depth adjustment that should be made between cuts. Limiting the depth change will help prevent bogging.
=== Cutter Housing
The cutter housing stays stationary over the bed. The cutter housing holds the cutter head, infeed and outfeed rollers, and kickback pawls.
=== Kickback Pawls
The kickback pawls prevent the workpiece from being ejected backwards from the front of the machine.
Once they engage with the workpiece, the workpiece cannot be pulled out of the front of the planer without first lowering the table. Do not try to pull material back out of the planer against the kickback pawls.
In severe situations when the material has bound or failed in some fashion, the machine should be turned off, the table lowered, and the material manually pulled out the back of the machine.
=== Infeed Roller
The infeed roller is a toothed roller powered by the motor that pulls the material toward the cutter head.
At no time should your hand be remotely close to the infeed roller. The roller will embed itself into the surface of the workpiece, so if the depth of cut is too shallow, perpendicular lines will be visibly imprinted in the surface of the workpiece when it exits the machine. When the infeed roller first grabs the workpiece, it can sometimes lift it off of the infeed table briefly, creating a pinching hazard. Keep your hands away from the bottom of the workpiece when feeding it.
The infeed roller should do the work of moving the workpiece through the machine. If you have to force the workpiece through manually, you are taking too deep a cut. It may be necessary to push the work forward manually past a thin spot where the feed rollers are not able to touch the workpiece.
=== Cut Depth Limiter
The cut depth limiter is a small piece of metal on the top edge of the mouth of the planer that prevents the user from running a wide board at the maximum depth of cut.
Narrower boards can be run through to one side or the other at a deeper cut, but it can be used as a good guide for determining a good height for your first cut. Do not crank the workpiece upward underneath this, potentially damaging the height screws or table. Instead, adjust the table height, then check to see if the workpiece fits under the limiter.
=== Power Feed Handle
The power feed handle adjusts the speed at which material is fed past the cutter head.
Leave the power feed handle pulled out to the 16FPM position. It is less taxing on the motor and yields a better surface finish. The feed rate should only be changed when the machine is running.
=== Support Rollers
The support rollers can easily return stock to the front of the machine for subsequent cuts.
=== Digital scale
The digital scale measures the depth of cuts, keeping track of how much material should still be removed.
Take a skim cut and zero the scale. Measure the thickness of your workpiece and determine how much material you wish to remove. Take successive cuts, raising the bed between passes, until the scale shows that the desired amount of material has been removed.
== Basic Operation
=== Setting Up
+ Place the workpiece on the infeed table.
+ Unlock both lock knobs.
+ Adjust the table height so that the thickest part of the workpiece barely fits under the cut depth limiter.
+ Lock the lock knobs.
+ Turn on dust collection.
+ Make sure the workpiece is not engaged with the kickback pawls or infeed roller.
+ Turn on the motor.
=== Pass the Workpiece Through the Planer
+ Push the workpiece slowly into the machine until the infeed roller engages.
+ Allow the feed rollers to pull the workpiece completely through the machine.
+ Support the workpiece at the front and rear of the machine as necessary.
+ Remove the workpiece from the outfeed table once the outfeed roller has disengaged.
Adjust the planer height as needed and repeat to remove more material. Do not adjust the planer table height by more than a quarter crank between cuts.
=== Cleaning Up
+ Turn off the motor.
+ Turn off the dust collection.\
_If other members are still using the dust collector, leave it on._
+ Brush off any dust, and sweep any chips from the surrounding area. |
|
https://github.com/LeptusHe/LeptusHe.github.io | https://raw.githubusercontent.com/LeptusHe/LeptusHe.github.io/main/source/_posts/编写可测试的代码-依赖注入.typ | typst | #import "./typst-inc/blog-inc.typc": *
#show: blog_setting.with(
  title: "Writing Testable Code: Dependency Injection",
author: ("<NAME>"),
paper: "a1"
)
#metadata("Dependency Injection") <tags>
#metadata("Unit Testing") <categories>
#metadata("2018-12-13") <date>
= Testing
In code, there are always dependencies between classes, between classes and functions, and between other software entities. For example, suppose an object $a$ of class $A$ needs to accomplish some task, but can only do so by collaborating with an object $b$ of class $B$. Class $B$ is then a dependency of class $A$: the object $b$ is a service object (the provider of a service), while $a$ is a client object (the consumer of that service).
When writing code, if class $A$ depends on class $B$, that is, if $A$ needs a service provided by $B$ to accomplish some task, then we must answer a question: how should $A$ obtain an object that provides the service it needs?
= Dependency Injection
Usually, class $A$ might create the service object it needs by itself, i.e. by new-ing one up. This approach has problems, however. If the code of class $A$ obtains its service object directly via new, the code is tightly coupled and violates the open/closed principle of object-oriented design: if some other class $C$ can provide the same service and class $A$ wants to use a $C$ object as its service object, the code of class $A$ has to be modified. The code then no longer satisfies "open for extension, closed for modification".
In unit testing, the class under test may need to collaborate with other objects to do its work, for example reading files through the file system or fetching data from a database. To keep unit tests independent and repeatable, we generally replace the dependencies of the class under test, i.e. its service objects, with mock objects. This way the outcome of a unit test depends only on the class under test, not on the objects it depends on. If a test result depended on an external dependency such as a database, every run could produce a different result because the database state differs, and repeatability and independence would be lost.
However, if the client code of the class under test obtains its service objects directly via new, those service objects cannot be replaced, so the technique of substituting mock objects for real dependencies cannot be applied, and we cannot write independent, repeatable unit tests. We call such client code untestable code.
To remove the coupling between client objects and service objects described above, we can use a technique called *dependency injection*. Dependency injection means that the service objects a client depends on are injected into the client object. With it, the client code no longer needs to obtain service objects via new; instead, other code injects the service objects into the client. This resolves the coupling between client and service objects. In object-oriented design such code satisfies the open/closed principle and is extensible; in unit testing, mock objects can be injected in place of the actual service objects, which lets us write independent and repeatable tests. Such code is also called testable code.
= Ways to Inject Dependencies
In general, there are two ways to inject a dependency:
- constructor injection
- setter injection
We introduce the two injection styles with a simple example:
Suppose there is a ModelManager class responsible for loading a model file from disk and creating a Model object. Because a Model object consists of several \<mesh, material\> pairs, ModelManager cannot create the Mesh and Material objects directly; it needs other objects to create them. A MeshManager loads a mesh file from disk and creates a Mesh object, and a MaterialManager loads a material file from disk and creates a Material object. ModelManager therefore needs the cooperation of MeshManager and MaterialManager to do its work; in other words, MeshManager and MaterialManager are two dependencies of ModelManager.
The two injection styles are illustrated with this example below:
== Constructor Injection
```cpp
#include <stdexcept>
#include <string>

class ModelManager {
public:
    ModelManager(MeshManager *meshManager, MaterialManager *materialManager)
        : mMeshManager(meshManager), mMaterialManager(materialManager)
    {
        if (!meshManager) {
            throw std::invalid_argument("mesh manager must not be null");
        }
        if (!materialManager) {
            throw std::invalid_argument("material manager must not be null");
        }
    }

    Model loadModel(const std::string& filePath) {
        // load model by mMeshManager and mMaterialManager
    }

private:
    MeshManager *mMeshManager;
    MaterialManager *mMaterialManager;
};
```
Constructor injection passes the service objects as constructor parameters, so the injected dependencies become part of the client's state. If the dependencies can be constructed before the constructor is called, constructor injection is generally the preferred style: once the constructor has run, all dependencies of the client object are satisfied, so the client object is guaranteed to always be in a valid state.
The drawback of constructor injection is its lack of flexibility. Once the client object has been constructed, its dependencies, i.e. the service objects, can no longer be changed. In some situations a service object needs to be replaced after the client has been constructed, and constructor injection cannot handle that.
== Setter Injection
```cpp
#include <stdexcept>
#include <string>

class ModelManager {
public:
    ModelManager() : mMeshManager(nullptr), mMaterialManager(nullptr) {}

    void setMeshManager(MeshManager *meshManager) { mMeshManager = meshManager; }
    void setMaterialManager(MaterialManager *materialManager) { mMaterialManager = materialManager; }

    Model loadModel(const std::string& filePath) {
        validateState();
        // load model by mMeshManager and mMaterialManager
    }

private:
    void validateState() const {
        if (!mMeshManager) {
            throw std::runtime_error("mesh manager must not be null");
        }
        if (!mMaterialManager) {
            throw std::runtime_error("material manager must not be null");
        }
    }

    MeshManager *mMeshManager;
    MaterialManager *mMaterialManager;
};
```
Setter injection provides a setter function through which each dependency is injected. Its advantage is flexibility: the service object a client depends on can be replaced at any time. However, because the time of injection is indeterminate, the client must check that a service object has actually been injected before using it. If a service object is needed but has not been injected, the client is in an invalid state, and we have to write extra code to handle that case, which adds code and other burdens.
== Combining Constructor Injection and Setter Injection
```cpp
#include <cassert>
#include <stdexcept>
#include <string>

class ModelManager {
public:
    ModelManager(MeshManager *meshManager, MaterialManager *materialManager)
        : mMeshManager(meshManager), mMaterialManager(materialManager)
    {
        if (!meshManager) {
            throw std::invalid_argument("mesh manager must not be null");
        }
        if (!materialManager) {
            throw std::invalid_argument("material manager must not be null");
        }
    }

    void setMeshManager(MeshManager *meshManager) {
        assert(meshManager != nullptr);
        mMeshManager = meshManager;
    }

    void setMaterialManager(MaterialManager *materialManager) {
        assert(materialManager != nullptr);
        mMaterialManager = materialManager;
    }

    Model loadModel(const std::string& filePath) {
        // load model by mMeshManager and mMaterialManager
    }

private:
    MeshManager *mMeshManager;
    MaterialManager *mMaterialManager;
};
```
By combining constructor injection and setter injection, we get the advantages of both styles without their drawbacks. As shown above, after the client object is constructed it is in a valid state, and the setter functions allow the service objects to be replaced at any time while keeping the client object's state valid.
With the two injection styles combined, we can always be sure that a service object is present when it is used, so there is no need to check for its existence, which in turn avoids having to write extra error-handling code.
= References
1. #link("https://en.wikipedia.org/wiki/Dependency_injection")[Dependency injection[wiki]]
2. #link("https://agostini.tech/2017/03/27/using-dependency-injection/")[Using Dependency Injection[De<NAME>ostini][2017]]
3. #link("https://agostini.tech/2017/04/24/unit-tests-with-dependency-injection/")[Unit Tests with Dependency Injection[Dejan Agostini][2017]]
|
|
https://github.com/antonWetzel/Masterarbeit | https://raw.githubusercontent.com/antonWetzel/Masterarbeit/main/arbeit/fazit.typ | typst | #import "setup.typ": *
= Fazit
== Segmentierung
Die Segmentierung unterteilt die Punkte in einzelne Bäume. Wenn die Kronen der Bäume klar getrennte Spitzen haben, werden diese ohne Probleme unterteilt. Dadurch werden manche Waldgebiete gut segmentiert, aber je näher die Kronen der Bäume zueinander sind, desto wahrscheinlicher werden mehrere Bäume zu einem Segment zusammengefasst. Vor der Segmentierung muss der Mindestabstand zwischen Segmenten und die Breite der Scheiben festgelegt werden. Die Parameter müssen passend für den Datensatz gewählt werden, was eine Anpassungsmöglichkeit, aber auch eine Fehlerquelle ergibt.
Die bereits existierende Segmentierung vom Datensatz unterteilt die Punktwolken besser für die einzelnen Bäume @pang. Diese würde nicht nur mit einzelnen Datensätzen durchgeführt, sondern mit einer Kombination der verfügbaren Datensätze für ein Gebiet.
== Analyse
Bei der Analyse von einem Baum werden Daten für jeden Punkt im Baum und für den gesamten Baum berechnet. Für die einzelnen Punkte werden Punktgröße, Normale für die Visualisierung und die lokale Krümmung problemlos berechnet. Die Berechnung von der Ausdehnung für eine bestimmte Höhe vom Baum funktioniert für die meisten Bereiche, wird aber durch Punkte, welche nicht zum Baum gehören stark beeinflusst. Dadurch ist die Ausdehnung stark von der Qualität der Segmentierung abhängig.
Die charakteristischen Eigenschaften vom Baum können mit den genaueren TLS-Daten abgeschätzt werden, haben aber noch systematische Fehler, welche das Ergebnis verfälschen. Mit den ALS- und ULS-Daten werden größere Gebiete abgedeckt, dafür sind die berechneten Eigenschaften ungenauer. Besonders die Berechnung vom Durchmesser vom Stamm ist nur mit den TLS-Daten möglich. Mit den ALS- und ULS-Daten können die Werte für die Baumhöhe und die Baumkrone abgeschätzt werden, weil diese Werte auch mit einer geringeren Punktdichte berechnet werden können. Für Punktwolken mit geringer Dichte oder aus der Luft aufgenommen existieren besonders wenig Punkte unterhalb der Baumkrone, wodurch keine Approximation vom Baumstamm möglich ist.
== Visualisierung
Die Software ermöglicht den Übergang von den Punktdaten ohne weitere Informationen zu einer interaktiven Visualisierung vom Waldgebiet. Dadurch ist ein Überblick über das gescannte Waldgebiet und einzelne Bäume möglich.
Die Triangulierung berechnet ein Mesh für die Segmente, welches von der Visualisierung angezeigt werden kann. Unterschiedlich zu anderer 3D-Render-Software, kann das Mesh mit den berechneten Punkteigenschaften als Farbinformationen kombiniert werden.
Die Visualisierung ermöglicht eine interaktive Darstellung der Daten. Im Gegensatz zu LAStools @lastools werden Datenpunkte nicht mit einer festen Größe, sondern durch einen orientierten Kreis mit variablem Radius angezeigt. Dadurch entstehen keine Lücken zwischen den Datenpunkten, wenn diese eine geringere Dichte haben oder die Entfernung zur Kamera gering ist. Ein Vergleich ist in @fazit_lasview_compare gegeben. Beim `LASviewer` sind die Baumkronen weit von den Messpunkten schwer zu sehen, weil weniger Datenpunkte existieren.
#figure(
caption: [Vergleich zwischen den Visualisierungen von `LASviewer` und `treee`.],
grid(
columns: 1 * 2,
gutter: 1em,
subfigure(rect(inset: -1pt, radius: 4pt, stroke: 5pt, image("../images/lasviewer_it.png")), caption: [`LASviewer`]),
subfigure(rect(inset: -1pt, radius: 4pt, stroke: 5pt, image("../images/lasviewer_treee.png")), caption: [`treee`]),
),
) <fazit_lasview_compare>
Für größere Datensätze zeigt LAStools nur eine vorher festgelegte Anzahl von zufällig ausgewählten Punkten. Je größer die Anzahl, desto länger wird für das Rendern der Punkte benötigt, wodurch für große Datensätze nicht alle Daten gerendert werden oder das Rendern länger dauert und die Navigation erschwert. Die Berechnung der Detailstufen benötigen einen Vorverarbeitungsschritt, dafür kann der komplette Datensatz interaktiv angezeigt werden.
== Ausblick
Momentan werden die ermittelten Daten nur für die Visualisierung verwendet. In der `project.json` sind die charakteristischen Eigenschaften für die Segmente gespeichert, aber auch Daten für die Visualisierung. Durch eine Trennung wird eine automatische Weiterverarbeitung der berechneten Eigenschaften erleichtert.
Für jeden Baum wird abhängig von der Höhe die horizontale Ausdehnung bestimmt. Ein Beispiel für die Daten ist in @fazit_slices gegeben. Die Verteilung kann als Basis für die Baumform benutzt werden, womit zum Beispiel eine Schätzung für die zugehörige Spezies gemacht werden kann.
#figure(caption: [Horizontale Scheiben für einen Baum mit der zugehörigen Fläche für jede Scheibe.], grid(
columns: 1 * 2,
image("../images/auto-crop/prop_height.png", height: 30%),
image("../images/klassifkation_slices.svg", height: 30%),
)) <fazit_slices>
Vor der Visualisierung müssen die Daten importiert werden. Je größer die Datenmenge, desto länger dauert der Import und während des Imports können die Daten noch nicht inspiziert werden. Die Möglichkeit die Zwischenergebnisse vom Importprozess anzuzeigen würde das Anpassen von Importparametern erleichtern und die Zeit verringert, ab der die ersten Ergebnisse sichtbar sind.
Der Importer unterstützt momentan nur Dateien im LASzip-Format, wodurch Daten in anderen Formaten nicht verwendet werden können oder zuerst konvertiert werden müssen. Durch andere Importformate kann die Verwendung erleichtert werden.
Die Visualisierung kann Punktwolken auch aus anderen Quellen als Waldgebiete anzeigen, ist aber stark an diesen Verwendungszweck angepasst. Durch einen zusätzlicher Importer, welcher für beliebige Datensätze geeignet ist kann die Visualisierung vielseitiger benutzt werden.
|
|
https://github.com/MatheSchool/typst-g-exam | https://raw.githubusercontent.com/MatheSchool/typst-g-exam/develop/examples/exam-localization.typ | typst | MIT License | #import "../src/lib.typ": *
#show: g-exam.with(
localization: (
grade-table-queston: [Number of *questions*],
    grade-table-total: [Total _points_],
grade-table-points: [#text(fill: red)[Points]],
grade-table-grade: [#text(fill: gradient.radial(..color.map.rainbow))[Grades obtained]],
point: [point],
points: [Points],
page: [],
page-counter-display: "1 - 1",
family-name: "*Family* _name_",
given-name: "*Given* _name_",
group: [*Classroom*],
date: [*Date* of exam]
),
)
#g-question(points: 2)[Question 1]
#g-question(points: 1)[Question 2]
#g-question(points: 1.5)[Question 3] |
https://github.com/jonaspleyer/peace-of-posters | https://raw.githubusercontent.com/jonaspleyer/peace-of-posters/main/docs/content/documentation/layouts.md | markdown | MIT License | ---
title: "Layouts"
weight: 20
---
# Layouts
Layouts modify mostly shapes and spacings between boxes.
Similarly to [themes](/documentation/themes), their purpose is cosmetic.
A layout, just like a theme, is a dictionary.
```typst
layout = (
"spacing": [length] [relative],
"body-size": [length],
"heading-size": [length],
"title-size": [length],
"subtitle-size": [length],
"authors-size": [length],
"keywords-size": [length],
)
```
All default values assume an `a0` layout.
You can easily change this by using the `set-layout` function below.
| Argument | Type | Default Value | Description |
| --- | --- | --- | --- |
| `spacing` | [length] [relative] | `1.2em` | Spacing between boxes when stretching a box. |
| `body-size` | [length] |`33pt` | Size of the body. |
| `heading-size` | [length] |`50pt` | Size of the heading. |
| `title-size` | [length] |`75pt` | Size of the title. |
| `subtitle-size` | [length] |`60pt` | Size of the subtitle. |
| `authors-size` | [length] |`50pt` | Size of the authors. |
| `keywords-size` | [length] |`40pt` | Size of the keywords. |
## Updating a Layout
```typst
update-poster-layout(
..args
)
```
All arguments coincide with the fields of the `layout` dictionary shown above.
## Setting a new Layout
```typst
set-poster-layout(
layout: [dictionary],
)
```
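
As a quick illustration, the two functions can be combined: start from one of the predefined layouts listed below, then override individual fields. The import line here is an assumption (version and path depend on how you install peace-of-posters); the layout name and function signatures are the ones documented on this page.

```typst
// Import path/version is an assumption; adjust it to your setup.
#import "@preview/peace-of-posters:0.5.0": *

// Start from the predefined A1 layout ...
#set-poster-layout(layout-a1)

// ... then override individual fields.
#update-poster-layout(
  spacing: 1.5em,
  body-size: 30pt,
)
```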
## Example Layouts
### Default
```typst
#let _default-layout = (
"spacing": 1.2em,
)
```
### A0
```typst
#let layout-a0 = _default-layout + (
"paper": "a0",
"size": (841mm, 1188mm),
"body-size": 33pt,
"heading-size": 50pt,
"title-size": 75pt,
"subtitle-size": 60pt,
"authors-size": 50pt,
"keywords-size": 40pt,
)
```
### A1
```typst
#let layout-a1 = _default-layout + (
"paper": "a1",
"size": (594mm, 841mm),
"body-size": 27pt,
"heading-size": 41pt,
"title-size": 61pt,
"subtitle-size": 49pt,
"authors-size": 41pt,
"keywords-size": 33pt,
)
```
### A2
```typst
#let layout-a2 = _default-layout + (
"paper": "a2",
"size": (420mm, 594mm),
"body-size": 20pt,
"heading-size": 31pt,
"title-size": 47pt,
"subtitle-size": 38pt,
"authors-size": 31pt,
"keywords-size": 25pt,
)
```
### A3
```typst
#let layout-a3 = _default-layout + (
"paper": "a3",
"size": (297mm, 420mm),
"body-size": 14pt,
"heading-size": 22pt,
"title-size": 32pt,
"subtitle-size": 26pt,
"authors-size": 22pt,
"keywords-size": 18pt,
)
```
### A4
```typst
#let layout-a4 = _default-layout + (
"paper": "a4",
"size": (210mm, 297mm),
"body-size": 8pt,
"heading-size": 12pt,
"title-size": 18pt,
"subtitle-size": 15pt,
"authors-size": 12pt,
"keywords-size": 10pt,
)
```
|
https://github.com/7sDream/fonts-and-layout-zhCN | https://raw.githubusercontent.com/7sDream/fonts-and-layout-zhCN/master/chapters/04-opentype/exploring/hhea-os2.typ | typst | Other | #import "/template/template.typ": web-page-template
#import "/template/components.typ": note
#import "/lib/glossary.typ": tr
#show: web-page-template
// ### Vertical metrics: `hhea` and `OS/2`
=== 纵向#tr[metrics]:`hhea` 和 `OS/2` 表
// Before this next test, we will actually add some outlines to our font: a capital A and a capital B:
在介绍下一部分之前,我们先要往测试字体中加入一些#tr[glyph]#tr[outline],比如@figure:soource-sans-AB 中的A和B。
#figure(
caption: [<NAME> 设计的 Source Sans 字体中的 A B #tr[glyph]]
)[
#grid(
columns: 2,
inset: 0pt,
image("sourcesans-A.png"),
image("sourcesans-B.png"),
)
] <figure:soource-sans-AB>
// After exporting the new font and converting it to XML again using TTX, let's have a look at the `hhea` and `OS/2` tables. These tables are used to set the global defaults that a rendering engine needs to know when using this font in horizontal typesetting. They represent one of the more unfortunate compromises of the OpenType standard, which brought together font files from both the Windows and Mac platforms. It's a cardinal rule of data handling that you shouldn't store the same value in two different places; if you do, they are practically guaranteed to end up either going out of sync, or being used in different ways. OpenType manages to display *both* of these failure modes.
用 `ttx` 把添加了新#tr[glyph]的字体转换成XML后,现在可以开始介绍`hhea`和`OS/2`表了。这两张表提供了文本横排时所需的全局默认值。用两张表来做同一件事再一次体现了OpenType标准中令人遗憾的妥协,它需要同时支持 Windows 和 Mac 平台上的不同历史标准。不应该把一个值重复储存在两个地方是数据处理中的经典规则。因为如果这么做,那迟早有一天这两个地方的值会不再一致,或者会被以不同的方式使用。OpenType不幸的两点全中。
首先,我们来看`hhea`表中的(一部分)内容:
```xml
<hhea>
<!-- ... -->
<ascent value="1000"/>
<descent value="-200"/>
<lineGap value="0"/>
<advanceWidthMax value="618"/>
<!-- ... -->
</hhea>
```
// At this point let's pause and skip down to the parts of the `OS/2` table which deal with glyph metrics. Because you can't say `OS/2` in valid XML, `ttx` writes it funny:
同时我们也看一下`OS/2`表中关于#tr[glyph]#tr[metrics]的部分。因为`OS/2`这个标识符无法在XML中作为元素名称使用,`ttx` 将它写作 `OS_2`:
```xml
<OS_2>
<!-- ... -->
<fsSelection value="00000000 01000000"/>
<sTypoAscender value="800"/>
<sTypoDescender value="-200"/>
<sTypoLineGap value="200"/>
<usWinAscent value="1000"/>
<usWinDescent value="200"/>
<sxHeight value="500"/>
<sCapHeight value="700"/>
<!-- ... -->
</OS_2>
```
// Immediately you should see that the font editor (I used Glyphs to produce this font) has chosen to provide different values for things that seem to be the same: `ascent` in `hhea` is 1000, and so is `usWinAscent` in `OS/2`, but we also have `sTypoAscender` which is set to 800. `descent` and `sTypoDescender` are -200, but `usWinDescent` is 200. Most confusingly, `lineGap` in `hhea` is 0 but `sTypoLineGap` in `OS/2` is 200, but if you look into the OpenType specification, you will find that `lineGap` in `hhea` is described as "typographic line gap" and `sTypoLineGap` is described as "the typographic line gap for this font". Sounds like the same thing, doesn't it?
很容易就能发现,字体编辑器(我这里使用的是Glyphs)在看起来含义相同的两个字段里填入了不同的值:`hhea`中的`ascent`值为1000,和`OS/2`中的`usWinAscent`相同,但它里面还有一个值为 800 的`sTypoAscender`;`descent`和`sTypoDescender`都是 -200,但`usWinDescent`又是200。更让人迷惑的是,`hhea`中的`lineGap`是0,`OS/2` 中 的`sTypoLineGap` 却是 200。如果你觉得它们描述的是两个不同的参数,那看看OpenType标准规范中是怎么说的吧:`lineGap`的含义是“行间距”;`sTypoLineGap`的含义是“此字体的行间距”。这明明看上去就是一回事吧?
// Finally, bit 7 of the `fsSelection` flag is set. The description of bit 7 in the OpenType standard reads "if set, it is strongly recommended to use `OS/2.sTypoAscender - OS/2.sTypoDescender + OS/2.sTypoLineGap` as a value for default line spacing for this font."
然后,`fsSelection`字段的第七个比特位为1。这个比特在OpenType规范中定义为:“如果设置为1,则强烈推荐使用`OS/2.sTypoAscender - OS/2.sTypoDescender + OS/2.sTypoLineGap`作为此字体的默认行距”。
// What on earth is going on?
这到底是在干嘛呢?
// The problem, once again, is that not only did Macintosh and Windows have their own font formats, they each had their own interpretation of how the metrics of a font should be interpreted. Things were bad enough when the only consumers of fonts were the Windows and Mac operating systems, but now word processors, page layout programs, web browsers and a wide range of other software peek into the metrics of fonts and interpret the values in their own special way.
之所以有这些问题,不只是因为我们提到过的 Mac 和 Windows 平台的字体格式不同,更重要的原因是它们对如何理解和处理字体的#tr[metrics]也有不同意见。而现在除了这两个系统之外,又多了文字处理软件、页面排版软件、浏览器等等各种字体使用者。这些使用者各自使用不同的方式理解字体中的属性值,这就让问题更加严重了。
// In particular, when we are talking about global scripts with glyph proportions and line spacing that can differ significantly from Latin glyphs, the default mechanisms of font browsers for computing these values may not be ideal, and they may not provide identical results across platforms. So you may well need to play with some of these values yourself.
特别是我们现在还是在讨论#tr[global scripts],某些#tr[scripts]的#tr[glyph]属性、行间距等参数可能和拉丁字符有天壤之别。字体浏览器计算这些值的默认方式可能也不一致,可能导致不同的平台上的展示效果也不相同。所以你可能需要手工仔细调整这些值。
// How are they used? To see what's going on, I made this special glyph in position `C`:
这些值到底是干什么用的?为了更好的说明,我在`C`的位置上画了一个特殊的#tr[glyph]:
#figure(
placement: none,
caption: [一个用于测量的特殊#tr[glyph]]
)[
#v(4em)
#text(font: ("TTX Test",), size: 10em)[C]
#v(6em)
]
// On my Mac, the TextEdit application seemed to struggle with this, sometimes clipping off the top of the `1000` and anything below the center of the `-200`, and sometimes maintaining it:
// Safari, Firefox and Illustrator all do this:
// (Although when I select the glyph in Illustrator, the selection extends to slightly *more* than 500 points below the baseline. I have no idea why this is.)
在我的Mac系统上,TextEdit应用似乎很难正确处理这个#tr[glyph]。有时候最上面的`1000`和`-200`以下的所有内容都会被截断(@figure:textedit-safari 左半),但有时又不会。
Safari,Firefox和Illustrator三个软件的结果一致(@figure:textedit-safari 右半)。不过在Illustrator中选中#tr[glyph]时,反色选区会略微超过基线以下500单位的位置,我不清楚其中的原因。
#figure(
caption: [不同应用的显示结果。左图为TextEdit,右图为Safari等]
)[
#grid(
fill: white,
columns: 2,
column-gutter: 2em,
image("textedit.png"),
image("safari.png"),
)
] <figure:textedit-safari>
// The `usWinAscent` and `usWinDescent` values are used for text *clipping* on Windows. In other words, any contours above 1000 or below -200 units will be clipped on Windows applications. On a Mac, the relevant values for clipping are `hhea`'s `ascender` and `descender`. Mac uses `hhea`'s `lineGap` to determine line spacing. As we can see from our Safari example, there is no gap between the lines: the first line's descender at -200 units lines up perfectly with the second line's ascender at 1000 units. Finally, the `typo` values are used by layout applications to position the first baseline of a text block and set the default line spacing.
`usWinAscent`和`usWinDescent`这两个值是 Windows 平台用于截断文本的。也就是说,Windows 平台的软件在显示时会将 1000 以上和 -200 以下的所有东西都裁切去除。在 Mac 上,和截断相关的值是 `hhea` 表中的 `ascender` 和 `descender`。Mac用`hhea`中的`lineGap`来决定行间距,这里的值是 0。所以正如@figure:textedit-safari 中Safari所显示的那样,两行文本间没有空隙,第一行的 -200 和第二行的 1000 位于相同高度。剩下的 `typo` 系列属性用于排版系统中,用以确定文本块中第一行的#tr[baseline]位置和默认行距。
// So how should actually we set these values? Unfortunately, there is not a real consensus on the "right" way to do this - the major foundries each have their own strategies - but hopefully by being aware of what the relevant values do, you should now know what to test for and how to adjust your font's metrics where there are problems. To get you started, here is our recommended method (distilled from a discussion on [Typedrawers](http://typedrawers.com/discussion/1705)):
所以我们到底该怎么设置这些值呢?很抱歉,没有哪一种方法是绝对“正确”的,每个开发商都有自己的设置策略。但只要知道了这些值的用途,你就知道在设计字体时需要测试哪些问题,并且如何对#tr[metrics]进行相应的调整了。推荐先按如下的方式设置(来自Typedrawers论坛上的讨论帖@Wei<EMAIL>fontVertical.2016):
// * The `sTypoAscender` minus the `sTypoDescender` should equal the unit square. (Usually 1000 units.)
- `sTypoAscender - sTypoDescender` 应该等于#tr[em square]的高度。
// * `sTypoLinegap` should be the default linespacing distance for your intended language environment.
- `sTypoLinegap` 应该等于目标语言环境中的默认行距。
// * `lineGap` should be zero.
- `lineGap` 应该设置为 0。
// * `usWinAscent` and `usWinDescent` should be set so that no clipping occurs. If your font contains glyphs with tall, stacked accents - for instance, the Vietnamese letter LATIN CAPITAL LETTER A WITH BREVE AND HOOK ABOVE (Ẳ) - you will need to ensure that these values can accommodate the highest (and lowest) *possible* values of your shaped text. They should also be set so that they sum to *at least* the value of `sTypoAscender - sTypoDescender + sTypoLinegap`.
- `usWinAscent` 和 `usWinDescent` 应该被设置成不会导致任何裁切的值。如果你的字体中会往较高#tr[glyph]上继续堆叠符号的话(比如越南字母 `LATIN CAPITAL LETTER A WITH BREVE AND HOOK ABOVE` Ẳ),你需要保证这两个值比文本#tr[shaping]后可能达到的最高(或最低)值还大。另外这两个值的和至少应该大于`sTypoAscender - sTypoDescender + sTypoLinegap`。
// * `ascent` and `descent` should be set to the same values as `usWinAscent` and `usWinDescent`, remembering that `usWinDescent` is positive and `descent` is negative.
- `ascent`、`descent` 应该和 `usWinAscent`、`usWinDescent` 设置为相同的值。但要注意 `usWinDescent` 是正数,而对应的 `descent` 需要是负数。
// * Bit 7 of `fsSelection` should be turned on.
- `fsSelection`的第7位应该设置成1。
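
以本章测试字体的数值为例,可以对上述推荐设置做一次简单的算术检验(以下数值取自前文 `hhea` 与 `OS/2` 表的输出,仅作示例):`sTypoAscender - sTypoDescender` 为 $800 - (-200) = 1000$,恰好等于#tr[em square]的高度;`sTypoAscender - sTypoDescender + sTypoLineGap` 为 $1000 + 200 = 1200$,而 `usWinAscent + usWinDescent` 为 $1000 + 200 = 1200$,正好满足“两者之和至少不小于前者”的要求。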
// If you don't like this strategy, there are plenty of others to choose from. The [Glyphs web site](https://www.glyphsapp.com/tutorials/vertical-metrics) describes the strategies used by Adobe, Microsoft and web fonts; [Google fonts](https://github.com/googlefonts/gf-docs/tree/main/VerticalMetrics) has another. Karsten Lucke has a [guide](https://www.kltf.de/downloads/FontMetrics-kltf.pdf) which goes into all of this in excruciating detail but finally lands on the strategy mentioned above.
如果你不喜欢这种设置策略,也可以选择其他方式。Glyphs的网站上描述了 Adobe公司、微软公司和网页字体标准中的设置策略@Scheichelbauer.ZongXiang.2012。Google Fonts也有自己的策略@Google<EMAIL>.VerticalMetrics。<NAME> 写了一篇关于如何调整这些值的指南@Lucke.FontMetrics,不过最后也归纳为了我们上面描述的策略。
// Yes, this is a complete hot mess. Sorry.
很抱歉,但确实就是这么混乱。
|
https://github.com/KhalilAMARDJIA/SimpleReport | https://raw.githubusercontent.com/KhalilAMARDJIA/SimpleReport/main/_extensions/simplereport/typst-template.typ | typst | // main variables settings
#let main_color = rgb(0,0,255)
#let main_font = ("Latin Modern Roman", "Libertinus Serif", "IBM Plex Serif", "Times New Roman")
#let secondary_font = ("Cantarell", "Helvetica", "IBM Plex Sans", "Noto Sans", "Arial")
#let report(
title: none,
subtitle: none,
authors: none,
date: none,
abstract: none,
abstract-title: none,
version: none,
margin: (top: 4cm, bottom: 4cm, left: 4cm, right: 4cm),
paper: "a4",
lang: "en",
region: "US",
font: main_font,
fontsize: 11pt,
bibliography: none,
doc,
) = {
////////////////////////// Page settings ////////////////////////////
set page(
paper: paper,
margin: margin,
numbering: "1",
)
// Footer settings
// Configure footer
set page(footer: grid(
columns: (1fr, 1fr, 1fr),
align: (left, center, right),
gutter: 0.5em,
if version != none {
text([VERSION: #version], size: 1em)
},
[],
align(counter(page).display("1 of 1", both: true), right),
))
///////////////////////////// Text settings ////////////////////////////
set par(justify: true)
set block(spacing: 1.5em) // space between paragraphs
set text(lang: lang,
region: region,
font: font,
size: fontsize, hyphenate: false
)
// heading settings
set heading(numbering: "1.1")
show heading: set text(font: secondary_font)
show heading.where(level: 1): it => {
pagebreak()
set text(font: secondary_font, weight: 600, size: 1.35em)
it
v(1.5em)
}
show heading.where(level: 2): it => {
set text(font: secondary_font, weight: 400, size: 1.15em)
it
v(1em)
}
show heading.where(level: 3): it => [
#set text(fill: main_color.darken(50%), weight: 400, size: 1em, font: secondary_font)
#block(
smallcaps(it),
inset: (top: 0.5em, bottom: 0.5em, rest: 0em),
)
#v(0.8em)
]
show heading.where(level: 4): set text( weight: 300, size: 1em)
show heading.where(level: 5): set text( weight: 300, size: 1em)
show heading.where(level: 6): set text( weight: 300, size: 1em)
show heading.where(level: 7): set text( weight: 300, size: 1em)
show heading.where(level: 8): set text( weight: 300, size: 1em)
// title settings
if title != none {
align(left)[
#box(
inset: 2em,
width: 100%,
)[
#text(weight: "bold", size: 2.2em, font: secondary_font)[#upper(title)]
]
]
}
if subtitle != none {
align(left)[#box(inset: (bottom: 5em, rest: 2em))[
#text(weight: 300, size: 1.5em, font: secondary_font)[#subtitle]
]]
} else {
v(2em)
}
if date != none {
align(right)[
#v(0.5fr)
#box(
inset: 1em,
)[
#text(date, weight: 600, size: 2em, font: secondary_font)
]]
}
if abstract != none {
block(inset: 2em)[
#text(weight: "semibold")[#abstract-title] #h(1em) #abstract
]
}
// Authors and approuval settings
if authors != none {
heading(
      "Author(s) and approval(s)",
numbering: none
)
let count = authors.len()
let ncols = calc.min(count, 1)
grid(
stroke: 0.5pt + black,
inset: 1em,
columns: (1fr, ) * ncols,
row-gutter: 0.5em,
..authors.map(author =>
align(left)[
#strong(author.name) \
#author.affiliation \
#author.email
]
)
)
}
// add table of content
let custom_outline_fill = box(width: 1fr, repeat(" ."))
context {
show outline.entry: it =>{
text([#it], size: 1em)
}
show outline.entry.where(level: 1): it => [
#v(0.2em)
#strong(text(
it,
font: secondary_font,
weight: 400
))
]
outline(indent: auto, depth: 5, fill: custom_outline_fill)
}
  // Set bullet list settings
set list(
tight: false,
indent: 1.5em,
body-indent: 1em,
spacing: auto,
marker: ([•], [--], [○], [‣]))
// Set number list settings
set enum(
tight: false,
indent: 1.5em,
body-indent: 1em,
spacing: auto,
)
// links settings
show link: set text(fill: main_color)
show ref: set text(fill: main_color)
doc
context {
show outline.entry: it => {
text(it, fill: main_color)
}
outline(
title: [List of Figures],
target: figure.where(kind: "quarto-float-fig"),
fill: custom_outline_fill,
)
outline(
title: none,
target: figure.where(kind: image),
fill: custom_outline_fill,
)
outline(
title: [List of Tables],
target: figure.where(kind: "quarto-float-tbl"),
fill: custom_outline_fill
)
outline(
title: none,
target: figure.where(kind: table),
fill: custom_outline_fill
)
}
// Display bibliography.
if bibliography != none {
pagebreak()
show bibliography: set text(0.85em)
// Use default paragraph properties for bibliography.
show bibliography: set par(leading: 0.65em, justify: false, linebreaks: auto)
bibliography()
}
}
// Figure settings
#show figure.caption: it =>{
// set text(weight: 400, size: 1.1em)
block(it, inset: 1em)
}
// Table settings
#show figure.where(kind: table): set figure.caption (position: top)
#show figure.where(kind: table): set block(breakable: true)
#show table.cell : set text(size: 0.8em)
#show figure.where(kind: "quarto-float-tbl"): set block(breakable: true)
#show figure.where(kind: "quarto-float-tbl"): set table.header(repeat: true)
#set table(
stroke: (_, y) => (
top: if y == 1 { 0.5pt } else if y < 1 { 1pt } else { 0pt },
bottom: if y != 1 { 1pt } else if y == calc.max(y) { 0.5pt } else if y > 1 { 1pt } else { 0pt }
),
) |
|
https://github.com/typst/packages | https://raw.githubusercontent.com/typst/packages/main/packages/preview/unichar/0.1.0/ucd/block-2D80.typ | typst | Apache License 2.0 | #let data = (
("ETHIOPIC SYLLABLE LOA", "Lo", 0),
("ETHIOPIC SYLLABLE MOA", "Lo", 0),
("ETHIOPIC SYLLABLE ROA", "Lo", 0),
("ETHIOPIC SYLLABLE SOA", "Lo", 0),
("ETHIOPIC SYLLABLE SHOA", "Lo", 0),
("ETHIOPIC SYLLABLE BOA", "Lo", 0),
("ETHIOPIC SYLLABLE TOA", "Lo", 0),
("ETHIOPIC SYLLABLE COA", "Lo", 0),
("ETHIOPIC SYLLABLE NOA", "Lo", 0),
("ETHIOPIC SYLLABLE NYOA", "Lo", 0),
("ETHIOPIC SYLLABLE GLOTTAL OA", "Lo", 0),
("ETHIOPIC SYLLABLE ZOA", "Lo", 0),
("ETHIOPIC SYLLABLE DOA", "Lo", 0),
("ETHIOPIC SYLLABLE DDOA", "Lo", 0),
("ETHIOPIC SYLLABLE JOA", "Lo", 0),
("ETHIOPIC SYLLABLE THOA", "Lo", 0),
("ETHIOPIC SYLLABLE CHOA", "Lo", 0),
("ETHIOPIC SYLLABLE PHOA", "Lo", 0),
("ETHIOPIC SYLLABLE POA", "Lo", 0),
("ETHIOPIC SYLLABLE GGWA", "Lo", 0),
("ETHIOPIC SYLLABLE GGWI", "Lo", 0),
("ETHIOPIC SYLLABLE GGWEE", "Lo", 0),
("ETHIOPIC SYLLABLE GGWE", "Lo", 0),
(),
(),
(),
(),
(),
(),
(),
(),
(),
("ETHIOPIC SYLLABLE SSA", "Lo", 0),
("ETHIOPIC SYLLABLE SSU", "Lo", 0),
("ETHIOPIC SYLLABLE SSI", "Lo", 0),
("ETHIOPIC SYLLABLE SSAA", "Lo", 0),
("ETHIOPIC SYLLABLE SSEE", "Lo", 0),
("ETHIOPIC SYLLABLE SSE", "Lo", 0),
("ETHIOPIC SYLLABLE SSO", "Lo", 0),
(),
("ETHIOPIC SYLLABLE CCA", "Lo", 0),
("ETHIOPIC SYLLABLE CCU", "Lo", 0),
("ETHIOPIC SYLLABLE CCI", "Lo", 0),
("ETHIOPIC SYLLABLE CCAA", "Lo", 0),
("ETHIOPIC SYLLABLE CCEE", "Lo", 0),
("ETHIOPIC SYLLABLE CCE", "Lo", 0),
("ETHIOPIC SYLLABLE CCO", "Lo", 0),
(),
("ETHIOPIC SYLLABLE ZZA", "Lo", 0),
("ETHIOPIC SYLLABLE ZZU", "Lo", 0),
("ETHIOPIC SYLLABLE ZZI", "Lo", 0),
("ETHIOPIC SYLLABLE ZZAA", "Lo", 0),
("ETHIOPIC SYLLABLE ZZEE", "Lo", 0),
("ETHIOPIC SYLLABLE ZZE", "Lo", 0),
("ETHIOPIC SYLLABLE ZZO", "Lo", 0),
(),
("ETHIOPIC SYLLABLE CCHA", "Lo", 0),
("ETHIOPIC SYLLABLE CCHU", "Lo", 0),
("ETHIOPIC SYLLABLE CCHI", "Lo", 0),
("ETHIOPIC SYLLABLE CCHAA", "Lo", 0),
("ETHIOPIC SYLLABLE CCHEE", "Lo", 0),
("ETHIOPIC SYLLABLE CCHE", "Lo", 0),
("ETHIOPIC SYLLABLE CCHO", "Lo", 0),
(),
("ETHIOPIC SYLLABLE QYA", "Lo", 0),
("ETHIOPIC SYLLABLE QYU", "Lo", 0),
("ETHIOPIC SYLLABLE QYI", "Lo", 0),
("ETHIOPIC SYLLABLE QYAA", "Lo", 0),
("ETHIOPIC SYLLABLE QYEE", "Lo", 0),
("ETHIOPIC SYLLABLE QYE", "Lo", 0),
("ETHIOPIC SYLLABLE QYO", "Lo", 0),
(),
("ETHIOPIC SYLLABLE KYA", "Lo", 0),
("ETHIOPIC SYLLABLE KYU", "Lo", 0),
("ETHIOPIC SYLLABLE KYI", "Lo", 0),
("ETHIOPIC SYLLABLE KYAA", "Lo", 0),
("ETHIOPIC SYLLABLE KYEE", "Lo", 0),
("ETHIOPIC SYLLABLE KYE", "Lo", 0),
("ETHIOPIC SYLLABLE KYO", "Lo", 0),
(),
("ETHIOPIC SYLLABLE XYA", "Lo", 0),
("ETHIOPIC SYLLABLE XYU", "Lo", 0),
("ETHIOPIC SYLLABLE XYI", "Lo", 0),
("ETHIOPIC SYLLABLE XYAA", "Lo", 0),
("ETHIOPIC SYLLABLE XYEE", "Lo", 0),
("ETHIOPIC SYLLABLE XYE", "Lo", 0),
("ETHIOPIC SYLLABLE XYO", "Lo", 0),
(),
("ETHIOPIC SYLLABLE GYA", "Lo", 0),
("ETHIOPIC SYLLABLE GYU", "Lo", 0),
("ETHIOPIC SYLLABLE GYI", "Lo", 0),
("ETHIOPIC SYLLABLE GYAA", "Lo", 0),
("ETHIOPIC SYLLABLE GYEE", "Lo", 0),
("ETHIOPIC SYLLABLE GYE", "Lo", 0),
("ETHIOPIC SYLLABLE GYO", "Lo", 0),
)
|
https://github.com/The-Notebookinator/notebookinator | https://raw.githubusercontent.com/The-Notebookinator/notebookinator/main/gallery/linear.typ | typst | The Unlicense | #import "../lib.typ": *
#import themes.linear: linear-theme
#import themes.linear.components
#show: notebook.with(
theme: linear-theme,
team-name: "53C",
season: "Over Under",
year: "2023-2024",
)
#create-frontmatter-entry(title: "Color Coding Guide")[
This key represents each step of the Engineering Design Process with a color that corresponds to the Engineering Notebooking Rubric categories. In the table of contents, each page is assigned to a color which summarizes the content on that page. However, if a page highlights multiple sections of the Engineering Design Process, headings on the page will be colored accordingly.
#grid(
columns: (1fr, 5fr),
gutter: 10pt,
// Row 1
square(size: 1in, fill: components.dark-red),
align(horizon,
[
#set text(size: 14pt)
Identify the Problem
#set text(size: 12pt)
Identifies the game and robot design challenges in detail at the start of each design process cycle with words and pictures. States the goals for accomplishing the challenge.
]),
// Row 2
square(size: 1in, fill: components.dark-yellow),
align(horizon,
[
#set text(size: 14pt)
Brainstorm, Diagram, or Prototype Solutions
#set text(size: 12pt)
Lists three or more possible solutions to the challenge with labeled diagrams. Citations provided for ideas that came from outside sources such as online videos or other teams.
]),
// Row 3
square(size: 1in, fill: components.dark-green),
align(horizon,
[
#set text(size: 14pt)
Select the Best Solution and Plan
#set text(size: 12pt)
Explains why the solution was selected through testing and/or a decision matrix. Fully describes the plan to implement the solution.
]),
// Row 4
square(size: 1in, fill: components.dark-blue),
align(horizon,
[
#set text(size: 14pt)
Build the Solution
#set text(size: 12pt)
Records the steps to build the solution. Includes enough detail that the reader can follow the logic used by the team to develop their robot design, as well as recreate the robot design from the documentation.
]),
// Row 5
square(size: 1in, fill: components.dark-purple),
align(horizon,
[
#set text(size: 14pt)
Program the Solution
#set text(size: 12pt)
Records the steps to program the solution. Includes enough detail that the reader can follow the logic used by the team to develop their robot code, as well as recreate the robot code from the documentation.
]),
// Row 6
square(size: 1in, fill: components.dark-pink),
align(horizon,
[
#set text(size: 14pt)
Test the Solution
#set text(size: 12pt)
Records all the steps to test the solution, including test results.
]),
// Row 7
square(size: 1in, fill: components.surface-4),
align(horizon,
[
#set text(size: 14pt)
Reflect on Prior Solutions
#set text(size: 12pt)
Evaluates the decisions and mistakes made in the past to better prepare and overcome challenges in the future.
])
)
We also wanted to create a new category: Reflect on Prior Solutions. We believe this is an important step in the Engineering Design Process and we want to highlight our reflections using our color coding guide.
]
#create-frontmatter-entry(title: "Table of Contents")[
#components.toc()
]
#create-body-entry(
title: "Title",
type: "identify",
date: datetime(year: 1111, month: 11, day: 1),
)[
= Heading 1
#lorem(50)
#grid(
columns: (1fr, 1fr),
gutter: 15pt,
lorem(30),
align(
center + horizon,
image("../logo.png"),
),
)
== Subheading 1
#components.pro-con(
pros: [
#list(
[#lorem(10)],
[#lorem(12)],
[#lorem(15)],
)
],
cons: [
#list(
[#lorem(12)],
[#lorem(10)],
)
],
)
== Subheading 2
#components.decision-matrix(
properties: (
(name: "Category 1"),
(name: "Category 2"),
(name: "Category 3", weight: 2),
),
("Decision", 3, 1, 4),
("Matrix", 2, 3, 5),
)
= Heading 2
```cpp
#include <iostream>
#include <cstdio>
int main() {
printf("Hello world!");
return 0;
}
```
]
#create-body-entry(
title: "Title",
type: "brainstorm",
date: datetime(year: 1111, month: 11, day: 2),
)[]
#create-body-entry(
title: "Title",
type: "brainstorm",
date: datetime(year: 1111, month: 11, day: 2),
)[]
#create-body-entry(
title: "Title",
type: "brainstorm",
date: datetime(year: 1111, month: 11, day: 2),
)[]
#create-body-entry(
title: "Title",
type: "brainstorm",
date: datetime(year: 1111, month: 11, day: 2),
)[]
#create-body-entry(
title: "Title",
type: "decide",
date: datetime(year: 1111, month: 11, day: 3),
)[]
#create-body-entry(
title: "Title",
type: "build",
date: datetime(year: 1111, month: 11, day: 3),
)[]
#create-body-entry(
title: "Title",
type: "build",
date: datetime(year: 1111, month: 11, day: 3),
)[]
#create-body-entry(
title: "Title",
type: "build",
date: datetime(year: 1111, month: 11, day: 3),
)[]
#create-body-entry(
title: "Title",
type: "program",
date: datetime(year: 1111, month: 11, day: 3),
)[]
#create-body-entry(
title: "Title",
type: "program",
date: datetime(year: 1111, month: 11, day: 3),
)[]
#create-body-entry(
title: "Title",
type: "test",
date: datetime(year: 1111, month: 11, day: 4),
)[]
#create-body-entry(
title: "Title",
type: "test",
date: datetime(year: 1111, month: 11, day: 4),
)[]
#create-body-entry(
title: "Title",
type: "build",
date: datetime(year: 1111, month: 11, day: 5),
)[]
#create-body-entry(
title: "Title",
type: "build",
date: datetime(year: 1111, month: 11, day: 5),
)[]
#create-body-entry(
title: "Title",
type: "brainstorm",
date: datetime(year: 1111, month: 11, day: 5),
)[]
#create-body-entry(
title: "Title",
type: "brainstorm",
date: datetime(year: 1111, month: 11, day: 5),
)[]
#create-body-entry(
title: "Title",
type: "decide",
date: datetime(year: 1111, month: 11, day: 5),
)[]
#create-body-entry(
title: "Title",
type: "program",
date: datetime(year: 1111, month: 11, day: 5),
)[]
#create-body-entry(
title: "Title",
type: "test",
date: datetime(year: 1111, month: 11, day: 5),
)[]
#glossary.add-term("Lorem")[
#lorem(10)
]
#glossary.add-term("Ipsum")[
#lorem(25)
]
#glossary.add-term("Dolor")[
#lorem(11)
]
#glossary.add-term("Sit")[
#lorem(5)
]
#create-appendix-entry(title: "Glossary")[
#lorem(50)
#components.glossary()
] |
https://github.com/r8vnhill/apunte-bibliotecas-de-software | https://raw.githubusercontent.com/r8vnhill/apunte-bibliotecas-de-software/main/Unit2/Inheritance.typ | typst | == Herencia
La herencia es un principio fundamental de la programación orientada a objetos que permite a una
clase derivar o heredar propiedades y comportamientos (métodos) de otra clase. Esto establece una
relación jerárquica entre la clase superior (superclase) y la clase derivada (subclase).
=== Propósito y Beneficios de la Herencia
- *Especialización*: La herencia permite crear nuevas clases a partir de clases existentes,
proporcionando un método eficaz para reutilizar y extender el código existente.
- *Reutilización de Código*: Aunque es un efecto secundario, la herencia facilita la reutilización
de código, permitiendo que las subclases utilicen métodos y propiedades de la superclase sin
necesidad de reescribirlos.
- *Jerarquía de Objetos*: Organiza y estructura el código en una jerarquía natural que refleja
relaciones reales entre entidades, facilitando el mantenimiento y la comprensión del código.
La herencia debe usarse con un propósito claro y con coherencia lógica:
- *Relación "Es un(a)"*: La herencia debe reflejar una relación lógica y natural de "es un(a)"
entre la superclase y la subclase. Por ejemplo, un `Perro` es un tipo de `Mamífero`, por lo tanto,
es lógico que `Perro` herede de `Mamífero`.
- *Evitar la Herencia Improcedente*: No se debe utilizar la herencia simplemente para reutilizar
código si no existe una relación lógica clara. Por ejemplo, decir que un `Perro` es un tipo de
`Lobo` solo para reutilizar métodos como `aullar()` es incorrecto, pues aunque ambos pueden
compartir comportamientos, son especies distintas con características propias.
|
|
https://github.com/The-Notebookinator/notebookinator | https://raw.githubusercontent.com/The-Notebookinator/notebookinator/main/themes/radial/metadata.typ | typst | The Unlicense | #import "./icons/icons.typ"
#import "./colors.typ": *
// These should really be with their respective files, but that causes a cyclic import error, so I put them here.
// @typstyle off
#let entry-type-metadata = (
"identify": (icon: icons.question-mark, color: yellow),
"brainstorm": (icon: icons.light-bulb, color: orange),
"decide": (icon: icons.target, color: blue),
"build": (icon: icons.hammer, color: red),
"program": (icon: icons.terminal, color: purple),
"test": (icon: icons.flask, color: green),
"management": (icon: icons.bar-chart, color: surface-4),
"notebook": (icon: icons.page, color: pink),
)
// @typstyle off
#let admonition-type-metadata = (
"note": (icon: icons.pencil, color: green, title: "Note"),
"warning": (icon: icons.warning, color: red, title: "Warning"),
"example": (icon: icons.web, color: purple, title: "Example"),
"quote": (icon: icons.quotes, color: gray, title: "Quote"),
"equation": (icon: icons.function, color: orange, title: "Equation"),
"decision": (icon: icons.target, color: blue, title: "Final Decision"),
"build": (icon: icons.hammer, color: red, title: "Build Complete"),
)
|
https://github.com/typst-jp/typst-jp.github.io | https://raw.githubusercontent.com/typst-jp/typst-jp.github.io/main/docs/changelog/welcome.md | markdown | Apache License 2.0 | ---
description: |
Learn what has changed in the latest Typst releases and move your documents
forward.
---
# Changelog
Learn what has changed in the latest Typst releases and move your documents
forward. This section documents all changes to Typst since its initial public
release.
## Versions
- [Typst 0.12.0]($changelog/0.12.0)
- [Typst 0.11.1]($changelog/0.11.1)
- [Typst 0.11.0]($changelog/0.11.0)
- [Typst 0.10.0]($changelog/0.10.0)
- [Typst 0.9.0]($changelog/0.9.0)
- [Typst 0.8.0]($changelog/0.8.0)
- [Typst 0.7.0]($changelog/0.7.0)
- [Typst 0.6.0]($changelog/0.6.0)
- [Typst 0.5.0]($changelog/0.5.0)
- [Typst 0.4.0]($changelog/0.4.0)
- [Typst 0.3.0]($changelog/0.3.0)
- [Typst 0.2.0]($changelog/0.2.0)
- [Typst 0.1.0]($changelog/0.1.0)
- [Earlier]($changelog/earlier)
|
https://github.com/pedrofp4444/BD | https://raw.githubusercontent.com/pedrofp4444/BD/main/report/content/[6] Conclusão e Trabalhos Futuros/main.typ | typst | #import "conclusão.typ": conclusão
#let conclusão = {
[
= Conclusões e Trabalho Futuro
#conclusão
]
} |
|
https://github.com/0x546974616e/typst-resume | https://raw.githubusercontent.com/0x546974616e/typst-resume/main/template/globals.typ | typst |
#let column-size = 161pt
#let spacing = (
small: 2pt,
medium: 4pt,
large: 8pt,
)
#let colors = (
fg1: rgb(60, 120, 216), // light blue
fg2: rgb(17, 85, 204), // dark blue
fg3: rgb("#434343"), // dark gray
fg4: white,
bg1: white,
bg2: rgb(60, 120, 216), // light blue
bg3: rgb(17, 85, 204), // dark blue
)
// State are boring now...
#let lang() = sys.inputs.at("lang", default: "en")
#let get-lang(object) = {
if type(object) != "dictionary" {
return object
}
let _lang = lang()
if not _lang in object {
// TODO: Better error handling.
panic(_lang + " is missing.")
}
return object.at(_lang)
}
|
https://github.com/Slyde-R/not-jku-thesis-template | https://raw.githubusercontent.com/Slyde-R/not-jku-thesis-template/main/template/content/Conclusion.typ | typst | MIT No Attribution |
#import "../utils.typ": todo, silentheading, flex-caption
= Conclusion
#todo[Replace this chapter!]
== Summary of Findings
This thesis investigated feline manipulation tactics and their impact on human behavior through a multi-method approach involving observational studies, surveys, and interviews. The key findings of this research can be summarized as follows:
- *Vocalizations:* Cats use a variety of vocalizations, including meows, purrs, and chirps, to manipulate human behavior. Meowing, particularly when high-pitched and frequent, was primarily used to solicit food or attention. Purring was employed both to seek comfort and to reinforce affectionate interactions. Chirps and trills were less common but effectively used to prompt play and interaction.
- *Body Language:* Body language plays a crucial role in feline manipulation. Kneading, tail positioning, and eye contact were observed to be significant in influencing human responses. Kneading was used to solicit attention and comfort, while tail positioning indicated the cat’s emotional state and intent. Slow blinking and direct eye contact helped build trust and encouraged affection.
- *Attention-Seeking Behaviors:* Cats engage in various attention-seeking behaviors such as climbing on furniture, rubbing against humans, and bringing objects to prompt interaction. These behaviors were effective in gaining attention and initiating play, often leading to increased human engagement and positive interactions.
- *Human Responses:* Survey and interview data revealed that humans generally respond positively to feline manipulation tactics. Many participants reported adjusting their routines and behaviors to accommodate their cats' needs. While these manipulative behaviors often led to enhanced bonding and affection, some participants also experienced frustration or a sense of being manipulated.
== Implications of Findings
The findings of this study have several implications for understanding the dynamics of human-feline relationships:
- *Understanding Feline Behavior:* By identifying and categorizing feline manipulation tactics, this research provides valuable insights into how cats influence human behavior. This understanding can help cat owners recognize and interpret their pets’ needs and motivations more effectively.
- *Human-Cat Interaction:* The study highlights the complex nature of human-cat interactions and the role of manipulation in strengthening the bond between cats and their owners. Recognizing and responding to feline manipulation can lead to more harmonious and satisfying relationships.
- *Behavioral Adjustments:* The data suggests that while cats’ manipulation tactics can lead to positive outcomes, they may also necessitate changes in human routines and behaviors. Being aware of these dynamics can help owners manage their interactions with their cats more consciously.
== Limitations of the Study
Several limitations were noted during the research:
- *Sample Size:* The study's sample size was limited, which may affect the generalizability of the findings. A larger and more diverse sample could provide a broader perspective on feline manipulation tactics.
- *Observational Bias:* Observations were conducted in naturalistic settings, which may have introduced observational bias. While efforts were made to minimize this, the presence of observers could have influenced the cats’ behaviors.
- *Self-Reporting Bias:* Survey and interview data were subject to self-reporting bias, as participants' perceptions and recollections may not always accurately reflect their experiences.
== Recommendations for Future Research
To build on the findings of this study, future research could consider the following recommendations:
- *Larger Sample Sizes:* Including a larger and more varied sample of cats and human participants can enhance the generalizability of the results and provide a more comprehensive understanding of feline manipulation tactics.
- *Longitudinal Studies:* Conducting longitudinal studies could offer insights into how feline manipulation tactics and human responses evolve over time and in different life stages of both cats and their owners.
- *Experimental Research:* Experimental studies could explore the causal relationships between specific feline behaviors and human responses, providing a more detailed understanding of how manipulation tactics affect human behavior.
== Final Thoughts
This research contributes to a deeper understanding of the complex interactions between cats and their human companions. By examining the manipulation tactics employed by felines and their effects on human behavior, the study sheds light on the nuanced ways in which cats influence their environment. The insights gained from this research not only enhance our knowledge of feline behavior but also offer practical guidance for improving human-cat relationships.
https://github.com/polarkac/MTG-Stories | https://raw.githubusercontent.com/polarkac/MTG-Stories/master/stories/055%20-%20Murders%20at%20Karlov%20Manor/010_Episode%2010%3A%20Roots%20of%20Decay.typ | typst | #import "@local/mtgstory:0.2.0": conf
#show: doc => conf(
"Episode 10: Roots of Decay",
set_name: "Murders at Karlov Manor",
story_date: datetime(day: 18, month: 01, year: 2024),
author: "<NAME>",
doc
)
The vine around Kaya's ankle pulled tight enough to feel like it was grinding against the bone. Not that pain was the problem; she was an assassin and had been called on to serve as a warrior more times than she liked to ponder. Pain was an old friend at this point. Pain meant she wasn't dead yet, and as the vine whipped her toward what was sure to be a devastating impact with the ground, she knew without question that she wasn't ready to be. Death came for everyone, even Planeswalkers—and wow, did she know that was true—but she'd never seen any sign that the dead could walk between planes. When she died, she'd be staying wherever she fell.
It was a devastating thought, forming fast as she was pulled through the air. Did the ghosts of Planeswalkers haunt the planes where they died, never able to go home, wherever that was? Or did they cross the Blind Eternities one last time, one final gentle passage to the place their hearts could be at peace?
And maybe big questions of cosmology could wait until she wasn't about to get pancaked on Ravnican soil. Because one thing she could say for sure: she did #emph[not] want to die on Ravnica. She refused to be just one more Orzhov ghost, one more eternal cog in a machine that couldn't afford to stop turning, ever, no matter how many people were ground up inside it.
Again, she tried to phase free of the vine, and again, she felt the Ravnican Worldsoul pushing back against her, refusing to let her go. But her own magic wasn't the only thing she had. If Oba wanted to wield Mat'Selesnya like a weapon, there was no reason for Kaya not to respond in kind. She reached deep into herself, past her fear at what was about to happen, past her grief at what had already been, past even her own connection to the lands of the dead, necromantic without being a part of a necromancer's standard bag of tricks. Down she reached, trying hard to remain calm when she knew that any second now, the whipping vine would finish its journey through the air and slam her into the ground hard enough to break every bone in her comparatively fragile body.
Down, until she reached the place where her spark burned, that small fragment of the Multiverse that tied her to the Blind Eternities and made her what she was, always precious, ever since the moment she'd felt it catch fire at the core of everything she was, changing the way she understood reality forever. Even more precious now that it was so very, very rare. Planeswalkers had always been a rarity, but she couldn't imagine another time when their numbers had fallen so quickly and so brutally.
The Blind Eternities would carry the scars of Phyrexia forever, and so would she.
Wrapping mental fingers tight around her spark, Kaya called the Blind Eternities, asking them not for an exit but for an escape; asking for the aid she needed to push back against the living soul of an entire plane.
She barely felt her body turn intangible enough to slip through the vine. Oba howled fury and disbelief as Kaya, still semi-solid, landed lightly on the floor of Vitu-Ghazi, glowing lambent purple with her own magic—but also prismatic, bright as the skies over Kaldheim, as she held the connection to the Blind Eternities that would normally have whisked her away to some other place or plane, some other set of problems.
Drawing her daggers once more, Kaya gave them an experimental spin, feeling the way they pulled at her wrists, as intangible as the rest of her but solid enough to have presence and heft in her hands. It was an illusion, she knew, one more thing she shared with the dead; they felt as if the things they held were real, too, and were sometimes confused when the living disagreed.
But that was the big difference between Kaya Far-Traveler and the actual dead. When she struck the living, they felt it.
Vitu-Ghazi had descended even further into chaos during her trip to the sky and back. Krenko was no longer visible, just a lump of roots and branches on the cusp of being absorbed into the wall. The guild leaders had been entangled and subdued, even Ral, whose body juddered and shook with the waves of lightning that ran across it, apparently outside of his control. Proft was still kneeling next to Etrata, now bound to the floor by loop upon loop of root, unable to take his eyes off the fallen assassin. Someone had to finish this.
#emph[Kaya] had to finish this.
She gave her daggers another spin and started across the uneven floor toward Oba, phasing easily through the obstacles in her path, including the motionless body of Tolsimir.
Oba jerked around to face Kaya, snarling. "Why won't you #emph[die] ?" Oba demanded.
"Lots of people have asked me that," said Kaya, phasing one dagger back to solidity just long enough to sever a swinging branch. "Some of them were a lot more frightening than you. Rage-twisted dryad lady? You aren't even in the same #emph[league] as some of the things I've seen."
She shuddered theatrically, continuing her inexorable march forward. Oba swung more branches at her. She chopped those down as well.
Something wrapped around her waist, jerking her to a stop. Kaya looked down and was almost impressed to see a phantom branch holding her where she was.
"Clever," she said, unable to keep the approval out of her tone. The Ravnican Worldsoul had to encompass #emph[all] things on Ravnica, #emph[all] aspects of the plane, and that included the dead. A severed branch wouldn't normally manifest its own ghost, but the potential existed. She'd seen ghost trees in other places, and not all of them had been as functional or as worshipped as Kaldheim's World Tree or Phyrexia's Invasion Tree. They haunted the forests where they'd fallen, making their slow vegetable desires known.
This tree, once part of Vitu-Ghazi and controlled by the furiously enraged Oba, was anything but slow, and if it could be said to have desires, it desired only to stop her advance. It was also phantasmal, with no more substance than any ghost.
#emph[Kaito would tell you the best place to walk in a storm is between the raindrops; you just have to move faster than they fall] , she thought and turned solid again, free from the entangling ghost in an instant. Several more tangible branches lashed out to grab for her. She chopped the first two aside with her daggers before phasing through the third, moving into the space where the raindrops—or the potential murder weapons—were not. They grasped, she evaded. It was like a dance, a swift, potentially deadly dance, around the bodies of her friends and the fallen and the ones who were both.
A series of roots ripped through what was left of the floor and wrapped around her ankles. When she tried to step through them, she once again hit the resistance of the Ravnican Worldsoul, refusing to allow her to phase, refusing to let her go. #emph[So much for stepping between the raindrops] , she thought.
#figure(image("010_Episode 10: Roots of Decay/01.png.jpg", width: 100%), caption: [Art by: <NAME>], supplement: none, numbering: none)
Well, Kaito wasn't the only person she'd had the opportunity to learn from. Koth was nothing if he wasn't a lesson in working with your environment, understanding and caring for the world that made you regardless of how difficult it became. She was a daughter of Tolvada, not Ravnica, but she had been here often enough and long enough that there was little chance the plane didn't know who she was. Ravnica understood her, possibly better than her homeworld, and even if she didn't want to haunt this plane when she died, she had to admit it was likely.
If she was touching the Ravnican Worldsoul, even when it was under Oba's control like this, the plane was aware. The plane knew what was happening. Maybe she could use that.
Tyvar's love of the natural world and the easy way he had been able to turn it to his own advantage flashed across her mind as another branch grabbed for her and she phased through it, her spark pulsing a little brighter inside her chest as if to mark the moment. And Nahiri, who had spoken to the stones around her—and the stones had seemed to answer back, hadn't they? Implying that they could somehow understand …
"Hey," she said, voice loud enough that Oba scowled in evident confusion. "I don't #emph[know] if you can understand me, but I'm taking a gamble here, because we are what you made us, and you are what we made you. Gods and monsters, heroes and villains, and a city to hold and keep us all. You don't choose which ones of us you care about. You don't take sides. Well, #emph[she] wants you to take sides. And I say a Ravnica that chooses one side over another is no Ravnica at all. She wants to turn you into something you're not." She paused to shoot a venomous glance at Oba. "Just like the Phyrexians did."
Oba recoiled, looking for a moment like she might understand the scope of what she'd done. Kaya was almost sorry to see that. It was easier to fight the monsters when you knew they were beyond all helping. But this face of Trostani was responsible for Teysa's death, and so she didn't think a single look of shock was going to slow her down once she finally got free.
Looking back down at the roots that held her, Kaya added a desperate, "Ravnica. #emph[Please] ."
This time when she tried to phase, the roots passed right through her. Kaya raced the rest of the way across the room to Oba, jumping over roots, phasing through branches, and turning solid when the ghosts of branches lashed through her. Oba had a tighter grasp on the Worldsoul now; Kaya wasn't going to talk her way around the dryad's rage a second time. Kaya's rhythm had to be precise, as exact as Teysa tallying up the books. Every step had to balance the account opened by the step before it.
The thought of Teysa turned out to be just what Kaya needed to find her focus. She owed a debt to the Orzhov. Teysa had forgiven part of it by taking over the leadership of the guild, and what had that gotten her? Murdered by a grief-poisoned Trostani who had been unable to tell friend from foe in the aftermath of the invasion. Dead and laying in state and probably to return—powerful Orzhov usually did—but never to be the same. Never to be #emph[alive] . Her death was a line on a ledger that Kaya would never balance, never erase, and knowing that helped her move between, around, and through the obstacles Oba threw into her path.
And then Kaya was stepping through a fallen bough as thick around as her own thigh, almost face to face with Oba for the first time since the chaos had begun. She blinked, and for a moment, she thought she #emph[saw] Teysa, off to one side, gesturing for her to get on with it. Her attention snapped back to Oba as she snarled, beginning to spit some curse or insult at Kaya, and all Kaya could feel was grief, and weariness.
Her foot hit something as she adjusted her stance, and she glanced down to see an Agency barrier ward capsule on the floor. Ducking under another wildly swinging branch, she grabbed it, hoping its twin was nearby. Barrier wards weren't meant to be used in detention; they were for sealing off crime scenes that had been claimed by the authority of the Agency.
Vitu-Ghazi was a crime scene. And Trostani had acknowledged the authority of the Agency when she allowed them to hold their meeting here. Kaya straightened, anchor in her hand.
"No," she said and hit the button to deploy the barrier. Cascades of magical light shot out, struggling to wrap around Oba's form. Again, Kaya thought she saw Teysa out of the corner of her eye, grabbing for the ribbons and guiding them back toward Oba, but she couldn't look, couldn't hope one way or the other, couldn't allow herself to #emph[see] —
Oba shrieked, thrashing against the wards. She was going to break loose. It was inevitable. Barrier wards were meant to be initialized from two ends at the same time, allowing them to seal off a scene without leaving anything exposed. She couldn't do it alone, and she didn't have anyone else: they were all tied down, all confined. She was going to lose. Again.
Etrata opened her eyes.
The Dimir assassin, who had been lying motionless since she was impaled and had not been entwined by any of the roots that seized her allies, rolled onto her knees, blood pouring from the wound in her chest, and grabbed the second barrier ward anchor from where it lay forgotten on the floor. She hit the button as she climbed to her feet, and more ribbons lashed out, twining around Oba, pinning her in place as she howled her rage into the air. Together, Etrata and Kaya pulled the ribbons tighter and tighter—until they slipped out of Kaya's hands.
Frantic not to lose the ground they'd gained, Kaya grabbed the first thing she could find, her hands closing around one of Oba's ghost branches. She forced a jolt of necromantic energy into it and felt it bow to her command, becoming #emph[hers] rather than Oba's. Snapping the newly flexible branch around, she whipped it into the mass of cords, using it to catch and confine the struggling dryad.
The bindings were still stretched taut, like two people had hold of them. Once again, Kaya refused to allow herself to look. If it #emph[was] Teysa, she would be distracted, and she couldn't afford that right now. Ravnica couldn't afford that. If it wasn't, the disappointment would be even worse.
They pulled until Oba's struggles stopped, until she was wound up in the barrier ward and the ghost Kaya had stolen from Vitu-Ghazi, as captive as a fly in the center of a spider's web.
#figure(image("010_Episode 10: Roots of Decay/02.png.jpg", width: 100%), caption: [Art by: <NAME>], supplement: none, numbering: none)
The room stopped moving.
"Is it over?" asked Yarus.
"Not quite," said Etrata. "But look …"
The other two heads of Trostani were beginning to stir. They straightened, then reached for their struggling sister. Each of them pressed a hand to her temple, and she went limp.
Something inside Kaya collapsed at the same time. She could have killed Oba during the fight. Now, with the dryad captive and unconscious, it would be too much like an execution. She couldn't do it. No matter how much she wanted to, she couldn't.
The last of the strength seemed to go out of Etrata as Oba stopped fighting. She collapsed then, face down on the floor, and didn't move again. Kaya straightened, finally allowing herself to look at the other end of the barrier ward. There was no one there, of course. That would have been too much to hope for. Her shoulders sagged as she looked around at the room of tangled wooden structures and trapped guild leaders. "Good meeting, all," she said wearily. "We should never, ever do this again as long as any of us live. That work for you?"
In his cage of roots, Ral began to laugh, and after a moment, Kaya did the same. They laughed not because they were amused, but because they were alive, and sometimes relief can look a lot like joy when it's seen in the right light.
Everything depends on how you're looking at it.
#v(0.35em)
#line(length: 100%, stroke: rgb(90%, 90%, 90%))
#v(0.35em)
Lavinia waited until the two Planeswalkers had laughed themselves into silence before she cleared her throat and said, "If you're quite done, the rest of us would appreciate being cut free."
"Yeah, I guess cleanup comes next," Kaya said and moved to cut Proft free of the floor, offering her hand to help him to his feet. He nodded and took two steps before dropping to his knees and gathering the fallen Etrata into his arms.
"Were you playing dead that entire time?" he asked.
Etrata opened an eye. "You're the great detective. You tell me," she said. Her voice, while strained and weak, was perfectly clear.
Proft blinked several times before tilting his face up toward the now distant ceiling. "What am I going to do with you?"
"Didn't you know?" asked Etrata. "House Dimir is all but gone, our guildmaster dead. Our scattered assassins are like stray dogs. If you take one of us home, you have to keep us."
Proft looked at her sharply.
"Besides, you need an assistant, or you're going to get yourself murdered in no time at all. I'm pretty sure half this room would pay me to do it." Etrata looked briefly speculative. "Maybe I could get them to bid against each other."
"You wouldn't dare."
"True enough." Her expression softened. "An assistant needs a detective to protect. We're an ecosystem now, you and me."
"I can't pay you."
"I'll work on the side."
"And we're going to have to talk about your stubborn insistence that Lazav is gone. We both know better."
"A girl needs her secrets."
Kaya laughed again. It wasn't that funny, but it was a relief to be alive to laugh, to be standing in a ruined room filled with people who weren't afraid of each other, to know Teysa's death would be avenged. Teysa, and many others, some of whom might never be identified, but Oba had been right about one thing: you grieved harder for the people you knew than you grieved for strangers. That was the way the heart worked. That was the way the heart #emph[had] to work, or there would be nothing but grief ever again, anywhere in the Multiverse. It was better this way. Selfish and small, certainly, when compared to the vastness of something like the Blind Eternities, or even Vitu-Ghazi, but sometimes small was safer, because small was something that could be understood. Small was something that could be kept.
Kaya wanted to keep things for a while.
Around the room she went, cutting everyone free, helping Aurelia pull the grasping branches from her wings without breaking any additional feathers, trying to avoid accidentally cutting Krenko, who was swearing and thrashing in his cocoon of roots, seemingly not entirely clear on the fact that the battle was over.
Vitu-Ghazi no longer moved; Kaya wasn't even sure the great tree was alive. They hadn't escaped this battle unscathed, even if most of them were, surprisingly, going to walk away.
Ral fingered a tear in the hem of his jacket. "Tomik is going to kill me," he said. "He made me promise not to get into life-threatening fights with people I assumed were allies unless he was in the room, since apparently the presence of my spouse will keep me from behaving in what he called an 'unreasonably reckless' manner."
"If he asks, I promise to tell him you didn't start this fight, and you spent most of it literally rooted to the floor, not taking any unnecessary risks," said Kaya.
"Really?"
"I mean, it's true. And sure, you would have been reckless if you'd been able to. Want me to tell him that?"
"I would prefer if you didn't."
Kaya smiled. "Then I'll stick with my first story."
Once everyone was loose, Proft helped guide a limping Etrata out of the room, taking her vague needling with good grace, while the others left under their own power—all save Krenko, who found himself dragged along by a furious Aurelia determined to see #emph[someone] arrested after this day's chaos. Kaya privately suspected Aurelia would find the law didn't consider evidence gathered from the ranting of a dryad sufficient to convict, but that was a problem for later.
There were so many problems for later.
As the others left, Kaya approached Trostani. Ses and Cim were weeping. Oba dangled limply in her web of barrier ward, giving no sign she was still alive.
Stopping a respectful distance away, Kaya bowed her head to wait. One-third murderer or not, Trostani led Selesnya until someone said otherwise.
Finally, voice dull, Ses asked, "What do you need from us?"
"I apologize for intruding," said Kaya, looking up. "But I need to know. Oba …" She trailed off, unsure how to even frame the question.
Cim sighed. "The Guildpact has no provision for our situation. Oba remains one-third of us; we have no way to sever her. Nor would we if we could. She is still our sister." She paused, seeming to realize how tactless that had sounded. "I regret what you have lost."
"Me, too," said Kaya. She stopped then, biting her tongue. Anything else would have been facile.
"When a branch is rotten, it must be removed for the health of the tree as a whole," said Cim. "Left where it is, the rot will spread, and the tree will die. We will mourn our sister, who died in the invasion. What remains is not what was."
Ses made a strained, hiccupping sound, putting her hands over her face.
"We would prefer to be alone with our grief," said Cim. "We must go deep into our thoughts, and try to restore our connection to Mat'Selesnya, to see if she even wishes us to remain as we are, to speak for her, after what our sister has done. Perhaps we, too, are coming to an end. <NAME> will speak for the Conclave while we are in communion, and perhaps even after our return, depending on the will of Mat'Selesnya. We have called for her. It may be you do not see us again."
Ses lowered her hands. "Everything ends. Trees root, they grow, they spread their leaves to the sun, they live for a time, and when that time is done, they die. If Mat'Selesnya says our time is done, we will go."
"Ravnica will judge us for what our sister did," said Cim. "Now go."
Feeling more exhausted than she would have thought possible, Kaya looked around. She was alone with Trostani and the fallen Tolsimir. She supposed Selesnya had other servants, or perhaps the remaining members of Tolsimir's "most faithful," who would come and clean up the damage. If not them, then Emmara would arrange for it upon her arrival. Still, she hesitated.
"The Guildpact …"
"Will be available to any who need it," said Ses with surprising clarity. "Vitu-Ghazi is not fallen. Selesnya stands, for all that has happened. We will do our duty as required. If there is cause to consult the original Guildpact, it will be here for those who seek it."
That was the best she could hope for under the circumstances. Kaya nodded and made her retreat, easing the broken doors as close to closed behind her as she could.
No one had waited for her. She couldn't exactly blame them.
Making her way out of the manor, she realized with a start that she hadn't seen any sign of Kellan since Oba threw him off the roof. She quickened her pace, hoping she wasn't going to step outside and find her partner smeared across the stones of the courtyard.
#emph[Ezrim would have come to tell me if one of his agents was dead] , she thought, somewhat frantic. That didn't still the small voice in her head that was trying to remind her how often her friends got hurt, how frequently she walked away and they didn't.
Whoever had left most recently had taken the time to close the doors. Kaya could respect that. What she couldn't do was slow down now that she remembered what she needed to be worried about. She stepped through the door, almost walking straight into Kellan.
He blinked at her. She blinked back. He recovered his composure first.
"Ezrim told me to stay outside while you talked to Trostani, but not to go anywhere before you knew I was okay," said Kellan. "I'm okay. Are you okay?"
"I wish he'd told you to come in, but I'm glad he asked you to wait," said Kaya. "No, I don't think I'm okay. I don't think I'm okay at all."
Then, for the first time since she'd found Teysa dead in her office, Kaya allowed herself to do the unthinkable. In front of a very startled Kellan, she allowed herself to cry.
#v(0.35em)
#line(length: 100%, stroke: rgb(90%, 90%, 90%))
#v(0.35em)
Three days later, Kaya sat in a pew at Karlov Cathedral, staring at her hands as the Orzhov pallbearers selected for the honor of bringing Teysa to the altar carried her remains to the front.
There was a hollow boom as they set the coffin down. The organist played the traditional march that accompanied the entry of honored Orzhov dead. And a wry, irritated voice beside Kaya said, "What is this piece of music called, anyway?"
"I think it's called 'Waltz for the Deathless,'" said Kaya, not quite daring to raise her head.
"Funny." Teysa sniffed. "I always thought it was called 'You Can Take It with You.'"
Kaya finally let herself look.
Teysa was next to her, only the faint transparency of her form betraying the fact that her body was on the altar while her ghost was with Kaya. The wound that killed her was gone. Unlike some ghosts, Teysa was clearly disinterested in defining her afterlife by the manner of her death. Her cane leaned against her leg. It had been an extension of her body for as long as Kaya had known her; it only made sense that she would keep it with her now.
"I appreciate you keeping my guild from falling apart as I gathered myself," said Teysa. "Also for helping to identify and eliminate my killer. That was kind of you."
"I owed you a debt for letting you die," said Kaya.
"Consider it repaid in full, if it ever existed," said Teysa. "Honestly, it's better this way. No more hunger, no more distracting bodily needs, just me and the ledgers and the assets of the guild, the way it's meant to be. Why would a little thing like death stop me from running the Syndicate? I'm going to be here for a long, long time."
#figure(image("010_Episode 10: Roots of Decay/03.png.jpg", width: 100%), caption: [Art by: <NAME>], supplement: none, numbering: none)
"Was that you in the fight at Vitu-Ghazi?"
"Of course it was. I'll always look out for you when you're in Ravnica."
"Why did you …?"
"You had business to do. I would only have been a distraction. Really, Kaya, you have to learn to do things in the proper order."
Kaya made a sound that could have been either a soft laugh or a sob. In the darkness of the cathedral, it passed easily for both. Teysa frowned, eyeing her.
"Are you all right?"
"I'm sorry I couldn't kill Oba for you."
In the time since the fight, Trostani had stepped aside, as promised, and retreated into a deep, meditative state. Lavinia was studying the Guildpact daily, trying to find a way to arrest Oba when the dryad finally woke. So far, she had been able to find nothing of substance.
"Don't be," said Teysa, voice turning hard. "Lavinia won't find a justification for arresting her. There are too many loopholes she can worm her way through. I, on the other hand, don't intend to use anything so straightforward as the law. Selesnya will doubtless make the argument that she can be purified and reformed. #emph[I] intend to make the argument, through an army of accountants and financial bylaws so archaic #emph[Azor] wouldn't be able to comprehend them, that the price of assassinating an Orzhov guild leader is everything you thought you owned. They want her alive? Well, her death would have been far less expensive."
"Oh," said Kaya.
"You'll have to leave, of course. I wish it didn't need to be this way, but if I'm intending to absolutely #emph[ruin] Selesnya for not answering Oba's treason by handing someone an axe, it's best if I don't have my predecessor, the known assassin, hanging about the place being all alive and confusing the question of who's in charge." Teysa looked expectantly at Kaya. "You understand, don't you?"
"I sort of wish I didn't, but I do," said Kaya. "I was planning to go anyway. I don't want to become the kind of person who kills someone who can't defend themselves. But after what she did, if I stayed here … I might."
"Good. Then it's settled," said Teysa, and she sounded so perfectly like herself that Kaya laughed, unable to swallow the sound.
"I thought I'd lost you," Kaya said and leaned over, half-phasing herself out of the world of the living as she reached out, not quite shaking, and embraced her friend.
After a moment, Teysa smiled and hugged her back.
#v(0.35em)
#line(length: 100%, stroke: rgb(90%, 90%, 90%))
#v(0.35em)
Four days after that, with Teysa's body lying in state and her spirit assembling the newest iteration of the Orzhov ghost council, Kaya stepped back into the atmosphere of quietly efficient chaos that reigned at the Agency. Chatting agents leaned against the reception desk with mugs of coffee; more agents moved along the hall, barely taking notice of each other.
They took even less notice of Kaya as she wove her way between them, heading for Kellan's desk. He was mercifully present, inputting the notes on his latest case, when she walked up and knocked against the corner of his desk.
Kellan looked up, breaking into a wide smile. "Kaya!"
"I'm heading out in the morning," she said, only a little amazed at the pang that rose in her chest. She was going to miss him, talented, unseasoned, eager partner that he was. It was nice to think there would still be people—living people—on Ravnica that she'd miss when she was gone. "Promised a friend I'd help him hunt for something he called a 'dire bear' once I finished with things here. He's a nice guy. Very enthusiastic. I think you'd like him." Fortunately, they would probably never meet. The thought of Kellan and Tyvar deciding to out-hero each other on the battlefield was exhausting.
"Thanks for letting me know," said Kellan.
"Just wanted to check in before I hit the road. Any sign of Judith yet?"
"None." Kellan shook his head. "No one's found a body, #emph[or] seen her, and no one's claimed the kill. The rest of the Rakdos seem to be laying low for the moment, and even though they're no longer on the verge of war with Boros, that's probably for the best. Aurelia's feathers are well and truly ruffled."
"I wouldn't want to break the law right now," agreed Kaya. "Proft still spending all his time with our former escapee?"
"He officially hired Etrata as his aide," said Kellan. "So she's going to be around for a while."
Kaya blinked. "He was #emph[serious] ?"
"Yup. Things are going to be interesting here even without you," said Kellan. Then he grinned. "I'm so glad."
Kaya grinned back. "Weirdly enough, me, too."
|
|
https://github.com/TypstApp-team/typst | https://raw.githubusercontent.com/TypstApp-team/typst/master/tests/typ/compiler/construct.typ | typst | Apache License 2.0 | // Test constructors.
---
// Ensure that constructor styles aren't passed down the tree.
// The inner list should have no extra indent.
#set par(leading: 2pt)
#list(body-indent: 20pt, [First], list[A][B])
---
// Ensure that constructor styles win, but not over outer styles.
// The outer paragraph should be right-aligned,
// but the B should be center-aligned.
#set list(marker: [>])
#list(marker: [--])[
#rect(width: 2cm, fill: conifer, inset: 4pt, list[A])
]
---
// The inner rectangle should also be yellow here.
// (and therefore invisible)
#[#set rect(fill: yellow);#text(1em, rect(inset: 5pt, rect()))]
---
// The inner rectangle should not be yellow here.
A #box(rect(fill: yellow, inset: 5pt, rect())) B
---
// The constructor property should still work
// when there are recursive show rules.
#show enum: set text(blue)
#enum(numbering: "(a)", [A], enum[B])
|
https://github.com/typst/packages | https://raw.githubusercontent.com/typst/packages/main/packages/preview/neoplot/0.0.1/README.md | markdown | Apache License 2.0 | # Neoplot
A Typst package to use [gnuplot](http://www.gnuplot.info/) in Typst.
```typ
#import "@preview/neoplot:0.0.1" as gp
```
Execute gnuplot commands as a single **one-line** command:
```typ
#image.decode(
gp.eval("
set samples 1000;
set xlabel 'x axis';
set ylabel 'y axis';
plot sin(x),
cos(x)
")
)
```
is the equivalent of
```typ
#image.decode(
gp.eval("set samples 1000;set xlabel 'x axis';set ylabel 'y axis';plot sin(x),cos(x)")
)
```
Execute a gnuplot script:
~~~typ
#image.decode(
gp.exec(
```
# Can add comments since this is a script
set samples 1000
set xlabel 'x axis'
set ylabel 'y axis'
# Use a backslash to extend commands
plot sin(x), \
cos(x)
```
)
)
~~~
To read a data file:
```
# datafile.dat
# x y
0 0
2 4
4 0
```
~~~typ
#image.decode(
gp.exec(
// Use a datablock since Typst doesn't support WASI
"$data <<EOD\n" +
// Load "datafile.dat" using Typst
read("datafile.dat") +
"EOD\n" +
"plot $data w lp"
)
)
~~~
is equivalent to
~~~typ
#image.decode(
gp.exec(
```
$data <<EOD
0 0
2 4
4 0
EOD
plot $data w lp
```
)
)
~~~
|
https://github.com/typst/packages | https://raw.githubusercontent.com/typst/packages/main/packages/preview/tuhi-exam-vuw/0.1.0/tuhi-exam-vuw.typ | typst | Apache License 2.0 |
#let MURPH = {
[MUR]
box(move(
dx: -1.0pt, dy: 2.7pt,
box(scale(100%)[P])
));box(move(
dx: -3.0pt, dy: 3pt,
box(scale(80%)[H])
));h(-0.45em)
}
#let mark(x) = if x <= 1 {
h(1fr, weak: true)
box[#strong("[" + str(x) + " mark]")]
} else {
h(1fr, weak: true)
box[#strong("[" + str(x) + " marks]")]
}
#let fin = align(center,[
#v(1fr, weak: true)
#text[\* \* \* \* \* \* \* \* \* \* \* \* \* \* \*]
#v(1fr, weak: true)
])
#let tuhi-exam-vuw(
coursetitle: "scientific basis of murphy's laws",
coursecode: "murph101",
date: datetime(year: 2023, month: 8, day: 3),
year: "2023",
trimester: "2",
timeallowed: "three hours",
openorclosed: "open book",
permitted: "Any materials except communication via electronic devices.",
instructions: "",
logo: image("logo.svg", width: 80mm),
body) = {
// Set the document's basic properties.
set document(author: coursecode, title: coursetitle)
set enum(numbering: (..n) => {
let n = n.pos()
let pat = ("1.", "(a)", "(i)").at(n.len() - 1)
if n.len() == 3 {
set text(weight: "regular")
numbering(pat, n.last())
} else {
set text(weight: 300)
strong(numbering(pat, n.last()))
}
}, full: true)
// murphy's law
// show "murph": MURPH
set page(paper:"a4",
background: rect(width:100%,
height:100%, fill: white),
numbering: (num, total) => [Page *#num* of *#total*],
footer: [
#set align(center)
#set text(12pt, number-type: "old-style")
#grid(columns: (1fr,1fr,1fr), {
set text(tracking: 0.5pt)
align(left)[#smallcaps(lower(coursecode))]
},
[Page #counter(page).display((num, total) => [*#num* of *#total*], both: true)],
[])
],
// testing against official template
// background: image("template.svg", fit: "contain"),
margin: (left: 2.3cm, right: 2.3cm,
top: 2.3cm, bottom: 2.6cm),
footer-descent:50%) // end set page
// choice of fonts
set text(font: "STIX Two Text", lang: "en", size: 12pt)
show raw: set text(font: "Source Code Pro",
size: 1.1em, tracking: -0.1pt)
// -------------------------
// title block
// -------------------------
upper[
#set text(weight: "black", 14pt)
#align(center)[
#block(logo)
#v(0.8cm)
#block(inset: 0em)[
// #set text(weight: 800, 14pt)
// #show text: upper
EXAMINATIONS --- #year
#v(0.5em, weak: true)
TRIMESTER #trimester
#v(0.5em, weak: true)
FRONT PAGE
]#v(1.2cm, weak: true)
#rect(width: 90mm, inset: 0.5cm)[
#set par(leading: 0.7em)
#set text(weight: 800, 14pt)
#show text: upper
#coursecode#v(1.2em, weak: true)
#coursetitle#v(1em, weak: true)
#date.display("[month repr:long] [day], [year]")
] // rect
] // align center
] // upper
// -------------------------
// exam details
// -------------------------
set text(12pt)
v(6.5em, weak: true)
grid(row-gutter: 2em, columns: (3.8cm, auto),
strong[Time allowed:],strong[#upper[#timeallowed]],
strong[Permitted materials:],[
#strong[#upper[#openorclosed]]\
#permitted
],
strong[Instructions:],[#instructions])
pagebreak()
// -------------------------
// Main body
// -------------------------
set par(justify: true)
set enum(tight: false, spacing: 2.5em)
show heading: it => {
it.body
v(0.8em)
}
body
} |
https://github.com/kotfind/hse-se-2-notes | https://raw.githubusercontent.com/kotfind/hse-se-2-notes/master/os/seminars/2024-09-20.typ | typst | = Переменные окружающей среды
Полная сигнатура функции `main`:
```c
int main(int argc, char** argv, char** envp);
```
`envp` is always terminated by a `NULL` entry.
= Changing the user context
There is an `exec` system call.
There is no direct access to it --- there are wrapper functions (`execl`, `execv`, `execvp`, ...) instead.
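A minimal sketch of using one of those libc wrappers, `execvp` (which searches `PATH` and is built on top of the underlying `execve` syscall), together with `fork` on a POSIX system. The helper name `run_command` is just illustrative, not part of the seminar:

```c
#include <stdio.h>
#include <unistd.h>
#include <sys/wait.h>

/* Run a program via the execvp wrapper and return its exit status.
   On success execvp replaces the calling process image and never returns. */
int run_command(char *const argv[]) {
    pid_t pid = fork();
    if (pid < 0)
        return -1;                /* fork failed */
    if (pid == 0) {
        execvp(argv[0], argv);    /* child: replace the user context */
        perror("execvp");         /* reached only if exec failed */
        _exit(127);
    }
    int status = 0;
    waitpid(pid, &status, 0);     /* parent: wait for the child */
    return WIFEXITED(status) ? WEXITSTATUS(status) : -1;
}
```

Because `execvp` never returns on success, any code after it in the child only runs when the exec failed.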
|
|
https://github.com/joserlopes/PIC | https://raw.githubusercontent.com/joserlopes/PIC/main/Report_A003.typ | typst | #set text(lang: "por")
#show heading: it => {
if it.level == 1 {
set text(17pt, navy)
align(center)[#it]
} else if it.level == 2 {
set text(15pt, navy)
align(center)[#it]
} else if it.level == 3 {
set text(14pt, navy)
it
}
}
#show link: it => {
set text(blue)
underline(it)
}
#set par(justify: true, hanging-indent: -1em)
#set list(indent: 1em)
#show raw.where(block: false): box.with(
fill: luma(240),
inset: (x: 3pt, y: 0pt),
outset: (y: 3pt),
radius: 2pt,
)
#show raw.where(block: true): block.with(
fill: luma(240),
inset: 10pt,
radius: 4pt,
)
= First-Cycle Integrative Project in Informatics and Computer Engineering
== Final report - Group A-003
\
#box(height: 22pt,
columns(2)[
#align(center)[
<NAME> \
ist1103425
<NAME>\
ist1103938
]
]
)
=== Introduction
As part of the PLIC course (_Projecto Integrador de 1º Ciclo em Engenharia Informática e de Computadores_),
we decided to contribute to the open-source project _*Typst*_, a typesetting system designed
with the goal of being as powerful as LaTeX while being easier to learn and use.
=== Development of a new feature
To that end, one of the course's challenges was to submit a new feature, either original to the students or taken
from the project's own feature backlog. We opted for the latter; more precisely, we attempted to implement
this feature (https://github.com/typst/typst/issues/3963).

Unfortunately, we were not able to implement the feature to completion and, consequently, neither its tests.
However, we already had an idea of how to solve the problems we kept running into, but due to a lack of
organization, and because we thought the implementation would be easier than it turned out to be, we could not
see it through to the end.

Even so, we did manage to understand the feature, and even make some additions towards implementing it:

Initially, we thought of implementing a new attribute with two parameters: `(before:x, after: y)` (as mentioned in our _feature proposal_), but after some consideration we decided to add only one new attribute to the paragraph formatting function - `par()`. Besides making it easier to use for the typical user, this also simplifies its implementation. The goal of this new attribute is therefore to take a single integer indicating the maximum number of lines of the paragraph in question that the current page may hold. The rest of the paragraph would be moved to the next page.
#text(size: 13pt)[*Example*:]
Input:
```typ
#set par(before: 1)
#lorem(40)
```
Output:
#figure(image("output.png"))
```rs
//typst/src/model/par.rs
#[elem(title = "Paragraph", Debug, Construct)]
pub struct ParElem {
/// The spacing between lines.
#[resolve]
#[ghost]
#[default(Em::new(0.65).into())]
pub leading: Length,
[...]
/// The paragraph's before parameter.
/// Maximum number of lines that must be on the current page.
/// The remaining number of lines get placed on the next page.
#[ghost]
pub before: Option<usize>,
}
```
Next, we moved on to the logic of the _feature_:

- If a paragraph has to be split onto another page, the program reads the value of `before` (if it is present; if not, its default value is 0).
- Then it checks how many lines the page itself can hold, so that no lines overflow the page or break its layout. If the value of `before` is smaller than the number of lines that fit on the page, it is left unchanged. If the value of `before` is larger, it becomes the number of lines that fit on the page.

The logic we did manage to implement was precisely the first condition: we detect the number of lines that fit on the page and the number of lines of the paragraph, and check whether a new _frame_ needs to be created, that is, a new page to hold the remaining text (when a _region_ ends, the program creates a new _frame_).
```rs
// typst/src/layout/flow.rs
fn layout_par(
[...]
) -> SourceResult<()> {
let align = AlignElem::alignment_in(styles).resolve(styles);
[...]
let before_lines = ParElem::before_in(styles).unwrap_or(0);
let lines = par.layout(engine, styles, consecutive,
self.regions.base(), self.regions.expand.x)?.into_frames();
// Logic for the 'before' parameter
if before_lines > 0 {
let mut total_height = 0;
let mut line_count = 0;
for line in &lines {
total_height += line.height();
line_count += 1;
if line_count >= before_lines {
break;
}
}
// If the lines don't fit in the current region, move to the next region
if !self.regions.size.y.fits(total_height) {
self.finish_region(engine, false)?;
}
}
[...]
}
```
=== Interaction with the open-source community
Despite everything, one of the most positive aspects was the interaction with the open-source community of the project in question. Speaking mainly of the core developers, they were always ready to answer, while being considerate of our lack of experience in contributing to large-scale projects.

=== Challenges faced
Without a doubt, the biggest difficulty we encountered was understanding the logic behind the program's control flow, aggravated by the fact that this project is essentially a compiler composed of distinct stages, each dependent on the previous one, which made a deeper understanding of the various stages necessary.

=== Project documentation
The project repository contains very detailed information about the structure of the code and the role of each module. Creating new tests and interacting with the existing ones is also very well documented.

=== Main conclusions
As a first experience contributing to an open-source, large-scale project, the impression both of us were left with was quite positive, opening the door to future contributions to this project and to others we will come across in the future. In fact, this report itself was written using _*Typst*_.
|
|
https://github.com/ymgyt/techbook | https://raw.githubusercontent.com/ymgyt/techbook/master/programmings/js/typescript/specification/array.md | markdown | # array
```typescript
let array1: number[];
array1 = [1, 2, 3];
let array2: Array<number>;
array2 = [1, 2, 3];
```
* どちらも同じ意味
## spread
```typescript
const original = [1, 2, 3];
const copy = [...original];
copy.reverse();
// 追加もできる
const arr = [1, 2, 3];
const arr2 = [...arr, 4];
// 連結
const arr3 = [1, 2, 3];
const arr4 = [4, 5, 6];
const concated = [...arr3, ...arr4];
```
* `[...ident]`でarrayのcopyを作れる
|
|
https://github.com/yonatanmgr/university-notes | https://raw.githubusercontent.com/yonatanmgr/university-notes/main/0366-%5BMath%5D/03661101-%5BCalculus%201A%5D/src/lectures/03661101_merged.typ | typst | #import "/template.typ": *
#show: project.with(
title: "Differential and Integral Calculus 1A",
authors: ("Yonatan Magar",),
date: "Semester A - 2024",
)
\
#place(center)[Up to date as of #datetime.today().display("[day]/[month]/[year]")]
#set enum(numbering: "(1.א)")
#include "/utils/toc.typ"
#include "03661101_lecture_5.typ"
#pagebreak()
#include "03661101_lecture_6.typ"
#pagebreak()
#include "03661101_lecture_7.typ"
#pagebreak()
#include "03661101_lecture_8.typ"
#pagebreak()
#include "03661101_lecture_9.typ"
#pagebreak()
#include "03661101_lecture_10.typ"
#pagebreak()
#include "03661101_lecture_11.typ"
#pagebreak()
#include "03661101_lecture_13.typ"
#pagebreak()
#include "03661101_lecture_15.typ" |
|
https://github.com/frugal-10191/frugal-typst | https://raw.githubusercontent.com/frugal-10191/frugal-typst/main/book-template/book_template.typ | typst | Apache License 2.0 | #let fontSize = 10pt
//#let mainColor = green //rgb("#ff851b")
#let normalText = 1em
#let largeText = 3em
#let hugeText = 16em
#let title_main_1 = 2.5em
#let title_main_2 = 1.8em
#let title_main_3 = 2.2em
#let title1 = 2.2em
#let title2 = 1.5em
#let title3 = 1.3em
#let title4 = 1.2em
#let title5 = 1.1em
#let state_main_colour = state("main_colour", none)
#let state_title_font = state("state_title_font", none)
#let state_heading_font = state("state_heading_font", none)
#let chapter_image = state("chapter_image", none)
#let part_number = counter("part-number")
#let setChapterImage(img) = {
chapter_image.update(img)
}
// Outline/introduction
#let my-outline-row( textSize:none,
textWeight: "regular",
textFont: none,
insetSize: 0pt,
textColor: blue,
number: "0",
title: none,
heading_page: "0",
target: none) = {
set text(font: textFont, size: textSize, fill: textColor, weight: textWeight)
box(width: 1.1cm, inset: (y: insetSize), align(left, number))
h(0.1cm)
box(inset: (y: insetSize), width: 100% - 1.2cm, )[
#link(target, title)
#box(width: 1fr, repeat(text(weight: "regular")[. #h(4pt)]))
#link(target, heading_page)
]
}
#let partOutline() = {
show outline.entry: it => {
let counterInt = counter(heading).at(it.element.location())
let number = none
if counterInt.first() >0 {
number = numbering("1.1", ..counterInt)
}
let title = it.element.body
let heading_page = it.page
let colour = state_main_colour.at(it.element.location())
let headingFont = state_heading_font.at(it.element.location())
let textWeight = "regular"
let textSize = 1em
let textColour = black
if it.level == 1 {
v(1em, weak: true)
textWeight = "bold"
textSize = 1.1em
textColour = colour
}
else if it.level == 2 {
textWeight = "bold"
textSize = 1.0em
}
my-outline-row(insetSize: 2pt,
textWeight: textWeight,
textSize: textSize,
textFont: headingFont,
textColor:textColour,
title: title,
heading_page: heading_page,
number: number,
target: it.element.location())
}
locate(loc2 => [
#let parts = query(<part>, loc2)
#let currPartNo = part_number.at(loc2).first()
#if parts.len() > 0 {
if parts.len() > currPartNo {
let from = parts.at(currPartNo - 1).location()
let to = parts.at(currPartNo).location()
//part-title(from, parts.at(currPartNo).text)
align(bottom + right, block(width: 12cm, outline(title: none, target: selector(heading).after(from).before(to))))
}
else {
align(bottom + right, block(width: 12cm, outline(title: none, target: selector(heading).after(parts.last().location()))))
}
}
])
}
#let part(title) = {
pagebreak(to: "odd", weak: true)
locate(loc => [
#let colour = state_main_colour.at(loc)
#let titleFont = state_title_font.at(loc)
#page(fill: colour.lighten(70%))[
#part_number.step()
#block()[
#set text(font: titleFont, size: 48pt, weight: "bold")
#smallcaps([
Part: #part_number.display() \
#text(title)
<part>
])
]
#partOutline()
]
])
pagebreak()
}
#let show_outline = {
show outline.entry: it => {
let counterInt = counter(heading).at(it.element.location())
let number = none
if counterInt.first() >0 {
number = numbering("1.1", ..counterInt)
}
let title = it.element.body
let heading_page = it.page
let colour = state_main_colour.at(it.element.location())
let headingFont = state_heading_font.at(it.element.location())
let textWeight = "regular"
let textSize = 1em
let textColour = black
if it.level == 1 {
v(1em, weak: true)
textWeight = "bold"
textSize = 1.3em
textColour = colour
}
else if it.level == 2 {
textWeight = "bold"
textSize = 1.2em
}
my-outline-row(insetSize: 2pt,
textWeight: textWeight,
textSize: textSize,
textFont: headingFont,
textColor:textColour,
title: title,
heading_page: heading_page,
number: number,
target: it.element.location())
}
set outline(indent: false)
//set page(numbering: "i")
// A custom outline level for parts, as they are not included in the
// normal numbering system
let part-title(loc, part_title) = link(loc)[
#let colour = state_main_colour.at(loc)
#let titleFont = state_title_font.at(loc)
#v(0.7cm, weak: true)
#box(width:100%, {
box(width: 1.1cm, fill: colour.lighten(80%), inset: 5pt, align(
center, text(font: titleFont, size: 24pt, weight: "bold", fill: colour.lighten(30%), numbering("I", counter(<part>).at(loc).first()))))
h(0.1cm)
box(width: 100% - 1.2cm, fill: colour.lighten(60%), inset: 5pt, align(center, text(font: titleFont, size: 24pt, weight: "bold", part_title)))
})
#v(0.45cm, weak: true)
]
// Display the outline, title first, then contents
heading(numbering: none, outlined: false, bookmarked: false, "Table of Contents")
// Make each part have its own outline
locate(loc => {
let parts = query(<part>, loc)
if parts.len()>0 {
for i in range(0, parts.len()-1) {
let from = parts.at(i).location()
let to = parts.at(i+1).location()
part-title(from, parts.at(i).text)
outline(title: none, target: selector(heading).after(from).before(to))
}
part-title(parts.last().location(), parts.last().text)
outline(title: none, target: selector(heading).after(parts.last().location()))
}
else {
outline(title: none)
}
})
}
#let book(title: "",
subtitle: "",
author: (),
logo: none,
cover: none,
imageIndex:none,
mainColor: blue,
copyright: [],
lang: "en",
listOfFigureTitle: none,
listOfTableTitle: none,
supplementChapter: "Chapter",
supplementPart: "PART",
titleFont: "New Computer Modern",
headingFont: "New Computer Modern",
bodyFont: "New Computer Modern",
fontSize: 10pt,
justify: true,
leading: 0.65em,
part_style: 0,
body,
) = {
// Set the meta data for the whole Document
set document(author: author, title: title)
//
state_main_colour.update(x => mainColor)
state_title_font.update(x => titleFont)
state_heading_font.update(x => headingFont)
/*
*
* Set up the main structure of the page
*
*/
set page(
paper: "a4",
//margin: (x: 4cm, bottom: 2.5cm, top: 3cm),
margin: (inside: 4cm, outside: 3cm, bottom: 2.5cm, top: 3cm),
// Specify the structure of the page headers
header: locate(loc => {
set text(font: headingFont, size: title5)
let page_number = counter(page).at(loc).first()
let odd_page = calc.odd(page_number)
// Are we on a page that starts a chapter?
let all = query(heading.where(level: 1), loc)
if all.any(it => it.location().page() == page_number) {
return
}
// Are we on a page that starts a part?
let parts = query(<part>, loc)
if parts.len() > 0 {
if parts.any(it => it.location().page() == page_number) {
return
}
}
// Get the array of chapters (level 1 headings) that have occured before this point
// we need to run the query because we want the chapter title, the counter will just
// give us the chapter number.
let chapters = query(selector(heading.where(level: 1)).before(loc), loc)
let mostRecentChapter = counter(heading).at(loc).first()
let heading_text = ""
if (chapters.len() > 0) {
// Current chapter number
let mostRecentChapterPage = chapters.last().location().page()
let prevParts = query(selector(<part>).before(loc), loc)
let mostRecentPartPage = 0
if parts.len() > 0 {
if prevParts.len()>0 {
mostRecentPartPage = prevParts.last().location().page()
if mostRecentChapterPage > mostRecentPartPage {
heading_text = text(weight: "bold",
chapters.last().supplement + " " + str(mostRecentChapter) + ". " + chapters.last().body
)
}
}
}
else {
// this document has no parts
heading_text = text(weight: "bold",
chapters.last().supplement + " " + str(mostRecentChapter) + ". " + chapters.last().body
)
}
}
if odd_page {
if chapters != () and mostRecentChapter > 0 {
box(width: 100%, inset: (bottom: 5pt), stroke: (bottom: 0.5pt))[
#heading_text
#h(1fr)
#page_number
]
}
} else{
if chapters != () and mostRecentChapter > 0 {
box(width: 100%, inset: (bottom: 5pt), stroke: (bottom: 0.5pt))[
#page_number
#h(1fr)
#heading_text
]
}
}
})
)
let showChapter(head) = {
pagebreak(to: "odd", weak: true)
locate(loc => {
let img = chapter_image.at(loc) // get the heading image state value at this point in the file
if img != none {
// we have an image
set image(width: 21cm, height: 9.4cm) // a4 paper is 210mm wide, so this image is the full width
place(move(dx: -4cm, dy: -3cm, img)) // move up and left by the size of the margins and show the image
}
else {
place(move(dx: -4cm, dy: -3cm, block(
width: 21cm,
height: 9.4cm,
fill: mainColor.lighten(70%)
))) // move up and left by the size of the margins
}
place(
move(dx: -4cm, dy: -3cm, // move up and left by the size of the margins
block(width: 21cm, height: 9.4cm, // create a block the same size as the image
align(right + bottom,
pad(bottom: 1.2cm, // leave 1.2cm of image empty at the bottom
block(
width: 21cm-3cm, // make it the full width minus the left margin size
stroke: (
right: none,
rest: 3pt + mainColor,
),
inset: (left:2em, rest: 1.6em),
fill: rgb("#FFFFFFAA"),
radius: ( right: 0pt, left: 10pt ),
align(left, text(font: headingFont, size: 36pt, head)) // display the actual heading
)
)
)
)
)
)
v(8.4cm)
})
}
show heading: it => {
if it.level == 1 {
//it
showChapter(it)
} else {
text(font: headingFont, it)
}
}
set heading(
numbering: (..nums) => {
let vals = nums.pos()
if vals.len() == 1 {
return str(vals.first()) + "."
}
else if vals.len() <=4 {
let color = mainColor
if vals.len() == 4 {
color = black
}
return place(dx:-4.5cm, box(width: 4cm, align(right, text(font: headingFont, fill: color)[#nums.pos().map(str).join(".")])))
}
},
supplement: supplementChapter
);
// Now we write the document
//
//
// Show a title page with a full page background
//
//
//set image(width: 100%, height: 100%)
let titleImg = image(cover, width: 100%, height: 100%)
page(margin: 0cm, header: none, background: titleImg)[
#set text(fill: black, font: titleFont)
#if logo != none {
place(top + center, pad(top:1cm, image(logo, width: 3cm)))
}
#align(center + horizon,
block(width: 100%, fill: mainColor.lighten(70%), height: 7.5cm,
pad(x:2cm, y:1cm)[
#text(size: 3em, weight: 900, title)
#v(1cm, weak: true)
#text(size: 3em, subtitle)
#v(1cm, weak: true)
#text(size: 1em, weight: "bold", author)
]
)
)
]
// Set default text properties
set text(size: fontSize, lang: lang, font: bodyFont)
set par(
first-line-indent: 0em,
justify: true,
leading: leading
)
//show par: set block(spacing: 3.0em) // set the spacing between paragraphs
show par: set block(above: 1em, below: 2em) // set the spacing between paragraphs
// Set the numbering format for numbered lists
set enum(numbering: "1.a.i.")
// Set the style for unnumbered lists.
set list(marker: ([•], [--], [◦]))
set page(numbering: "i")
// Show the copyright if one exists on a page of it's own
// If there is no copyright then show a blank page.
if (copyright!=none) {
set text(size: 10pt)
show link: it => [
#set text(fill: mainColor)
#it
]
show par: it => {
block(above: 2em, below: 2em, it)
}
pagebreak()
align(bottom, copyright)
} else {
// Manually put a pagebreak in to force us to an odd page
// to work around a bug in typst
pagebreak()
}
show_outline
set page(numbering: none)
body
}
|
https://github.com/MatheSchool/typst-g-exam | https://raw.githubusercontent.com/MatheSchool/typst-g-exam/develop/docs-shiroa/g-exam-doc/book.typ | typst | MIT License | #import "@preview/shiroa:0.1.1": *
#show: book
#book-meta(
title: "g-exam Docs",
description: "g-exam Documentation",
repository: "https://github.com/MatheSchool/typst-g-exam",
// repository-edit: "https://github.com/Myriad-Dreamin/shiroa/edit/main/github-pages/docs/{path}",
authors: ("<NAME>",),
language: "en",
summary: [ // begin of summary
= Introduction
#prefix-chapter("introduction.typ")[Introduction]
- #chapter("configuration/configuration.typ")[Configuration]
- #chapter("configuration/draft.typ")[Draft]
- #chapter("commands.typ")[Commands]
- #chapter("page1.typ")[Page 1]
- #chapter("page2.typ")[Page 2]
== LaTeX MIT
- #chapter("latexmit/latexmit.typ")[LaTeX MIT]
- #chapter("latexmit/latexmit-with-points.typ")[LaTeX MIT with points]
- #chapter("latexmit/latexmit-without-spaces.typ")[LaTeX MIT without spaces]
== Examples
- #chapter("examples/exam-big-image.typ")[Example]
- #chapter("about.typ")[About]
// - #chapter("guide/installation.typ")[Installation]
// - #chapter("guide/get-started.typ")[Get Started]
// - #chapter("guide/faq.typ")[Frequently Asked Questions]
// - #chapter(none)[Further reading]
// = Reference Guide
// - #chapter("cli/main.typ")[Command Line Tool]
// - #chapter("cli/init.typ")[init]
// - #chapter("cli/build.typ")[build]
// - #chapter("cli/serve.typ")[serve]
// - #chapter("cli/clean.typ")[clean]
// - #chapter("cli/completions.typ")[completions]
// - #chapter("format/main.typ")[Format]
// - #chapter("format/book.typ")[book.typ]
// - #chapter("format/book-meta.typ")[Book Metadata]
// - #chapter(none)[Draft chapter]
// // - #chapter(none)[chapter with - markers]
// // - #chapter(none, "= Introduction")
// // - #chapter(none)[#text("= Introduction")]
// - #chapter("format/build-meta.typ")[Build Metadata]
// - #chapter("format/theme.typ")[Theme]
// - #chapter("format/supports.typ")[Typst Support]
// - #chapter("format/supports/cross-ref.typ")[Cross Reference]
// - #chapter("format/supports/cross-ref-sample.typ")[Cross Reference Sample]
// - #chapter("format/supports/embed-html.typ")[Embed Sanitized HTML Elements]
// - #chapter("format/supports/multimedia.typ")[Multimedia components]
// - #chapter("format/supports/sema-desc.typ")[Semantic Page Description]
// - #chapter(none)[For developers]
// - #chapter(none)[Typst-side APIs]
// - #chapter(none)[shiroa CLI Internals]
// - #chapter(none)[Alternative Backends]
// // end of summary
],
)
#build-meta(
dest-dir: "../dist"
)
#get-book-meta()
// re-export page template
#import "../template/page.typ": project
#let book-page = project
#let cross-link = cross-link
// #let heading-reference = heading-reference |
https://github.com/MatejKafka/ctu-thesis-typst | https://raw.githubusercontent.com/MatejKafka/ctu-thesis-typst/main/example.typ | typst | #import "./template/template.typ": *
#show: template.with(
meta: (
title: "An Interesting Thesis Title",
author: (
name: "<NAME>",
email: "<EMAIL>",
url: "https://my.project/url",
),
submission-date: datetime(year: 2012, month: 1, day: 21),
// true for bachelor's thesis, false for master's thesis
bachelor: false,
supervisor: "Ing. <NAME>, PhD.",
faculty: "Faculty of Electrical Engineering",
department: "Department of Measurement",
study-programme: "Open Informatics",
branch-of-study: "Computer Engineering", // studijni obor
),
// set to true if generating a PDF for print (shifts page layout, turns code blocks greyscale, correctly aligns odd/even pages,...)
print: false,
abstract-en: [
#lorem(40)
#lorem(60)
],
abstract-cz: [
#lorem(40)
#lorem(60)
],
acknowledgement: [
#lorem(30)
#lorem(30)
],
)
= Introduction
#lorem(80) @template
#lorem(120)
#lorem(140)
#lorem(40)
#lorem(70)
= Background
== Part 1
#lorem(100)
=== Subpart 1
#lorem(40)
=== Subpart 2
#lorem(70)
== Part 2
#lorem(100)
= Future work
#lorem(100)
= Conclusion
#lorem(100)
#bibliography("bibliography.bib")
// all h1 headings from here on are appendices
#show: start-appendix
= An example appendix
#lorem(100) |
|
https://github.com/horaoen/typstempl | https://raw.githubusercontent.com/horaoen/typstempl/master/demo/note.typ | typst | #import "note.typ": note
#show: doc => note(
title: "neote template demo",
doc
)
= Introduction
#lorem(30)
#lorem(30)
== Motivation
#lorem(30)
#lorem(30)
== Problem Statement
#lorem(30)
== Related Work
#lorem(30)
|
|
https://github.com/typst/packages | https://raw.githubusercontent.com/typst/packages/main/packages/preview/g-exam/0.3.0/src/g-solution.typ | typst | Apache License 2.0 | #import"./global.typ": *
/// Show solution of question.
///
/// *Example:*
/// ``` #g-solution(
/// alternative-content: v(1fr)
/// )[
/// I know the demonstration, but there's no room on the margin. For any clarification ask <NAME>.
/// ]```
///
///
/// - alternative-content (string, content): Alternate content when the question solution is not displayed.
/// - body (string, content): Body of question solution
#let g-solution(
alternative-content: none,
body) = {
assert(alternative-content == none or type(alternative-content) == "content",
message: "Invalid alternative-content value")
locate(loc => {
let show-solution = __g-show-solution.final(loc)
if show-solution == true {
body
}
else {
hide[#body]
// alternative-content
}
}
)
} |
https://github.com/mrcinv/nummat-typst | https://raw.githubusercontent.com/mrcinv/nummat-typst/master/domace_seznam.typ | typst | #import "admonitions.typ": admonition, admtitle, opomba
#import "julia.typ": code_box, repl, pkg
#let dobro(naslov, vsebina) = {
let title = admtitle("Dobro!", naslov)
admonition(title: title, color: lime.desaturate(40%), vsebina)
}
#let slabo(naslov, vsebina) = {
let title = admtitle("Slabo!", naslov)
admonition(title: title, color: red.desaturate(40%), vsebina)
}
#let dobrabarva = green.darken(40%)
#let slababarva = red
#show link: set text(blue)
= Navodila za pripravo domačih nalog
Ta dokument vsebuje navodila za pripravo domačih nalog. Navodila so napisana za
programski jezik #link("https://julialang.org/")[Julia]. Če uporabljate drug
programski jezik, navodila smiselno prilagodite.
== Kontrolni seznam
Spodaj je seznam delov, ki naj jih vsebuje domača naloga.
- koda (`src\DomacaXY.jl`)
- testi (`test\runtests.jl`)
- dokument `README.md`
- demo skripta, s katero ustvarite rezultate za poročilo
- poročilo v formatu PDF
Preden oddate domačo nalogo, uporabite naslednji _kontrolni seznam_:
- vse funkcije imajo dokumentacijo
- testi pokrivajo večino kode
- _README_ vsebuje naslednje:
- ime in priimek avtorja
- opis naloge
- navodila kako uporabiti kodo
- navodila, kako pognati teste
- navodila, kako ustvariti poročilo
- _README_ ni predolg
- poročilo vsebuje naslednje:
- ime in priimek avtorja
- splošen(matematičen) opis naloge
- splošen opis rešitve
- primer uporabe (slikice prosim :-)
== Kako pisati in kako ne
V nadaljevanju je nekaj primerov dobre prakse, kako pisati kodo, teste in
poročilo. Pri pisanju besedil je vedno treba imeti v mislih, komu je poročilo
namenjeno.
Pisec naj uporabi empatijo do bralca in naj poskuša napisati zgodbo, ki ji
bralec lahko sledi. Tudi, če je pisanje namenjeno strokovnjakom, je dobro, če je
čim več besedila razumljivega tudi širši publiki. Tudi strokovnjaki radi beremo
besedila, ki jih hitro razumemo. Zato je dobro začeti z okvirnim opisom z malo
formulami in splošnimi izrazi. V nadaljevanju lahko besedilo stopnjujemo k vedno
večjim podrobnostim.
Določene podrobnosti, ki so povezane s konkretno implementacijo, brez škode
izpustimo.
=== Opis rešitve naj bo okviren
Opis rešitve naj bo zgolj okviren. Izogibajte se uporabi programerskih izrazov
ampak raje uporabljajte matematične. Na primer izraz #text(slababarva)[uporabimo for zanko],
lahko nadomestimo s #text(dobrabarva)[postopek ponavljamo]. Od bralca zahteva
splošen opis manj napora in dobi širšo sliko. Če želite dodati izpeljave, jih
napišite z matematičnimi formulami, ne v programskem jeziku. Koda sodi zgolj v
del, kjer je opisana uporaba za konkreten primer.
#dobro(
"Splošen opis algoritma",
)[
Algoritem za LU razcep smo prilagodili tridiagonalni strukturi matrike. Namesto
trojne zanke smo uporabili le enojno, saj je pod pivotnim elementom neničelen le
en element. Časovna zahtevnost algoritma je tako z $cal(O)(n^3)$ padla na zgolj $cal(O)(n)$.
]
#slabo(
"Podrobna razlaga kode, vrstico po vrstico",
)[
V programu za LU razcep smo uporabili for zanko od 2 do velikosti matrike. V
prvi vrstici zanke smo izračunali `L.s[i]`, tako da smo element `T.s[i]` delili
z `U.z[i-1]`. Nato smo izračunali diagonalni element, tako da smo uporabili
formulo `U.d[i]-L.s[i]*U.d[i-1]`. Na koncu zanke smo vrnili matriki `L` in `U`.
]
=== Podrobnosti implementacije ne sodijo v poročilo
Podrobnosti implementacije so razvidne iz kode, zato jih nima smisla ponavljati
v poročilu. Algoritme opišete okvirno, tako da izpustite podrobnosti, ki niso
nujno potrebne za razumevanje. Podrobnosti lahko dodate, v nadaljevanju, če
mislite, da so nujne za razumevanje.
#dobro(
[Algoritem opišemo okvirno, podrobnosti razložimo kasneje],
)[
V matriki želimo eleminirati spodnji trikotnik. To dosežemo tako, da stolpce
enega za drugim preslikamo s Hausholderjevimi zrcaljenji. Za vsak stolpec
poiščemo vektor, preko katerega bomo zrcalili. Vektor poiščemo tako, da bo imela
zrcalna slika ničle pod diagonalnim elementom.
]
Tu lahko z razlago zaključimo. Če želimo dodati podrobnosti, pa jih navedemo za
okvirno idejo.
#dobro(
[Podrobnosti sledijo za okvirno razlago],
)[
Vektor zrcaljenja dobimo kot
$ u = [s(k) + A_(k, k), A_(k+1, k), ... A_(n, k)], $ kjer je $s(k)="sign"(A_(k, k))*||A(k:n,k)||$.
Podmatriko $A(k:n, k+1:n)$ prezrcalimo preko vektorja $u$, tako da podmatriki
odštejemo matriko
$ 2u (u^T A(k:n, k+1:n))/(u^T u). $
Na $k$-tem koraku prezrcalimo le podmatriko $k:n times k:n$, ostali deli matrike
pa ostanjejo nespremenjeni.
]
Takojšnje razlaganje podrobnosti, brez predhodnega opisa osnovne ideje, ni
dobro. Bralec težko loči, kaj je zares pomembno in kaj je zgolj manj pomembna
podrobnost.
#slabo(
[Takoj dodamo vse podrobnosti, ne da bi razložili zakaj],
)[
Za vsak $k$, poiščemo vektor $u=[s(k) + A_(k, k), A_(k+1, k), ... A_(n, k)]$,
kjer je $s(k)="sign"(A_(k, k))*||[A_(k,k), ..., A_(n, k)]||$.
Nato matriko popravimo
$
A(k:n, k+1:n) = A(k:n, k+1:n) -2*u*(u^T*A(k:n, k+1:n))/(u^T*u).
$
]
Če implementacija vsebuje posebnosti, kot na primer uporaba posebne podatkovne
strukture ali algoritma, jih lahko opišemo v poročilu. Vendar pazimo, da bralca
ne obremenjujemo s podrobnostmi.
#dobro(
[Posebnosti implementacije opišemo v grobem in se ne spuščamo v podrobnosti],
)[
Za tridiagonalne matrike definiramo posebno podatkovno strukturo `Tridiag`, ki
hrani le neničelne elemente matrike. Julia omogoča, da LU razcep tridiagonalne
matrike, implementiramo kot specializirano metodo funkcije `lu` iz paketa
`LinearAlgebra`. Pri tem upoštevamo posebnosti tridiagonalne matrike in
algoritem za LU razcep prilagodimo tako, da se časovna in prostorska zahtevnost
zmanjšata na $cal(O)(n)$.
]
Pazimo, da v poročilu ne povzemamo direktno posameznih korakov kode.
#slabo(
[Opisovanje, kaj počnejo posamezni koraki kode, ne sodi v poročilo.],
)[
Za tridiagonalne matrike definiramo podatkovni tip `Tridiag`, ki ima 3 atribute
`s`, `d` in `z`. Atribut `s` vsebuje elemente pod diagonalo, ...
LU razcep implementiramo kot metodo za funkcijo `LinearAlgebra.lu`. V `for`
zanki izračunamo naslednje:
1. element `l[i]=a[i, i-1]/a[i-1, i-1]`
2. ...
]
== Kako pisati teste
Nekaj nasvetov, kako lahko testiramo kodo.
- Na roke izračunajte rešitev za preprost primer in jo primerjajte z rezultati
funkcije.
- Ustvarite testne podatke, za katere je znana rešitev. Na primer za testiranje
kode, ki reši sistem `Ax=b`, izberete `A` in `x` in izračunate desne strani
`b=A*x`.
- Preverite lastnost rešitve. Za enačbe `f(x)=0`, lahko rešitev, ki jo izračuna
program preprosto vstavite nazaj v enačbo in preverite, če je enačba izpolnjena.
- Red metode lahko preverite tako, da naredite simulacijo in primerjate red
metode z redom programa, ki ga eksperimentalno določite.
- Če je le mogoče, v testih ne uporabljamo rezultatov, ki jih proizvede koda sama.
Ko je koda dovolj časa v uporabi, lahko rezultate kode same uporabimo za #link("https://en.wikipedia.org/wiki/Regression_testing")[regresijske teste].
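The "known solution" strategy from the list above can be sketched as follows (shown here in Python with NumPy rather than Julia; `solve` is a hypothetical stand-in for the routine under test, not part of the assignment):

```python
import numpy as np

def solve(A, b):
    # Hypothetical stand-in for the solver being tested.
    return np.linalg.solve(A, b)

def test_solve_roundtrip(n=5, seed=0):
    rng = np.random.default_rng(seed)
    # Build a well-conditioned test matrix and a KNOWN solution x_true,
    # then construct the right-hand side from it: b = A @ x_true.
    A = rng.standard_normal((n, n)) + n * np.eye(n)
    x_true = rng.standard_normal(n)
    b = A @ x_true
    # The solver must recover the known solution.
    assert np.allclose(solve(A, b), x_true, atol=1e-8)

test_solve_roundtrip()
```

The same pattern transfers directly to Julia's `@test` blocks: generate data with a known answer, run the code, compare.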
=== Pokritost kode s testi <sec:pokritost-testi>
Pri pisanju testov je pomembno, da testi izvedejo vse veje v kodi. Delež kode,
ki se izvede med testi, imenujemo #link(
"https://en.wikipedia.org/wiki/Code_coverage",
)[pokritost kode (angl. Code Coverage)]. V juliji lahko pokritost kode dobimo, če
dodamo argument ```jl coverage=true``` metodi `Pkg.test`:
#code_box(
repl("import Pkg; Pkg.test(\"DomacaXY\"; coverage=true)", none)
)
Zgornji ukaz bo za vsako datoteko iz mape `src` ustvaril ustrezno datoteko s končnico `.cov`, v kateri je shranjena informacija o tem, kateri deli kode so bili uporabljeni med izvajanjem testov.
#pagebreak()
Za poročanje o pokritosti kode lahko uporabite paket #link("https://github.com/JuliaCI/Coverage.jl")[Coverage.jl]. Povzetek o pokritosti kode s testi lahko pripravite z naslednjim programom:
#code_box[
```jl
using Coverage
cov = process_folder("DomacaXY")
pokrite_vrstice, vse_vrstice = get_summary(cov)
delez = pokrite_vrstice / vse_vrstice
println("Pokritost kode s testi: $(round(delez*100))%.")
```
]
== Priprava zahteve za združitev na Github
Za lažjo komunikacijo predlagam, da rešitev domače naloge postavite v svojo vejo in ustvarite zahtevo za združitev (_Pull request_ na Githubu oziroma _Merge request_ na Gitlabu). V nadaljevanju bomo opisali, kako to storiti, če repozitorij z domačimi nalogami gostite na Githubu. Postopek za Gitlab in druge platforme je podoben.
Preden začnete z delom, ustvarite vejo na svoji delovni kopiji repozitorija in jo
potisnete na Github ali Gitlab. Ime veje naj bo `domača-X`, se pravi `domaca-1` za 1.
domačo nalogo in tako naprej. To storite z ukazom
```sh
$ git checkout -b domaca-1
$ git push -u origin domaca-1
```
Stikalo `-u` pove git-u, da naj z domačo vejo sledi veji na Githubu/Gitlabu.
Med delom sproti dodajate vnose z `git commit` in jih prenesete na splet z ukazom `git push`. Ko je domača naloga končana, na Githubu ustvarite zahtevo za združitev (angl. Pull request).
- Kliknete na zavihek `Pull requests` in nato na zelen gumb `Create pull request`.
- Na desni strani izberete vejo `domaca-1` in kliknete na gumb `Create draft pull request`.
- Ko je koda pripravljena na pregled, kliknite na gumb `Ready for review`.
- V komentarju za novo ustvarjeno zahtevo povabite asistenta k pregledu. To storite tako, da v komentar dodate uporabniško ime asistenta (npr. `@mojZlobniAsistent`).
```
@mojZlobniAsistent Prosim za pregled.
```
#opomba(naslov: [Pri domačih nalogah se posvetujte s kolegi])[
Nič ni narobe, če za pomoč pri domači nalogi prosite kolega. Seveda morate kodo in poročilo napisati samo, lahko pa kolega prosite za pregled ali za pomoč, če vam kaj ne dela.
Domačo nalogo tudi napišete v skupini, vendar morate v tem primeru rešiti toliko različnih nalog, kot je študentov v skupini.
] |
|
https://github.com/typst/packages | https://raw.githubusercontent.com/typst/packages/main/packages/preview/mitex/0.1.0/lib.typ | typst | Apache License 2.0 | #import "mitex.typ": mitex-wasm, mitex-convert, mitex-scope, mitex, mi |
https://github.com/ukcw/typst-cv-template | https://raw.githubusercontent.com/ukcw/typst-cv-template/main/README.md | markdown | MIT License | # typst-cv-template
A Typst CV template inspired by [skyzh](https://github.com/skyzh/typst-cv-template) and [Jake's Resume](https://github.com/jakegut/resume).
## Sample CV

## Ulysses's Resume
[resume](https://ukcw.io/resume.pdf)
|
https://github.com/gongke6642/tuling | https://raw.githubusercontent.com/gongke6642/tuling/main/Math/variants.typ | typst | #set text(
size:10pt,
)
#set page(
paper:"a5",
margin:(x:1.8cm,y:1.5cm),
)
#set par(
justify: true,
leading: 0.52em,
)
= Variants
Alternate typefaces in formulas.
These functions differ from the text functions because math fonts contain multiple variants of each letter.
= Functions
= Serif
Serif (roman) typeface style in math.
This is already the default.
#image("57.png")
= Content
The content to style.
= Sans-serif
Sans-serif typeface style in math.
#image("58.png")
= Content
The content to style.
= Fraktur
Fraktur typeface style in math.
#image("59.png")
= Content
The content to style.
= Monospace
Monospace typeface style in math.
#image("60.png")
= Content
The content to style.
= bb
Blackboard bold (double-struck) typeface style in math.
For uppercase Latin letters, blackboard bold is also available through symbols of the form NN and RR.
#image("61.png")
= Content
The content to style.
= cal
Calligraphic typeface style in math.
#image("62.png")
= Content
The content to style.
|
|
https://github.com/jgm/typst-hs | https://raw.githubusercontent.com/jgm/typst-hs/main/test/typ/compiler/string-19.typ | typst | Other | // Error: 17-21 expected either `start` or `end`
#"abc".trim(at: left)
|
https://github.com/Myriad-Dreamin/typst.ts | https://raw.githubusercontent.com/Myriad-Dreamin/typst.ts/main/fuzzers/corpora/layout/par-justify_01.typ | typst | Apache License 2.0 |
#import "/contrib/templates/std-tests/preset.typ": *
#show: test-page
// Test that lines with hard breaks aren't justified.
#set par(justify: true)
A B C \
D
|
https://github.com/hugoledoux/msc_geomatics_thesis_typst | https://raw.githubusercontent.com/hugoledoux/msc_geomatics_thesis_typst/main/appendices/reproducibility.typ | typst | MIT License |
= Reproducibility self-assessment
== Marks for each of the criteria
#figure(
image("../figs/reproducibility_criteria.png"),
caption: [Reproducibility criteria to be assessed.],
)<fig:reproducibility_criteria>
Grade/evaluate yourself for the 5 criteria (giving 0/1/2/3 for each):
+ input data
+ preprocessing
+ methods
+ computational environment
+ results
== Self-reflection
A self-reflection about the reproducibility of your thesis/results.
We expect maximum 1 page here.
For example, if your data are not made publicly available, you need to justify it why (perhaps the company prevented you from doing this).
|
https://github.com/Myriad-Dreamin/tinymist | https://raw.githubusercontent.com/Myriad-Dreamin/tinymist/main/crates/tinymist-query/src/fixtures/def_use/import_star_recursive.typ | typst | Apache License 2.0 | // path: base.typ
#let x = 1;
#x
-----
// path: base2.typ
#import "base.typ": *
#let y = 2;
-----
#import "base2.typ": *
#x, #y |
https://github.com/francescoo22/LCD-exercises | https://raw.githubusercontent.com/francescoo22/LCD-exercises/main/src/G.typ | typst | #import "common.typ": *
= Exercise G
Show that the encoding of the operator Even in the mu-calculus captures the property of interest:
$ hmlr(Even(phi)) = {P | forall "complete computation" P = P_0 -> P_1 -> P_2 -> "... " exists i "s.t." sat(P_i, phi)} $
== Solution
As discussed during the lectures:
$
Even(phi) = mu X . (phi or ([Act] X and ang(Act) T))\
=> hmlr(Even(phi)) = hmlr(mu X . (phi or ([Act] X and ang(Act) T))) = fix(f_phi)
$
where $f_phi (S) = hmlr(phi or ([Act] X and ang(Act) T))_[X->S] = hmlr(phi or ([Act] S and ang(Act) T))$
The proof can be divided in two parts:
+ $hmlr(Even(phi)) subset.eq {P | forall "complete computation" P = P_0 -> P_1 -> P_2 -> "... " exists i "s.t." sat(P_i, phi)}$
+ $hmlr(Even(phi)) supset.eq {P | forall "complete computation" P = P_0 -> P_1 -> P_2 -> "... " exists i "s.t." sat(P_i, phi)}$
In fact $1. and 2. => hmlr(Even(phi)) = { P | forall "complete computation" P = P_0 -> P_1 -> P_2 -> "... " exists i "s.t." sat(P_i, phi)}$
=== Part 1
$
S
= {P | forall "complete computation" P = P_0 -> P_1 -> P_2 -> "... " exists i "s.t." sat(P_i, phi)}\
= {P | sat(P, phi) or P "can make a transition and " forall P_1 . P -> P_1 => forall "complete computation" P_1 -> P_2 -> "... " exists i "s.t." sat(P_i, phi)}\
= {P | sat(P, phi) } union ({P | forall P ->^Act P' . P in S} sect {P | exists P ->^Act P' . P' in Proc})\
= hmlr(phi) union (hmlr([Act] S) sect hmlr(ang(Act) T))\
= hmlr(phi or ([Act] S and ang(Act) T))
= f_phi (S)
= S
$
$=> S$ is a fixed point of $f_phi$ , but $hmlr(Even(phi))$ is the lfp of $f_phi$
$=> hmlr(Even(phi)) subset.eq S = {P | forall "complete computation" P = P_0 -> P_1 -> P_2 -> "... " exists i "s.t." sat(P_i, phi)}$
=== Part 2
Let $ S_n = {P | forall "complete computation" P = P_0 -> P_1 -> P_2 -> "... " exists i < n "s.t." sat(P_i, phi)} $
I want to prove that $forall n in NN . S_n subset.eq f_phi^n (emptyset)$
By induction on $n$:
*Base case $n=1$*
$ S_1 = {P | forall "complete computation" P = P_0 -> P_1 -> P_2 -> "... " exists i < 1 "s.t." sat(P_i, phi)} = {P | sat(P, phi)} \ subset.eq {P | sat(P, phi)} union hmlr([Act] emptyset and ang(Act)T) = hmlr(phi or ([Act] emptyset and ang(Act)T)) = f_phi (emptyset) $
*Inductive case $n => n+1$*
$ S_(n+1) = {P | forall "complete computation" P = P_0 -> P_1 -> P_2 -> "... " exists i < n+1 "s.t." sat(P_i, phi)} $
It is easy to see that
$
S_(n+1) = {P | sat(P, phi) or P "can make a transition and " forall P_1 . P -> P_1 => forall "complete computation" P_1 -> "... " exists i < n "s.t." sat(P_i, phi)} \
= {P | sat(P, phi) } union ({P | forall P ->^Act P' . P in S_n} sect {P | exists P ->^Act P' . P' in Proc}) \
= hmlr(phi) union (hmlr([Act] S_n) sect hmlr(ang(Act) T)) \
$
By induction
$ S_n subset.eq f_phi^n (emptyset) \
=> hmlr([Act] S_n) subset.eq hmlr([Act] f_phi^n (emptyset)) \
=> S_(n+1) = hmlr(phi) union (hmlr([Act] S_n) sect hmlr(ang(Act) T)) subset.eq hmlr(phi) union (hmlr([Act] f_phi^n (emptyset)) sect hmlr(ang(Act) T)) = f_phi^(n+1) (emptyset)
$
Since $2^Proc$ is a dcpo, by the Kleene fixed-point theorem $ hmlr(Even(phi)) = fix(f_phi) = sup ({f_phi^n (emptyset) | n in NN}) $
and so
$
forall n in NN . S_n subset.eq f_phi^n (emptyset) subset.eq fix(f_phi) = hmlr(Even(phi))\
=> {P | forall "complete computation" P = P_0 -> P_1 -> P_2 -> "... " exists i "s.t." sat(P_i, phi)} subset.eq hmlr(Even(phi))
$
|
|
https://github.com/SkiFire13/master-thesis-presentation | https://raw.githubusercontent.com/SkiFire13/master-thesis-presentation/master/common.typ | typst | #let syseq(x) = math.lr(sym.brace.l + block(math.equation(x, block: true, numbering: none)))
#let feq = math.scripts("=")
#let mathstr(s) = s.clusters().map(s => $#s$).join()
#let tt = mathstr("true")
#let ff = mathstr("false")
#let boxx(f) = $class("unary", [ #f ])$
#let diam(f) = $class("unary", angle.l #f angle.r)$
#let Inv = math.op("Inv")
#let Even = math.op("Even")
#let tup(a) = math.bold(a)
#let varempty = text(font: (), sym.emptyset)
#let sub = math.class("relation", sym.subset.eq.sq)
#let join = math.class("vary", sym.union.sq)
#let us = sym.mu + "s"
|
|
https://github.com/HiiGHoVuTi/requin | https://raw.githubusercontent.com/HiiGHoVuTi/requin/main/math/lawvere.typ | typst | #import "../lib.typ": *
#show heading: heading_fct
#import "@preview/gloss-awe:0.0.5": gls
#show figure.where(kind: "jkrb_glossary"): it => {it.body}
=== Théorème (Lawvere)
Soit $cal(C)$ une #gls(entry: "Catégorie")[catégorie] cartésienne fermée, si il existe $phi : A -> B^A$ surjective au point, alors tout endomorphisme $g : B -> B$ admet un point fixe.
=== Une preuve dans #emph("Set")
Soient $A,B$ des ensembles, puis $phi : A -> B^A$ surjective.
Soit $g : B -> B$,
on considère $q$ la fonction qui à $a in A$ associe $g(phi(a)(a))$.
#question(0)[Quel est le domaine de $q$ ? Son codomaine (ensemble d'arrivée) ?]
#correct[
$ q : A -> B $
]
#question(2)[Construire un point fixe de la fonction $g$.]
#correct[
Soit $p in A$ un antécédent de $q$ par $phi$.
Alors $phi(p)(p) in B$ est un point fixe de $g$ (il suffit de vérifier).
]
_On admet le théorème de Lawvere général dans le reste de l'énoncé (la démonstration est identique)_.
=== Théorème de Cantor
On note désormais $2 := {0,1}$ et $frak(P)(E) = {A subset E}$.
#question(0)[Rappeler la bijection entre l'ensemble des parties de $E$ et l'ensemble des fonctions de $E$ dans $2$.]
#correct[
Cette bijection est $chi_dot$ la fonction indicatrice.
]
#question(2)[Démontrer qu'il n'existe pas de surjection $phi : E arrow.twohead frak(P)(E)$.]
#correct[
Une surjection $phi : E arrow.twohead frak(P)(E)$ serait possible si on avait une surjection $psi : E arrow.twohead 2^E$.
$triangle$ Supposons par l'absurde disposer de $psi : E arrow.twohead 2^E$ surjective.
Les hypothèses du théorème sont Lawvere sont satisfaites, donc toute fonction de $2$ dans $2$ admet un point fixe. Or $"not" : 2 -> 2$ n'en admet pas, c'est absurde $arrow.zigzag$ #h(1fr) $triangle.l$
]
#question(1)[En déduire que $NN$ n'est pas équipotent à $RR$.]
#correct[
On rappelle que $RR$ est équipotent aux parties de $NN$ car une partie de $NN$ peut représenter un développement binaire d'un réel.
Mais comme il n'existe pas de surjection de $NN$ dans $frak(P)(NN)$, $RR$ et $NN$ ne sont pas équipotents. #align(right, $square$)
]
#question(1)[
Montrer que si $T,Y$ sont des ensembles et que $alpha : Y -> Y$ n'admet pas de point fixe, alors pour toute fonction $f : T times T -> Y$, il existe $g : T -> Y$ telle que
$ exists.not t in T, forall x in T, g(x) = f(t, x) $
]
On dira que $g$ n'est pas _représentable par $f$._
#correct[
C'est une simple reformulation de la contraposée du théorème de Lawvere.
$triangle$ Supposons par l'absurde que toute fonction de $T$ dans $Y$ soit représentable par un élément de $T$.
On construit $phi : T -> T^Y$ surjective et on applique le théorème de Lawvere. Alors toute fonction de $Y$ dans $Y$ admet un point fixe, c'est absurde $arrow.zigzag$ #h(1fr) $triangle.l$
]
#question(0)[
Donner la contraposée de la proposition ci-dessus.
]
#correct[
Si il existe $f : T times T -> Y$ telle que toute fonction de $T -> Y$ est représentable par un élément de $T$ avec $f$, alors toute fonction $alpha : Y -> Y$ admet un point fixe.
]
_C'est une forme alternative du théorème de point fixe._
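As a concrete finite illustration of the diagonal argument used in the Cantor section (the function and variable names here are ours, not part of the exercise): for any map $phi : E -> frak(P)(E)$, the set $D = {a in E | a in.not phi(a)}$ is never in the image of $phi$.

```python
def diagonal_witness(E, phi):
    """Return the set D = {a in E : a not in phi(a)}, which phi provably misses."""
    return {a for a in E if a not in phi(a)}

E = {0, 1, 2}
# Some arbitrary map E -> P(E), for illustration only.
phi = {0: {0, 1}, 1: set(), 2: {1, 2}}.__getitem__

D = diagonal_witness(E, phi)
# D differs from phi(a) at the element a itself, for every a,
# so D is not phi(a) for any a: phi is not surjective.
assert all(D != phi(a) for a in E)
```

The same computation works for any choice of `phi`, which is exactly the content of the theorem.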
// === Et les catégories dans cette histoire ?
// L'hypothèse $Y$ un ensemble tel qu'il existe $alpha : Y -> Y$ sans point fixe peut paraître étrange: elle est équivalente à $|Y| >= 2$.
// Là est l'intérêt des hypothèses catégoriques de l'exercice, elles restreignent l'ensemble des fonction considérées.
// Une _catégorie_ $cal(C)$ est une classe (lire ensemble) de _morphismes_ $"mor"(cal(C))$ munie de
// #align(center, grid(columns: (1fr, 1fr, 1fr),
// [- une classe $"ob"(cal(C))$],
// [- deux fonctions $"dom", "cod" : "mor"(cal(C)) -> "ob"(cal(C))$ \ \ ],
// [- une loi de composition $compose$ associative],
// [- Si $"cod"f = "dom"g,$], [$g compose f$ est défini], [$"cod"(g compose f)="cod" g$ et\ $"dom"(g compose f) = "dom"f$]
// ))
// On note $"Hom"(A, B) := {f in "mor"(cal(C)), "dom"f=A, "cod"f=B}$.
// _Toutes les catégories de cet énoncé qui en ont besoin vérifieront les hypothèses du théorème, et $B^A$ représente les morphismes de $A$ dans $B$_.
// #question(1)[Montrer que *Grp*, avec $"Ob"(bold("Grp"))$ l'ensemble des groupes et $"Hom"(G, H)$ les morphismes de $G$ dans $H$, est une catégorie.]
// #question(1)[Soit $Omega$ un ensemble ordonné. Montrer que $"Ob"(bold(Omega)) := Omega$ et $"Hom"(alpha, beta) = {alpha<=beta}$ si $alpha<=beta$ et $"Hom"(alpha, beta)=emptyset$ sinon forme bien une catégorie.]
// === Le problème de l'arrêt
// On note $cal("PC")$ l'ensemble des fonctions partiellement calculables de $NN$ dans $NN$.
// On introduit *Asm* la catégorie des assembleurs.
// #align(center, grid(columns: (1fr, 1fr),
// [- $"Ob"(bold("Asm")) := {(X, ~_X), med med med ~_X subset NN times X}$],
// [- $"Hom"((X, ~_X), (Y, ~_Y)) :=\ {f : X -> Y, exists phi_f in cal(P C), forall x in X, \ (n ~_X x ==> n in "dom"(phi_f) ), phi_f (n) ~_Y f(x)}$ ],
// ))
// Si $S subset NN$, on définit $S' := (S, =)$.
// #question(1)[Montrer que $2', NN' in "Ob"(bold("Asm"))$.]
// #question(2)[Montrer que si $S subset NN$, $"Hom"(NN', S')$ est exactement l'ensemble des fonctions calculables de $NN$ dans $S$.]
// #question(2)[Montrer que $"Hom"(NN' times NN', S')$ est exactement l'ensemble des fonctions calculables de $NN times NN$ dans $S$.]
// #question(1)[Justifier l'existence d'une surjection $e : NN -> cal("PC")$.]
// Soit $"halt"_e : NN times NN -> 2$ définie suivant $ "halt"_e(m,n) = cases(1 "si" m in "dom"(e) "et" n in "dom"(e(m)), 0 "sinon") $
// #question(2)[Justifier que cette fonction fait ce qu'on attend d'elle.]
// On définit $alpha : 2 -> 2$ avec $"dom"(alpha) = {1}$ et $alpha(1) = 1$, partiellement calculable.
// #question(2)[Montrer que pour tout $g : NN -> 2$ calculable, il existe $m_g in NN$ telle que\ $g(n) = "halt"_e (m_g, n)$.]
// #question(2)[Montrer que $"halt"_e$ n'est pas calculable.]
=== Le théorème de récursivité
On considère ici un modèle de calcul muni d'un interprète $epsilon : NN times NN -> NN$.
*Lemme*: Le théorème "S-M-N"
_Il existe $s in NN$ calculable tel que pour tout programme $e in NN$ et arguments $m,n in NN$, $ epsilon_e (m,n) = epsilon_(s(e,m))(n) $_
_On appelle $s$ une fonction d'itération_.
#question(1)[Démontrer le théorème "S-M-N".]
#correct[
À la traduction dans le modèle de calcul souhaité près,
```ocaml
let s (e, m) = fun n -> e (m, n)
```
]
Soit $cal(F) = {epsilon_f "unaire", f in NN}$ l'ensemble des fonctions unaires calculables.
*Théorème de récursivité*: Pour tout $h in cal(F)$, il existe $n_0 in NN$ tel que $epsilon_n_0 = epsilon_h(n_0)$.
On pose $f : NN times NN -> cal(F)$, $f(m,n) = epsilon_(epsilon_n (m))$, puis $Phi_h : cal(F) -> cal(F)$, $Phi_h (epsilon_n) = epsilon_(h(n))$.
#question(0)[Dessiner un diagramme avec $NN, NN times NN, cal(F), f, Phi_h$ et $Delta := x arrow.bar (x,x)$.]
#correct[
#import "@preview/fletcher:0.5.1": diagram, node, edge
#align(center, diagram(
node(( 0, 0), $NN times NN$),
node(( 0, 1), $NN$),
edge(( 0, 0), ( 0, 1), "<-", $Delta$),
node(( 1, 0), $cal(F)$),
node(( 1, 1), $cal(F)$),
edge(( 1, 0), ( 1, 1), "->", $Phi_h$, label-side: left),
edge(( 0, 0), ( 1, 0), "->", $f$)
))
]
#question(2)[Démontrer que toute fonction $g : NN -> cal(F)$ calculable est représentable par $f$.]
#correct[
// TODO(Juliette): foireux ??
$g$ est calculable donc il existe $n in NN$ tel que $g$ calcule $epsilon_n$, soit que $g(m) = epsilon_(epsilon_n(m)) = f(n,m)$.
]
#question(2)[En déduire le théorème de récursivité.]
#correct[
D'après les deux questions précédentes, le théorème de Lawvere s'applique et $Phi_h$ admet un point fixe $epsilon_n$, soit $ epsilon_n = epsilon_(h(n)) $
#align(right, $square$)
]
=== Le théorème de Rice
_On rappelle que $cal(F)$ dénote l'ensemble des fonctions unaires calculables dans le modèle de calcul $epsilon$._
Soit $cal(A) : cal(F) -> 2$ surjective.
#question(3)[Démontrer que $cal(A)$ n'est pas calculable.]
#correct[
$triangle$ On suppose par l'absurde que $cal(A)$ est calculable.
On pose $h$ le programme de $cal(F)$ calculant $"not" compose cal(A) compose epsilon$. D'après le théorème de récursivité, il existe $n_0$ tel que $epsilon_(n_0) = epsilon_(h(n_0))$.
On pose $A := cal(A)^(-1)(1)$.
$ & n_0 in A \
<==>& h(n_0) in.not A & "par définition de" h \
<==>& cal(A)(epsilon_(h_(n_0))) != 1 & "par définition de" A \
<==>& cal(A)(epsilon_(n_0)) & "par le théorème de récursivité" \
<==>& n_0 in.not A & "par définition de " A
$
C'est absurde $arrow.zigzag$ #h(1fr) $triangle.l$
]
#question(1)[En déduire que la fonction `halt` n'est pas calculable.]
#correct[
Le programme `let rec loop n = loop n` ne s'arrête pas, et le programme `let cool n = 861` termine.
Ainsi, $"arrêt" : cal(F) -> 2$ est une fonction surjective. Par le théorème de Rice, elle n'est pas calculable.
]
|
|
https://github.com/quarto-ext/typst-templates | https://raw.githubusercontent.com/quarto-ext/typst-templates/main/poster/README.md | markdown | Creative Commons Zero v1.0 Universal | # Typst Poster Format
An academic poster template designed for Typst. Supports both horizontal and vertical posters. Originally created for Typst by <NAME> (https://github.com/pncnmnp/typst-poster) and adapted for Quarto.
## Installing
```bash
quarto use template quarto-ext/typst-templates/poster
```
This will install the format extension and create an example qmd file
that you can use as a starting place for your poster.
## Using
Specify poster size, authors/affiliation, and footer content using the YAML options of the `poster-typst` format:
```yaml
---
title: This is an academic poster with typst and quarto!
format:
poster-typst:
size: "36x24"
poster-authors: "<NAME>, <NAME>, <NAME>, <NAME>"
departments: "Department of Computer Science"
institution-logo: "./images/ncstate.png"
footer-text: "posit::conf 2023"
footer-url: "https://posit.co/conference/"
footer-emails: "<EMAIL>"
footer-color: "ebcfb2"
keywords: ["Typesetting", "Typst", "Quarto"]
---
```
This is what a poster with the options specified above might look like:

|
https://github.com/dismint/docmint | https://raw.githubusercontent.com/dismint/docmint/main/networks/pset3.typ | typst | #import "template.typ": *
#show: template.with(
title: "14.15 PSET 3",
subtitle: "<NAME>",
pset: true
)
= Problem 1
#define(
title: "Aperiodic Conditions"
)[
One important condition that renders a matrix $A$ automatically aperiodic, is if there exists for some $i$, a value $A_(i i) > 0$. This automatically means that $A$ will converge as we take higher and higher powers.
]
== (a)
The matrix is aperiodic, meaning that the limit is well defined. The reason that we know it is aperiodic is because we can see that there are multiple $T_(i i)$ entries that are greater than 0. There only needs to be one such entry for the matrix to be aperiodic. Thus, we can compute $s$ as the eigenvector of $T'$
$ mat(5 / 9; 8 / 9; 1;) $
However we need to normalize, so we divide by $5 / 9 + 8 / 9 + 1 = 22 / 9$
$ mat(5 / 22; 8 / 22; 9 / 22;) $
Therefore we can compute $x*$ as:
$ x^* = mat(5 / 22, 8 / 22, 9 / 22) mat(x_1(0); dots.v; x_N(0)) = mat(5 / 22, 8 / 22, 9 / 22) times x(0) $
All values are of course, the same once the values converge.
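The computation above can be checked numerically. A small sketch (using a made-up aperiodic, strongly connected, row-stochastic matrix, since the pset's $T$ is not reproduced here): the consensus value is $s^T x(0)$, where $s$ is the normalized left eigenvector of $T$, found by power iteration on $T'$.

```python
import numpy as np

def consensus_weights(T, iters=200):
    """Left eigenvector s of a row-stochastic T (s @ T = s, s.sum() == 1),
    found by power iteration on the transpose of T."""
    s = np.ones(T.shape[0]) / T.shape[0]
    for _ in range(iters):
        s = T.T @ s
        s /= s.sum()
    return s

# Hypothetical trust matrix (each row sums to 1, diagonal entries > 0).
T = np.array([[0.5, 0.5, 0.0],
              [0.2, 0.3, 0.5],
              [0.3, 0.3, 0.4]])

s = consensus_weights(T)
x0 = np.array([1.0, 0.0, 2.0])

# Every agent's opinion converges to the same mixture s @ x0.
limit = np.linalg.matrix_power(T, 200) @ x0
assert np.allclose(limit, s @ x0)
```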
== (b)
Similar to above, the limit exists as all $T_(i i) > 0$. We can see that the top left and bottom right $3 times 3$ values are mostly filled while the top right and bottom left quadrants are completely zeroed out. This indicates to us that there are two connected components. One with agents $[1, 2, 3]$, and another with $[4, 5, 6]$.
Therefore we expect two different values of convergence, with the first group of $[1, 2, 3]$ converging to the same value and $[4, 5, 6]$ converging to another different value. In essence, we can think of this as running the DeGroot model on two distinct graphs of size $N=3$
== (c)
Once again remember the property mentioned at the start of this problem. Since agent $i$ places weight $1$ on themselves, then the DeGroot learning model will end up converging over time. In addition, since the sum of weights for an agent is equal to $1$, it must be the case that agent $i$ never changes from their initial opinion $x_i (0)$ since they don't take influence from anyone else.
However, this is not enough to show that everyone else also converges to this same value, since it can be the case where a graph converges under the DeGroot learning model, but not to the same value unless it is strongly connected. We have no such guarantee in this graph, but there is an even easier way to show all agents will eventually each $x_i (0)$
Since it is given that for all $j != i$, it is the case that $A_(j i) > 0$, this lets us see that every other agent takes some amount of influence from $i$. Thus along with the previous observation, we can make the following steps:
+ The graph converges under the DeGroot learning model.
+ The value of $i$ never changes, that is it always values itself and nothing else for all of time.
+ Every other agent values $i$ some amount.
+ If agent $j$ has a different opinion than $i$, it will change in the next time period since it values $i$ some amount.
+ The graph converges, meaning all values stabilize.
+ Since all agents take influence from $i$, they must all stabilize to $x_i (0)$, or else the value would be continuously changing and the matrix would be periodic.
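The steps above can be sanity-checked numerically (with a made-up weight matrix, not the pset's: agent $0$ places weight $1$ on itself and every other agent places some positive weight on agent $0$):

```python
import numpy as np

# Agent 0 trusts only itself; agents 1 and 2 both weight agent 0.
T = np.array([[1.0, 0.0, 0.0],
              [0.4, 0.6, 0.0],
              [0.3, 0.3, 0.4]])

x = np.array([7.0, 0.0, -2.0])  # x_0(0) = 7
for _ in range(300):
    x = T @ x

# All opinions converge to agent 0's initial opinion.
assert np.allclose(x, 7.0)
```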
= Problem 2
#note(
title: "Notation"
)[
While I am consistent throughout this question with $i$ and $j$, the notation is mostly backwards from the question statement.
]
Let us start the proof for this problem by showing that the $j$th column of $T^(t)$ is all zeros whenever, for some previous $t' < t$, the $j$th column of $T^(t')$ was all zeros.
If we view $T$ as an adjacency matrix, recall that $T^t$ conveys information about paths of length $t$ in the graph. Although traditionally the weights are binary, we can generalize to this problem by saying there is a length $t$ path between $i$ and $j$ as long as $T^t_(i j) > 0$
Now suppose there is some $t$ for which the $j$th column in $T^t$ is all zeros. This means that there is no path from any node $i$ to $j$ of length $t$. Then, this also implies that there are no paths of length $t+1$ to $j$, as that would require at least one node having a path length $t$ to $j$. Therefore, there are also no paths of length $t+2, t+3, dots$ to $j$
We can now conclude that if at some point the $j$th column of $T^(t)$ is all zeros, then for any future $t' > t$, $T^(t')$ also has the $j$th column filled with zeros.
Thus in context of the original question, this means that the $j$th agent is uninvolved in the final opinions of any agent. However, this does not mean they are uninfluenced by the others - an observation whose consequences we will see shortly.
It is then the case that the rest of the DeGroot model develops independently when taking the limit, and given that $x$ and $accent(x, hat)$ are the same except for the $j$th entry, the limit must result in the exact same value for both as well. Recall that we are given that both limits exist.
However, we still need to show the $j$th entry will be the same for both. Why can it not be the case, for example, that $x_j$ and $accent(x, hat)_j$ converge to different values, since the rest of the graph does not get impacted by them?
The reason stems from the fact that this is a relation under the DeGroot model, which guarantees us that the matrix $T$ is row-stochastic. Therefore, agent $j$ must place some weight on someone else. Agent $j$ *can't* place a weight of $1$ on themselves, because that would make $T_(j j) = 1$, contradicting $T_(j j) = 0$, which we know from the premise of the question. It must then be the case that they take influence from some other nodes, and their value is not independent of the rest of the graph.
In short, the limiting values of $x_j$ and $accent(x, hat)_j$ are completely independent of $x_j (0)$ and $accent(x, hat)_j (0)$, and instead rely on the rest of the agents, whose starting values are exactly the same. We can now say that these values will also converge to the same limit under these conditions.
Thus we conclude that the limits are the same under the given conditions, despite the potential starting values of $x_j (0)$ and $accent(x, hat)_j (0)$ differing.
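This, too, can be checked numerically. Below is a small sketch (hypothetical matrix; not part of the original solution) where the $j$th column of $T$ is all zeros and two starting vectors differ only in entry $j$:

```python
# If column j of T is all zeros, agent j's starting opinion never enters
# anyone's update -- including, after one step, agent j's own.
T = [
    [0.5, 0.5, 0.0],  # nobody (including agent 2) weights agent 2
    [0.3, 0.7, 0.0],
    [0.6, 0.4, 0.0],  # agent 2 still takes influence from agents 0 and 1
]

def step(T, x):
    return [sum(t_ij * x_j for t_ij, x_j in zip(row, x)) for row in T]

x = [1.0, 0.0, 5.0]    # x_2(0) = 5
xh = [1.0, 0.0, -5.0]  # identical start, except the j = 2 entry
for _ in range(200):
    x, xh = step(T, x), step(T, xh)

print(x == xh)  # True: the limits agree despite different x_2(0)
```

After a single step the two trajectories coincide exactly, so the limits are identical, matching the conclusion above.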
|
|
https://github.com/wuc9521/CS-scholars-report | https://raw.githubusercontent.com/wuc9521/CS-scholars-report/main/sections/impl.typ | typst | = Implementation
== System Architecture
This section includes a detailed description of the system architecture. Generally speaking, our system architecture follows a three-tier model:
- *Frontend*: A React-based user interface that provides an interactive and responsive experience for users to explore and manage CS scholar data.
- *Backend*: A Java Spring Boot application that serves as the middle tier, handling business logic, data processing, and API endpoints. It acts as an intermediary between the frontend and the database.
- *Database*: A PostgreSQL database deployed on an Ubuntu server, which stores and manages all the structured data for our CS scholars application.
=== Frontend
Our frontend is built using `React`. This allows us to display the dynamically changing results of queries and keep our application responsive.
We've structured the application around four main components, allowing users to easily navigate between them and access their desired information:
- The *`Scholars`* component displays a searchable and sortable list of academic professionals. Users can filter scholars by location or major, sort them by publication count or h-index, and even isolate those with publications but no grants.
- The *`Locations`* component offers similar functionality for research locations. Users can filter locations by major, and sort them by number of scholars with major, total grants, or maximum h-index.
- When a user selects a specific *`Scholar`* or *`Location`*, they can access the respective detailed information from *`Profile`* or *`LocationProfile`* components.
- The *`Profile`* component displays in depth data about a scholar such as major, h-index, location, publications, and grants. It also includes forms for adding new publications, grants, and updating existing information.
- The *`LocationProfile`* component displays in depth data about a location such as the country, scholars, total grants, and publications.
All of these components use `React` hooks for state management and `axios` for API communication with the backend to ensure efficient data handling and responsive interactions. Additionally, we used `React Bootstrap` to keep the design neat, consistent and responsive across all devices.
=== Backend
We used `Java` and the `Spring Boot` framework for our backend. Using `Spring MVC` we built a `REST` API which our frontend uses to interact with the system's data and functions. We planned to handle data storage and management with `JPA` and `Hibernate`, which make it easy to work with our data models. However, we mostly needed to use native `SQL` queries inside `JPA`'s `@Query` annotations.
We've defined repositories (`PeopleRepository`, `LocationsRepository`, `PublicationsRepository`, and `GrantsRepository`) that extend `JpaRepository` to provide basic CRUD operations and sometimes implement more complex queries. For example, in `PeopleRepository`:
```java
@Query(nativeQuery = true, value = "SELECT p.pid, p.name, p.major, p.hindex, " +
"l.loc_name AS location, l.locid as location_id, " +
"(SELECT COUNT(*) FROM Publish pub WHERE pub.pid = p.pid) AS publication_count, " +
"(SELECT COUNT(*) FROM Has o WHERE o.pid = p.pid) AS grant_count " +
"FROM People p " +
"JOIN \"in\" i ON p.pid = i.pid " +
"JOIN Locations l ON i.locid = l.locid " +
"WHERE p.pid = :personId")
Map<String, Object> getPersonProfile(@Param("personId") Long personId);
```
This query fetches a detailed profile for a person including their publication and grant counts.
We've implemented four main controllers: `PeopleController`, `LocationsController`, `PublicationsController`, and `GrantsController`. Here's a comprehensive list of the endpoints provided:
*`PeopleController`* endpoints:
- GET `/api/scholar/publication-count`: Retrieve scholars sorted by publication count
- GET `/api/scholar/hindex`: Retrieve scholars sorted by h-index
- GET `/api/scholar/publications-no-grants`: Get scholars with publications but no grants
- GET `/api/scholar/{personId}/profile`: Get a detailed profile for a specific scholar
- POST `/api/scholar/add`: Add a new scholar
- PUT `/api/scholar/update-hindex`: Update a scholar's h-index
- POST `/api/scholar/add-publication`: Add a new publication for a scholar
- POST `/api/scholar/assign-grant`: Assign a grant to a scholar
- PUT `/api/scholar/change-location`: Change a scholar's location
*`LocationsController`* endpoints:
- GET `/api/locations`: Get all locations
- GET `/api/locations/{id}`: Get a specific location by ID
- GET `/api/locations/person/{personId}`: Get locations for a specific person
- GET `/api/locations/by-people-count`: Get locations sorted by number of scholars
- GET `/api/locations/by-grant-count`: Get locations sorted by number of grants
- GET `/api/locations/by-max-hindex`: Get locations sorted by maximum h-index
- GET `/api/locations/{id}/profile`: Get a detailed profile for a specific location
- POST `/api/locations`: Add a new location
- DELETE `/api/locations/{id}`: Delete a location
*`PublicationsController`* endpoints:
- GET `/api/pub`: Get all publications
- GET `/api/pub/{id}`: Get a specific publication by ID
- GET `/api/pub/scholar/{pid}`: Get all publications for a specific scholar
- POST `/api/pub`: Create a new publication
- PUT `/api/pub/{id}`: Update a publication
- DELETE `/api/pub/{id}`: Delete a publication
*`GrantsController`* endpoints:
- GET `/api/grants`: Get all grants
- GET `/api/grants/{id}`: Get a specific grant by ID
- GET `/api/grants/scholar/{pid}`: Get all grants for a specific scholar
- POST `/api/grants`: Create a new grant
We used a service layer to include any necessary logic between controllers and repositories. For data consistency in complex operations, we've used the `@Transactional` annotation in our service layer.
=== Database
We use `PostgreSQL` deployed on an Ubuntu server. The database stores our data and stored procedures, and the PostgreSQL DBMS processes SQL queries.
We used stored procedures for functionality that has many parameters and requires detailed checks and multiple inserts. For example, this is the stored procedure used to insert a new scholar:
```sql
CREATE OR REPLACE FUNCTION AddNewPerson(
p_name VARCHAR(100),
p_major VARCHAR(50),
p_hindex INT,
p_location VARCHAR(100)
) RETURNS void AS $$
DECLARE
new_pid INT;
loc_id INT;
BEGIN
SELECT MAX(pid) + 1 INTO new_pid FROM people;
INSERT INTO people (pid, name, major, hindex)
VALUES (new_pid, p_name, p_major, p_hindex);
SELECT locid INTO loc_id FROM locations WHERE loc_name = p_location;
IF NOT FOUND THEN
SELECT MAX(locid) + 1 INTO loc_id FROM locations;
INSERT INTO locations (locid, loc_name)
VALUES (loc_id, p_location);
END IF;
INSERT INTO "in" (pid, locid) VALUES (new_pid, loc_id);
END;
$$ LANGUAGE plpgsql;
```
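A call like the following (with hypothetical values) would exercise the function; it inserts the scholar and creates the location row only if one does not exist yet:

```sql
-- Hypothetical example call: adds a scholar and links them to a location.
SELECT AddNewPerson('Ada Lovelace', 'computer science', 12, 'Madison');

-- Verify the insert and the relationship:
SELECT p.name, l.loc_name
FROM people p
JOIN "in" i ON p.pid = i.pid
JOIN locations l ON i.locid = l.locid
WHERE p.name = 'Ada Lovelace';
```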
=== Third Normal Form
Our database is in 3NF. The functional dependencies are as follows:
- *`Person`*:
- `pid` $arrow$ `name`, `major`, `hindex`
- *`Location`*:
- `locid` $arrow$ `loc_name`, `city`, `state`, `country`
- *`Publication`*:
- `pubid` $arrow$ `pmid`, `doi`
- `pmid` $arrow$ `pubid`, `doi`
- `doi` $arrow$ `pubid`, `pmid`
- *`Grant`*: `grantid` $arrow$ `budget_start`
- *`In`*, *`Publish`*, *`Obtain`*: no non-trivial functional dependencies
For every non-trivial functional dependency, the left-hand side is a superkey or the right-hand side is a prime attribute. Therefore, the database is in 3NF.
== Dataset
Our dataset is a snapshot of #link("https://academictree.org")[Academic Family Tree] taken on 2024-02-20. The data can be accessed from #link("https://zenodo.org/records/6349537")[this link] and explanations of columns are provided on #link("https://academictree.org/export.php")[the official website].
Originally, we downloaded these `csv` files:
- `locations.csv` (10 columns, 46,826 rows),
- `people.csv` (17 columns, 859,814 rows),
- `peopleGrant.csv` (13 columns, 6,123,642 rows),
- `authorPub[05-35].csv` (9 columns, 75,423,143 rows)
These datasets appeared to be exported from a relational database, as each entity has a unique id (`pid`, `locid`, `grantid`, `pubid`) and relations are represented by attributes referencing other entities' ids. However, there were two main issues.
The first issue was that some values weren't in the format we wanted, and there were a lot of missing values. We imported the files into Python using `pandas`, formatted the columns correctly, and decided not to use the columns with too many empty values. We also decided to remove people whose name or major was empty, as having these people in our database would not give the user any useful information. Initially we also removed people whose h-index was empty; however, this removed too much valuable information that might be useful to our users, so we decided against it.
The second issue was that a total of 82,453,425 rows was too many for us to handle. To address this, we limited our scope to people whose majors included computer science, and removed locations, grants, and publications not in a relationship with these people. After this we ended up with a total of 1,966,747 rows.
Finally, we used Python scripts to combine the data into relationship and entity sets and automatically insert it into our database. We first tried to insert the data remotely, which took a very long time, so we switched to inserting it locally on the server.
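A condensed, hypothetical sketch of that cleaning step (column names simplified from the real 17-column export) might look like this:

```python
import pandas as pd

# Hypothetical miniature of people.csv; the real file has 17 columns.
people = pd.DataFrame({
    "pid": [1, 2, 3, 4],
    "name": ["Alice", "", "Carol", "Dan"],
    "major": ["computer science", "computer science", "", "biology"],
    "hindex": [10, 5, 7, None],
})

# Drop rows with an empty name or major -- they carry no useful information.
people = people[(people["name"] != "") & (people["major"] != "")]

# Keep only scholars whose major mentions computer science.
cs_people = people[people["major"].str.contains("computer science", case=False)]

print(cs_people["pid"].tolist())  # -> [1]
```

The same filters, applied to the full export, are what reduced the dataset to the computer-science subset described above.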
More details about our specific implementation for this process can be accessed from:
- Data Cleaning: https://github.com/ArdaGurcan/CS-Scholars/blob/main/cleaning.ipynb
- Data Insertion: https://github.com/wuc9521/CS-scholars-backend/tree/main/scripts
== ER Diagram
A detailed explanation of the ER diagram is provided in #ref(<sec:relational-model>).
#figure(
image(
"../figures/er.png",
width: 70%,
),
caption: "ER Diagram",
)<fig:er>
== Relational Model<sec:relational-model>
+ *`Person`*(#underline[`pid`], `name`, `major`, `hindex`)
- `pid` is the primary key
+ *`Location`*(#underline[`locid`], `loc_name`, `city`, `state`, `country`)
- locid is the primary key
+ *`Publication`*(#underline[`pubid`], `pmid`, `doi`)
- `pubid` is the primary key
- `pmid` and `doi` are also keys
+ *`Grant`*(#underline[`grantid`], `budget_start`)
- `grantid` is the primary key
+ *`In`*(#underline[`pid`, `loc_id`])
- `(pid, loc_id)` is a composite key
+ *`Publish`*(#underline[`pid`, `pubid`])
- `(pid, pubid)` is a composite key
+ *`Obtain`*(#underline[`pid`, `grantid`])
- `(pid, grantid)` is a composite key
== Prototype
We managed to implement all the frontend and backend functionality specified in this report for the prototype. We used our own laptops to locally run the frontend using `Node.js` and the backend using `Apache Maven`. We hosted the `PostgreSQL` database on a remote `Ubuntu` VPS Server for persistency.
A detailed demo of our application prototype can be accessed from #link(
"https://www.youtube.com/watch?v=ILDfNDC0rio"
)[this link].
- Frontend source code: https://github.com/ardaGurcan/cs-scholars-frontend
- Backend source code: https://github.com/wuc9521/cs-scholars-backend
== Evaluation
We mainly design our evaluation from three aspects:
+ *Data Accuracy Evaluation*: *CRUD* directly from backend and check.
+ *Data Consistency Evaluation*: end-to-end data consistency.
+ *Performance Evaluation*: regarding concurrency, indexing etc.
As required in the #underline[#link("https://canvas.wisc.edu/courses/402238/discussion_topics/1885324")[announcement]], we also include the feedback from Checkpoint 4.
#figure(
image(
"../figures/fb.png",
width: 40%,
),
caption: "Feedback of Checkpoint 4",
)<fig:fb>
=== Data Accuracy Evaluation
We designed 5 test cases for this evaluation. The testing source file(s) can be accessed from #link("https://github.com/wuc9521/CS-scholars-backend/tree/main/src/test/java/site/wuct/scholars/controller")[this link].
- `addNewPerson()`: Insert a new record and verify successful insertion
- `getPersonProfile()`: Read an existing record and verify data correctness
- `updatePublicationAndVerifyChanges()`: Update a record and verify changes are saved
- `deletePublicationAndVerifyRemoval()`: Delete a record and verify it has been removed
- `updateNonExistentPublication()`: Updating non-existent records
#figure(
image(
"../figures/pplct.png",
width: 100%,
),
caption: `PeopleControllerTest.java`,
)<fig:pplct>
#figure(
image(
"../figures/pubct.png",
width: 100%,
),
caption: `PublicationsControllerTest.java`,
)<fig:pubct>
For instance, in #ref(<code:updateNonExistentPublication>) we tested updating a non-existent publication to make sure our system handles it correctly without crashing. We use `MockMvc` from the Spring framework, which lets us test our controllers without actually sending HTTP requests. This kind of edge-case testing is very important: it helps us catch unexpected errors and makes our system more reliable.
#figure(
```java
/**
* Test for updating a non-existent publication
*
* @throws Exception if the request fails
*/
@Test
void updateNonExistentPublication() throws Exception {
Long nonExistentId = 999L;
when(publicationsService.findById(nonExistentId)).thenReturn(null);
mockMvc.perform(put("/api/pub/{id}", nonExistentId)
.contentType(MediaType.APPLICATION_JSON)
.content("{\"doi\":\"10.1000/nonexistent\"}"))
.andExpect(status().isNotFound());
verify(publicationsService, never()).save(any(Publication.class));
}
```,
caption: "update a non-existent publication",
)<code:updateNonExistentPublication>
=== Data Consistency Evaluation
We designed 5 test cases for this evaluation. The testing source file(s) can be accessed from #link("https://github.com/ArdaGurcan/cs-scholars-frontend/tree/main/src/__tests__")[this link].
- Perform a create operation on the frontend and verify the data is correctly stored in the backend
- Search for a scholar and verify the end-to-end data consistency
- Filter scholars with specific criteria and verify backend API call
- Test different sorting criteria for scholars and verify corresponding backend calls
- Simulate network interruptions during data transfer and test recovery mechanisms
#figure(
image(
"../figures/e2e.png",
width: 70%,
),
caption: `Scholars.test.js`,
)<fig:e2e>
For example, as shown in #ref(<code:networkError>), this end-to-end testing helps us catch issues that might only appear when the whole system is working together. Similarly, in this frontend test we use the Jest testing framework to mock `axios` API calls, so we can test these edge cases without affecting real data in our database.
#figure(
```javascript
test('5. Simulates network error and tests error handling', async () => {
const mockOnSelectScholar = jest.fn()
const consoleSpy = jest.spyOn(console, 'error').mockImplementation(() => { });
axios.get.mockRejectedValueOnce(new Error('Network Error'))
render(<Scholars onSelectScholar={mockOnSelectScholar} />)
fireEvent.click(screen.getByText(/Search/i))
await waitFor(() => {
expect(consoleSpy).toHaveBeenCalledWith('Error fetching scholars:', expect.any(Error));
});
consoleSpy.mockRestore();
});
```,
caption: "Simulates network error and tests error handling",
)<code:networkError>
=== Performance Evaluation
We designed 5 test cases for this evaluation. The testing source file(s) can be accessed from #link("https://github.com/wuc9521/CS-scholars-backend/tree/main/src/test/java/site/wuct/scholars/performance")[this link].
- `testLargeDatasetQuery()`: Measure response time for querying a large dataset
- `testConcurrentConnections()`: Test the system's ability to handle multiple concurrent database connections
- `testComplexJoinOperation()`: Evaluate the performance of complex join operations or aggregations
- `testIndexImpact()`: Measure the impact of indexing on query performance
- `testSustainedLoad()`: Test the system's performance under sustained load over an extended period
#figure(
image(
"../figures/pfm.png",
width: 100%,
),
caption: `PerformanceTest.java`,
)<fig:pfm>
For example, as shown in #ref(<code:indexImpact>), we tested the impact of indexing on query performance. We used random data to minimize cache effects and get a fair comparison. Our tests showed that adding an index significantly improved query speed. This kind of testing helps us make sure our system stays fast even with lots of data.
#figure(
```java
/**
* Test the impact of indexing on query performance
*/
@Test
public void testIndexImpact() throws SQLException {
jdbcTemplate.execute("DROP INDEX IF EXISTS idx_people_name"); // Ensure the index doesn't exist
BiFunction<String, Integer, Long> runQueries = (indexStatus, iterations) -> { // Function to run test queries
long totalTime = 0;
for (int i = 0; i < iterations; i++) { // Generate a random string to search for
String randomString = generateRandomString(3);
String query = "SELECT * FROM people WHERE name LIKE '" + randomString + "%' LIMIT 100";
// Flush cache by querying large amount of unrelated data
jdbcTemplate.query("SELECT * FROM people ORDER BY RANDOM() LIMIT 10000", (rs, rowNum) -> null);
long startTime = System.nanoTime();
List<Map<String, Object>> results = jdbcTemplate.queryForList(query);
totalTime += System.nanoTime() - startTime;
System.out.println("Query for '" + randomString + "' returned " + results.size() + " results");
}
double avgTime = totalTime / iterations / 1_000_000.0;
System.out.println("Average time " + indexStatus + " index: " + avgTime + " ms");
return totalTime;
};
long timeWithoutIndex = runQueries.apply("without", 20); // Test without index
jdbcTemplate.execute("CREATE INDEX IF NOT EXISTS idx_people_name ON people(name)"); // Create the index
long timeWithIndex = runQueries.apply("with", 20); // Test with index
jdbcTemplate.execute("DROP INDEX IF EXISTS idx_people_name"); // Clean up
System.out.println("Time without index: " + (timeWithoutIndex / 20 / 1_000_000.0) + " ms");
System.out.println("Time with index: " + (timeWithIndex / 20 / 1_000_000.0) + " ms");
assertTrue(timeWithIndex < timeWithoutIndex,
"Index did not improve performance. Without index: " + (timeWithoutIndex / 20 / 1_000_000.0) +
" ms, With index: " + (timeWithIndex / 20 / 1_000_000.0) + " ms");
}
```,
caption: "Test the impact of indexing on query performance",
)<code:indexImpact>
|
|
https://github.com/Ciolv/typst-template-bachelor-thesis | https://raw.githubusercontent.com/Ciolv/typst-template-bachelor-thesis/main/acronyms.typ | typst | // Dictionary with acronyms
// Acronym is key, value is ("long version", "long version plural ending", "short version plural ending")
// Plural endings are optional, "s" is default
// Values have to be at least ("",) where the comma is important
#let acronyms = (
API: ("Application Programming Interface", "s", "s"),
PEBKAC: ("Problem Exists Between Keyboard And Chair", "", ""),
acro: ("acronym",)
)
// The state which tracks the used acronyms
#let usedAcronyms = state("usedDic", (empty: false,))
// Check if acronym is in the acronym list
#let acronym_exists(ac) = {
return acronyms.keys().contains(ac)
}
// The acronym itself
#let acronym_short(ac) = {
return ac
}
// The acronyms meaning
#let acronym_long(ac) = {
return acronyms.at(ac).at(0)
}
// The acronym in its plural
#let acronym_short_plural(ac) = {
if (acronyms.at(ac).len() > 2) {
return acronym_short(ac) + acronyms.at(ac).at(2)
}
return acronym_short(ac) + "s"
}
// The acronyms meaning in plural
#let acronym_long_plural(ac) = {
if (acronyms.at(ac).len() > 1) {
return acronym_long(ac) + acronyms.at(ac).at(1)
}
return acronym_long(ac) + "s"
}
// The acronyms meaning with its short form appended
// Used to introduce an acronym
#let acronym_full(ac) = {
return acronym_long(ac) + " (" + acronym_short(ac) + ")"
}
// The acronyms meaning with its short form appended, both in plural
// Used to introduce an acronym
#let acronym_full_plural(ac) = {
return acronym_long_plural(ac) + " (" + acronym_short_plural(ac) + ")"
}
// Error message to alert for undeclared acronym usage
#let acronym_error(ac) = {
if(acronyms.keys().contains(ac) == false) {
return rect(
fill: red,
inset: 8pt,
radius: 4pt,
[*Error*: Acronym *#ac* not found!],
)
}
}
// To mark an acronym as used in text
// Following usages of ac or acp will not provide the long version anymore
#let declare_acronym_used(ac) = {
usedAcronyms.update(usedDic => {
usedDic.insert(ac, true)
return usedDic
})
}
// The function which either shows the acronym or the full text for it
#let ac(ac) = {
if (not acronym_exists(ac)) {
return acronym_error(ac)
}
usedAcronyms.display(usedDic => {
if(usedDic.keys().contains(ac)) {
return acronym_short(ac)
}
return acronym_full(ac)
});
declare_acronym_used(ac)
}
// The function which either shows the acronym or the full text for it in plural
#let acp(ac) = {
if (not acronym_exists(ac)) {
return acronym_error(ac)
}
usedAcronyms.display(usedDic => {
if(usedDic.keys().contains(ac)) {
return acronym_short_plural(ac)
}
return acronym_full_plural(ac)
});
declare_acronym_used(ac)
}
// The acronym itself
#let acs(ac) = {
if (not acronym_exists(ac)) {
return acronym_error(ac)
}
usedAcronyms.display(usedDic => {
return acronym_short(ac)
});
declare_acronym_used(ac)
}
// The acronym itself in plural
#let acsp(ac) = {
if (not acronym_exists(ac)) {
return acronym_error(ac)
}
usedAcronyms.display(usedDic => {
return acronym_short_plural(ac)
});
declare_acronym_used(ac)
}
// The acronyms meaning
#let acl(ac) = {
if (not acronym_exists(ac)) {
return acronym_error(ac)
}
usedAcronyms.display(usedDic => {
return acronym_long(ac)
});
declare_acronym_used(ac)
}
// The acronyms meaning in plural
#let aclp(ac) = {
if (not acronym_exists(ac)) {
return acronym_error(ac)
}
usedAcronyms.display(usedDic => {
return acronym_long_plural(ac)
});
declare_acronym_used(ac)
}
// Print the acronym register/listing
// Only used acronyms will be printed
#let print_acronym_listing(outline_name) = {
locate(loc => if (usedAcronyms.final(loc).len() > 1) {
[
// Acronym register title
#v(1em)
#heading(level: 1, numbering: none, outline_name)
#v(0.5em)
// Acronym register content
#text(size: 1em, locate(loc => usedAcronyms.final(loc)
.pairs()
.filter(x => x.last())
.map(pair => pair.first())
.sorted()
.map(key => grid(
columns: (1fr, auto),
gutter: 1em,
strong(key), acronyms.at(key).at(0)
)).join()))
]
})
}
|
|
https://github.com/pluttan/os | https://raw.githubusercontent.com/pluttan/os/main/lab2/lab2.typ | typst | #import "@docs/bmstu:1.0.0":*
#import "@preview/tablex:0.0.8": tablex, rowspanx, colspanx, cellx
#show: student_work.with(
  caf_name: "Computer Systems and Networks",
  faculty_name: "Informatics and Control Systems",
  work_type: "laboratory work",
  work_num: 2,
  study_field: "09.03.01 Computer Science and Engineering",
  discipline_name: "Operating Systems",
  theme: "Investigating the Linux OS Boot Process",
  author: (group: "ИУ6-52Б", nwa: "<NAME>"),
  adviser: (nwa: "<NAME>"),
  city: "Moscow",
table_of_contents: true,
)
= Введение
== Цель работы
Цель работы - исследование процесса загрузки Linux на примере дистрибутива Debian. Освоение работы с интерпретатором командной строки «bash». А так же освоение
подключения репозиториев и установки пакетов.
== Задание
Согласно данному пособию:
- Выполнить загрузку без ввода пароля и смену пароля суперпользователя
- Научиться пользоваться «bash»
- Научиться пользоваться «vi»
- Добавить в список репозиториев официальный репозиторий Debian
- Установить предложенные пакеты
- Отобразить в w3m произвольную страницу.
Включить в отчёт основные действия и результаты этих действий - снимки экрана.
= Выполнение
== Выполнить загрузку без ввода пароля и смену пароля root
При загрузке ОС сначала считывается первичный загрузчик -- GNU GRUB. Можно приостановить/отменить дальнейшую загрузку ОС и зайти в конфигурационные файлы. Там можно прописать любые комманды при загрузке, таким образом можно получить права на изменения пароля просто прописав ```sh init=/bin/bash```.
#img(image("img/2.png", width:90%), [Интерфейс GNU GRUB])
#img(image("img/4.png", width:90%), [Файл конфигурации загрузки с дописанной строкой])
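For reference, after appending the parameter, the kernel line in the GRUB editor typically looks like the following (the kernel version and root device here are hypothetical):

```sh
linux /boot/vmlinuz-6.1.0-amd64 root=/dev/sda1 ro quiet init=/bin/bash
```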
After the file has been modified, we continue booting the system with the current configuration. A few seconds later we gain access to the command line, bypassing the login and password prompt.
#img(image("img/5.png", width:90%), [Command-line access])
Now the file system needs to be remounted with write permissions: ```sh mount -rw -o remount /```
#img(image("img/7.png", width:90%), [The executed command])
Now we issue the ```sh passwd``` command and enter the new password twice.
#img(image("img/8.png", width:90%), [Changing the password])
We reboot the machine and log in as the superuser with the new password.
#img(image("../lab1/img/26.png", width:90%), [Command-line access obtained with the new password])
== Adding the official Debian repository to the list of repositories
Before adding the official Debian repository to the list, let us find out the installed version. To do this, we run ```sh cat /proc/version```, i.e. read the contents of the file that holds information about the installed system. As we can see, version 13 with the codename trixie is in use; this will be needed when editing the repositories file.
#img(image("img/18.png", width:100%), [```sh cat /proc/version```])
Now let us use the built-in vi editor to edit the file `/etc/apt/sources.list`. To do this, we type the command ```sh vi /etc/apt/sources.list```
#img(image("img/11.png", width:90%), [```sh vi /etc/apt/sources.list```])
We end up in the built-in editor. The required line with the needed repository is already there. Let us switch to visual mode (v) and select it using the navigation keys (hjkl).
#img(image("img/17.png", width:90%), [The required line])
As we can see, the repository version matches the system version by codename. Since the repository was already added, we demonstrate the editor's third mode, insert mode, by simply adding another comment on the first line.
#img(image("img/32.png", width:90%), [Demonstrating vi insert mode])
Let us enter command mode and type the save command together with the quit command: ```cpp :wq```.
#img(image("img/21.png", width:90%), [Exited vi])
== Installing the suggested packages
Let us update the package lists (```sh apt-get update```):
#img(image("img/22.png", width:90%), [Updating the package lists: ```sh apt-get update```])
Let us install several packages:
```sh apt-get install psmisc net-tools w3m```
#img(image("img/24.png", width:90%), [Start of ```sh apt-get install psmisc net-tools w3m```])
#img(image("img/25.png", width:90%), [End of ```sh apt-get install psmisc net-tools w3m```])
== Displaying an arbitrary page in w3m
Let us display `google.com` in w3m (```sh w3m google.com```):
#img(image("img/31.png", width:90%), [```sh w3m google.com```])
#img(image("img/30.png", width:90%), [Displaying `google.com` in the tty])
|
https://github.com/tilman151/pypst | https://raw.githubusercontent.com/tilman151/pypst/main/README.md | markdown | MIT License | # Pypst
Declarative Typst in Python with Pandas data frame support.
Pypst helps you dynamically generate [Typst](https://typst.app) code directly in Python. No manual string manipulation required.
It has two major use cases:
- Generating full Typst documents to be rendered as PDFs
- Generating and styling Typst tables from Pandas data frames to be included in another Typst document
Pypst produces human-readable Typst code that you can modify and extend.
It will not compile Typst code for you, but you can either use the [Typst compiler](https://github.com/typst/typst/releases) to do so in the command line or install the [Typst Python package](https://pypi.org/project/typst/).

## Installation
Pypst is available on PyPI and can be installed via pip.
```bash
pip install pypst
```
## Usage
Pypst contains classes that represent Typst elements.
You can create a Typst document by instantiating these classes and nesting them as needed.
```pycon
>>> import pypst
>>> heading = pypst.Heading("My Heading", level=1)
>>> itemize = pypst.Itemize(["First item", "Second item", "Third item"])
```
Each of these classes has a `render` method that returns the Typst code as a string.
```pycon
>>> print(heading.render())
= My Heading
>>> print(itemize.render())
- First item
- Second item
- Third item
```
If you want to combine multiple elements into a single document, you can use the `Document` class.
You can even add imports for other Typst files or packages.
```pycon
>>> document = pypst.Document([heading, itemize])
>>> document.add_import("utils.typ")
>>> print(document.render())
#import "utils.typ"
<BLANKLINE>
= My Heading
<BLANKLINE>
- First item
- Second item
- Third item
```
The output of the `render` method can be written to a `.typ` file for compilation.
The [documentation](https://krokotsch.eu/pypst) contains more extensive tutorials on generating [documents](https://krokotsch.eu/pypst/usage) and generating [tables from Pandas data frames](https://krokotsch.eu/pypst/pandas).
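To turn the rendered code into a file on disk, you can write it out yourself (this continues the `document` example from above; the file name is arbitrary):

```pycon
>>> with open("my-document.typ", "w") as file:
...     _ = file.write(document.render())
```

The resulting file can then be compiled with `typst compile my-document.typ`, or with the Typst Python package.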
## Limitations
Pypst is not a Typst compiler.
It doesn't check for syntax errors or compile Typst code into PDFs.
Pypst interprets any string as a Typst literal, so `stroke = "1pt"` will be rendered as `stroke: 1pt`.
If you want to pass a string literal, you need to wrap it in quotes, for example, `caption = '"My Caption"'`.
When Pypst encounters a string wrapped in quotes without need, it will automatically remove them.
For a content block, you can wrap the string in brackets instead, for example, `caption = "[My content]"`.
## Roadmap
If there is time and people are interested, I would like to add the following features:
- [x] Complete table attributes (for example, `fill` is missing)
- [ ] Support automatic formatting for common workflows, like automatically merging multi-level columns with mean and standard deviation
- [x] Add more Typst elements (like headings or lists) to make building more complex documents easier
## Contributing
Pull requests are welcome. For major changes, please open an issue first to discuss what you would like to change.
|
https://github.com/Enter-tainer/typstyle | https://raw.githubusercontent.com/Enter-tainer/typstyle/master/tests/assets/unit/markup/raw-backtick.typ | typst | Apache License 2.0 | #(``` ` ```,)
#(```js` ```)
#(```js`code` ```)
#(```md`code`+other`code` ```)
|
https://github.com/ShapeLayer/ucpc-solutions__typst | https://raw.githubusercontent.com/ShapeLayer/ucpc-solutions__typst/main/tests/utils_make-prob-overview/test.typ | typst | Other | #import "/lib/lib.typ" as ucpc
#import ucpc.presets: difficulties as lv
#ucpc.utils.make-prob-overview(
font-size: .8em,
[A], [Problem A], lv.easy, [Author a],
[B], [Problem B], lv.normal, [Author b],
[C], [Problem C], lv.hard, [Author c],
[D], [Problem D], lv.challenging, [Author b],
) |
https://github.com/typst-community/guidelines | https://raw.githubusercontent.com/typst-community/guidelines/main/src/main.typ | typst | Creative Commons Attribution Share Alike 4.0 International | #import "/src/util.typ": *
#import mantys: *
#let doc = toml("/typst.toml").package
#show: mantys.with(
..doc,
title: "API & Style Guidelines",
subtitle: "Guidelines for Package & Template Development",
date: datetime(year: 2024, month: 05, day: 01),
abstract: [
This document contains stylistic guidelines and API best practices and patterns to help package and template developers create simple to use packages with extensible and interoperable APIs.
The stylistic guidelines apply to documents in general and make it easier for others to read and contribute to code.
],
// examples-scope: (),
)
#set heading(offset: 1)
#chapter[Manifest]
// TODO
#chapter[Style]
#include "chapters/style.typ"
#chapter[API]
#include "chapters/api.typ"
|
https://github.com/LDemetrios/Typst4k | https://raw.githubusercontent.com/LDemetrios/Typst4k/master/src/test/resources/suite/foundations/array.typ | typst | // Test arrays.
--- array-basic-syntax ---
#set page(width: 150pt)
// Empty.
#()
// Not an array, just a parenthesized expression.
#(1)
// One item and trailing comma.
#(-1,)
// No trailing comma.
#(true, false)
// Multiple lines and items and trailing comma.
#("1"
, rgb("002")
,)
--- array-bad-token ---
// Error: 4-6 unexpected end of block comment
// Hint: 4-6 consider escaping the `*` with a backslash or opening the block comment with `/*`
#(1*/2)
--- array-bad-number-suffix ---
// Error: 6-8 invalid number suffix: u
#(1, 1u 2)
--- array-leading-comma ---
// Error: 3-4 unexpected comma
#(,1)
--- array-incomplete-pair ---
// Missing expression makes named pair incomplete, making this an empty array.
// Error: 5 expected expression
#(a:)
--- array-named-pair ---
// Named pair after this is already identified as an array.
// Error: 6-10 expected expression, found named pair
#(1, b: 2)
--- array-keyed-pair ---
// Keyed pair after this is already identified as an array.
// Error: 6-14 expected expression, found keyed pair
#(1, "key": 2)
--- array-bad-conversion-from-string ---
// Error: 8-15 expected array, bytes, or version, found string
#array("hello")
--- spread-into-array ---
// Test spreading into array and dictionary.
#{
let l = (1, 2, 3)
let r = (5, 6, 7)
test((..l, 4, ..r), range(1, 8))
test((..none), ())
}
--- spread-dict-into-array ---
// Error: 9-17 cannot spread dictionary into array
#(1, 2, ..(a: 1))
--- array-len ---
// Test the `len` method.
#test(().len(), 0)
#test(("A", "B", "C").len(), 3)
--- array-at-lvalue ---
// Test lvalue and rvalue access.
#{
let array = (1, 2)
array.at(1) += 5 + array.at(0)
test(array, (1, 8))
}
--- array-first-and-at-lvalue ---
// Test different lvalue method.
#{
let array = (1, 2, 3)
array.first() = 7
array.at(1) *= 8
test(array, (7, 16, 3))
}
--- array-at-out-of-bounds ---
// Test rvalue out of bounds.
// Error: 2-17 array index out of bounds (index: 5, len: 3) and no default value was specified
#(1, 2, 3).at(5)
--- array-at-out-of-bounds-negative ---
// Error: 2-18 array index out of bounds (index: -4, len: 3) and no default value was specified
#(1, 2, 3).at(-4)
--- array-at-out-of-bounds-lvalue ---
// Test lvalue out of bounds.
#{
let array = (1, 2, 3)
// Error: 3-14 array index out of bounds (index: 3, len: 3)
array.at(3) = 5
}
--- array-at-with-default ---
// Test default value.
#test((1, 2, 3).at(2, default: 5), 3)
#test((1, 2, 3).at(3, default: 5), 5)
--- array-remove-with-default ---
// Test remove with default value.
#{
let array = (1, 2, 3)
test(array.remove(2, default: 5), 3)
}
#{
let array = (1, 2, 3)
test(array.remove(3, default: 5), 5)
}
--- array-range ---
// Test the `range` function.
#test(range(4), (0, 1, 2, 3))
#test(range(1, 4), (1, 2, 3))
#test(range(-4, 2), (-4, -3, -2, -1, 0, 1))
#test(range(10, 5), ())
#test(range(10, step: 3), (0, 3, 6, 9))
#test(range(1, 4, step: 1), (1, 2, 3))
#test(range(1, 8, step: 2), (1, 3, 5, 7))
#test(range(5, 2, step: -1), (5, 4, 3))
#test(range(10, 0, step: -3), (10, 7, 4, 1))
--- array-range-end-missing ---
// Error: 2-9 missing argument: end
#range()
--- array-range-float-invalid ---
// Error: 11-14 expected integer, found float
#range(1, 2.0)
--- array-range-bad-step-type ---
// Error: 17-22 expected integer, found string
#range(4, step: "one")
--- array-range-step-zero ---
// Error: 18-19 number must not be zero
#range(10, step: 0)
--- array-bad-method-lvalue ---
// Test bad lvalue.
// Error: 2:3-2:14 cannot mutate a temporary value
#let array = (1, 2, 3)
#(array.len() = 4)
--- array-unknown-method-lvalue ---
// Test bad lvalue.
// Error: 2:9-2:13 type array has no method `yolo`
#let array = (1, 2, 3)
#(array.yolo() = 4)
--- array-negative-indices ---
// Test negative indices.
#{
let array = (1, 2, 3, 4)
test(array.at(0), 1)
test(array.at(-1), 4)
test(array.at(-2), 3)
test(array.at(-3), 2)
test(array.at(-4), 1)
}
--- array-first-and-last ---
// The `first` and `last` methods.
#test((1,).first(), 1)
#test((2,).last(), 2)
#test((1, 2, 3).first(), 1)
#test((1, 2, 3).last(), 3)
--- array-first-empty ---
// Error: 2-12 array is empty
#().first()
--- array-last-empty ---
// Error: 2-11 array is empty
#().last()
--- array-push-and-pop ---
// Test the `push` and `pop` methods.
#{
let tasks = (a: (1, 2, 3), b: (4, 5, 6))
test(tasks.at("a").pop(), 3)
tasks.b.push(7)
test(tasks.a, (1, 2))
test(tasks.at("b"), (4, 5, 6, 7))
}
--- array-insert-and-remove ---
// Test the `insert` and `remove` methods.
#{
let array = (0, 1, 2, 4, 5)
array.insert(3, 3)
test(array, range(6))
array.remove(1)
test(array, (0, 2, 3, 4, 5))
}
--- array-insert-missing-index ---
// Error: 2:2-2:18 missing argument: index
#let numbers = ()
#numbers.insert()
--- array-slice ---
// Test the `slice` method.
#test((1, 2, 3, 4).slice(2), (3, 4))
#test(range(10).slice(2, 6), (2, 3, 4, 5))
#test(range(10).slice(4, count: 3), (4, 5, 6))
#test(range(10).slice(-5, count: 2), (5, 6))
#test((1, 2, 3).slice(2, -2), ())
#test((1, 2, 3).slice(-2, 2), (2,))
#test((1, 2, 3).slice(-3, 2), (1, 2))
#test("ABCD".split("").slice(1, -1).join("-"), "A-B-C-D")
--- array-slice-out-of-bounds ---
// Error: 2-30 array index out of bounds (index: 12, len: 10)
#range(10).slice(9, count: 3)
--- array-slice-out-of-bounds-negative ---
// Error: 2-24 array index out of bounds (index: -4, len: 3)
#(1, 2, 3).slice(0, -4)
--- array-position ---
// Test the `position` method.
#test(("Hi", "❤️", "Love").position(s => s == "❤️"), 1)
#test(("Bye", "💘", "Apart").position(s => s == "❤️"), none)
#test(("A", "B", "CDEF", "G").position(v => v.len() > 2), 2)
--- array-filter ---
// Test the `filter` method.
#test(().filter(calc.even), ())
#test((1, 2, 3, 4).filter(calc.even), (2, 4))
#test((7, 3, 2, 5, 1).filter(x => x < 5), (3, 2, 1))
--- array-map ---
// Test the `map` method.
#test(().map(x => x * 2), ())
#test((2, 3).map(x => x * 2), (4, 6))
--- array-fold ---
// Test the `fold` method.
#test(().fold("hi", grid), "hi")
#test((1, 2, 3, 4).fold(0, (s, x) => s + x), 10)
--- array-fold-closure-without-params ---
// Error: 20-22 unexpected argument
#(1, 2, 3).fold(0, () => none)
--- array-sum ---
// Test the `sum` method.
#test(().sum(default: 0), 0)
#test(().sum(default: []), [])
#test((1, 2, 3).sum(), 6)
--- array-sum-empty ---
// Error: 2-10 cannot calculate sum of empty array with no default
#().sum()
--- array-product ---
// Test the `product` method.
#test(().product(default: 0), 0)
#test(().product(default: []), [])
#test(([ab], 3).product(), [ab]*3)
#test((1, 2, 3).product(), 6)
--- array-product-empty ---
// Error: 2-14 cannot calculate product of empty array with no default
#().product()
--- array-rev ---
// Test the `rev` method.
#test(range(3).rev(), (2, 1, 0))
--- array-join ---
// Test the `join` method.
#test(().join(), none)
#test((1,).join(), 1)
#test(("a", "b", "c").join(), "abc")
#test("(" + ("a", "b", "c").join(", ") + ")", "(a, b, c)")
--- array-join-bad-values ---
// Error: 2-22 cannot join boolean with boolean
#(true, false).join()
--- array-join-bad-separator ---
// Error: 2-20 cannot join string with integer
#("a", "b").join(1)
--- array-join-content ---
// Test joining content.
#([One], [Two], [Three]).join([, ], last: [ and ]).
--- array-intersperse ---
// Test the `intersperse` method
#test(().intersperse("a"), ())
#test((1,).intersperse("a"), (1,))
#test((1, 2).intersperse("a"), (1, "a", 2))
#test((1, 2, "b").intersperse("a"), (1, "a", 2, "a", "b"))
--- array-chunks ---
// Test the `chunks` method.
#test(().chunks(10), ())
#test((1, 2, 3).chunks(10), ((1, 2, 3),))
#test((1, 2, 3, 4, 5, 6).chunks(3), ((1, 2, 3), (4, 5, 6)))
#test((1, 2, 3, 4, 5, 6, 7, 8).chunks(3), ((1, 2, 3), (4, 5, 6), (7, 8)))
#test(().chunks(10, exact: true), ())
#test((1, 2, 3).chunks(10, exact: true), ())
#test((1, 2, 3, 4, 5, 6).chunks(3, exact: true), ((1, 2, 3), (4, 5, 6)))
#test((1, 2, 3, 4, 5, 6, 7, 8).chunks(3, exact: true), ((1, 2, 3), (4, 5, 6)))
--- array-chunks-size-zero ---
// Error: 19-20 number must be positive
#(1, 2, 3).chunks(0)
--- array-chunks-size-negative ---
// Error: 19-21 number must be positive
#(1, 2, 3).chunks(-5)
--- array-windows ---
// Test the `windows` method.
#test(().windows(5), ())
#test((1, 2, 3).windows(5), ())
#test((1, 2, 3, 4, 5).windows(3), ((1, 2, 3), (2, 3, 4), (3, 4, 5)))
#test((1, 2, 3, 4, 5, 6, 7, 8).windows(5), ((1, 2, 3, 4, 5), (2, 3, 4, 5, 6), (3, 4, 5, 6, 7), (4, 5, 6, 7, 8)))
--- array-windows-size-zero ---
// Error: 20-21 number must be positive
#(1, 2, 3).windows(0)
--- array-windows-size-negative ---
// Error: 20-22 number must be positive
#(1, 2, 3).windows(-5)
--- array-sorted ---
// Test the `sorted` method.
#test(().sorted(), ())
#test(().sorted(key: x => x), ())
#test(((true, false) * 10).sorted(), (false,) * 10 + (true,) * 10)
#test(("it", "the", "hi", "text").sorted(), ("hi", "it", "text", "the"))
#test(("I", "the", "hi", "text").sorted(key: x => x), ("I", "hi", "text", "the"))
#test(("I", "the", "hi", "text").sorted(key: x => x.len()), ("I", "hi", "the", "text"))
#test((2, 1, 3, 10, 5, 8, 6, -7, 2).sorted(), (-7, 1, 2, 2, 3, 5, 6, 8, 10))
#test((2, 1, 3, -10, -5, 8, 6, -7, 2).sorted(key: x => x), (-10, -7, -5, 1, 2, 2, 3, 6, 8))
#test((2, 1, 3, -10, -5, 8, 6, -7, 2).sorted(key: x => x * x), (1, 2, 2, 3, -5, 6, -7, 8, -10))
--- array-sorted-key-function-positional-1 ---
// Error: 12-18 unexpected argument
#().sorted(x => x)
--- array-zip ---
// Test the `zip` method.
#test(().zip(()), ())
#test((1,).zip(()), ())
#test((1,).zip((2,)), ((1, 2),))
#test((1, 2).zip((3, 4)), ((1, 3), (2, 4)))
#test((1, 2).zip((3, 4), exact: true), ((1, 3), (2, 4)))
#test((1, 2, 3, 4).zip((5, 6)), ((1, 5), (2, 6)))
#test(((1, 2), 3).zip((4, 5)), (((1, 2), 4), (3, 5)))
#test((1, "hi").zip((true, false)), ((1, true), ("hi", false)))
#test((1, 2, 3).zip((3, 4, 5), (6, 7, 8)), ((1, 3, 6), (2, 4, 7), (3, 5, 8)))
#test(().zip((), ()), ())
#test((1,).zip((2,), (3,)), ((1, 2, 3),))
#test((1, 2, 3).zip(), ((1,), (2,), (3,)))
#test(array.zip(()), ())
--- array-zip-exact-error ---
// Error: 13-22 second array has different length (3) from first array (2)
#(1, 2).zip((1, 2, 3), exact: true)
--- array-zip-exact-multi-error ---
// Error: 13-22 array has different length (3) from first array (2)
// Error: 24-36 array has different length (4) from first array (2)
#(1, 2).zip((1, 2, 3), (1, 2, 3, 4), exact: true)
--- array-enumerate ---
// Test the `enumerate` method.
#test(().enumerate(), ())
#test(().enumerate(start: 5), ())
#test(("a", "b", "c").enumerate(), ((0, "a"), (1, "b"), (2, "c")))
#test(("a", "b", "c").enumerate(start: 1), ((1, "a"), (2, "b"), (3, "c")))
#test(("a", "b", "c").enumerate(start: 42), ((42, "a"), (43, "b"), (44, "c")))
#test(("a", "b", "c").enumerate(start: -7), ((-7, "a"), (-6, "b"), (-5, "c")))
--- array-dedup ---
// Test the `dedup` method.
#test(().dedup(), ())
#test((1,).dedup(), (1,))
#test((1, 1).dedup(), (1,))
#test((1, 2, 1).dedup(), (1, 2))
#test(("Jane", "John", "Eric").dedup(), ("Jane", "John", "Eric"))
#test(("Jane", "John", "Eric", "John").dedup(), ("Jane", "John", "Eric"))
--- array-dedup-key ---
// Test the `dedup` method with the `key` argument.
#test((1, 2, 3, 4, 5, 6).dedup(key: x => calc.rem(x, 2)), (1, 2))
#test((1, 2, 3, 4, 5, 6).dedup(key: x => calc.rem(x, 3)), (1, 2, 3))
#test(("Hello", "World", "Hi", "There").dedup(key: x => x.len()), ("Hello", "Hi"))
#test(("Hello", "World", "Hi", "There").dedup(key: x => x.at(0)), ("Hello", "World", "There"))
--- array-to-dict ---
// Test the `to-dict` method.
#test(().to-dict(), (:))
#test((("a", 1), ("b", 2), ("c", 3)).to-dict(), (a: 1, b: 2, c: 3))
#test((("a", 1), ("b", 2), ("c", 3), ("b", 4)).to-dict(), (a: 1, b: 4, c: 3))
--- array-to-dict-bad-item-type ---
// Error: 2-16 expected (str, any) pairs, found integer
#(1,).to-dict()
--- array-to-dict-bad-pair-length-1 ---
// Error: 2-19 expected pairs of length 2, found length 1
#((1,),).to-dict()
--- array-to-dict-bad-pair-length-3 ---
// Error: 2-26 expected pairs of length 2, found length 3
#(("key",1,2),).to-dict()
--- array-to-dict-bad-key-type ---
// Error: 2-21 expected key of type str, found integer
#((1, 2),).to-dict()
--- array-zip-positional-and-named-argument ---
// Error: 13-30 unexpected argument: val
#().zip((), val: "applicable")
--- array-sorted-bad-key ---
// Error: 32-37 cannot divide by zero
#(1, 2, 0, 3).sorted(key: x => 5 / x)
--- array-sorted-uncomparable ---
// Error: 2-26 cannot compare content and content
#([Hi], [There]).sorted()
--- array-sorted-uncomparable-lengths ---
// Error: 2-26 cannot compare 3em with 2pt
#(1pt, 2pt, 3em).sorted()
--- array-sorted-key-function-positional-2 ---
// Error: 42-52 unexpected argument
#((k: "a", v: 2), (k: "b", v: 1)).sorted(it => it.v)
--- issue-3014-mix-array-dictionary ---
// Error: 8-17 expected expression, found named pair
#(box, fill: red)
--- issue-3154-array-first-empty ---
#{
let array = ()
// Error: 3-16 array is empty
array.first()
}
--- issue-3154-array-first-mutable-empty ---
#{
let array = ()
// Error: 3-16 array is empty
array.first() = 9
}
--- issue-3154-array-last-empty ---
#{
let array = ()
// Error: 3-15 array is empty
array.last()
}
--- issue-3154-array-last-mutable-empty ---
#{
let array = ()
// Error: 3-15 array is empty
array.last() = 9
}
--- issue-3154-array-at-out-of-bounds ---
#{
let array = (1,)
// Error: 3-14 array index out of bounds (index: 1, len: 1) and no default value was specified
array.at(1)
}
--- issue-3154-array-at-out-of-bounds-default ---
#{
let array = (1,)
test(array.at(1, default: 0), 0)
}
--- issue-3154-array-at-out-of-bounds-mutable ---
#{
let array = (1,)
// Error: 3-14 array index out of bounds (index: 1, len: 1)
array.at(1) = 9
}
--- issue-3154-array-at-out-of-bounds-mutable-default ---
#{
let array = (1,)
// Error: 3-26 array index out of bounds (index: 1, len: 1)
array.at(1, default: 0) = 9
}
--- array-unopened ---
// Error: 2-3 unclosed delimiter
#{)}
--- array-unclosed ---
// Error: 3-4 unclosed delimiter
#{(}
--- array-reduce ---
// Test the `reduce` method.
#test(().reduce(grid), none)
#test((1, 2, 3, 4).reduce((s, x) => s + x), 10)
--- array-reduce-missing-reducer ---
// Error: 2-13 missing argument: reducer
#().reduce()
--- array-reduce-unexpected-argument ---
// Error: 19-21 unexpected argument
#(1, 2, 3).reduce(() => none)
|
|
https://github.com/denizenging/site | https://raw.githubusercontent.com/denizenging/site/master/post/2024/0617/index.tr.typ | typst | #import "@local/pub-post:0.0.0": *
#show: template(
time: (17, 36),
title: "Yazmanın Değeri",
description: "İlk yazımda sizlere yazmaya ve paylaşmaya olan arzumdan söz edeceğim",
categories: ("Engin Düşünceler"),
)
Son birkaç yıldır --- özellikle şu zamanlar --- yazmayı çok istedim.
Yazmadan ziyada aslında duygu ve düşüncelerimi paylaşmak istedim.
Mütevazı hayatımdan alıntılar, keşiflerim ve öğrendiklerim... Ne yazık
ki bu hayalimi gerçekleştirememiştim, ta ki bu güne kadar!
Kabul edelim ki hepimiz unutuyoruz, özellikle de karnımız zilleri
çalmaya başladıysa. Glukoz ile çalışan, bir hayli basit primatlar olarak
bu oldukça doğal bizim için. Ünlü yazar #en[<NAME>]'un
#en[_Three Men in A Boat_] kitabında da dediği gibi:
#quote(block: true, attribution: "<NAME>")[
It is very strange, this domination of our intellect by our
digestive organs. We cannot work, we cannot think, unless our
stomach wills so. It dictates to us our emotions, our passions.
]
|
|
https://github.com/gongke6642/tuling | https://raw.githubusercontent.com/gongke6642/tuling/main/布局/对准/对准.typ | typst | = 对准
沿轴对齐某物的位置。
== 可能的值为:
- start:在文本方向的起点对齐。
- end:在文本方向的末尾对齐。
- left: 向左对齐。
- center:水平居中对齐。
- right: 右对齐。
- top: 顶部对齐。
- horizon:中间垂直对齐。
- bottom:底部对齐。
这些值在全局范围内可用,并且在对齐类型的范围内可用,因此您可以编写以下两个值之一:
#image("屏幕截图 2024-04-14 151641.png")
== 2D对齐
要同时沿两个轴对齐,请使用 +运算符添加两个对齐方式。例如,top + right将内容与右上角对齐。
#image("屏幕截图 2024-04-14 151749.png")
== 领域
x和字段y分别保存对齐的水平和垂直分量(作为另一个alignment)。他们可能是none。
#image("屏幕截图 2024-04-14 151839.png")
== 定义
该对齐方式所属的轴。
#image("屏幕截图 2024-04-14 152539.png") |
|
https://github.com/Kasci/LiturgicalBooks | https://raw.githubusercontent.com/Kasci/LiturgicalBooks/master/CU/minea/1_generated/02_oktober/01.typ | typst | #import "../../../all.typ": *
== (01.) #translation.at("M_POKROV")
#columns(2, gutter: 2pt, [
// ---------------------------------------
// MALA VECIEREN
// ---------------------------------------
=== #translation.at("M")
==== #translation.at("HOSPODI_VOZVACH")
#generateTable((
// HV Stich na 4
sText("4:"),
gText("Ѿ стра́жи ᲂу҆́треннїѧ до но́щи, ѿ стра́жи ᲂу҆́треннїѧ, да ᲂу҆пова́етъ і҆и҃ль на гдⷭ҇а."),
"",
jObj4("",1, "", "На вы́шнихъ прⷭ҇то́лѣхъ почива́ѧй є҆ди́нъ бг҃ъ, превы́ше земны́хъ по́далъ є҆́сть ви́дѣти своемꙋ̀ ᲂу҆го́дникꙋ, пребжⷭ҇твеннѣи рꙋ́цѣ моле́бнѡ простира́ющꙋ, ѡ҆ гра́дѣ и҆ ѡ҆ лю́дехъ молѧ́щꙋюсѧ тебѐ влⷣчце: покры́й и҆ ны́нѣ гпⷭ҇жѐ на́съ, ѡ҆мофо́ромъ ми́лости твоеѧ̀."),
// HV Stich na 3
sText("3:"),
gText("Ꙗ҆́кѡ ᲂу҆ гдⷭ҇а млⷭ҇ть, и҆ мно́гое ᲂу҆ негѡ̀ и҆збавле́нїе, и҆ то́й и҆зба́витъ і҆и҃лѧ ѿ всѣ́хъ беззако́нїй є҆гѡ̀."),
"",
jObj4("",none, "", "На вы́шнихъ прⷭ҇то́лѣхъ ..."),
// HV Stich na 2
sText("2:"),
gText("Хвали́те гдⷭ҇а всѝ ꙗ҆зы́цы, похвали́те є҆го̀ всѝ лю́дїе."),
"",
jObj4("",none, "", "Гра́дъ почита́ющїй тѧ̀, и҆ по до́лгꙋ сла́вѧщїй, покрыва́еши пречⷭ҇таѧ, твои́мъ ѡ҆мофо́ромъ честны́мъ ѿ нахожде́нїѧ проти́вныхъ, ѿ гла́да же и҆ трꙋ́са и҆ междоꙋсо́бныѧ бра́ни, дв҃о, неискꙋсобра́чнаѧ бг҃оневѣ́сто, человѣ́кѡвъ покро́въ и҆ застꙋпле́нїе."),
// HV Stich na 1
sText("1:"),
gText("Ꙗ҆́кѡ ᲂу҆тверди́сѧ млⷭ҇ть є҆гѡ̀ на на́съ, и҆ и҆́стина гдⷭ҇нѧ пребыва́етъ во вѣ́къ."),
"",
jObj4("",none, "", "Пала́та ѡ҆дꙋшевле́ннаѧ хрⷭ҇то́ва, вѣ̑рнымъ же стѣна̀ и҆ покро́въ и҆ держа́ва: тобо́ю гра́дъ тво́й бг҃омт҃и, спаса́етсѧ: твое́ю по́мощїю правовѣ́рнїи ца́рїе вѣнча́ютсѧ: тобо́ю хва́лѧщїисѧ при́снѡ, побѣ̑ды на пога̑ныѧ но́сѧтъ: покрыва́еши бо ѡ҆мофо́ромъ ми́лости твоеѧ̀."),
// S:I:
col2(gText(translation.at("SI"))),
"",
jObj4("",6, "", "Прїиди́те, празднолю́бныхъ собо́ри, прест҃ꙋ́ю дв҃ꙋ воспои́мъ: дне́сь бо а҆́гг҃ли со свѧты́ми пра́зднꙋютъ честны́й покро́въ бж҃їѧ мт҃ре, и҆ къ похвале́нїю на́съ земны́хъ призыва́ютъ вопи́ти непреста́ннымъ гла́сомъ: ра́дꙋйсѧ, цр҃кве чꙋ́дное ᲂу҆краше́нїе, и҆ всемꙋ̀ мі́рꙋ преди́вный покро́въ. ра́дꙋйсѧ, вѣ̑рнымъ посо́бїе на бра́нехъ, и҆ во́инѡмъ соблюде́нїе. ра́дꙋйсѧ, покро́ве и҆ застꙋпле́нїе на́ше: тебе́ бо хрⷭ҇тїа́нскїй ро́дъ непреста́ннѡ блажи́тъ."),
))
==== #translation.at("STICHOVNI")
#generateTable((
// S Stich na 1
sText("1:"),
jObj4("",2, "До́ме є҆ѵфра́ѳовъ:", "Сла́ва на нб҃сѣ́хъ, ꙳ и҆ на землѝ весе́лїе, ꙳ ꙗ҆́кѡ цр҃и́ца ꙳ со всѣ́ми ст҃ы́ми, ꙳ за всѧ̑ ны̀ въ цр҃кви мо́литсѧ."),
"",
gText(jObj4("",none, "", "Слы́ши дщѝ и҆ ви́ждь, приклонѝ ᲂу҆́хо твоѐ.")),
// S Stich na 2
sText("2:"),
jObj4("",none, "", "Прїиди́те вѣ́рнїи ꙳ во ст҃ꙋ́ю це́рковь, ꙳ та́мѡ бо бцⷣа ꙳ ѡ҆мофо́ромъ покрыва́етъ лю́ди ꙳ ѿ всѣ́хъ напа́стей вра́жїихъ."),
"",
gText(jObj4("",none, "", "Лицꙋ̀ твоемꙋ̀ помо́лѧтсѧ бога́тїи лю́дстїи.")),
// S Stich na 3
sText("3:"),
jObj4("",none, "", "Пречⷭ҇таѧ бцⷣе, ꙳ не забꙋ́ди на́съ ꙳ твои́хъ рабѡ́въ: ꙳ но за ны̀ моли́сѧ, ꙳ вѣ́рою пою́щыѧ тѧ̀."),
// S:I:
col2(gText(translation.at("SI"))),
"",
jObj4("",3, "", "Прїиди́те всѝ празднолю́бцы, честны́й покро́въ бж҃їѧ мт҃ре ᲂу҆блажи́мъ: къ сн҃ꙋ бо рꙋ́цѣ ми́лостивнѡ прострѐ, и҆ ст҃ы́мъ покро́вомъ є҆ѧ̀ мі́ръ ѡ҆гражде́нъ бы́сть. тѣ́мже ᲂу҆сты̀ и҆ се́рдцемъ, и҆ пѣ́сньми и҆ пѣ̑нїи дх҃о́вными, со всѣ́ми притека́ющими пра́зднꙋимъ свѣ́тлѡ."),
))
==== #translation.at("TROPAR")
#generateTable((
// Tropar 1
sText($#sym.dot$),
jObj4("",4, "", "Дне́сь благовѣ́рнїи лю́дїе свѣ́тлѡ пра́зднꙋемъ, ѡ҆сѣнѧ́еми твои́мъ, бг҃ома́ти, прише́ствїемъ, и҆ къ твоемꙋ̀ взира́юще пречⷭ҇томꙋ ѻ҆́бразꙋ, ᲂу҆ми́льнѡ глаго́лемъ: покры́й на́съ честны́мъ твои́мъ покро́вомъ, и҆ и҆зба́ви на́съ ѿ всѧ́кагѡ ѕла̀, молѧ́щи сн҃а твоего̀ хрⷭ҇та̀ бг҃а на́шего, спастѝ дꙋ́шы на́шѧ."),
))
#colbreak()
// ---------------------------------------
// VECIEREN V NEDELU VECER
// ---------------------------------------
=== #translation.at("V")
==== #translation.at("HOSPODI_VOZVACH")
#generateTable((
// HV Stich na 8
sText("8:"),
gText("И҆з̾ глꙋбины̀ воззва́хъ къ тебѣ̀ гдⷭ҇и, гдⷭ҇и, ᲂу҆слы́ши гла́съ мо́й."),
"",
jObj4("",1, "Ѽ ди́вное чꙋ́до:", "Ѽ вели́кое застꙋпле́нїе печа̑льнымъ є҆сѝ бцⷣе чⷭ҇таѧ, ско́раѧ помо́щнице, спасе́нїе мі́ра и҆ ᲂу҆твержде́нїе: ми́лости пꙋчи́на, бж҃їѧ мꙋ́дрости и҆сто́чниче, мі́рꙋ покро́въ, воспое́мъ вѣ́рнїи, свѣ́тлый ѡ҆мофо́ръ є҆ѧ̀ хва́лѧще неизрече́ннѡ: ѡ҆бра́дованнаѧ ра́дꙋйсѧ, съ тобо́ю гдⷭ҇ь, подаѧ́й мі́рꙋ тобо́ю ве́лїю ми́лость."),
// HV Stich na 7
sText("7:"),
gText("Да бꙋ́дꙋтъ ᲂу҆́ши твоѝ, вне́млющѣ гла́сꙋ моле́нїѧ моегѡ̀."),
"",
jObj4("",none, "", "Ѽ вели́кое застꙋпле́нїе ..."),
// HV Stich na 6
sText("6:"),
gText("А҆́ще беззакѡ́нїѧ на́зриши гдⷭ҇и, гдⷭ҇и, кто̀ постои́тъ; ꙗ҆́кѡ ᲂу҆ тебѐ ѡ҆чище́нїе є҆́сть."),
"",
jObj4("",none, "", "Ди́вный проречѐ и҆са́їа: бꙋ́детъ бо, речѐ, въ послѣ̑днѧѧ дни̑ ꙗ҆́вѣ гора̀ гдⷭ҇нѧ, и҆ до́мъ гдⷭ҇ень верхꙋ̀ горы̀, ᲂу҆ви́дѣхомъ же влⷣчце, ѡ҆ тебѣ̀ и҆́стинное сбытїѐ, го́ры бо и҆ хо́лми ᲂу҆краси́шасѧ многоимени́тыми цр҃квами твои́хъ пра́здникѡвъ. тѣ́мъ ве́селѡ вопїе́мъ тѝ: ѡ҆бра́дованнаѧ ра́дꙋйсѧ, съ тобо́ю гдⷭ҇ь, подаѧ́й мі́рꙋ тобо́ю ве́лїю ми́лость."),
// HV Stich na 5
sText("5:"),
gText("И҆́мене ра́ди твоегѡ̀ потерпѣ́хъ тѧ̀ гдⷭ҇и, потерпѣ̀ дꙋша̀ моѧ̀ въ сло́во твоѐ, ᲂу҆пова̀ дꙋша̀ моѧ̀ на гдⷭ҇а."),
"",
jObj4("",none, "", "Ѽ чꙋ́дное ᲂу҆краше́нїе всѣ̑мъ вѣ̑рнымъ є҆сѝ: прⷪ҇ро́ческое и҆́стинное сбытїѐ, а҆пⷭ҇лѡмъ сла́ва, и҆ мꙋ́ченикѡмъ ᲂу҆добре́нїе: дѣ́вствꙋ похвала̀, и҆ всемꙋ̀ мі́рꙋ преди́вный покро́въ: покры́й влⷣчце, ѡ҆мофо́ромъ ми́лости твоеѧ̀ всѧ̑ лю́ди вопїю́щыѧ: ѡ҆бра́дованнаѧ ра́дꙋйсѧ, съ тобо́ю гдⷭ҇ь, подаѧ́й мі́рꙋ тобо́ю ве́лїю ми́лость."),
// HV Stich na 4
sText("4:"),
gText("Ѿ стра́жи ᲂу҆́треннїѧ до но́щи, ѿ стра́жи ᲂу҆́треннїѧ, да ᲂу҆пова́етъ і҆и҃ль на гдⷭ҇а."),
"",
jObj4("",4, "Ꙗ҆́кѡ до́блѧ:", "Ꙗ҆́кѡ бг҃онасажде́нный ра́й, ꙳ и҆ мѣ́сто живо́тнагѡ дре́ва, ꙳ напаѧ́емо дх҃омъ ст҃ы́мъ, ꙳ тебѐ бцⷣе свѣ́дꙋще, ꙳ творца̀ всѣ́хъ ро́ждшꙋю, ꙳ живо́тнымъ хлѣ́бомъ вѣ̑рныѧ пита́юща. ꙳ є҆мꙋ́же ѡ҆ на́съ съ прⷣте́чею моли́сѧ, ꙳ честны́мъ ѡ҆мофо́ромъ покрыва́ющи лю́ди ꙳ ѿ всѧ́кїѧ напа́сти."),
// HV Stich na 3
sText("3:"),
gText("Ꙗ҆́кѡ ᲂу҆ гдⷭ҇а млⷭ҇ть, и҆ мно́гое ᲂу҆ негѡ̀ и҆збавле́нїе, и҆ то́й и҆зба́витъ і҆и҃лѧ ѿ всѣ́хъ беззако́нїй є҆гѡ̀."),
"",
jObj4("",none, "", "Ꙗ҆́кѡ бг҃онасажде́нный ра́й, ..."),
// HV Stich na 2
sText("2:"),
gText("Хвали́те гдⷭ҇а всѝ ꙗ҆зы́цы, похвали́те є҆го̀ всѝ лю́дїе."),
"",
jObj4("",none, "", "Ѡ҆свѧща́етсѧ нб҃о и҆ землѧ̀, ꙳ це́рковь же свѣ́титсѧ, ꙳ и҆ лю́дїе всѝ веселѧ́тсѧ: ꙳ се́ бо мт҃и бж҃їѧ, ꙳ со а҆́гг҃льскими вѡ́инствы, съ прⷣте́чею и҆ бг҃осло́вомъ, ꙳ съ прⷪ҇рѡ́ки же и҆ а҆пⷭ҇лы, ꙳ неви́димѡ вше́дши, за хрⷭ҇тїа́ны мо́литсѧ хрⷭ҇тꙋ̀, ꙳ поми́ловати гра́дъ и҆ лю́ди, ꙳ сла́вѧщыѧ тоѧ̀ покро́ва пра́здникъ."),
// HV Stich na 1
sText("1:"),
gText("Ꙗ҆́кѡ ᲂу҆тверди́сѧ млⷭ҇ть є҆гѡ̀ на на́съ, и҆ и҆́стина гдⷭ҇нѧ пребыва́етъ во вѣ́къ."),
"",
jObj4("",none, "", "Добро́тꙋ тѧ̀ і҆а́кѡвлю, ꙳ и҆ лѣ́ствицꙋ нбⷭ҇нꙋю, ꙳ по не́йже гдⷭ҇ь сни́де на зе́млю: ꙳ тѣ́хъ же тогда̀ ѻ҆́брази ꙳ твою̀ че́сть и҆ сла́вꙋ бцⷣе, проѧвлѧ́хꙋ. ꙳ тѧ̀ ро́ждшꙋю бг҃а всѣ́хъ блажа́тъ нбⷭ҇нїи а҆́гг҃ли, ꙳ и҆ земні́и человѣ́цы, ꙳ ꙗ҆́кѡ за ве́сь мі́ръ мо́лишисѧ, ꙳ покрыва́ющи ми́лостїю, ꙳ тво́й честны́й соверша́ющихъ пра́здникъ."),
// S:I:
col2(gText(translation.at("SI"))),
"",
jObj4("",8, "", "Сра́дꙋютсѧ съ на́ми ᲂу҆́мнаѧ всѧ̑ чинонача̑лїѧ, дꙋхо́внѡ совокꙋ́пльше чꙋ́вственный ли́къ, ви́дѧще цр҃и́цꙋ и҆ влⷣчцꙋ всѣ́хъ, многоиме́ннѣ вѣ́рными сла́вимꙋ: ра́дꙋютсѧ же и҆ дꙋ́си првⷣныхъ, сꙋ́щїи зри́телїе видѣ́нїѧ, на воздꙋ́сѣ моле́бнѡ простира́ющꙋ всечестнѣ́и своѝ рꙋ́цѣ, просѧ́щꙋ ᲂу҆мире́нїѧ мі́рꙋ и҆ спасе́нїѧ дꙋ́шъ на́шихъ."),
))
==== #translation.at("LITIA")
#generateTable((
// L Stich na 1
sText("1:"),
jObj4("",3, "", "Наста̀ дне́сь пресвѣ́тлый пра́здникъ пречⷭ҇таѧ дв҃о, честна́гѡ твоегѡ̀ покро́ва. па́че со́лнца ѡ҆свѣща́еши лю́ди, вѣ́рою ѿ чи́ста се́рдца тебѐ и҆́стиннꙋю мт҃рь бж҃їю и҆сповѣ́дающыѧ, и҆ сн҃ꙋ твоемꙋ̀ вопїю́щыѧ: хрⷭ҇тѐ бж҃е, моли́твами приснодв҃ы, чи́стѡ и҆ безъ и҆стлѣ́нїѧ ро́ждшїѧ тѧ̀, не преда́ждь врагѡ́мъ ра́тꙋющымъ твоѐ достоѧ́нїе, но ꙗ҆́кѡ ми́лостивъ спасѝ въ ми́рѣ дꙋ́шы на́шѧ."),
// L Stich na 2
sText("2:"),
jObj4("",4, "", "Хвале́нїе тебѣ̀ є҆́же по достоѧ́нїю лю́дїе сн҃а твоегѡ̀ пречⷭ҇таѧ влⷣчце, жела́юще воспѣ́ти, но не возмо́гꙋтъ. всегда́ бо всѣ́хъ вѣ́рныхъ застꙋпа́еши, и҆ покрыва́еши честны́мъ твои́мъ покро́вомъ, и҆ мо́лишисѧ спасти́сѧ всѣ̑мъ, вѣ́рою тебѐ чⷭ҇тꙋю и҆ и҆́стиннꙋю бцⷣꙋ почита́ющымъ."),
// L Stich na 3
sText("3:"),
jObj4("",2, "", "Пренепоро́чнꙋю всѣ́хъ цр҃и́цꙋ пречⷭ҇тꙋю дв҃ꙋ, соше́дшесѧ вѣ́рнїи дне́сь ᲂу҆блажи́мъ мт҃рь хрⷭ҇та̀ бг҃а: при́снѡ бо къ сн҃ꙋ рꙋ́цѣ ми́лостивнѡ простира́етъ. ю҆́же и҆ преди́вный а҆ндре́й ви́дѣ на воздꙋ́сѣ, честны́мъ свои́мъ покро́вомъ лю́ди покрыва́ющꙋю, и҆ то́й ᲂу҆ми́льнѡ возопїи́мъ: ра́дꙋйсѧ, покро́ве и҆ застꙋпле́нїе, и҆ спасе́нїе дꙋша́мъ на́шымъ."),
// S:I:
col2(gText(translation.at("SI"))),
"",
jObj4("",2, "", "Смы́слъ ѡ҆чи́стивше и҆ ᲂу҆́мъ, со а҆́гг҃лы и҆ мы̀ торжествꙋ́емъ, свѣ́тлѡ начина́юще дв҃дскꙋю пѣ́снь, ѻ҆трокови́цѣ, невѣ́стѣ всѣ́хъ цр҃ѧ̀ хрⷭ҇та̀ бг҃а на́шегѡ, воскреснѝ гдⷭ҇и, глаго́люще, въ поко́й тво́й, ты̀, и҆ кївѡ́тъ ст҃ы́ни твоеѧ̀: ꙗ҆́кѡ бо пала́тꙋ краснꙋ̀ сїю̀ ᲂу҆краси́въ, и҆ причтѐ ю҆̀ гра́дꙋ твоемꙋ̀ влⷣко, стро́ити и҆ покрыва́ти ѿ сопоста́тныхъ пога̑нъ держа́вною си́лою твое́ю, моли́твами є҆ѧ̀."),
))
==== #translation.at("STICHOVNI")
#generateTable((
// S Stich na 1
sText("1:"),
jObj4("",2, "", "Ꙗ҆́кѡ всѣ́хъ вы́шши сꙋ́щи нбⷭ҇ныхъ и҆ земны́хъ, херꙋві̑мъ сла́внѣйши, и҆ всеѧ̀ тва́ри честнѣ́йши, тебѐ застꙋ́пницꙋ и҆ покро́въ держа́венъ хрⷭ҇тїа́нскомꙋ ро́дꙋ хрⷭ҇то́съ дарова̀, покрыва́ти и҆ спаса́ти лю́ди согрѣ́шшыѧ, къ тебѣ̀ притека́ющыѧ. тѣ́мже прибѣ́жище всѣ̑мъ, тебѐ воспѣва́емъ влⷣчце, и҆ честны́й пра́здникъ твоегѡ̀ покро́ва пра́зднꙋемъ свѣ́тлѡ, хрⷭ҇та̀ молѧ́ще дарова́ти на́мъ ве́лїю ми́лость."),
"",
gText(jObj4("",none, "", "Слы́ши дщѝ и҆ ви́ждь, приклонѝ ᲂу҆́хо твоѐ.")),
// S Stich na 2
sText("2:"),
jObj4("",5, "", "Воспои́мъ лю́дїе свѣ́тлѡ дв҃дскꙋю пѣ́снь ѻ҆трокови́цѣ невѣ́стѣ, бж҃їи мт҃ри всѣ́хъ цр҃ѧ̀ хрⷭ҇та̀: предста̀ цр҃и́ца ѡ҆деснꙋ́ю тебѐ влⷣко, рѧ́сны златы́ми ѡ҆дѣ́ѧна, и҆ преꙋкра́шена бжⷭ҇твенными лѣ́потами: сїю́ бо и҆збра́ннꙋю въ жена́хъ, па́че всегѡ̀ мі́ра ᲂу҆добри́въ, благоизво́ли роди́тисѧ ѿ неѧ̀, вели́кїѧ ра́ди ми́лости: и҆ сїю̀ помо́щницꙋ лю́демъ свои̑мъ дарова̀, стро́ити, и҆ покрыва́ти ѿ всѧ́кїѧ бѣды̀ рабы̑ своѧ̑, є҆ди́нꙋ благослове́ннꙋю."),
"",
gText(jObj4("",none, "", "Лицꙋ̀ твоемꙋ̀ помо́лѧтсѧ бога́тїи лю́дстїи.")),
// S Stich na 3
sText("3:"),
jObj4("",7, "", "Гора̀ є҆сѝ вели́ка и҆ пресла́вна, па́че горы̀ сїна́йскїѧ бцⷣе: ѻ҆на́ бо не терпѧ́щи сни́тїѧ сла́вы бж҃їѧ во ѻ҆́бразѣхъ и҆ сѣ́нехъ, ѻ҆гне́мъ возгара́шесѧ, и҆ гро́ми и҆ мѡ́лнїѧ та́мѡ бы́ша. ты́ же всего̀ въ себѣ̀ бжⷭ҇твеннаго ѻ҆гнѧ̀ сꙋ́щи, бж҃їѧ сло́ва во чре́вѣ неѡпа́льнѡ носи́ла є҆сѝ, ма́нїемъ носѧ́щаго всѧ̑: и҆ мт҃рне дерзнове́нїе къ немꙋ̀ и҆мꙋ́щи, помозѝ влⷣчце, вѣ́рнѡ пра́зднꙋющымъ тво́й честны́й пра́здникъ, и҆ не забꙋ́ди ми́лостивнѡ посѣща́ющи: прїѧ́ла бо є҆сѝ ѿ бг҃а да́ръ стро́ити, и҆ покрыва́ти хрⷭ҇тоимени́тое твоѐ ста́до, рабы̑ твоѧ̑."),
// S:I:
col2(gText(translation.at("SI"))),
"",
jObj4("",2, "", "Ꙗ҆́кѡ вѣнце́мъ пресвѣ́тлымъ всечⷭ҇таѧ бцⷣе, покро́вомъ твои́мъ честны́мъ, цр҃ковь бж҃їѧ приѡдѣ́ѧсѧ: и҆ свѣ́титсѧ ра́дꙋющисѧ дне́сь, и҆ та́йнѡ ликовствꙋ́етъ влⷣчце, вопїю́щи тебѣ̀: ра́дꙋйсѧ ѡ҆дѣѧ́нїе честно́е, и҆ вѣ́нче бж҃їѧ сла́вы. ра́дꙋйсѧ є҆ди́на сла́вы соверше́нїе, и҆ вѣ́чное весе́лїе: ра́дꙋйсѧ, къ тебѣ̀ притека́ющымъ приста́нище и҆ и҆збавле́нїе, и҆ спасе́нїе на́ше."),
))
==== #translation.at("TROPAR")
#generateTable((
// Tropar 1
sText($#sym.dot$),
jObj4("",4, "", "Дне́сь благовѣ́рнїи лю́дїе свѣ́тлѡ пра́зднꙋемъ, ѡ҆сѣнѧ́еми твои́мъ, бг҃ома́ти, прише́ствїемъ, и҆ къ твоемꙋ̀ взира́юще пречⷭ҇томꙋ ѻ҆́бразꙋ, ᲂу҆ми́льнѡ глаго́лемъ: покры́й на́съ честны́мъ твои́мъ покро́вомъ, и҆ и҆зба́ви на́съ ѿ всѧ́кагѡ ѕла̀, молѧ́щи сн҃а твоего̀ хрⷭ҇та̀ бг҃а на́шего, спастѝ дꙋ́шы на́шѧ."),
))
#colbreak()
])
|
|
https://github.com/EpicEricEE/typst-marge | https://raw.githubusercontent.com/EpicEricEE/typst-marge/main/tests/resolve/margin/test.typ | typst | MIT License | #import "/src/resolve.typ": resolve-margin
#set page(margin: (inside: 3cm, outside: 2cm, rest: 4cm))
#context assert.eq(here().page(), 1)
#context assert.eq(resolve-margin(top), 4cm)
#context assert.eq(resolve-margin("left"), 3cm)
#context assert.eq(resolve-margin(right), 2cm)
#context assert.eq(resolve-margin(start), 3cm)
#context assert.eq(resolve-margin("inside"), 3cm)
~
#set page(margin: (left: 3cm, right: 2cm, y: auto))
#context assert.eq(here().page(), 2)
#context assert.eq(resolve-margin(top), 2.5cm)
#context assert.eq(resolve-margin("left"), 3cm)
#context assert.eq(resolve-margin(right), 2cm)
#context assert.eq(resolve-margin(start), 3cm)
#context assert.eq(resolve-margin("inside"), 2cm)
~
#set page(flipped: true)
#context assert.eq(here().page(), 3)
#context assert.eq(resolve-margin(top), 2.5cm)
#context assert.eq(resolve-margin("left"), 3cm)
#context assert.eq(resolve-margin(right), 2cm)
#context assert.eq(resolve-margin(start), 3cm)
#context assert.eq(resolve-margin("inside"), 3cm)
~
#set page(margin: (x: 2cm))
#context assert.eq(here().page(), 4)
#context assert.eq(resolve-margin(top), 2.5cm)
#context assert.eq(resolve-margin("left"), 2cm)
#context assert.eq(resolve-margin(right), 2cm)
#context assert.eq(resolve-margin(start), 2cm)
#context assert.eq(resolve-margin("inside"), 2cm)
~
#set page(
width: 100pt,
height: auto,
flipped: false,
margin: 1em + 10%,
)
// #context assert.eq(here().page(), 5)
#context assert.eq(resolve-margin(top), 11pt)
#context assert.eq(resolve-margin(left), 21pt)
|
https://github.com/TypstApp-team/typst | https://raw.githubusercontent.com/TypstApp-team/typst/master/docs/guides/guide-for-latex-users.md | markdown | Apache License 2.0 | ---
description: |
Are you a LaTeX user? This guide explains the differences and
similarities between Typst and LaTeX so you can get started quickly.
---
# Guide for LaTeX users { # }
This page is a good starting point if you have used LaTeX before and want to try
out Typst. We will explore the main differences between these two systems from a
user perspective. Although Typst is not built upon LaTeX and has a different
syntax, you will learn how to use your LaTeX skills to get a head start.
Just like LaTeX, Typst is a markup-based typesetting system: You compose your
document in a text file and mark it up with commands and other syntax. Then, you
use a compiler to typeset the source file into a PDF. However, Typst also
differs from LaTeX in several aspects: For one, Typst uses more dedicated syntax
(like you may know from Markdown) for common tasks. Typst's commands are also
more principled: They all work the same, so unlike in LaTeX, you just need to
understand a few general concepts instead of learning different conventions for
each package. Moreover, Typst compiles faster than LaTeX: Compilation usually
takes milliseconds, not seconds, so the web app and the compiler can both
provide instant previews.
In the following, we will cover some of the most common questions a user
switching from LaTeX will have when composing a document in Typst. If you prefer
a step-by-step introduction to Typst, check out our [tutorial]($tutorial).
## How do I create a new, empty document? { #getting-started }
That's easy. You just create a new, empty text file (the file extension is
`.typ`). No boilerplate is needed to get started. Simply start by writing your
text. It will be set on an empty A4-sized page. If you are using the web app,
click "+ Empty document" to create a new project with a file and enter the
editor. [Paragraph breaks]($parbreak) work just as they do in LaTeX, just use a
blank line.
```example
Hey there!
Here are two paragraphs. The
output is shown to the right.
```
## How do I create section headings, emphasis, ...? { #elements }
LaTeX uses the command `\section` to create a section heading. Nested headings
are indicated with `\subsection`, `\subsubsection`, etc. Depending on your
document class, there is also `\part` or `\chapter`.
In Typst, [headings]($heading) are less verbose: You prefix the line with the
heading on it with an equals sign and a space to get a first-order heading:
`[= Introduction]`. If you need a second-order heading, you use two equals
signs: `[== In this paper]`. You can nest headings as deeply as you'd like by
adding more equals signs.
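
For example, this markup produces three nested headings:

```example
= Introduction
== In this paper
=== Contributions
```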
Emphasis (usually rendered as italic text) is expressed by enclosing text in
`[_underscores_]` and strong emphasis (usually rendered in boldface) by using
`[*stars*]` instead.
Here is a list of common markup commands used in LaTeX and their Typst
equivalents. You can also check out the [full syntax cheat sheet]($syntax).
| Element | LaTeX | Typst | See
|:-----------------|:--------------------------|:-----------------------|:--------------------
| Strong emphasis | `\textbf{strong}` | `[*strong*]` | [`strong`]($strong) |
| Emphasis | `\emph{emphasis}` | `[_emphasis_]` | [`emph`]($emph) |
| Monospace / code | `\texttt{print(1)}` | ``[`print(1)`]`` | [`raw`]($raw) |
| Link | `\url{https://typst.app}` | `[https://typst.app/]` | [`link`]($link) |
| Label | `\label{intro}` | `[<intro>]` | [`label`]($label) |
| Reference | `\ref{intro}` | `[@intro]` | [`ref`]($ref) |
| Citation | `\cite{humphrey97}` | `[@humphrey97]` | [`cite`]($cite) |
| Bullet list | `itemize` environment | `[- List]` | [`list`]($list) |
| Numbered list | `enumerate` environment | `[+ List]` | [`enum`]($enum) |
| Term list | `description` environment | `[/ Term: List]` | [`terms`]($terms) |
| Figure | `figure` environment | `figure` function | [`figure`]($figure) |
| Table | `table` environment | `table` function | [`table`]($table) |
| Equation | `$x$`, `align` / `equation` environments | `[$x$]`, `[$ x = y $]` | [`equation`]($math.equation) |
[Lists]($list) do not rely on environments in Typst. Instead, they have
lightweight syntax like headings. To create an unordered list (`itemize`),
prefix each line of an item with a hyphen:
````example
To write this list in Typst...
```latex
\begin{itemize}
\item Fast
\item Flexible
\item Intuitive
\end{itemize}
```
...just type this:
- Fast
- Flexible
- Intuitive
````
Nesting lists works just by using proper indentation. Adding a blank line in
between items results in a more [widely]($list.tight) spaced list.
To get a [numbered list]($enum) (`enumerate`) instead, use a `+` instead of the
hyphen. For a [term list]($terms) (`description`), write `[/ Term: Description]`
instead.
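
For example, both list types in one snippet:

```example
+ First step
+ Second step

/ Alpha: The first letter.
/ Beta: The second letter.
```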
## How do I use a command? { #commands }
LaTeX heavily relies on commands (prefixed by backslashes). It uses these
_macros_ to affect the typesetting process and to insert and manipulate content.
Some commands accept arguments, which are most frequently enclosed in curly
braces: `\cite{rasmus}`.
Typst differentiates between [markup mode and code mode]($scripting/#blocks).
The default is markup mode, where you compose text and apply syntactic
constructs such as `[*stars for bold text*]`. Code mode, on the other hand,
parallels programming languages like Python, providing the option to input and
execute segments of code.
Within Typst's markup, you can switch to code mode for a single command (or
rather, _expression_) using a hash (`#`). This is how you call functions to, for
example, split your project into different [files]($scripting/#modules) or
render text based on some [condition]($scripting/#conditionals). Within code
mode, it is possible to include normal markup [_content_]($content) by using
square brackets. Such content is treated just like any other value, for
instance one stored in a variable.
```example
First, a rectangle:
#rect()
Let me show how to do
#underline([_underlined_ text])
We can also do some maths:
#calc.max(3, 2 * 4)
And finally a little loop:
#for x in range(3) [
Hi #x.
]
```
A function call always involves the name of the function ([`rect`]($rect),
[`underline`]($underline), [`calc.max`]($calc.max), [`range`]($array.range))
followed by parentheses (as opposed to LaTeX where the square brackets and curly
braces are optional if the macro requires no arguments). The expected list of
arguments passed within those parentheses depends on the concrete function and
is specified in the [reference]($reference).
### Arguments
A function can have multiple arguments. Some arguments are positional, i.e., you
just provide the value: The function `[#lower("SCREAM")]` returns its argument
in all-lowercase. Many functions use named arguments instead of positional
arguments to increase legibility. For example, the dimensions and stroke of a
rectangle are defined with named arguments:
```example
#rect(
width: 2cm,
height: 1cm,
stroke: red,
)
```
You specify a named argument by first entering its name (above, it's `width`,
`height`, and `stroke`), then a colon, followed by the value (`2cm`, `1cm`,
`red`). You can find the available named arguments in the [reference
page]($reference) for each function or in the autocomplete panel when typing.
Named arguments are similar to how some LaTeX environments are configured, for
example, you would type `\begin{enumerate}[label={\alph*)}]` to start a list
with the labels `a)`, `b)`, and so on.
Often, you want to provide some [content]($content) to a function. For example,
the LaTeX command `\underline{Alternative A}` would translate to
`[#underline([Alternative A])]` in Typst. The square brackets indicate that a
value is [content]($content). Within these brackets, you can use normal markup.
However, that's a lot of parentheses for a pretty simple construct. This is why
you can also move trailing content arguments after the parentheses (and omit the
parentheses if they would end up empty).
```example
Typst is an #underline[alternative]
to LaTeX.
#rect(fill: aqua)[Get started here!]
```
### Data types
You likely already noticed that the arguments have distinctive data types. Typst
supports many [data types]($type). Below, there is a table with some of the most
important ones and how to write them. In order to specify values of any of these
types, you have to be in code mode!
| Data type | Example |
|:--------------------------------|:----------------------------------|
| [Content]($content) | `{[*fast* typesetting]}` |
| [String]($str) | `{"<NAME>"}` |
| [Integer]($int) | `{23}` |
| [Floating point number]($float) | `{1.459}` |
| [Absolute length]($length) | `{12pt}`, `{5in}`, `{0.3cm}`, ... |
| [Relative length]($ratio) | `{65%}` |
The difference between content and string is that content can contain markup,
including function calls, while a string really is just a plain sequence of
characters.
Typst provides [control flow constructs]($scripting/#conditionals) and
[operators]($scripting/#operators) such as `+` for adding things or `==` for
checking equality between two variables. You can also define your own
[variables]($scripting/#bindings) and perform computations on them.
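
A small, hypothetical sketch of bindings and operators in code mode:

```example
#let width = 3
#let height = 4
The area is #(width * height)
and the perimeter is
#(2 * (width + height)).
```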
### Commands to affect the remaining document { #rules }
In LaTeX, some commands like `\textbf{bold text}` receive an argument in curly
braces and only affect that argument. Other commands such as
`\bfseries bold text` act as switches, altering the appearance of all subsequent
content within the document or current scope.
In Typst, the same function can be used both to affect the appearance for the
remainder of the document, a block (or scope), or just its arguments. For
example, `[#text(weight: "bold")[bold text]]` will only embolden its argument,
while `[#set text(weight: "bold")]` will embolden any text until the end of the
current block or, if there is none, of the document. The effects of a function are
immediately obvious based on whether it is used in a call or a
[set rule.]($styling/#set-rules)
```example
I am starting out with small text.
#set text(14pt)
This is a bit #text(18pt)[larger,]
don't you think?
```
Set rules may appear anywhere in the document. They can be thought of as
default argument values of their respective function:
```example
#set enum(numbering: "I.")
Good results can only be obtained by
+ following best practices
+ being aware of current results
of other researchers
+ checking the data for biases
```
The `+` is syntactic sugar (think of it as an abbreviation) for a call to the
[`{enum}`]($enum) function, to which we apply a set rule above.
[Most syntax is linked to a function in this way.]($syntax) If you need to style
an element beyond what its arguments enable, you can completely redefine its
appearance with a [show rule]($styling/#show-rules) (somewhat comparable to
`\renewcommand`).
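
As a minimal sketch, a show rule can completely restyle every heading (the particular styling here is just for illustration):

```example
#show heading: it => block(smallcaps(it.body))
= Redefined Heading
```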
## How do I load a document class? { #templates }
In LaTeX, you start your main `.tex` file with the `\documentclass{article}`
command to define how your document is supposed to look. In that command, you
may have replaced `article` with another value such as `report` and `amsart` to
select a different look.
When using Typst, you style your documents with [functions]($function).
Typically, you use a template that provides a function that styles your whole
document. First, you import the function from a template file. Then, you apply
it to your whole document. This is accomplished with a
[show rule]($styling/#show-rules) that wraps the following document in a given
function. The following example illustrates how it works:
```example:single
>>> #let conf(
>>> title: none,
>>> authors: (),
>>> abstract: [],
>>> doc,
>>> ) = {
>>> set text(font: "Linux Libertine", 11pt)
>>> set par(justify: true)
>>> set page(
>>> "us-letter",
>>> margin: auto,
>>> header: align(
>>> right + horizon,
>>> title
>>> ),
>>> numbering: "1",
>>> )
>>>
>>> show heading.where(
>>> level: 1
>>> ): it => block(
>>> align(center,
>>> text(
>>> 13pt,
>>> weight: "regular",
>>> smallcaps(it.body),
>>> )
>>> ),
>>> )
>>> show heading.where(
>>> level: 2
>>> ): it => box(
>>> text(
>>> 11pt,
>>> weight: "regular",
>>> style: "italic",
>>> it.body + [.],
>>> )
>>> )
>>>
>>> set align(center)
>>> text(17pt, title)
>>>
>>> let count = calc.min(authors.len(), 3)
>>> grid(
>>> columns: (1fr,) * count,
>>> row-gutter: 24pt,
>>> ..authors.map(author => [
>>> #author.name \
>>> #author.affiliation \
>>> #link("mailto:" + author.email)
>>> ]),
>>> )
>>>
>>> par(justify: false)[
>>> *Abstract* \
>>> #abstract
>>> ]
>>>
>>> set align(left)
>>> columns(2, doc)
>>>}
<<< #import "conf.typ": conf
#show: conf.with(
title: [
Towards Improved Modelling
],
authors: (
(
name: "<NAME>",
affiliation: "Artos Institute",
email: "<EMAIL>",
),
(
name: "<NAME>",
affiliation: "Honduras State",
email: "<EMAIL>",
),
),
abstract: lorem(80),
)
Let's get started writing this
article by putting insightful
paragraphs right here!
```
The [`{import}`]($scripting/#modules) statement makes
[functions]($function) (and other definitions) from another file available.
In this example, it imports the `conf` function from the `conf.typ` file. This
function formats a document as a conference article. We use a show rule to apply
it to the document and also configure some metadata of the article. After
applying the show rule, we can start writing our article right away!
<div class="info-box">
Functions are Typst's "commands" and can transform their arguments to an output
value, including document _content._ Functions are "pure", which means that they
cannot have any effects beyond creating an output value / output content. This
is in stark contrast to LaTeX macros that can have arbitrary effects on your
document.
To let a function style your whole document, the show rule processes everything
that comes after it and calls the function specified after the colon with the
result as an argument. The `.with` part is a _method_ that takes the `conf`
function and pre-configures some of its arguments before passing it on to the
show rule.
</div>
In the web app, you can choose from predefined templates or even
create your own using the template wizard. You can also check out the
[`awesome-typst` repository](https://github.com/qjcg/awesome-typst) to find
templates made by the community. We plan to add support for templates to Typst's
package manager in the future.
You can also [create your own, custom templates.]($tutorial/making-a-template)
They are shorter and more readable than the corresponding LaTeX `.sty` files by
orders of magnitude, so give it a try!
## How do I load packages? { #packages }
Typst is "batteries included," so the equivalent of many popular LaTeX packages
is built right-in. Below, we compiled a table with frequently loaded packages
and their corresponding Typst functions.
| LaTeX Package | Typst Alternative |
|:--------------------------------|:------------------------------------------------------------- |
| graphicx, svg | [`image`]($image) function |
| tabularx | [`table`]($table), [`grid`]($grid) functions |
| fontenc, inputenc, unicode-math | Just start writing! |
| babel, polyglossia | [`text`]($text.lang) function: `[#set text(lang: "zh")]` |
| amsmath | [Math mode]($category/math) |
| amsfonts, amssymb | [`sym`]($category/symbols) module and [syntax]($syntax/#math) |
| geometry, fancyhdr | [`page`]($page) function |
| xcolor | [`text`]($text.fill) function: `[#set text(fill: rgb("#0178A4"))]` |
| hyperref | [`link`]($link) function |
| bibtex, biblatex, natbib | [`cite`]($cite), [`bibliography`]($bibliography) functions |
| lstlisting, minted | [`raw`]($raw) function and syntax |
| parskip | [`block`]($block.spacing) and [`par`]($par.first-line-indent) functions |
| csquotes | Set the [`text`]($text.lang) language and type `["]` or `[']` |
| caption | [`figure`]($figure) function |
| enumitem | [`list`]($list), [`enum`]($enum), [`terms`]($terms) functions |
Although _many_ things are built-in, not everything can be. That's why Typst has
a built-in [package manager]($packages) where the community can share their
creations and automations. Let's take, for instance, the _tablex_ package: This
package allows you to customize your tables in ways the built-in table does not
yet support. To use tablex in your document, you can just write:
```typ
#import "@preview/tablex:0.0.5": tablex, gridx
```
(The `@preview` is a _namespace_ that is used while the package manager is still
in its early and experimental state. It will be replaced in the future.)
Aside from the official package repository, you might also want to check out the
[awesome-typst repository](https://github.com/qjcg/awesome-typst), which
compiles a curated list of resources created for Typst.
If you need to load functions and variables from another file within your
project, for example to use a template, you can use the same
[`{import}`]($scripting/#modules) statement with a file name rather than a
package specification. To instead include the textual content of another file,
you can use an [`{include}`]($scripting/#modules) statement. It will retrieve
the content of the specified file and put it in your document.
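
A minimal sketch of both statements (the file names are hypothetical):

```typ
#import "conf.typ": conf // makes the `conf` function available here
#include "introduction.typ" // inserts that file's content at this spot
```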
## How do I input maths? { #maths }
To enter math mode in Typst, just enclose your equation in dollar signs. You can
enter display mode by adding spaces or newlines between the equation's contents
and its enclosing dollar signs.
```example
The sum of the numbers from
$1$ to $n$ is:
$ sum_(k=1)^n k = (n(n+1))/2 $
```
[Math mode]($category/math) works differently than regular markup or code mode.
Numbers and single characters are displayed verbatim, while multiple consecutive
(non-number) characters will be interpreted as Typst variables.
Typst pre-defines a lot of useful variables in math mode. All Greek (`alpha`,
`beta`, ...) and some Hebrew letters (`alef`, `bet`, ...) are available through
their name. Some symbols are additionally available through shorthands, such as
`<=`, `>=`, and `->`.
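
For instance:

```example
$ alpha <= beta, gamma -> delta $
```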
Refer to the [symbol pages]($reference/symbols) for a full list of the symbols.
If a symbol is missing, you can also access it through a
[Unicode escape sequence]($syntax/#escapes).
Alternate and related forms of symbols can often be selected by
[appending a modifier]($symbol) after a period. For example,
`arrow.l.squiggly` inserts a squiggly left-pointing arrow. If you want to insert
multiletter text in your expression instead, enclose it in double quotes:
```example
$ delta "if" x <= 5 $
```
In Typst, delimiters will scale automatically for their expressions, just as if
`\left` and `\right` commands were implicitly inserted in LaTeX. You can
customize delimiter behavior using the [`lr` function]($math.lr). To
prevent a pair of delimiters from scaling, you can escape them with backslashes.
Typst will automatically set terms around a slash `/` as a fraction while
honoring operator precedence. All round parentheses not made redundant by the
fraction will appear in the output.
```example
$ f(x) = (x + 1) / x $
```
[Sub- and superscripts]($math.attach) work similarly in Typst and LaTeX.
`{$x^2$}` will produce a superscript, `{$x_2$}` yields a subscript. If you want
to include more than one value in a sub- or superscript, enclose their contents
in parentheses: `{$x_(a -> epsilon)$}`.
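
For example:

```example
$ x^2, x_2, x_(a -> epsilon) $
```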
Since variables in math mode do not need to be prepended with a `#` or a `/`,
you can also call functions without these special characters:
```example
$ f(x, y) := cases(
1 "if" (x dot y)/2 <= 0,
2 "if" x "is even",
3 "if" x in NN,
4 "else",
) $
```
The above example uses the [`cases` function]($math.cases) to describe f. Within
the cases function, arguments are delimited using commas and the arguments are
also interpreted as math. If you need to interpret arguments as Typst
values instead, prefix them with a `#`:
```example
$ (a + b)^2
= a^2
+ text(fill: #maroon, 2 a b)
+ b^2 $
```
You can use all Typst functions within math mode and insert any content. If you
want them to work normally, with code mode in the argument list, you can prefix
their call with a `#`. Nobody can stop you from using rectangles or emoji as
your variables anymore:
```example
$ sum^10_(🥸=1)
#rect(width: 4mm, height: 2mm)/🥸
= 🧠 maltese $
```
If you'd like to enter your mathematical symbols directly as Unicode, that is
possible, too!
Math calls can have two-dimensional argument lists using `;` as a delimiter. The
most common use for this is the [`mat` function]($math.mat) that creates
matrices:
```example
$ mat(
1, 2, ..., 10;
2, 2, ..., 10;
dots.v, dots.v, dots.down, dots.v;
10, 10, ..., 10;
) $
```
## How do I get the "LaTeX look?" { #latex-look }
Papers set in LaTeX have an unmistakable look. This is mostly due to their
font, Computer Modern, justification, narrow line spacing, and wide margins.
The example below
- sets wide [margins]($page.margin)
- enables [justification]($par.justify), [tighter lines]($par.leading) and
[first-line-indent]($par.first-line-indent)
- [sets the font]($text.font) to "New Computer Modern", an OpenType derivative of
Computer Modern for both text and [code blocks]($raw)
- disables paragraph [spacing]($block.spacing)
- increases [spacing]($block.spacing) around [headings]($heading)
```typ
#set page(margin: 1.75in)
#set par(leading: 0.55em, first-line-indent: 1.8em, justify: true)
#set text(font: "New Computer Modern")
#show raw: set text(font: "New Computer Modern Mono")
#show par: set block(spacing: 0.55em)
#show heading: set block(above: 1.4em, below: 1em)
```
This should be a good starting point! If you want to go further, why not create
a reusable template?
## What limitations does Typst currently have compared to LaTeX? { #limitations }
Although Typst can be a LaTeX replacement for many today, there are still
features that Typst does not (yet) support. Here is a list of them which, where
applicable, contains possible workarounds.
- **Native charts and plots.** LaTeX users often create charts along with their
documents in PGF/TikZ. Typst does not yet include tools to draw diagrams, but
the community is stepping up with solutions such as
[`cetz`](https://github.com/johannes-wolf/typst-canvas). You can add those
to your document to get started with drawing diagrams.
- **Change page margins without a pagebreak.** In LaTeX, margins can always be
adjusted, even without a pagebreak. To change margins in Typst, you use the
[`page` function]($page) which will force a page break. If you just want a few
paragraphs to stretch into the margins, then reverting to the old margins, you
can use the [`pad` function]($pad) with negative padding.
- **Include PDFs as images.** In LaTeX, it has become customary to insert vector
graphics as PDF or EPS files. Typst supports neither format as an image
format, but you can easily convert both into SVG files with
[online tools](https://cloudconvert.com/pdf-to-svg) or
[Inkscape](https://inkscape.org/). We plan to add automatic conversion for
these file formats to the Typst web app, too!
- **Page break optimization.** LaTeX runs some smart algorithms to optimize
  not only line breaks but also page breaks. While Typst tries to avoid widows and
orphans, it uses less sophisticated algorithms to determine page breaks. You
can insert custom page breaks in Typst using `[#pagebreak(weak: true)]` before
submitting your document. The argument `weak` ensures that no double page
  break will be created if this spot would be a natural page break anyway. You
can also use `[#v(1fr)]` to distribute space on your page. It works quite
similar to LaTeX's `\vfill`.
- **Bibliographies are not customizable.** In LaTeX, the packages `bibtex`,
`biblatex`, and `natbib` provide a wide range of reference and bibliography
formats. These packages also allow you to write `.bbx` files to define your
own styles. Typst only supports a small set of citation styles at the moment,
but we want to build upon this by supporting [Citation Style Language
(CSL)](https://citationstyles.org), an XML-based format backed by Zotero that
allows you to describe your own bibliography styles.
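
The page-break workarounds mentioned above can be sketched as follows:

```typ
First part of the text.
#v(1fr) // distributes the remaining space, similar to \vfill
#pagebreak(weak: true) // no double break at a natural page boundary
Second part of the text.
```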
Source: https://github.com/yingziyu-llt/blog (typst_document/ml/content/chapter1.typ, Typst)

#import "../template.typ": *
= Model Evaluation and Selection

== Measuring How Good a Model Is

We measure the quality of a model's predictions with quantities such as the error rate, accuracy and error. They are defined as follows.

#definition("Measures of prediction quality")[
- Error rate: $E = a/m$, the proportion of misclassified samples ($a$) among all samples ($m$)
- Accuracy: accuracy = 1 - error rate
- Error: the difference between the model's actual prediction and the ground truth
- Training error / empirical error: the error on the training set
- Generalization error: the error on new samples, estimated on the test set
]

In general, we want a model to perform well on new samples, that is, to capture the underlying patterns shared by the data.
When the model fits the training data too well, it may mistake peculiarities of individual samples for shared underlying patterns, and it then performs poorly on new samples.
This situation is called *overfitting*; its counterpart is *underfitting*.
== Model Evaluation

To test a model's performance, we have to split the dataset sensibly into mutually exclusive training and test sets. There are several ways to do this split; a few are introduced below.

=== Hold-out

Randomly partition the data into two parts. A purely random split may distort the data distribution, so we use stratified sampling to keep the proportion of positive and negative examples in each subset consistent with the full dataset.
Since a single split also biases the performance estimate, we repeat the experiment with several different splits and average the results.

=== Cross-Validation

Partition the dataset into several subsets of similar size; in each round, pick one subset as the test data and use the rest as training data, so that several experiments can be run. This is called k-fold cross-validation.
If the number of folds equals the size of the dataset, the method is called leave-one-out. Its advantage is that the estimate does not depend on how the data happens to be split, which makes it very accurate, but the computational cost is large, so it is impractical for big datasets.

=== Bootstrapping

Sample from the dataset with replacement, repeatedly, putting each drawn element into a set $D'$. In the end $D'$ contains many duplicates, and its expected coverage of the original data is $1 - 1/e$ (about 63.2%). Use $D'$ as the training set and $D"/"D'$ as the test set.
This method is particularly suitable when the dataset is small and hard to split, and because it can produce many different training sets, it is also useful in ensemble learning.
== Performance Measures

For classification problems, the most common performance measure is the error rate $E(f;D) = 1/m sum_(i=1)^m II(y_i != f(x_i))$. For regression problems, the most common measure is the mean squared error (MSE), $E(f;D) = 1/m sum_(i=1)^m (y_i - f(x_i))^2$.

In many situations, however, we care more about what kind of mistakes are made: is a positive example being recognized as negative, or a negative one as positive? A confusion matrix is needed to capture this in detail.

=== The P-R Curve

#definition("Precision and recall")[
For a binary classification problem, say with a positive and a negative class, precision is the fraction of examples predicted positive that really are positive, and recall is the fraction of truly positive examples that the model predicts as positive.
]

#figure(caption: "Confusion matrix; rows are the ground truth, columns the prediction")[
#table(
  columns: (1fr, 1fr, 1fr),
  [], [Positive], [Negative],
  [Positive], [TP (true positive)], [FN (false negative)],
  [Negative], [FP (false positive)], [TN (true negative)]
)
]

Then precision $ P = "TP" / ("TP" + "FP") $ and recall $ R = "TP" / ("TP" + "FN") $.

Plotting recall and precision as the horizontal and vertical coordinates gives the P-R curve. If one model's curve completely encloses another's, the former is certainly better. If the curves intersect, no such conclusion can be drawn.

For intersecting curves there are several ways to compare. The first is the area under each curve: the larger, the better. That area is hard to compute, however, so other measures are used.
The simplest is the break-even point (BEP), the point where P = R; the larger this point, the better the model can be considered.
A somewhat more elaborate measure is the $F 1$ score, the harmonic mean of P and R: $"F1" = (2 P R) / (P + R) = (2 "TP") / ("total examples" + "TP" - "TN")$.
When missing positives and raising false alarms have different costs, the formula is generalized to a weighted harmonic mean: $1/F_beta = 1/(1 + beta^2) (1/P + beta^2/R)$, that is, $F_beta = ((1 + beta^2) P R) / (beta^2 P + R)$. The larger $beta$, the more the measure favours recall.
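
The measures above can be checked with a short Python sketch (the counts are made-up example values):

```python
def precision_recall(tp, fp, fn):
    # Precision: predicted positives that are correct.
    # Recall: actual positives that are found.
    return tp / (tp + fp), tp / (tp + fn)

def f_beta(p, r, beta=1.0):
    # beta > 1 favours recall, beta < 1 favours precision;
    # beta = 1 gives the ordinary F1 score.
    return (1 + beta**2) * p * r / (beta**2 * p + r)

p, r = precision_recall(tp=6, fp=2, fn=4)  # p = 0.75, r = 0.6
print(f_beta(p, r))          # F1 = 2/3
print(f_beta(p, r, beta=2))  # F2 = 0.625, pulled towards recall
```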
When multiple rounds of training yield multiple test results, and hence multiple confusion matrices, how do we obtain P and R?
There are two approaches: macro-P and macro-R compute P and R for each matrix separately and then average them, while micro-P and micro-R first average TP, TN, FP and FN across the matrices and compute P and R from those averages.
=== ROC Curve and AUC

In practice it is more common for a classifier to return a value in [0, 1] and predict 1 when it exceeds some threshold, and 0 otherwise.
Classification then amounts to ranking the examples by how likely they are to be positive and finding a cut point: everything below it is judged negative, everything above it positive.
Since different tasks place the cut point differently, what really matters is the quality of the ranking itself.

To measure this, we draw the ROC curve, with the false positive rate on the horizontal axis and the true positive rate on the vertical axis. In the ideal case we could compute every point and plot the full curve.
In practice we can only run finitely many experiments and cannot plot every point. Instead, take a set of labelled data and proceed as follows:
+ Estimate each example's confidence of being positive
+ Sort the examples by confidence in descending order
+ Start with the threshold above the highest confidence, so that every example is judged negative, and plot the point (0, 0)
+ Lower the threshold step by step so that it passes each example's confidence in turn
+ If the previous point is (x, y) and the example just passed is a positive, plot $(x, y + 1/m^+)$; otherwise plot $(x + 1/m^-, y)$

The area under this curve is called the AUC and is a good measure of ranking quality. When $"AUC" = 1$, some threshold turns the classifier into an ideal one; when $0.5 < "AUC" < 1$, the classifier is usable, and the larger the value, the better.
When $"AUC" = 0.5$, the classifier is no better than random guessing and therefore useless. When it is below 0.5, the classifier is worse than random, but inverting its outputs gives a usable classifier again.
== Statistical Tests

(I could not follow this part, so I skipped it.)

== Bias and Variance

Assume $f(x;D)$ is the prediction learned on training set $D$, $y$ is the true label, and $y_D$ is the label after noise corruption.
Let the expected prediction be $overline(f)(x) = EE_D [f(x;D)]$.
The variance produced by training on different sets is $"var"(x) = EE_D [(f(x;D) - overline(f)(x))^2]$, the noise is $epsilon^2 = EE_D [(y_D - y)^2]$, and the squared bias between the expected prediction and the ground truth is $"bias"^2(x) = (overline(f)(x) - y)^2$.
Assuming the noise has zero expectation, the expected MSE loss decomposes as $E(f;D) = EE_D [(f(x;D) - y_D)^2] = dots = "bias"^2(x) + "var"(x) + epsilon^2$.
This is the bias-variance decomposition. The bias measures the model's fitting ability, the variance measures its sensitivity to the particular training data, and $epsilon^2$ measures the intrinsic difficulty of the problem.
|
Source: https://github.com/fenjalien/metro (src/impl/num/process.typ, Typst, Apache License 2.0)

#import "parse.typ": parse
// Splits `input` into digit groups and joins them with `group-separator`.
// With `rev: true`, grouping starts from the least-significant digit (for
// integer parts); with `rev: false`, from the left (for decimal parts).
// The first group's size may differ via `digit-group-first-size`.
#let group-digits(options, input, rev: true) = {
let (first, other) = if options.digit-group-size != 3 {
(options.digit-group-size,) * 2
} else {
(options.digit-group-first-size, options.digit-group-other-size)
}
let input = if rev {
input.split("").slice(1, -1).rev()
} else {
input
}
let len = input.len()
let start = calc.rem(first, input.len())
let result = (
input.slice(0, start),
..for i in range(start, len, step: other) {
(input.slice(i, calc.min(i + other, len)),)
}
)
return if rev {
result.map(x => x.rev().join()).rev()
} else {
result
}.join(options.group-separator)
}
#let check-exponent-thresholds(options, exp) = (options.exponent-mode == "scientific" or exp - 1 < options.exponent-thresholds.first() or exp+1 > options.exponent-thresholds.last())
#let process-exponent(options, exp) = {
let exponent = parse(options, exp)
if exponent.all(x => x == auto) {
exponent = (
none, // sign
exp, // The not parsed exponent
none // Decimal
)
}
if exponent.at(2) != none {
exponent.insert(2, options.output-decimal-marker)
}
let sign = exponent.first()
exponent = exponent.slice(1).join()
if exponent != "0" or options.print-zero-exponent {
if sign == "-" or options.print-implicit-plus or options.print-exponent-implicit-plus {
exponent = if sign == "-" { sym.minus } else {sym.plus} + exponent
}
exponent = if options.output-exponent-marker != none {
options.output-exponent-marker + exponent
} else {
math.attach(if options.print-mantissa {options.spacing + options.exponent-product + options.spacing } + options.exponent-base, t: exponent)
}
} else {
exponent = none
}
return exponent
}
#let process-power(options, pwr) = {
let power = parse(options, pwr)
if power.all(x => x == auto) {
return pwr
}
if power.at(2) != none {
power.insert(2, options.output-decimal-marker)
}
return power.join()
}
#let process-uncertainty(options, pm) = {
let uncertainty = parse(options, pm)
if uncertainty.all(x => x == auto) {
uncertainty = (
none,
pm,
none
)
}
if uncertainty.at(2) != none {
uncertainty.insert(2, options.output-decimal-marker)
}
uncertainty = options.spacing + if uncertainty.first() == "-" { sym.minus.plus } else { sym.plus.minus } + options.spacing + uncertainty.slice(1).join()
return uncertainty
}
#let non-zero-integer-regex = regex("[^0]")
// Moves digits between `integer` and `decimal` and adjusts `exponent` to
// realise the requested `exponent-mode` ("scientific"/"threshold",
// "engineering" or "fixed").
#let exponent-mode(options, integer, decimal, exponent) = {
exponent = if exponent == none { 0 } else { int(exponent) }
if options.exponent-mode in ("scientific", "threshold") {
let i = integer.position(non-zero-integer-regex)
if i != none and i < integer.len() {
let exp = integer.len() - i - 1 + exponent
if check-exponent-thresholds(options, exp) {
exponent = exp
decimal = integer.slice(i+1) + decimal
integer = integer.slice(i, i+1)
}
} else if integer.len() > 1 or integer == "0" {
let i = decimal.position(non-zero-integer-regex)
let exp = exponent - i - 1
if check-exponent-thresholds(options, exp) {
integer = decimal.slice(i, i+1)
decimal = decimal.slice(i+1)
exponent = exp
}
}
} else if options.exponent-mode == "engineering" {
if integer.len() > 1 {
let l = calc.rem(integer.len(), 3)
if l == 0 { l = 3 }
exponent += integer.slice(l).len()
decimal = integer.slice(l) + decimal
integer = integer.slice(0, l)
} else if integer == "0" {
let i = decimal.position(non-zero-integer-regex)
let l = 3 - calc.rem(i, 3)
integer = decimal.slice(i, i+l)
decimal = decimal.slice(i+l)
exponent -= i+l
}
} else if options.exponent-mode == "fixed" {
let n = options.fixed-exponent
let i = exponent - n
exponent = n
if i < 0 {
if integer.len() < -i {
integer = "0" * -(i - integer.len()) + integer
}
decimal = integer.slice(i) + decimal
integer = integer.slice(0, i)
} else if i > 0 {
if decimal.len() < i {
decimal += "0" * (i - decimal.len())
}
integer += decimal.slice(0, i)
decimal = decimal.slice(i)
}
}
return (integer, decimal, exponent)
}
// Rounds the digits of a number based on the last digit sliced off of it.
// `decimal` should be `""` if there is no decimal
#let round-digits(options, integer, decimal, index) = {
let digit
if integer.len() > index {
decimal = ""
digit = integer.at(index)
integer = integer.slice(0, index) + "0" * (integer.len() - index)
} else if integer.len() == index {
digit = decimal.first()
decimal = ""
} else if integer.len() + decimal.len() > index {
index -= integer.len()
digit = decimal.at(index)
decimal = decimal.slice(0, index)
}
if digit != none and (options.round-direction == "up" or (
options.round-direction == "nearest" and (
options.round-half == "even" and calc.odd(int((integer + decimal).last()))
or options.round-half == "up" and int(digit) >= 5
)
)) {
if decimal != "" {
let len = decimal.len()
decimal = str(int(decimal) + 1)
if len < decimal.len() {
decimal = if options.round-pad { decimal.slice(1) } else { "" }
integer = str(int(integer) + 1)
} else {
decimal = "0" * (len - decimal.len()) + decimal
}
} else {
let i = integer.len() - index
integer = str(int(integer) + calc.pow(10, i))
}
}
return (integer, decimal)
}
// Applies `round-mode`: rounds to `round-precision` decimal places
// ("places") or significant figures ("figures"), honouring `round-pad`
// and, for "places", `round-minimum` and `round-zero-positive`.
#let round-mode(options, sign, integer, decimal) = {
let round-digits = round-digits.with(options)
if options.round-mode == "places" {
if decimal.len() > options.round-precision {
(integer, decimal) = round-digits(integer, decimal, options.round-precision + integer.len())
if float(integer + "." + decimal) < options.round-minimum {
(integer, decimal) = str(options.round-minimum).split(".")
sign = "<" + sign
} else if int(integer) + int(decimal) == 0 and options.round-zero-positive {
sign = "+"
}
} else if decimal.len() < options.round-precision and options.round-pad {
decimal += "0" * (options.round-precision - decimal.len())
}
} else if options.round-mode == "figures" {
if int(integer) == 0 {
integer = ""
}
let len = integer.len() + decimal.len()
if options.round-precision < len {
(integer, decimal) = round-digits(integer, decimal, options.round-precision + if integer.len() < options.round-precision { decimal.position(non-zero-integer-regex) })
} else if len < options.round-precision and options.round-pad {
decimal += "0" * (options.round-precision - len)
}
}
// else if options.round-mode == "uncertainty" {
// }
return (sign, integer, decimal)
}
#let process(options, sign, integer, decimal, exponent, power, uncertainty) = {
let parse-numbers = not (integer == auto and decimal == auto)
if not parse-numbers {
(sign, integer, decimal, exponent, power) = (sign, integer, decimal, exponent, power).map(x => if x != auto { x })
}
if integer == none {
integer = ""
}
if decimal == none {
decimal = ""
}
// Exponent options
if options.exponent-mode != "input" {
(integer, decimal, exponent) = exponent-mode(options, integer, decimal, exponent)
}
// Rounding options
if options.round-mode != none and (options.round-mode != "uncertainty" and uncertainty == none) { // or (options.round-mode == "uncertainty" and uncertainty != none)) {
(sign, integer, decimal) = round-mode(options, sign, integer, decimal)
}
if options.drop-zero-decimal and decimal.match(non-zero-integer-regex) == none {
decimal = ""
}
// Minimum digits
if integer.len() < options.minimum-integer-digits {
integer = "0" * (options.minimum-integer-digits - integer.len()) + integer
}
if decimal.len() < options.minimum-decimal-digits {
decimal += "0" * (options.minimum-decimal-digits - decimal.len())
}
if options.group-digits in ("all", "decimal", "integer") {
let group-digits = group-digits.with(options)
if options.group-digits in ("all", "integer") and integer.len() >= options.group-minimum-digits {
integer = group-digits(integer)
}
if options.group-digits in ("all", "decimal") and decimal.len() >= options.group-minimum-digits {
decimal = group-digits(decimal, rev: false)
}
}
let mantissa = if parse-numbers {
""
if (integer.len() == 0 or integer != "0") or options.print-zero-integer {
if integer.len() == 0 {
"0"
} else {
integer
}
}
if decimal != "" or options.retain-explicit-decimal-marker {
options.output-decimal-marker
if options.zero-decimal-as-symbol and int(decimal) == 0 {
options.zero-symbol
} else {
decimal
}
}
} else {
options.number
}
options.print-mantissa = options.print-unity-mantissa or mantissa not in ("1", "")
options.spacing = if not options.tight-spacing {
sym.space.thin
}
if options.drop-uncertainty {
uncertainty = none
} else if uncertainty != none {
uncertainty = process-uncertainty(options, uncertainty)
}
if options.drop-exponent {
exponent = none
} else if exponent != none {
exponent = process-exponent(options, exponent)
}
if power != none {
power = process-power(options, power)
}
return (options, sign, mantissa, exponent, power, uncertainty)
} |
https://github.com/EGmux/PCOM-2023.2 | https://raw.githubusercontent.com/EGmux/PCOM-2023.2/main/lista2/lista2.typ | typst | #set heading(numbering: "1.")
== Principles of Communications - 2023.2 Problem Set 2 (Baseband Transmission)
#include "./lista2q1.typ"
#include "./lista2q2.typ"
#include "./lista2q3.typ"
#include "./lista2q4.typ"
#include "./lista2q5.typ"
#include "./lista2q6.typ"
#include "./lista2q7.typ"
#include "./lista2q8.typ"
#include "./lista2q9.typ"
|
|
https://github.com/simon-epfl/notes-ba2-simon | https://raw.githubusercontent.com/simon-epfl/notes-ba2-simon/main/fds/fds-tricks-2.typ | typst | #set text(font: "DejaVu Sans")
#show heading.where(level: 1): contents => text(size: 20pt, contents)
#show heading: contents => pad(bottom: 10pt, contents)
#set quote(block: true)
#set heading(numbering: (ignore_first, ..n) => {
if (n.pos().len() != 0) {
numbering("1.1.", ..n)
}
})
#let stick-together(a, threshold: 3em) = {
block(a + v(threshold), breakable: false)
v(-1 * threshold)
}
= FDS Tricks (Part II)
== How do you tell a sequential circuit from a combinational one?
Essentially, we want to check whether our output is a function of a state plus the inputs, or of the inputs alone.
- Check that all the inputs are actually used in the `always` block. If that is not the case, an input could change while the output stays the same #sym.arrow sequential.
- Check that all the cases are handled in the `case` block #sym.arrow combinational.
== FSM in Verilog
- next-state logic (in an always \@\* block).
- next-state update (in an always \@\(posedge clock\) block, with reset handling).
- output update (in an always \@\* block).
== Finding the maximum frequency
- Compute all the delays $t_"setup" + t_"comb"_"max" + t_"cQ"_"max"$ for each path between two flip-flops.
- Take the largest delay.
- Compute the maximum frequency: $f_"max" = 1 / t_"max"$.
== Checking for hold violations
- Compute all the delays $t_"comb"_"min" + t_"cQ"_"min"$ for each path between two flip-flops.
- Compare them with $t_"hold"$.
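The two checks above can be sketched in code; the delay values below are hypothetical (all in nanoseconds) and serve only to illustrate the computation:

```python
# Static timing sketch with hypothetical delays, all in nanoseconds.
# Each flip-flop-to-flip-flop path: (t_cq_min, t_cq_max, t_comb_min, t_comb_max).
T_SETUP = 0.5
T_HOLD = 0.3

paths = [
    (0.2, 0.8, 0.4, 3.0),
    (0.2, 0.8, 0.1, 1.5),
]

# Maximum frequency: the slowest path fixes the minimum clock period.
t_max = max(t_cq_max + t_comb_max + T_SETUP
            for (_, t_cq_max, _, t_comb_max) in paths)
f_max = 1 / t_max  # period in ns, so the frequency is in GHz

# Hold check: even the fastest path must hold its value long enough.
hold_ok = all(t_cq_min + t_comb_min >= T_HOLD
              for (t_cq_min, _, t_comb_min, _) in paths)

print(round(f_max, 3), hold_ok)  # 0.233 True
```

With these numbers the 4.3 ns path dominates, so the clock cannot run faster than about 233 MHz, and no path is fast enough to violate the hold time.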
#pagebreak()
== Lecture recap
- Set-reset latch: a basic kind of memory circuit used in electronics. It has two inputs, S and R, and they are asynchronous (no rising or falling clock edge is needed for them to act)! They depend only on the inputs.
- D latch: like a set-reset latch, but more controllable (the input R becomes an input C, which lets us control when we want the latch output to lock versus follow the value of S).
- D flip-flop: like a D latch, but the output only takes the value of S when the input C changes value (goes high). While C stays high, even if S changes, the output stays fixed (at the value it had when C had its rising edge).
  #sym.arrow we use the clock to decide when our state changes, for synchronization reasons! Indeed, if some components start updating their values at arbitrary moments while others do so at a precise instant, we get stability problems (for a short moment, an output can be invalid).
- FSM (finite state machine) --> if Moore, the outputs depend only on the previous state (no inputs); if Mealy, they depend on both the previous state *and* the inputs.
- clock duty ratio: $ "time of high signal"/"time of low signal"$
#pagebreak()
== Setup and Hold times
*Contamination Clock-to-Q*: once the clock edge fires, the minimum time before Q can start changing.
*Propagation Delay Clock-to-Q*: once the clock edge fires, the maximum time Q takes to stabilize.
#image("assets/q1d2.png")
For D2 to capture the value correctly (no metastability), Q must be stable for at least:
- $t_"setup"$ : before the clock edge fires
- $t_"hold"$ : after the clock edge has fired
So we want to check:
- $t_"cQ"_"max" + t_"comb"_"max" + t_"setup" <= t_"clock"$ (the time before the computation starts, plus the computation, plus the setup time, must fit within the clock period)
- $t_"cQ"_"min" + t_"comb"_"min" >= t_"hold"$ (the time before the computation starts plus the computation must exceed the hold time, otherwise the value is not held long enough)
Valerio turns around, fetches a pen from Habib, and turns back towards me. I take the pen every 5 seconds #sym.arrow *clock!* and:
- I need to see the pen 2 s before I start taking it (the moment I reach 0 mod 5 s) #sym.arrow *t_setup!* This condition is violated if I go too fast (the clock is too fast) and Valerio has no time to show me the pen. Either Valerio must go faster (we reduce *t_comb*), or I must be slowed down (we slow the clock).
- I take 1 s to grab the pen #sym.arrow *t_hold!* This condition is violated if Valerio goes to fetch another pen from Habib too quickly.
$ "we want the value to be correct before " t_"setup" \ arrow.l.r.double t_c Q + t_"comb" <= t_"clock" - t_"setup" $
Now we add a $ t_"skew"$, which shifts $t_"clock"$ a little (it is bonus time that we add):
$ arrow.l.r.double t_c Q + t_"comb" <= t_"clock" - t_"setup" + t_"skew" $
Compute the time from FF A to FF B to compute $f_max$:
$t_c Q_max + t_"comb" + t_"setup" + t_"skew of FF A" - t_"skew of FF B" = "delay" $
Check the hold:
$t_c Q_min + t_"comb" + t_"skew of FF A" - t_"skew of FF B" = "delay" $
#image("assets/metastability.png")
solution: put two flip-flops in series. If the first one goes metastable, the signal will settle between the two, and the second flip-flop will receive a non-metastable signal.
https://www.youtube.com/watch?v=xCA54Qu4WtQ
#image("assets/setup.png")
== Clock Skew
The difference in time between the moment the clock edge fires for flip-flop 1 and for flip-flop 2. If the skew is positive, great: we get more time for the computation. If the skew is negative, less great: we get less time for the computation.
== Memory
There are two MUXes: one to enable the write at the right location (active only when `WE` is true), and one to choose what comes out.
#image("assets/dffp.png")
=== Memory in Verilog
`reg [bits-per-word-1:0] variable-name [number-of-words-1:0];`
#image("assets/verilog-db.png", width: 50%)
#image("assets/verilog-db-schema.png", width: 40%)
#pagebreak()
== RISC-V
- `li [reg] [value]`: load an immediate into a register
- `and [reg1] [reg2] [reg3]`: bitwise and, stores the result in reg1
- `add [reg1] [reg2] [reg3]`: add, stores the result in reg1
- `srai [reg1] [reg2] [value]`: shift right "arithmetic" (there is a "logic" version) immediate, stores the result in reg1
- `bne [reg1] [reg2] [label]`: branch not equal, jumps to label if reg1 and reg2 are not equal
#image("assets/datasize.png")
#pagebreak()
=== Branch-type instructions
We store an `immediate` offset that says how far we want to jump; it is always a multiple of 2 (and usually even a multiple of 4, since we do not want to jump into the middle of an instruction). This is why we do not need to store the last bit of the `immediate` (it is always 0).
It can therefore encode 13 bits (since we store 12 bits and know that the last one is 0).
Beware: it is stored in a strange order. Bits 4 to 1 are stored together with bit 11, and then the rest.
#image("assets/branch.png")
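A small encoder makes the scrambled layout concrete. The field placement below (imm[12] and imm[10:5] in bits 31:25, imm[4:1] and imm[11] in bits 11:7) follows the standard RISC-V B-type format, which is an assumption based on the official ISA rather than something spelled out in these notes:

```python
def encode_b_immediate(offset: int) -> int:
    """Scatter a 13-bit branch offset (a multiple of 2) into the B-type
    instruction fields: imm[12] -> bit 31, imm[10:5] -> bits 30:25,
    imm[4:1] -> bits 11:8, imm[11] -> bit 7 (imm[0] is implicit and 0)."""
    assert offset % 2 == 0 and -4096 <= offset < 4096
    imm = offset & 0x1FFF  # 13-bit two's complement
    word = 0
    word |= ((imm >> 12) & 0x1) << 31
    word |= ((imm >> 5) & 0x3F) << 25
    word |= ((imm >> 1) & 0xF) << 8
    word |= ((imm >> 11) & 0x1) << 7
    return word

# A branch of +8 bytes only sets imm[3], which lands in instruction bit 10.
print(hex(encode_b_immediate(8)))  # 0x400
```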
=== Memory-type instructions
- `lw t1, 12(t2)`: load word; fetches the value at address `12 + t2` and puts it into `t1`
=== Jump-type instructions (unconditional)
Like branch-type instructions, but used to jump somewhere without any condition (as for a function call)
- `jal return_address, label/immediate` - jump and link: goes to `label` and stores the return address (i.e. the address of the instruction right after the jump, so pc + 4) in `return_address`. Careful: when an `immediate` is given, it will then be multiplied by 2 (because we cannot jump into the middle of an instruction).
- `jalr return_address, t1, immediate_offset` - jump and link register: goes to the address stored in `t1` (offset by `immediate_offset`) and stores the return address in `return_address`... but here the `immediate_offset` is not multiplied by 2.
=== Data
```asm
.data
matrix:
.byte 12, 78, 35, 11, 34, 113, 46, 122, 56, 24, 57, 33
.text
# la is a pseudoinstruction that the assembler converts
# into two lower-level instructions: auipc x8 ADDRESS_OF_MATRIX
# and addi x8 x8 -(INITIAL_PC) (register s0 is an alias for x8).
la s0, matrix
addi s1, zero, 4
addi s2, zero, 3
add s3, zero, zero
start:
```
see the paper problem set.
#pagebreak()
== Multicycle vs Single-cycle CPU
#image("assets/singlevsmulti.png")
Here we see that the single-cycle design must size its clock for the slowest instruction of the instruction set (such as `load`), which makes the faster instructions (like `store`) wait.
In multicycle, each instruction is split into several steps, and each step is executed in one cycle. The cycles are thus shorter and the control is finer #sym.arrow less wasted time.
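A tiny numeric sketch of this trade-off, with hypothetical latencies (the real values depend on the datapath):

```python
# Hypothetical worst-case latencies (ns) for each instruction class.
latencies = {"load": 8.0, "store": 7.0, "r-type": 6.0, "branch": 5.0}

# Single-cycle: one clock period, sized for the slowest instruction.
single_cycle_period = max(latencies.values())  # every instruction takes 8.0 ns

# Multicycle: the period is sized for the slowest *stage*, and each
# instruction only uses as many cycles (stages) as it needs.
stage_period = 2.0  # ns, hypothetical slowest stage
cycles = {"load": 5, "store": 4, "r-type": 4, "branch": 3}
multicycle_time = {k: n * stage_period for k, n in cycles.items()}

print(single_cycle_period, multicycle_time["branch"])  # 8.0 6.0
```

With these numbers, a `branch` drops from 8.0 ns to 6.0 ns, while `load` actually gets slower (10.0 ns); the win comes from the instruction mix, since most instructions are not the slowest one.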
|
|
https://github.com/AxiomOfChoices/Typst | https://raw.githubusercontent.com/AxiomOfChoices/Typst/master/Courses/Math%2018_155%20-%20Differential%20Analysis%201/Assignments/Assignment%202.typ | typst | #import "/Templates/generic.typ": latex, header
#import "@preview/ctheorems:1.1.0": *
#import "/Templates/math.typ": *
#import "/Templates/assignment.typ": *
#show: doc => header(title: "Assignment 2", name: "<NAME>", doc)
#show: latex
#show: NumberingAfter
#show: thmrules
#let col(x, clr) = text(fill: clr)[$#x$]
#let pb() = {
pagebreak(weak: true)
}
#set page(numbering: "1")
#let bar(el) = $overline(#el)$
#set enum(numbering: "a)")
*Sources consulted* \
Classmates: . \
Texts: Class Notes.
= Question
== Statement
Let $u,v in D'(RR^n)$.
+ Show that $supp (u + v) seq supp (u) union supp (v)$.
+ If $u_k -> u$ in $D' (RR^n)$ then $supp (u)$ is a subset of the closure of $union.big_(k=1)^infinity supp (u_k)$.
== Solution
+ Let $x in.not supp u union supp v$, then since $x in.not supp u$ we have for some neighborhood $U$, $u|_U = 0$, and since $x in.not supp v$ we have for some other neighborhood $V$, $v|_V = 0$. Now if we consider the open set $U sect V$ then we get
$
u|_(U sect V) = 0, quad v|_(U sect V) = 0
$
so $(u + v)|_(U sect V) = 0$ and thus $x in.not supp(u + v)$.
+ Let $U^c = overline(union.big_(k=1)^infinity supp (u_k))$, then $U seq RR^n backslash supp (u_k)$ for every $k$, by theorem 4 in Dyatlov's notes we know that $u_k|_(RR^n backslash supp (u_k)) = 0$ so $u_k|_U = 0$ for all $k$. But then for any $f in C_c^infinity (U)$ we have that $pair(u_k, f) = 0$ for all $k$. So by definition of $u_k -> u$ we know that $pair(u,f) = 0$. Thus $supp u seq U^c$ and so we are done.
= Question
== Statement
Prove that for any open set $Omega seq RR^n$ there is a collection of $phi_i in C_c^infinity (Omega)$ such that
+ $sum phi_i = 1$ in $Omega$
+ $0 <= phi_i <= 1$ in $Omega$.
+ For any compact set $K seq Omega$ we have $supp phi_i sect K = nothing$ for all but finitely many $i$.
Use the following outline: write $Omega = union.big_(i = 1)^infinity K_i$ where the $K_i$ are all compact sets, and the $O_i$ are open sets satisfying $K_i seq O_i seq K_(i+1)$.
Let $psi_i$ be a cutoff for $K_i$ supported in $O_i$ satisfying
+ $psi_i = 1$ on $K_i$.
+ $0 <= psi_i <= 1$.
Show the following
+ $0 <= psi_(i+1) - psi_i <= 1$ in $Omega$.
+ $psi_(i+1) - psi_i$ in $C_c^infinity (Omega)$.
+ $1 = psi_1 + sum_(i=1)^infinity (psi_(i+1) - psi_i)$ where the sum makes sense as a locally finite sum; also, for any compact set $K seq Omega$ there is an $n_K >= 0$ such that $psi_(i+1) - psi_i = 0$ on $K$ for all $i >= n_K$
== Solution
+ We need to check 3 cases: $x in K_i, x in K_(i+1) backslash K_i, x in.not K_(i+1)$. In the first case we have $psi_i = 1$ and $psi_(i+1) = 1$ so we have $psi_(i+1) - psi_i = 0$. In the second case we have $psi_i <= 1, psi_(i+1) = 1$, so we have $0 <= psi_(i+1) - psi_i <= 1$. In the third case we have $psi_i = 0, psi_(i+1) <= 1$, so we have $0 <= psi_(i+1) - psi_i <= 1$.
+ This is immediate as $psi_i$ and $psi_(i+1)$ are both in $C_c^infinity (Omega)$.
+ Let $K$ be any compact subset of $Omega$; then, since the $O_i$ cover $K$, there exists a $j$ such that $K seq O_j$, and hence $K seq K_(j+1)$. But then $psi_(n+1) - psi_n = 0$ on $K$ for each $n >= j+1$, since both cutoffs equal $1$ on $K$, so we know that the sum is finite. This gives us
$
psi_1 + sum_(i=1)^infinity (psi_(i+1) - psi_i)
= psi_1 + sum_(i=1)^(j+1) (psi_(i+1) - psi_i)
= psi_(j+2) = 1
$
on $K$.
= Question
== Statement
#let norm2(x) = math.abs([#h(-2pt) #math.norm(x) #h(-2pt)])
Set $norm(f)_(C^N (K)) = sup_(K) sum_(abs(alpha) <= N) abs(diff^alpha f)$ and $norm2(f)_(C^N (K)) = max_(x in K, abs(alpha) <= N) abs(diff^alpha f)$.
Show that these are equivalent norms.
Show that
$
norm(phi psi)_(C^N (K)) <= C_N norm(phi)_(C^N (K)) dot norm(psi)_(C^N (K))
$
== Solution
First to show they are equivalent norms, we have
$
norm(f)_(C^N (K))
= sup_(K) sum_(abs(alpha) <= N) abs(diff^alpha f)
<=sup_(K) sum_(abs(alpha) <= N) max_(abs(alpha) <= N) abs(diff^alpha f)
<=sup_(K) 2 n^N max_(abs(alpha) <= N) abs(diff^alpha f)
\ <= 2 n^N max_(x in K, abs(alpha) <= N) abs(diff^alpha f)
<= 2 n^N norm2(f)_(C^N (K)).
$
And we also have
$
norm2(f)_(C^N (K))
= max_(x in K, abs(alpha) <= N) abs(diff^alpha f)
<= max_(x in K) sum_(|alpha| <= N) abs(diff^alpha f)
= norm(f)_(C^N (K)).
$
For the next part we do a straightforward computation, using the Leibniz rule $diff^alpha (phi psi) = sum_(beta + gamma = alpha) binom(alpha, beta) diff^beta phi thin diff^gamma psi$.
$
norm(phi psi)_(C^N (K))
= sup_(K) sum_(abs(alpha) <= N) abs(diff^alpha (phi psi))
<= sup_(K) sum_(abs(alpha) <= N) sum_(beta + gamma = alpha) binom(alpha, beta) abs(diff^beta phi) abs(diff^gamma psi)
\ <= C_N sup_(K) sum_(abs(beta), abs(gamma) <= N) abs(diff^beta phi) abs(diff^gamma psi)
<= C_N (sup_(K) sum_(abs(beta) <= N) abs(diff^beta phi))
(sup_(K) sum_(abs(gamma) <= N) abs(diff^gamma psi))
= C_N norm(phi)_(C^N (K)) norm(psi)_(C^N (K))
$
where $C_N = max_(abs(alpha) <= N) sum_(beta + gamma = alpha) binom(alpha, beta)$.
= Question
== Statement
Prove that if a linear map $u : C_c^infinity (Omega) -> RR$ is positive, in the sense that $pair(u,f) >= 0$ for all $f>=0,f in C_c^infinity (Omega)$, then $u$ is a distribution of order zero.
Next show that $u$ is a positive measure.
== Solution
Let $psi$ be a cutoff function for a compact set $K$; then for any $phi$ with $supp phi seq K$ fix $M = norm(phi)_(C(K))$ and note that $phi + M psi >= 0$ and $phi - M psi <= 0$, so we have
$
0 <= pair(u, phi + M psi) = pair(u, phi) + M pair(u, psi)
\
0 >= pair(u, phi - M psi) = pair(u, phi) - M pair(u, psi)
$
which, when rearranged, gives us
$
-M pair(u, psi) <= pair(u, phi) <= M pair(u, psi).
$
Thus we have
$
abs(pair(u, phi)) <= M pair(u, psi) = norm(phi)_(C(K)) pair(u,psi).
$
Now to show it is a measure, consider for any compact $K seq RR^n$ the inclusion $C_c^infinity (K) seq C(K)$. By the Hahn-Banach theorem, since the functional $u$ is bounded by the norm in $C(K)$, we can extend $u$ to a functional $hat(u)_K : C(K) -> RR$. By the Riesz–Markov–Kakutani theorem we then know that $pair(hat(u)_K, f) = integral_(K) f d mu_K$ for some positive measure $mu_K$; the measure is positive because the functional is positive.
We take the sequence $K_n = ov(B_n (0))$ and define a measure on all of $RR^n$ by taking the limit of these measures, which makes sense as they must agree on their intersection.
|
|
https://github.com/crd2333/crd2333.github.io | https://raw.githubusercontent.com/crd2333/crd2333.github.io/main/src/docs/AI/Embodied%20AI.typ | typst | #import "/src/components/TypstTemplate/lib.typ": *
#show: project.with(
title: "Embodied AI",
  lang: "en",
)
= Low-level Embodied Intelligence with Foundation Models
- From [stanford CS25] V3 I Low-level Embodied Intelligence with Foundation Models
- Reputedly "essential introductory viewing, which presents, in a storytelling style, Google's approaches and models for the mainstream problems"
== Introduction
- Why Embodied Intelligence
- Real world is really complex.
- We want robot to have "ambient intelligence".
- There are many ways to achieve embodied intelligence; one of them is to create sufficiently complex simulated, interactive environments
- e.g. GibsonEnv in CVPR18, and iGibson in RA-L2020, IROS2021, CoRL2021
- From large-scale simulation data to foundation models
  - With model-free reinforcement learning, even the simplest skill takes thousands upon thousands of simulated attempts to learn
  - Try to exploit the rich semantic priors inside foundation models (a foundation model can be seen as a compression of massive amounts of data, a knowledge base that can be queried)
  - Combine large-scale offline datasets with high-capacity models (e.g. Transformers), using language as the universal glue
- From "Internet AI" to "Embodied AI"
  - Datasets: from static datasets to dynamic, simulated environments
  - Tasks: from visual Classification, Segmentation, Detection, Generation, Captioning, etc. to the more complex Visual Navigation, Manipulation, Rearrangement, Embodied-QA, Mobile Manipulation, Instruction Following...
- Where are we in foundation models + robotics
- high level reasoning(planning)
- low level control
- Why do we think foundation models are generally used for high-level planning but are hard to use for low-level control?
  + Challenge 1: Moravec's paradox, an observation from artificial intelligence and robotics. Contrary to the naive intuitive assumption, a model's forward reasoning actually requires very little computation, whereas sensorimotor and perception skills require enormous computational resources (the reason is that models keep learning through reasoning, but, unlike humans, they do not have perception and motion engraved in their DNA)
  + Challenge 2: training data bias. For example, the data our foundation models (large language models) are trained on contains articles teaching you how to cook, but nothing teaching you how to move a hand 5 cm to the left (fine-tuning is of course an option, but it is actually not that easy to do well, as expanded on later)
  + Challenge 3: LLMs lack an interface for low-level control; in other words, it is hard to ask an LLM to output concrete joint angles or to write exhaustive control code
- So this lecture is split into two parts
+ Part 1: Model consolidation, joint scaling, positive transfer
+ Part 2: New interfaces of LLMs
== Model Consolidation, Joint Scaling, Positive Transfer
- Model consolidation: perform both high-level planning and low-level control within a single model
- Joint scaling: scale not only the expensive robot data but also the pre-training data of the vision-language model
- Positive transfer: the model benefits from diverse joint training (multimodal and cross-modal training on internet language, vision, vision-language, and so on)
  - Positive transfer is actually visible everywhere in NLP; it is so common that nobody mentions it, but robotics has not reached that stage yet, and it does not have that much data either
- From this figure we can see that models increasingly tend towards consolidation, and behind it lies the consolidation of tasks, representing every task as a (vision plus text to text) task
#fig("/public/assets/AI/Embodied/2024-10-06-13-11-04.png")
- #link("https://palm-e.github.io/")[PaLM-E]
#fig("/public/assets/AI/Embodied/2024-10-07-11-11-52.png")
  - It uses multimodal tokens (from a ViT or from robot sensory data) and trains parameters that map them into the word embedding space, naturally aligning the foundation large language model with the multimodal data
  - Trained on: robot data, Internet-scale VQA, captioning; it exhibits positive transfer across these diverse data sources
  - Neural 3D scene and robot state encoders into the LLM
  - Object-centric reasoning (the traditional ViT is grid-based and not great at preserving the understanding of objects and their relations)
- RT-2: Vision-Language-Action Models Transfer Web Knowledge to Robotic Control
  - Compared with PaLM-E, RT-2 integrates low-level control (more end to end?)
  - It can link "extinct animal" with the dinosaur among a pile of toys on the table, and with the action to pick it up (even if it never appeared in the robot training dataset; the internet catalog probably has it)
=== Generic VLM and RT-1 Architectures
- Rough architecture of a VLM (PaLI, used internally at Google): a ViT extracts image features, which are then fed into another Transformer together with the text information
#fig("/public/assets/AI/Embodied/2024-10-07-11-13-52.png", width: 70%)
- #link("https://robotics-transformer1.github.io/")[RT-1 architecture] (date: 2022.12)
#fig("/public/assets/AI/Embodied/2024-10-07-11-17-03.png")
  - The language instruction is encoded by a universal sentence encoder and fed, together with the image, into a FiLM EfficientNet, compressed first into 81 and then into 8 tokens, which, with positional encodings added, go through Transformer blocks to generate actions with 7 degrees of freedom
    - The FiLM EfficientNet is a convolutional neural network (basically a ResNet) that tokenizes the image while absorbing and understanding the language embedding, which is attached to the ResNet's intermediate layers
    - There are actually many ways to fuse vision and language (early fusion, late fusion, cross attention...); this is just one of them, chosen with latency in mind
  - RT-1 is very similar to a generic VLM with special output tokens, so using a large pre-trained VLM directly as the policy is very natural
=== #link("https://robotics-transformer2.github.io/")[RT-2 Architecture], VLA
- Date: 2023.7
- Represent actions in a VLM
  - The positions and angles of the robot arm could be modeled as floating-point numbers, as human language, or as positive integers
  - After experimenting, Google went with discretizing into 256 bins, using a sequence of such numbers to characterize an action
#fig("/public/assets/AI/Embodied/2024-10-07-11-27-58.png", width: 80%)
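The 256-bin discretization mentioned above can be sketched as follows; the bin count comes from the talk, while the value range and bin-center decoding are assumptions for illustration:

```python
def to_token(value: float, lo: float = -1.0, hi: float = 1.0, bins: int = 256) -> int:
    """Map a continuous action value in [lo, hi] to one of `bins` integers."""
    value = min(max(value, lo), hi)         # clip into range
    frac = (value - lo) / (hi - lo)         # normalize to [0, 1]
    return min(int(frac * bins), bins - 1)  # tokens 0 .. bins-1

def from_token(token: int, lo: float = -1.0, hi: float = 1.0, bins: int = 256) -> float:
    """De-tokenize: map a token back to the center of its bin."""
    return lo + (token + 0.5) * (hi - lo) / bins

print(to_token(0.0), from_token(128))  # 128 0.00390625
```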
- training the model
#fig("/public/assets/AI/Embodied/2024-10-07-11-30-01.png", width: 80%)
  - Co-fine-tune on top of the pre-trained VLM, i.e. mix internet image-text data with the robot data, so that the model keeps its internet-scale knowledge and does not overfit the far smaller robot dataset
  - This may well become the method every vertical domain uses from now on
  - The robot data is basically constructed in QA form
- Inference
#fig("/public/assets/AI/Embodied/2024-10-07-11-29-10.png", width: 80%)
  - The task instruction is again constructed as QA and fed into the model together with the robot's observations (autoregressive, past plus present RGB camera images)
  - Constrained decoding(?) is used to make sure the model always outputs 8 numbers, which are de-tokenized into an end-effector delta pose and sent to the robot to execute
  - Because the whole forward pass involves billions of parameters, it runs remotely on a TPU cluster at a rate of roughly 3 \~ 10 Hz
- Chain-of-thought phenomenon
  - Prompt the VLA to think through a plan step by step, and fine-tune on such augmented instructions
  - As a result, the model exhibits chain-of-thought abilities
- Summary of Vision-Language-Action (VLA) models
- Improved generalization
  - New tasks and objects
- Chain-of-Thought (CoT) reasoning
- Improving underlying model can improve robot control
- Future
+ Increasing motion diversity
+ Extending on CoT capabilities
=== #link("https://robotics-transformer-x.github.io/")[RT-X dataset] (date: 2023.10)
- Another example of positive transfer
- It pools data from many labs, making the robot dataset much larger and covering many kinds of tasks and robot embodiments
- Robots trained on this dataset exhibit significant positive-transfer properties
== New interfaces of LLMs
- LLMs <-> reward as an interface <-> low-level control
- Language Models as general pattern machines
=== #link("https://language-to-reward.github.io/")[Reward as An Interface]
- Date: 2023.6
- Motivation
  - Once we fix an action representation, a language model can be fine-tuned to generate action tokens; from a high-level perspective, actions can be viewed as just another special language of the language model
  - Can we generate more expressive actions beyond the scope of fine-tuning? Earlier we said an API was missing, but a concrete API actually often caps what the model can do
  - The authors argue that the best API is the reward function: it is general and has been validated by reinforcement learning and, more importantly, it is a reparameterization of actions
    - Imagine asking a robot to pick up a cup of water; that is a skill (or, put differently, a mapping between observations and actions)
    - But skills have a more general definition, a set of goals and a set of constraints, which helps the model be transferable between different skills
    - Going further, both goals and constraints can be characterized with reward functions
- Model
  - Model overview figure
#fig("/public/assets/AI/Embodied/2024-10-07-11-09-21.png")
  - We can ask the model to output reward functions, and then have a motion controller (reinforcement learning, or simply a predictive-control model such as MuJoCo MPC) optimize over this set of reward functions
  - The original LLM can then be viewed as a reward translator: in two stages, it basically first describes the natural language as a motion description and then encodes that into reward functions
  - Ablations show that the improvement comes not only from the reward encoder; the two-stage translator is important too
#fig("/public/assets/AI/Embodied/2024-10-07-11-09-53.png")
- Experimental results
  + On a quadruped robot (whose many degrees of freedom make it hard for the model to output actions directly), the language model seems able to generate suitable reward functions that make the quadruped stand up on its two legs
  + You can even guide it step by step with natural-language adjustments to learn complex motions such as a moonwalk (this kind of step-by-step guidance was also impossible with previous models)
  + When transferring to the real robot, the model may output motions too agile for the hardware to execute, so some regularization and constraints are needed
- These methods look less end to end, but they also teach us: when your domain is too different from language, it is best to find an intermediate representation and ask the model to explain in that intermediate representation before diving into a more opaque one
=== #link("https://general-pattern-machines.github.io/")[LLM as General Pattern Machines]
- Date: 2023.7
- Example
+ high-level semantics: The student takes out the \_\_\_\_
+ Low-level patterns: 70, 76, 66, 60, 54, \_\_\_\_
- In the past we treated language models as semantic engines; here we explore the low-level interface of LLMs, essentially asking them to reason over all kinds of sequences, and it works surprisingly well, solving tasks such as ARC and PCFG. Here we focus on Sequence Improvement
#fig("/public/assets/AI/Embodied/2024-10-07-11-07-45.png")
- Sequence Improvement prompts the LM with (state, action, reward) tuples and checks whether it can produce actions that achieve a higher reward. In other words, it does reinforcement learning inside the LLM's context (without a dedicated algorithm or replay buffer)
- This would allow us to train with clicker training (a human-feedback scheme: click, i.e. give a positive reward, whenever the robot does the right thing)
== Summary
- Key takeaways from this talk
  - We see foundation models being used more and more not only for robots' semantic reasoning but also for generating actions, that is, for low-level control
- Part 1: Model consolidation, joint scaling, positive transfer
- Rethink the Scaling Law of Robotics Transformer.
- A new recipe for scaling robot model and data.
+ RT-2 shows you can do more with the same data.
+ RT-X shows you can do a lot more with more data.
- Positive transfers everywhere.
- Part 2: New interfaces of LLMs
- Develop new(lower) level interfaces to LLMs.
- GPT summary
  - This talk introduced two new approaches that integrate large language models (LLMs) with robotics to achieve low-level embodied intelligence, focusing on "RT-2" and "Language to Reward".
    + The former co-fine-tunes on robot trajectory data and a broad range of web-based vision-language tasks, combining an advanced vision-language model with robotic control to form the robust RT-2 model with strong generalization ability. This approach allows robots to execute commands they were never trained on and to carry out multi-stage semantic reasoning tasks effectively, embodying a major advance in contextual understanding and in responding to user commands.
    + The latter uses an LLM to generate reward code, building a bridge between high-level language instructions and low-level robot actions. The method allows real-time user interaction, effectively controls a robot arm to perform a variety of tasks, and outperforms baseline methods.
  - These projects show that language models can be extended beyond their traditional territory of high-level reasoning tasks, playing a crucial role not only in interpreting and generating instructions but also in the fine-grained generation of low-level robot actions.
|
|
https://github.com/AU-Master-Thesis/thesis | https://raw.githubusercontent.com/AU-Master-Thesis/thesis/main/sections/4-results/scenarios.typ | typst | MIT License | #import "../../lib/mod.typ": *
== Scenarios <s.r.scenarios>
#let scenario(name) = [_*#name*_]
// #jonas[Read this, and all subsections]
The performance of #acr("MAGICS") is evaluated across #numbers.written(scen.len()) different scenarios. The first five, #boxed(color: theme.peach, [*ST-X*]), adhere to the experiments of the original paper@gbpplanner, while the remaining four, #boxed(color: colors.ours, [*SO-X*]), are designed for this work; the first two of these target the global planning extension that #acr("MAGICS") provides, and the last two study the internal iteration parameters. The scenarios are:
// #jonas[fixed the boldness overload here]
#set enum(numbering: box-enum.with(color: colors.theirs, prefix: "ST-"))
+ #scenario[Circle:] The environment of this scenario is empty. Robots are placed along the perimeter of a circle centered at the origin with radius $r$. Every robot is tasked with reaching the opposite side of the circle.
+ #scenario[Environment Obstacles:] This scenario is similar to the Circle scenario, but with obstacles near the center of the circle. // Again robots are placed in a circle, centered at $(0, 0)$ with $r=50m$, and are tasked with reaching the opposite side of the circle.
+ #scenario[Varying Network Connectivity:] Identical environment to the Environment Obstacles scenario, where the radius of the robots communication range, $r_"comms"$ is varied.
+ #scenario[Junction:] This environment is much more constrained, with only two roads, one vertical and one horizontal, each centered on its cross-axis, creating a simple crossroads at the very center of the environment. Robots are spawned repeatedly in two formations. One group begins on the left side with instructions to cross to the right, while the other starts at the top, with a mission to reach the bottom.
+ #scenario[Communications Failure:] This scenario is based in the same environment as the Circle scenario. It tests how resilient the #acrpl("GBP") algorithm is to spurious loss of communication between robots, by randomly toggling each robot's ability to communicate with others at different probabilities every timestep.
#set enum(numbering: box-enum.with(color: colors.ours, prefix: "SO-"))
+ #scenario[Solo Global Planning:] This scenario serves as an isolated test for automated placement of waypoints with #acr("RRT*") and the impact of the tracking factor, $f_t$, on a single robot. This scenario takes place in a complex maze-like environment, where a single robot is spawned in one end, and tasked with reaching the other end.
+ #scenario[Collaborative Global Planning:] This scenario is the same as the solo scenario, but with many robots. Several spawning locations are possible, and each individual robot gets a task to traverse the complex environment. The purpose of this scenario is to test the interplay between the tracking factors and the interrobot factors, $f_i$.
+ #scenario[Iteration Amount:] Same environment as the Circle scenario. This explores the effect of varying $M_I$ and $M_E$. The goal is to determine optimal values for these parameters and whether increasing the iteration count consistently improves performance.
+ #scenario[Iteration Schedules:] Identical environment and formation as the Circle scenario. This scenario explores the effect of the various iteration schedules, presented in @s.iteration-schedules.
// + _*Communications Failure:*_ This scenario is based in the same environment as the Circle scenario. In this scenario simulates the possibility of communication failure between the robots by flipping a communication toggle with some probability at every timestep.
// In this scenario simulates the possibility of communication failure between the robots by flipping a communication toggle with some probability at every timestep.
Specific details and parameters for each scenario are presented in the following sections #numref(<s.r.scenarios.circle>)-#numref(<s.r.scenarios.iteration-schedules>). Parameters shared between this thesis and @gbpplanner are selected to be identical. The numerical values of a few parameters in some of the scenarios are not listed explicitly there; in these cases, an argument for the selected interpretation is presented to justify the values chosen. An asterisk is used as a postfix for the values to which this applies, e.g. $x^*$. For example, the lookahead multiple $l_m$ is only set as a default value of $3$ in the source code's configuration class #source-link("https://github.com/aalpatya/gbpplanner/blob/fd719ce6b57c443bc0484fa6bb751867ed0c48f4/inc/Globals.h#L49", "./inc/Globals.h:49") and not overwritten in any of the provided experimental configuration files. Many of the values are shared between scenarios; thus, to make the differences stand out, each value that differs from its value in the Circle scenario is colored #text(theme.peach, [orange]). Parameters related to the #acr("GBP") algorithm have been explained in detail throughout the @methodology. New parameters not explained previously are:
#term-table(
[$bold(C_("radius"))$], [The radius of the circle that the robots are spawned in. Omitted in the Junction scenario, as it is not applicable.],
[$bold(s e e d)$], [
    The seed used for the #acr("PRNG"). Randomness is used in a single place across the different scenarios: randomly selecting a robot's radius $r_R$ in the scenarios that use a circle formation. In the work of @gbpplanner, the Mersenne Twister #acr("PRNG") from the C++ standard library is used. In the reimplementation, the WyRand algorithm is used, as it was more easily available through @bevy_prng. This, of course, introduces a slight potential for deviation between results. Nevertheless, this is deemed acceptable, given that the randomness does not have a large effect on the #acr("GBP") algorithm, and, secondly, that the seeds used for the tested scenarios are not listed explicitly in @gbpplanner. To make the results more robust, each scenario is executed five times, each with a different seed. Note that in @gbpplanner this approach is only used for the Communication Failure experiment. The chosen seeds are listed in @eq.seeds:
$ #equation.as-set(params.seeds) $<eq.seeds>
They have been chosen such that all seeds have a minimum Hamming Distance of $5$, to ensure a moderate amount of entropy@review-of-methodologies-and-metrics-for-assessing-the-quality-of-random-number-generators. For a full summary of all experimental parameters used in the reproduction experiments see @appendix.reproduction-experiment-parameters. Additionally, environment visualizations are provided in figures #numref(<f.scenarios.circle>), #numref(<f.scenarios.environment-obstacles>), and #numref(<f.scenarios.junction>).
]
)
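The selection criterion for the seeds can be verified mechanically. The snippet below is an illustrative Python sketch, separate from the Rust reimplementation; the seed values in it are placeholders, as the actual seeds are those of @eq.seeds.

```python
def hamming(a: int, b: int) -> int:
    # Number of bit positions in which two integer seeds differ.
    return bin(a ^ b).count("1")

def min_pairwise_hamming(seeds: list[int]) -> int:
    # Smallest Hamming distance over all unordered seed pairs.
    return min(
        hamming(a, b)
        for i, a in enumerate(seeds)
        for b in seeds[i + 1:]
    )

# Placeholder seeds, chosen only to illustrate the check:
seeds = [0b00000, 0b11111, 0b1111100000, 0b111110000000000]
assert min_pairwise_hamming(seeds) >= 5
```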
#show quote: emph
=== #scen.circle.n <s.r.scenarios.circle>
// EXCEPT from their paper
// Robots of various sizes are initialized in a circle formation of radius 50 m, with an initial speed of 15 m/s, towards a stationary horizon state at the opposite side of the circle. The radii of the robots are sampled randomly from rR ∼ U (2, 3) m. We set tK−1 as the ideal time taken for a single robot moving from the initial speed to rest across the circle at constant acceleration. This would correspond to a smooth (zero jerk) trajectory. Each robot has a communication range rC = 50 m representing a partially connected network of robots. In addition, we set σd = 1 m.
This scenario is the basis for all the other scenarios except for the Junction scenario. Robots are placed on a circle of radius $C_("radius") = 50m$ and are tasked with reaching the opposite side of the circle, see @f.scenarios.circle. For example, if a robot is placed at an angle of $theta$, then it has to reach the point at angle $theta + pi$ of the circle, also known as its _antipodal point_. A listing of all parameters is given in @t.scenarios.circle.
#figure(
grid(
columns: (40%, 30% - 0.5em, 30% - 0.5em),
column-gutter: 0.5em,
params.tabular(params.circle.env, title: [Environment]),
params.tabular(params.circle.gbp, title: [GBP Algorithm], extra-rows: 0),
params.tabular(params.circle.factor, title: [Factor Settings]),
),
caption: [#scenario[Circle] scenario parameters. Text in #text(theme.peach, [orange]) marks the differences from the Circle scenario.],
)<t.scenarios.circle>
#figure(
grid(
// columns: (35%, 35%),
columns: 2,
std-block(image("../../figures/img/obstacle-environment-start.png")),
std-block(image("../../figures/img/obstacle-environment-end.png")),
),
// image("../../../figures/out/circle.svg", width: 30%),
// std-block(todo[Add image]),
// std-block(image("../../figures/img/circle-scenario-preview-1.png")),
// std-block(image("../../figures/img/circle-scenario-preview-2.png")),
// std-block(image("../../figures/img/circle-scenario-preview-3.png")),
caption: [
    Screenshot of the #scenario[Circle] environment with $N_R = 20$. The left image shows the robots in their start positions at $t=0$; each robot's factorgraph is extended in front of it, pointing towards the antipodal point across the circle. The right-hand side shows the end of a run, with each robot having moved across the circle. The traversed path of each robot is traced.
],
)<f.scenarios.circle>
The challenge of this scenario is that the optimal paths of all robots intersect at the same point, at the same time. To handle this case, the robots have to collaboratively arrive at a solution that accommodates all of them. To make the layout of each table more compact, the parameters have been grouped into three categories: _Environment_, _GBP Algorithm_ and _Factor Settings_. Robot radii are sampled randomly from a uniform distribution over the interval $[2.0, 3.0] m$. For the time horizon $t_(K-1)$ no numerical value is explicitly stated. Instead, the following sentence is used to explain its value: #quote[towards a stationary horizon state at the opposite side of the circle ... We set#" "$t_(K-1)$ as the ideal time taken for a single robot moving from the initial speed to rest across the circle at constant acceleration. This would correspond to a smooth (zero jerk) trajectory@gbpplanner]. This statement is interpreted as follows: for a robot to move across the circle it has to traverse $2 times C_("radius") = 100m$. With initial speed $|v_0| = 15 m"/"s$ and final speed $|v_("final")| = 0 m"/"s$, the time taken to traverse at constant acceleration can be computed using the kinematic equations for velocity and displacement@essential-university-physics. Doing so, one gets $t_(K-1) = 13 + 1/3 s$. See @appendix.interpretation-of-parameters for the derivation. The value of the _seed_ parameter is not stated either.
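This computation is small enough to verify directly. The following Python snippet is purely illustrative and not part of the reimplementation:

```python
C_radius = 50.0          # m, circle radius
v0, v_final = 15.0, 0.0  # m/s, initial and final speed
d = 2 * C_radius         # m, distance across the circle

# Constant acceleration means the average speed is (v0 + v_final) / 2,
# so the traversal time is t = d / ((v0 + v_final) / 2).
t_horizon = d / ((v0 + v_final) / 2)
assert abs(t_horizon - (13 + 1 / 3)) < 1e-9  # 13.333... s
```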
// Note that the parameters in the #source-link("https://github.com/aalpatya/gbpplanner/blob/fd719ce6b57c443bc0484fa6bb751867ed0c48f4/config/circle_cluttered.json", "config/circle_cluttered.json") and #source-link("https://github.com/aalpatya/gbpplanner/tree/main/config", "config/*.json") does not match any of the experimental scenarios listed in @gbpplanner.
// #note.a[ref to discussion about mismatch between code and paper experiments]
#v(-0.25em)
=== #scen.environment-obstacles.n <s.r.scenarios.environment-obstacles>
#v(-0.25em)
// EXCEPT from their paper
// Environment obstacles: Also shown in figure 6 with dotted lines are the makespans for ORCA and our GBP planner when 6 polygonal obstacles are placed in the middle of the circle. The paths can be seen in figure 3. The results are for one layout of obstacles averaged over 5 seeds. For NR = 25 and 30 some robots using ORCA became deadlocked with the obstacle configuration. Our method performs well with obstacles, producing makespans that are only slightly higher than in those in free space
In this scenario the robots are placed in a circle similar to the Circle scenario, see @s.r.scenarios.circle. The environment is sparsely populated with six small obstacles: two triangles, three squares and one rectangle. The obstacles are placed near the middle of the environment.
The addition of obstacles adds significantly to the difficulty, since the robots must find collision-free paths around each other while also adjusting their routes to avoid the obstacles obstructing the optimal path across the circle's interior. Refer to @t.scenarios.environment-obstacles for the full list of parameters.
#figure(
grid(
columns: (40%, 30% - 0.5em, 30% - 0.5em),
column-gutter: 0.5em,
params.tabular(params.clear-circle.env, previous: params.circle.env, title: [Environment]),
params.tabular(params.clear-circle.gbp, previous: params.circle.gbp, title: [GBP Algorithm], extra-rows: 0),
params.tabular(params.clear-circle.factor, previous: params.circle.factor, title: [Factor Settings]),
),
caption: [#scenario[Environment Obstacles] scenario parameters. Text in #text(theme.peach, [orange]) marks the differences from the Circle scenario.],
)<t.scenarios.environment-obstacles>
#figure(
grid(
columns: 2,
std-block(image("../../figures/img/environment-obstacles-start.png")),
std-block(image("../../figures/img/environment-obstacles-end.png")),
),
// image("../../../figures/out/clear-circle.svg", width: 30%),
// std-block(todo[Add image]),
// std-block(image("../../figures/img/environment-obstacles-scenario-preview.png")),
  caption: [Screenshot of the #scenario[Environment Obstacles] environment with $N_R = 20$. The left image shows the robots in their start positions at $t=0$; each robot's factorgraph is extended in front of it, pointing towards the antipodal point across the circle. The right-hand side shows the end of a run, with each robot having moved across the circle and the traversed path of each robot traced in its color. The environment contains six static obstacles shown in dark gray. The red #sr outline of each obstacle is the boundary of the collider volume used to check for environment collisions.]
)<f.scenarios.environment-obstacles>
=== #scen.varying-network-connectivity.n <s.r.scenarios.varying-network-connectivity>
// EXCEPT from their paper
// Varying network connectivity: Robots within a communication range rC of each other form a partially connected network, and can collaboratively modify their planned paths. We investigate the effect of varying rC for NR = 30 for the 100 m diameter circle formation with obstacles. Table I shows that as rC increases robots take more of their neighbours into account, resulting in greater makespans but small changes in the distances travelled and path smoothness. This highlights the applicability of our method to real networks where sensing and communication range may be limited.
This scenario uses the same environment as the Environment Obstacles scenario, see @s.r.scenarios.environment-obstacles. Now each robot's communication range $r_C$ is varied from $20m$ up to $80m$ in $20m$ intervals, see @t.scenarios.network. The purpose of this scenario is to check what effect the connectivity between factorgraphs has on the robots' capability to plan collaboratively. When $r_C$ is small, it is more likely that robots have few connected neighbours, resulting in multiple smaller disjoint subclusters. As $r_C$ grows larger, the connectivity of the overall network increases, as disjoint subclusters are merged into larger connected ones.
// As $r_C$
// Robots within a communication range $r_C$ of each other form a partially connected network. By varying $r_C$ most
// As more robots are connected with each other the more information about each others uncertainty is available during the variable optimization step. As a result each robot should arrive
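The relationship between $r_C$ and the number of disjoint subclusters can be illustrated with a small union-find sketch. The positions below are hypothetical, and the code is illustrative rather than part of the reimplementation:

```python
import math

def number_of_clusters(positions, r_c):
    # Union-find over robots: two robots are linked when their distance
    # is at most the communication range r_c.
    parent = list(range(len(positions)))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i
    for i in range(len(positions)):
        for j in range(i + 1, len(positions)):
            if math.dist(positions[i], positions[j]) <= r_c:
                parent[find(i)] = find(j)
    return len({find(i) for i in range(len(positions))})

# Three robots on a line, 30 m apart:
points = [(0.0, 0.0), (30.0, 0.0), (60.0, 0.0)]
assert number_of_clusters(points, 20.0) == 3  # all disjoint
assert number_of_clusters(points, 40.0) == 1  # merged into one cluster
```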
#figure(
grid(
columns: (40%, 30% - 0.5em, 30% - 0.5em),
column-gutter: 0.5em,
params.tabular(params.varying-network-connectivity.env, previous: params.circle.env, title: [Environment]),
params.tabular(params.varying-network-connectivity.gbp, previous: params.circle.gbp, title: [GBP Algorithm], extra-rows: 0),
params.tabular(params.varying-network-connectivity.factor, previous: params.circle.factor, title: [Factor Settings]),
),
  caption: [#scenario[Varying Network Connectivity] scenario parameters. Text in #text(theme.peach, [orange]) marks the differences from the Circle scenario.],
)<t.scenarios.network>
=== #scen.junction.n <s.r.scenarios.junction>
Robots working in crowded environments may need to operate at high speeds with high levels of coordination, such as when traversing junctions between shelves in a warehouse. This scenario simulates one such junction with channel widths of 16 meters, and robots moving at $15m "/" s$. A time horizon of $t_(K-1) = 2s$ forces the robots to have a short prediction of future states. In addition, $sigma_d = 0.5m$.
#figure(
grid(
columns: (40%, 30% - 0.5em, 30% - 0.5em),
column-gutter: 0.5em,
params.tabular(params.junction.env, previous: params.circle.env, title: [Environment]),
params.tabular(params.junction.gbp, previous: params.circle.gbp, title: [GBP Algorithm], extra-rows: 0),
params.tabular(params.junction.factor, previous: params.circle.factor, title: [Factor Settings]),
),
caption: [Junction scenario parameters. $C_("radius")$ is not relevant for this scenario, as robots are not spawned in a circle. Likewise the number of robots $N_R$ is not fixed, but instead given as $Q_("in") times 500 "/" Delta_t = Q_("in") times 50 s$. Text in #text(theme.peach, [orange]) marks the differences from the Circle scenario.],
)<t.scenarios.p.junction>
A desirable trait of multirobot systems is to maintain a high flow rate without causing blockages at junctions. To test this, the rate $Q_("in")$ at which robots enter the central section of the junction is adjusted, and the rate $Q_("out")$ at which they exit is measured. To measure the flow, the central section is observed over 500 timesteps, to represent steady-state behavior. Robots must exit the junction in the same direction they entered, without collisions. $Q_("in")$ is adjusted over the list of values in @eq.qin. For all parameters, refer to @t.scenarios.p.junction; a screenshot of the scenario at both high and low values of $Q_"in"$ is shown in @f.scenarios.junction.
$ Q_("in") in {0.5, 1, ..., 7} $<eq.qin>
// - $Q_("in")$ vary $"robots" "/" s$
// - $Q_("out")$ measure $"robots" "/" s$
// - $Q_("in")$ should equal $Q_("out")$
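A minimal sketch of how $Q_("out")$ can be measured from exit events is shown below. The timestep length of $0.1s$ used here is an assumption, and the function is illustrative rather than the actual measurement code:

```python
def measured_q_out(exit_steps, window_steps=500, dt=0.1):
    # Exit rate in robots/s: robots whose exit timestep falls inside the
    # observation window, divided by the window duration in seconds.
    # dt = 0.1 s is an assumed timestep length, not taken from the paper.
    exits = sum(1 for step in exit_steps if step < window_steps)
    return exits / (window_steps * dt)

# 100 robots exiting at a steady pace over the 50 s window -> 2 robots/s:
assert measured_q_out(range(0, 500, 5)) == 2.0
```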
#figure(
grid(
columns: 2,
std-block(scale(x: 95%, y: 95%, image("../../figures/img/junction-experiment-preview-qin-2.png"))),
std-block(image("../../figures/img/junction-experiment-preview.png")),
),
// image("../../../figures/out/junction.svg", width: 30%),
// std-block(todo[Add image]),
// std-block(image("../../figures/img/junction-experiment-preview.png")),
caption: [
    Screenshot of the #scenario[Junction] scenario with $Q_("in") = 2 "robots/s"$ to the left and $Q_("in") = 6 "robots/s"$ to the right. The green #sg edges represent the joint factorgraph.
],
)<f.scenarios.junction>
=== #scen.communications-failure.n <s.r.scenarios.communications-failure>
// EXCEPT from their paper
// Our GBP planner relies on per-timestep peer-to-peer communication between robots. It is assumed that each robot follows a protocol similar to [11]; it always broadcasts its state information. We consider a communications failure scenario where a robot is not able to receive messages from robots it is connected to. We would expect more cautious behaviour when planning a trajectory. We simulate a communication failure fraction γ: at each timestep the robot cannot receive any messages from a randomly sampled proportion γ of its connected neighbours. We repeat the circle experiment with 21 robots at two different initial speeds of 10 m/s and 15 m/s, measuring the makespan. The reported result is an average over 5 different random seeds. To be fair, at any timestep for any robot, the failed communications are exactly the same given a fixed seed for both initial velocities considered.
This scenario uses the same environment as the #scenario[Environment Obstacles] scenario, see @s.r.scenarios.environment-obstacles. The purpose of this scenario is to test the planning algorithm's performance under sub-optimal external communication conditions. In real-world situations, this could be caused by phenomena such as packet loss due to congestion in the radio frequency band, or high interference from other electrical equipment transmitting messages. Under these conditions, the desired behavior for a planning algorithm to exhibit is increased caution when determining a trajectory. To simulate this, the same non-zero failure probability $gamma$ is assigned to each robot.
#figure(
grid(
columns: (40%, 30% - 0.5em, 30% - 0.5em),
column-gutter: 0.5em,
params.tabular(params.communications-failure.env, previous: params.circle.env, title: [Environment]),
params.tabular(params.communications-failure.gbp, previous: params.circle.gbp, title: [GBP Algorithm], extra-rows: 0),
params.tabular(params.communications-failure.factor, previous: params.circle.factor, title: [Factor Settings]),
)
, caption: [#scenario[Communications Failure] scenario parameters. Text in #text(theme.peach, [orange]) marks the differences from the Circle scenario.],
)<t.scenarios.communications-failure>
At every simulated timestep, a robot's ability to communicate with other factorgraphs through any established interrobot factors is toggled with probability $gamma$, leading to one of the four possible states between a robot and one of its paired neighbours, listed at the end of @s.m.algorithm. The scenario is tested with 21 robots at two different initial speeds of 10 m/s and 15 m/s, see @t.scenarios.communications-failure.
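The per-timestep failure draw can be sketched as follows. The snippet is illustrative Python, not the Rust reimplementation; fixing the seed reproduces the exact same failure pattern, which is what makes the comparison between the two initial speeds fair:

```python
import random

def failed_links(neighbour_ids, gamma, rng):
    # Per-timestep draw: each established interrobot connection
    # independently fails with probability gamma this timestep.
    return {nid for nid in neighbour_ids if rng.random() < gamma}

# A fixed seed yields an identical failure pattern across runs:
run_a = failed_links(range(20), 0.4, random.Random(7))
run_b = failed_links(range(20), 0.4, random.Random(7))
assert run_a == run_b
```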
=== #scen.solo-gp.n <s.r.scenarios.solor-gp>
#let body = [
In this scenario, a complex maze-like environment has been constructed#footnote[Found in #gbp-rs(content: "AU-Master-Thesis/gbp-rs"), in file #source-link("https://github.com/AU-Master-Thesis/gbp-rs/blob/c17370455af38a6cab0eb5acea1a576247a0e732/config/simulations/Complex/environment.yaml", "config/simulations/Complex/environment.yaml")]<fn.solo-gp.environment>, similar to the one shown in @f.m.maze-env. A demonstration of this environment, with and without global planning enabled is shown in an accompanying video#footnote[Video found on #link("https://youtu.be/Uzz57A4Tk5E", "YouTube: Master Thesis - Path Tracking vs Waypoint Tracking vs Default")]. A single robot is spawned in the bottom-left corner of the environment, see @f.scenarios.solo-gp. The robot is tasked with reaching the top-right corner of the environment. The environment is designed to be challenging, with narrow corridors, tight corners, and a plethora of possible paths. The accompanying `formation.yaml`#footnote[Found in #gbp-rs(content: "AU-Master-Thesis/gbp-rs"), in file #source-link("https://github.com/AU-Master-Thesis/gbp-rs/blob/97e40fdba0005fd91f903af04df74c31cbc1c05c/config/simulations/Solo%20GP/formation.yaml", "config/simulations/Solo GP/formation.yaml")]<fn.solo-gp.formation> file specifies the planning strategy to be that of
]
#let rrt-params = (
max-iterations: $5000000$,
step-size: $5.0$,
collision-radius: $3.0$,
neighbourhood-radius: $8.0$
)
#let tab = [
#figure(
params.tabular(rrt-params, title: [RRT\*]),
caption: [Parameters for the #acr("RRT*") pathfinding algorithm.],
supplement: [Table],
kind: table
)<t.scenarios.solo-gp.rrt-params>
]
#grid(
columns: (1fr, 40mm),
gutter: 1em,
body,
tab
)
#v(-0.5em)
`rrt-star` instead of `only-local`, which has been the case for all of the above scenarios. The robot is equipped with a #acr("RRT*") pathfinding component, which, in @f.scenarios.solo-gp, has just finished computing the path it is expected to take. The path is shown as a faint green line across the corridors of the map. Parameters are listed in @t.scenarios.solo-gp, with #acr("RRT*") specific parameters in @t.scenarios.solo-gp.rrt-params.
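To give an intuition for the `step-size` parameter, the following is a minimal sketch of the #acr("RRT*") steering step. It is illustrative only, and not the pathfinding component used in the reimplementation:

```python
import math

def steer(nearest, sample, step_size=5.0):
    # Move from the nearest tree node towards the sampled point,
    # clamped to the configured step size (5.0 in the table above).
    dx, dy = sample[0] - nearest[0], sample[1] - nearest[1]
    dist = math.hypot(dx, dy)
    if dist <= step_size:
        return sample
    scale = step_size / dist
    return (nearest[0] + scale * dx, nearest[1] + scale * dy)

assert steer((0.0, 0.0), (10.0, 0.0)) == (5.0, 0.0)  # clamped
assert steer((0.0, 0.0), (3.0, 0.0)) == (3.0, 0.0)   # within reach
```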
#figure(
std-block(
width: 70%,
block(
clip: true,
pad(
x: -8mm,
y: -0.2mm,
image("../../figures/img/solo-gp.png")
)
)
),
caption: [
Screenshot of the complex maze-like environment. The robot#sg has just spawned in the bottom-left corner, and its #acr("RRT*") pathfinding has just finished. The expected path is shown as a faint green line #inline-line(stroke: theme.green.lighten(50%) + 2pt) across the corridors of the map. Here the tracking factors have been enabled.
]
)<f.scenarios.solo-gp>
#figure(
grid(
columns: (40%, 30% - 0.5em, 30% - 0.5em),
column-gutter: 0.5em,
params.tabular(params.solo-gp.env, previous: params.circle.env, title: [Environment]),
params.tabular(params.solo-gp.gbp, previous: params.circle.gbp, title: [GBP Algorithm], extra-rows: 0),
params.tabular(params.solo-gp.factor, previous: params.circle.factor, title: [Factor Settings]),
),
supplement: [Table],
kind: table,
caption: [#scenario[Solo GP] scenario parameters. Text in #text(theme.peach, [orange]) marks the differences from the Circle scenario.],
)<t.scenarios.solo-gp>
=== #scen.collaborative-gp.n <s.r.scenarios.collaborative-gp>
Here, the same complex environment as in the #scen.solo-gp.n scenario is used. However, instead of a single robot, multiple robots are spawned in different locations across the environment. That means the `formation.yaml`@fn.solo-gp.formation file is significantly more detailed. Specifically, there are four spawning locations, one in each corner, where each robot is tasked with reaching the opposite corner. Each robot is, once again, equipped with a #acr("RRT*") pathfinding component, which computes a collision-free path with respect to the static environment. @f.scenarios.collaborative-gp shows a screenshot of the simulation in progress, where multiple robots are using the waypoint tracking approach, as described in @s.m.planning.waypoint-tracking. Parameters are listed in @t.scenarios.collaborative-gp, with the same #acr("RRT*")-specific parameters as for Solo Global Planning in @t.scenarios.solo-gp.rrt-params.
#figure(
// std-block(todo[image of many robots driving in the maze]),
std-block(
width: 70%,
block(
clip: true,
pad(
x: -8mm,
y: -0.2mm,
image("../../figures/img/collaborative-gp.png")
)
)
),
  caption: [Screenshot of the maze-like environment, with multiple robots spawning in nine different formations, sharing three spawning points and three exit points; see the accompanying `formation.yaml` file#footnote[Found in #gbp-rs(content: "AU-Master-Thesis/gbp-rs"), in file #source-link("https://github.com/AU-Master-Thesis/gbp-rs/blob/97e40fdba0005fd91f903af04df74c31cbc1c05c/config/simulations/Collaborative%20GP%20Low%20Qin/formation.yaml", "config/simulations/Collaborative GP/formation.yaml")]. Tracking factors are not enabled here.]
)<f.scenarios.collaborative-gp>
#figure(
grid(
columns: (40%, 30% - 0.5em, 30% - 0.5em),
column-gutter: 0.5em,
params.tabular(params.collaborative-gp.env, previous: params.circle.env, title: [Environment]),
params.tabular(params.collaborative-gp.gbp, previous: params.circle.gbp, title: [GBP Algorithm], extra-rows: 0),
params.tabular(params.collaborative-gp.factor, previous: params.circle.factor, title: [Factor Settings]),
)
, caption: [#scenario[Collaborative GP] scenario parameters. Text in #text(theme.peach, [orange]) marks the differences from the Circle scenario.],
)<t.scenarios.collaborative-gp>
=== #scen.iteration-amount.n <s.r.scenarios.iteration-amount>
This scenario explores the effect of varying $M_I$ and $M_E$. Both $M_I$ and $M_E$ are varied over the second to tenth numbers in the Fibonacci sequence, see @eq.fibonacci, with the rest of the parameters listed in @t.scenarios.iteration-amount.
$ M_I, M_E in {"fib"(i) | i in {2, 3, ..., 10}} $<eq.fibonacci>
This sequence of values is selected to test the algorithm with initially few iterations, gradually increasing the number of iterations with larger differences between each step.
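For reference, these nine values can be generated as follows (illustrative Python):

```python
def fib(n: int) -> int:
    a, b = 0, 1  # fib(0) = 0, fib(1) = 1
    for _ in range(n):
        a, b = b, a + b
    return a

iteration_values = [fib(i) for i in range(2, 11)]
assert iteration_values == [1, 2, 3, 5, 8, 13, 21, 34, 55]
```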
#v(0.5em)
#par(first-line-indent: 0pt)[
  *Expectation*: Performance should converge after 8-10 iterations of both $M_I$ and $M_E$. Performance is expected to be poor with very few iterations for either parameter. Consequently, performance will be suboptimal if either $M_I$ or $M_E$ is close to $1$ while the other is higher.
]
#figure(
grid(
columns: (40%, 30% - 0.5em, 30% - 0.5em),
column-gutter: 0.5em,
params.tabular(params.iteration-amount.env, previous: params.circle.env, title: [Environment]),
params.tabular(params.iteration-amount.gbp, previous: params.circle.gbp, title: [GBP Algorithm], extra-rows: 0),
params.tabular(params.iteration-amount.factor, previous: params.circle.factor, title: [Factor Settings]),
),
caption: [#scenario[Iteration Amount] scenario parameters. Text in #text(theme.peach, [orange]) marks the differences from the Circle scenario.],
)<t.scenarios.iteration-amount>
#pagebreak(weak: true)
=== #scen.iteration-schedules.n <s.r.scenarios.iteration-schedules>
The parameters for this scenario are listed in @t.scenarios.iteration-schedules. The number of robots spawning in the circular formation is kept constant at $N_R = 30$ for all runs. Each of the five schedules presented in @s.iteration-schedules is tested with the internal iteration count $M_I = 50$, and the external iteration count $M_E$ varied over ${5, 10, 15, 20, 25}$. $M_E$ is varied instead of $M_I$, as it represents the external communication that would be subject to jitter and latency in a real scenario.
#figure(
grid(
columns: (40%, 30% - 0.5em, 30% - 0.5em),
column-gutter: 0.5em,
params.tabular(params.iteration-schedules.env, previous: params.circle.env, title: [Environment]),
params.tabular(params.iteration-schedules.gbp, previous: params.circle.gbp, title: [GBP Algorithm], extra-rows: 0),
params.tabular(params.iteration-schedules.factor, previous: params.circle.factor, title: [Factor Settings]),
)
, caption: [#scenario[Iteration Schedules] scenario parameters. Text in #text(theme.peach, [orange]) marks the differences from the Circle scenario.],
)<t.scenarios.iteration-schedules>