repo | file | language | license | content
---|---|---|---|---
https://github.com/SWEatUNIPD/docs | https://raw.githubusercontent.com/SWEatUNIPD/docs/main/20241016%20-%20Prima%20Riunione.typ | typst | #set text(lang: "en")
#set page(margin: (x: 2.5cm, y: 2.5cm))
#set heading(numbering: "1.")
#align(center + top, [
#figure(
image("img/logo.svg", width: 50%),
)
#v(15pt);
#text(size: 23pt,weight: "bold", [Minutes: First Meeting]) \
#text(size: 15pt, [16 October 2024])
#v(10em)
#grid(
columns: (1fr, 1fr),
align: (right, left),
grid.vline(x: 1),
inset: (x:5pt,y:4pt),
[Use],[Internal],
[Recipients], [Prof. <NAME>],
[], [Prof. <NAME>],
grid.cell(colspan: 2, []),
[Responsible], [<NAME>],
[Writers], [<NAME>],
[Verifiers], [<NAME>],
[], [<NAME>],
[], [<NAME>],
[], [<NAME>],
[], [<NAME>]
)
])
#align(center+bottom, [
#figure(
image("img/unipd.png", width: 15%),
)
Software Engineering Course\
Academic Year 2024/25
])
#pagebreak(weak: true)
#show outline.entry.where(
level: 1
): it => {
v(12pt, weak: true)
strong(it)
}
#outline(indent: auto)
#pagebreak()
= General Information
== Meeting Place and Date
- *Place*: Discord call
- *Date*: 16-10-2024
- *Time*: 16:30
- *Duration*: one and a half hours
== Participants
- <NAME>
- <NAME>
- <NAME>
- <NAME>
- <NAME>
- <NAME>
- <NAME>
= Agenda
== Team Name
After a series of names proposed by the various members, the group agreed to use the name #strong[SWE@].
== Team Logo
After several proposed logos, the group decided to use the following one:
#align(center, [
#figure(image("img/logo.svg", width: 40%))
])
== Tools
The group reached a final decision to use the following tools:
- #strong[Discord] and #strong[Telegram] for internal communication
- #strong[Gmail] as the email provider for external communication, with the following address: #link("mailto:<EMAIL>", text(size: 10pt, [<EMAIL>]))
- #strong[Git] for version control, in particular through the GitHub platform. An organization was also created to collect all repositories related to the project: #link("https://github.com/SWEat-UNIPD")
- #strong[Typst] for writing the project documentation
== Choice of Tenders and Emails to the Proposing Companies
After analyzing the tender specifications presented by the various companies, with regard to their feasibility and the skills of the individual group members, the preferences fell on tenders C3 (VarGroup S.p.A.) and C4 (SyncLab S.r.L.).\
Emails were also sent to the respective companies to arrange an introductory meeting and to resolve some doubts and/or questions about the specifications of the proposed projects.
|
|
https://github.com/dashuai009/dashuai009.github.io | https://raw.githubusercontent.com/dashuai009/dashuai009.github.io/main/src/content/blog/009.typ | typst | #let date = datetime(
year: 2024,
month: 6,
day: 23,
)
#metadata((
"title": "clash 配置chatgpt的rule",
"author": "dashuai009",
description: "这是一种将C++更加现代的代码组织方式。 模块是一组源代码文件,独立于导入它们的翻译单元进行编译。",
pubDate: date.display(),
subtitle: [clash rule],
))<frontmatter>
#import "../__template/style.typ": conf
#show: conf
#date.display();
#outline()
= Problem
openai.com runs a lot of checks, and Clash's rule mode usually cannot get through; I had to switch to global mode to access it. After a long search: Clash lets you configure a Parser that adds rules on top of the config file supplied by your subscription provider. Custom rules can be kept separate, and I want all GPT-related traffic to go through US nodes.
= Using a Parser
A Parser applies extra rules on top of the provider's config file, so the additions can still be reused even when the provider's node list changes.
```
clash > Profiles > right-click the profile > Parsers > Edit parsers
```
Add the following to the opened file.
```text
parsers: # array
- url: YOUR_URL
yaml:
prepend-proxy-groups:
- name: 🚀 OpenAI
type: select
proxies:
- 美国 A
- 美国 B
- 美国 C
- 美国 D [0.5x]
- 美国 E [0.5x]
prepend-rules:
- DOMAIN,browser-intake-datadoghq.com,🚀 OpenAI
- DOMAIN,static.cloudflareinsights.com,🚀 OpenAI
- DOMAIN-SUFFIX,ai.com,🚀 OpenAI
- DOMAIN-SUFFIX,algolia.net,🚀 OpenAI
- DOMAIN-SUFFIX,api.statsig.com,🚀 OpenAI
- DOMAIN-SUFFIX,auth0.com,🚀 OpenAI
- DOMAIN-SUFFIX,chatgpt.com,🚀 OpenAI
- DOMAIN-SUFFIX,chatgpt.livekit.cloud,🚀 OpenAI
- DOMAIN-SUFFIX,client-api.arkoselabs.com,🚀 OpenAI
- DOMAIN-SUFFIX,events.statsigapi.net,🚀 OpenAI
- DOMAIN-SUFFIX,featuregates.org,🚀 OpenAI
- DOMAIN-SUFFIX,host.livekit.cloud,🚀 OpenAI
- DOMAIN-SUFFIX,identrust.com,🚀 OpenAI
- DOMAIN-SUFFIX,intercom.io,🚀 OpenAI
- DOMAIN-SUFFIX,intercomcdn.com,🚀 OpenAI
- DOMAIN-SUFFIX,launchdarkly.com,🚀 OpenAI
- DOMAIN-SUFFIX,oaistatic.com,🚀 OpenAI
- DOMAIN-SUFFIX,oaiusercontent.com,🚀 OpenAI
- DOMAIN-SUFFIX,observeit.net,🚀 OpenAI
- DOMAIN-SUFFIX,segment.io,🚀 OpenAI
- DOMAIN-SUFFIX,sentry.io,🚀 OpenAI
- DOMAIN-SUFFIX,stripe.com,🚀 OpenAI
- DOMAIN-SUFFIX,turn.livekit.cloud,🚀 OpenAI
- DOMAIN-SUFFIX,openai.com,🚀 OpenAI
```
Notes:
1. YOUR_URL after `url` is the subscription link provided by your provider.
2. prepend-proxy-groups merges the groups defined below in front of the existing proxy-groups. A new "🚀 OpenAI" group is created, and it will then show up under Proxies. Copy in the list of proxy names you want.
3. The prepend-rules configuration comes from #link("https://github.com/blackmatrix7/ios_rule_script/blob/master/rule/Clash/OpenAI/OpenAI.yaml")[here]
4. Note: my config differs slightly from the one linked above. I removed the last three IP rules and the DOMAIN-KEYWORD rules, which raised an error I no longer remember, and added the line `- DOMAIN-SUFFIX,openai.com,🚀 OpenAI` so that the `openai.com` domain also goes through US nodes.
5. You may also need to restart Clash.
= Parser Configuration Reference
#table(
columns: (1fr, auto),
inset: 10pt,
align: horizon,
table.header("Key", "Effect"),
"append-rules", "array merged after the rules array of the original config",
"prepend-rules", "array merged before the rules array of the original config",
"append-proxies", "array merged after the proxies array of the original config",
"prepend-proxies", "array merged before the proxies array of the original config",
"append-proxy-groups", "array merged after the proxy-groups array of the original config",
"prepend-proxy-groups", "array merged before the proxy-groups array of the original config",
"mix-proxy-providers", "object merged into proxy-providers of the original config",
"mix-rule-providers", "object merged into rule-providers of the original config",
"mix-object", "object merged into the top level of the original config",
"commands", "run simple commands to manipulate the config file after the operations above",
)
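As a sketch of how the remaining keys compose (the rule and value below are hypothetical; only the `url` placeholder matches the example above), an `append-` entry merges after the provider's own list, so the provider's rules keep priority:

```yaml
parsers: # array
  - url: YOUR_URL
    yaml:
      # merged after the provider's rules, so it acts as a fallback
      append-rules:
        - MATCH,DIRECT # hypothetical final fallback rule
      # merged into the top level of the config
      mix-object:
        log-level: info
```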
|
|
https://github.com/Myriad-Dreamin/typst.ts | https://raw.githubusercontent.com/Myriad-Dreamin/typst.ts/main/fuzzers/corpora/text/quotes_04.typ | typst | Apache License 2.0 |
#import "/contrib/templates/std-tests/preset.typ": *
#show: test-page
// Test turning smart quotes off.
He's told some books contain questionable "example text".
#set smartquote(enabled: false)
He's told some books contain questionable "example text".
|
https://github.com/Myriad-Dreamin/typst.ts | https://raw.githubusercontent.com/Myriad-Dreamin/typst.ts/main/fuzzers/corpora/math/alignment_01.typ | typst | Apache License 2.0 |
#import "/contrib/templates/std-tests/preset.typ": *
#show: test-page
// Test post-fix alignment.
$
& "right" \
"a very long line" \
"left" \
$
|
https://github.com/barddust/Kuafu | https://raw.githubusercontent.com/barddust/Kuafu/main/src/BeforeMathematics/set.typ | typst | #import "/mathenv.typ": *
= Sets
We have already used some set concepts and operations in previous chapters, such as membership and the notation for common sets of numbers. In this chapter we take one more step into set theory, but only a small one. Since set theory is the foundation of mathematics, you will find it far larger than you expect once you start digging deeper!
== Basic Concepts of Set
#definition[
A *Set* is a collection of objects without order. These objects are called the *elements* or members of the set. For a set $A$, if an object $x$ is inside $A$, we say that $x$ belongs to $A$, denoted as $x in A$. Otherwise, $x$ does not belong to $A$, denoted as $x in.not A$.
Notation for a set: wrap all elements inside curly brackets ${...}$. For a finite number of elements, list them inside the brackets, separated by commas. If $A$ contains $2,3,4$, then $A = {2,3,4}$.
Sometimes we want to collect all elements sharing a certain property. We divide the brackets into two parts by a vertical line $|$ or a colon $:$. The former part is the form of the elements; the latter is the property the elements are expected to satisfy. If $B$ is the set of all points lying on the unit circle, then $B = { (x,y) | x,y in RR and x^2 + y^2 = 1}$, i.e., the set $B$ consists of the coordinate pairs over $RR$ that are solutions of the equation $x^2 + y^2 = 1$.
]
#definition[
For two sets $A,B$, we say they are equal, iff for any object, it belongs to $A$ if and only if it belongs to $B$, i.e.,
$ forall A,B (A = B <=> forall e (e in A <=> e in B)) $
]
#definition(name: "empty set")[
The empty set is a set containing no elements, denoted as $nothing$, i.e.,
$ forall x(x in.not nothing) $
]
#proposition[
There is only one empty set.
]
#proof[
Suppose there are two empty sets, say $nothing_1, nothing_2$; we need to show that for any object $x$ we have $x in nothing_1 <=> x in nothing_2$.
Consider the contrary: assume there exists an object $x_0$ belonging to $nothing_1$ but not to $nothing_2$ (or vice versa). No such object can exist, since no element belongs to an empty set. The assumption therefore fails, so the equivalence holds vacuously and the proposition is logically correct.
]
#definition(name: "subset")[
For any two sets $A, B$, we call $A$ as a *subset* of $B$, iff all elements in $A$ also belong to $B$, i.e.,
$ forall x (x in A => x in B) $
We say that $A$ is a *proper subset* of $B$, iff $A$ is a subset of $B$ and $A != B$. Notation: $A sub.neq B$.
]
#example[
Let $A = {1,2,3,4,5}$; then ${1,2}, {2,4,5}, {1,2,3,4,5}$ are all subsets of $A$, and only the first two are proper subsets of $A$.
]
#proposition[
For any set $A$,
+ $ nothing sube A$;
+ $ nothing sube nothing$.
]
#proof[
As before, assume the contrary; but there are no elements inside an empty set, so the assumption fails and the original proposition holds vacuously.
]
== Operations of Set
#definition(name: "union")[
For any two sets $A,B$, a set is called the *union* of $A$ and $B$, denoted as $A union B$, if every element of this set belongs to $A$, or to $B$, or to both.
$
A union B := {x | x in A or x in B}
$
]
#example[
Let $A = {1,2,3,4,5}$, $B = {2,3,5,7,11,13}$, then $A union B = {1,2,3,4,5,7,11,13}$.
]
#definition(name: "intersection")[
For any two sets $A,B$, a set is called the *intersection* of $A$ and $B$, denoted as $A sect B$, if every element of this set belongs to both $A$ and $B$.
$
A sect B := {x | x in A and x in B}
$
]
#example[
Let $A = {1,2,3,4,5}$, $B = {2,3,5,7,11,13}$, then $A sect B = {2,3,5}$.
]
#definition(name: "complement")[
Let $B sube A$; we call what remains of $A$ after removing $B$ the *complement* of $B$ in $A$, denoted as $complement_(A) B$.
$
complement_(A) B := {x | x in A and x in.not B}
$
]
In fact, it is unnecessary to require $B sube A$ in order to ask what exactly the difference between two sets is. There is a more general operation.
#definition(name: "difference set")[
For any $A, B$, a set is said to be the *difference set* of $A$ and $B$ if all its elements belong to $A$ but not to $B$. The notation is $A\\B$, i.e.,
$
A\\B := {x | x in A and x in.not B}
$
$A\\B$ is also called the relative complement of $B$ with respect to $A$.
]
#remark[
Suppose $B sube.not A$; there are two cases. If the intersection of $A$ and $B$ is nonempty, then $A\\B$ is exactly $complement_(A) (A sect B)$; otherwise $A\\B = A$.
]
#proof[
(1) The intersection of $A$ and $B$ is nonempty, i.e., $A sect B != nothing$. We need to prove that $A\\B = complement_(A)(A sect B)$.
First we prove the forward inclusion. For any object $x$, $x in A\\B => x in A and x in.not B$.
Since $x in.not B$ is true, its disjunction with any other proposition is also true; in particular $x in.not A or x in.not B$ is true, and hence $x in.not A sect B$.
So $x$ belongs to $A$ but not to $A sect B$, and $A sect B sube A$. By definition, $x in complement_(A) (A sect B)$.
Then the reverse inclusion. For any object $x$, $x in complement_(A) (A sect B) => x in A and x in.not A sect B$; the latter implies that $x in.not A or x in.not B$, and since $x in A$, we get $x in.not B$.
With $x in A$ and $x in.not B$, by definition $x in A\\B$.
]
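The identity in the remark can also be checked concretely on finite sets. The following sketch (not part of the original notes) uses the example sets $A = {1,2,3,4,5}$ and $B = {2,3,5,7,11,13}$ from above:

```rust
use std::collections::BTreeSet;

fn main() {
    // A and B from the earlier examples.
    let a: BTreeSet<i32> = [1, 2, 3, 4, 5].into_iter().collect();
    let b: BTreeSet<i32> = [2, 3, 5, 7, 11, 13].into_iter().collect();

    // A \ B = {x | x in A and x not in B}
    let diff: BTreeSet<i32> = a.difference(&b).copied().collect();

    // Complement of (A ∩ B) relative to A.
    let inter: BTreeSet<i32> = a.intersection(&b).copied().collect();
    let compl: BTreeSet<i32> = a.difference(&inter).copied().collect();

    // The remark: A \ B = ∁_A(A ∩ B)
    assert_eq!(diff, compl);
    println!("{:?}", diff); // {1, 4}
}
```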
|
|
https://github.com/k0tran/cont_labs | https://raw.githubusercontent.com/k0tran/cont_labs/master/reports/lab3.typ | typst | #import "template.typ": *
#show: lab.with(n: 3)
= Installing Docker
To install Docker Engine, the script from the previous labs (run during the provision stage) was used:
#pic(img: "lab3/docker_script.png")[Script for installing Docker Engine]
Verifying that the installation succeeded:
#pic(img: "lab3/docker_hw.png")[Running the hello-world container]
#pagebreak()
= Images
#pic(img: "lab3/docker_images.png")[List of Docker images]
#pic(img: "lab3/docker_run_ubuntu.png")[Pulling the Ubuntu image]
To delete an image, you must first delete its container (the figure shows an attempt to delete the image after stopping, but not deleting, the container):
#pic(img: "lab3/docker_rmi.png")[Deleting the Ubuntu image]
#pagebreak()
= Running a Container
#pic(img: "lab3/ubuntu_bash.png")[Running `/bin/bash` in interactive mode]
#pic(img: "lab3/docker_ps.png")[Lists of running containers and of all containers]
#pic(img: "lab3/docker_history.png")[History of the ubuntu container]
#pagebreak()
= Managing Containers
#pic(img: "lab3/docker_start.png")[Starting the container]
#pic(img: "lab3/docker_top.png")[Viewing the processes running inside the container]
The command `sudo docker stats 02c5d1db2835` prints the following information about the container:
#pic(img: "lab3/docker_stats.png")[Resources consumed by the container]
#pic(img: "lab3/docker_attach.png")[Attaching to the container]
#pic(img: "lab3/docker_pause.png")[Pausing the container]
#pic(img: "lab3/docker_kill.png")[Killing the container's processes]
#pic(img: "lab3/docker_stop.png")[Stopping and removing the container]
#pagebreak()
= Containerizing Applications
#pic(img: "lab3/cont_git_clone.png")[Cloning the repository]
#pic(img: "lab3/cont_app_dockerfile.png")[Creating a Dockerfile inside `getting-started/app`]
The file has the following contents:
```Dockerfile
FROM node:18-alpine
WORKDIR /app
COPY . .
RUN yarn install --production
CMD ["node", "src/index.js"]
EXPOSE 3000
```
#pic(img: "lab3/cont_build.png")[Сборка контейнера]
#pic(img: "lab3/cont_run.png")[Запуск контейнера]
После добавление правила на проброс порта и перезапуска виртуальной машины (не забываем снова запустить контейнер) на `http://localhost:3000` высвечивается следующая страница:
#pic(img: "lab3/cont_site.png")[Проверка запущеного контейнера]
Остановить контейнер можно следующей командой:
#pic(img: "lab3/cont_stop.png")[Остановка контейнера]
#pagebreak()
= Multi-Container Applications
#pic(img: "lab3/mcont_net_mysql.png")[Creating a network and bringing up mysql]
#pic(img: "lab3/mcont_mysql.png")[Checking mysql]
#pic(img: "lab3/mcont_netshoot.png")[Running the `nicolaka/netshoot` container]
#pic(img: "lab3/mcont_dig.png")[Output of the `dig mysql` command]
#pic(img: "lab3/mcont_stop.png")[Shutting down the running containers]
#pagebreak()
= Docker Compose
#pic(img: "lab3/compose_version.png")[Docker Compose version]
#pic(img: "lab3/compose_file.png")[The `docker-compose.yml` file]
#pic(img: "lab3/compose_up.png")[Starting the containers with Docker Compose]
|
|
https://github.com/chendaohan/bevy_tutorials_typ | https://raw.githubusercontent.com/chendaohan/bevy_tutorials_typ/main/06_entities_components/entities_components.typ | typst | #set page(fill: rgb(35, 35, 38, 255), height: auto, paper: "a3")
#set text(fill: color.hsv(0deg, 0%, 90%, 100%), size: 22pt, font: "Microsoft YaHei")
#set raw(theme: "themes/Material-Theme.tmTheme")
= 1. Entities
Conceptually, an entity represents a group of values made up of different components.
Technically, an entity is just a simple integer ID (think of a row number in a table) that can be used to look up the associated data values.
In Bevy, ```Rust Entity``` is that ID. An ```Rust Entity``` consists of an ```Rust index``` and a ```Rust generation```; after an entity is despawned, its ```Rust Entity``` slot can be reused.
```Rust
struct Entity {
index: u32, // the slot index
generation: NonZeroU32, // how many times the slot has been used
}
```
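The index/generation reuse scheme can be sketched in plain Rust. This is a simplified illustrative model, not Bevy's actual allocator:

```Rust
// A toy entity allocator illustrating index + generation reuse.
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
struct Entity { index: u32, generation: u32 }

struct Allocator { generations: Vec<u32>, free: Vec<u32> }

impl Allocator {
    fn new() -> Self { Self { generations: Vec::new(), free: Vec::new() } }

    fn spawn(&mut self) -> Entity {
        if let Some(index) = self.free.pop() {
            // Reuse a freed slot; the bumped generation distinguishes
            // the new entity from stale handles to the same index.
            Entity { index, generation: self.generations[index as usize] }
        } else {
            self.generations.push(0);
            Entity { index: self.generations.len() as u32 - 1, generation: 0 }
        }
    }

    fn despawn(&mut self, e: Entity) {
        self.generations[e.index as usize] += 1; // invalidate old handles
        self.free.push(e.index);
    }

    fn is_alive(&self, e: Entity) -> bool {
        self.generations[e.index as usize] == e.generation
            && !self.free.contains(&e.index)
    }
}

fn main() {
    let mut alloc = Allocator::new();
    let a = alloc.spawn();          // index 0, generation 0
    alloc.despawn(a);
    let b = alloc.spawn();          // same index 0, but generation 1
    assert_eq!(a.index, b.index);
    assert_ne!(a.generation, b.generation);
    assert!(!alloc.is_alive(a) && alloc.is_alive(b));
}
```

Despawning bumps the slot's generation, so a stale ```Rust Entity``` held from before the despawn no longer matches and cannot accidentally address the reused slot.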
You can use ```Rust Commands``` or ```Rust &mut World``` to spawn and despawn entities.
```Rust
fn spawn_player(mut commands: Commands) {
let entity = commands.spawn((
Player,
Level(5),
)).id();
commands.entity(entity).despawn();
}
fn spawn_enemy(world: &mut World) {
let entity = world.spawn((
Enemy,
Level(7),
)).id();
world.despawn(entity);
}
```
A feature often comes with a set of required components that can only be added together, not partially removed; in that case you should use a Bundle to tie the components together. Bevy ships with many built-in Bundles providing such features, for example ```Rust SpriteBundle / MaterialMesh2dBundle / MaterialMeshBundle / PbrBundle / Camera2dBundle / Camera3dBundle / DirectionalLightBundle``` and so on.
```Rust
#[derive(Bundle)]
struct PlayerBundle {
player: Player,
level: Level,
health: Health,
}
commands.spawn((
PlayerBundle {
player: Player,
level: Level(10),
health: Health(500.35),
},
MyPlayer,
));
```
= 2. Components
Components are data associated with an entity.
To create a new component type, simply define a Rust struct or enum and derive the ```Rust Component``` trait.
```Rust
#[derive(Component)]
struct Level(u32);
```
= 3. Newtype Components
Wrapping lets you turn a type that is not a component into one. The reason we do not implement the ```Rust Component``` trait directly on the original type is the orphan rule.
```Rust
#[derive(Component)]
struct Level(u32);
```
= 4. Marker Components
You can use empty structs to tag entities. These are called marker components and are useful when querying and filtering.
```Rust
#[derive(Component)]
struct MyPlayer;
fn despawn_my_players(mut commands: Commands, my_players: Query<Entity, With<MyPlayer>>) {
for entity in &my_players {
info!("player entity: {entity}");
commands.entity(entity).despawn();
}
}
```
|
|
https://github.com/polarkac/MTG-Stories | https://raw.githubusercontent.com/polarkac/MTG-Stories/master/stories/036%20-%20Guilds%20of%20Ravnica/002_Testing%20the%20Dark%20Waters.typ | typst | #import "@local/mtgstory:0.2.0": conf
#show: doc => conf(
"Testing the Dark Waters",
set_name: "Guilds of Ravnica",
story_date: datetime(day: 17, month: 10, year: 2018),
author: "<NAME>",
doc
)
Not all mad scientists are born into money. Some of us have to earn it, and sometimes earning it isn't pretty. As I walk through the sewers beneath the Tenth District, knee-deep in sludge, I ignore the bits of solid waste bumping up against the repellant spell covering my uniform. Instead, I focus on the palatial sprawl of the undercity—the mesmerizing domes, stately columns, and an ornate arch inset with a relief depicting the signing of the Guildpact. It's a dangerous sort of beauty down here, and if it weren't for the noxious gasses and half-million gallons of urine and liquified excrement flowing steadily downstream, I'd say it were charming.
"No time for gawking, I'm afraid," Kel'teth bellows, and I notice I've fallen several paces behind my Golgari guide. He's the most laid-back troll I've ever met, probably because he's constantly taking nibbles from that patch of iridescent mushrooms growing around his armpits. Eyes calm yet alert, he urges me forth.
A rat's swimming in the sludge next to me. A scream wells up in my throat, but I tamp it down, not wanting Kel'teth to think I'm unfit for this task. A rat's nearly the same as a lab mouse, right? Except lab mice don't have those froth-coated fangs. Those menacing eyes. That #emph[mesmerizing] squeak. I'm overcome with the urge to pet it, right there on its furry snout. My hand reaches out, shaking, just a little closer~
A chunk of pitted cement flies past me and hits the rat square in the head with a #emph[sploosh] . It squeals one last time, then sinks down into filth-ridden oblivion. I shake off the all-consuming thoughts I'd had. What the—
"Sewer sirens," Kel'teth says, dusting his hands together. "Rabid as all get-out, but they can't attack you unless you put your hand in their mouth. Best avoid them."
"You know, information like that would have been useful #emph[before] I started the assignment," I say, wiping away the sewer backsplash from my lips.
Kel'teth laughs. "If I'd gone and warned you about every little thing that could kill you in the sewers, we definitely wouldn't be having this conversation right now."
I stick closer to my guide as he briefs me on the eight different varieties of flesh-eating water plants that thrive down here and gives me tips on how to avoid getting electrocuted by eels. As we continue, I notice shadows lurking in dank corners, behind pillars, under bridges, and decide that maybe knowledge #emph[isn't] power. I tune out of Kel'teth's lecture and start concentrating on the gear this side job will earn me—my very own ballast of arcane induction. One of those solid mizzium ones, with the dyna-chromatic retaining bell and double-inversion\/instant conversion canister. Real Izzet ingenuity right there, not like the borrowed one I'm lugging around. I'll be able to do analyst jobs like this one three times as fast—detecting and identifying mana traces with ease, which will leave me with more time to spend in the lab.
#figure(image("002_Testing the Dark Waters/01.jpg", width: 100%), caption: [League Guildmage | Art by: <NAME>ov], supplement: none, numbering: none)
We cut through a series of keyhole archways, make our way around a moss-covered rotunda, and finally we arrive at our destination. It's massive, and nearly as impressive as the two-story portico it's wedged against. It's a giant raft of coagulated fats and solid waste clumped together into a single mass, clogging the water's flow. Just one of the many fatbergs plaguing the sewers of the Tenth District.
Kel'teth clasps his fingers together, puts them down at knee-height, then gestures at me to take a step up. "After you!"
"Wait. You want us to actually get on that thing?" I adjust the bulky canister I'm carrying on my back, trying to keep the weight evenly distributed.
"Well, you're not going to be able to see it properly from down here. Besides, eels will be coming out of their nests soon. They don't usually attack people, but they'll shock the ever-lovin' chitin out of anything when they're groggy."
I don't need any more convincing and hastily hop up onto the fatberg. Most of it looks hard like rock, though some patches look waxy, some parts have mounds oozing fatty gel, and all throughout, broken and discarded objects protrude from the berg's surface. The entire raft bobs slightly, making me nauseated—though in all fairness, I've been pretty close to vomiting since I first stepped foot down here.
"See," Kel'teth says, "normally, we fly a couple of drakes in to vaporize the bergs, but they've become impervious to electrical magic. This one's been bombed a dozen times, and not a scratch on 'er." He pats a fat protrusion lovingly. "Impressive, isn't she?"
"She's a beauty, all right." A dry heave escapes me. The anti-nausea spell is wearing off for sure. We'll have to make this inspection a quick one. "So, I'll just look around and see if I can find any traces of what's causing this. Okay?"
"Take all the time you need," Kel'teth says, settling down upon the berg. He pops a mushroom cap into his mouth, then molds a fat mound, fashioning a pillow behind him. A relaxed smile smears across his face as he leans back with both arms propped behind his head.
I unholster my ballast rod and tap the mizzium-plated canister slung upon my back. A hum starts up—the background noise of trace amounts of mana dispersed in the air. I grip the rod and wave the receptor coils around, collecting mana remnants until the round glass bell fills with the purple crackle of electricity. The energies cancel out, and the hum fades down to nothing. I'm ready to begin. I aim the ballast's copper receptor coils at the surface of the fatberg, sweeping in slow, steady strokes, back and forth. The canister raises in pitch, a sharp #emph[zip] sound that indicates evidence of an artifact. Dig marks in the berg show it's long gone, though, probably scavenged by Golgari reclaimers.
I keep moving forward. The dreck of the sewers knows no guild divisions. One moment, I'm stepping over a pummeled warthog mask from some Gruul festival, and the next, I'm wincing at the sunburst helmet of a Boros soldier cracked right in two. Eventually I find another spot where an artifact once sat. From the warbling moan my ballast emits, I can tell it's a common Rakdos artifact, likely a half-burned effigy of an unfaithful lover or of an unscrupulous neighbor who'd borrowed a fire poker and forgotten to give it back. Definitely not something with enough magic to affect a whole fatberg.
But then the ballast starts making an odd sizzling noise it's never made before. It gets louder and louder as I reach the end of the berg. I look back at Kel'teth, fast asleep. I should probably wake him and ask him to show me forward, but whatever's causing this sizzle is powerful. Arcane. And something the Izzet League hadn't rigged my canister to recognize. That either means they hadn't discovered it yet or they knew about it and wanted to keep it a secret. Both options are equally appealing. And equally lucrative.
Look, I know why I was hired for this job—figure out what's causing the electric-proof fatbergs and report back to the Golgari so they can fix it, but here's the thing: besides working this job in my spare time, I serve as an attendant to <NAME>, a high-level chemister specializing in arcane metallurgy and practical alchemy. I'm stuck at the bottom rung in the laboratory, one of two humans among a couple dozen vedalken attendants, and I spend most of my day sorting cable connectors and degreasing the turbines and trapping rogue elementals siphoning energy from our lab equipment. I've got ideas, though, more ideas than can fit in my head, but so far it seems like I'll only get to move up when someone dies or retires. The way the other attendants are gulping down rejuvenation spells, neither of those things are going to happen in a long, long time. So, if I'm going to make a name for myself, I've got to take risks.
I step into the sewer water, then follow the sizzle down several pipes, each narrower than the one before. I come to a dead end, water flowing into an old and ornate grate rimmed with ancient code and held fast with rusty bolts that probably haven't moved since Niv-Mizzet still had his egg teeth. Turning back isn't an option, though, not when I'm so close. I release the safety clasp on my canister, and a backflow of stored raw mana slips out and swirls toward the grate. The canister drains, causing the aged metal to glow red hot, and as it expands, the bolts shiver then pop loose into the water.
Three hard tugs, and the grate comes free. I set it to the side and duck in. The flickering light still caught in my glass bell casts dancing shadows upon the curved walls of the tunnel. Shiny surfaces reflect light, but there is one spot ahead that's as black as pitch, floating upon the surface of the sewer water. Threads of magic swirl around it, an ominous red with sparkles of white. A spatial rift.
Too late, I notice several eels headed toward me, weaving their way through the patch of strange plants growing around the rift. I scramble, trying to remember what Kel'teth had said about avoiding electrical shocks~water's too shallow here to dive, and there's nothing to grasp onto so I can climb out. Left without options, I hold my ballast rod out in front of me. The entire surface of the water lights up. Electricity flows into the receptors, but they're meant for siphoning trace amounts of mana from their surroundings, not taking the full brunt of an electrical shock. The energy travels up the rod, and the bell explodes into shards. The cannister starts screaming bloody murder, so I unhook it and sling it as far as I can. It hits the water, and seconds later, an explosion of electrical magic fills the sewer. For a long, long moment, my entire body seizes up and my world goes white.
#figure(image("002_Testing the Dark Waters/02.jpg", width: 100%), caption: [Steam Vents | Art by: <NAME>], supplement: none, numbering: none)
Finally, my thoughts congeal. I look around, neck stiff, skin smoldering. The rift is fine, and so are all the plants around it. Like nothing happened to them. Not a single leaf burnt. Not a single petal scorched. Contact with the rift must have imbued them with immunity to electrical magic. The same immunity must have seeped into the fatbergs over time. I take a few plant samples, shaking with the magnitude of this discovery. Never again will I be ordered to sterilize goggles or spit-shine furnace grates.
I'd be lying if I said I hadn't noticed the pressure mounting within the Izzet League lately, although where it's coming from, I don't know. The Izmundi have been demanding more significant discoveries and faster results, so much so that chemisters have resorted to running experiments day and night for fear of losing their labs. Well, I've got their significant discovery right here, so I'm storming over to see <NAME>, right now, and demanding that he gives me the promotion I deserve. And soon after that, I'll be the one giving him orders.
#v(0.35em)
#line(length: 100%, stroke: rgb(90%, 90%, 90%))
#v(0.35em)
Turns out the best ideas aren't formed when you've just had ten eels' worth of electricity jolting through your brain. Making nonsensical ultimatums to your boss, drenched in sewer water, frizzy hair gone white at the temples, and dragging four hundred zinos worth of broken and illegally borrowed lab equipment behind you~well, that just leaves you standing at the front steps of the Lightning Rod, a box full of your desk belongings in your arms.
I'd watched as they revoked my access spells, took the infinitum key amulets from my neck, stripped me of my gauntlets. Now I'm just a bare-fingered stranger to the building, all the checks and credentials that separated me from Dimir infiltrators trying to steal our inventions and Simic biomancers looking to poach chemisters for their labs, gone. <NAME> can take my job and strip me of my title, but he can't revoke my dream.
So, I've started a laboratory of my own in the boilerpit that runs beneath my apartment building. It's steamy down here, and it reeks of rust and ingenuity. I've scavenged most of the lab equipment I need, erecting a pair of makeshift mana coils fashioned from mizzium scraps hammered thin as paper. They're holding for now, though, sending arcs of purple light nearly to the ceiling. I've baited traps for the electrical elemental I've heard flickering past in the quiet of night. Yeah, the lab isn't much to look at, but it's coming along. All I'm really missing is one last thing.
A knock comes at the door.
In that box of desk belongings, I'd managed to sneak something past the Izzet guards who'd escorted me out of the building—lab mice. Dead ones. Their little furry corpses tainted with the residuals of experimental magic. With the right coaxing, they often don't stay dead, which makes them highly valuable to Golgari reclaimers. I'd traded with a young reclaimer, six ripe mice for him to find me a blastseeker willing to wield magic creations in an unsanctioned lab for an unconscionable amount of money. I don't expect much, but anything's better than risking blowing up half a city block trying to do it all on my own. Again.
I answer the door. She's even less than I expected, slightly built, and doesn't look like she could lift a spectral converter if her life depended on it. But after my own experiences of being discounted again and again, I know people can be much more than they seem. I smile. "You're here for the blastseeker position?"
"I'm here if you're paying," she says, a gleam in her eye. "<NAME>. Call me Tammy, and we've got a problem."
Straightforward. I like her already. "You've got experience?"
"Five years working at the Crucible as a blastseeker. After that, two in the Foundry."
"References?"
"None that would be caught dead talking to the head of an unsanctioned lab."
Fair enough. "How about a practical test, then? Just to see if we're compatible?"
We work for three hours straight, getting all the components of my experiment set up. Tamsyn is meticulous. She helps me hypercharge the mizzium coils, cranking the handle with a fervor I've only seen in goblins. Then she slices my rift specimens with incredible consistency. I lay them in a shallow trough of rarified vacuole penetrants and then watch as the rift magic separates from the cellulose. Tamsyn even helps me to bolster spectral fields on the electrical orbs we'll use to administer the shocks. Finally, after we've run the serum through the centrifuge and filtered out the organic contaminants, we dispense it to the mice.
We wait a full five minutes for the serum to take effect, then Tamsyn hefts the spectral converter with ease and conjures an orb of electricity. It bobs in the air like a ball of honey-colored lightning. The mouse looks anxious with those pale, pink eyes, then Tamsyn lets the orb rip. The mouse lights up like a fire elemental, so bright my goggles warm around the edges. Electricity surges with a violent force into that tiny creature, and it doesn't even twitch a whisker. It's completely impervious to electricity.
"Not even a single strand of fur is scorched. This is amazing! We have to take this to the—" I stop short. We can't do anything with these results. No one's going to take this finding seriously, not without humanoid trials. And I can't conduct those without board approval.
"What?" she asks.
"Nothing." I say, biting my lip. Greatest discovery of my lifetime, and I have to sit on it. I'll apply for official sanctions, of course, but that'll take months. The Golgari will stumble upon the truth well before that, and all of my dreams will be dashed once again. I sigh, then go to euthanize the mouse for dissection, not my favorite part of the job, but you get used to the killing.
"I've got it," Tamsyn says, stepping in front of me. She puts a white rag against the mouth of a bottle of stonemaker vapors, turns the bottle over, then expertly smothers the mouse so fast, it doesn't even realize what's happened. The way she moves, so comfortable in her own skin, you can tell she's got tons of lab experience.
"If you don't mind me asking," I say, a little hesitant, "why do you want to work in an unsanctioned lab? With skills like yours—"
"Skills like mine got my last chemister killed. It was an accident, but the board didn't see it that way. They took away the things I held most dear." Tamsyn holds her bare palms out. The discoloration from the amplification stones imbedded into the gauntlets she'd once worn are painfully familiar to me. My heart nearly reaches for hers, but I stiffen up, keep emotion out of it. I can't afford to hire her, even with the pittance I'd advertised. Now is not the time to complicate things.
#figure(image("002_Testing the Dark Waters/03.png", width: 100%), caption: [Art by: <NAME>], supplement: none, numbering: none)
"Well, thank you for coming," I say. "I'll let you know about the job next week. I've got a couple more applicants to interview."
"Are you serious? After what I just did?"
"It was impressive, I'll admit, but it's only fair that I—"
"I need this job, Leighbet. Maybe I'm desperate, but you are, too. That's why we'd make a great team. You've got the big ideas, but you need someone who's good with details and who knows how to game the system. Sanctions aren't the only way to get laboratory approval. I know some people who know some people. I can get your lab declared an Innovative Niche Workshop."
"You know how to do that? How?"
"Hire me, and I'll tell you. You've got something special here, and I want to be a part of it. Please, you won't regret it."
Oh, I know I'll regret it, but you can't dangle lab approval in front of a self-proclaimed chemister and expect her not to bite.
"I'll take care of you, you'll take care of me," Tamsyn says. "Long as my paycheck is on time, we've got no problems, right?"
"Right," I say. Izzet League likes its oversight and its protocols, but the rules are made to be broken.
#v(0.35em)
#line(length: 100%, stroke: rgb(90%, 90%, 90%))
#v(0.35em)
Tamsyn has worked wonders. The Laboratory of Elemental Metastream Dynamics and Coil Field Fractalization is now one of the labs officially recognized by the Izmundi. Yes, it's a mouthful, but Tamsyn had said that the more descriptors I used, the less likely it would be for someone to scrutinize what we are actually doing.
My incredible blastseeker steps into the lab and catches me admiring the place. "A couple things I need to tell you. Nothing big," she says. "If anyone comes knocking, asking for a <NAME>, tell them she's away at a conference and won't be back for a week. And the official number of attendants we have is twelve. Memorize their names and the errands they're out running. Each has a backstory to make them more believable. Lastly, if you're caught and interrogated, you never knew me."
I laugh. "Did you blackmail a member of the board to get this pushed through, or what?"
She doesn't laugh back.
I keep laughing, but more like a chuckle now. "But you didn't, right?"
"I thought you were serious about your science, Leighbet." She stares me down. I don't dare blink. "I took the liberty of putting out an ad for test subjects. They're all in the waiting room."
"We have a waiting room?" I peek out the door into the hallway, and sure enough, three goblins and two humans are out there sitting on wooden crates. I give them a tight-lipped smile, then duck back in the lab. "You got people to actually show up? For free?"
"I mentioned two hundred zigs in the ad."
"Two hundred zigs? Each?"
"This will work, Leighbet, and when it does, money will no longer be an object."
I nod, her certainty reassuring me. I carefully measure and administer the serum to each of the test subjects, documenting everything. Someday, historians will want to know more about the discovery that propelled me from humble attendant to master chemister.
Tamsyn and I stand shoulder to shoulder, waiting nervously for the serum to kick in. My gut churns… if this works, no, #emph[when] this works, I'll go directly to the board myself to give them a demonstration.
Tamsyn steps up to the first subject. "I'm going to give you a slight shock. Please tell me if you feel any level of discomfort." As brusque as she is, she's great at putting the test subjects at ease. Even the hard edges of her facial features seem softer.
The goblin nods—kind of cute with her long, sloping nose, bright yellow eyes, and brass rings in her left ear. Tamsyn picks up the spectral converter, dials it down to one, and conjures up an orb no bigger than a coat button. Gently, she urges it toward the goblin, who is shaking, her green skin gone ash gray. The orb hits her in the shoulder, then vanishes without note.
"Did you feel anything?" Tamsyn asks her.
"No!" she says, nearly jumping out of her seat. She settles, looking bashful. "Sorry, this is my first time being a test subject. I'm a little nervous."
"You're fine," Tamsyn says with a reassuring chuckle as she sets the dial to four. "Okay, I'm going to try a slightly larger shock. Remember, please let me know if you feel any pain whatsoever." The orb is the size of a drake egg now and hits the goblin in her chest this time. No effect.
"A bit of a tickle, maybe?" the goblin offers.
"Okay, this is going to be a big one. You're sure you're okay to continue?"
The goblin nods again, with more confidence this time. Tamsyn dials all the way up to eight, and as the full-sized orb nears our test subject, it's me who's shaking.
#figure(image("002_Testing the Dark Waters/04.jpg", width: 100%), caption: [Invert \/\/ Invent | Art by: <NAME>], supplement: none, numbering: none)
The blast hits her in the head—a surge that should have knocked her unconscious, but she sits there, mouth agape. "I felt something. Like a tap to the forehead."
"Did it hurt?" Tamsyn asks, offering the goblin a cup of water to calm her nerves. The goblin drinks it down rapidly, still trembling.
"Not a bit. This is amazing. What was in that stuff you gave us, anyway? I mean, I know you can't tell me~I'm trying to get an attendant job myself. It's so competitive out there, but I'm not giving up!"
"I'm sure you'll be on the other side of these experiments in no time," Tamsyn says. "Now, if you'll have a seat back out in the waiting room, we'll finish up with the other subjects and then get your payments processed."
"Sounds great!" The goblin shuffles off, a lightness in her step.
The other four tests go exactly the same, successes all around. For good measure, Tamsyn rams five quick bursts into the last subject's chest, no response. Tamsyn and I look at each other.
"This is it," I say. "We've done it!"
#figure(image("002_Testing the Dark Waters/05.jpg", width: 100%), caption: [Radical Idea | Art by: Izzy], supplement: none, numbering: none)
"We have."
"This is perfect! Only the test subjects are all out there~waiting on their money." This isn't going to be pretty, but I can tell them the paperwork still needs to be processed, and it'll take a few days. I'll hit up some early investors, and then—
"Leighbet." Tamsyn says my name like I'm an impetuous child. "Imagine what would happen if we let them go out into the world with rift-tainted magic. It'd be traced back to the source, for sure. You used to be an analyst. You know how relentless they are. Then where would that leave us?"
"But what can we do about it? Quarantine them all? For how long?" If the League gets wind of the source of the rift magic, all my leverage is gone. I could kiss my career prospects goodbye. Then, slowly, slowly, I see what Tamsyn is trying so hard not to say. This is still my project. I am in charge. If an order like this is coming, it has to come from me. "There's only one way we can ensure that these finding don't get out," I say.
<NAME>.
I think of the lab mice I've euthanized over the years. Hundreds. Thousands. At first, it was hard. I felt awful, but I guess it became routine at some point. We're not talking mice here, though, we're talking people. Five souls, standing in between me and greatness. If I do this, if I step over this line, there's no going back. My brain whispers to me—all these horrific thoughts, and I'm listening to them, then entertaining them, and finally agreeing with them~and those small steps have made the leap to villainhood more accessible.
It will take both of us to hold them down while we press the cloths to their faces. I pick up the bottle of stonemaker vapor. Four doses for each test subject should be enough. Then I remember the light in the eyes of that chatty goblin girl, with her own dreams and aspirations… "Tamsyn, sorry, I don't know if I can do it."
She seems disappointed but not surprised. "Don't worry. You don't have to. I already gave them all a hyper-concentrated dose of sleep elixir laced with an ethereal death accelerant." Tamsyn carefully stacks the five empty cups and tosses them in the waste bin. "They went gently, peacefully. It's not like we're #emph[complete] monsters."
I'm fully unprepared for how cold my heart feels in the stifling, steamy heat of the boilerpits.
#v(0.35em)
#line(length: 100%, stroke: rgb(90%, 90%, 90%))
#v(0.35em)
I'm not sure what I've gotten myself into, but I know how to get myself out. All I need is two thousand zigs to pay Tamsyn the money I owe her, then I'll shut down the lab, have a mind mage wipe my memories, and get on with my life. My options are limited, and my time is short, but there's one way to get money in a hurry. I scour the postings for test subjects at the Crucible, looking for the highest paying experiments. I sign up for as many as I can and hope for the best. The first couple go without a hitch—the twenty injections down my spine barely hurt, and that minor explosion when fire and water magic had mixed~I hadn't really needed all my eyelashes anyway.
The third experiment, I find myself trekking through the heart of the Simic Combine. To say that I have reservations is an understatement. Pushing the boundaries of elemental science is one thing, but tinkering with bioengineering gives me pause. It's dangerous. Unnatural. But Simic biomancers pay test subjects three times as much as Izzet chemisters do, so I soothe my fears by imagining the seven hundred zigs soon to be clanging in my pocket.
Their labs make my skin crawl, great vats of blue-green liquids, silhouettes of something moving inside with more arms and legs than anything ought to have. The amount of paperwork they make me fill out is daunting—a full medical history, a psychiatric profile, and a liability waiver requiring the contact information for my shaman in case of emergency and a description of burial rituals, should the worst happen. I'm on the penultimate page of the final questionnaire when I come to a stumbling block:
#emph[Have you been exposed to any regrowth helixes or irradiated enchantments in the past seven days?]
My hand trembles, but I mark "no" even though I had a dose of each in my experiments this morning. I can't afford to skip out on this one. They administer the test, hooking me up to a half-dozen hoses and tubes, delivering their mystical potions to my veins. I immediately feel lightheaded.
"Everything's all right? Are you okay to continue?" the lead biomancer asks. He's human, but reptilian scales glisten all along his skin. His lidless eyes are as black as that spatial rift had been, and I fear I might fall right into them.
I swallow my nerves and nod. Every single one of my arm hairs starts to itch as Simic magic transforms me from the inside out. The tingling sensation hits me in the marrow of my bones, and before I know it, my teeth are reshuffling, becoming jagged and ragged like a mouthful of fangs. My spine is twisting, growing, each vertebra lengthening, reaching out into sharpened points, and the warm brown of my skin turns an ashen gray, rough like old leather. I stare at my hands as silver-blue claws erupt from my nailbeds.
"Something's definitely not right," the biomancer says. "You're sure you haven't been exposed to regrowth helixes lately?"
I try to answer him, to admit that #emph[maybe] I had, but there's so much froth spilling out of my mouth, I can't talk through it.
Disoriented and terrified, I rip the tubes from my arms. The biomancer tries to subdue me, but I drag my claws through his lab coat, through his scaly flesh, then flee as fast as I can. I rush down the corridor, hundreds of bloated faces staring out at me from fluid-filled growth vats. The corridor opens into an atrium with a large reflection pool that casts shimmering light all around me. I feel like I'm drowning. I fight my way to the exit, gasping for breath as the air hits my face, but I don't stop running. There's only one place deserving of a monstrous wretch like me. The sewers.
#v(0.35em)
#line(length: 100%, stroke: rgb(90%, 90%, 90%))
#v(0.35em)
I cower in the deep shadows of a bridge pile, half submerged, half out of my mind. I'm so hideous, even the sewer sirens won't come near me. I think this is the end, that my life can't get any worse, but then Tamsyn rounds the corner, spectral converter conjuring an orb that lights up the sewer. Shadows dissipate, and I am seen.
"Leighbet," Tamsyn says.
"Tamsyn," I reply. "I'm still short on your paycheck, but if you give me more time, I can—"
"You know this isn't about money."
Yeah, I got that feeling. "Back when you told me about accidentally killing your chemister… that was a lie, wasn't it?"
"You got me."
"You killed him on purpose?"
"There was no chemister, Leighbet. I never was a blastseeker." Something foreign ripples beneath her skin, and that feeling I'd had, about her moving so comfortably within her own body, all of that drains right out of my head. "And I never worked in the Crucible or the Foundry. Too many deterrents and security measures. But small labs like yours are easy to infiltrate, and if you time it right, you can catch a genius on the rise~"
"You really think I'm a genius?" I say, then shake off the stroke to my ego and concentrate on what's important. "You're a shapeshifter?" And then I realize. "A Dimir spy."
"In the flesh," Tamsyn says. Her skin ripples again. "At least close enough to it."
Damn. And she was so good in the lab. Really knew her stuff. I take a deep breath. "Now when you said 'genius,' did you mean in a literal sense, or—" But before I can get clarification, I catch a glimpse of something approaching us fast—wings crack like sails bitten by storm winds, yellow eyes burn like fire. An arc drake, sent down here to break up the fatbergs, is flying right toward us. Electricity crackles on its breath, then I see it take a big inhale. "Drake!" I scream.
#figure(image("002_Testing the Dark Waters/06.jpg", width: 100%), caption: [Crackling Drake | Art by: <NAME>], supplement: none, numbering: none)
"You think I'm really falling for that?" Tamsyn says. The electricity within the converter pointed at me warbles a deep and menacing tone.
I don't have time for fear. My mind's cycling through the electrical safety rules my sewer guide had briefly covered: #emph[Able to climb, get out in time. In the water, dive like an otter.] I dive deep, hold my breath, and hope for the best.
Electricity is unpredictable, indiscriminate, and a natural-born killer. It snakes through the sewer water, through me. My entire body clenches up so tight, I feel like I'm about to snap in half. Finally, when it releases, I'm overcome with a thirst so intense that I must force myself not to guzzle mouthfuls of sewer water. My heart's okay, and my brain mostly, but I don't mistake my luck for any sort of mercy. I'm struck again, this time in the gut with a fist. My lungs expel the air I'm clinging to as "Tamsyn" rams into me. Bubbles escape to the surface, and I try to do the same, but she's got a hold of me, dragging me back down. I scratch and claw my way back up to the surface, and she butts her forehead into my chin, and while I'm trying to shake it off, she's conjured another orb.
#figure(image("002_Testing the Dark Waters/07.jpg", width: 100%), caption: [House Guildmage | Art by: <NAME>], supplement: none, numbering: none)
"A mind like yours is a terrible thing to waste, but your serum is a Dimir discovery now. Goodbye, Leighbet."
I don't know why, but the thought of losing credit for my discovery scares me worse than the thought of losing my life. I look down at my claws—sharp, intimidating. I'm no berserker, and until now, I didn't have a feral bone in my body, but that doesn't mean I'm going down without a fight. I lash out at Tamsyn, swinging at her face. She ducks, then sends an orb straight into my stomach. I hunch forward with the pain it brings, a deep, pulsing cramp that makes the edges of my vision go white. I push past it, make another go at it. My claw catches skin this time, barely, drawing a line of pale-green blood. Almost instantly, the wound knits back together. She dials the spectral converter two clicks past the max, then conjures a giant orb and moves it slowly toward me.
This isn't working. I'm a thinker, not a fighter. If I'm going to best her, it's going to have to be with my mind. I back up as she approaches, but then, something squishes behind me—a fatberg completely blocks my exit. I've got no choice, so I turn, sink a claw into the top, then sling myself up onto the surface. I'm completely exposed, but I'm faster up here, and I'm able to duck as she fires at me.
Tamsyn is trying to mount the fatberg as well, but I rock back and forth, making it more difficult. She slips back under the water. I try to make a run for it but trip over an old welding jar. It's heavy and made of thick glass. I start looking around, surprised the reclaimers hadn't scavenged this fatberg yet. Among the usual debris and trash, I catch sight of several objects that could be useful with a little inventiveness and elbow grease. I look at the welding jar again. Only a few remnants of solder remain inside, but the jar itself can serve as a retaining bell. If I can find enough pieces, I can build a makeshift ballast to absorb the shock from Tamsyn's orbs.
Her head peeks up, and she lobs an orb at me. It grazes my leg, which goes stiff and hurts so bad, I'm barely able to stand. I raise the welding jar like I'm about to throw it at her, and she ducks back under. I don't have much time. She won't fall for that twice.
I shuffle over to an old trident wedged into the berg. The shaft is splintered and already plucked of its inset stones, but I can still feel the sizzle of magic working through its veins. It would make an excellent rod for my ballast. Two cracked mana coils might work as receptors. I don't think anyone's ever tried that, but I have to make do with what I've got. The solder is old, so I poke it back to life with a small jolt of crafting magic. Finally, it creeps forward, joining the bell to the trident, and just as the final piece worms its way down to the coils, I look up and see Tamsyn has found her way onto the berg. I heft my ballast up to fight, but it's nowhere near ready.
I point over her shoulder, and my eyes go wide. "Not again!"
I dive flat, like those drills you learn as a kid to minimize your chance of getting struck by lightning when an arc drake is on the loose. Tamsyn looks back over her shoulder, squinting into the darkness, and then I jump up and seize both my ballast and the moment: I take a fierce backswing, then throw my entire body into it as the welding jar makes contact with her jaw. She spins, once, twice, then plunges face-first into the gritty sewer water. Nice hit. Would have knocked out a human, but a shapeshifter, I'm not so sure.
A proper ballast would have a canister to store the charge, but there's no way I can jury-rig something that complicated. But if the Warwitt-Isley principle of microfractural gains and allowances holds true in less than ideal circumstances, I might have a chance if I can find something to redirect the energy. I see a piece of refuse that might work, half buried in a mound of fatty gel. I step over and wrench it free. It's the end cap to an old boiler tank—a rusty mess on the bowed-out side, and the inside is mizzium plating painted on so thin it wouldn't be worth the effort to remove. All those years of spit-shining furnace grates finally come in handy, and in no time, the mizzium gleams, providing a nice, concave surface for the magic to flow across.
Tamsyn catches me off guard, coming up from the far side of the berg. I don't have time to attach the cap, so I just hold onto it for dear life. When the next blast orb comes, I meet it with my makeshift ballast. Electricity slides into the receptors, up the rod, then collects in the bowl of the cap. For a brief moment, both Tamsyn and I stand there, stunned that it actually worked, but then she's barreling toward me with a vengeance, arms outstretched, another orb ready to fire. Before my charge can dissipate, I sling it at her, and the blast surges forth and hits her in the chest.
Her whole body lights up. The impact sends her flying in one direction, and the cap goes careening in the other. I blink away the ghost images of Tamsyn temporarily burned into my retinas, then see my real enemy trying with all her might to get back onto her feet. But before she can, I jam my knee into her back, pull her head up by the hair. The feral monster inside me reels, demanding revenge, but when I look down at my arms—my skin is smooth again, my nails are nice and blunt, and I realize the awful effects of the experiment have worn off. I am back to my normal self… but I still feel #emph[changed], and I'm pretty sure I can't blame it all on Simic magic.
"This is my discovery," I hiss at her. "I can't let it fall into Dimir hands."
"I won't tell anyone, swear it," she pleads.
"I know you won't, Tamsyn," I say to her, and I become like lightning—unpredictable, powerful, merciless—all those things that make it both beautiful and deadly. Like with the mice, the decision to kill is easier this time, and as Tamsyn's vertebrae snap, I take comfort that she doesn't know pain for long. I step back and watch her corpse as the spells that held her human form fade away. The body lays there, a treasure for the reclaimer that will eventually happen upon her. I gather what's left of my makeshift ballast to take back to my lab. A little tinkering, and I'll have #emph[two] discoveries to bring before the board, and if they don't accept my findings~
The monster inside shifts beneath my skin. Together, we will make a great team.
// https://github.com/SkytAsul/fletchart (src/declarative.typ)
#import "internals.typ": flowchart-create
#let flowchart-parse-elements(elements-args) = {
assert.eq(type(elements-args), content, message: "elements parameter has wrong type")
let internal-elements = (:)
let elements = if elements-args.has("children") { elements-args.children } else { (elements-args,) }
for content-piece in elements {
assert.eq(content-piece.func(), metadata, message: "Invalid content child type")
let obj = content-piece.value
if obj.class == "element" {
internal-elements.insert(obj.id, obj)
} else {
panic("Unknown object class " + obj.class)
}
}
return internal-elements
}
/// Creates a flowchart based on a list of elements.
/// - elements (content): Content of the flowchart.
///
/// A combination of `action` and `condition` function calls.
///
/// - elements-style-override (dict): A dictionary mapping element
/// type names to a dictionary of styles. This inner dictionary
/// can have `shape` and `fill` fields.
///
/// - debug (bool): Whether to show debug information or not.
#let fc-declarative(elements, elements-style-override: (:), debug: false) = {
let internal-elements = flowchart-parse-elements(elements)
let options = (
elements-style-override: elements-style-override,
debug: debug
)
flowchart-create(internal-elements, options)
}
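
For context, a minimal usage sketch of `fc-declarative`. The `action` and `condition` calls are the element constructors named in the doc comment above, but their exact signatures, the style-override keys, and the element type names used here are assumptions, not verified against the library's actual API:

```typst
// Hypothetical sketch — constructor arguments and style keys are assumed.
#import "declarative.typ": fc-declarative

#fc-declarative(
  elements-style-override: (
    // Assumed: keys are element type names, values may set `shape`/`fill`.
    action: (fill: luma(240)),
  ),
  debug: true,
)[
  // Assumed signatures: an id followed by the element's label content.
  #action("start", [Read input])
  #condition("valid", [Input valid?])
]
```

Each constructor is expected to emit a `metadata` value with `class: "element"` and a unique `id`, which `flowchart-parse-elements` collects into the dictionary passed on to `flowchart-create`.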
// https://github.com/Kasci/LiturgicalBooks (CSL_old/oktoich/Hlas8/6_Sobota.typ)
#let V = (
"HV": (
("","","Múčenicy Hospódni vsjáko místo osvjaščájete, i vsják nedúh uvračújete: i nýňi molíte, izbávitisja ot sítej vrážijich dušám nášym, mólim vý."),
("","","Múčenicy tvojí Hóspodi, zabývše žitéjskaja, i nebréhše o mučénijich, búduščija rádi žízni, tojá nasľídnicy javíšasja: ťímže i so ánhely rádujutsja. Ťích molítvami, dáruj ľúdem tvojím véliju mílosť."),
("","","Ášče kája dobroďíteľ, i ášče kája pochvalá, podobájet svjatým: mečém bo prekloníša výji, tebé rádi preklóňšaho nebesá i sošédšaho: izlijáša króvi svojá, tebé rádi istoščívšaho sebé, i zrák rábij prijémša: smiríšasja dáže do smérti, niščetú tvojú podražájušče. Íchže molítvami, po mnóžestvu ščedrót tvojích, Bóže pomíluj nás."),
("","O preslávnaho čudesé!","Drévu neplódnomu upodóbichsja, óbrazom jáko lístvijem tščétnym obložén: i bojúsja Vladýko. Da ne otsík, otsléši mjá vo óhň neuhasímyj víčnyj. No podážď obraščénija mňí vrémja, jáko da prinesú ti dobroďítelnych ďijánij plód dóbryj, i spodóbľusja cárstvija tvojehó."),
("","","Hóspodi, Hóspodi, tvojehó rabá mené, mnóho prohňivľájušča tvojú bláhosť na vsjákij déň čelovikoľúbče, ne otvérži ot licá tvojehó Christé, nižé nakaží tvojím právednym hňívom: sohriších, sohriších tí, jáko niktóže ín, ispovíduju, uščédri i spasí mja, tvojejá Mátere molítvami."),
("","","Jehdá sjádeši vo slávi tvojéj, jáko cár vsích, na prestóľi sudá, predstojáščym stráchom tí svjatým ánhelom tvojím, i predstánet tí jestestvó vsé čelovíčeskoje Christé, sudítisja imúščeje: tohdá molítvami Mátere tvojejá Hóspodi, ot vsích izbávi múk usópšyja vírno."),
("Bohoródičen","","Cár nebésnyj za čelovikoľúbije na zemlí javísja, i s čelovíki poživé: ot Ďívy bo čístyja plóť prijémyj. I iz nejá prošédyj s vosprijátijem, jedín jésť Sýn, suhúb jestestvóm, no ne ipostásiju. Ťímže soveršénna tohó Bóha, i soveršénna čelovíka voístinnu propovídajušče, ispovídujem Christá Bóha nášeho: jehóže molí, Máti beznevístnaja, pomílovatisja dušám nášym."),
),
"S": (
("","","Múčenicy Hospódni, molíte Bóha nášeho, i isprosíte dušám nášym mnóžestvo ščedrót, i očiščénije mnóhich prehrišénij, mólimsja."),
("Pokójin","","Pláču i rydáju, jehdá pomyšľáju smérť, i víždu vo hrobích ležáščuju, po óbrazu Bóžiju sozdánnuju nášu krasotú, bezobráznu i bezslávnu, ne imúščuju vída. Ole čudesé, čtó jéže o nás sijé býsť tájinstvo! Káko predáchomsja tľíniju? Káko priprjahóchomsja smérti? Voístinnu Bóžijim poveľínijem, jákože jésť písano, podajúščaho prestávlšymsja upokojénije."),
("","","Smérť tvojá Hóspodi, bezsmértija býsť chodátajica: ášče by ne vo hróbi položén býl jesí. Ne by ráj otvérzlsja. Ťímže prestávlšichsja upokój, jáko čelovikoľúbec."),
("Bohoródičen","","Čístaja Ďívo, Slóva vratá, Bóha nášeho Máti, molí spastísja nám."),
),
)
#let P = (
"1": (
("","","Pojím Hóspodevi, provédšemu ľúdi svojá skvozí čermnóje móre, jáko jedín slávno proslávisja."),
("","","Umertví plóti mojejá mudrovánije, prečístaja Ďívo, jáže žízň róždšaja, i múki vsjákija izbávi mjá."),
("","","Zaréju presvítloju, ot tebé Ďívo, vozsijávšaho, duší mojejá zínicy prosvití: jáko da slávľu ťá."),
("","","Molísja Vladýčice, jedína svjatája Bohoródice, jáko da ulučú v déň sudá spasénije, i božéstvennuju svítlosť."),
("","","Prečístaja Bohoródice, utiší boľízni sérdca mojehó, jáže Christá neboľíznenno páče umá i slóva róždšaja."),
),
"3": (
("","","Tý jesí utverždénije pritekájuščich k tebí Hóspodi: tý jesí svít omračénnych, i pojét ťá dúch mój."),
("","","Izbávi mjá molítvami tvojími prečístaja, vrážijich stríl: i ot pomyšlénij nachoďáščich, poščadí sérdce mojé."),
("","","Iscilí preneporóčnaja duší mojejá rány, Máti Bóžija, utiší tvojími molítvami serdéčnoje smuščénije."),
("","","Ukrotí neustávnoje pomyšlénij mojích, čístaja Vladýčice, i otimí ot duší vsjáku pečáľ, jáže rádosť róždšaja."),
("","","Rádujsja, spasénije pritekájuščich k tebí, Ďívo Máti Christóva: rádujsja, apóstolov i múčenikov pochvaló."),
),
"4": (
("","","Uslýšach Hóspodi, smotrénija tvojehó tájinstvo: razumích ďilá tvojá i proslávich tvojé božestvó."),
("","","Duší mojejá jázvy Slóve Bóžij iscilí, ujazvívyjsja mené rádi jáko čelovikoľúbec, i umá mojehó ťmú prosvití."),
("","","Dremánijem ľínosti mojejá són hrichóvnyj postíže Ďívo: bódrennoju tvojéju molítvoju vozdvíhni mjá k pokajániju."),
("","","Jáže ot istóčnik istočívšaja vódu ostavlénija, preneporóčnaja, sérdce mojé napój izsóchšeje vsjákimi prestupléňmi."),
("","","Presvjatája Bohonevísto, Vladýčice míra, tý mja spasí, bíd premiňájušči, i strastéj molvú othoňájušči."),
),
"5": (
("","","Útreňujušče vopijém tí Hóspodi, spasí ny: tý bo jesí Bóh náš: rázvi tebé inóho ne vímy."),
("","","Prízri prečístaja, i uslýši hlás mój, i vsjá očísti bezzakónija mojá Vladýčice."),
("","","Sólnce nezachodímoje róždšaja Bohoródice, vsehó mja prosvití omračénnaho strasťmí: da slávľu ťá i chvaľú preneporóčnaja."),
("","","Uščédri presvjatája otrokovíce dúšu mojú, i izbávi sudá, i víčnujuščaho mučénija."),
("","","Nedúhovavšuju dúšu mojú iscilí, róždšaja vsích vzémľuščaho nedúhi, presvjatája Vladýčice."),
),
"6": (
("","","Bézdna hrichóv i prehrišénij búrja mjá smuščájet, i vo hlubinú núždnaho ríjet mjá otčájanija: no tvojú krípkuju rúku mňí prostrí jáko Petróvi, o Upráviteľu, i spasí."),
("","","Umertví Bohoródice plóti mojejá bezčínnaja dvizánija, jáže umerščvlényja zlóboju oživívšaja, tvojím živým roždestvóm: jáko da slávľu ťá, jáko vinú čelovíčeskaho obnovlénija."),
("","","Bézdnu ščedrót róždši Bohoródice Ďívo, skórbnych žitijá dúšu mojú spasí, i rádosti duchóvnyja dvéri otvérzi mí: na ťá bo jedínu nadéždu vozložích."),
("","","Da tvojá rádostno vospojú velíčija Ďívo prečístaja, i velíkuju blahodáť čudés, bezmístnaja pomyšlénija tomjáščaja mjá vsehdá, molítvami tvojími preminí."),
("","","Rodilá jesí páče jestestvá ňídr otéčeskich nerazlúčnaho, i tobóju Ďívo k čelovíkom besídovavša: no jáko pochvalá nás vsích, i utverždénije jesí Bohoródice, k tebí pribihájuščyja uščédri."),
),
"S": (
("","","Slóvo Ótčeje na zémľu sníde, i ánhel svítel Bohoródici hlahólaše: rádujsja blahoslovénnaja, jáže čertóh jedína sochráňšaja, začátije prijémši prevíčnaho Bóha i Hóspoda, da ot prélesti jáko Bóh spasét ród čelovíčeskij."),
),
"7": (
("","","Ótrocy jevréjstiji v peščí popráša plámeň derznovénno: i na rósu óhň preložíša vopijúšče: blahoslovén jesí Hóspodi Bóže vo víki."),
("","","Uvý mňí, káko izbíhnu múk ždúščich mjá, zľí požív zďí? Káko sudijí strášnomu javľúsja okajánnyj? Bohorodíteľnice Vladýčice, pomóščnica tohdá búdi mí."),
("","","Uhasí plámeň strastéj mojích, i utiší búrju sérdca mojehó Bohomáti čístaja: i izbávi mjá prečístaja, bisóv tomlénija, i ohňá víčnaho."),
("","","Vížď skórb mojú Ďívo, júže navedé mnóžestvo zól mojích: i préžde otšéstvija pokój mí podážď, Máternimi tvojími molítvami umoľájušči Sýna tvojehó."),
("","","Plámeň strastéj sérdca mojehó pohasí rosóju tvojéju Ďívo, i ohňá strášnaho ischití otrokovíce i víčnaho sudá i bisóvskaho tomlénija."),
),
"8": (
("","","Blahoslovíte ótrocy, Tróicy ravnočíslenniji, soďíteľa Otcá Bóha, pójte snizšédšeje Slóvo, i óhň v rósu pretvóršeje, i prevoznosíte vsém žízň podavájuščaho, Dúcha vsesvjatáho vo víki."),
("","","Umertví Ďívo, vsjá ťilésnaja náša dvizánija, jáže mértva zmíja živonósnym roždestvóm tvojím soďílavši: i žízň nestaríjuščuju ulučíti nám pomolísja, jáko da ťá pojém vo víki."),
("","","Ťá Bohorodíteľnice, na pómošč vsehdá prizyváju, ne posramí mené sťažávšaho studú ďilá: no uščédri mjá prečístaja, i plámene ischití, i víčnych izbávi mučénij."),
("","","Obrádovannaja pomóščnica mí javísja vo obstojániji zól, izbavľájušči ot nachoďáščich mí bisóvskich navít, jáko da ublážáju ťá vsehdá Bohoródice, predstáteľnice vsích."),
("","","Presvjatája otrokovíce, apóstolov pochvaló, múčenikov slávo, slávy víčnyja pričástnika mjá sotvorí, vzyvájušča: blahoslovíte Dúcha presvjatáho vo víki vsjá."),
),
"9": (
("","","Ustávy prešlá jesí jestestvá, soďíteľa začénši i Hóspoda, i dvér spasénija mírovi bylá jesí: ťím ťá Bohoródice neprestánno veličájem."),
("","","Svítom tvojím nevečérnim, dvére svíta, duší mojejá ťmú otžení ľútuju, i pokajánija mí putí svitozárnyja pokaží: jáko da šéstvuja v ních, mráka otbíhnu hrichóvnaho, i neprestánno veličáju ťá."),
("","","Očísti rabý tvojá bláže, i prehrišénij proščénije dáruj: i víčnujuščaho plámene izbávi nás, i pričástniki soďílaj tvojehó cárstvija Slóve Bóžij: íbo za jéže choťá spastí ród čelovíč, krest preterpíl jesí."),
("","","Udób Bohoródice, prejtí mi žitijá podážď prechoždénije, i napástej vostánija i strastéj, jáko blahá ukroťášči, i nastavľájušči k dobroďíteli nebésnaho cárstvija, i pokója božéstvenna: jáko da spasájem proslavľáju ťá."),
("","","Nedúhovavšuju dúšu mojú strástiju ľútoju, jáže róždši Bóha preblaháho, jáko mílostiva iscilí, i izbávi prísno oskorbľájuščich mjá prečístaja, i pribyvájuščich zloľútych: jáko da spasájem, ťá usérdno veličáju, vozvelíčivšuju ród náš."),
),
)
#let U = (
"S1": (
("","","Svitíla úmnaja javístesja svjatíji múčenicy, mhlú bo prélesti uprazdníste víroju, i dušévnyja sviščý váša prosvitíste, i s Ženichóm so slávoju vnidóste v nebésnyj čertóh. i nýňi mólim vý, molítesja spastísja dušám nášym."),
("","Poveľínnoje tájno","Vozderžánijem strastéj ohnepáľnyja ispepelívše zráki i dvižénija, Christóvy múčenicy, prijáša blahodáť, nedúhi othoňáti nemoščstvújuščich, i živúšče i po koncí čudoďíjstvovati. Voístinnu čúdo preslávnoje, jáko kósti náhi istočájut iscilénija: sláva jedínomu Bóhu nášemu."),
("Bohoródičen","","Íže nás rádi roždéjsja ot Ďívy, i raspjátije preterpív blahíj, isprovérhij smértiju smérť, i voskresénije javléj jáko Bóh, ne prézri jáže sozdál jesí rukóju tvojéju: javí čelovikoľúbije tvojé mílostive, prijimí róždšuju ťá Bohoródicu moľáščujusja za ný, i spasí Spáse náš ľúdi otčájannyja."),
),
"S2": (
("","","Prorócy, múčenicy Christóvy, i svjatítelije, íže pódvih dóbryj blahočestnomúdrenno zakónno skončávše, i prijímše vincý neuvjadájemyja ot Bóha, neprestánno molíte o nás jehó bláhosť: jáko da dárujet nám prehrišénij proščénije, jáko Bóh blahouvítliv."),
("Mértven","","Prestávľšyjasja blahočéstno ot vrémennych, v selénijich izbránnych s právednymi sopričtí Vladýko, ťích upokojevája na mísťi prázdnujuščich, i v sládosti nekončájemij rájsťij, jáko bláh, vóľnaja prehrišénija i nevóľnaja ích, za krájňuju bláhosť prostívyj čelovikoľúbče."),
("Mértven","","Hlubinóju múdrosti čelovikoľúbno vsjá strójaj, i poléznoje vsím podavájaj, jedíne soďíteľu, upokój Hóspodi, dúšy ráb tvojích: na ťá bo upovánije vozložíša, tvorcá i ziždíteľa i Bóha nášeho."),
("Bohoródičen","","Tebé i sťínu, i pristánišče ímamy, i molítvennicu blahoprijátnu k Bóhu, jehóže rodilá jesí, Bohoródice beznevístnaja, vírnych spasénije."),
),
"K": (
"P1": (
"1": (
("","","Pojím Hóspodevi, provédšemu ľúdi svojá skvozí čermnóje móre, jáko jedín slávno proslávisja."),
("","","Čestníji bíserije javístesja, vinéc prosviščájušče čestnýja cérkve, strastotérpcy Christóvy dóblestvenňijšiji."),
("","","Oblistáša dohmát dobroďítelej božéstvennymi svitlosťmí premúdriji svjatítelije, i vírnych serdcá prosvitíša."),
("","","Tebé v prorócich i v prepodóbnych čúdnaho mólim Slóve: ťích moľbámi spasí nás."),
("","","Jehdá súd právednyj priídeši sotvoríti právedňijšij: neosuždénno Slóve, ťích moľbámi spasí nás."),
("Bohoródičen","","Hóspoda róždšuju ťá svídyj, žénskij lík tebé vo sľíd Ďívo stradáľčestvovav, tomú privedésja."),
),
"2": (
("","","Vódu prošéd jáko súšu, i jehípetskaho zlá izbižáv, Izráiľťanin vopijáše: izbáviteľu i Bóhu nášemu pojím."),
("","","Smértiju smérť Christóvu, i strasťmí strásť podražávše čéstnúju, múčenicy vsí, božéstvennyja i blažénnyja žízni polučíša."),
("","","Júnosti prehrišénija prezrjá, i hrichí prevoschoďá Christé Spáse, préžde usópšyja rabý tvojá so izbránnymi tvojími sopričtí."),
("","","Júže polučíša slávu že i rádosť, íže blažénnoje sťažávše prebyvánije, podážď bohátno tvojím rabóm, jáže prijál jesí mnohomílostive."),
("","","Začalá jesí Slóvo Ótčeje, plóti sojedínšejesja po ipostási, jéže iz tebé vseneporóčnaja otrokovíce, ád božéstvennoju síloju uprazdnívšeje."),
),
),
"P3": (
"1": (
("","","Utverždéj v načáľi nebesá, i zémľu osnovávyj na vodách mnóhich, utverdí mja k píniju slavoslóvija tvojehó Hóspodi."),
("","","Mérzkija žértvy stradáľcy kripčájšeju mýsliju ottrjásše, žértva čisťíjša Slóvu požéršemusja býša."),
("","","Obnovľájušče slovesý svjáščennoďíteľnymi obetšávšaja vsjákimi strasťmí, obnovívšaho mír božéstvenniji učenicý javístesja."),
("","","Jáže prorókom préžde javlénno, dánnaja blahodáť vsesvjatáho Dúcha, póstniki v posľídňaja božéstvennych darovánij ispólni."),
("","","Likóm svjatých sočetáj Bóže, vírno ot žitijá prešédšyja, i rajá žíteli sotvorí, za neizrečénnuju mílosť."),
("Bohoródičen","","Íže nikákože vmiščájemyj Iisús i Hospóď, neťisnovmístno v tvojé vselísja osvjaščénnoje črévo prečístaja Ďívo vsepítaja."),
),
"2": (
("","","Nebésnaho krúha verchotvórče Hóspodi, i cérkve ziždíteľu, tý mené utverdí v ľubví tvojéj, želánij kráju, vírnych utverždénije, jedíne čelovikoľúbče."),
("","","Očístivšesja drévňaho prarodíteľnaho padénija, kreščénijem i páki poroždénijem, strujámi že krovéj svojích okropívšesja Christú blažénniji, scárstvujete."),
("","","Íže mértv vóleju polóžsja vo hróbi, i živúščyja vo hrobích vzyvájaj Spáse, blahovolí, jáže ot nás prijál jesí, v selénijich právednych tvojích sím vodvorjátisja."),
("","","Blahoutróbijem božéstvennyja tvojejá Vladýko, i sojestéstvenňi razumivájemyja bláhosti, umolén byvája, hrichóv razrišénije podáv rabóm tvojím Spáse, upokój."),
("","","Voplóščsja iz čréva tvojehó, i býv čelovík, íže izrjádňi sýj jedín čelovikoľúbec, spasájet čelovíka ot vrát smértnych, Bohomáti prečístaja jedína vsepítaja."),
),
),
"P4": (
"1": (
("","","Uslýšach Hóspodi, smotrénija tvojehó tájinstvo, razumích ďilá tvojá, i proslávich tvojé božestvó."),
("","","Prešédše póprišče mučénija, teplotóju duchóvnoju stradáľcy Bohoblažénniji, veščéstvennuju prélesť popalíste."),
("","","Prepodóbnyja tvojá Hóspodi, i svjatíteli svítlo proslávil jesí: ťích božéstvennymi moľbámi, pričástnika mjá slávy tvojejá sotvorí."),
("","","Jéže božéstvennaho Dúcha vdochnovénije, proróki prosvitívšeje, ženám kríposť darová, vrahóv pohubľáti šatánija."),
("","","Slikovstvováti spodóbi so vsími svjatými, jáže prestávil jesí preblahíj rabý tvojá, v selénijich svjatých tvojích, umolén byvájaj."),
("","","Zakóna božéstvennaho podáteľa, jestestvá zakóna výše rodilá jesí, vozsozidájušča vseneporóčnaja, čelovíčeskoje jestestvó popólzšejesja."),
),
"2": (
("","","Tý mojá kríposť Hóspodi, tý mojá i síla, tý mój Bóh, tý mojé rádovanije, ne ostávľ ňídra Ótča, i nášu niščetú positív. Ťím s prorókom Avvakúmom zovú ti: síľi tvojéj sláva čelovikoľúbče."),
("","","Da tvojejá Vladýko slávy úzrjat, i svítlosť tvojú svitodátelnuju na nebesích polučát svítlo, vsják víd mučénij nosíti preterpíša, múčenicy božéstvenniji, tebí Christé pojúšče: síľi tvojéj sláva čelovikoľúbče."),
("","","Mnóhi obíteli u tebé Spáse súť, po dostojániju vsém razďiľájemy, po míri dobroďítelej: týja úbo ščédre ispólniti sích spodóbi, jáže víroju prestávľšyjasja, blahočéstno tebí vopijúščyja: síľi tvojéj sláva čelovikoľúbče."),
("","","Ráven nám čelovík javílsja jesí, bezsmértne, vsím rávno smérť preterpíl jesí, i žízni pokazál jesí púť: jejáže prestávľšyjasja spodóbi jáko čelovikoľúbec, proščénije sohrišénij jáko Vladýka podavája, i svíta pričástije dáruj."),
("","","Tý vírnych pochvalá jesí beznevístnaja, tý predstáteľnica, i tý pribížišče christiján: sťiná i pristánišče, k Sýnu tvojemú moľbý nósiši vseneporóčnaja, i spasáješi ot bíd, íže víroju i ľubóviju Bohoródicu čístuju ťá víduščyja."),
),
),
"P5": (
"1": (
("","","Prosvití nás poveľiniji tvojími Hóspodi, i mýšceju tvojéju vysókoju tvój mír podážď nám, čelovikoľúbče."),
("","","Preterpíša jázvy plotskíja božéstvenniji múčenicy, i sehó rádi jázvy náša prísno isciľájut, ujazvľájušče démony."),
("","","Svjatíji Bóžiji svjatíteli, molíte so vsími prepodóbnymi Christá, hrichóv ostavlénije darováti nám."),
("","","Plóť ránami i obučénijem iznurívše, žený prepodóbnyja čelovikoľúbče, v nadéždach blahích spodóbišasja."),
("","","Smérť pohubív, smértiju tvojéju bezsmértne, uméršyja v nadéždi i žízni, vírnyja tvojá rabý jáko čelovikoľúbec upokój."),
("Bohoródičen","","Uprazdníla jesí čístaja práotec osuždénije, vsích opravdávšaho plótiju róždši Iisúsa jedínaho Hóspoda."),
),
"2": (
("","","Vskúju mjá otrínul jesí ot licá tvojehó svíte nezachodímyj, i pokrýla mjá jésť čuždája ťmá okajánnaho? No obratí mja, i k svítu zápovidej tvojích putí mojá naprávi, moľúsja."),
("","","Neskazánnyja tvojejá slávy, i jéže páče slóva tvojehó blažénstva, v selénijich svjatých, iďíže hlás krasén prázdnujuščich, prestávlennyja k bezstrástňij žízni uščédriv, spodóbi jedíne čelovikoľúbče."),
("","","Iďíže číni ánheľstiji, iďíže právednych sobóri rádujutsja, Avraáma v ňídrich vselí Spáse rabý tvojá, i so derznovénijem strášnomu i božéstvennomu prestólu predstáti blahovolí blahoutróbne."),
("","","Očiščénije nám javílsja jesí, právda že i izbavlénije, i jázvoju tvojéju náša némošči uvračevál jesí. Tý úbo ščédre, jáko bláh, prestávľšyjasja v sládosti rájsťij učiní."),
("","","Mílostivno mílostive, čelovíčestva míru vosprijál jesí, íže božéstvennyja slávy prevýššimi ukrašájaj vozvyšéňmi, ot ďivíčeskich ložésn plóť prijém oduševlénu, slovésnu, jéjuže smérť pohubíl jesí."),
),
),
"P6": (
"1": (
("","","Očísti mjá Spáse, mnóha bo bezzakónija mojá, i iz hlubiný zól vozvedí, moľúsja: k tebí bo vozopích, i uslýši mjá, Bóže spasénija mojehó."),
("","","Hnojéniji vášimi čéstnými dúš isciľájete nedúhi svjatíji múčenicy, i sohnítija ťilés prísno otjémlete, ujazvľájušče lukávych démonov mnóžestvo."),
("","","Prepodóbnych lík, svjatítelej že soslóvije, i žén svjaščénnych božéstvennoje sočetánije, tvérdo stradáľčestvovavše, na nebesích naslaždénija nasľídovaša."),
("","","Umertvívše póstnicy plóť, živót ulučíste: pásše že dóbri Christóvo stádo, svjatítelije premúdriji, bezsmértnyja slávy po končíňi spodóbistesja."),
("","","Živých Slóve živót, i uméršich pokój sýj, jáže ot nás božéstvennym tvojím manovénijem, prestávlenyja rabý tvojá, Avraáma v ňídra uhódnika tvojehó vselí."),
("Bohoródičen","","Vselísja v ťá čístaja, Ótčeje sijánije, i neveščéstvennymi zarjámi svojehó božestvá, ťmú razruší mnohobóžija, i mír prosvití."),
),
"2": (
("","","Očísti mjá Spáse, mnóha bo bezzakónija mojá, i iz hlubiný zól vozvedí, moľúsja: k tebí bo vozopích, i uslýši mjá, Bóže spasénija mojehó."),
("","","Múčency tvojí Spáse, mnóhija múki preterpíša, ľubóviju tvojéju dušámi ujazvívšesja svjatíji, tvojejá želájušče prisnosúščnyja slávy, i sládkaho tvojehó pričástija."),
("","","Črévo vrahá smértiju rasterzál jesí Spáse, i vsjá jáže v ném júzniki voskresíl jesí, živót darovávyj: jehóže prestávlenyja blahoďíteľu, spodóbi."),
("","","Sléz i vozdychánija súščyja vo áďi rabý tvojá svobodí Spáse: jáko jedín bo blahoutróben, otjál jesí vsjáku slézu ot vsjákaho licá, vírno blahoslovjáščich ťá."),
("","","Íže jestestvóm soďíteľ, soďíteľstvujetsja v ložesnách tvojích, pólnyj jáko Bóh, istoščavájetsja vseneporóčnaja: jedín bezsmértnyj, za vsích spasénije umirájet smértiju."),
),
),
"K": (
("","","So svjatými upokój Christé dúšy ráb tvojích, iďíže ňísť boľízň, ni pečáľ, ni vozdychánije, no žízň bezkonéčnaja."),
("","","Sám jedín jesí bezsmértnyj, sotvorívyj i sozdávyj čelovíka: zemníji úbo ot zemlí sozdáchomsja, i v zémľu tújužde pójdem, jákože poveľíl jesí, sozdávyj mjá, i rekíj mí: jáko zemľá jesí, i v zémľu otídeši, ámože vsí čelovícy pójdem, nadhróbnoje rydánije tvorjášče písň, allilúia."),
),
"P7": (
"1": (
("","","Ótrocy Bohočestíviji v Vavilóňi óbrazu zlatómu ne pokloníšasja, no posreďí péšči óhnennyja orošájemi písň vospiváchu, hlahóľušče: prevoznosímyj otcév i nás Bóže, blahoslovén jesí."),
("","","Razrušívše oplót zločéstija vjazáňmi svjaščénnymi vášimi vsechváľniji, mojích sohrišénij brémja rastočíte, i vopijúšča múčenicy spasíte: blahoslovén Bóh otéc nášich."),
("","","Túčami svjaščénnych vášich krovéj, jereséj óhň uhasíste: i plámenem svjatítelije premúdriji popalíste nečéstija prélesti plévely, i vírnych dúšy prosvitíste."),
("","","Póstnicy vozderžánijem plóti umertvívše, živút po smérti: prorók že právednych lík, i sočetánije žénskoje stradáľčestvovavšeje proslávisja. Ťích molítvami Christé, ot bíd izbávi nás."),
("","","Vsích v nadéždi žízni usópšich dúšy Christé, upokój, prezirájaj jáže v žitijí ťích sohrišénija, za mnóhoje blahoutróbije tvojé, blahoutróbne jedíne Spáse, otcév Bóže blahoslovén jesí."),
("Bohoródičen","","Vsjá jáko čístaja vosprijála jesí vo utróbu Slóvo voploščájemo: tohó molí prečístaja, prehrišénij očístiti mjá duší i ťíla, k tebí čístoju víroju pritekájuščaho."),
),
"2": (
("","","Ótrocy jevréjstiji v peščí popráša plámeň derznovénno, i na rósu óhň preložíša, vopijúšče: blahoslovén jesí Hóspodi Bóže vo víki."),
("","","Vsé želánije múčenikov prostrésja ko jedínomu Vladýci, ľubóviju tomú sojediňájemoje, i pojúščeje: blahoslovén jesí Hóspodi Bóže vo víki."),
("","","Podajéši víroju prestávlšymsja svítlosť božéstvennaho cárstvija, netľínija odéždu dárujaj vopijúščym: blahoslovén jesí Hóspodi Bóže vo víki."),
("","","Ispólni rádosti i vesélija, jáže prestávil jesí rabý tvojá ščédre, spodóbivyj ích vzyváti tebí i píti: blahoslovén jesí Hóspodi Bóže vo víki."),
("","","Razrišája Jévinu kľátvu, v preneporóčnuju vselílsja jesí Ďívu, blahoslovénija istóčnik istočája vopijúščym: blahoslovén prečístaja plód tvojehó čréva."),
),
),
"P8": (
"1": (
("","","Bohohlahóliviji ótrocy v peščí so ohném plámeň popirájušče pojáchu: blahoslovíte ďilá Hospódňa Hóspoda."),
("","","Chrámy ídoľskija nizložíste stradáľčestvujušče, i sebé Tróicy božéstvennyja chrámy sozdáste, strastotérpcy Hospódni ánhelom sobesídnicy."),
("","","Svjaščénnicy tvojí Christé, v právdu oblékšesja prepodóbno s súščimi ot víka nýňi rádujutsja, tvojú dobrótu zrjášče božéstvennuju jasňíjše."),
("","","Svjaščénňijšich molítvami prorók tvojích, i žén prisnoslávnych, súščich ot víka právednych, mílosti tvojá Slóve míru tvojemú dáruj."),
("","","Pravosúde, jehdá chóščeši sudíti, jáže ot nás prijál jesí, neosuždény sobľudí, prezrív ťích sohrišénija Vladýko."),
("Bohoródičen","","Jávľšisja Ďívo, duší mojejá temňíjšija óblaki prosviščénijem tvojím otžení, právdy sólnca róždšaja."),
),
"2": (
("","","Musikíjskim orhánom sohlasújuščym, i ľúdem bezčíslennym poklaňájuščymsja óbrazu v deíri, trí ótrocy ne povinúvšesja, Hóspoda vospiváchu, i slavoslóvľachu vo vsjá víki."),
("","","Zemlenýja pódvihi prošédše, nebésnyja prijáša vincý múčenicy ístinniji, vopijúšče tí neprestánno: Hóspoda pójte, i prevoznosíte vo vsjá víki."),
("","","Sošéd v róv preispódňijšij, íže vo hrobích živúščyja voskrésíl jesí živonačáľnoju dlániju, i pódal jesí rabóm tvojím, víroju préžde usópšym pokój ščédre."),
("","","Jáko istóčnik žízni prisnosúščnyja, jáko sládosti sýj potók, rabý tvojá blahočéstno k tebí prestávlenyja spodóbi píti, i slavoslóviti ťá vo vsjá víki."),
("","","Maríje Ďívo Bohoródice, jáže Bóha spása čelovíkom róždšaja ťilésno, spasáj víroju pojúščyja tvojé roždestvó, i prevoznosjáščyja vo vsjá víki."),
),
),
"P9": (
"1": (
("","","Ťá neiskusobráčnuju Máter Bóha výšňaho, ťá páče umá róždšuju slóvom voístinnu Bóha, výššuju prečístych síl, nemólčnymi slavoslovléniji veličájem."),
("","","Stojáchu múčenicy pred neprávednymi sudíšči, vsjú neprávdu blahodátiju Christóvoju osudívše, i jáže ot ních deržímyja izjémľušče, i vincý právdy prijémľušče."),
("","","Javístesja jákože kormíla cérkve, korábľ vés blahočéstno naprávivše Bóžijich zápovidej, vsí pervosvjaščennonačáľnicy i pástyrije blažénniji: sehó rádi jáko okormíteli vás čtím."),
("","","Sobór mólit ťá Hóspodi prorók, i prepodóbnych, soslóvije žén stradáľčestvovavšich svitľíjše, i vozsijávšich vo obučéniji čelovikoľúbče, mólit ťá, ščedróty tvojá nám dáruj."),
("","","Jejáže spodóbišasja čésti i slávy víčnyja vsí svjatíji Christóvy: jáže vírno otšédšyja žitijá sújetnaho, pričastítisja rabý tvojá spodóbi, mílostive, molítvami svjatých tvojích."),
("Bohoródičen","","Nosjášči ispolnénije blahích vo objátijich vseneporóčnaja, ispólni nás moľbý ráb tvojích: i jáže k Bóhu šéstvija náša isprávi, dobroďíteľňi ukripľájušči žíteľstvovati."),
),
"2": (
("","","Ustrašísja vsják slúch neizrečénna Bóžija snizchoždénija, jáko výššij vóleju sníde dáže i do plóti, ot ďivíčeskaho čréva býv čelovík. Ťímže prečístuju Bohoródicu vírniji veličájem."),
("","","Imúšče múčenicy Christóvy nepobidímuju kríposť, i neodoľíjemuju, uničižíste mučítelej bezbóžnoje veľínije, i cárstvija nebésnaho jávi spodóbistesja, prosviščájemi Tróičnymi zarjámi dostochváľniji."),
("","","Razrušísja ád hórkij, razrušívšu tí jehó čelovikoľúbče, i voskresívšu jáže ot víka támo spjáščyja: no i nýňi jáko bláh prešédšyja k tebí blahoutróbne, nevečérňaho svíta tvojehó spodóbi."),
("","","Vés jesí Spáse sládosť: vés jesí želánije, i ľubóv voístinnu nenasýtnaja: vés jesí dobróta neskazánnaja: prestávľšyjasja úbo k tebí, tvojejá krasotý naslaždátisja blahovolí, i božéstvennyja dobróty tvojejá spodóbi."),
("","","Spasí mja Máti Bóžija, róždšaja Christá Spása mojehó, Bóha i čelovíka, suhúba jestestvóm, ne ipostásiju: jedinoródna úbo ot Otcá, iz tebé že pervoródna vsejá tvári: jehóže v suhúbom jesteství veličájem."),
),
),
),
"CH": (
("","","Veľmí podvizástesja svjatíji, múki bezzakónnych preterpíste dóblestvenňi, i Christá ispovídavše pred carí: i páki prestávľšesja ot žitijá, síly ďíjstvujete v míri: i nemoščnýja isciľájete ot strastéj ích svjatíji, moľášče spastísja dušám nášym."),
("","","Múčency Christóvy nepobidímiji, pobidívše prélesť síloju kréstnoju, naslaždájetesja blahodáti víčnyja žízni: mučítelej preščénija ne ubojástesja, múkami uraňájemi veselístesja: i nýňi króvi váša býša iscilénija dušám nášym, molíte spastísja dušám nášym."),
("","","Vo broňá víry obólkšesja dóbri, i óbrazom kréstnym vooružívše sebé, vójini blahomóščniji javístesja: mučítelem múžeski protívistesja, i dijávoľu prélesť nizložíste, pobídnicy bývše, vincév spodóbistesja, moľáščesja prísno o nás, vo jéže spastísja nám."),
("Pokóin","","Bezmérna jésť blúdno živúščym múka, skréžet zubóv, i pláč neuťíšimyj, hejénna óhnennaja, i tmá kromíšnaja, i čérv neusypájemyj, slézy neďíjstvíteľny, i súd nemílostivyj. Sehó rádi préžde koncá vozopijím, hlahóľušče: Vladýko Christé, jáže izbrál jesí, so izbránnymi tvojími upokój."),
("Bohoródičen","","Áz, Ďívo svjatája Bohoródice, k pokróvu tvojemú pribiháju, vím, jáko obrjášču tobóju spasénije: móžeši bo čístaja, pomoščí mňí."),
),
"ST": (
("","O preslávnaho čudesé!","Cárskoje mí podpisáv svoboždénije, šáry červlénymi, pérsty tvojá okrovavív Vladýko, i obahrív tvojéju króviju. Ťímže ťá víroju mólim: s pervoródnymi tvojími pričtí, i právednych tvojích rádosti spodóbi ulučíti, prešédšyja k tebí blahoutróbnomu."),
("","","Svjaščénstvovav jáko čelovík, zaklán býv jáko áhnec, prinošénije prinosílsja jesí Otcú, čelovíka ot tlí izymája: prestávľšyjasja úbo jáko čelovikoľúbec, v živúščich straňí učiní, iďíže potócy sládosti prolivájutsja, iďíže istóčnicy prisnosúščestva istekájut."),
("","","Hlubinóju múdrosti neizrečénnyja tvojejá preďilopolaháješi živót, i provídiši búduščaja, i k žízni druhój preseľáješi rabý tvojá, jáže prijál jesí úbo, učiní na vodách upokojénija, vo svítlosti svjatých, Hóspodi, iďíže hlás rádovanija i chvalénija."),
("Bohoródičen","","Slóvo sýj nevídimyj, jedinoráslennyj Otcú i Dúchu sozráčnyj, mené rádi čelovíka plótiju javílsja jesí: jáko mílostiv úbo i čelovikoľúbec, jáže ot žitijá prešédšyja, blahoľípija tvojehó i krasotý dobrótami, ťích prosvití živonačáľňijšij."),
("","","Jáko začénši beznačáľnoje Slóvo Bóha Otcá, máternim tvojím derznovénijem priľížno molí, Bohorodíteľnice, učiníti rabý tvojá, iďíže likovánije právednych neprestánnoje veseľáščichsja i chváľaščich ťá, iďíže svítlosť víčnujuščaja, i sládkij hlás prázdnujuščaho."),
),
)
#let L = (
"B": (
("","","Pomjaní nás Christé Spáse míra, jákože razbójnika pomjanúl jesí na drévi: i spodóbi vsích jedíne ščédre, nebésnaho cárstvija tvojehó."),
("","","Vsím boľíznem krípko priobščájuščesja, jázvami vášimi svjatými strastotérpcy, ujazvíste vsjá tmý bisóv, božéstvennoju blahodátiju ."),
("","","Prepodóbniji Christóvy, i božéstvenniji svjatítelije, sobór proróčeskij, i vsích svjatých právednych, nebésnoje polučíša nasľídije: íchže po dostojániju da ublážím."),
("","","Vsích vírno ot žitijá prestávlennych učiní Bóže, v stranách právednych: i rajá pokaží nasľídniki, Bohoľípno pisnoslóvjaščich ťá."),
("","","Pínije tí Tróice, prinošú konéčnoje: íchže ot zemlí prestávila jesí vo víri, svjatých spodóbi selénija, i mené blúdnaho pomíluj."),
("Bohoródičen","","Vsích dóbrych ispolnénije róždši, ispólni náša, Ďívo, molénija. Sohrišénij nám ostavlénija prosjášči, i prosviščénija i vélija mílosti."),
),
)
#import "@preview/arkheion:0.1.0": arkheion, arkheion-appendices
#show: arkheion.with(
title: "Using readmers and hapmers in assessing phase switching after read error correction of Oxford Nanopore Sequences",
authors: (
(name: "<NAME>", email: "<EMAIL>", affiliation: "Institute of Medical Genetics, Center for Pathobiochemistry and Genetics, Medical University of Vienna"),
(name: "<NAME>", email: "<EMAIL>", affiliation: "Institute of Medical Genetics, Center for Pathobiochemistry and Genetics, Medical University of Vienna"),
(name: "<NAME>", email: "<EMAIL>", affiliation: "Institute of Medical Genetics, Center for Pathobiochemistry and Genetics, Medical University of Vienna"),
(name: "<NAME>", email: "<EMAIL>", affiliation: "Institute of Medical Genetics, Center for Pathobiochemistry and Genetics, Medical University of Vienna")
),
abstract: [Methods for sequence error correction can improve sequence accuracy; however, unintended errors can be introduced during error correction. One such example is phase switching, whereby sequences derived from genomes containing more than one parental copy have contributions from more than one parental haplotype. Such switches are mistakes that can confound downstream analyses, especially _de novo_ genome assembly. While DNA sequences do not possess words as linguistic languages do, one can partition a DNA sequence into word-like objects called k-mers. K-mers are pieces of DNA sequence of length k that can describe various properties of DNA sequences. With regard to phase switching, there are k-mers present in one parental haplotype that are not found in the other(s). These so-called hapmers can represent inaccuracies in that parental haplotype's assembly but also correct, unique DNA variation. Here we investigated the effect of DNA sequence error correction on phase switching at the sequence/read level. Using several error-correction methods, we find all methods tested are similar to raw, presumably phase-switch-free Oxford Nanopore Technologies (ONT) sequences in the percentage of readmers (k-mers from the ONT sequences) matching one parental haplotype's hapmers. This work demonstrates an efficient method, implemented in the Julia programming language, to assess whether an error-correction method has introduced phase switching.],
keywords: ("k-mers", "hapmers", "Oxford Nanopore Technologies", "ONT", "phase switches"),
date: "October 18, 2024",
)
#set cite(style: "chicago-author-date")
#show link: underline
= Introduction
The advancement of sequencing technologies, such as Oxford Nanopore Technologies (ONT) sequencing, has revolutionized genomic research by enabling rapid and cost-effective long-read DNA sequencing. However, ONT sequencing reads tend to contain insertion and deletion (indel) errors, especially when the template sequence has homopolymers @delahaye2021sequencing. Indel sequencing errors can be corrected by a variety of computational methods, but it is not always clear whether unintentional errors are introduced during the indel error-correction process, as no method is perfect @huang2021homopolish@wang2020nanoreviser. One such type of unintentionally introduced error is phase switching, a phenomenon where reads derived from multiple parental genome copies contain a mixture of more than one parental copy's DNA nucleotides @merqury. Phase switching may not be detrimental to all analyses, but it can confound downstream applications such as _de novo_ genome assembly, whereby variation such as SNPs needed for phasing a genomic region could be lost, rendering that region unphaseable (<NAME>, _pers. comm._).
#parbreak()
While DNA sequences do not possess words as linguistic languages do, one can partition a DNA sequence into word-like objects called k-mers @waterman198852. K-mers are pieces of DNA sequence of length k that can describe various properties of DNA sequences. With regard to phase switching, there are k-mers present in one parental haplotype that are not found in the other(s). These so-called hapmers can represent inaccuracies in that parental haplotype's assembly but also correct, unique DNA variation (<NAME>, _pers. comm._). Further, during phase switching, there is a mixture of hapmers from heterozygous genomic regions that do not coexist on the same homologous chromosome.
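As a toy illustration of this idea, hapmers can be computed as the set difference between the k-mer sets of the two parental haplotypes. The sketch below uses made-up sequences and Python for brevity (the analysis scripts in this study were written in Julia):

```python
# Toy illustration of hapmers: k-mers of length k present in one
# parental haplotype but absent from the other. The sequences below
# are hypothetical; they differ by a single SNP.
def kmers(seq, k):
    """Return the set of all substrings of length k in seq."""
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

maternal = "ACGTACGTTGCA"
paternal = "ACGTACATTGCA"  # SNP: G -> A at position 6
k = 4

# Hapmers are the k-mers unique to each haplotype.
maternal_hapmers = kmers(maternal, k) - kmers(paternal, k)
paternal_hapmers = kmers(paternal, k) - kmers(maternal, k)
```

A single SNP alters up to k overlapping k-mers, so even a short heterozygous site leaves a detectable hapmer signature in reads spanning it.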
#parbreak()
While there are computational methods for estimating phase switching at the assembled contig or chromosome level @yak@merqury, here we investigate the effect of DNA sequence error correction on phase switching at the sequence/read level by comparing the k-mers in ONT reads (readmers) and percentage of hapmers from one or the other parental haplotype with human cell-line DNA. Using a computational approach implemented in the Julia programming language, we present a systematic evaluation of several error-correction methods.
= Materials and Methods
== Cell Culture
We cultured HG002/RM8391 cells (NIST, Coriell), following the manufacturer's guidelines for culturing and expanding the cells.
== DNA Extraction, Library Prep, Sequencing
We extracted DNA from the cultured HG002 cells by automated extraction with an EZ1 Advanced XL Robot (Qiagen) and the EZ1 DNA Tissue Kit (Qiagen) and then performed left-size selection using the Short Read Eliminator (SRE) Kit (Circulomics) to enrich for reads greater than 25,000 bases. We incubated the samples with the SRE reagent at a 1:1 ratio for 1 hour at 50°C to increase DNA yields. Next, we used the ONT SQK-LSK114 kit for ONT library preparation and followed the ONT protocol for “Ligation sequencing 30kb human gDNA” (Version: GDH_9174_v114, revD, 10.11.2022) starting from the step “DNA repair and end-prep”. Finally, we sequenced the libraries on two R10.4.1 ONT PromethION flowcells using an ONT P2 Solo device connected to an ONT GridION.
== Read Processing
We basecalled the resulting pod5 squiggles (i.e., the current raw output files from ONT sequencing) with Dorado 0.6.0 @ONT2024 using the dna\_r10.4.1\_e8.2\_400bps\_sup\@v4.3.0 basecalling model to initially generate SUP, raw ONT reads.
== Error correction
While we are well aware that there are numerous raw ONT sequence error-correction tools, we focused on the following tools, as these were used in a prior analysis, to assess whether they might negatively impact phase switching at the sequence/read level. During error correction, we made a Snakemake workflow @snakemake for processing and also used BBMap/BBTools scripts @Bushnell such as partition.sh version 39.01, shred.sh version 39.01, and filterbyname.sh version 39.06 for various operations: subsetting the reads, ensuring reads > 1,000,000 bases were split for effective downstream processing by one of the error-correction tools, Peregrine 2021 @Peregrine_2021, and downstream comparison of the same reads, respectively.
=== Herro
#cite(<Stanojevic>, form: "prose") developed the deep-learning, haplotype-aware ONT error-correction tool Herro. We used Herro commit \# 5e7b70f and the version 0.1 R10.4.1 read-correction model. We first used seqkit 2.5.1 @Shen to obtain read identifiers and minimap2 version 2.26-r1175 @Li1 to perform read overlapping, running the Herro developers' create_batched_alignments.sh script. We then ran Herro inference (i.e., error correction) using a batch size of 64 and a single Nvidia A100 40GB graphical processing unit.
=== Brutal Rewrite
Brutal Rewrite is an error-correction method that uses k-mers for error correction @Br. We used Brutal Rewrite commit \# ad87f92, the graph algorithm for de Bruijn graph-based error correction @pevzner1989tuple, and a k-mer length of 19 on Herro-corrected reads.
=== Peregrine 2021
Peregrine 2021 @Peregrine_2021 is a genome assembler similar to the Peregrine genome assembler @Peregrine, with both assemblers using sparse-hierarchical minimizers. We used Peregrine 2021 version 0.4.13, performing one round of overlap-based error correction using the Brutal Rewrite-corrected reads as input and 6, 56, and 80 as the values for the reduction factor, k-mer size, and window size settings of Peregrine 2021, respectively.
=== DeChat
DeChat is an error-correction method designed for raw ONT reads @Li2024.05.09.593079, which we nevertheless applied to already-corrected reads. We used default settings for version 1.0.0, using the Peregrine 2021-corrected reads as input and retaining only the _recorrected.fa_ read output of DeChat.
== Phase switching analyses
=== Read alignment
We used minimap2 version 2.28-r1209 with the following options to map raw, Herro, Brutal Rewrite, Peregrine 2021, and DeChat reads against either the maternal or the paternal haplotype separately from _hg002v1.0.1.fasta_ @HG002: output "X" and "=" in cigar strings, no secondary reads, soft-clipping of supplementary alignments, base-level alignment, output in SAM format, and the high-quality, long-read preset. We then used SAMtools 1.9 @samtools to retain only primary alignments and those with mapping qualities equal to 60 and converted those SAM alignments into pairwise-mapping format (PAF) alignments with minimap2's paftools.js sam2paf subprogram. Next, we retained the longest, primary alignment from each read and, for Herro, Brutal Rewrite, Peregrine 2021, and DeChat reads, excluded those reads that Herro had split into parts.
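The filtering step can be sketched as follows. This is a simplified Python illustration rather than the actual pipeline; field positions follow the PAF format, where column 1 is the query name, columns 3-4 the query start/end, column 12 the mapping quality, and the tp:A: tag distinguishes primary (P) from secondary/supplementary alignments:

```python
# Keep primary alignments with mapping quality 60, then retain only
# the longest such alignment per read (a sketch of the filtering
# described above, not the actual analysis script).
def longest_primary_mapq60(paf_lines):
    best = {}
    for line in paf_lines:
        cols = line.rstrip("\n").split("\t")
        qname, qstart, qend = cols[0], int(cols[2]), int(cols[3])
        mapq = int(cols[11])
        primary = any(tag == "tp:A:P" for tag in cols[12:])
        if mapq != 60 or not primary:
            continue
        span = qend - qstart
        if qname not in best or span > best[qname][0]:
            best[qname] = (span, cols)
    # Map each passing read name to its longest alignment's fields.
    return {q: cols for q, (span, cols) in best.items()}
```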
=== K-mers and hapmers
We developed a Julia programming language script @julia that takes read alignment information from minimap2 PAF files, FASTA files, and FASTA-index files of the reads and extracts ONT read k-mers (hereafter readmers) and associated k-mers unique to each haplotype (hereafter hapmers). We required that, for a read alignment to be used for readmer and hapmer analysis, the query start and query stop be identical for the maternal and paternal alignments (i.e., the same region of the read aligned to both haplotypes). Next, we iterated over each alignment meeting our criteria to extract the readmers and noted the total hapmers and matching hapmers for both the maternal and paternal haplotypes. We processed the resulting tables with a Julia programming language script, converting the numbers of matches to percentages. For each read alignment, we aggregated the difference in the percentages of matching hapmers by taking the absolute value of the difference between the maternal and paternal matching percentages. For example, if we had a read alignment with 75% maternal hapmer matches and 25% paternal hapmer matches, then we would have 50% as the resulting value for that read alignment, and likewise 50% if the percentages were flipped between haplotypes (i.e., we would not have -50%). Finally, we generated histograms with a bin width of 1 from 1% to 100%, with a cut-off of 6,000 read alignments per bin on the y-axis or no cut-off. For all analyses we used a k-mer length of 15.
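The aggregation just described can be sketched as follows (a minimal Python illustration of the logic; the actual scripts were written in Julia):

```python
# Convert hapmer-match counts to percentages, take the absolute
# maternal-paternal difference per read alignment, and bin the
# differences into a histogram with 1%-wide bins.
def hapmer_difference(mat_match, mat_total, pat_match, pat_total):
    mat_pct = 100.0 * mat_match / mat_total
    pat_pct = 100.0 * pat_match / pat_total
    # 75% vs 25% and 25% vs 75% both yield 50, never -50.
    return abs(mat_pct - pat_pct)

def histogram(differences, width=1):
    bins = {}
    for d in differences:
        b = int(d // width) * width  # left edge of the bin
        bins[b] = bins.get(b, 0) + 1
    return bins
```

A phase-switch-free read should match almost exclusively one haplotype's hapmers, so its absolute difference lands near 100%; differences well below that suggest mixed parental contributions.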
=== Alignment statistics
Using the same reads as in the hapmer analysis, we realigned these sequences to both _hg002v1.0.1.fasta_ assembly haplotypes simultaneously using minimap2 version 2.28-r1209 with the following options: output "X" and "=" in CIGAR strings, no secondary reads, soft-clipping of supplementary alignments, base-level alignment, output SAM format, and the high-quality, long-read assembly preset (i.e., lr:hqae rather than lr:hq). We then used the program Bam Error Stats Tools ("best", commit \#c1c69bb, @best) to analyze errors in the aligned reads.
= Results
We generated 15,048,314 raw, super (i.e., SUP) accuracy reads that we error corrected with Herro, resulting in 4,578,144 reads, of which 582,208 were split parts of reads and were not analyzed for phase switching. Brutal Rewrite does not filter out reads; after one round of overlapping with Peregrine 2021, 4,490,689 reads remained, of which 532,757 were split parts and were not analyzed for phase switching. DeChat did not filter out any reads from Peregrine 2021. @table1 shows the number of read alignments meeting our criteria for each correction method, prior to requiring the same read identifier in all 5 datasets for downstream hapmer analysis.
In general, the percentage of read alignments passing our filtering criteria of mapping quality 60 and only the longest, primary alignment increased with each successive correction method, except that Brutal Rewrite had slightly lower percentages than Herro (@table1).
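The percentages in @table1 are simply retained alignments over total reads in each dataset; recomputing two of them as a sanity check (values taken from the table):

```python
def percent_of_total(n_alignments, n_total_reads):
    """Share of a dataset's reads whose alignments survived filtering."""
    return 100.0 * n_alignments / n_total_reads

# Raw maternal alignments from @table1: 4,029,514 of 15,048,314 reads.
raw_maternal = percent_of_total(4_029_514, 15_048_314)
# Herro maternal alignments: 1,733,156 of 4,578,144 reads.
herro_maternal = percent_of_total(1_733_156, 4_578_144)
```

Both values reproduce the table's 26.7772% and 37.8572% to within rounding.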
#show figure.caption: strong
#show figure.where(
kind: table
): set figure.caption(position: top)
#figure(
caption: [Number of read alignments and percent out of total reads after filtering for alignments with mapping quality 60 and the longest, primary alignments. Herro, Brutal Rewrite, Peregrine 2021, and DeChat read alignments were further filtered for any reads split by the Herro algorithm. Coverage is estimated based on a haploid genome size of 3.1 billion bases.],
table(
align: center,
columns: (auto, auto, auto, auto, auto, auto),
row-gutter: (2pt, auto, auto, auto, auto, auto),
stroke: 0.5pt,
inset: 5pt,
[], [Total Reads], [Maternal Alignments], [Paternal Alignments], [N50], [Coverage],
[Raw], [15,048,314], [4,029,514 (26.7772%)], [3,962,042 (26.3288%)], [21,268], [40.3x],
[Herro], [4,578,144], [1,733,156 (37.8572%)], [1,705,413 (37.2512%)], [24,437], [26.3x],
[Brutal Rewrite], [4,578,144], [1,733,152 (37.8571%)], [1,705,385 (37.2506%)], [24,437], [26.3x],
[Peregrine 2021], [4,490,689], [1,724,488 (38.4014%)], [1,697,130 (37.7922%)], [24,152], [25.7x],
[DeChat], [4,490,689], [1,724,727 (38.4067%)], [1,697,341 (37.7969%)], [24,153], [25.7x],
)
) <table1>
There were 1,145,743 read alignments that had the same query start and query stop for the maternal and paternal haplotypes and were present in the raw, Herro, Brutal Rewrite, Peregrine 2021, and DeChat datasets. This 5-way intersection allowed us to hold the reads constant while varying only which correction, if any, was applied when examining readmers matching hapmers.
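The 5-way intersection reduces to a set intersection over read identifiers; a minimal sketch (function name hypothetical):

```python
from functools import reduce

def shared_read_ids(*datasets):
    """Read IDs present in every dataset (e.g., the raw, Herro, Brutal
    Rewrite, Peregrine 2021, and DeChat alignments meeting the criteria)."""
    return reduce(set.intersection, (set(d) for d in datasets))
```

Only reads surviving the filters in all five datasets contribute to the downstream hapmer comparison.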
Looking at @figure1, one can see that the majority of the read alignments had close to 100% of hapmers from a single parental haplotype, suggesting little to no phase switching. The Peregrine 2021 method introduced a few taller, lower-percentage peaks, which could indicate a slight increase in phase switching at the read level. That being said, this was only a tiny increase relative to the almost 1 million read alignments in the rest of the histogram (@figure2).
#show figure.caption: strong
#figure(
image("hapmers-histogram-Newsame_reads2.svg", width: 110%),
caption: [Histograms of absolute value of difference in matching hapmer percentage between maternal and paternal HG002 haplotypes. The y-axis was cut-off at 6,000 read alignments for visualization of low frequency data. The vast majority of occurrences were close to 100% suggesting that most of the read alignments possessed hapmers from a single haplotype and little to no phase switching.]
) <figure1>
#show figure.caption: strong
#figure(
image("hapmers-histogram-Newsame_reads-full2.svg", width: 110%),
caption: [Histograms of absolute value of difference in matching hapmer percentage between maternal and paternal HG002 haplotypes. The y-axis is not cut-off and shows the staggering number of read alignments with almost 100% hapmers from a single haplotype.]
) <figure2>
@figure2 shows in greater detail the high proportion of total read alignments coming from a single haplotype (i.e., how many read alignments had approximately 100% of hapmers matching a single haplotype).
Interestingly, almost 12% of the analyzed read alignments aligned in regions where 0 hapmers were present (i.e., homozygous genomic regions) (@table2). Keeping in mind the distinction between a matching hapmer (i.e., a hapmer in a heterozygous genomic region matching a readmer in a read alignment) and a hapmer (i.e., a hapmer in a heterozygous region), we also observed read alignments with 0 matching hapmers but >0 total hapmers in the genomic region (@table2).
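The distinction drawn here can be made explicit with a tiny classifier over the per-alignment hapmer counts (a hypothetical helper mirroring the categories of @table2):

```python
def classify_alignment(total_hapmers, matching_hapmers):
    """Bucket a read alignment by the hapmer content of its aligned region."""
    if total_hapmers == 0:
        return "homozygous region (0 total hapmers)"
    if matching_hapmers == 0:
        return "heterozygous region, 0 matching hapmers"
    return "heterozygous region, readmers matching hapmers"
```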
#show figure.caption: strong
#show figure.where(
kind: table
): set figure.caption(position: top)
#figure(
caption: [Number of read alignments (out of 1,145,743 total) based on whether the reads were aligned in heterozygous or homozygous regions.],
table(
align: center,
columns: (auto, auto, auto, auto),
row-gutter: (2pt, 2pt, auto, auto, auto, auto, auto),
stroke: 0.5pt,
inset: 5pt,
[], table.cell(colspan: 2)[Read Alignments in Heterozygous Regions #parbreak() (>0 Total Hapmers)], [Read Alignments in Homozygous Regions #parbreak() (0 Total Hapmers)],
[], [Readmers Matching Hapmers], table.cell(colspan: 2)[Readmers Matching 0 Hapmers],
[Raw], [1,009,418], [7,483], [128,842],
[Herro], [1,009,411], [2,232], [134,100],
[Brutal Rewrite], [1,009,411], [2,232], [134,100],
[Peregrine 2021], [1,007,110], [2,084], [136,549],
[DeChat], [1,007,150], [2,044], [136,549],
)
) <table2>
#show figure.caption: strong
#show figure.where(
kind: table
): set figure.caption(position: top)
#figure(
caption: [Alignment statistics from the program Best (Bam Error Stats Tools) of reads used in hapmer analyses. Abbreviations: Compres for compressed, Kbp for Kilobase pair, Inser for Insertion, Del for Deletion, Hp for homopolymer.],
table(
align: center,
columns: (auto, auto, auto, auto),
row-gutter: (2pt, auto, auto, auto, auto, 2pt, 2pt, auto, auto, auto, auto, 2pt, 2pt, auto, auto, auto, auto, auto),
stroke: 0.5pt,
inset: 5pt,
[], [Identity], [Identity Quality Value], [Gap Compres Identity],
[Raw], [0.980335], [17.062956], [0.984901],
[Herro], [0.999637], [34.396470], [0.999770],
[Brutal Rewrite], [0.999637], [34.398948], [0.999772],
[Peregrine 2021], [0.999726], [35.619356], [0.999830],
[DeChat], [0.999741], [35.862487], [0.999842],
[], [Matches per Kbp], [Mismatches per Kbp], [Non-hp Inser per Kbp],
[Raw], [985.021819], [7.253553], [2.826602],
[Herro], [999.725438], [0.052275], [0.033748],
[Brutal Rewrite], [999.725298], [0.051433], [0.033581],
[Peregrine 2021], [999.790755], [0.031596], [0.024639],
[DeChat], [999.803664], [0.027125], [0.023584],
[], [Non-hp Del per Kbp], [Hp Inser per Kbp], [Hp Del per Kbp],
[Raw], [3.413819], [1.954716], [4.310809],
[Herro], [0.044347], [0.055095], [0.177940],
[Brutal Rewrite], [0.044946], [0.054916], [0.178323],
[Peregrine 2021], [0.026350], [0.040332], [0.151300],
[DeChat], [0.022643], [0.039365], [0.146568],
)
) <table3>
For the same reads used in the hapmer analysis, we also examined alignment statistics such as alignment identity, matches/mismatches per kilobase, and non-homopolymer/homopolymer insertions/deletions per kilobase (@table3). The raw reads had much lower identities and matches per kilobase, and higher values in the remaining error categories, than all of the corrected reads. Each successive correction method improved the alignment statistics, with one notable exception: Brutal Rewrite had a lower non-homopolymer insertion rate per kilobase than Herro, but a higher non-homopolymer deletion rate per kilobase. The same difference between insertion and deletion rates occurred for homopolymers between Brutal Rewrite and Herro.
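The identity quality values in @table3 are consistent with the usual Phred-style transformation, QV = -10 · log10(1 - identity); recomputing from the table's (rounded) identities reproduces them to within small rounding differences:

```python
import math

def identity_to_qv(identity):
    """Phred-scaled quality value for an alignment identity fraction."""
    return -10.0 * math.log10(1.0 - identity)
```

For example, the raw-read identity of 0.980335 maps to a QV of about 17.06, and the DeChat identity of 0.999741 to about 35.87.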
= Discussion
#parbreak()
== General
The findings presented here provide significant insights into the effects of various error-correction methods on phase switching at the read level in ONT sequencing. One of the key observations is that all tested error-correction methods—Herro, Brutal Rewrite, Peregrine 2021, and DeChat—exhibited similar percentages of readmers matching one parental haplotype's hapmers compared to raw ONT sequences. This suggests that while these methods improve overall sequence accuracy, they may not significantly exacerbate phase switching issues.
== Phase Switching Insights
The results indicate that the majority of read alignments primarily matched hapmers from a single parental haplotype, which suggests minimal phase switching occurred. However, the presence of taller, lower-percentage peaks in the Peregrine 2021 reads (@figure1) raises questions about Peregrine 2021's haplotype awareness. Although this increase was minor, it highlights the need for further investigation into how certain error-correction tools can inadvertently introduce errors during the correction process. Peregrine 2021 is very fast at read overlapping because it avoids all-versus-all read overlapping (i.e., the computationally expensive process of overlapping each and every read) through its unique, sparse-hierarchical minimizer index, which finds overlap candidates instead of performing all-versus-all overlapping. Perhaps not doing all-versus-all overlapping increases phase switching ever so slightly. We experimented with alternative values for the reduction factor and window size when running Peregrine 2021 (4 and 64, respectively), but this did not change the overall pattern observed in the initial Peregrine 2021 hapmer analysis. As Peregrine 2021 uses hand-coded heuristics in error correction, it may not have performed as well as Herro, which is a deep-learning-based method, especially given that deep-learning methods outperform hand-coded heuristic methods on low-coverage data such as that used here (<NAME>, _pers. comm._).
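For intuition only, indexing a sparse subset of k-mers lets overlap candidates be found via shared index entries rather than comparing every read against every other. The sketch below implements generic lexicographic minimizers, not Peregrine 2021's actual sparse-hierarchical SHIMMER index; the tiny defaults are for demonstration (real indexes use k-mer lengths like 15 and larger windows).

```python
def minimizers(seq, k=3, w=2):
    """Positions and values of lexicographic minimizers: the smallest k-mer
    in each sliding window of w consecutive k-mers."""
    kmers = [seq[i:i + k] for i in range(len(seq) - k + 1)]
    picked = set()
    for i in range(len(kmers) - w + 1):
        window = kmers[i:i + w]
        j = min(range(w), key=lambda t: window[t])  # leftmost smallest k-mer
        picked.add((i + j, window[j]))
    return sorted(picked)
```

Adjacent windows often share a minimizer, so far fewer k-mers are indexed than exist in the sequence.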
#parbreak()
Interestingly, the analysis revealed a subset of read alignments that had no readmers matching hapmers in regions where hapmers existed. This could be indicative of sequencing errors in those genomic regions, misalignments, or perhaps even low levels of contamination. While we did not investigate these situations further, understanding these discrepancies could help refine error-correction strategies.
== Error Rates in Different Correction Methods
An intriguing finding is the observed difference in non-homopolymer insertion and deletion rates between the Brutal Rewrite and Herro methods. Specifically, Brutal Rewrite demonstrated a lower non-homopolymer insertion rate per kilobase but a higher non-homopolymer deletion rate compared to Herro. This raises questions about the underlying mechanics of these error-correction algorithms. It may suggest that Brutal Rewrite is more conservative in adding new bases but potentially more aggressive in removing bases that it deems erroneous. As our estimated coverage was only 26.3x for the reads used in Brutal Rewrite correction, perhaps higher coverage would have been beneficial. Further, in regions of low-complexity repeats, Brutal Rewrite tends to select the shortest repetitive sequence (<NAME> and <NAME>, _pers. comm._).
#parbreak()
Similar patterns were observed in terms of homopolymer regions, where the insertion and deletion rates differed between the two methods. These variations highlight the importance of understanding the specific behaviors of each correction tool, as they may influence downstream analyses, such as variant calling and _de novo_ genome assembly.
== Future Directions
Future studies should focus on further characterizing the mechanisms behind phase switching and error introduction across different correction methods. A more comprehensive understanding of how each tool interacts with specific genomic features—such as homozygosity and heterozygosity—could lead to improved error-correction algorithms that maintain haplotype integrity while enhancing sequence accuracy.
== Conclusion
In conclusion, while current methods show promise in maintaining haplotype fidelity and reducing sequencing errors, ongoing research is essential to refine these tools and their applications in genomics. Addressing the discrepancies noted in this study will be crucial for advancing our understanding of error-correction for ONT sequences.
= Acknowledgements
We would like to acknowledge the Medical University of Vienna's High Performance Computing cluster for computing resources, <NAME> for thoughts regarding the effect of phase switching on _de novo_ genome assembly, <NAME> for thoughts regarding hapmer and readmer error rates, <NAME> for thoughts regarding Peregrine-2021, and <NAME> and <NAME> for thoughts regarding Brutal Rewrite.
= Data Availability
Code documenting how analyses were conducted is available at https://github.com/jelber2/hapmers . Raw, basecalled SUP accuracy reads are available in unaligned BAM format at the following Zenodo repository https://doi.org/10.5281/zenodo.13841954 .
= Author contributions
JPE executed all analyses and wrote the manuscript. DH cultured HG002 cells. TL performed DNA extraction, library preparation, and sequencing. FL reviewed the manuscript.
// Add bibliography and create Bibiliography section
#bibliography("bibliography.bib") |
https://github.com/almarzn/portfolio | https://raw.githubusercontent.com/almarzn/portfolio/main/templates/typst/.template/competencies.typ | typst | #import "./shared/sizes.typ": scale
#import "./shared/flex.typ": flex
#import "@preview/splash:0.3.0": tailwind
#let competency-main-skills(
list
) = [
#for skill in list [
#box(
radius: 16pt,
inset: 6pt,
fill: tailwind.slate-200,
skill
)
]
]
#let competency-block(radius: 12pt, element) = block(width: 127pt)[
#block(
// fill: white,
width: 100%,
// radius: radius,
inset: (top: 8pt),
align(center + horizon)[
#text(weight: "bold", size: scale.h6, element.name)
]
)
#block(inset: (x: 8pt), list(..element.items))
]
#let competency-grid(
..elements
) = block(inset: (x: -16pt))[
#style(styles => {
let blocks = elements.pos().map(el => competency-block(el, radius: 4pt))
let max-height = calc.max(..blocks.map(block => measure(block, styles).height))
grid(
columns: 4,
gutter: 8pt,
..blocks.map(el => box(
height: (max-height) + 20pt,
radius: 8pt,
inset: 4pt,
fill: tailwind.slate-100,
align(top, el)
))
)
})
]
#let create-block-description(name, id, content) = {
if id in content {
return ((name: name, items: content.at(id)))
}
return ()
}
#let competencies(content) = {
// competency-main-skills(
// content.languages + content.frameworks
// )
let rows = content
.map((item) => (name: item.title, items: item.items))
.fold((), (acc, val) => {
if (acc.len() == 0 or acc.last().len() == 4) {
return (..acc, (val,))
}
return (..acc.slice(0, -1), (..acc.last(), val))
})
flex(gap: 8pt)[
#for row in rows {
competency-grid(..row)
}
]
} |
|
https://github.com/jamesrswift/journal-ensemble | https://raw.githubusercontent.com/jamesrswift/journal-ensemble/main/README.md | markdown | The Unlicense | # The `my-package` Package
<div align="center">Version 0.1.0</div>
A short description about the project and/or client.
## Template adaptation checklist
- [ ] Fill out `README.md`
- Change the `my-package` package name, including code snippets
- Check section contents and/or delete sections that don't apply
- [ ] Check and/or replace `LICENSE` by something that suits your needs
- [ ] Fill out `typst.toml`
- See also the [typst/packages README](https://github.com/typst/packages/?tab=readme-ov-file#package-format)
- [ ] Adapt Repository URLs in `CHANGELOG.md`
- Consider only committing that file with your first release, or removing the "Initial Release" part in the beginning
- [ ] Adapt or deactivate the release workflow in `.github/workflows/release.yml`
- to deactivate it, delete that file or remove/comment out lines 2-4 (`on:` and following)
- to use the workflow
- [ ] check the values under `env:`, particularly `REGISTRY_REPO`
- [ ] if you don't have one, [create a fine-grained personal access token](https://github.com/settings/tokens?type=beta) with [only Contents permission](https://stackoverflow.com/a/75116350/371191) for the `REGISTRY_REPO`
- [ ] on this repo, create a secret `REGISTRY_TOKEN` (at `https://github.com/[user]/[repo]/settings/secrets/actions`) that contains the so created token
if configured correctly, whenever you create a tag `v...`, your package will be pushed onto a branch on the `REGISTRY_REPO`, from which you can then create a pull request against [typst/packages](https://github.com/typst/packages/)
- [ ] remove/replace the example test case
- [ ] (add your actual code, docs and tests)
- [ ] remove this section from the README
## Getting Started
These instructions will get you a copy of the project up and running on the typst web app. Perhaps a short code example on importing the package and a very simple teaser usage.
```typ
#import "@preview/my-package:0.1.0": *
#show: my-show-rule.with()
#my-func()
```
### Installation
A step-by-step guide that will tell you how to get the development environment up and running. This should explain how to clone the repo and where to put it (maybe with a link to the typst documentation on this), along with any prerequisite software and installation steps.
```
$ First step
$ Another step
$ Final step
```
## Usage
A more in-depth description of usage. Any template arguments? A complicated example that showcases most if not all of the functions the package provides? This is also an excellent place to signpost the manual.
```typ
#import "@preview/my-package:0.1.0": *
#let my-complicated-example = ...
```
## Additional Documentation and Acknowledgments
* Project folder on server:
* Confluence link:
* Asana board:
* etc...
|
https://github.com/deadManAlive/ui-thesis-typst-template | https://raw.githubusercontent.com/deadManAlive/ui-thesis-typst-template/master/primer/title.typ | typst | #import "../config.typ": cfg
#let title = [
#set align(center)
#set text(
size: 14pt
)
#image(
"assets/makara_black.png",
width: 2.5cm,
)
*UNIVERSITAS INDONESIA*
#v(2em)
*#upper(cfg.title)*
#v(6em)
*SKRIPSI*
#v(1em)
*Diajukan sebagai salah satu syarat memperoleh gelar #cfg.degree*
#v(4em)
*#upper(cfg.name)*
*#cfg.npm*
#v(6em)
*#upper(cfg.faculty)*
*#upper(cfg.program)*
#v(2em)
*#upper(cfg.location)*
*#upper(cfg.time)*
] |
|
https://github.com/0x1B05/nju_os | https://raw.githubusercontent.com/0x1B05/nju_os/main/lecture_notes/content/15_操作系统上的进程.typ | typst | #import "../template.typ": *
#pagebreak()
= Processes on an Operating System
== Threads, Processes, and Operating Systems
=== Thread Switching in ThreadOS
#image("images/2023-11-25-20-52-47.png")
Computer hardware is also a state machine
- "shared memory that holds multiple state machines"
  - shared code
  - shared global variables
  - the startup code's stack (usable only by the startup code)
  - $T_{1}$'s Task (usable only by $T_{1}$ and the interrupt handler)
    - stack (on an interrupt, registers are saved on the stack)
  - $T_{2}$'s Task (usable only by $T_{2}$ and the interrupt handler)
    - stack (on an interrupt, registers are saved on the stack)
- state transitions
  - execute an instruction or respond to an interrupt
==== practice
#image("images/2023-11-25-20-53-04.png")
```sh
0x0000000000101640 in yield ()
(gdb) p tasks[0]
$1 = {{next = 0x105640 <tasks+8192>, entry = 0x0, context = 0x105590 <tasks+8016
>}, stack = "@V\020", '\000' <repeats 13 times>, "\220U\020", '\000' <repeats 80
45 times>...}
(gdb) p *tasks[0]->context
$2 = {cr3 = 0x0, rax = 0, rbx = 0, rcx = 0, rdx = 0, rbp = 0, rsi = 1048944, rdi = 0, r8 = 0, r9 = 0, r10 = 0, r11 = 0, r12 = 0, r13 = 0, r14 = 0, r15 = 0, rip = 1058556, cs = 8, rflags = 512, rsp = 1070656, ss = 0, rsp0 = 0}
(gdb) p/x &tasks
$5 = 0x103640
(gdb) p/x *tasks[0]->context.rsp
$3 = 0x103640
(gdb) p/x $rsp
$4 = 0x20a7b8
```
```c
void yield() {
interrupt(0x81);
}
```
```asm
int $0x81
```
#tip("Tip")[
The `int` instruction in the x86 architecture raises a software interrupt or exception. Its syntax is `int imm8`, where `imm8` is an immediate operand giving the interrupt vector number. That vector number corresponds to the address of an interrupt handler; when `int` executes, the processor jumps to the corresponding handler.
]
#tip("Tip")[
In the x86 architecture, some predefined interrupt vector numbers handle specific interrupt events. For example, vector `0x80` is used for system calls and is widely used in operating systems such as Linux.
]
After `int $0x81` has executed, look at the value of `rsp` again:
```sh
(gdb) layout asm
(gdb) si
0x0000000000101644 in yield ()
(gdb) p/x $rsp
$7 = 0x20a7b8
(gdb) si
0x00000000001028a2 in __am_irq129 ()
(gdb) p/x $rsp
$8 = 0x20a788
```
#image("images/2023-11-25-20-53-16.png")
#tip("Tip")[
`rdi` holds the first argument of a function call
]
This code is quite subtle and is worth the time to read and debug. Don't forget to enable `-g` for AM.
=== What Is an Operating System?
#image("images/2023-11-25-21-12-23.png")
Virtualization: the operating system keeps multiple state machines at the same time
- C program = state machine
  - initial state: main(argc, argv)
  - state transitions: instruction execution
    - including the special system call instruction, syscall
- a special class of system calls can manage state machines
  - `CreateProcess(exec_file)`
    - given a binary, creates a new state machine in the system (Windows API)
  - `TerminateProcess()`
#tip("Tip")[
Which system calls to provide is a design decision you get to make.
]
From threads to processes: the virtual memory system
- virtual memory implements "take one out and run it" each time
- after an interrupt, enter operating system code and "switch to another one"

The whole world was created from the initial init state machine.
#image("images/2023-11-25-21-12-14.png")
== Duplicating a State Machine: fork()
=== State Machine Management: Creating a State Machine
If we want to create state machines, what kind of API should we provide?
UNIX's answer: `fork`
- make a complete copy of the state machine (memory and register context)
#image("images/2023-11-25-21-17-43.png")
`int fork();`
- immediately copies the state machine (the entire memory)
- returns -1 on failure (errno)
- returns 0 in the newly created process
- returns the child's process ID in the process that called fork
=== Fork Bomb
Simulating state machines takes resources
- keep creating processes and the system will still go down
- Don't try it (or try it in docker)
- submitting this to the Online Judge will not crash it
==== 代码解析: Fork Bomb
```bash
:(){:|:&};: = 刚才的一行版本
:() { = 格式化一下
: | : &
}; :
fork() { = bash: 允许冒号作为标识符……
fork | fork &
}; fork
```
#tip("Tip")[
System resources will be exhausted very quickly.
]
=== Now You Will Remember Fork!
Because state machines are copied, one can always trace a "parent-child relationship";
hence the process tree (pstree)
```sh
systemd-+-ModemManager---2*[{ModemManager}]
|-NetworkManager---2*[{NetworkManager}]
|-accounts-daemon---2*[{accounts-daemon}]
|-at-spi-bus-laun-+-dbus-daemon
| `-3*[{at-spi-bus-laun}]
|-at-spi2-registr---2*[{at-spi2-registr}]
|-atd
|-avahi-daemon---avahi-daemon
|-colord---2*[{colord}]
...
```
#tip("Tip")[
- `fork` is the only way to create a process in the UNIX world
- for example, first compute the primes in 1-100,000,000, then fork 10 processes; these 10 processes can then share the table computed beforehand
]
=== Understanding fork: Exercise (1)
Read the program and write down its output
```c
pid_t x = fork();
pid_t y = fork();
printf("%d %d\n", x, y);
```
Some important questions
- How many state machines are created, exactly?
- What are their pids?
- The "state machine view" helps us understand this rigorously
  - likewise, we can also model check it!
```c
❯ gcc fork-demo.c && ./a.out
15625 15626
15625 0
0 15627
0 0
❯ ./a.out
15673 15674
0 15675
15673 0
0 0
❯ ./a.out
15683 15684
15683 0
0 15685
0 0
❯ ./a.out
15693 15694
15693 0
0 15695
0 0
```
#image("images/2023-11-26-08-23-17.png")
=== Understanding fork: Exercise (2)
Read the program and write down its output
```c
for (int i = 0; i < 2; i++) {
fork();
printf("Hello\n");
}
```
The state machine view helps us rigorously understand program behavior
- `./a.out`
- `./a.out | cat`
- there is no magic in computer systems
- the machine (which ruthlessly executes instructions) is always right
```c
❯ gcc fork-printf.c && ./a.out
Hello
Hello
Hello
Hello
Hello
Hello
❯ ./a.out | wc -l
8
❯ ./a.out | cat
Hello
Hello
Hello
Hello
Hello
Hello
Hello
Hello
```
How do we debug this curious phenomenon? Any program's output behavior is state machine behavior,
i.e., the consequence of executing instructions.

`diff <(strace -f ./a.out) <(strace -f ./a.out | cat)`: produces a lot of output and is not very informative
`strace -f ./a.out`:
```
[pid 17766] write(1, "Hello\nHello\n", 12 <unfinished ...>
[pid 17765] write(1, "Hello\nHello\n", 12 <unfinished ...>
[pid 17764] write(1, "Hello\nHello\n", 12) = 12
write(1, "Hello\nHello\n", 12) = 12
```
`strace -f sh -c "./a.out | cat"`:
```
[pid 22178] write(1, "Hello\nHello\n", 12 <unfinished ...>
[pid 22179] write(1, "Hello\nHello\n", 12 <unfinished ...>
[pid 22177] write(1, "Hello\nHello\n", 12 <unfinished ...>
[pid 22175] write(1, "Hello\nHello\n", 12 <unfinished ...>
[pid 22176] write(1, "Hello\nHello\nHello\nHello\nHello\nHe"..., 48Hello
```
`printf` does not always write directly to standard output; it behaves differently depending on whether
standard output is connected to a terminal or a pipe. If it is a pipe, the output is written into a libc buffer and cannot be seen immediately.
#image("images/2023-11-26-08-23-07.png")
== Resetting a State Machine: `execve()`
How do we get from the init state machine to the whole wide world?
=== State Machine Management: Resetting a State Machine
UNIX's answer: `execve`
- reset the current process to the initial state of the state machine described by an executable file
`int execve(const char *filename, char * const argv[], char * const envp[]);`
`execve` behavior
#image("images/2023-11-26-08-36-19.png")
- executes the program named filename
- allows setting the arguments `argv (v)` and environment variables `envp (e)` of the new state machine
  - they correspond exactly to the parameters of `main()`!
- `execve` is the only system call that can "execute a program"
  - and is therefore the first system call in the `strace` of every process
For example: `vim <(strace -f ls -la 2>&1)`
The first line is exactly: `execve("/usr/bin/ls", ["ls", "-la"], 0x7ffe4c0f9550 /* 50 vars */) = 0`
The execution of any program starts with `execve`
=== Environment Variables
"The environment in which an application executes"
- view it with the `env` command
- `PATH`: search path for executables
- `PWD`: current directory
- `HOME`: home directory
- `DISPLAY`: graphical output
- `PS1`: the shell prompt
- `export`: tells the shell to set the environment variable when creating child processes
- trick: `export ARCH=x86_64-qemu` or `export ARCH=native`
- last semester's `AM_HOME` mystery is finally solved
#tip("Tip")[
`execve` passes in not only the program's own arguments but also the environment in which the program runs.
]
```c
#include <stdio.h>
#include <unistd.h>
int main() {
char *const argv[] = {
"/bin/bash",
"-c",
"env",
NULL,
};
char *const envp[] = {
"HELLO=WORLD",
NULL,
};
execve(argv[0], argv, envp);
printf("Hello, World!\n");
}
```
output:
```sh
❯ gcc execve-demo.c && ./a.out
PWD=/home/liuheihei/cs_learning/nju_os/demo/15_demo/execve-demo
HELLO=WORLD
SHLVL=0
_=/usr/bin/env
```
==== Environment Variables: PATH
The search path for executable files
Remember the `strace` result of `gcc`?
```
[pid 28369] execve("/usr/local/sbin/as", ["as", "--64", ...
[pid 28369] execve("/usr/local/bin/as", ["as", "--64", ...
[pid 28369] execve("/usr/sbin/as", ["as", "--64", ...
[pid 28369] execve("/usr/bin/as", ["as", "--64", ...
```
This search order is exactly the order specified in PATH
```sh
$ PATH="" /usr/bin/gcc a.c
gcc: error trying to exec 'as': execvp: No such file or directory
$ PATH="/usr/bin/" gcc a.c
```
There is no magic in computer systems. The machine is always right.
== Destroying a State Machine: `_exit()`
=== State Machine Management: Destroying a State Machine
With `fork` and `execve` we can freely execute any program; the last missing piece is a function that destroys a state machine!
UNIX's answer: `_exit` (destroy the state machine immediately)
`void _exit(int status)`
- destroys the current state machine, with an optional return value
- a child's termination notifies the parent (explained in a later lecture)
#tip("Tip")[
The C standard library has an `exit`; to distinguish the system call from it, an underscore `_` was prepended to the system call's name.
]
That was easy… but here comes the question: what about multi-threaded programs?
=== Three Ways to End Program Execution
The several spellings of `exit` (they are different)
- `exit(0)` - the `libc` function declared in stdlib.h
  - calls `atexit` handlers
- `_exit(0)` - glibc's `syscall wrapper`
  - performs the "exit_group" system call, terminating the whole process (all threads)
    - attentive students have already spotted this in `strace`
  - does not call `atexit` handlers
- `syscall(SYS_exit, 0)`
  - performs the "exit" system call, terminating the current thread
  - does not call `atexit` handlers
#tip("Tip")[
The `atexit()` function registers the given function to be called at normal process termination, either via `exit(3)` or via return from the program's `main()`.
]
|
|
https://github.com/Myriad-Dreamin/typst.ts | https://raw.githubusercontent.com/Myriad-Dreamin/typst.ts/main/docs/cookery/direction/incremental.typ | typst | Apache License 2.0 | #import "mod.typ": *
#show: book-page.with(title: "Streaming, Incremental rendering")
#include "claim.typ"
Example Application: #link("https://github.com/Enter-tainer/typst-preview-vscode")[typst-preview]
Build a server for compilation with #link("https://myriad-dreamin.github.io/typst.ts/cookery/guide/compiler/service.html")[Compiler Service], stream the artifact, and render it incrementally.
|
https://github.com/typst/packages | https://raw.githubusercontent.com/typst/packages/main/packages/preview/scholarly-tauthesis/0.4.1/template/content/README.md | markdown | Apache License 2.0 | # content/
This is where you should add typst text files related to the actual content of
your thesis. Note that when you want to include images, code or typst code from
other files in here, any import paths need to be relative to the file itself.
For example, to include all of the code from the
[`preamble.typ`](../preamble.typ) file in the `template/` folder, the import
statement needs to be
```typst
#import "../preamble.typ": *
```
This is different when compared to most LaTeX compilers, where import paths are
relative to the `main` file.
|
https://github.com/typst/packages | https://raw.githubusercontent.com/typst/packages/main/packages/preview/wordometer/0.1.0/docs/manual.typ | typst | Apache License 2.0 | #import "@preview/tidy:0.1.0"
#import "/src/lib.typ" as wordometer: *
#set page(numbering: "1")
#set par(justify: true)
#show link: underline.with(stroke: blue.lighten(50%))
#let VERSION = toml("/typst.toml").package.version
#let show-module(path) = {
show heading.where(level: 3): it => {
align(center, line(length: 100%, stroke: black.lighten(70%)))
block(text(1.3em, raw(it.body.text + "()")))
}
tidy.show-module(
tidy.parse-module(
read(path),
scope: (wordometer: wordometer)
),
show-outline: false,
sort-functions: x => {
str(int(not x.name.starts-with("word-count"))) + x.name
}
)
}
#show: word-count
#v(.2fr)
#align(center)[
#stack(
spacing: 12pt,
text(2.6em, `wordometer`),
[_This manual contains #total-words words._],
)
#v(30pt)
A small #link("https://typst.app/")[Typst] package for quick and easy in-document word counts.
#link("https://github.com/Jollywatt/typst-wordometer")[`github.com/Jollywatt/typst-wordometer`]
Version #VERSION
]
#set raw(lang: "typc")
#v(1fr)
#[
#show heading: pad.with(y: 10pt)
= Basic usage
```typ
#import "@preview/wordometer:0.1.0": word-count, total-words
#show: word-count
In this document, there are #total-words words all up.
#word-count(total => [
The number of words in this block is #total.words
and there are #total.characters letters.
])
```
= Excluding elements
You can exclude elements by name (e.g., `"caption"`), function (e.g., `figure.caption`), where-selector (e.g., `raw.where(block: true)`), or label (e.g., `<no-wc>`).
```typ
#show: word-count.with(exclude: (heading.where(level: 1), strike))
= This Heading Doesn't Count
== But I do!
In this document #strike[(excluding me)], there are #total-words words all up.
#word-count(total => [
You can exclude elements by label, too.
#[That was #total.words, excluding this sentence!] <no-wc>
], exclude: <no-wc>)
```
]
#v(1fr)
#pagebreak()
#show-module("/src/lib.typ") |
https://github.com/kazuyanagimoto/quarto-awesomecv-typst | https://raw.githubusercontent.com/kazuyanagimoto/quarto-awesomecv-typst/main/README.md | markdown | MIT License | # Quarto-awesomecv-typst Format
A Quarto + Typst implementation of Byungjin Park’s [Awesome-CV](https://github.com/posquit0/Awesome-CV).
The Typst implementation part is inspired by Paul Tsouchlos’s [modern-cv](https://typst.app/universe/package/modern-cv/)

## Install
If you would like to add the clean theme to an existing directory:
```bash
quarto install extension kazuyanagimoto/quarto-awesomecv-typst
```
or you can use a Quarto template that bundles a .qmd starter file:
```bash
quarto use template kazuyanagimoto/quarto-awesomecv-typst
```
## Customization
### YAML
Set author information in YAML.
```yaml
author:
firstname: Albert
lastname: Einstein
address: "Rämistrasse 101, CH-8092 Zürich, Switzerland, Zürich"
position: "Research Physicist ・ Professor"
contacts:
- icon: fa envelope
text: <EMAIL>
url: "mailto:<EMAIL>"
- icon: PATH_TO_ICON/icon.svg
text: example.com
url: https://example.com
```
- [fontawesome](https://fontawesome.com/search?m=free&o=r) icons are supported
via [duskmoon314/typst-fontawesome](https://github.com/duskmoon314/typst-fontawesome) with `fa` prefix.
- SVG icons can be used by specifying the path to the icon file (e.g., Google Scholar icon from [Academicons](https://jpswalsh.github.io/academicons/))
### Fonts & Color
You can set fonts and the accent color in YAML.
```yaml
style:
color-accent: "516db0"
font-header: "Roboto"
font-text: "Source Sans Pro"
format:
awesomecv-typst:
font-paths: ["PATH_TO_FONT"]
```
By default, it uses the system fonts, but you can specify the font paths in the `font-paths` field (e.g., GitHub Actions usage.)
### Profile Photo
You can set a profile photo in YAML.
If you set it, the header will be left-aligned.
```yaml
profile-photo: "PATH_TO_PHOTO/photo.jpg"
```
## kazuyanagimoto/typstcv
This format is designed to be used with an R package [`kazuyanagimoto/typstcv`](http://kazuyanagimoto.com/typstcv/).
It provides helper functions to create a resume entry from a data frame.
Please refer to the package [documentation](http://kazuyanagimoto.com/typstcv/) for more details.
```r
library(typstcv)
educ <- data.frame(
title = c("Ph.D. in Physics", "Master of Science"),
location = c("Zürich, Switzerland", "Zürich, Switzerland"),
date = c("1905", "1896 - 1900"),
description = c("University of Zürich", "ETH")
)
resume_entry(educ)
#> ```{=typst}
#> #resume-entry(title: "Ph.D. in Physics",location: "Zürich, Switzerland",date: "1905",description: "University of Zürich",)
#> #resume-entry(title: "Master of Science",location: "Zürich, Switzerland",date: "1896 - 1900",description: "ETH",)
#> ```
```
```r
award <- data.frame(
title = c("Nobel Prize in Physics"),
location = c("Stockholm, Sweden"),
date = c("1921"),
description = c("For his services to"),
detail1 = c("Theoretical Physics"),
detail2 = c("Discovery of the law of the photoelectric effect")
)
resume_entry(award, details = c("detail1", "detail2"))
#> #resume-entry(title: "Nobel Prize in Physics",location: "Stockholm, Sweden",date: "1921",description: "For his services to",)
#> #resume-item[
#> - Theoretical Physics
#> - Discovery of the law of the photoelectric effect
#> ]
```
```r
work <- data.frame(
title = c("Technical Assistant", "Junior Professor", "Associate Professor"),
location = c("Bern, Switzerland", "Bern, Switzerland", "Zürich, Switzerland"),
start = as.Date(c("1902-01-01", "1908-01-01", "1909-01-01")),
end = as.Date(c("1908-01-01", "1909-01-01", "1911-01-01")),
description = c("Federal Patent Office", "University of Bern",
"University of Zürich")
)
work |>
format_date(end = "end", date_format = "%Y", sort_by = "start") |>
resume_entry()
#> ```{=typst}
#> #resume-entry(title: "Associate Professor",location: "Zürich, Switzerland",date: "1909 - 1911",description: "University of Zürich",)
#> #resume-entry(title: "Junior Professor",location: "Bern, Switzerland",date: "1908 - 1909",description: "University of Bern",)
#> #resume-entry(title: "Technical Assistant",location: "Bern, Switzerland",date: "1902 - 1908",description: "Federal Patent Office",)
#> ```
``` |
https://github.com/JanEhehalt/typst-demo | https://raw.githubusercontent.com/JanEhehalt/typst-demo/main/glossary.typ | typst | // this is an example. Check https://typst.app/universe/package/glossarium
#let glossary = (
(
key: "rest",
short: "REST",
long: "Representational State Transfer",
desc: [REST is a software architectural style that defines a set of constraints to be used for creating web services.]
),
)
|
|
https://github.com/jgm/typst-hs | https://raw.githubusercontent.com/jgm/typst-hs/main/test/typ/compute/data-07.typ | typst | Other | // Error: 7-18 failed to parse json file: syntax error in line 3
#json("test/assets/files/bad.json")
|
https://github.com/medewitt/bio720 | https://raw.githubusercontent.com/medewitt/bio720/main/behaviour.typ | typst | BSD 3-Clause "New" or "Revised" License | #import "template.typ": *
#show: ams-article.with(
title: "Reading notes: Signalling",
authors: (
(
name: "<NAME>",
department: [Department of Biology],
organization: [Wake Forest University],
location: [Winston Salem, NC 27101],
email: "<EMAIL>",
url: "www.michaeldewittjr.com"
),
),
abstract: "Caveat Emptor",
bibliography-file: "integratedbio.bib",
)
#set heading(numbering: "1.")
#let today = datetime.today()
Mechanisms of learning are different amongst all animals.
Generally all animals can learn something (where animal is taken as a broad category).
Molecular models of learning are just *models* and not proven laws.
Some definitions of learning
- Conditioned learning is an approach where learning takes place through negative or positive reinforcement (generally just negative). This is a simple form of learning.
- *Unconditioned stimulus* - the stimulus under no learning or experience
- *Conditioned experience* - the training conditions
- *Conditioned response* - what is the desired behaviour or taught response
- *Habituation* - retraining to lose a conditioned response if a stimulus is repeated without the expected result
- *Associative learning* - connection between cause and effect.
- Operant (instrumental) - the animal makes a decision which has consequences
There is an unresolved debate about the relative importance of conditioning and cognition.
- *Transitive Inference* - using known relationships to deduce new relationships (e.g., A > B, B > C => A > C)
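A toy sketch (my own illustration, not from the lecture) of how transitive inference can be framed as deducing unseen pairs from observed dominance facts:

```python
# Toy transitive inference: given observed dominance pairs,
# deduce unseen relationships by following the chain.
facts = {("A", "B"), ("B", "C"), ("C", "D")}  # (x, y) means x > y

def beats(x, y, facts):
    # depth-first search through the transitive closure of the facts
    stack, seen = [x], set()
    while stack:
        cur = stack.pop()
        for a, b in facts:
            if a == cur and b not in seen:
                if b == y:
                    return True
                seen.add(b)
                stack.append(b)
    return False

print(beats("A", "D", facts))  # A > B > C > D, so True
```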
== Cleaner Fish Paper
Cleaner fish (the cleaner #link("https://en.wikipedia.org/wiki/Bluestreak_cleaner_wrasse")[wrasse]) clean growths off other fish. Cleaner fish are well marked so that they can be seen by "clients". There is competition between cleaners to get return customers. They employ different strategies based on the presence of familiar and unfamiliar clients.
The general strategy is to serve new fish first rather than return clients, due to the potential loss of the new-customer pool.
Why don't the sharks eat the cleaner fish?
- Payoff of having parasite removal vs eating the fish
- Neurotransmitter of "I'm cleaning you, don't eat me" or biomolecular signalling
- How do cheaters get punished?
- Are some behaviours learned based on social cues and observation
Local cleaner-fish density determines the social complexity of their world in a single (experimentally controllable) variable
All cleaner fish are born females.
== Ways of recognizing self
- Private self-awareness - you know what you like
- Kinesthetic visual-matching - your movements are matched by the image in the mirror ("public")
Animals that do this:
- Chimps and gorillas
- elephants
- dolphins
- horses
- magpies
- cleaner-fish?
Doing the experiment with fish in the traditional sense is challenging because it is a severe intervention (e.g., stress when fish removed from water, disruption of scales). Generally, the mirror test fails with non-visual creatures.
Mirror neurons fire the same way when you watch someone do something as when you do it yourself. This could be a mechanism for empathy and is likely conditioning.
Guppies. Females choose the males.
Why? Females can only mate once per season vs males who can mate many times. Female reproduction is thus expensive, so males must compete.
Brightly coloured males are more desirable, which could indicate higher fitness due to the ability to survive despite being brightly colored.
== Moving into Sexual Selection
Physiological mechanisms, largely involving testosterone, and the role it plays in causing males to do the things they do to attract mates.
Genetic discrimination
One particular gene is present in about 93% of the prison population but only 50% of the general population
Individuals with that gene also receive longer sentences
It's being male.
Across mammals, testosterone carries a high longevity penalty.
Maximizing an animal's probability of surviving to maturity
Survival is only one component of fitness: fitness is 0 unless you reproduce, regardless of how long you live.
For many but not all animals that involves sexual selection: variation in mating success
And males are usually under much stronger sexual selection than females, because females invest much more in offspring than males; each egg is more metabolically expensive to produce than a sperm, so eggs are produced in much smaller quantities than sperm.
Therefore eggs are in limited supply and males must compete for access to them.
- Sexual selection is strong and is imposed on every generation (unlike regular selection, which in many cases varies in strength and direction)
- There is a cost in this choice
- Females pay some cost
- Sexual selection is strong enough that within a few generations all males should have the same high-quality alleles to offer, leaving little or no variation among male fitness
So how is there still any variation in male quality for females to select on?
- Mutation isn't plausible because mutation is not fast enough
- Recombination would speed up selection of alleles
- Alleles carried by males might not be beneficial for females (e.g., things that females like might not be good for males)
- Abiotic changes
- Biotic changes (e.g., predators will adapt to a new adaptation)
One type of biotic interaction seems particularly likely to impose constant selection for new or rare genotypes: parasites and their hosts.
== Red Queen Hypothesis
Parasites target the most abundant host genotype (the biggest resource pool for their offspring)
You have to run as fast as you can to stay in one place
That reduces the fitness of that host genotype, and one or more other host genotypes then become more abundant
Now the most successful parasites are those adapted to the newly most abundant host genotype
Frequency-dependent selection perpetually favors rare genotypes, preventing the alleles making up those genotypes from being lost; in other words, it maintains genetic variation instead of eliminating it.
Alleles that were previously fit are now targets for parasites, so they become a disadvantage.
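The Red Queen logic above can be sketched as a toy simulation (illustrative only; the fitness function and starting numbers are my own assumptions, not from the lecture):

```python
# Sketch of negative frequency-dependent selection: parasites depress the
# fitness of whichever host genotype is currently most common.
freq = {"g1": 0.7, "g2": 0.2, "g3": 0.1}
for generation in range(200):
    # fitness declines with a genotype's own frequency
    fitness = {g: 1.0 - 0.5 * f for g, f in freq.items()}
    mean_fitness = sum(freq[g] * fitness[g] for g in freq)
    freq = {g: freq[g] * fitness[g] / mean_fitness for g in freq}
# rare genotypes are always favored, so no genotype is lost:
# all three frequencies converge toward 1/3
```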
== Fancy Plumage
Nice fur, brightly colored feathers, etc. indicate that you likely don't have a lot of parasites.
Most males do not produce offspring (exponentially distributed: mostly zero and a few with many)
Females have almost equal reproductive success.
Lyre bird probably selects for intelligence, survivability, and likelihood of surviving their surroundings
Golden-collared manakin males have bright plumage (no parasites), a display clearing (making a clearing takes lots of energy), and loud wing snaps (showing athleticism, strength, and energy budget).
== Lack of sexual selection
Less obvious selection occurs when the male contribution is larger (but still less than the female's)
Why don't males cheat, especially with morphological sexual selective features?
- might be an inordinate cost that can't be borne
- High quality males pay a lower price than low quality males for reproduction.
- Forces sons to bear the cost of this selective feature
== Honest signals
Sexually selected traits that signal high fitness are thought to be impossible to fake (impossible for inferior males to produce).
Females that prefer signals that can be faked are at a disadvantage, so selection favors females that demand an honest signal of genetic quality.
An alternative, non-adaptive theory for why females prefer the traits that they do: the sensory bias hypothesis. Rather than an indicator of genetic quality, the preference is an accident.
*Proximal* explanations tend to be mechanistic on short time scales - e.g., during the breeding season testosterone levels in the blood increase in males; by binding to testosterone receptors in key brain regions, testosterone alters behaviours. E.g., winning a fight boosts your testosterone and makes you more likely to win the next fight.
*Ultimate* explanations focus on the bigger picture - that mating dance evolved because it's an accurate indicator of a male's fitness: it indicates good neuromuscular coordination and it demands a lot of energy.
Testosterone has a role in sexual selection.
Aromatase is an enzyme that, in the thalamus and amygdala, converts testosterone to estrogen in men. This makes testosterone less effective (dumber guys). Aromatase inhibitors are used against estrogen-sensitive cancers.
Testosterone is a cholesterol derivative; once in the cell it turns on transcription factors, altering gene expression.
Testosterone has some effects that don't appear tied to transcription, e.g., acting via protein kinase A to release energy.
== Manakins
Testosterone has a large impact on multiple genes which are regulated.
Testosterone binds to androgen receptor
SH: Wing muscle
PEC: Chest muscle
Flutamide: AR (androgen receptor) antagonist
NTC: Novel transcript clusters
AR: androgen receptor
ARE: Androgen receptor elements
ER: Estrogen receptor
WGCNA: Weighted gene co-expression network analysis
RPKM: reads per kb of exon per $10^6$ mapped RNA-Seq reads
PCA: Principal component analysis
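As a quick check on the RPKM definition above (the numbers are made up):

```python
# Worked example of RPKM: reads per kilobase of exon per million mapped reads.
def rpkm(reads, length_bp, total_mapped_reads):
    return reads / (length_bp / 1e3) / (total_mapped_reads / 1e6)

# 500 reads on a 2 kb transcript in a library of 10 million mapped reads
print(rpkm(500, 2_000, 10_000_000))  # 25.0
```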
3-4 questions; half a page to a page each. Diagrams and figures allowed.
A lot worse to get too hot than too cold. Proteins don't work or fold correctly
Response is generally
- Work to maintain a constant balance (mammal homeostasis)
- Tie temperature to environment (ectotherms)
Wood frogs can survive freezing.
== Thermostats
- In vertebrates the thermostat is the hypothalamus
- Receives sensory input about body temperature
- If the body is cooler than the set point, the hypothalamus induces responses that raise body temperature; if warmer than the set point, it begins measures to cool it
=== warming
- Blood vessels constrict
- muscles shiver
- thyroid for hormones
- breathing rates
=== cooling
- blood vessels dilate
- skin perspires
- sweat gland
== Endothermy
- most of heat comes from the metabolic rate of internal organs: liver, spleen, gut, kidneys
- regulating the permeability of the inner mitochondrial membrane determines how much heat is produced, e.g., endotherms can separate oxidative phosphorylation from electron transport, causing "proton leak" across the inner mitochondrial membrane, producing more heat and less ATP. Diverts energy to heat vs ATP
- the rate of heat loss, and the temperature of particular organs or regions, can be regulated by a variety of mechanisms, e.g., constricting or dilating blood vessels
- Endotherms pay a high energy cost, with the high payoff of having enzymes operating at peak efficiency; this requires more body mass
== Ectothermy
- Behavioural thermoregulation: choosing their location, their coloration, their orientation relative to the sun, or some combination of those.
- This has a cost of devoting time to do this
- Thermoconformation: just let their body temperature vary with their surroundings
- in a few extreme cases allow themselves to freeze
== Behavioural thermoregulation does have costs
- greater exposure/ vulnerability to predators
- time not available/ less available for feeding, courtship
What would you expect to determine whether an ectotherm chooses thermoregulation or conformation?
Habitats vary in the costs and hazards of behavioural thermoregulation
- for example if sunny locations are scarce, it might be more costly for a lizard with a lower than ideal temperature to warm up
- if sun is hard to find, then warming up costs a lot.
As air temperature increases, body temperature generally increases.
Any deviations indicate behavioural regulation
== Plasticity and acclimation (adaptation)
- both endotherms and ectotherms can adjust thermal performance curves and temperature set points in response to a particular challenge (e.g., climate, pathogens)
- e.g., to induce a fever, "pyrogens" (compounds produced either by your own body or by pathogens) raise the set point
How do ectotherms like the desert iguana deal with bacterial infections that cause endotherms to raise their body temperature?
Desert iguanas experimentally injected with bacteria raise their set points, running a behavioural fever.
Partial endothermy: some ectotherms generate heat but only when/where it matters most.
They prioritize brains and eyes: e.g., swordfish and tuna heat their brains, and in some cases their eyes, to about 5 deg above the surrounding seawater
== Heat shock proteins
Proteome stability, heat hardening and heat-shock protein expression profiles in Cataglyphis desert ants.
HSC70 - heat shock cognate 70 (the constitutively expressed chaperone)
HSP - heat shock proteins
HSF - transcription factors
HSE - heat shock elements (the DNA sequences that HSFs bind)
heat hardening - increased heat tolerance after a prior heat exposure
Heat shock proteins are chaperones; this means that they help proteins fold correctly. They can refold proteins or prevent their denaturing.
homologs - came from same ancestry
paralogs - homologs within a lineage (related via gene duplication)
Orthologs are genes derived from a single ancestral gene in the last common ancestor of the compared species. This short, simple definition includes two distinct, explicit statements that are important to rationalize; furthermore, it does not include other requirements that might seem natural but are not actually intrinsic to orthology. First, the requirement of a single ancestral gene is central to the concept of orthology. Once the ancestral genome is shown to have contained two paralogous genes that gave rise to the genes in question, it will be incorrect to consider the latter orthologs, even if, on some occasions, there may be the appearance of orthology (see below). Second, the definition specifies the presence of an ancestral gene in the last common ancestor of the compared species rather than in some arbitrary, more ancient ancestor. Of course, this definition assumes the existence of a distinct common ancestor of the compared species, a proposition sometimes challenged for prokaryotes owing to the high incidence of HGT (see discussion below).
Paralogs are genes related via duplication. Note the generality of this definition, which does not include a requirement that paralogs reside in the same genome or any indication as to the age of the duplication leading to the emergence of paralogs (some of these duplications occurred at the earliest stages of life's evolution but the respective genes nevertheless qualify as paralogs).
= Exams
Architecture of regulatory pathways and learning/behaviour
Look for common themes (e.g., the fish transitive inference, the testosterone paper).
Something along the lines of a spandrel vs adaptation
People have speculated on the origins of heat shock proteins: genes can acquire mutations whose phenotypic effects heat shock proteins can correct (e.g., mutations could accumulate while being buffered by heat shock proteins; then, when the organism is exposed to a new environment, releasing them could produce a step change). Evolutionary "capacitance".
This doesn't make sense from an evolutionary perspective: you don't know the future.
|
https://github.com/jgm/typst-hs | https://raw.githubusercontent.com/jgm/typst-hs/main/test/typ/text/raw-code-04.typ | typst | Other | #set page(width: 180pt)
#set text(6pt)
#rect(inset: (x: 4pt, y: 5pt), radius: 4pt, fill: rgb(239, 241, 243))[
```html
<!DOCTYPE html>
<html>
<head>
<meta charset="utf-8">
</head>
<body>
<h1>Topic</h1>
<p>The Hypertext Markup Language.</p>
<script>
function foo(a, b) {
return a + b + "string";
}
</script>
</body>
</html>
```
]
|
https://github.com/iyno-org/Holistic-Clinical-Neurosurgery | https://raw.githubusercontent.com/iyno-org/Holistic-Clinical-Neurosurgery/main/covers/front-cover.typ | typst | #import "@preview/bubble:0.2.1": *
#show: bubble.with(
title: "1. Spinal Cord",
subtitle: "Gross and General Spinal Cord Malformations",
author: "<NAME>",
affiliation: "International Youth Neurosurgical Organization",
date: "",
//main-color: "4DA6FF", //set the main color
) |
|
https://github.com/pluttan/shem | https://raw.githubusercontent.com/pluttan/shem/main/lab3/lab3.typ | typst | #import "@docs/bmstu:1.0.0":*
#import "@preview/tablex:0.0.8": tablex, rowspanx, colspanx, cellx
#show: student_work.with(
  caf_name: "Computer Systems and Networks",
  faculty_name: "Informatics and Control Systems",
  work_type: "laboratory work",
  work_num: "3",
  discipline_name: "Circuit Engineering",
  theme: "Code Converters",
  author: (group: "ИУ6-42Б", nwa: "<NAME>"),
  adviser: (nwa: "<NAME>"),
  city: "Moscow",
table_of_contents: true,
)
#let xb = $op("x", limits: #true)^"__"$
#let ab = $op("a", limits: #true)^"__"$
#let bb = $op("b", limits: #true)^"__"$
#let nt(a,b) = $op((#a), limits: #true)^#b$
= Assignment
1. Study of the converter from BCD 8421 to the assigned code:
  - Synthesize the code-converter circuit. Present the synthesis results in the NAND basis;
  - Synthesize a binary-coded-decimal counter on JK flip-flops as a gate-free circuit with the natural order of state changes; wire the counter and code-converter circuits together; use the counter output signals as the variables A3, A2, A1, A0;
  - Study the code-converter circuit in static and dynamic modes. In static mode, use the counter output signals as the variables A3, A2, A1, A0 and feed the counter input from a switch; in dynamic mode, feed it from a pulse generator.
Analyze the operation of the code converter using its truth table and the timing diagram of its input and output signals.
2. Study of the converter from the assigned BCD code to BCD 8421:
  - synthesize the code-converter circuit. Present the synthesis results in the NAND basis;
  - wire the 4-bit binary counter and code-converter circuits together; use the counter output signals as the variables B3, B2, B1, B0;
  - study the code-converter circuit in static and dynamic modes. In static mode, use the counter output signals as the variables B3, B2, B1, B0 and feed the counter input from a switch; in dynamic mode, feed it from a pulse generator. Analyze the operation of the code converter using its truth table and the timing diagram of its input and output signals.
3. Write up a report.
= Execution
== Objective
Study the principles of construction and methods of synthesis of binary-coded-decimal (BCD) code converters; simulate BCD converter circuits.
== Study of the converter from BCD 8421 to the assigned code
=== Transition table
#align(center)[
#tablex(
columns: 10,
inset: 4pt,
align: center + horizon,
$space A space$,$a_3$,$a_2$,$a_1$,$a_0$,$space B space$, $b_3$,$b_2$,$b_1$,$b_0$,
$0$, $0$, $0$, $0$, $0$, $0$, $0$, $0$, $0$, $0$,
$1$, $0$, $0$, $0$, $1$, $1$, $0$, $0$, $0$, $1$,
$2$, $0$, $0$, $1$, $0$, $3$, $0$, $0$, $1$, $1$,
$3$, $0$, $0$, $1$, $1$, $4$, $0$, $1$, $0$, $0$,
$4$, $0$, $1$, $0$, $0$, $5$, $0$, $1$, $0$, $1$,
$5$, $0$, $1$, $0$, $1$, $8$, $1$, $0$, $0$, $0$,
$6$, $0$, $1$, $1$, $0$, $9$, $1$, $0$, $0$, $1$,
$7$, $0$, $1$, $1$, $1$, $11$, $1$, $0$, $1$, $1$,
$8$, $1$, $0$, $0$, $0$, $12$, $1$, $1$, $0$, $0$,
$9$, $1$, $0$, $0$, $1$, $13$, $1$, $1$, $0$, $1$,
)]
=== Canonical disjunctive normal form
$
b_3 =
ab_3 a_2 ab_1 a_0 or
ab_3 a_2 a_1 ab_0 or
ab_3 a_2 a_1 a_0 or
a_3 ab_2 ab_1 ab_0 or
a_3 ab_2 ab_1 a_0
$
// 0101 0110 0111 1000 1001
// 0000 0001 0010 0011 0100
$
b_2 =
ab_3 ab_2 a_1 a_0 or
ab_3 a_2 ab_1 ab_0 or
a_3 ab_2 ab_1 ab_0 or
a_3 ab_2 ab_1 a_0
$
//0011 0100 1000 1001
$
b_1 =
ab_3 ab_2 a_1 ab_0 or
ab_3 a_2 a_1 a_0
$
// 0010 0111
$
b_0 =
ab_3 ab_2 ab_1 a_0 or
ab_3 ab_2 a_1 ab_0 or
ab_3 a_2 ab_1 ab_0 or
ab_3 a_2 a_1 ab_0 or
ab_3 a_2 a_1 a_0 or\ or
a_3 ab_2 ab_1 a_0\
$
// 0001 0010 0100 0110 0111 1001
#pagebreak()
=== Karnaugh maps and minimization of the canonical DNF
We minimize the canonical DNF expressions and convert them to the NAND basis
#img(image("1.png", width: 50%), [Karnaugh map for $b_3$])
$
b_3 = a_3 or a_2 a_1 or a_2 ab_1 a_0 = nt(ab_3 and nt(a_2 a_1,"___________") and nt(a_2 ab_1 a_0 ,"________________"), "_______________________________________________")
$
#img(image("2.png", width: 50%), [Karnaugh map for $b_2$])
$
b_2 = a_2 ab_1 ab_0 or a_3 or ab_2 a_1 a_0 = nt(ab_3 and nt(a_2 ab_1 ab_0,"________________") and nt(ab_2 a_1 a_0,"________________"),"_____________________________________________________")
$
#img(image("3.png", width: 50%), [Karnaugh map for $b_1$])
$
b_1 = a_2 a_1 a_0 or ab_2 a_1 ab_0 = nt(nt(a_2 a_1 a_0,"________________") and nt(ab_2 a_1 ab_0,"________________"),"___________________________________________")
$
#img(image("4.png", width: 50%), [Karnaugh map for $b_0$])
$
b_0 = a_2 a_1 a_0 or ab_2 ab_1 a_0 or a_1 ab_0 or a_2 ab_0 =\ = nt(nt(a_2 a_1 a_0,"________________") and nt(ab_2 ab_1 a_0,"________________") and nt(a_1 ab_0,"____________")and nt(a_2 ab_0,"____________"), "____________________________________________________________________________")
$
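The minimized expressions can be cross-checked against the transition table with a short script (a Python side check, not part of the lab deliverable; `n2`, `n1`, `n0` stand for the inverted inputs):

```python
# Cross-check of the minimized expressions against the transition table
# (BCD 8421 value A -> assigned code B).
TABLE = {0: 0, 1: 1, 2: 3, 3: 4, 4: 5, 5: 8, 6: 9, 7: 11, 8: 12, 9: 13}

def convert(a):
    a3, a2, a1, a0 = (a >> 3) & 1, (a >> 2) & 1, (a >> 1) & 1, a & 1
    n2, n1, n0 = 1 - a2, 1 - a1, 1 - a0  # inverted inputs
    b3 = a3 | (a2 & a1) | (a2 & n1 & a0)
    b2 = (a2 & n1 & n0) | a3 | (n2 & a1 & a0)
    b1 = (a2 & a1 & a0) | (n2 & a1 & n0)
    b0 = (a2 & a1 & a0) | (n2 & n1 & a0) | (a1 & n0) | (a2 & n0)
    return 8 * b3 + 4 * b2 + 2 * b1 + b0

assert all(convert(a) == b for a, b in TABLE.items())  # all 10 rows match
```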
#pagebreak()
== Study of the converter from the assigned BCD code to BCD 8421
=== Transition table
#align(center)[
#tablex(
columns: 10,
inset: 4pt,
align: center + horizon,
$space A space$,$a_3$,$a_2$,$a_1$,$a_0$,$space B space$, $b_3$,$b_2$,$b_1$,$b_0$,
$0$, $0$, $0$, $0$, $0$, $0$, $0$, $0$, $0$, $0$,
$1$, $0$, $0$, $0$, $1$, $1$, $0$, $0$, $0$, $1$,
$2$, $0$, $0$, $1$, $0$, $3$, $0$, $0$, $1$, $1$,
$3$, $0$, $0$, $1$, $1$, $4$, $0$, $1$, $0$, $0$,
$4$, $0$, $1$, $0$, $0$, $5$, $0$, $1$, $0$, $1$,
$5$, $0$, $1$, $0$, $1$, $8$, $1$, $0$, $0$, $0$,
$6$, $0$, $1$, $1$, $0$, $9$, $1$, $0$, $0$, $1$,
$7$, $0$, $1$, $1$, $1$, $11$, $1$, $0$, $1$, $1$,
$8$, $1$, $0$, $0$, $0$, $12$, $1$, $1$, $0$, $0$,
$9$, $1$, $0$, $0$, $1$, $13$, $1$, $1$, $0$, $1$,
)]
=== Canonical disjunctive normal form
$
a_3 = b_3 b_2 bb_1 bb_0 or b_3 b_2 bb_1 b_0\
a_2 = bb_3 b_2 bb_1 b_0 or b_3 bb_2 bb_1 bb_0 or b_3 bb_2 bb_1 b_0 or b_3 bb_2 b_1 b_0\
a_1 = bb_3 bb_2 b_1 b_0 or bb_3 b_2 bb_1 bb_0 or b_3 bb_2 bb_1 b_0 or b_3 bb_2 b_1 b_0\
a_0 = bb_3 bb_2 bb_1 b_0 or bb_3 b_2 bb_1 bb_0 or b_3 bb_2 bb_1 bb_0 or b_3 bb_2 b_1 b_0 or b_3 b_2 bb_1 b_0
$
=== Karnaugh maps and minimization of the canonical DNF
We minimize the canonical DNF expressions and convert them to the NAND basis
#img(image("5.png", width: 47%), [Karnaugh map for $a_3$])
$
a_3 = b_3 b_2
$
#img(image("6.png", width: 50%), [Karnaugh map for $a_2$])
$
a_2 = bb_3 b_2 b_0 or b_3 bb_2 = nt(nt(bb_3 b_2 b_0,"_______________") and nt(b_3 bb_2,"____________"), "_______________________________________")
$
#img(image("7.png", width: 50%), [Karnaugh map for $a_1$])
$
a_1 = bb_3 b_2 bb_0 or b_3 bb_2 b_0 or b_1 = nt(nt(bb_3 b_2 bb_0,"_________________") and nt(b_3 bb_2 b_0,"_________________" ) and bb_1, "_____________________________________________________")
$
#img(image("8.png", width: 50%), [Karnaugh map for $a_0$])
$
a_0 = bb_3 b_2 bb_0 or b_3 bb_2 bb_0 or b_3 b_2 b_0 or b_3 b_1 or bb_3 bb_2 bb_1 b_0 =\ =
nt(nt(bb_3 b_2 bb_0,"_________________") and nt(b_3 bb_2 bb_0,"_________________" ) and nt(b_3 b_2 b_0,"______________" ) and nt(b_3 b_1, "___________") and nt(bb_3 bb_2 bb_1 b_0, "_____________________"), "____________________________________________________________________________________________________________")
$
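The minimized expressions of the reverse converter can likewise be cross-checked against the transition table (a Python side check, not part of the lab deliverable; `n3`–`n0` stand for the inverted inputs):

```python
# Cross-check of the reverse converter (assigned code B -> BCD 8421 value A).
TABLE = {0: 0, 1: 1, 3: 2, 4: 3, 5: 4, 8: 5, 9: 6, 11: 7, 12: 8, 13: 9}

def convert_back(b):
    b3, b2, b1, b0 = (b >> 3) & 1, (b >> 2) & 1, (b >> 1) & 1, b & 1
    n3, n2, n1, n0 = 1 - b3, 1 - b2, 1 - b1, 1 - b0  # inverted inputs
    a3 = b3 & b2
    a2 = (n3 & b2 & b0) | (b3 & n2)
    a1 = (n3 & b2 & n0) | (b3 & n2 & b0) | b1
    a0 = ((n3 & b2 & n0) | (b3 & n2 & n0) | (b3 & b2 & b0)
          | (b3 & b1) | (n3 & n2 & n1 & b0))
    return 8 * a3 + 4 * a2 + 2 * a1 + a0

assert all(convert_back(b) == a for b, a in TABLE.items())  # all 10 rows match
```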
= Circuit
We synthesize a single circuit whose left half converts BCD 8421 into the assigned BCD code and whose right half converts the assigned BCD code back into BCD 8421. If the codes at the input match the codes at the output, everything was done correctly.
#img(image("9.png", width: 100%), [Circuit])
As can be seen, the codes match, so the circuit is correct.
= Review questions
+ How are redundant input combinations determined?
Redundant input combinations are all those combinations that are not used to encode decimal digits.
+ What is the rule for writing a function in canonical conjunctive normal form?
All input combinations on which the function takes the value 0 are written as elementary disjunctions, with variables equal to 0 written in direct form and the others in inverted form. The conjunction of all these elementary disjunctions is then taken, which gives the required function.
+ How are redundant input combinations taken into account when minimizing logic functions?
Redundant combinations may be added to the function's expression during minimization; they do not affect the result of the expression.
+ What is the rule for converting a logic function from DNF to the NAND basis?
The DNF of the function is minimized and then expressed in the NAND basis using the double-negation law and De Morgan's laws.
|
|
https://github.com/caro3dc/typststuff | https://raw.githubusercontent.com/caro3dc/typststuff/main/styletest.typ | typst | #import "jx-style.typ": *
#show: docu.with(
debug: true,
headingnum: "1.1"
)
= Snoo PINGAS usual, I see.
a |
|
https://github.com/TGM-HIT/typst-protocol | https://raw.githubusercontent.com/TGM-HIT/typst-protocol/main/docs/manual.typ | typst | MIT License | #import "@preview/tidy:0.3.0"
#import "@preview/crudo:0.1.0"
#import "template.typ": *
#import "/src/lib.typ" as template
#let package-meta = toml("/typst.toml").package
// #let date = none
#let date = datetime(year: 2024, month: 10, day: 9)
#show: project.with(
title: "TGM HIT protocol template",
// subtitle: "...",
authors: package-meta.authors.map(a => a.split("<").at(0).trim()),
abstract: [
A protocol template for students of the HIT department at TGM Wien.
],
url: package-meta.repository,
version: package-meta.version,
date: date,
)
// the scope for evaluating expressions and documentation
#let scope = (template: template)
= Introduction
This template is aimed at students of the information technology department at the TGM technical secondary school in Vienna. It can be used both in the Typst app and using the CLI:
Using the Typst web app, you can create a project by e.g. using this link: #link("https://typst.app/?template=" + package-meta.name + "&version=latest").
To work locally, use the following command:
#raw(
block: true,
lang: "bash",
"typst init @preview/" + package-meta.name
)
If you are getting started writing your protocol, you will likely be better off looking into the document created by the template: it contains instructions and examples for the most important features of this template. If you have not yet initialized the template, a rendered version is linked in the README. If you are new to Typst, also check out the Typst documentation: https://typst.app/docs/.
The rest of this manual documents the individual functions offered by this package. This is useful if you want to know what customization options are available, or you're not sure what parts of the template package do.
As a school-specific template, this package does not offer enough configurability to be used at different institutions. However, if you like this template, feel free to adapt the code (MIT-licensed) to your needs, or open a Github issue if you think the template could be adapted to work for your requirements.
= Module reference
== `tgm-hit-protocol`
#{
let module = tidy.parse-module(
read("/src/lib.typ"),
// label-prefix: "tgm-hit-protocol.",
scope: scope,
)
tidy.show-module(
module,
sort-functions: none,
style: tidy.styles.minimal,
)
}
|
https://github.com/topdeoo/NENU-Thesis-Typst | https://raw.githubusercontent.com/topdeoo/NENU-Thesis-Typst/master/utils/datetime.typ | typst | #let datetime-display-cn-cover(date) = {
date.display("[year] 年 [month] 月")
}
#let datetime-display-cn-declare(date) = {
date.display("[year] 年 [month] 月 [day] 日")
} |
|
https://github.com/jgm/typst-hs | https://raw.githubusercontent.com/jgm/typst-hs/main/test/typ/compiler/show-bare-00.typ | typst | Other | #set page(height: 130pt)
#set text(0.7em)
#align(center)[
#text(1.3em)[*Essay on typography*] \
<NAME>
]
#show: columns.with(2)
Great typography is at the essence of great storytelling. It is the medium that
transports meaning from parchment to reader, the wave that sparks a flame
in booklovers and the great fulfiller of human need.
|
https://github.com/erfan-khadem/resume | https://raw.githubusercontent.com/erfan-khadem/resume/main/modules_en/projects.typ | typst | Apache License 2.0 | #import "../brilliant-CV/template.typ": *
#cvSection("Projects & Associations")
#cvEntry(
title: [Advanced Algorithms in Rust],
society: [#link("github.com/TheAlgorithms/Rust", [#underline("TheAlgorithms/Rust")])],
date: [2022],
location: [Github],
description: list(
[Implement advanced graph algorithms such as centroid decomposition and max-flow.],
[Implement various cryptographic algorithms including Salsa20 and HMAC.],
[Improve and optimize the implementation of various algorithms like SHA256 and DSU.],
[Many other big and small improvements and implementation.]
)
)
#cvEntry(
title: [SSH Anti-Censorship Tool],
society: [#link("github.com/er888kh/ssh-liberty-bridge", [#underline("SSH-liberty-bridge")])],
date: [2023],
location: [Github],
description: list(
[Implemented in go with the user management component written in python.],
[Has been used successfully for bypassing GFW restrictions in Iran and bringing internet freedom to the citizens.],
[Integrated in hiddify's excellent GFW bypass solution.],
)
)
#cvEntry(
title: [Path Tracing Ray-Tracer],
society: [#link("github.com/er888kh/ptrt", [#underline("ptrt")])],
date: [2022],
location: [Github],
description: list(
[A simple path tracer implemented in pure C++.],
[Using various statistical and SIMD tricks to improve performance.],
[Parallel processing implemented using OpenMP.],
//[#image("../src/projects/ptrt-week1.jpeg")]
)
)
#cvEntry(
title: [Jungle Marble Blast Game Clone],
society: [#link("github.com/er888kh/bp-cpp", [#underline("bp-cpp")])],
date: [2023],
location: [Github],
description: list(
[This was the project for our basic-programming course.],
[Implemented using SDL and Dear ImGui libraries in C++.],
//[#image("../src/projects/gameplay.png")]
)
) |
https://github.com/benjavicente/tt-typst | https://raw.githubusercontent.com/benjavicente/tt-typst/main/README.md | markdown | # Trabajo de título
Base para escribir el trabajo de título en [typst](https://typst.app/home/).
Esto no es un template [en el sentido de typst](https://typst.app/docs/tutorial/making-a-template/#toy-template). Solo entrega configuración requeridas y buenos defectos.
## Compilar
Instalar [typst](https://github.com/typst/typst?tab=readme-ov-file#installation)
```bash
# typst watch [input] [output]
typst watch main.typ main.pdf
```
## Recommended setup in VS Code

- Install a typst LSP extension. [Tinymist Typst](https://marketplace.visualstudio.com/items?itemName=myriad-dreamin.tinymist) has good defaults, and [Typst LSP](https://marketplace.visualstudio.com/items?itemName=nvarner.typst-lsp) is minimalist, but it is worth disabling automatic compilation on save:
```json
"typst-lsp.exportPdf": "never"
```
- Install the [PDF Viewer](https://marketplace.visualstudio.com/items?itemName=mathematic.vscode-pdf) extension if you want to view the PDF next to the code. Other extensions and apps do not update the PDF automatically.
|
|
https://github.com/AugustinWinther/structured-uib | https://raw.githubusercontent.com/AugustinWinther/structured-uib/main/example/main.typ | typst | MIT License | // IMPORTS
#import "@preview/structured-uib:0.1.0": *
#import "@preview/codedis:0.1.0": code
// TEMPLATE SETTINGS
#show: report.with(
task-no: "1",
task-name: "Måling og behandling av måledata",
authors: (
"<NAME>",
"<NAME>",
"<NAME>"
),
mails: (
"<EMAIL>",
"<EMAIL>",
"<EMAIL>"
),
group: "1-1",
date: "29. Apr. 2024",
supervisor: "Professor Professorsen",
)
// INNHOLDSFORTEGNELSE (automatisk fyllt ut)
#outline()
// 1 - MÅLSETTING
= Oppgavens målsetting <chap.målsetting>
Hva gjør vi og hvorfor? Kanskje vi får bruk for @eq.cool? Eller koden fra @code.litt_kode i @chap.kode?
$
f(x) = (pi e^(i x))
$<eq.cool>
// 2. MÅLEOPPSTILLING
= Beskrivelse av måleoppstilling <chap.måleoppstilling>
Hvordan gjorde vi det? Jo, bare se på @fig.katapult!
#figure(
caption: [En liten katapult #footnote([Den er egt. ganske stor!]).],
image("katapult.png", width: 30%)
)<fig.katapult>
// 3. UTFØRELSE
= Utførelse og målinger <chap.utførelse>
Hva fikk vi? Er $pi = 3.14$? Diskusjonen finnes i @chap.konklusjon.
#figure(
caption: [Bare litt data],
table(
columns: 3,
[*Data 1*], [*Data 2*], [*Data 3*],
[420], [1337], [1999]
)
)<tabl.data>
// 4. KONKLUSJON
= Konklusjon og diskusjon <chap.konklusjon>
Hvorfor skjedde det som skjedde? Hvorfor fikk vi det vi fikk i @tabl.data? Kanskje vi finner svaret i Newtons Principa Matemtika @bibl.newton.
// REFERANSER (automatisk fyllt ut)
#bibliography("references.bib")
// APPENDIKS innhold etter denne "show" linjen
#show: appendices
= Kode <chap.kode>
#figure(
caption: [Bare litt kode],
code(read("litt_kode.py"))
)<code.litt_kode>
|
https://github.com/jgm/typst-hs | https://raw.githubusercontent.com/jgm/typst-hs/main/test/typ/compiler/let-26.typ | typst | Other | // Error: 7-11 cannot destructure named elements from an array
#let (a: a, b) = (1, 2, 3)
|
https://github.com/ralphmb/typst-template-stats-dissertation | https://raw.githubusercontent.com/ralphmb/typst-template-stats-dissertation/main/writeup/sections/litreview.typ | typst | Apache License 2.0 | == Topic 1
#lorem(200)
== Topic 2
#lorem(200)
== Topic 3
#lorem(150)
== Topic 4
#lorem(120) |
https://github.com/jamesrswift/ionio-illustrate | https://raw.githubusercontent.com/jamesrswift/ionio-illustrate/main/dist/0.2.0/src/lib.typ | typst | MIT License | #import "@preview/cetz:0.1.2"
#import "util.typ": *
#import "defaults.typ": *
/// Returns an object representing mass spectrum content.
///
/// - data (array): The mass spectrum in the format of a 2D array, or an array of dictionaries.
///          By default, the mass-charge ratios are in the first column, and the relative
///          intensities are in the second column.
/// - args (dictionary): Override default behaviour of the mass spectrum by overriding methods,
/// or setting fields.
/// -> dictionary, none
#let mass-spectrum(
data,
args: (:),
) = {
let prototype = (
// --------------------------------------------
// Public member data
// --------------------------------------------
data: data,
keys: (
mz: 0,
intensity: 1
),
size: (14,5),
range: (40, 400),
style: mass-spectrum-default-style,
labels: (
x: [Mass-Charge Ratio],
y: [Relative Intensity (%)]
),
linestyle: (this, idx)=>{},
plot-extras: (this)=>{},
)
// Overrides. This ensures the prototype is properly formed by the time we need it
prototype = merge-dictionary(prototype,args)
prototype.style = merge-dictionary(mass-spectrum-default-style,prototype.style)
// --------------------------------------------
// Methods : Utility
// --------------------------------------------
/// Get the intensity of a mass-peak for a given mass-charge ratio
  ///
  /// - mz (string, integer, float): Mass-charge ratio for which the intensity is being queried
  /// -> float
prototype.get-intensity-at-mz = (mz) => {
let intensity_arr = (prototype.data).filter(
it=>float(it.at(prototype.keys.mz, default:0))==mz
)
if ( intensity_arr.len() == 0 ) {return 0}
return float(
intensity_arr.at(0).at(prototype.keys.intensity)
)
}
// --------------------------------------------
// Methods : Additional Content
// --------------------------------------------
// Plot-extras function that will place content above a mass peak
// - mz (string, integer, float): Mass-charge above which to display content
// - content (content, string, none): Content to show above mass peak. Defaults to given mz
// - y-offset (length): Distance at which to display content above mass peak
// -> content
prototype.callout-above = (mz, content: none, y-offset: 0.3em) => {
if ( mz <= prototype.range.at(0) or mz >= prototype.range.at(1) ){ return }
if ( content == none ) { content = mz}
return cetz.draw.content(
anchor: "bottom",
(mz, (prototype.get-intensity-at-mz)(mz)), box(inset: y-offset, [#content]),
..prototype.style.callouts
)
}
prototype.callipers = (
start, end, // mass-charge ratios
height: none,
content: none,
) => {
if (content == none){ content = [-#calc.abs(start - end)] }
// Determine height
let start_height = (prototype.get-intensity-at-mz)(start)
let end_height = (prototype.get-intensity-at-mz)(end)
if ( height == none ) { height = calc.max(start_height, end_height) + 5 }
let draw-arrow(x, y) = cetz.draw.line(
(x - 0.5, y + 2),(x + 0.5, y + 2),
..prototype.style.callipers.line
)
// Draw
return {
// Start : horizontal arrow
draw-arrow(start, start_height)
draw-arrow(end, end_height)
cetz.draw.merge-path({
cetz.draw.line((start, start_height + 2), (start, height))
cetz.draw.line((start, height), (end, height))
cetz.draw.line((end, height),(end, end_height + 2))
}, ..prototype.style.callipers.line)
// Content
cetz.draw.content(
( (start + end) / 2, height),
anchor: "bottom",
box(inset: 0.3em, content),
..prototype.style.callipers.content
)
}
}
prototype.title = (content, anchor: "top-left", ..args) => {
return cetz.draw.content(
anchor: anchor,
(prototype.range.at(0), 110),
box(inset: 0.5em, content),
..prototype.style.title,
..args
)
}
// --------------------------------------------
// Methods : Property Setup, Internal
// --------------------------------------------
prototype.setup-plot = (ctx, x, y, ..arguments) => {
cetz.axes.scientific(
size: prototype.size,
// Axes
top: none, bottom: x,
right: none, left: y, // TODO: Optional secondary axis
..arguments
)
}
prototype.setup-axes = () => {
let axes = (:)
axes.x = cetz.axes.axis(
min: prototype.range.at(0),
max: prototype.range.at(1),
label: prototype.labels.x,
//ticks: (step: 10, minor-step: none)
)
axes.y = cetz.axes.axis(
min: 0,
max: 110,
label: prototype.labels.y,
ticks: (step: 20, minor-step: none)
)
return axes
}
// --------------------------------------------
// Methods : Rendering
// --------------------------------------------
// ms.display-single-peak handles the rendering of a single mass peak
prototype.display-single-peak = (idx, mz, intensity, ..arguments) =>{
if (mz > prototype.range.at(0) and mz < prototype.range.at(1) ){
cetz.draw.line(
(mz, 0),
(rel: (0,intensity)),
..arguments, // Global style is overriden by individual style
..(prototype.linestyle)(prototype, idx)
)
}
}
/// The ms.display method is responsible for rendering
prototype.display = () => {
// Setup canvas
cetz.canvas({
import cetz.draw: *
let (x,y) = (prototype.setup-axes)()
// Begin group
cetz.draw.group(ctx=>{
// Style
let style = merge-dictionary(
merge-dictionary(mass-spectrum-default-style, cetz.styles.resolve(ctx.style, (:), root: "mass-spectrum")),
prototype.style
)
// Setup scientific axes
(prototype.setup-plot)(ctx, x, y, ..style.axes)
cetz.axes.axis-viewport(prototype.size, x, y,{
// Add in plot extras first
(prototype.plot-extras)(prototype)
// Add each individual mass peak
if prototype.data.len() > 0 {
for (i, row) in data.enumerate() {
let x = float(row.at(prototype.keys.mz))
let y = float(row.at(prototype.keys.intensity))
            (prototype.display-single-peak)(i, x, y, ..style.peaks)
}
}
})
})
})
}
// Asserts
assert(type(prototype.keys.mz) in (int, str))
assert(type(prototype.keys.intensity) in (int, str))
return prototype
}
#let MolecularIon(charge:none) = [M#super()[#charge+]] |
https://github.com/MichaelFraiman/TAU_template | https://raw.githubusercontent.com/MichaelFraiman/TAU_template/main/mfraiman-lists.typ | typst | // LISTS
// numbering with letters
#let enum_l(..items) = {
enum(
numbering: {
(..nums) => {
set text(
weight: "extrabold"
)
numbering("a.", nums.pos().first())
}
}
,
..items)
}
#let enum_cond(..items) = {
enum(
numbering: {
(..nums) => {
set text(
weight: "extrabold"
)
numbering("1.", nums.pos().first())
}
}
,
..items)
}
// numbering with no indentation
#let enum_big(
spacing: 1em,
..items) = {
enum(
numbering: {
(..nums) => {
if nums.pos().len() == 1 {
set text(weight: "extrabold")
"("
nums.pos().map(str).first()
")"
} else {
"("
set text(weight: "bold")
nums.pos().map(str).first()
"."
set text(weight: "medium")
nums.pos().slice(1).map(str).join(".")
")"
}
}
},
full: true,
indent: 0em,
number-align: start + top,
spacing: spacing,
tight: false,
..items
)
}
// inline numbering
#let rr(
counter,
value: none,
pts: none
) = [#{
text(weight: "bold", style: "normal")[#{
"("
if value == none {
counter.step()
counter.display()
} else {
str(value)
}
if pts != none {
text(weight: "regular")[#{", " + pts + " points"}]
}
[)]
}]
}]
#let list_bull(
spacing: 1em,
..items
) = {
list(
marker: sym.diamond.filled,
spacing: spacing,
tight: false,
..items
)
}
#let ur(
t,
marker: sym.diamond.filled
) = text(style: "italic")[
#smallcaps[#{
marker + [~] + t
}]
]
#let enum_col(
columns: (1fr),
indent: 0em,
counter_inline,
..items
) = {
let i = ()
for item in items.pos() {
i.push({
h(indent)
rr(counter_inline) + item
})
}
grid(
columns: columns,
row-gutter: 1em,
..i
)
} |
|
https://github.com/soul667/typst | https://raw.githubusercontent.com/soul667/typst/main/PPT/MATLAB/touying/docs/docs/themes/simple.md | markdown | ---
sidebar_position: 1
---
# Simple Theme

This theme is derived from [Polylux](https://polylux.dev/book/themes/gallery/simple.html), created by <NAME>.
A relatively simple theme, it lets you create straightforward slides and easily add the features you like.
## Initialization
You can initialize the Simple theme using the following code:
```typst
#import "@preview/touying:0.2.1": *
#let s = themes.simple.register(s, aspect-ratio: "16-9", footer: [Simple slides])
#let s = (s.methods.enable-transparent-cover)(self: s)
#let (init, slide, slides, title-slide, centered-slide, focus-slide) = utils.methods(s)
#show: init
```
The `register` function takes parameters such as:
- `aspect-ratio`: The aspect ratio of the slides, either "16-9" or "4-3", with the default being "16-9".
- `footer`: Content to be displayed in the footer, with the default being `[]`. You can also pass a function like `self => self.info.author`.
- `footer-right`: Content to be displayed on the right side of the footer, with the default being `states.slide-counter.display() + " / " + states.last-slide-number`.
- `background`: Background color, with the default being white.
- `foreground`: Text color, with the default being black.
- `primary`: Theme color, with the default being `aqua.darken(50%)`.
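
For instance, overriding a few of these at registration time might look like this (the color value is purely illustrative):

```typst
#let s = themes.simple.register(
  s,
  aspect-ratio: "4-3",
  footer: self => self.info.author,
  primary: rgb("#2a6041"),
)
```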
## Slide Function Family
The Simple theme provides a series of custom slide functions:
```typst
#centered-slide(section: ..)[
...
]
```
A slide with content centered on the slide. The `section` parameter can be used to create a new section.
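
As a concrete sketch, a section divider built with `centered-slide` could be (section name is illustrative):

```typst
#centered-slide(section: [Background])[
  Everything you need to know before we start.
]
```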
---
```typst
#title-slide[
...
]
```
Similar to `centered-slide`, this is just for consistency with Polylux syntax.
---
```typst
#slide(
repeat: auto,
setting: body => body,
composer: utils.side-by-side,
section: none,
subsection: none,
// simple theme args
footer: auto,
)[
...
]
```
This is the default ordinary slide function with a header and footer. The header is set to the current section, and the footer is as per your settings.
---
```typst
#focus-slide(foreground: ..., background: ...)[
...
]
```
Used to draw attention. It optionally accepts a foreground color (default is `white`) and a background color (default is `auto`, i.e., `self.colors.primary`).
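
A minimal focus slide, with an illustrative custom background color, might be:

```typst
#focus-slide(background: rgb("#aa0000"))[
  _Remember this one thing!_
]
```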
## `slides` Function
```typst
#import "@preview/touying:0.2.1": *
#let s = themes.simple.register(s, aspect-ratio: "16-9", footer: [Simple slides])
#let s = (s.methods.enable-transparent-cover)(self: s)
#let (init, slide, slides, title-slide, centered-slide, focus-slide) = utils.methods(s)
#show: init
#show: slides
= Title
== First Slide
Hello, Touying!
#pause
Hello, Typst!
```

## Example
```typst
#import "@preview/touying:0.2.1": *
#let s = themes.simple.register(s, aspect-ratio: "16-9", footer: [Simple slides])
#let s = (s.methods.enable-transparent-cover)(self: s)
#let (init, slide, slides, title-slide, centered-slide, focus-slide) = utils.methods(s)
#show: init
#title-slide[
= Keep it simple!
#v(2em)
Alpha #footnote[Uni Augsburg] #h(1em)
Bravo #footnote[Uni Bayreuth] #h(1em)
Charlie #footnote[Uni Chemnitz] #h(1em)
July 23
]
#slide[
== First slide
#lorem(20)
]
#focus-slide[
_Focus!_
This is very important.
]
#centered-slide(section: [Let's start a new section!])
#slide[
== Dynamic slide
Did you know that...
#pause
...you can see the current section at the top of the slide?
]
```
|
|
https://github.com/N4tus/uf_algo | https://raw.githubusercontent.com/N4tus/uf_algo/main/algo_base.typ | typst | MIT License | #import "types.typ": *
#import "style_base.typ": eval_line
#let _ex_format(name, fn) = makeup => {
let format = if t_check(TMakeup, makeup) {
makeup.format
} else {
makeup
}
if name == none or name in format {
fn(format)
} else {
panic("formatter '" + repr(format) + "' is missing '" + name + "' field")
}
}
#let block(body, indent, makeup) = {
(body: body, indent: indent, makeup: makeup)
}
#let _choose_formatter(format, default, name) = {
if name in format {
format.at(name)
} else {
default
}
}
#let _line_format(line_part, format: none) = {
if type(line_part) == "array" {
line_part.flatten().map(part => if type(part) == "function" { part(format) } else { part })
} else if type(line_part) == "function" {
line_part(format)
} else {
line_part
}
}
#let constant(name) = _ex_format(name, format => format.at(name))
#let expr(name, ..values, format: none, format_name: "format") = _ex_format(name, _format => {
let values = values.pos()
let default_format = if format != none { format } else { (..v)=>v.pos().flatten().map(v => if t_check(TNumber, v) { str(v) } else {v}) }
let formatter = _choose_formatter(_format.at(name), default_format, format_name)
formatter(..values.map(_line_format.with(format: _format)))
})
#let control_flow(name, ..values, format_head: none, format_tail: none, format_head_name: "format_head", format_tail_name: "format_tail", indent: 1em, indent_name: "indent", with_body: d=>d) = (..body) => {
(
expr(name, ..values, format: format_head, format_name: format_head_name),
block(with_body(body.pos()), _ex_format(name, format => _choose_formatter(format.at(name), indent, indent_name)), _ex_format(name, format => if "makeup" in format.at(name) {format.at(name).makeup})),
expr(name, ..values, format: format_tail, format_name: format_tail_name),
)
}
#let _mk_array(e) = if type(e) == "array" {e} else {(e,)}
#let append_content(before: (), after: ()) = content => {
_mk_array(before) + _mk_array(content) + _mk_array(after)
}
#let algo_body(makeup) = (..content) => {
t_assert(TMakeup, makeup)
let resolve_groups(elem, start_idx) = {
let i = start_idx
let items = ()
while i < elem.len() {
let e = elem.at(i)
if t_check(TGroupStart, e) {
let (inner, next_idx) = resolve_groups(elem, i+1)
i = next_idx
let start_style = if "start_style" in e { e.start_style } else { v=>v }
let end_style = if "end_style" in e { e.end_style } else { v=>v }
let use_start = if "use_start" in e { e.use_start } else { v=>v }
let use_end = if "use_end" in e { e.use_end } else { v=>v }
let item = if use_start and use_end {
(e.group)(inner, start_style, end_style)
} else if use_start {
(e.group)(inner, start_style)
} else if use_end {
(e.group)(inner, end_style)
} else {
(e.group)(inner)
}
items.push(item)
} else if t_check(TGroupEnd, e) {
return (items.join(""), i+1)
} else {
items.push(e)
i = i+1
}
}
(items.join(""), i)
}
let indent(lines, level, makeup) = {
for l in lines {
if t_check(TBody, l) {
let m = (l.makeup)(makeup)
indent(l.body, level + (l.indent)(makeup), if m != none { m } else { makeup })
} else if type(l) == "array" {
indent(l, level, makeup)
} else {
let elem = l(makeup)
if elem == none { continue }
let elem = if type(elem) == "array" {
resolve_groups(eval_line(elem, makeup), 0).at(0)
} else {
elem
}
(h(level) + elem,)
}
}
}
indent(content.pos(), 0em, makeup)
}
#let algo(name, makeup) = (..content) => {
t_assert(TMakeup, makeup)
figure(align(
start,
[
#box(line(length: 100%))
#name
#box(line(length: 100%))
#enum(..algo_body(makeup)(..content), numbering: "1:")
#box(line(length: 100%))
]
))
}
|
https://github.com/dadn-dream-home/documents | https://raw.githubusercontent.com/dadn-dream-home/documents/main/contents/10-ket-luan/index.typ | typst | = Kết luận
Như vậy, nhóm đã đạt được những yêu cầu được đề ra ban đầu:
#block(inset: (left: 1cm))[
_Thứ nhất_, nhận và hiển thị dữ liệu từ thiết bị thông qua việc sử dụng
giao thức MQTT, cũng như điều khiển ứng dụng và màn hình LCD.
_Thứ hai_, kiểm tra dữ liệu nhận được vượt quá ngưỡng cho phép thông qua
backend, cũng như giao diện thiết lập ở frontend.
_Thứ ba_, điều khiển thiết bị thông qua ứng dụng frontend.
_Thứ tư_, ghi nhận hoạt động thiết bị thông qua backend cùng với cơ sở dữ
liệu.
_Thứ năm_, sử dụng design pattern trong hiện thực, cụ thể là các pattern
*Mediator*, *MVC*, *Observer*, *Adaptor*.
]
Trong quá trình thực hiện, nhóm cũng có gặp một số khó khăn. Nếu có sai sót,
nhóm mong nhận được sự góp ý của thầy và các bạn. |
|
https://github.com/rabotaem-incorporated/calculus-notes-2course | https://raw.githubusercontent.com/rabotaem-incorporated/calculus-notes-2course/master/sections/02-measure-theory/05-measurable-functions.typ | typst | #import "../../utils/core.typ": *
== Измеримые функции
#notice[
Так как все меры можно продолжить на $sigma$-алгебру, далее мы будем считать, что все меры определены на $sigma$-алгебре.
]
#def(label: "def-lebesgue-set")[
Пусть $f: E --> overline(RR)$. _Лебеговыми множествами фукнции $f$_ называются
$
E{f <= a} = {x in E: f(x) <= a} = f^(-1)([-oo, a]), \
E{f >= a} = {x in E: f(x) >= a} = f^(-1)([a, +oo]), \
E{f < a} = {x in E: f(x) < a} = f^(-1)([-oo, a)), \
E{f > a} = {x in E: f(x) > a} = f^(-1)((a, +oo]). \
$
]
#th(label: "def-measurable-correctness")[
Пусть $E$ --- измеримо, $f: E --> overline(RR)$. Тогда равносильны следующие условия:
+ $E{f <= a}$ измеримы для любого $a in RR$;
+ $E{f < a}$ измеримы для любого $a in RR$;
+ $E{f > a}$ измеримы для любого $a in RR$;
+ $E{f >= a}$ измеримы для любого $a in RR$.
]
#proof[
- "$1) <==> 3)$" --- очевидно, так как $E{f <= a} = E without E{f > a}$.
- "$2) <==> 4)$" --- аналогично.
- "$1) <==> 2)$" --- $E{f < a} = Union_(n = 1)^oo E{f <= a - 1/n}$.
- "$3) <==> 4)$" --- $E{f > a} = Union_(n = 1)^oo E{f >= a + 1/n}$.
]
#def(label: "def-mfn")[
$f: E --> overline(RR)$ называется _измеримой_, если измеримы все ее лебеговы множества#rf("def-lebesgue-set"). (Эквивалентно, если $E$ измерима, и измеримы все лебеговы множества конкретного вида#rf("def-measurable-correctness")).
]
#examples(label: "mfns")[
+ #sublabel("const") Константа на измеримом множестве измерима.
+ #sublabel("indicator") Характеристическая функция множества: $E$ и $A$ измеримы,
$
bb(1)_A (x) = cases(
1\, "если" x in A\,,
0\, "если" x in E without A.
)
$
+ #sublabel("continious") Пусть $E in Ll^m$ и $f in C(RR^m)$. Тогда $f$ измерима относильно меры Лебега. Это получается из того, что $f^(-1)((-oo, a))$ --- открыт, значит лежит в $Bb^m subset Ll^m$. Значит функция измерима.
]
#props(label: "non-integral-mfn-props")[
1. #sublabel("domain-measurable")
Если $f: E --> overline(RR)$ измерима#rf("def-mfn"), то $E$ --- измеримо.
2. #sublabel("restriction-mfn")
Если $f: E --> overline(RR)$ измерима, $overline(E) subset E$ измеримо, то $f bar_overline(E)$ тоже измерима.
3. #sublabel("preimage-segment-measurable")
Если $f$ измерима, то прообраз любого промежутка измерим.
4. #sublabel("preimage-open-measurable")
Если $f$ измерима, то прообраз любого открытого множества измерим.
5. #sublabel("abs-mfn") #sublabel("neg-mfn")
Если $f$ измерима, то $abs(f)$ и $-f$ измеримы.
6. #sublabel("max-mfn") #sublabel("min-mfn")
#sublabel("pos-part-mfn") #sublabel("neg-part-mnf")
Если $f, g: E --> overline(RR)$ измеримы, то $max{f, g}$ и $min{f, g}$ измеримы.
В частности, $f_+ = max{f, 0}$ и $f_- = max{-f, 0}$ измеримы.
7. #sublabel("union-mfn")
Пусть $E = Union E_n$ (конечное или счетное --- не важно), $f: E --> overline(RR)$ и $f bar_E_n$ измеримо для любого $n$. Тогда $f$ измеримо.
8. #sublabel("mfn-is-mfn-restriction")
Каждая измеримая функция $f: E --> overline(RR)$ --- сужение на $E$ некоторой измеримой функции $g: X --> overline(RR)$.
]
#proof[
1. Очевидно из определения#rf("def-mfn").
2. $tilde(E){f <= a} = tilde(E) sect E{f <= a}$#rf("def-mfn").
3. $E{a < f < b} = E{f < b} sect E{f > a}$ --- измеримо#rf("def-mfn"). Аналогично для промежутков другого вида.
4. Пусть $G subset RR$ открытое, тогда#rf("borel-set-over-cells") $G = usb_(j = 1)^oo (a_j, b_j]$. Тогда $f^(-1)(G) = usb_(j = 1)^oo f^(-1)(a_j, b_j]$, а счетное объединение измеримо.
5. По определению#rf("def-mfn"), $E{-f <= a} = E{f >= -a}$. $E{abs(f) <= a} = cases(emptyset\, &a < 0, E{-a <= f <= a}\, quad &a >= 0)$.
6. По определению#rf("def-mfn"), $E{max{f, g} <= a} = E{f <= a} sect E{g <= a}$. Аналогично для $min$.
7. По определению#rf("def-mfn"), $E{f <= a} = Union E_n {f <= a}$.
8. $ g(x) = cases(f(x)\, quad & "если" x in E\,,0\, & "иначе".) $
Так как $E$ измерима#rf("non-integral-mfn-props", "domain-measurable"), эта функция тоже#rf("non-integral-mfn-props", "union-mfn").
]
#th(label: "inf-sup-lim-mfn")[
Пусть $f_n: E --> overline(RR)$ --- измеримые. Тогда:
1. $sup f_n$ и $inf f_n$ измеримы.
2. $overline(lim) f_n$ и $underline(lim) f_n$ измеримы.
3. Если $f = lim f_n$ (поточечный предел), то $f$ измерима.
]
#proof[
1. $
E{sup f_n <= a} &=^rf("def-mfn")
{x in E: sup{f_1(x), f_2(x), ...} <= a} \
&= {x in E: a #[--- верхняя граница множества] {f_1(x), f_2(x), ...}} \
&= {x in E: f_n (x) <= a space forall n} = sect.big_(n = 1)^oo E{f_n <= a}.
$
Аналогично для $inf$.
2. $overline(lim) f_n (x) = inf_n sup_(k >= n) f_k (x)$. Измерим.
3. $f = overline(lim) f_n$, значит $f$ --- измерима.
]
#th(label: "compose-mfn")[
Пусть $f: E --> H subset RR^m$, $f = (f_1, f_2, ..., f_m)$. $f_1$, $f_2$, ..., $f_m$ --- измеримые. Пусть $phi: H --> RR$, $phi in C(H)$. Тогда $F = phi compose f: E --> RR$ измерима.
]
#proof[
Хотим понять, что $E{F < a} =^rf("def-mfn") F^(-1)(-oo, a) = f^(-1)(phi^(-1)(-oo, a))$ измеримо. $phi^(-1)(-oo, a) =: U$ --- открытое множество в $H$, так как $phi$ непрерывна. Тогда $U = H sect G$, где $G$ --- открыто в $RR^m$. Тогда $f^(-1)(U) = f^(-1)(H sect G) = f^(-1)(G)$. Остается доказать, что если $G subset RR^m$ открыто, то $f^(-1)(G)$ измеримо.
Открытые множества представляются в виде счетного объединения ячеек#rf("borel-set-over-cells"):
$G = usb_(n = 1)^oo (a_n, b_n]$, где $a_n, b_n in RR^m$. $f^(-1)(G) = usb_(n = 1)^oo f^(-1)(a_n, b_n]$. Достаточно понять про прообраз одной ячейки. $f^(-1)(c, d] = sect.big_(k = 1)^m f^(-1)_k (c_k, d_k]$ (разложили по кооридинатам). Значит $F$ измерима.
]
#follow[
В условии теоремы в качестве $phi$ можно взять поточечный предел непрерывных функций#rf("inf-sup-lim-mfn")#rf("mfns", "continious").
]
#def(name: "Арифметические операции с бесконечностями", label: "inf-arithmetic")[
1. Если $x in RR$, то $x + (+oo) = x - (-oo) = +oo$ и $x + (-oo) = x - (+oo) = -oo$.
2. Если $x > 0$, то $x dot (plus.minus oo) = plus.minus oo$, если $x < 0$, $x dot (plus.minus oo) = minus.plus oo$.
3. Если $x in overline(RR)$, то $x dot 0 = 0$.
4. #text(red, $(+oo) - (+oo) = (+oo) + (-oo) = (-oo) - (-oo) = 0$).
Деление на $0$ все еще не определено. #text(red, strong[Свойство 4 нарушает ассоциативность сложения:])
$ (1 + (+oo)) + (-oo) &= 0, \ 1 + ((+oo) + (-oo)) &= 1. $
]
#th(label: "mfn-arithmetic")[
1. #sublabel("sum-mfn") #sublabel("prod-mfn") Сумма и произведение измеримых функций --- измеримая функция.
2. #sublabel("compose-mfn") $f: E --> H subset RR, phi in C(H)$. Если $f$ измерима, то $phi compose f$ измерима.
3. #sublabel("power-mfn") Если $f >= 0$ и измерима, $p > 0$, то $f^p$ измерима. (Соглашение: $(+oo)^p = +oo$).
4. #sublabel("inf-mfn") Если $f$ измерима, то $1/f$ измерима на множестве $E{f != 0}$.
]
#proof[
1. $f, g: E --> overline(RR)$ измеримы. Если значения функций конечны, то $phi(x, y) = x + y$. Можно рассмотреть#rf("compose-mfn") композицию $phi (f, g)$. Аналогично с умножением. Вопрос только с бесконечностями (в теореме#rf("compose-mfn") обе функции действуют в $RR$).
Если какие-то функции, режем функцию на части#rf("non-integral-mfn-props", "union-mfn"): $E{f = -oo}$, $E{-oo < f < 0}$, $E{f = 0}$, $E{0 < f < +oo}$, $E{f = +oo}$, аналогично для $g$. Тогда на пересечении где есть хотя бы одна бесконечность, всегда получается константа#rf("mfns", "const"). На остальных кусках применяем рассуждение выше.
2. Просто одномерный частный случай теоремы#rf("compose-mfn").
3. $E{f^p <= a} =^rf("def-mfn") cases(emptyset\, "если" a < 0, E{f <= a^(1/p)}\, "если" a >= 0).$
4. $tilde(E) = E{f != 0}$. $ tilde(E){1/f <= a} =^rf("def-mfn") cases(tilde(E){f >= 1/a} sect tilde(E){f < 0} "если" a < 0, tilde(E){f < 0} union tilde(E){f >= 1/a} "если" a > 0, tilde(E){f < 0} "если" a = 0). $
]
#follow(label: "mfn-arithmetic'")[
1. Произведение конечного числа измеримых функций --- измеримая функция.
2. Натуральная степень измеримой функции --- измеримая функция.
3. Линейная комбинация измеримых функций --- измеримая функция.
]
#th(label: "continious-mfn")[
Пусть $E in Ll^m$, $f in C(E)$. Тогда $f$ измерима относильно меры Лебега.
]
#proof[
$E{f < a} = f^(-1)(-oo, a)$ --- открыто в $E$. Значит $E{f < a} =^rf("def-lebesgue-set") E sect G$, где $G$ открыто в $RR^m$. Значит оно измеримо#rf("def-mfn"). Мне кажется, или это уже где-то#rf("mfns", "continious") было?..
]
#def(label: "def-simple-fn")[
Измеримая#rf("def-mfn") фукнция $f: E --> RR$ (с конечными значениями) называется _простой_, если она принимает конечное число значений.
_Допустимым разбиением простой функции_ называется $E = usb_(k = 1)^n E_k$ такое, что $f bar_E_k$ --- постоянная.
]
#notice[
У любой простой есть допустимое разбиение. В качестве $E_k$ берется $E{f = y_k}$. $y_k$ --- все значения функции.
]
#props(label: "simple-fn-props")[
1. Если $E = usb_(k = 1)^n E_k$ --- измеримо и $f$ постоянна на каждом $E_k$, то $f$ --- простая.
2. #sublabel("common-partition") Для любых двух простых функций есть общее допустимое разбиение.
3. #sublabel("sum-simple") #sublabel("prod-simple") Сумма и произведение простых функций --- простая функция.
4. #sublabel("linear-comb-simple") Линейная комбинация простых функций --- простая функция.
5. #sublabel("max-simple") #sublabel("min-simple") $max$ и $min$ конечного числа простых функций --- простая функция.
]
#proof[
1. Очевидно.
2. $E = usb_(k = 1)^m A_i = usb_(j = 1)^n B_j$. Рассмотрим разбиение на $A_i sect B_j$. Оно подойдет.
3. Свойство 2#rf("simple-fn-props", "common-partition") + тривиальщина#rf("def-simple-fn").
4. Свойство 3#rf("simple-fn-props", "sum-simple") + индукция.
5. Свойство 2#rf("simple-fn-props", "common-partition") + индукция + тривиальщина#rf("def-simple-fn").
]
#th(label: "simple-approx")[
Пусть $f: X --> overline(RR)$, $f >= 0$ измеримая#rf("def-mfn"). Тогда существует последовательность простых функций#rf("def-simple-fn") $phi_n: X --> RR$, $0 <= phi_1 <= phi_2 <= ...$. и $f = lim phi_n$ поточечно. Если $f$ ограниченная, то $phi_n$ можно выбрать так, что $phi_n arrows.rr f$#rf("def-fn-converge") на $X$.
]
#proof[
Пусть $Delta_k := [k/n, (k + 1)/n)$ при $k = 0, 1, ..., n^2 - 1$, $Delta_(n^2) := [n, +oo]$. То есть мы разрезали $[0, +oo]$ на $n^2 + 1$ кусков.
Рассмотрим $A_k := f^(-1) (Delta_k)$ --- измеримые#rf("non-integral-mfn-props", "preimage-segment-measurable"). Тогда пусть $phi_n$ на $A_k$ равняется $k/n$. Это простая функция.
Докажем, что предел получится правильный: $lim phi_n (x) = f(x)$:
- Если $f(x) = +oo$, то $x in f^(-1)([n, +oo])$. Значит $phi_n (x) = n$.
- Если $f(x) < +oo$, то при больших $n$, $x$ не будет лежать в прообразе луча. Пусть $x in.not f^(-1)([n, +oo])$. Тогда $f(x) - 1/n < phi_n (x) <= f(x)$, и, следовательно, $lim phi_n (x) = f(x)$. Причем если $f$ ограничена, то при $n >= N space f^(-1) [n, +oo) = nothing$. Значит $abs(phi_n (x) - f(x)) < 1/n$. Это и есть равномерная сходимость.
Есть только небольшая проблема: не выполняется $0 <= phi_1 <= phi_2 <= ...$. Можно просто брать максимумы по префиксам, тогда все будет хорошо. Но предлагается другой способ: заметить, что $phi_(2^n) <= phi_(2^(n+1))$. Тогда можно просто взять подпоследовательность $phi_(2^n)$. Ну а почему это верно? Пусть для $2^n$ вы выбрали значение $k / 2^n$. Тогда $k/2^n <= f(x) < (k + 1)/2^n$. Тогда для $2^(n + 1)$ можно выбрать либо значение $(2k)/2^(n + 1)$ (если мы попали в первую половину), либо $(2k + 1)/2^(n + 1)$. Оба не меньше.
]
|
|
https://github.com/dariakoz/Typst-Template | https://raw.githubusercontent.com/dariakoz/Typst-Template/main/main.typ | typst | #import "lib/intro.typ": data, tickbox, project_number, toc
#import "lib/todo.typ": todo, outline-todos
#import "@preview/wordometer:0.1.1": word-count, total-words
#import "@preview/sourcerer:0.2.1": code
#import "@preview/acrostiche:0.3.1": *
#let project_title = "DEIN TITEL HIER"
#let project_author = "<NAME>"
// -------------------------------------------
// -------- General document settings --------
// -------------------------------------------
#set page(
paper: "a4",
margin: (left: 3cm, right: 2cm, top: 2.5cm, bottom: 2.5cm),
)
#set document(title: project_title, author: project_author)
#set heading(numbering: "1.1")
#set text(font: "Arial", size: 11pt, lang: "de")
#set par(justify: true, leading: 14pt)
#show heading: head => {
set block(below: 16pt)
head
}
#show par: p => {
p
set block(below: 24pt)
}
#set list(indent: 1cm)
#align(center, heading(
project_title,
outlined: false,
numbering: none,
))
#h(2em)
#stack(
// Adjust your project number here (1-4)
project_number(1),
data("vorgelegt am", "29.04.2024"),
data("von", project_author),
  data("Matrikelnr.", "G230XXXPI"),
data("DHGE Campus", "Gera"),
data("Studienbereich", "Technik"),
data("Studiengang", "Praktische Informatik"),
data("Kurs", "PIB23"),
data("Ausbildungsstätte", "DB Systel GmbH"),
data("Betreuer", "<NAME>"),
data("", "/")
)
#pagebreak()
//Seitenzahl Counter
#set page(
numbering: "I",
number-align: end
)
#counter(page).update(1)
// -----------------------------------
// ------- Inhaltsverzeichnis --------
// -----------------------------------
#toc()
// Format primary headings in bold.
#show outline.entry.where(
level: 1
): it => {
text(weight: "regular", it)
}
#pagebreak()
// -------------------------------------
// ------ Abbildungsverzeichnis --------
// -------------------------------------
#hide(
heading("Abbildungsverzeichnis", numbering: none)
)
#outline(
title: "Abbildungsverzeichnis",
target: figure.where(kind: image)
.or(figure.where(kind: raw))
)
#pagebreak()
// ----------------------------------
// ----- List of tables --------
// ----------------------------------
#hide(
heading("Tabellenverzeichnis", numbering: none)
)
#outline(
title: "Tabellenverzeichnis",
target: figure.where(kind: table)
)
#pagebreak()
// -------------------------------------
// ------ List of abbreviations --------
// ---------------------------------------
// should actually be removed
#heading(
numbering: none,
"Abkürzungsverzeichnis"
)
#init-acronyms((
"ERP": ("Enterprise Resource Planning"),
"B1": ("Business One"),
))
#print-index(
title: "",
sorted: "up",
outlined: false,
delimiter: ""
)
// Reset page counter and change format to numeric.
#set page(numbering: "1")
#counter(page).update(1)
// -----------------------
// ------- Content --------
// -----------------------
//------- I --------
#include "/Abschnitte/Thema 1.typ"
//------- II --------
// Prevent a heading from sitting alone on the last line of a page.
= Analyse
== Ist-Zustand
=== Mobile Anwendung
=== Web Services
=== Interaktion
== Soll-Zustand
== Handlungsbedarf
Nach voriger Auswertung ergeben sich grob folgende Anforderungen die bearbeitet werden müssen:
- Ananas
- Apfel
- Birne
= Technische Struktur
#figure(
table(
columns: (auto, 1fr, 1fr),
inset: 9pt,
align: left,
[*Art*], [*Vorteile*], [*Nachteile*],
"Art 1",
list(indent: 0cm,
[*Lorem!* \
Lorem ipsum dolor sit amet, consetetur sadipscing elitr, sed diam nonumy.],
[*Stet clita kasd* \
At vero eos et accusam et justo duo dolores et ea rebum.]),
list(indent: 0cm,
[Gibt's nicht])
),
caption: [Übersicht der Vor- und Nachteile],
) <kinds>
== Lorem ipsum
== Dolor sit
= Planung
= Umsetzung
= Finalisierung
== Unit-Tests
== Deployment
//
// -- Bibliography --
//
#set page(numbering: "A")
#counter(page).update(1)
#bibliography(
"sources.bib",
title: "Literaturverzeichnis",
style: "csl/duale-hochschule-gera-eisenach-fussnote.csl"
)
#pagebreak()
//
// -- Declaration of honour --
//
#set page(numbering: none)
#pad(
bottom: 1em,
align(center, heading("Ehrenwörtliche Erklärung", numbering: none, outlined: false))
)
Ich erkläre hiermit ehrenwörtlich,
#enum(
tight: false,
spacing: 1em,
[dass ich meine Projektarbeit/Studienarbeit/Bachelorarbeit mit dem Thema: \
*#project_title* \
ohne fremde Hilfe angefertigt habe,],
[dass ich die Übernahme wörtlicher Zitate aus der Literatur sowie die Verwendung der Gedanken anderer Autoren an den entsprechenden Stellen innerhalb der Arbeit gekennzeichnet habe und],
[dass ich meine Projektarbeit/Studienarbeit/Bachelorarbeit bei keiner anderen Prüfung vorgelegt habe.]
)
\
Ich bin mir bewusst, dass eine falsche Erklärung rechtliche Folgen haben wird.
#pad(
top: 1.5cm,
grid(
columns: (1fr, 1fr),
rows: (auto),
gutter: 3pt,
stack(
spacing: 0.5em,
line(length: 80%),
"Ort, Datum"
),
stack(
spacing: 0.5em,
line(length: 80%),
"Unterschrift"
),
)
)
|
|
https://github.com/BeiyanYunyi/resume | https://raw.githubusercontent.com/BeiyanYunyi/resume/main/modules_zh/commercial.typ | typst | #import "../brilliant-CV/template.typ": *
#cvSection("商业项目")
#cvEntry(
title: [小红书前端平台],
society: [Vue 3],
date: [2024.7 - 今],
location: [小红书前端平台全栈项目],
description: list(
[使用Vue 3及自研组件库构建小红书前端平台],
[用于管理公司内RN、H5等不同架构项目的发布、灰度及AB测试]
)
)
#cvEntry(
title: [铭邦消防],
society: [React#hBar()Taro#hBar()VantUI#hBar()Express#hBar()MongoDB#hBar()Typegoose#hBar()Material UI#hBar()tRPC],
date: [2023.7 - 2023.9],
location: [微信小程序+后台全栈项目],
description: list(
[使用Taro、React、VantUI和SWR构建前台微信小程序],
[使用Vite、React、Material UI、tRPC和React Query构建后台前端],
[使用Express、tRPC和Typegoose构建后台后端],
[独立完成后台前后端、参与完成前台前端]
)
) |
|
https://github.com/jgm/typst-hs | https://raw.githubusercontent.com/jgm/typst-hs/main/test/typ/text/quotes-03.typ | typst | Other | // Test escape sequences.
The 5\'11\" 'quick\' brown fox jumps over the \"lazy" dog\'s ear.
|
https://github.com/layandreas/typst-resume | https://raw.githubusercontent.com/layandreas/typst-resume/main/alta-typst.typ | typst | #let primary_colour = rgb("#05173d") // vivid purple
#let link_colour = rgb("#12348e") // blue
#let icon(name, shift: 1.5pt) = {
box(
baseline: shift,
height: 10pt,
image("icons/" + name + ".svg")
)
h(3pt)
}
#let findMe(services) = {
set text(8pt)
let icon = icon.with(shift: 2.5pt)
services.map(service => {
icon(service.name)
if "display" in service.keys() {
link(service.link)[#{service.display}]
} else {
link(service.link)
}
}).join(h(10pt))
[
]
}
#let term(period, location) = {
text(9pt)[#icon("calendar") #period #h(1fr) #icon("location") #location]
}
#let max_rating = 5
#let skill(name, rating) = {
let done = false
let i = 1
name
h(1fr)
while (not done){
let colour = rgb("#c0c0c0") // grey
if (i <= rating){
colour = primary_colour
}
box(circle(
radius: 4pt,
fill: colour
))
if (max_rating == i){
done = true
} else {
// no spacing on last
h(2pt)
}
i += 1
}
[\ ]
}
#let styled-link(dest, content) = emph(text(
fill: link_colour,
link(dest, content)
))
#let alta(
name: "",
links: (),
tagline: [],
content,
) = {
set document(
title: name + "'s CV",
author: name,
)
set text(9.8pt, font: "IBM Plex Sans")
set page(
margin: (x: 54pt, y: 52pt),
)
show heading.where(
level: 2
): it => text(
fill: primary_colour,
[
#{it.body}
#v(-7pt)
#line(length: 100%, stroke: 1pt + primary_colour)
]
)
show heading.where(
level: 3
): it => text(it.body)
show heading.where(
level: 4
): it => text(
fill: primary_colour,
it.body
)
[= #name]
findMe(links)
tagline
columns(
2,
gutter: 15pt,
content,
)
}
|
|
https://github.com/crd2333/template-report | https://raw.githubusercontent.com/crd2333/template-report/master/template/main.typ | typst | MIT License | #import "../lib.typ": *
#show: project.with(
title: "Test Title",
title_2: "Project 1",
author: ("crd233", "张三"),
date: (2024, 2, 30),
cover_style: "report", // report, report1, report2, report3 或其它,false 或 "" 表示无封面
header: "type1", // true or "type1" 使用默认页眉,"type2" 为一个略详细一点的页眉
footer: "type1",
show_toc: true,
show_name: true, // 是否匿名
lang: "zh", // 支持切换语言为 en、zh,会连带更改日期格式、图表标题等
)
= 大标题测试
== 小标题测试
=== 三级标题测试 <title>
引用标题@title
== 文字测试
=== 关于字体
字体:先在"Arial"中寻找,找不到才在黑体、宋体等中文字体中寻找,通过这种方法实现*先英文字体、后中文字体*的效果。这个字体可以先去搜索下载(#link("https://github.com/notofonts/noto-cjk/releases")[下载链接],下载Noto Serif CJK和Noto Sans CJK),或者直接在终端中输入"typst fonts"查看你电脑上的字体,然后修改`font.typ`相关内容为你拥有且喜爱的字体。
English test: Let's have a try! And
this is a dot. This is a quote "". 英文标点也是正常显示。
_斜体_与*粗体*,_Italic_ and *bold*。但是中文没有斜体(事实上,如果字体选择不佳,连粗体都没有),一般用楷体代替 ```typ #show emph: text.with(font: ("Arial", "LXGW WenKai"))```
如果需要真正的斜体,可以使用伪斜体(旋转得到,可能会有 bug?):#fake-italic[中文伪斜体]。
中英文字体之间正常情况下会自动添加空格,像这样test一下。手动添加空格也可以(对Arial和思源字体而言),像这样 test 一下,间隙增加可以忽略不计。如果换用其它字体,可能会出现手动空格导致间隙过大的情况。
=== 关于缩进
使用一个比较 tricky 的包 #link("https://github.com/flaribbit/indenta")[indenta] 来达到类似 LaTeX 中的缩进效果:两行文字间隔一行则缩进,否则不缩进。可能会遇到一些 bug,此时可以使用```typ #noindent[Something]```来取消缩进,比如下面这样:
#hline()
#noindent[
这是一个没有缩进的段落。
空一行,本来应该缩进,但被取消。\
采用连接符换行。
]
#hline()
而在原始情况下是这样:
这是一个有缩进的段落。
空一行,缩进,但被取消。
不空行,视为跟之前同一段落。\
采用连接符换行。
#hline()
#indent 另外,通过 `#indent`(或 `#tab`)能缩进内容,在 indenta 失效时可以使用。
== 图表测试
=== 图片
#fig(caption: "测试图片, 浙江大学", "../assets/校名.jpg") <test>
图片测试引用 @test
=== 公式
Given an $N times N$ integer matrix $(a_(i j))_(N times N)$, find the maximum value of $sum_(i=k)^m sum_(j=l)^n a_(i j)$ for all $1 <= k <= m <= N$ and $1 <= l <= n <= N$.
$ F_n = floor(1 / sqrt(5) phi.alt^n) $ <fib>
$ F_n = floor(1 / sqrt(5) phi.alt^n) $ <->
引用公式Fibonacci: @fib。添加 `<->` 后不再有编号和计数,但也不能引用了。
为了更加简化符号输入,有这么一个包 #link("https://github.com/typst/packages/tree/main/packages/preview/quick-maths/0.1.0")[quick-maths],定义一些快捷方式,比如:
```typ
#show: shorthands.with(
($+-$, $plus.minus$),
($|-$, math.tack),
($<=$, math.arrow.l.double) // Replaces '≤', use =< as '≤'
)
```
$ x^2 = 9 quad <==> quad x = +-3 $
$ A or B |- A $
$ x <= y $
=== 代码
code使用codly实现,会自动捕捉所有成块原始文本,像下面这样,不一定非要调用code命令(调用code命令则是套一层 figure,加上 caption)。
#no-codly[
```raw
disabled code
```
]
```raw
enabled code
```
#strike[代码块经过特殊处理,注释内的斜体、粗体、数学公式会启用 eval]。感觉经常遇到 bug,先禁用了(`lib.typ` 中 ```typ // show raw: comment_process```)
```cpp
cout << "look at the comment" << endl; // _italic_, *bold*, and math $sum$
```
#code(
caption: "This is a code listing",
)[
```c
#include <stdio.h>
int main() {
printf("Hello World!");
return 0;
}
```
] <code>
引用 @code
=== 表格
表格通过原生 table 封装到 figure 中,并添加自动数学环境参数:```typ automath: true```,通过正则表达式检测数字并用 `$` 包裹。
#tbl(
automath: true,
caption: "《计算机组成》:Multiplier",
fill: (x, y) => if y == 0 {aqua.lighten(40%)},
columns: 4,
[Iteration],[Step],[Multiplicand],[Product / Multiplicator],
[0],[initial values],[01100010],[00000000 00010010],
table.cell(rowspan: 2)[1],[0 $=>$ no op],[01100010],[00000000 00010010],
[shift right],[01100010],[00000000 00001001],
table.cell(rowspan: 2)[2],[1 $=>$ prod += Mcand],[01100010],[01100010 00001001],
[shift right],[01100010],[00110001 00000100],
table.cell(rowspan: 2)[3],[0 $=>$ no op],[01100010],[00110001 00000100],
[shift right],[01100010],[00011000 10000010],
table.cell(colspan: 4)[......]
)
#align(center, (stack(dir: ltr)[
#tbl(
// automath: true,
fill: (x, y) => if y == 0 {aqua.lighten(40%)},
columns: 4,
[t], [1], [2], [3],
[y], [0.3s], [0.4s], [0.8s],
caption: [常规表],
) <timing>
][
#h(50pt)
][
#tlt(
// automath: true,
columns: 4,
[t], [1], [2], [3],
[y], [123.123s], [0.4s], [0.8s],
caption: [三线表],
) <timing-tlt>
]))
引用@timing,引用@timing-tlt。
由于习惯了 markdown 的表格,所以 typst 的表格语法可能不太习惯(其实强大很多),但是也有类 markdown 表格 package 的实现:
#tblm(caption: "tablem 实现的类 markdown 表格")[
| *Name* | *Location* | *Height* | *Score* |
| ------ | ---------- | -------- | ------- |
| John | Second St. | 180 cm | 5 |
| Wally | Third Av. | 160 cm | 10 |
]
使用 typst 的数据加载语法,可以读取 csv, json 等格式的数据,以此实现了一些更加快捷(但比较简单,如果要支持合并单元格之类则较困难)的表格。比如下面这个 csv 表格:
#csvtable(
caption: "CSV Table",
```
1,2,3
4,5,6
```
)
= Chapter 2 <caption_2>
#fig("../assets/校名.jpg", caption: "测试图片, 浙江大学", width: 50%) <test2>
图片测试引用 @test2,可以看到现在的编号是 2 开头。
== 列表
Bubble list 语法(更改了图标,使其更类似 markdown,且更大)和 enum 语法:
- 你说
- 得对
- 但是
- 原神
+ 是一款
+ 由米哈游
+ 开发的
+ 开放世界
+ 冒险
+ 游戏
Term list 语法:
/ a: Something
/ b: Something
== 添加脚注
我们可以添加一个脚注。#footnote[脚注内容]
== 参考文献
可以像这样引用参考文献:@wang2010guide 和 @kopka2004guide。
== 其它工具
从第三方包中(or自己编写)预先挑选了一些比较实用的工具,比如:
=== Fletcher
Typst 中的 cetz 就像 LaTeX 中的 tikz 一样,提供强大的画图功能,但是个人感觉略繁琐。#link("https://github.com/Jollywatt/typst-fletcher")[Fletcher] 基于 cetz 提供了 diagrams with arrows 的简单画法。
#import fletcher.shapes: diamond
#diagram(
node-stroke: .1em,
node-fill: gradient.radial(blue.lighten(80%), blue, center: (30%, 20%), radius: 80%),
spacing: 4em,
edge((-1,0), "r", [open(path)], label-pos: 0, label-side: center),
node((0,0), [reading], radius: 2em),
edge([read()]),
node((1,0), [eof], radius: 2em),
edge([close()]),
node((2,0), [closed], radius: 2em, extrude: (-2.5, 0)),
edge((0,0), (0,0), [read()], marks: "--|>", bend: 130deg),
edge((0,0), (2,0), [close()], bend: -40deg),
)
#align(center, grid(
columns: 3,
gutter: 8pt,
diagram(cell-size: 15mm, $
G edge(f) edge("d", pi) & im(f) \
G slash ker(f) edge("ur", tilde(f))
$),
diagram(
node-stroke: 1pt,
edge-stroke: 1pt,
node((0,0), [Start], corner-radius: 2pt, extrude: (0, 3)),
edge(),
node((0,1), align(center)[
Hey, wait,\ this flowchart\ is a trap!
], shape: diamond),
edge("d,r,u,l", [Yes], label-pos: 0.1)
),
diagram($
e^- edge("rd", marks: "-<|-") & & & edge("ld") e^+ \
& edge(gamma, "wave") \
e^+ edge("ru", marks: "-|>-") & & & edge("lu") e^- \
$)
))
=== syntree & treet
语法树,像这样,可以用字符串解析的方式来写,不过个人更喜欢后一种自己写 `tree` 的方式,通过合理的缩进更加易读。
#let bx(col) = box(fill: col, width: 1em, height: 1em)
#grid(
columns:2,
gutter: 4em,
syntree(
nonterminal: (font: "Linux Biolinum"),
terminal: (fill: red),
child-spacing: 3em, // default 1em
layer-spacing: 2em, // default 2.3em
"[S [NP This] [VP [V is] [^NP a wug]]]"
),
tree("colors",
tree("warm", bx(red), bx(orange)),
tree("cool", bx(blue), bx(teal)))
)
#tab 文件夹型的树,像这样
#tree-list(root: "root")[
- 1
- 1.1
- 1.2
- 1.2.1
- 2
- 3
- 3.1
- 3.1.1
]
=== timeline
#timeline(show-grid: true, caption: "Timeline",
{
headerline(group(([*2023*], 4)), group(([*2024*], 4)))
headerline(
group(..range(4).map(n => strong("Q" + str(n + 1)))),
group(..range(4).map(n => strong("Q" + str(n + 1)))),
)
taskgroup(title: [*Research*], {
task("Research the market", (0, 2), style: (stroke: 2pt + gray))
task("Conduct user surveys", (1, 3), style: (stroke: 2pt + gray))
})
milestone(
at: 3.75,
style: (stroke: (dash: "dashed")),
align(center, [
*Conference demo*\
Dec 2023
])
)
milestone(
at: 6.5,
style: (stroke: (dash: "dashed")),
align(center, [
*App store launch*\
Aug 2024
])
)
})
=== emojis
直接使用(directly): 😆🛖🐡
内置表情(built-in emoji namespace): `#emoji.rocket` #emoji.rocket
由 #link("https://fontawesome.com/download")[Font awesome] 提供的图标(需要下载字体):#fa-github(),具体有哪些可查 #link("https://fontawesome.com/search?o=r&m=free")[Font awesome gallery]。
=== boxes(admonitions & thms)
下面是我自己写的基于 showybox 的 admonitions 块和定理块。
#note()[我自己写的admonition块]
#info(caption: "标题与字号可以自定义", caption_size: 18pt, size: 9pt)[图标、内容字号也可以传入修改]
#tab 好康且自动计数的定理块:
#theorem(title: [#text(fill: green, "This is a title")])[Now the counter increases by 1 for type `Theorem`.] <thm2>
#theorem(footer: [The showybox allowes you add footer for boxes, useful when giving some explanation.])[#lorem(10)] <thm1>
#definition[The counter will be reset after the first level of heading changes, i.e. counting within one chapter(can be changed)).]
#corollary(title: "a title", [Another body!])[Corollary counter based on theorem(can be changed).]
#lemma[#lorem(20)]
#proof[By default the `Proof` will not count itself.\ And the `Proof` box will have a square at the right bottom corner.]
#example()[By default the `example` will not count itself.]
#noindent[
@thm1, @thm2
]
=== 伪代码(算法)
lovelace包,可以用来写伪代码,body 最好用 typ,比如:
#algo(title: [caption for algorithm])[
- *input:* integers $a$ and $b$
- *output:* greatest common divisor of $a$ and $b$
+ *if* $a > b$ *then*
+ $a <- a - b$ #comment[and a comment]
+ *else*
+ $b <- b - a$ #comment[and another comment]
+ *end*
+ *return* $a$
]
算法默认情况下不启用每一章节的计数清空功能,如有需要可以自己实现。
=== wrap_content
文字图片包裹,不用自己考虑分栏了。在大多数时候是比较有效的,但有的时候不是很好看,可能还是得自己手动 grid。
#let fig = figure(
rect(fill: teal, radius: 0.5em, width: 8em),
caption: [A figure],
)
#let body = lorem(40)
#wrap-content(
align: bottom + right,
column-gutter: 2em,
fig
)[
#indent #body
]
#wrap-top-bottom(fig, fig, body)
=== 真值表
快速制作真值表,只支持 $not and or xor => <=>$,新版还支持卡诺图画法?
#truth-tbl(caption: "真值表", $A and B$, $B or A$, $A => B$, $(A => B) <=> A$, $ A xor B$)
#tab 更复杂的用法(自己填data),三个参数分别是样式函数、表头、表内容:
#truth-tbl-empty(
caption: "空真值表",
sc: (a) => {if (a) {"T"} else {"F"}},
($a and b$, $a or b$),
(false, [], true, [] , true, false)
)
=== todo(checklist)
#grid(
columns: 2,
column-gutter: 8pt,
[
- [ ] 加入更多layouts,比如前言、附录
- [x] 重构代码,使得可以根据语言切换文档类型
- [-] Jupiter
- [/] Saturn
- [>] Forwarded
- [<] Scheduling
- [?] question
- [!] important
- [\*] star
],
[
- [b] bookmark
- [I] idea
- [p] pros
- [c] cons
- [f] fire
- [k] key
- [w] win
- [u] up
- [d] down
]
)
=== Pinit
#warning()[You should add a blank line before the `#pinit-xxx` function call, otherwise it will cause misalignment.]
#v(1.5em)
$ (#pin(1)q_T^* p_T#pin(2))/(p_E#pin(3))#pin(4)p_E^*#pin(5) >= (c + q_T^* p_T^*)(1+r^*)^(2N) $
#pinit-highlight-equation-from((1, 2, 3), 3, height: 3.5em, pos: bottom, fill: rgb(0, 180, 255))[
In math equation
]
#pinit-highlight-equation-from((4, 5), 5, height: 1.5em, pos: top, fill: rgb(150, 90, 170))[
price of Terran goods, on Trantor
]
`print(pin6"hello, world"pin7)`
#pinit-highlight(6, 7)
#pinit-point-from(7)[In raw text]
#v(4em)
这玩意儿的用法略灵活,可以看它的仓库 #link("https://github.com/typst/packages/tree/main/packages/preview/pinit/0.2.0")[pinit]
=== mitex
使用 #link("https://github.com/typst/packages/tree/main/packages/preview/mitex/0.2.4")[mitex] 包渲染 LaTeX 数学环境,比如:
通过这个包,可以快速把已经在 Markdown 或 LaTeX 中的公式重复利用起来;同时,利用市面上众多的 LaTeX 公式识别工具,可以减少很多工作。
#mitex(`
\newcommand{\f}[2]{#1f(#2)}
\f\relax{x} = \int_{-\infty}^\infty
\f\hat\xi\,e^{2 \pi i \xi x}
\,d\xi
`)
#mitext(`
\iftypst
#set math.equation(numbering: "(1)", supplement: "equation")
\fi
A \textbf{strong} text, a \emph{emph} text and inline equation $x + y$. And here we set the equation numbering to be like (1), (2), ...
Block equation \eqref{eq:pythagoras}.
\begin{equation}
a^2 + b^2 = c^2 \label{eq:pythagoras}
\end{equation}
`)
#pagebreak()
#bibliography("../assets/exbib.bib", style: "ieee", title: "References")
|
https://github.com/omroali/SegmentationsFeaturingAndTracking | https://raw.githubusercontent.com/omroali/SegmentationsFeaturingAndTracking/main/Report/report.typ | typst | #import "@preview/problemst:0.1.0": pset
#show: pset.with(
class: "Computer Vision", student: "<NAME> - 28587497", title: "Assignment 1", date: datetime.today(),
)
= Image Segmentation and Detection
In this task, three balls needed to be extracted from a set of 62 images and segmented into a mask. The balls were of three types: American Football, Football, and Tennis Ball. The images represent a sequence of frames in which the balls are in motion. The task was to segment the balls from the background and calculate the Dice Similarity Score (DS) of the segmented balls against the ground truth masks of the balls.
== Ball Segmentation
In this section, I will demonstrate how the balls were segmented from the background using a series of image-processing techniques.
The steps are as follows:
#figure(
table(
columns: 4, stroke: none,
figure(image("./assets/frame-85.png", width: 90%,)),
figure(image("./assets/segmentation/steps/2-hsv.png", width: 90%,)),
figure(image("./assets/segmentation/steps/3-intensity.png", width: 90%,)),
figure(image("./assets/segmentation/steps/4-gblur.png", width: 90%,)),
[Original \ Step 1 \ Grab the original image from path],[HSV \ Step 2 \ Converted the RGB image into HSV colour space], [Intensity \ Step 3 \ Extracted the Intensity Channel],[GBlur \ Step 4 \ Apply Gaussian Blur with k=[3x3] with 2 iterations],
[],[],[],[],
figure(image("./assets/segmentation/steps/5-blur.png", width: 90%,)),
figure(image("./assets/segmentation/steps/6-adap_gaus_thresh.png", width: 90%,)),
figure(image("./assets/segmentation/steps/7-open.png", width: 90%,)),
figure(image("./assets/segmentation/steps/8-dilate.png", width: 90%,)),
[Median Blur \ Step 5 \ Apply Median Blur with k=[3x3] with 2 iterations],[Adaptive Gaussian Threshold \ Step 6 \ With Blocksize = 19 and c = 5],[Opening \ Step 7 \ to disconnecting white pixels k = (4,4) with 5 iterations],[Dilate \ Step 8 \ expanding white pixels to remove noisy black holes holes k = (3,3) with 4 iterations ],
[],[],[],[],
figure(image("./assets/segmentation/steps/9-erode.png", width: 90%,)),
figure(image("./assets/segmentation/steps/10-fill.png", width: 90%,)),
figure(image("./assets/segmentation/steps/11-dilate_and_erode.png", width: 90%,)),
figure(image("./assets/segmentation/steps/12-contour.png", width: 90%,)),
[Erode \ Step 9 \ re-shrinking the white pixels to let the balls begin to connect k = (3,3) with 2 iterations],[Fill \ Step 10 \ Fill the enclosed black areas],[Dilate and Erode \ Step 11 \ series of dilations and erosions to remove the noisy black pixels whislt maintaining \*],[Contour \ Step 12 \ find the contours of the (bitwise_not) image],
[],[],[],[],
),
)
At this stage, step 12, it can be seen that the balls have been segmented from the background. However, this is only the case for this particular sample, frame 85. In the images below, frame 100 is re-displayed to show that the segmentation still contained artifacts that needed to be removed. The two main culprits were the shadows of the American Football in the latter frames and some sections of the background that were not yet caught by the segmentation process.
In order to remove the shadow, it was found that the intensity channel of the HSV image was quite effective and computationally cheap to apply.
#figure(
table(
columns: 3, stroke: none,
figure(image("./assets/segmentation/steps/12-contour_100.png", width: 100%,)),
figure(image("./assets/segmentation/steps/13-thresh_image_100.png", width: 100%,)),
figure(image("./assets/segmentation/steps/13-thresh_image_bitwise_and.png", width: 100%,)),
[Contour \ Step 12 \ The contour mask from step 12, shown for frame 100],
[Thresholding Intensity \ Step 13.1 \ A threshold of 0.4 was applied to the intensity image from step 3 to remove shadows],
[Combine \ Step 13.2 \ The contour and the thresholded intensity image were combined],
),)
The inverse of the thresholded intensity, in step 13.1, was combined with the contour using a 'bitwise and' operation.
```python
combo = cv2.bitwise_not(cv2.bitwise_or(thresh_intensity_image, contour_image))
```
#figure(
table(
columns: 3, stroke: none,
figure(image("./assets/segmentation/steps/14-convex_hull.png", width: 100%,)),
figure(image("./assets/segmentation/steps/16-final.png", width: 100%,)),
[],
[Convex Hull \ Step 14 ],
[Circularity \ Step 15 ],
),)
Finally, a convex hull was applied to the contours to smooth sharp edges and improve their circularity. The circularity of each contour was then calculated, and only contours with a circularity greater than 0.72 were kept.
== Dice Similarity Score
The Dice Similarity Score (DS) was computed between every segmented ball image and the corresponding ground truth mask. The DS is defined as:
$ "DS" = (2*|M sect S|) / (|M| + |S|) $
where M is the ground truth mask and S is the segmented mask. The DS score ranges from 0 to 1, where 1 indicates a perfect match between the two masks.
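The score can be computed for any pair of equally sized binary masks with a few lines of NumPy (the function name is illustrative):

```python
import numpy as np

def dice_score(mask_gt, mask_seg):
    """Dice Similarity Score: 2*|M intersect S| / (|M| + |S|)."""
    m = mask_gt.astype(bool)
    s = mask_seg.astype(bool)
    intersection = np.logical_and(m, s).sum()
    total = m.sum() + s.sum()
    # Two empty masks are treated as a perfect match.
    return 1.0 if total == 0 else 2.0 * intersection / total
```

Identical masks score 1 and disjoint masks score 0.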
A bar chart of the DS has been plotted for every frame. The lowest recorded score is 0.82 for frame 93 and the highest is 0.96 for frame 78.
#figure(
image("./assets/dice_score_barchart.png", width: 100%,
)
)<seg_barchart>
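A chart like the one above can be reproduced with a short matplotlib sketch; the figure size, colour, and output file name are assumptions.

```python
import matplotlib
matplotlib.use("Agg")  # headless rendering, no display required
import matplotlib.pyplot as plt

def plot_dice_scores(frames, scores, out_path="dice_score_barchart.png"):
    """Bar chart of the per-frame Dice Similarity Scores."""
    fig, ax = plt.subplots(figsize=(10, 4))
    bars = ax.bar([str(f) for f in frames], scores, color="steelblue")
    ax.set_xlabel("Frame")
    ax.set_ylabel("Dice Similarity Score")
    ax.set_ylim(0, 1)
    fig.savefig(out_path, bbox_inches="tight")
    plt.close(fig)
    return bars
```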
Additionally, a violin plot was used to display the distribution of the DS score across all the frames. It can be seen that the data is skewed towards the higher range of the DS score, with the greatest proportion sitting around the 0.93-0.94 range.
#figure(
image("./assets/dice_score_violin.png", width: 100%,
),
)<seg_boxplot>
Below are the best and worst frames in terms of the Dice Similarity Score achieved by the segmentation process.
#figure(
table(
columns: 3, stroke: none,
[],[Best Frame 78 - DS 0.961],[],
[Segmented Image], [Ground Truth], [Difference],
figure(image("./assets/segmentation/best/frame-78.png", width: 100%,)),
figure(image("./assets/segmentation/best/frame-78_GT.png", width: 100%,)),
figure(image("./assets/segmentation/best/frame-78_ds_0.9608055770720372.jpg", width: 100%,)),
[],[],[],
[],[Worst Frame 93 - DS 0.820],[],
figure(image("./assets/segmentation/worst/frame-93.png", width: 100%,)),
figure(image("./assets/segmentation/worst/frame-93_GT.png", width: 100%,)),
figure(image("./assets/segmentation/worst/frame-93_ds_0.8195569136745607.jpg", width: 100%,)),
[Segmented Image 93], [Ground Truth 93], [DS 0.820],
),
)
== Discussion
The implementation was found to be quite good, where the DS scores had a maximum of 0.96, minimum of 0.82, mean of 0.91 and standard deviation of 0.04. However, the implementation is quite clunky and very tuned for this task.
Overall, the number of steps needed to reach the solution is quite large, at 15. The additional processing was ultimately due to a whack-a-mole situation, where refinements in one area of the task caused another area to worsen. This makes the solution temperamental and not very robust to changes in the input data; however, for very refined results this is likely to be the case for most image segmentation methodologies that do not use more complex models such as deep learning.
In the initial processing, it was found that the Intensity value provided a very good initial starting point, as opposed to using grayscale, as the intensity of the balls has a very distinct value compared to the background. The Gaussian Blur and Median Blur were very effective at removing the noise from this image, whilst maintaining the edges of the balls. The Adaptive Gaussian Threshold (AGT) was effective in detecting the edges in the image, compared to a standard threshold, because it takes into consideration a normalised local area for its thresholding calculation.
The range of morphological filters was essential in getting a cleaner mask of the balls, with just the right amount of erosion and dilation to remove the noise from the background. Different ranges of kernel sizes were used for the erosion and dilation, however all used some scale of the `MORPH_ELLIPSE` structuring element. It is important to note that in the first applications of the morphological filters, the region being eroded or dilated was the background, not the balls. This was because the image had not yet been inverted and so the balls were considered the background.
Once the morphological filtering was complete, the balls were segmented using the `opencv.findContours` function. As mentioned, this was not refined enough on its own, so the intensity channel was used to remove the shadows from the American Football, a convex hull was applied to the contours to connect sharp edges, and the circularity of each contour was computed, keeping only contours with a circularity greater than 0.72.
There are certainly some improvements that could be made, and a different approach might well be taken if this task were reattempted. A possible alternative first step could be to make use of the colour channels of the image to segment the balls. This could be done by applying a threshold to each colour channel and then combining the results. This would be a more robust approach, where the main challenge would be distinguishing the white football from the walls of the room. Additionally, considering the constrained environment, it would have been possible to create a mask of the problematic parts of the room and remove these from the image before segmentation.
= Feature Calculation
== Shape Features
In this section, I will demonstrate how a range of shape features (Solidity, Non-compactness, Circularity, Eccentricity) can be calculated from the contours of the segmented balls.
In order to get the contours of the segmented balls, I made use of the openCV 'findContours' function which returns a list of contours and a hierarchy for all the contours in the image.
```python
img = cv2.imread(image)
img = cv2.bitwise_and(cv2.cvtColor(msk, cv2.COLOR_GRAY2BGR), img)
contours, _ = cv2.findContours(msk, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
```
Each contour is assigned to the corresponding ball in order to combine the contours of the same ball into an array; this is done by exploiting the area of each ball. The details of the code can be explored as needed, however the core logic is as follows:
```python
area = cv2.contourArea(contour)
if area > 1300: # football
append_ball = BALL_LARGE
elif area > 500: # soccer_ball
append_ball = BALL_MEDIUM
else: # tennis ball
append_ball = BALL_SMALL
```
Using this, the features can be evaluated for every ball.
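A minimal, hypothetical wrapper around these area thresholds makes the dispatch explicit (the label values are invented stand-ins for the BALL_* constants, whose definitions are not shown in the original code):

```python
# Invented labels standing in for the BALL_* constants of the original code.
BALL_SMALL, BALL_MEDIUM, BALL_LARGE = "small", "medium", "large"

def classify_by_area(area):
    """Dispatch a contour to a ball class using the thresholds quoted above."""
    if area > 1300:
        return BALL_LARGE
    if area > 500:
        return BALL_MEDIUM
    return BALL_SMALL
```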
=== Circularity
Circularity is the ratio of the area of the object to the area of a circle with the same perimeter as the object. For a given contour C, the circularity is calculated by:
```python
area = cv2.contourArea(contour)
perimeter = cv2.arcLength(contour, closed=True)
circularity = (4 * math.pi * area) / (perimeter**2)
```
#figure(
table(
columns: 3, stroke: none,
figure(image("./assets/features/Circularity_histogram_American_Football.png", width: 100%,)),
figure(image("./assets/features/Circularity_histogram_Football.png", width: 100%,)),
figure(image("./assets/features/Circularity_histogram_Tennis.png", width: 100%,)),
),
)
Both the Football and the Tennis ball have a higher circularity than the American Football. This is expected, as the American Football has a more elongated shape compared to the other two balls. Visually, the football has a smaller variance in circularity than the tennis ball. This is likely due to their relative sizes: the football is larger and presents a more consistent shape in the image, so it suffers less from distortion and from quantisation effects over small pixel ranges.
=== Eccentricity
Eccentricity is the ratio of the distance between the foci of the ellipse to the major axis length. For a given contour C, the eccentricity is calculated by:
```python
ellipse = cv2.fitEllipse(contour)
a = max(ellipse[1])
b = min(ellipse[1])
eccentricity = (1 - (b**2) / (a**2)) ** 0.5
```
#figure(
table(
columns: 3, stroke: none,
figure(image("./assets/features/Eccentricity_histogram_American_Football.png", width: 100%,)),
figure(image("./assets/features/Eccentricity_histogram_Football.png", width: 100%,)),
figure(image("./assets/features/Eccentricity_histogram_Tennis.png", width: 100%,)),
),
)
The American Football has the highest eccentricity of all the balls, which is expected as it has a more elongated shape compared to the other two balls. The football and the tennis ball both have widely distributed eccentricity values.
=== Solidity
Solidity is the ratio of the area of the object to the area of the convex hull of the object. For a given contour C, the solidity is calculated by:
```python
area = cv2.contourArea(contour)
convex_hull = cv2.convexHull(contour)
convex_area = cv2.contourArea(convex_hull)
solidity = area / convex_area
```
#figure(
table(
columns: 3, stroke: none,
figure(image("./assets/features/Soliditiy_histogram_American_Football.png", width: 100%,)),
figure(image("./assets/features/Soliditiy_histogram_Football.png", width: 100%,)),
figure(image("./assets/features/Soliditiy_histogram_Tennis.png", width: 100%,)),
),
)
The solidity of the American Football is much higher and more consistent than the other two balls. This is likely due to the ball being larger in size and so a convex hull around the ball is likely to be more similar to the ball itself. This follows through as we see the football having a higher solidity than the tennis ball, and the tennis ball's solidity is much more distributed than the others.
=== Non-compactness
Non-compactness measures how far the object departs from a circle with the same perimeter; here it is computed as one minus the circularity. For a given contour C, the non-compactness is calculated by:
```python
area = cv2.contourArea(contour)
perimeter = cv2.arcLength(contour, closed=True)
non_compactness = 1 - (4 * math.pi * area) / (perimeter**2)
```
#figure(
table(
columns: 3, stroke: none,
figure(image("./assets/features/Non-Compactness_histogram_American_Football.png", width: 100%,)),
figure(image("./assets/features/Non-Compactness_histogram_Football.png", width: 100%,)),
figure(image("./assets/features/Non-Compactness_histogram_Tennis.png", width: 100%,)),
),
)
The tennis ball has the tightest distribution of non-compactness values, with the American Football having the highest non-compactness values. This is because the American Football has a more elongated shape that changes its dimensions as the perspective shifts.
== Texture Features
In this section, texture features are calculated from the segmented balls. The texture features are evaluated by calculating the normalised Grey-Level Co-occurrence Matrix (GLCM) in four orientations (0°, 45°, 90°, 135°) for every individual ball. The GLCM is a matrix that describes how often different combinations of pixel intensity values occur in an image. The GLCM is calculated for each of the colour channels (red, green, blue) and for each of the four orientations then averaged across the orientations to determine the texture features of the ball. These features include Angular Second Moment, Contrast, Correlation, and Entropy.
For each feature, one colour channel is selected to demonstrate the feature calculation. The blue channel was selected for the Angular Second Moment, the red channel for Contrast, and the green channel for Correlation.
To get the GLCM, the balls were segmented in a similar way to what was described previously in the shape features section. However, this time, the mask generated was overlayed onto the original image to get the pixel values of the balls.
#figure(
table(
columns: 3, stroke: none,
figure(image("./assets/colour_american_football_mask.png")),
figure(image("./assets/colour_football_mask.png")),
figure(image("./assets/colour_tennis_mask.png")),
))
The GLCM was calculated using the 'graycomatrix' function from the skimage library, with normalisation enabled. The first row and column were stripped from the GLCM to remove the background (zero-intensity) pixels. The texture features were then evaluated with 'graycoprops' for all four orientations and averaged to give the final texture value for the ball.
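To make the construction concrete, the accumulation of a single-orientation GLCM and the read-off of two Haralick features can be sketched in plain NumPy. This is a minimal illustration of the procedure only; the report itself uses skimage's 'graycomatrix' and 'graycoprops'.
```python
import numpy as np

def glcm_features(patch, dx=1, dy=0, levels=8):
    """Symmetric, normalised GLCM for one offset, plus ASM and contrast."""
    glcm = np.zeros((levels, levels), dtype=float)
    h, w = patch.shape
    # count how often intensity i co-occurs with intensity j at offset (dx, dy)
    for y in range(h - dy):
        for x in range(w - dx):
            i, j = patch[y, x], patch[y + dy, x + dx]
            glcm[i, j] += 1
            glcm[j, i] += 1  # symmetric matrix
    glcm /= glcm.sum()       # normalise to joint probabilities
    I, J = np.meshgrid(np.arange(levels), np.arange(levels), indexing="ij")
    asm = float((glcm ** 2).sum())                 # textural uniformity
    contrast = float(((I - J) ** 2 * glcm).sum())  # local intensity variation
    return asm, contrast

# a perfectly uniform patch collapses the GLCM to a single cell
asm, contrast = glcm_features(np.zeros((16, 16), dtype=int))
```
On a constant patch, ASM is 1 and contrast is 0; real patches spread probability mass across cells, lowering ASM and raising contrast.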
=== Applied Texture Features
Below are a set of violin plots of the texture features for the three balls. The Angular Second Moment was calculated for the blue channel, the Contrast for the red channel, and the Correlation for the green channel; the yellow plot represents the Tennis Ball, the white plot the Football, and the orange plot the American Football.
#figure(
table(
columns: 5, stroke: none,
figure(image("./assets/features/asm_data_blue_channel.png",)),
figure(image("./assets/features/asm_range_data_blue_channel.png")),
figure(image("./assets/features/contrast_data_red_channel.png")),
figure(image("./assets/features/correlation_green_channel.png")),
),)
The *ASM* is a measure of textural uniformity within an image: it is high when the image has constant or repetitive patterns, i.e. similar or identical pixel intensities throughout. The yellow tennis ball has a high ASM mean in the blue channel, suggesting its blue-channel intensities are nearly uniform, consistent with yellow having a low blue component in the RGB color model. The orange American football has a low ASM mean in the blue channel, indicating substantial variation or randomness in its blue-channel intensities; this could stem from lighting conditions, shadows, or reflections on the ball, and its large standard deviation confirms that these intensities vary widely.
The *ASM range* indicates how much the textural uniformity varies across the four GLCM orientations: a high range means uniformity differs markedly between directions. The yellow tennis ball has a low ASM range mean in the blue channel, meaning its blue-channel uniformity is consistent across orientations.
The *contrast* of an image, specifically in a color channel, refers to the intensity difference that makes an object distinguishable from its surroundings. In the red channel, objects with a large red component have high intensity. The yellow tennis ball and the white football have the highest contrast means in the red channel because both yellow and white contain a strong red component in the RGB color model.
The *correlation* of an image, specifically in a color channel, refers to the degree to which neighboring pixel values are related. In the green channel of an image, objects with a high amount of green will have a high intensity.
The orange American football has a high correlation mean in the green channel, which suggests that it has a strong relationship with neighboring pixel values in the green channel. This could be due to the orange color having less green component compared to yellow or due to different lighting conditions. The yellow tennis ball has a low correlation mean in the green channel because yellow is a combination of red and green in the RGB color model. This means that the yellow tennis ball will have a high intensity in the green channel, but the correlation is low.
In terms of using the features to classify the balls, the Shape features can be very useful, especially in classifying the American Football as it has the most distinctive shape of the three balls. In particular, the solidity of the American Football has a very high value, and low distribution, compared to the others, making it a particularly distinguishing feature. The texture features can also be useful in identifying the balls but this is colour dependent. The tennis ball tends to have the lowest correlation and the American Football the highest.
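As a sketch of how these observations could feed a classifier, a simple rule-based scheme might threshold solidity first and fall back to green-channel correlation. The threshold values below are illustrative assumptions, not values fitted on the report's data:
```python
def classify_ball(solidity: float, correlation: float) -> str:
    """Toy rule-based classifier using the two most separable features."""
    if solidity > 0.98:    # American Football: very high, tightly distributed solidity
        return "American Football"
    if correlation < 0.5:  # tennis ball tends to the lowest green-channel correlation
        return "Tennis"
    return "Football"

label = classify_ball(solidity=0.99, correlation=0.8)
```
A real classifier would learn these boundaries from the feature distributions shown above rather than hard-coding them.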
#pagebreak()
= Object Tracking
== Kalman Filter
A Kalman filter is an algorithm that uses a series of measurements observed over time, containing statistical noise and other inaccuracies, and produces estimates of unknown variables that tend to be more accurate than those based on a single measurement alone.
It can be used in the context of object tracking to estimate the position of an object based on noisy measurements of its position. The Kalman filter uses a motion model to predict the next state of the object and an observation model to update the state based on the measurements.
The Kalman filter consists of two main steps: the prediction step and the update step.
These require a few sets of parameters to be set up, such as the state vector x, the state covariance matrix P, the motion model F, the observation model H, the process noise covariance matrix Q, and the measurement noise covariance matrix R.\
```python
x = np.matrix([xO]).T
Q = kq * np.matrix([[nx, 0, 0, 0], [0, nvx, 0, 0], [0, 0, ny, 0], [0, 0, 0, nvy]])
P = Q
F = np.matrix([[1, dt, 0, 0], [0, 1, 0, 0], [0, 0, 1, dt], [0, 0, 0, 1]])
H = np.matrix([[1, 0, 0, 0], [0, 0, 1, 0]])
R = kr * np.matrix([[nu, 0], [0, nv]])
N = len(z[0])
s = np.zeros((4, N))
```
Initial Parameters:
```
nx=0.16, ny=0.36, nvx=0.16, nvy=0.36, nu=0.25, nv=0.25
xO=[0,0,0,0], z=[], kr=1, kq = 1, dt=0.05
```
In the prediction step, the filter uses the motion model to predict the next state of the object based on the previous state.
```python
def kalman_predict(x, P, F, Q):
xp = F * x
Pp = F * P * F.T + Q
    return xp, Pp
def kalman_update(x, P, H, R, z):
S = H * P * H.T + R
K = P * H.T * np.linalg.inv(S)
zp = H * x
xe = x + K * (z - zp)
Pe = P - K * H * P
return xe, Pe
```
In the update step, the filter uses the observation model to update the state based on the measurements.
```python
for i in range(N):
xp, Pp = kalman_predict(x, P, F, Q)
x, P = kalman_update(xp, Pp, H, R, z[:, i])
    val = np.array(x).flatten()
    s[:, i] = val
px = s[0, :]
py = s[2, :]
```
The plotted graph of the initial noisy coordinates [na,nb] and the estimated coordinates [x\*,y\*] can be seen below.
#figure(image("./assets/tracking/kalman_filter.png", width: 100%,))
In order to evaluate the performance of the Kalman filter, the root mean squared error (RMSE) can be evaluated to show how close the estimated trajectory is to the ground truth trajectory.
$ "RMSE" = sqrt( 1/n sum_(i=1)^(n) (x_i - hat(x)_i)^2 +(y_i - hat(y)_i)^2) $
```python
rms = np.sqrt(1/len(px) * (np.sum((x - px)**2 + (y - py)**2)))
```
The initial Kalman filter implementation used a starting point of (0,0) for the x and y coordinates and had a mean error of 9.68 and an RMSE of 26.21. This was worse than the raw noisy measurements, which had a mean error of 6.96 and an RMSE of 7.42.
The starting point was changed to the first point in the trajectory. The measurement noise parameter `nv` was set three orders of magnitude lower than `nu`, reflecting the smaller changes in the y positional values compared to the x positional values. The process noise parameters were also adjusted: `nx` was increased to 1.6 and `ny` reduced to 0.32, raising the modelled noise on the x positional data so that the filter is more flexible in its predictions and transitions more smoothly between timesteps.
The final optimised parameters were set to: \
```
kq = 1.75E-2, kr = 1.5E-3, nx = 1.6, ny = 0.32, nvx = 0.16*1.75E-2
nvy = 0.36*1.75E-2, nu = 0.25, nv = 0.25E-3
```
These adjustments resulted in a significant improvement in the filter's performance, reducing the mean error to 6.81 and the RMSE to 7.26 and bringing the prediction closer to the ground truth.
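The tuning process above can be sketched as a small grid search that re-runs the filter for candidate `kq`/`kr` values and keeps the pair with the lowest RMSE. The synthetic trajectory and candidate grids below are illustrative assumptions, not the report's data:
```python
import numpy as np

def run_kalman(z, dt, kq, kr):
    """Constant-velocity Kalman filter over 2-D position measurements z (2 x N)."""
    F = np.array([[1, dt, 0, 0], [0, 1, 0, 0], [0, 0, 1, dt], [0, 0, 0, 1]])
    H = np.array([[1.0, 0, 0, 0], [0, 0, 1.0, 0]])
    Q = kq * np.eye(4)
    R = kr * np.eye(2)
    x = np.array([z[0, 0], 0.0, z[1, 0], 0.0])  # start at the first measurement
    P = Q.copy()
    est = np.zeros_like(z)
    for i in range(z.shape[1]):
        x = F @ x                        # predict
        P = F @ P @ F.T + Q
        S = H @ P @ H.T + R              # update
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (z[:, i] - H @ x)
        P = P - K @ H @ P
        est[:, i] = x[[0, 2]]            # keep the position components
    return est

def rmse(truth, est):
    return np.sqrt(np.mean(np.sum((truth - est) ** 2, axis=0)))

rng = np.random.default_rng(1)
t = np.arange(0, 5, 0.05)
truth = np.vstack([t, np.sin(t)])                    # smooth ground truth
z = truth + rng.normal(scale=0.3, size=truth.shape)  # noisy measurements

# sweep kq and kr, keeping the pair that minimises the RMSE
best_kq, best_kr, best_rmse = min(
    ((kq, kr, rmse(truth, run_kalman(z, 0.05, kq, kr)))
     for kq in (1e-3, 1e-2, 1e-1)
     for kr in (1e-2, 1e-1, 1.0)),
    key=lambda r: r[2],
)
```
The usual sanity check, as in the report, is that the best tuned filter achieves a lower RMSE than the raw noisy measurements.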
#pagebreak()
= Appendix
== Worst DS Frames
#figure(
table(
columns: 3, stroke: none,
figure(image("./assets/segmentation/worst/frame-93.png", width: 100%,)),
figure(image("./assets/segmentation/worst/frame-93_GT.png", width: 100%,)),
figure(image("./assets/segmentation/worst/frame-93_ds_0.8195569136745607.jpg", width: 100%,)),
[Segmentation-93], [GT-93], [Difference 0.820],[],[],[],
figure(image("./assets/segmentation/worst/frame-115.png", width: 100%,)),
figure(image("./assets/segmentation/worst/frame-115_GT.png", width: 100%,)),
figure(image("./assets/segmentation/worst/frame-115_ds_0.822869077143976.jpg", width: 100%,)),
[Segmentation-115], [GT-115], [Difference 0.823],[],[],[],
figure(image("./assets/segmentation/worst/frame-94.png", width: 100%,)),
figure(image("./assets/segmentation/worst/frame-94_GT.png", width: 100%,)),
figure(image("./assets/segmentation/worst/frame-94_ds_0.8287037037037037.jpg", width: 100%,)),
[Segmentation-94], [GT-94], [Difference 0.829],[],[],[],
figure(image("./assets/segmentation/worst/frame-116.png", width: 100%,)),
figure(image("./assets/segmentation/worst/frame-116_GT.png", width: 100%,)),
figure(image("./assets/segmentation/worst/frame-116_ds_0.8313458262350937.jpg", width: 100%,)),
[Segmentation-116], [GT-116], [Difference 0.831],[],[],[],
figure(image("./assets/segmentation/worst/frame-103.png", width: 100%,)),
figure(image("./assets/segmentation/worst/frame-103_GT.png", width: 100%,)),
figure(image("./assets/segmentation/worst/frame-103_ds_0.8532547699214366.jpg", width: 100%,)),
[Segmentation-103], [GT-103], [Difference 0.853],[],[],[],
),
)
#pagebreak()
== Best DS Frames
#figure(
table(
columns: 3, stroke: none,
figure(image("./assets/segmentation/best/frame-78.png", width: 100%,)),
figure(image("./assets/segmentation/best/frame-78_GT.png", width: 100%,)),
figure(image("./assets/segmentation/best/frame-78_ds_0.9608055770720372.jpg", width: 100%,)),
[Segmentation-78], [GT-78], [Difference 0.961],[],[],[],
figure(image("./assets/segmentation/best/frame-62.png", width: 100%,)),
figure(image("./assets/segmentation/best/frame-62_GT.png", width: 100%,)),
figure(image("./assets/segmentation/best/frame-62_ds_0.9565345513481656.jpg", width: 100%,)),
[Segmentation-62], [GT-62], [Difference 0.957],[],[],[],
figure(image("./assets/segmentation/best/frame-70.png", width: 100%,)),
figure(image("./assets/segmentation/best/frame-70_GT.png", width: 100%,)),
figure(image("./assets/segmentation/best/frame-70_ds_0.9555305827580671.jpg", width: 100%,)),
[Segmentation-70], [GT-70], [Difference 0.956],[],[],[],
figure(image("./assets/segmentation/best/frame-69.png", width: 100%,)),
figure(image("./assets/segmentation/best/frame-69_GT.png", width: 100%,)),
figure(image("./assets/segmentation/best/frame-69_ds_0.9537412757669209.jpg", width: 100%,)),
[Segmentation-69], [GT-69], [Difference 0.954],[],[],[],
figure(image("./assets/segmentation/best/frame-61.png", width: 100%,)),
figure(image("./assets/segmentation/best/frame-61_GT.png", width: 100%,)),
figure(image("./assets/segmentation/best/frame-61_ds_0.9512928442573662.jpg", width: 100%,)),
[Segmentation-61], [GT-61], [Difference 0.951],[],[],[],
),
)
#pagebreak()
== All Features
#figure(image("./assets/features/asm_data.png", width: 80%,))
#figure(image("./assets/features/asm_range_data.png", width: 80%,))
#figure(image("./assets/features/contrast_data.png", width: 80%,))
#figure(image("./assets/features/correlation_data.png", width: 80%,))
#pagebreak()
== Code
=== image_segmentation.py
```python
import os
import cv2
from cv2.typing import MatLike
import numpy as np
from segmentation.utils import fill
import math
class ImageSegmentation:
def __init__(self, image_path: str, save_dir: str = None):
self.processing_data = []
self.image_path = image_path
self.image = cv2.imread(image_path)
self.processing_images = []
self.save_dir = save_dir
def log_image_processing(self, image, operation: str):
"""log the image processing"""
self.processing_data.append(operation)
self.processing_images.append(image)
def gblur(self, image, ksize=(3, 3), iterations=1):
"""apply gaussian blur to the image"""
blur = image.copy()
for _ in range(iterations):
blur = cv2.GaussianBlur(blur, ksize, cv2.BORDER_DEFAULT)
self.log_image_processing(blur, f"gblur,kernel:{ksize},iterations:{iterations}")
return blur
def mblur(self, image, ksize=3, iterations=1):
        """apply median blur to the image"""
blur = image.copy()
for _ in range(iterations):
blur = cv2.medianBlur(blur, ksize)
self.log_image_processing(
blur, f"medianblur,kernel:{ksize},iterations:{iterations}"
)
return blur
def adaptive_threshold(self, image, blockSize=15, C=3):
"""apply adaptive threshold to the image"""
image = image.copy()
adaptive_gaussian_threshold = cv2.adaptiveThreshold(
src=image,
maxValue=255,
adaptiveMethod=cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
thresholdType=cv2.THRESH_BINARY,
blockSize=blockSize,
```
```python
C=C,
)
self.log_image_processing(
adaptive_gaussian_threshold,
f"adaptive_threshold,blockSize:{blockSize},C:{C}",
)
return adaptive_gaussian_threshold
def dilate(self, image, kernel=(3, 3), iterations=1, op=cv2.MORPH_ELLIPSE):
"""apply dilation to the image"""
image = image.copy()
kernel = cv2.getStructuringElement(op, kernel)
dilation = cv2.dilate(
src=image,
kernel=kernel,
iterations=iterations,
)
self.log_image_processing(
dilation,
            f"dilate,kernel:{kernel},iterations:{iterations}",
)
return dilation
    def erode(self, image, kernel=(3, 3), iterations=1, op=cv2.MORPH_ELLIPSE):
        """apply erosion to the image"""
        image = image.copy()
        kernel = cv2.getStructuringElement(op, kernel)
        erosion = cv2.erode(
            src=image,
            kernel=kernel,
            iterations=iterations,
        )
        self.log_image_processing(
            erosion,
            f"erode,kernel:{kernel},iterations:{iterations}",
        )
        return erosion
def closing(self, image, kernel=(5, 5), iterations=10):
"""apply closing to the image"""
image = image.copy()
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, kernel)
closing = cv2.morphologyEx(
src=image,
op=cv2.MORPH_CLOSE,
kernel=kernel,
iterations=iterations,
)
self.log_image_processing(
closing,
f"closing,kernel:{kernel},iterations:{iterations}",
)
return closing
```
```python
def opening(self, image, kernel=(5, 5), iterations=1, op=cv2.MORPH_ELLIPSE):
"""apply opening to the image"""
image = image.copy()
kernel = cv2.getStructuringElement(op, kernel)
opening = cv2.morphologyEx(
src=image,
op=cv2.MORPH_OPEN,
kernel=kernel,
iterations=iterations,
)
self.log_image_processing(
opening,
f"opening,kernel:{kernel},iterations:{iterations}",
)
return opening
    def generic_filter(self, image, kernel, iterations=1, custom_msg="generic_filter"):
result = image.copy()
for i in range(iterations):
result = cv2.filter2D(result, -1, kernel)
self.log_image_processing(
result, f"{custom_msg},kernel:{kernel},iterations:{iterations}"
)
return result
def dilate_and_erode(
self, image, k_d, i_d, k_e, i_e, iterations=1, op=cv2.MORPH_ELLIPSE
):
image = image.copy()
for _ in range(iterations):
for _ in range(i_d):
image = self.dilate(image, (k_d, k_d), op=op)
for _ in range(i_e):
image = self.erode(image, (k_e, k_e), op=op)
self.log_image_processing(
image,
f"dilate_and_erode,k_d:{(k_d,k_d)},i_d={i_d},k_e:{(k_e, k_e)},i_e={i_e},iterations:{iterations}",
)
return image
def fill_image(self, image_data, name, show=True):
self.log_image_processing(
image_data[name],
f"fill_{name}",
)
image_data[f"fill_{name}"] = {
"image": fill(image_data[name]["image"].copy()),
"show": show,
}
```
```python
def find_ball_contours(
self,
image,
circ_thresh,
min_area=400,
max_area=4900,
convex_hull=False,
):
img = image.copy()
cnts = cv2.findContours(img, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
cnts = cnts[0] if len(cnts) == 2 else cnts[1]
blank_image = np.zeros(img.shape, dtype=img.dtype)
for c in cnts:
# Calculate properties
peri = cv2.arcLength(c, True)
# Douglas-Peucker algorithm
approx = cv2.approxPolyDP(c, 0.0001 * peri, True)
# applying a convex hull
if convex_hull == True:
c = cv2.convexHull(c)
# get contour area
area = cv2.contourArea(c)
if area == 0:
continue # Skip to the next iteration if area is zero
circularity = 4 * math.pi * area / (peri**2)
if (
(len(approx) > 5)
and (area > min_area and area < max_area)
and circularity > circ_thresh
):
cv2.drawContours(blank_image, [c], -1, (255), cv2.FILLED)
return blank_image
@staticmethod
def preprocessing(image):
image_data = {}
image_data["original"] = {
"image": image.image,
"show": True,
}
image_data["grayscale"] = {
"image": cv2.cvtColor(image.image, cv2.COLOR_BGRA2GRAY),
"show": False,
}
```
```python
image_data["hsv"] = {
"image": cv2.cvtColor(image.image.copy(), cv2.COLOR_BGR2HSV),
"show": False,
}
(_, _, intensity) = cv2.split(image_data["hsv"]["image"])
image_data["intensity"] = {
"image": intensity,
"show": False,
}
image_data["gblur"] = {
"image": image.gblur(
image_data["intensity"]["image"], ksize=(3, 3), iterations=2
),
"show": False,
}
image_data["blur"] = {
"image": image.mblur(
image_data["intensity"]["image"], ksize=3, iterations=2
),
"show": False,
}
intensity_threshold = cv2.threshold(
image_data["intensity"]["image"], 125, 255, cv2.THRESH_BINARY
)[1]
image_data["intensity_threshold"] = {
"image": intensity_threshold,
"show": False,
}
name = "adap_gaus_thrsh"
image_data[name] = {
"image": image.adaptive_threshold(
image=image_data["blur"]["image"].copy(),
blockSize=19,
C=5,
),
"show": False,
}
image_data["open"] = {
"image": image.opening(
image=image_data["adap_gaus_thrsh"]["image"].copy(),
kernel=(5, 5),
iterations=4,
),
"show": False,
}
image_data["dilate"] = {
"image": image.dilate(
image=image_data["open"]["image"].copy(),
kernel=(3, 3),
iterations=2,
),
"show": False,
```
```python
}
image_data["erode"] = {
"image": image.erode(
image=image_data["open"]["image"].copy(),
kernel=(3, 3),
iterations=2,
),
"show": True,
}
fill_erode = image.fill_image(image_data, "erode")
image_data["dilate_and_erode"] = {
"image": image.dilate_and_erode(
image_data["fill_erode"]["image"],
k_d=4,
i_d=5,
k_e=5,
i_e=2,
iterations=1,
),
"show": False,
}
contours = image.find_ball_contours(
cv2.bitwise_not(image_data["dilate_and_erode"]["image"]),
0.32,
)
image_data["contours"] = {
"image": contours,
"show": False,
}
image_data["im_1"] = {
"image": cv2.bitwise_not(
image_data["intensity_threshold"]["image"],
),
"show": False,
}
image_data["im_2"] = {
"image": cv2.bitwise_not(
image_data["contours"]["image"],
),
"show": False,
}
image_data["segmentation_before_recontour"] = {
"image": cv2.bitwise_not(
cv2.bitwise_or(
image_data["im_1"]["image"], image_data["im_2"]["image"]
),
),
"show": True,
}
recontours = image.find_ball_contours(
```
```python
image_data["segmentation_before_recontour"]["image"],
0.0,
min_area=100,
max_area=4900,
convex_hull=True,
)
image_data["convex_hull"] = {
"image": recontours,
"show": True,
}
image_data["opening_after_segmentation"] = {
"image": image.opening(
image_data["convex_hull"]["image"],
kernel=(3, 3),
iterations=5,
),
"show": True,
}
image_data["segmentation"] = {
"image": image.find_ball_contours(
image_data["opening_after_segmentation"]["image"],
0.72,
250,
5000,
True,
),
"show": True,
}
return image_data
```
#pagebreak()
=== utils.py
```python
import os
import glob
from natsort import natsorted
import numpy as np
import matplotlib.pyplot as plt
import cv2
def get_images_and_masks_in_path(folder_path):
images = sorted(filter(os.path.isfile, glob.glob(folder_path + "/*")))
image_list = []
mask_list = []
for file_path in images:
if "data.txt" not in file_path:
if "GT" not in file_path:
image_list.append(file_path)
else:
mask_list.append(file_path)
return natsorted(image_list), natsorted(mask_list)
# sourced and modified from https://stackoverflow.com/a/67992521
def img_is_color(img):
if len(img.shape) == 3:
# Check the color channels to see if they're all the same.
c1, c2, c3 = img[:, :, 0], img[:, :, 1], img[:, :, 2]
if (c1 == c2).all() and (c2 == c3).all():
return True
return False
from heapq import nlargest, nsmallest
def dice_score(processed_images, masks, save_path):
eval = []
score_dict = {}
for idx, image in enumerate(processed_images):
score = dice_similarity_score(image, masks[idx], save_path)
score_dict[image] = score
if len(eval) == 0 or max(eval) < score:
max_score = score
max_score_image = image
if len(eval) == 0 or min(eval) > score:
min_score = score
min_score_image = image
eval.append(score)
avg_score = sum(eval) / len(eval)
max_text = f"Max Score: {max_score} - {max_score_image}\n"
min_text = f"Min Score: {min_score} - {min_score_image}\n"
avg_text = f"Avg Score: {avg_score}\n"
```
```python
print("--- " + save_path + "\n")
print(max_text)
print(min_text)
print(avg_text)
print("---")
FiveHighest = nlargest(5, score_dict, key=score_dict.get)
FiveLowest = nsmallest(5, score_dict, key=score_dict.get)
with open(f"{save_path}/dice_score.txt", "w") as f:
f.write("---\n")
f.write(max_text)
f.write(min_text)
f.write(avg_text)
f.write("---\n")
f.write("Scores:\n")
for idx, score in enumerate(eval):
f.write(f"\t{score}\t{masks[idx]}\n")
f.write("---\n")
f.write("5 highest:\n")
for v in FiveHighest:
f.write(f"{v}, {score_dict[v]}\n")
f.write("---\n")
f.write("5 lowest:\n")
for v in FiveLowest:
f.write(f"{v}, {score_dict[v]}\n")
frame_numbers = [extract_frame_number(key) for key in score_dict.keys()]
plt.figure(figsize=(12, 3))
plt.bar(frame_numbers, score_dict.values(), color="c")
plt.title("Dice Score for Each Image Frame")
plt.xlabel("Image Frame")
    plt.ylabel("Dice Similarity Score")
plt.ylim([0.8, 1])
plt.xticks(
frame_numbers, rotation=90
) # Rotate the x-axis labels for better readability
plt.grid(True)
plt.tight_layout() # Adjust the layout for better readability
plt.savefig(f"Report/assets/dice_score_barchart.png")
# standard deviation
std_dev = np.std(eval)
print(f"Standard Deviation: {std_dev}")
mean = np.mean(eval)
print(f"Mean: {mean}")
# plot boxplot
plt.figure(figsize=(12, 3))
plt.violinplot(eval, vert=False, showmeans=True)
plt.title("Dice Score Distribution")
plt.xlabel("Dice Similarity Score")
plt.grid(True)
plt.tight_layout()
plt.text(0.83, 0.9, f'Standard Deviation: {std_dev:.2f}', transform=plt.gca().transAxes)
```
```python
plt.text(0.83, 0.80, f'Mean: {mean:.2f}', transform=plt.gca().transAxes)
plt.savefig(f"Report/assets/dice_score_violin.png")
def extract_frame_number(path):
components = path.split("/")
filename = components[-1]
parts = filename.split("-")
frame_number_part = parts[-1]
frame_number = frame_number_part.split(".")[0]
return int(frame_number)
def dice_similarity_score(seg_path, mask_path, save_path):
seg = cv2.threshold(cv2.imread(seg_path), 127, 255, cv2.THRESH_BINARY)[1]
mask = cv2.threshold(cv2.imread(mask_path), 127, 255, cv2.THRESH_BINARY)[1]
intersection = cv2.bitwise_and(seg, mask)
dice_score = 2.0 * intersection.sum() / (seg.sum() + mask.sum())
difference = cv2.bitwise_not(cv2.bitwise_or(cv2.bitwise_not(seg), mask))
cv2.imwrite(save_path + f"/difference_ds_{dice_score}.jpg", difference)
return dice_score
def show_image_list(
image_dict: dict = {},
list_cmaps=None,
grid=False,
num_cols=2,
figsize=(20, 10),
title_fontsize=12,
save_path=None,
):
list_titles, list_images = list(image_dict.keys()), list(image_dict.values())
assert isinstance(list_images, list)
assert len(list_images) > 0
assert isinstance(list_images[0], np.ndarray)
if list_titles is not None:
assert isinstance(list_titles, list)
assert len(list_images) == len(list_titles), "%d imgs != %d titles" % (
len(list_images),
len(list_titles),
)
if list_cmaps is not None:
assert isinstance(list_cmaps, list)
assert len(list_images) == len(list_cmaps), "%d imgs != %d cmaps" % (
len(list_images),
len(list_cmaps),
)
```
```python
num_images = len(list_images)
num_cols = min(num_images, num_cols)
num_rows = int(num_images / num_cols) + (1 if num_images % num_cols != 0 else 0)
# Create a grid of subplots.
fig, axes = plt.subplots(num_rows, num_cols, figsize=figsize)
# Create list of axes for easy iteration.
if isinstance(axes, np.ndarray):
list_axes = list(axes.flat)
else:
list_axes = [axes]
for i in range(num_images):
img = list_images[i]
title = list_titles[i] if list_titles is not None else "Image %d" % (i)
cmap = (
list_cmaps[i]
if list_cmaps is not None
else (None if img_is_color(img) else "gray")
)
list_axes[i].imshow(img, cmap=cmap)
list_axes[i].set_title(title, fontsize=title_fontsize)
list_axes[i].grid(grid)
list_axes[i].axis("off")
for i in range(num_images, len(list_axes)):
list_axes[i].set_visible(False)
fig.tight_layout()
if save_path is not None:
fig.savefig(save_path)
plt.close(fig)
def fill(img):
des = cv2.bitwise_not(img.copy())
contour, hier = cv2.findContours(des, cv2.RETR_CCOMP, cv2.CHAIN_APPROX_SIMPLE)
for cnt in contour:
cv2.drawContours(des, [cnt], 0, 255, -1)
return cv2.bitwise_not(des)
```
#pagebreak()
=== seg_main.py
```python
import os
import cv2
from tqdm import tqdm
from datetime import datetime
from segmentation.image_segmentation import ImageSegmentation
from segmentation.utils import (
dice_score,
get_images_and_masks_in_path,
show_image_list,
)
import multiprocessing as mp
dir_path = os.path.dirname(os.path.realpath(__file__))
path = "data/ball_frames"
def store_image_data(log_data, time: datetime):
"""method to store in a text file the image data for processing"""
check_path = os.path.exists(f"process_data/{time}/data.txt")
if not check_path:
with open(f"process_data/{time}/data.txt", "w") as f:
for log in log_data:
f.write(f"{log}\n")
def process_image(inputs: list) -> None:
"""method to process the image"""
[image_path, save, time, save_dir] = inputs
image = ImageSegmentation(image_path, save_dir)
data = image.preprocessing(image)
processed_images = {}
for key in data.keys():
if data[key]["show"] is not False:
processed_images[key] = data[key]["image"]
log_data = image.processing_data
name = os.path.splitext(os.path.basename(image_path))[0]
save_path = None
if save:
save_path = f"{save_dir}/{name}"
if not os.path.exists(save_dir):
os.mkdir(save_dir)
store_image_data(log_data, time)
if data["segmentation"]["image"] is not None:
segmentation_path = f"{save_dir}/segmentation/"
if not os.path.exists(segmentation_path):
os.mkdir(segmentation_path)
seg_path = f"{segmentation_path}{os.path.basename(image.image_path)}"
cv2.imwrite(seg_path, data["segmentation"]["image"])
```
```python
show_image_list(
image_dict=processed_images,
figsize=(10, 10),
save_path=save_path,
)
def process_all_images(images, save=False):
time = datetime.now().isoformat("_", timespec="seconds")
save_path = f"process_data/{time}"
seg_path = f"{save_path}/segmentation"
with mp.Pool() as pool:
inputs = [[image, save, time, save_path] for image in images]
list(
tqdm(
pool.imap_unordered(process_image, inputs, chunksize=4),
total=len(images),
)
)
pool.close()
pool.join()
return save_path, seg_path
def main():
images, masks = get_images_and_masks_in_path(path)
processed_image_path, seg_path = process_all_images(images, True)
processed_images, _ = get_images_and_masks_in_path(seg_path)
dice_score(processed_images, masks, seg_path)
if __name__ == "__main__":
main()
```
#pagebreak()
=== features.py
```python
import os
import re
import cv2
from cv2.gapi import bitwise_and
from matplotlib import pyplot as plt
from matplotlib.artist import get
from segmentation.utils import get_images_and_masks_in_path
import numpy as np
from segmentation.utils import fill
import math
from skimage.feature import graycomatrix, graycoprops
BALL_SMALL = "Tennis"
BALL_MEDIUM = "Football"
BALL_LARGE = "American\nFootball"
def shape_features_eval(contour):
area = cv2.contourArea(contour)
# getting non-compactness
perimeter = cv2.arcLength(contour, closed=True)
non_compactness = 1 - (4 * math.pi * area) / (perimeter**2)
# getting solidity
convex_hull = cv2.convexHull(contour)
convex_area = cv2.contourArea(convex_hull)
solidity = area / convex_area
# getting circularity
circularity = (4 * math.pi * area) / (perimeter**2)
# getting eccentricity
ellipse = cv2.fitEllipse(contour)
a = max(ellipse[1])
b = min(ellipse[1])
eccentricity = (1 - (b**2) / (a**2)) ** 0.5
return {
"non_compactness": non_compactness,
"solidity": solidity,
"circularity": circularity,
"eccentricity": eccentricity,
}
def texture_features_eval(patch):
    # Define the co-occurrence matrix parameters
distances = [1]
angles = np.radians([0, 45, 90, 135])
levels = 256
symmetric = True
```
```python
normed = True
glcm = graycomatrix(
patch, distances, angles, levels, symmetric=symmetric, normed=normed
)
filt_glcm = glcm[1:, 1:, :, :]
# Calculate the Haralick features
asm = graycoprops(filt_glcm, "ASM").flatten()
contrast = graycoprops(filt_glcm, "contrast").flatten()
correlation = graycoprops(filt_glcm, "correlation").flatten()
# Calculate the feature average and range across the 4 orientations
asm_avg = np.mean(asm)
contrast_avg = np.mean(contrast)
correlation_avg = np.mean(correlation)
asm_range = np.ptp(asm)
contrast_range = np.ptp(contrast)
correlation_range = np.ptp(correlation)
return {
"asm": asm,
"contrast": contrast,
"correlation": correlation,
"asm_avg": asm_avg,
"contrast_avg": contrast_avg,
"correlation_avg": correlation_avg,
"asm_range": asm_range,
"contrast_range": contrast_range,
"correlation_range": correlation_range,
}
def initialise_channels_features():
def initialise_channel_texture_features():
return {
"asm": [],
"contrast": [],
"correlation": [],
"asm_avg": [],
"contrast_avg": [],
"correlation_avg": [],
"asm_range": [],
"contrast_range": [],
"correlation_range": [],
}
return {
"blue": initialise_channel_texture_features(),
"green": initialise_channel_texture_features(),
"red": initialise_channel_texture_features(),
}
def initialise_shape_features():
return {
"non_compactness": [],
"solidity": [],
```
```python
"circularity": [],
"eccentricity": [],
}
def get_all_features_balls(path):
features = {
BALL_LARGE: {
"shape_features": initialise_shape_features(),
"texture_features": initialise_channels_features(),
},
BALL_MEDIUM: {
"shape_features": initialise_shape_features(),
"texture_features": initialise_channels_features(),
},
BALL_SMALL: {
"shape_features": initialise_shape_features(),
"texture_features": initialise_channels_features(),
},
}
images, masks = get_images_and_masks_in_path(path)
for idx, _ in enumerate(images):
image = images[idx]
mask = masks[idx]
msk = cv2.imread(mask, cv2.IMREAD_GRAYSCALE)
_, msk = cv2.threshold(msk, 127, 255, cv2.THRESH_BINARY)
        # overlay the binary mask over its RGB counterpart
img = cv2.imread(image)
img = cv2.bitwise_and(cv2.cvtColor(msk, cv2.COLOR_GRAY2BGR), img)
contours, _ = cv2.findContours(msk, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
for contour in contours:
area = cv2.contourArea(contour)
ball_img = np.zeros(msk.shape, dtype=np.uint8)
cv2.drawContours(ball_img, contour, -1, (255, 255, 255), -1)
fill_img = cv2.bitwise_not(fill(cv2.bitwise_not(ball_img)))
rgb_fill = cv2.bitwise_and(cv2.cvtColor(fill_img, cv2.COLOR_GRAY2BGR), img)
out = fill_img.copy()
out_colour = rgb_fill.copy()
# Now crop image to ball size
(y, x) = np.where(fill_img == 255)
(topy, topx) = (np.min(y), np.min(x))
(bottomy, bottomx) = (np.max(y), np.max(x))
padding = 3
out = out[
topy - padding : bottomy + padding, topx - padding : bottomx + padding
]
out_colour = out_colour[
```
```python
topy - padding : bottomy + padding, topx - padding : bottomx + padding
]
# getting ball features
shape_features = shape_features_eval(contour)
texture_features_colour = {
"blue": texture_features_eval(out_colour[:, :, 0]),
"green": texture_features_eval(out_colour[:, :, 1]),
"red": texture_features_eval(out_colour[:, :, 2]),
}
            # classifying the ball by contour area
            if area > 1300:  # American football
                append_ball = BALL_LARGE
            elif area > 500:  # football
                append_ball = BALL_MEDIUM
            else:  # tennis ball
                append_ball = BALL_SMALL
for key in shape_features:
features[append_ball]["shape_features"][key].append(shape_features[key])
for colour in texture_features_colour.keys():
for colour_feature in texture_features_colour[colour]:
features[append_ball]["texture_features"][colour][
colour_feature
].append(texture_features_colour[colour][colour_feature])
return features
def feature_stats(features, ball, colours=["blue", "green", "red"]):
def get_stats(array):
return {
"mean": np.mean(array),
"std": np.std(array),
"min": np.min(array),
"max": np.max(array),
}
def get_ball_shape_stats(features, ball):
feature_find = ["non_compactness", "solidity", "circularity", "eccentricity"]
return {
feature: get_stats(features[ball]["shape_features"][feature])
for feature in feature_find
}
def get_ball_texture_stats(features, ball, colour):
feature_find = ["asm_avg", "contrast_avg", "correlation_avg"]
return {
texture: get_stats(features[ball]["texture_features"][colour][texture])
for texture in feature_find
}
```
```python
stats = {
ball: {
"shape_features": get_ball_shape_stats(features, ball),
"texture_features": {
colour: get_ball_texture_stats(features, ball, colour)
for colour in colours
},
},
}
return stats
def get_histogram(data, title):
    """Plot and save one histogram per ball type.

    data: {ball: values}
    """
    for ball, values in data.items():
        plt.figure(figsize=(3, 3))
        plt.hist(values, bins=20, alpha=0.5, label=ball)
        plt.xlabel(title)
        plt.ylabel("Frequency")
        plt.legend()
        plt.tight_layout()
        plt.savefig(
            "Report/assets/features/" + title + "_histogram_" + ball.replace("\n", "_")
        )
        # plt.show()
if __name__ == "__main__":
features = get_all_features_balls("data/ball_frames")
balls = [
BALL_SMALL,
BALL_MEDIUM,
BALL_LARGE,
]
non_compactness = {
ball: features[ball]["shape_features"]["non_compactness"] for ball in balls
}
solidity = {ball: features[ball]["shape_features"]["solidity"] for ball in balls}
circularity = {
ball: features[ball]["shape_features"]["circularity"] for ball in balls
}
eccentricity = {
ball: features[ball]["shape_features"]["eccentricity"] for ball in balls
}
get_histogram(non_compactness, "Non-Compactness")
    get_histogram(solidity, "Solidity")
get_histogram(circularity, "Circularity")
```
```python
get_histogram(eccentricity, "Eccentricity")
channel_colours = ["red", "green", "blue"]
def get_ch_features(feature_name):
return {
colour: {
ball: features[ball]["texture_features"][colour][feature_name]
for ball in balls
}
for colour in channel_colours
}
def get_ch_stats(feature_data, colours=channel_colours):
return [[feature_data[colour][ball] for ball in balls] for colour in colours]
asm_avg = get_ch_features("asm_avg")
contrast_avg = get_ch_features("contrast_avg")
correlation_avg = get_ch_features("correlation_avg")
asm_range = get_ch_features("asm_range")
asm_data = get_ch_stats(asm_avg)
contrast_data = get_ch_stats(contrast_avg)
correlation_data = get_ch_stats(correlation_avg)
asm_range_data = get_ch_stats(asm_range)
asm_title = "ASM Avg"
contrast_title = "Contrast Avg"
correlation_title = "Correlation Avg"
asm_range_title = "ASM Range Avg"
plt_colours = ["yellow", "white", "orange"]
channels = ["Red Channel", "Green Channel", "Blue Channel"]
plt.figure()
def get_boxplot(data, title, colours=plt_colours, rows=3, columns=3, offset=0):
channels = ["Red Channel", "Green Channel", "Blue Channel"]
fig = plt.figure(figsize=(8,3)) # Get the Figure object
fig.suptitle(title) # Set the overall title
for i, d in enumerate(data):
ax = plt.subplot(rows, columns, i + offset + 1)
ax.set_facecolor(channel_colours[i])
ax.patch.set_alpha(0.5)
violins = plt.violinplot(
d, showmeans=True, showmedians=False, showextrema=False
)
for j, pc in enumerate(violins["bodies"]):
pc.set_facecolor(colours[j])
pc.set_edgecolor("black")
pc.set_alpha(0.2)
plt.xticks([1, 2, 3], balls, rotation=45)
plt.title(channels[i])
```
```python
def get_boxplot_specific(data, title, i, colours=plt_colours):
plt.figure(figsize=(2.5,6))
d = data[i]
violins = plt.violinplot(
d, showmeans=True, showmedians=False, showextrema=False
)
for j, pc in enumerate(violins["bodies"]):
pc.set_facecolor(colours[j])
pc.set_edgecolor("black")
pc.set_alpha(0.5)
plt.xticks([1, 2, 3], balls, rotation=45)
plt.title(title + '\n' + channels[i])
ax = plt.gca() # Get the current Axes instance
ax.set_facecolor(channel_colours[i]) # Set the background color
ax.patch.set_alpha(0.1) # Set the alpha value
columns = 3
rows = 1
get_boxplot_specific(asm_data, asm_title, 2)
plt.tight_layout()
plt.savefig("Report/assets/features/asm_data_blue_channel")
plt.close()
get_boxplot_specific(asm_range_data, asm_range_title, 2)
plt.tight_layout()
plt.savefig("Report/assets/features/asm_range_data_blue_channel")
plt.close()
get_boxplot_specific(contrast_data, contrast_title, 0)
plt.tight_layout()
plt.savefig("Report/assets/features/contrast_data_red_channel")
plt.close()
get_boxplot_specific(correlation_data, correlation_title, 1)
plt.tight_layout()
plt.savefig("Report/assets/features/correlation_green_channel")
plt.close()
```
#pagebreak()
=== tracking_main.py
```python
from matplotlib import pyplot as plt
import numpy as np
def kalman_predict(x, P, F, Q):
xp = F * x
Pp = F * P * F.T + Q
return xp, Pp
def kalman_update(x, P, H, R, z):
S = H * P * H.T + R
K = P * H.T * np.linalg.inv(S)
zp = H * x
xe = x + K * (z - zp)
Pe = P - K * H * P
return xe, Pe
def kalman_tracking(
z,
x01=0.0,
x02=0.0,
x03=0.0,
x04=0.0,
dt=0.5,
nx=0.16,
ny=0.36,
nvx=0.16,
nvy=0.36,
nu=0.25,
nv=0.25,
kq=1,
kr=1,
):
# Constant Velocity
F = np.matrix([[1, dt, 0, 0], [0, 1, 0, 0], [0, 0, 1, dt], [0, 0, 0, 1]])
# Cartesian observation model
H = np.matrix([[1, 0, 0, 0], [0, 0, 1, 0]])
# Motion Noise Model
Q = kq*np.matrix([[nx, 0, 0, 0], [0, nvx, 0, 0], [0, 0, ny, 0], [0, 0, 0, nvy]])
# Measurement Noise Model
R = kr*np.matrix([[nu, 0], [0, nv]])
x = np.matrix([x01, x02, x03, x04]).T
P = Q
N = len(z[0])
s = np.zeros((4, N))
```
```python
for i in range(N):
xp, Pp = kalman_predict(x, P, F, Q)
x, P = kalman_update(xp, Pp, H, R, z[:, i])
        # store the full state estimate [x, vx, y, vy] for this step
        s[:, i] = np.array(x).flatten()
    px = s[0, :]
    py = s[2, :]
return px, py
def rms(x, y, px, py):
return np.sqrt(1/len(px) * (np.sum((x - px)**2 + (y - py)**2)))
def mean(x, y, px, py):
return np.mean(np.sqrt((x - px)**2 + (y - py)**2))
if __name__ == "__main__":
x = np.genfromtxt("data/x.csv", delimiter=",")
y = np.genfromtxt("data/y.csv", delimiter=",")
na = np.genfromtxt("data/na.csv", delimiter=",")
nb = np.genfromtxt("data/nb.csv", delimiter=",")
z = np.stack((na, nb))
dt = 0.5
nx = 160.0
ny = 0.00036
nvx = 0.00016
nvy = 0.00036
nu = 0.00025
nv = 0.00025
px1, py1 = kalman_tracking(z=z)
nx = 0.16 * 10
ny = 0.36
nvx = 0.16 * 0.0175
nvy = 0.36 * 0.0175
nu = 0.25
nv = 0.25 * 0.001
kq = 0.0175
kr = 0.0015
px2, py2 = kalman_tracking(
nx=nx,
ny=ny,
nvx=nvx,
nvy=nvy,
nu=nu,
nv=nv,
kq=kq,
kr=kr,
z=z,
)
```
```python
plt.figure(figsize=(12, 5))
plt.plot(x, y, label='trajectory')
    plt.plot(px1, py1, label=f'initial prediction, rms={round(rms(x, y, px1, py1), 3)}')
print(f'initial rms={round(rms(x, y, px1, py1), 3)}, mean={round(mean(x, y, px1, py1), 3)}')
plt.plot(px2, py2,label=f'optimised prediction, rms={round(rms(x, y, px2, py2), 3)}')
print(f'optimised rms={round(rms(x, y, px2, py2), 3)}, mean={round(mean(x, y, px2, py2), 3)}')
plt.scatter(na, nb,marker='x',c='k',label=f'noisy data, rms={round(rms(x, y, na, nb), 3)}')
print(f'noise rms={round(rms(x, y, na, nb), 3)}, mean={round(mean(x, y, na, nb), 3)}')
plt.legend()
plt.title("Kalman Filter")
plt.savefig("Report/assets/tracking/kalman_filter.png")
# plt.show()
```
|
|
https://github.com/Walfisch115/thb-typst-template | https://raw.githubusercontent.com/Walfisch115/thb-typst-template/main/poster/main.typ | typst | #import "conf.typ": conf
#show: conf.with(
title: [
    Max. font size 32, max. 2 lines, line spacing font size +4: Camera-Based Fine Positioning of a Quadrocopter
],
author: "<NAME>",
  date: "TODAY",
supervision: [
    Supervision Prof. Dr. sc. techn. <NAME> • Technische Hochschule Brandenburg • Dr. <NAME> • Stadtwerke Brandenburg an der Havel
],
)
= Task
The goal of this work is to autonomously position a quadrocopter of the MikroKopter type as precisely as possible at an object, using an on-board camera and image-processing methods. Beforehand, a GPS-assisted coarse positioning of the MikroKopter near the target object is to take place.
= Concept
The two tasks are handled partly with on-board means and partly with the help of a ground station (a notebook located on the ground).
#figure(
image("Bild1.png"),
  caption: [Waypoint list and resulting search area]
)
During the waypoint flight, the video data from the on-board camera and the sensor data of the MikroKopter are transmitted to the ground station. Image-processing methods are used to search for the object in the video image. As soon as the object is detected, the GPS flight is stopped and control is handed over to the ground station.
Using the detected object position in the video image and the received sensor data, the ground station continuously computes the deviation from the target position (the control error). From this deviation, control commands are generated and sent back to the MikroKopter with the goal of minimizing the control error.
= Image Processing
The image processing is designed to detect a simple, single-colored object. First, the distortion caused by the wide-angle lens is removed, and compression artifacts as well as image noise are eliminated with a Gaussian filter. To find the single-colored object, a transformation into an HSI color space with the components hue, saturation, and intensity is performed. The object can be filtered out by choosing suitable intervals of the HSI components @boersch2007. Subsequently, a blob search is carried out to determine connected pixels. The center of the largest blob found is used to compute the control error.
= Control
From the determined deviation from the target position, control commands are computed for the MikroKopter that counteract the deviation. This task is handled by a proportional-integral-derivative controller (PID controller). The P controller generates control commands proportional to the magnitude of the control error. The D controller produces commands that counteract changes in the control error.
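A minimal discrete-time PID update of the kind described above could be sketched as follows (the gains and time step are illustrative assumptions, not values from this work):

```python
class PID:
    """Minimal discrete PID controller: u = Kp*e + Ki*sum(e)*dt + Kd*de/dt."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, error):
        # accumulate the I term and estimate the error derivative
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative
```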
= Test System
The position control is evaluated with the help of a ground camera that records the motion of the MikroKopter along a precisely defined flight line. With an evaluation tool, the position of the MikroKopter in the recorded video is marked frame by frame and converted into metric values.
= Results
The left diagram shows the deviation of the MikroKopter from the target position with and without a disturbance. The disturbance was generated with a pedestal fan with its oscillation function switched on. It can be seen that without a disturbance the position is held to within ±20 cm. When the fan is switched on, the deviation increases to ±40 cm. In the right diagram, the MikroKopter was flown into the area of the object to be found at two different speeds. The diagram shows that the MikroKopter decelerates and has stabilized its position above the object after about 10 seconds.
#figure(
image("Bild2.png"),
  caption: [Diagram]
)
= Conclusion
This work showed that autonomous camera-based positioning of the MikroKopter is possible. The positioning accuracy achieved with the developed system clearly exceeds the capabilities of a purely GPS-based solution.
#bibliography(
"bibliography.bib",
  title: "Sources",
) |
|
https://github.com/andrin-geiger/hslu_template_typst | https://raw.githubusercontent.com/andrin-geiger/hslu_template_typst/master/Typst_Template.typ | typst | #import "template.typ": *
#import "@preview/acrostiche:0.3.1": *
#import "@preview/i-figured:0.2.4"
#import "@preview/anti-matter:0.1.1": anti-matter, fence, set-numbering
#set heading(numbering: "1.")
#show heading: i-figured.reset-counters
#show figure: i-figured.show-figure
// Take a look at the file `template.typ` in the file panel
// to customize this template and discover how it works.
#show: project.with(
title: "",
  subtitle: "Bachelorarbeit",
authors: (
"",
""
),
betreuer: "",
experte: "",
auftraggeber: "",
logo: "img/hslu_logo.svg",
institut: "Hochschule Luzern - Departement Informatik",
// Insert your abstract after the colon, wrapped in brackets.
// Example: `abstract: [This is my abstract...]`
abstract: lorem(59),
version: "Version 1.0",
studiengang: "BSc Informatik"
)
#init-acronyms((
"Wipro": ("Wirtschaftsprojekt",),
  "BAA": ("Bachelorarbeit",),
))
#include "chapters/01_introduction.typ"
#include "chapters/02_state-of-research.typ"
#include "chapters/03_concept.typ"
#include "chapters/04_methodology.typ"
#include "chapters/05_realization.typ"
#include "chapters/06_evaluation.typ"
#include "chapters/07_conclusion.typ"
#fence()
#pagebreak()
#set heading(numbering: none)
= Glossary
#print-index(title: [])
#pagebreak()
= List of Figures
#i-figured.outline(title: [])
= List of Tables
#i-figured.outline(title: [], target-kind: table,)
#pagebreak()
#bibliography(title: [Bibliography], "Referenzen.bib", style: "american-psychological-association")
#pagebreak()
// reset the numbering for the appendix
#counter(heading).update(0)
#set heading(numbering: "A.")
#include "chapters/a_appendix.typ" |
|
https://github.com/edsomjr/RLT | https://raw.githubusercontent.com/edsomjr/RLT/main/computacao/linux_shell_scripting_tutorial_ver_1_0/resume.typ | typst | MIT License | #import "../template.typ": *
= Exercises
The exercises below should be solved with shell languages such as `bash`, `nushell`, `zsh`, `fish`, etc.
On the tenth page, an exercise is proposed -- _and erroneously solved_ -- with the aim of better understanding filters in shell scripting:
0. Suppose there is a file named `hotel.txt` with 100 lines of data. Print from line 20 through line 30 and store this result in a new file `hlist`.
#bash(shebang: false)[```
$ tail +20 < hotel.txt | head -10 > hlist
```]
#nushell(shebang: false)[```
$ open hotel.txt | lines | skip 20 | first 10 | save hlist
```]
#line(length: 100%)
The following exercises are from page 44 onwards, and must be done by writing a shell script.
1. Add the two numbers received as command-line arguments; if the two numbers are not given, show an error and the correct usage:
#nushell[```
# Add two numbers and return its sum.
def main [
a: int # First number to add.
b: int # Second number to add.
]: nothing -> int {
$a + $b
}
```]
2. Return the largest of the three arguments given on the command line; if three arguments are not given, show an error and the correct usage.
#nushell[```
# Return the greastest number between the three given.
def main [
a: int # First number to analyse.
b: int # Second number to analyse.
c: int # Third number to analyse.
]: nothing -> int {
[ $a $b $c ] | sort | last
}
```]
3. Print the sequence 5, 4, 3, 2, 1 using a `while` loop:
#bash[```
m=5;
while [ $m -gt 0 ]
do
echo "$m"
m=`expr $m - 1`
done
```]
#nushell[```
# Create the sequence 5, 4, 3, 2, 1 using the while loop.
def main []: nothing -> list<int> {
mut result = []
mut i = 5
while $i != 0 {
$result = $result ++ [ $i ]
$i = $i - 1
}
$result
}
```]
3.1. There are other ways to solve this without `while`; what would they be?
#bash[```
while ((m > 0)); do
echo "$m"
m=`expr $m - 1`
done
```]
#nushell[```
# Create the sequence 5, 4, 3, 2, 1 using the command seq.
def main []: nothing -> list<int> {
seq 5 -1 1
}
```]
3.2. And for an arbitrary maximum and minimum given on the command line, how would you solve it?
#bash[```
if [ $# -lt 2 ]
then
echo "MAX or MIN missing"
exit 1
fi
i=$1
while [ $i -ge $2 ]
do
echo "$i"
i=`expr $i - 1`
done
```]
#nushell[```
# Create a decreasing sequence starting from `max` and ending in `min`.
def main [
max: int # Start of the sequence.
min: int # End of the sequence.
]: nothing -> list<int> {
seq $max -1 $min
}
```]
3.3. Print in increasing order when the first argument is smaller than the second; otherwise print in decreasing order.
#bash[```
if [ $# -lt 2 ]
then
echo "MAX or MIN missing"
exit 1
fi
if [ $2 -gt $1 ]
then
i=$1
j=$2
while [ $i -le $j ]
do
echo "$i"
i=`expr $i + 1`
done
elif [ $1 = $2 ]
then
echo "$1"
else
i=$1
j=$2
while [ $i -ge $j ]
do
echo "$i"
i=`expr $i - 1`
done
fi
```]
4. Using the `case` keyword, perform basic math operations such as addition (`+`), subtraction (`-`), multiplication (`x`), and division (`/`).
#nushell[```
# Using case (match) keyword to perform basic math operations such as addition (+), subtraction (-), multiplication (x), and division (/).
def main [
a: int # First operand.
operator: string # Operator.
b: int # Second operand.
]: nothing -> int {
match $operator {
'+' => { $a + $b },
'-' => { $a - $b },
'x' => { $a * $b },
'/' => { $a / $b },
_ => { NaN }
}
}
```]
4.1. Instead of just three arguments, solve it for an unlimited number of arguments (there may be $1, 2, dots.c$ arguments).
5. Show the current date, time, user, and directory.
#nushell[```
# Show the current date, time, username, and current directory.
def main []: nothing -> any {
{
date: (date now | into string)
user: (whoami)
directory: $env.PWD
}
}
```]
6. Reverse the first given argument.
#nushell[```
# Reverse the given argument.
export def main [
arg: string # String to be reversed.
]: nothing -> any {
$arg | str reverse
}
```]
7. Given a number as an argument, calculate the sum of all its digits.
#nushell[```
# Given number, calculate the sum of its digits.
export def main [
arg: int # Number to sum each digit.
]: nothing -> any {
$arg | into string | split chars | into int | math sum
}
```]
8. Perform real arithmetic (numbers with decimal points) on the given arguments.
*Same solution as question four for Nushell.*
9. Calculate the following operation on the two real numbers: $5.12 + 2.5$.
#nushell(shebang: false)[```
$ 5.12 + 2.5
```]
#bash(shebang: false)[```
$ echo 5.12 + 2.5 | bc
```]
10. How to store the result of an operation on real numbers:
#nushell(shebang: false)[```
$ let a = 5.66
$ let b = 8.67
$ let c = $a + $b
```]
11. Determine whether a file exists, given its path on the command line.
#nushell[```
# Given path of a possible file, verify if it exists.
export def main [
path: string # Path to verify if file exists.
]: nothing -> bool {
($path | path type) == "file"
}
```]
12. Given an argument, check whether it contains the asterisk character (`*`); if it does not, append one to the end and print it. If it does, print: "Symbol is not required.".
#nushell[```
# Add an asterisk to the argument.
export def main [
arg: string # String to verify.
]: nothing -> string {
if ($arg | str contains '*') {
"Symbol is not required."
} else {
$arg + '*'
}
}
```]
13. Given a file path as the first argument, print its contents starting from the line given by the second argument, for the number of lines given by the third argument.
#nushell[```
# Print a slice of a file's lines.
export def main [
    path: string # Path of the file to read.
    start: int # Line number to start from.
    end: int # Number of lines to print.
]: nothing -> string {
if ($path | path type) != "file" {
error make {
msg: $'Not a file.'
label: {
text: $'($path) is not a valid value.'
span: (metadata $path).span
}
}
}
open $path | lines | skip $start | first $end
}
```]
14. Implement a program that accepts the following flags:
- `-c` to clear the screen;
- `-d` to show the list of files in the current directory;
- `-m` to start the `mc` program (Midnight Commander shell), if installed;
- `-e` to start the system's default editor.
#nushell[```
# Quick launcher of certain commands.
export def main [
--clear (-c) # Clear the screen.
--list-directory (-d) # List the files of the current directory.
--midnight-commander (-m) # Execute the Midnight Commander Shell.
--editor (-e) # Execute the default editor of the system.
]: nothing -> any {
if $clear {
clear
} else if $list_directory {
ls ./
} else if $midnight_commander {
exec mc
} else if $editor {
exec $env.EDITOR
}
}
```]
15. Write a program named `sayHello` and set it to run as soon as the shell starts. Then print any one of the following messages on your system (however you like) according to the time of day:
- "Good morning!",
- "Good afternoon!",
- "Good evening!".
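A possible `bash` sketch for this exercise (the hour thresholds are an assumption; the original tutorial does not fix them):

```bash
#!/bin/bash
# Print a greeting that depends on the current hour (00-23).
say_hello() {
    hour=$(date +%H)
    # strip a possible leading zero so the comparison is decimal
    hour=$((10#$hour))
    if [ "$hour" -lt 12 ]; then
        echo "Good morning!"
    elif [ "$hour" -lt 18 ]; then
        echo "Good afternoon!"
    else
        echo "Good evening!"
    fi
}
say_hello
```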
16. Print the message "Good Morning, World!" in bold with a blinking effect and in different colors such as red, brown, etc., using the `echo` command.
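One way to approach this with ANSI escape sequences (a sketch; exact rendering of blink and colors varies by terminal):

```bash
#!/bin/bash
# 1 = bold, 5 = blink, 31 = red; 0 resets all attributes.
greeting() {
    echo -e "\e[1;5;31mGood Morning, World!\e[0m"
}
greeting
```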
17. Implement a program that runs in the background (as a service) and continuously prints the current time in the upper-right corner of the screen, while the user can keep using the system normally.
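A rough `bash` sketch (cursor positioning via `tput` is an assumption; a real version would run the loop as a background job):

```bash
#!/bin/bash
# Print the current time near the top-right corner once.
print_clock() {
    cols=$(tput cols 2>/dev/null || echo 80)
    now=$(date +%H:%M:%S)
    # save the cursor, jump to row 0 near the right edge, print, restore
    tput sc 2>/dev/null
    tput cup 0 $((cols - ${#now})) 2>/dev/null
    printf '%s' "$now"
    tput rc 2>/dev/null
}
# as a background service one would run: while true; do print_clock; sleep 1; done &
print_clock
```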
18. Write a program that implements dialog utilities: menu items and actions according to what was selected, as follows:
#table(
  columns: (auto, auto, 1fr),
  inset: 10pt,
  align: horizon,
  table.header(
    [*Menu Item*], [*Purpose*], [*Actions for the Menu Item*],
  ),
  "Date/time",
  "View the current time and date.",
  "The date and time should be shown using an information box in a dialog utility.",
  "Calendar",
  "View the current calendar",
  "The calendar should be shown using an information box in a dialog utility.",
  "Delete",
  "Delete the selected file.",
  "First ask the user for the name of the directory where all the files are; if no name is given, assume the current directory. Then show all the files of that directory as selectable items. After the user chooses one, confirm with the user and, if confirmed, delete the file, reporting errors on failure.",
  "Exit",
  "Exit this program",
  "Exit the current program (the running script)"
)
19. Write a program that shows system information, such as:
- Currently logged-in user and their login name;
- Current shell;
- Home directory;
- Operating system type;
- Path settings;
- Current directory;
- Total number of logged-in users;
- Linux information, such as version, release number, etc.;
- All available shells;
- Mouse settings;
- Processor information, such as type, speed, etc.;
- Memory information;
- Physical storage device information, total size, cache memory, model, etc.;
- Mounted file systems.
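A partial `bash` sketch covering a few of the requested items (the command choices are assumptions; several items are distribution-specific):

```bash
#!/bin/bash
# Print a subset of the requested system information.
system_info() {
    echo "User: $(whoami)"
    echo "Shell: $SHELL"
    echo "Home: $HOME"
    echo "OS: $(uname -s)"
    echo "Current directory: $(pwd)"
    echo "Logged-in users: $(who | wc -l)"
    echo "Kernel release: $(uname -r)"
}
system_info
```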
|
https://github.com/soul667/typst_template | https://raw.githubusercontent.com/soul667/typst_template/main/readme.md | markdown | # typst模板
> This repository holds the author's own templates, used to quickly create new projects.
## Basic Templates
Contains basic and commonly used paper templates, for quickly producing everyday homework notes, papers, and so on. Settings are organized into `data.json`, so they can be modified without reading the code.
## Thesis Templates
> The thesis templates are partly based on the Peking University dissertation template |
|
https://github.com/jgm/typst-hs | https://raw.githubusercontent.com/jgm/typst-hs/main/test/typ/compute/calc-37.typ | typst | Other | // Error: 7-9 missing argument: end
#range()
|
https://github.com/1sSay/USPTU_conspects | https://raw.githubusercontent.com/1sSay/USPTU_conspects/main/src/math/SystemsOfLinearEquations.typ | typst | // Global settings and templates
#set text(14pt)
#let def(term, color: black) = {
box(stroke: color, inset: 7pt, text()[ #term ])
}
// Lecture header and date
#let subject = text()[Mathematics]
#let lecture_header = text()[Systems of Linear Algebraic Equations]
#let date = text()[07.09.2024]
// Header
#align(center, heading(level: 1)[#subject. \ #lecture_header ])
#align(center, text(weight: "thin")[#date])
#align(center, text(weight: "thin")[Notes by Iskandar Sayfullin, BPO09-01-24])
// Content
#let make = sym.supset.sq
#let see = sym.angle.spheric
= Systems of Linear Algebraic Equations
#box(stroke: black, inset: 7pt, text(weight: "black")[ A system of linear algebraic equations is a system of the following form:
$ cases(
alpha_1^1 x^1 + alpha_2^1 x ^ 2 + dots + alpha_n^1 x^n = beta^1\
alpha_1^2 x^1 + alpha_2^2 x ^ 2 + dots + alpha_n^2 x^n = beta^2\
dots\
alpha_1^m x^1 + alpha_2^m x ^ 2 + dots + alpha_n^m x^n = beta^m
) $
$alpha_j^i, x^j, beta^i in KK$, where $KK$ is a field]
)
#heading(level: 2)[Classification of SLAEs]
- $exists #text()[solution] -> #text()[the SLAE is consistent]$\
- $exists.not #text()[solution] -> #text()[the SLAE is inconsistent]$\
- $exists ! #text()[solution] -> #text()[the SLAE is determinate]$\
- $not(exists ! #text()[solution]) -> #text()[the SLAE is indeterminate]$\
- $forall i in {1, 2, ..., m} beta_i = 0 -> #text()[the SLAE is homogeneous]$\
- $exists i in {1, 2, ..., m} beta_i eq.not 0 -> #text()[the SLAE is non-homogeneous]$
#heading(level: 2)[Alternative Notations for SLAEs]
#heading(level: 3)[Matrix form:]
$
A X = B #text()[, where]\
A = mat(
alpha_1^1, alpha_2^1, ..., alpha_n^1;
alpha_1^2, alpha_2^2, ..., alpha_n^2;
dots.v, dots.v, dots.down, dots.v;
alpha_1^m, alpha_2^m, ..., alpha_n^m;
), B = mat(
beta^1; beta^2; dots.v; beta^m
), X = mat(
x^1; x^2; dots.v; x^m
)
$
#heading(level: 3)[Vector form:]
$
sum_(i=1)^n a_i x^i = b #text()[, where] \
forall i = {1, 2, ... n}, a_i = vec(alpha_i^1, alpha_i^2, dots.v, alpha_i^m) #text()[,]
b = vec(beta^1, beta^2, dots.v, beta^m)
$
\
#heading(level: 2)[Cramer's Method]
#box(stroke: black, inset: 7pt, text(fill: red)[TODO])
#heading(level: 2)[Gauss-Jordan Method]
#box(stroke: black, inset: 7pt, text(fill: red)[TODO])
#heading(level: 2)[Matrix Method]
#box(stroke: black, inset: 7pt, text(fill: red)[TODO])
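As an illustration of the matrix method (a sketch added here, not part of the original notes): for an invertible $A$, the solution of $A X = B$ is $X = A^(-1) B$, which can be computed numerically, for example with NumPy:

```python
import numpy as np

# Example system: 2x + y = 5, x + 3y = 10
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
B = np.array([5.0, 10.0])

# np.linalg.solve is numerically preferable to forming inv(A) explicitly
X = np.linalg.solve(A, B)
```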
|
|
https://github.com/TOD-theses/paper-T-RACE | https://raw.githubusercontent.com/TOD-theses/paper-T-RACE/main/thesis.typ | typst | #import "@preview/definitely-not-tuw-thesis:0.1.0": *
#import "custom-styles.typ": custom-styles
#show: thesis.with(
font: "TeX Gyre Heros",
title: (
en: "Using state changes to detect and simulate transaction order dependency in Ethereum",
de: "Simulation und Analyse von Transaktionsreihenfolgen in Ethereum",
),
subtitle: (:),
lang: "en",
author: (name: "<NAME>", student-number: 11841833),
advisor: (name: "<NAME>", pre-title: "Ao.Univ.Prof. Dr."),
assistants: ((
name: "<NAME>",
pre-title: "Ass.Prof.in Dr.in",
),),
curriculum: (en: "Software Engineering & Internet Computing", de: "Software Engineering & Internet Computing"),
keywords: ("Ethereum", "TOD", "Frontrunning"),
date: datetime.today(),
)
#show: flex-caption-styles
#show: toc-styles
#show: general-styles
#show: front-matter-styles
#show: custom-styles
#include "front-matter.typ"
#outline()
#show figure.where(kind: "algorithm"): set figure(supplement: "Algorithm")
#show: main-matter-styles
#set heading(numbering: "1.1.1.a")
#include "main.typ"
#show: back-matter-styles
#set page(header: none)
#outline(title: "List of Figures", target: figure.where(kind: image))
#outline(title: "List of Tables", target: figure.where(kind: table))
#outline(title: "List of Algorithms", target: figure.where(kind: "algorithm"))
#bibliography("refs.bib")
#show: appendix-styles
#include "appendix.typ" |
|
https://github.com/Harry-Chen/kouhu | https://raw.githubusercontent.com/Harry-Chen/kouhu/master/doc/manual.typ | typst | MIT License | #import "@preview/mantys:0.1.4": *
#import "@preview/tidy:0.2.0"
#import "/lib.typ": *
#show link: it => {
set text(fill: blue)
it
}
#show raw: it => {
set text(size: 10pt, font: "JetBrains Mono")
it
}
#set page(paper: "a4")
#let infos = toml("../typst.toml")
#show: mantys.with(
..infos,
title: "kǒuhú",
subtitle: "口胡",
date: datetime.today(),
abstract: [
    `kouhu` (口胡 in Chinese) is a Typst package aimed at generating Chinese lipsum text (a.k.a. blind text). It provides a set of built-in text samples containing both Simplified and Traditional Chinese characters. You can choose from generated fake text, classic or modern Chinese literature, or specify your own text.
    The idea of `kouhu` is inspired by the #link("https://ctan.org/pkg/zhlipsum")[`zhlipsum`] LaTeX package and the #link("https://typst.app/universe/package/roremu")[`roremu`] Typst package.
    All built-in text samples are excerpted from `zhlipsum` (without non-UTF-8 paragraphs). Please refer to #link("http://mirrors.ctan.org/macros/latex/contrib/zhlipsum/zhlipsum-en.pdf")[its documentation] for a detailed description.
#text(font: "Noto Serif CJK SC", size: 10pt, (
for k in builtin-text-list().keys() {[
+ #raw(k): #kouhu(builtin-text: k, indicies: (1,), length: 70)
#parbreak()
]}
))
],
examples-scope: (kouhu: kouhu, builtin-text-list: builtin-text-list),
titlepage: titlepage.with(toc: false),
index: none
)
= Usage
#tidy-module(
read("../lib.typ"),
name: "Kouhu",
show-outline: false,
include-examples-scope: true,
extract-headings: 1,
tidy: tidy
)
|
https://github.com/DashieTM/nix-introduction | https://raw.githubusercontent.com/DashieTM/nix-introduction/main/topics/homemanager.typ | typst | #import "../utils.typ": *
#polylux-slide[
== Home-Manager
#v(15pt)
  - extension to NixOS/nix-darwin
  - user-level variant of NixOS
- available on other distributions
- configures user configuration files
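A minimal illustrative home-manager module might look like this (the user values are made up; the option names follow home-manager's option set):

```nix
{ config, pkgs, ... }: {
  home.username = "alice";
  home.homeDirectory = "/home/alice";
  # manage a user-level tool and its dotfile declaratively
  programs.git = {
    enable = true;
    userName = "Alice";
  };
  home.stateVersion = "24.05";
}
```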
#pdfpc.speaker-note(```md
```)
]
|
|
https://github.com/Sheffield-iGEM/redesign-guide | https://raw.githubusercontent.com/Sheffield-iGEM/redesign-guide/main/main.typ | typst | #import "@preview/ilm:1.2.1": *
#set text(lang: "en")
#show: ilm.with(
title: [A Practical Guide to Protein Redesign],
author: "<NAME>, 2024",
date: datetime(year: 2024, month: 08, day: 15),
abstract: [
'Ilm (Urdu: #text(lang: "ur", font: ("Noto Nastaliq Urdu", "Noto Naskh Arabic"), size: 0.8em)[عِلْم]) is the Urdu term for knowledge. In its general usage, 'ilm may refer to knowledge of any specific thing or any form of "learning". Subsequently, the term came to be used to refer to various categories of "sciences", especially when used in its plural form ('ulum).
],
preface: [
#align(center + horizon)[
This template is made possible by <NAME>'s\ excellent #link("https://practicaltypography.com")[_Practical Typography_] book.
Thank you <NAME>!
]
],
bibliography: bibliography("refs.bib"),
)
= An Excursion Into _de novo_ Protein Design
== External links
'Ilm adds a small maroon circle to external (outgoing) links #link("https://github.com/talal/ilm")[like so].
This acts as a hint for the reader so that they know that a specific text is a hyperlink. This is far better than #underline[underlining a hyperlink] or making it a #text(fill: blue)[different color]. Don't you agree?
If you want to disable this behavior then you can do so by setting the concerning option to `false`:
```typst
#show: ilm.with(
external-link-circle: false
)
```
== Blockquotes
'Ilm also exports a `blockquote` function which can be used to create blockquotes. The function has one argument: `body` of the type content and can be used like so:
```typst
#blockquote[
A wizard is never late, <NAME>gins. Nor is he early. He arrives precisely when he means to.
]
```
The above code will render the following:
#blockquote[A wizard is never late, <NAME>. Nor is he early. He arrives precisely when he means to. -- Gandalf]
== Small- and all caps
'Ilm also exports functions for styling text in small caps and uppercase, namely: `smallcaps` and `upper` respectively.
These functions will overwrite the standard #link("https://typst.app/docs/reference/text/smallcaps/")[`smallcaps`] and #link("https://typst.app/docs/reference/text/upper/")[`upper`] functions that Typst itself provides. This behavior is intentional as the functions that 'Ilm exports fit in better with the rest of the template's styling.
Here is how Typst's own #std-smallcaps[smallcaps] and #std-upper[upper] look compared to the 'Ilm's variants:\
#hide[Here is how Typst's own ] #smallcaps[smallcaps] and #upper[upper]
They both look similar, the only difference being that 'Ilm uses more spacing between individual characters.
If you prefer Typst's default spacing then you can still use it by prefixing `std-` to the functions: ```typst #std-smallcaps()``` and ```typst #std-upper()```.
== Footer
If a page does not begin with a chapter then we display the chapter's name, to which the current section belongs, in the footer. #link(<demo>)[Click here] to go to @demo down below and see the footer in action.
= Figures
The template also displays an index of figures (images), tables, and listings (code blocks) at the end of the document, if you set the respective options to `true`:
```typst
#show: ilm.with(
figure-index: (enabled: true),
table-index: (enabled: true),
listing-index: (enabled: true)
)
```
== Tables
In order to increase the focus on table content, we minimize the table's borders by using thin gray lines instead of thick black ones. Additionally, we use small caps for the header row. Take a look at the table below:
#let unit(u) = math.display(math.upright(u))
#let si-table = table(
columns: 3,
table.header[Quantity][Symbol][Unit],
[length], [$l$], [#unit("m")],
[mass], [$m$], [#unit("kg")],
[time], [$t$], [#unit("s")],
[electric current], [$I$], [#unit("A")],
[temperature], [$T$], [#unit("K")],
[amount of substance], [$n$], [#unit("mol")],
[luminous intensity], [$I_v$], [#unit("cd")],
)
#figure(caption: ['Ilm's styling], si-table)
For comparison, this is how the same table would look with Typst's default styling:
#[
#set table(inset: 5pt, stroke: 1pt + black)
#show table.cell.where(y: 0): it => {
v(0.5em)
h(0.5em) + it.body.text + h(0.5em)
v(0.5em)
}
#figure(caption: [Typst's default styling], si-table)
]
= Code
== Custom font
'Ilm uses the _Iosevka_@wikipedia_iosevka font for raw text instead of the default _Fira Mono_. If Iosevka is not installed then the template will fall back to Fira Mono.
#let snip(cap) = figure(caption: cap)[
```rust
fn main() {
let user = ("Adrian", 38);
println!("User {} is {} years old", user.0, user.1);
// tuples within tuples
let employee = (("Adrian", 38), "die Mobiliar");
    println!("User {} is {} years old and works for {}", employee.0.0, employee.0.1, employee.1);
}
```
]
#show raw: set text(font: "Fira Mono")
For comparison, here is what `code` in Fira Mono looks like:
#snip("Code snippet typeset in Fira Mono font")
#show raw: set text(font: ("Iosevka", "Fira Mono"))
and here is how the same `code` looks in Iosevka:
#snip("Code snippet typeset in Iosevka font")
If both code snippets look the same, it means that Iosevka is not installed on your computer.
= Footer Demo
== Subheading
#lorem(120)
#lorem(55)
#lorem(120)
#pagebreak()
== Subheading Two <demo>
#lorem(55)
// Source repo: https://github.com/polarkac/MTG-Stories
// File: stories/002 - Return to Ravnica/011_The Seven Bells, Part 2.typ (typst)
#show: doc => conf(
"The Seven Bells, Part 2",
set_name: "Return to Ravnica",
story_date: datetime(day: 14, month: 11, year: 2012),
author: "<NAME>",
doc
)
== The Journal of Bori Andon, Day Ten
Wrong, wrong, wrong. That approach proved fruitless, and the bells did not chime. To complicate matters, the man with black hair was watching me again. He’s probably reporting my failure to the council and they’re all having a good laugh at my expense. Also, birds keep flying overhead. I’ve heard that pigeons can be trained as spies. I see now that wind was too prosaic of a solution. I must convert the energy of people’s thoughts and propel it to the Great Bell Towers, thereby ringing the bells. How much is the weight of thought? How much energy does a person’s brain emit?
I have recalibrated the imager again. This device—the Hypno-Oblatrix—will collect and condense all thoughts in the vicinity of the bell towers. I’ll then transmit the converted energy of those thoughts directly to the bell towers, thereby ringing all the bells simultaneously. Those affected will feel a tingling in the forebrain… oh, scratch that. I have no idea what they’ll experience.
#v(0.35em)
#line(length: 100%, stroke: rgb(90%, 90%, 90%))
#v(0.35em)
Watcher’s Report
They haven’t even finished cleaning up the glass yet, and he’s back. He’s carrying another device. I can tell it’s new because the hat is even taller. Judging by the faces of the passersby, it’s emitting foul-smelling fumes. Request permission to leave post. I’m going down below to watch from a safe distance.
#v(0.35em)
#line(length: 100%, stroke: rgb(90%, 90%, 90%))
#v(0.35em)
== Azorius Incident Report
A mass mind attack has just been launched on the Kalnika Quarter. There have been widespread reports of memory loss, disorientation, and bleeding out of ears. Suspect or suspects still at large.
#v(0.35em)
#line(length: 100%, stroke: rgb(90%, 90%, 90%))
#v(0.35em)
== The Journal of Bori Andon, Day Ten, Addendum
The Hypno-Oblatrix failed to ring the bells. But something amazing happened instead. I activated the device, and people fell to the ground like ragdolls. As planned, I transmitted the collected energy toward the bell towers. #emph[Unexpectedly] , I saw hundreds of glowing lines crisscrossing the air like metaphysical threads. Because of my location, I could see that each line intersected with the great bell towers. What could they be? Borderlines? Conduits? Sensors? What is this madness?
I’m terrified of the implications. I’ve uncovered some kind of secret Æther channels running throughout the quarter. The bell towers are nexus points. Maybe Niv-Mizzet tasked me with the theorem not as a punishment, but to uncover this baffling secret. I was meant to find this, but what did I find?
#v(0.35em)
#line(length: 100%, stroke: rgb(90%, 90%, 90%))
#v(0.35em)
#figure(image("011_The Seven Bells, Part 2/02.jpg", width: 100%), caption: [Izzet Charm | Art by Zoltan Boros], supplement: none, numbering: none)
Watcher’s Report
Well, I’m shocked. Dumbfounded. I don’t know what to say. That quack did it. He didn’t ring the bells, but he found our secret conduits! He revealed our telepathic network, including the use of the great bell-towers as the transmitter nodes. We need to get our hands on that device and kill him quickly. The entire Kalnika operation is threatened.
#v(0.35em)
#line(length: 100%, stroke: rgb(90%, 90%, 90%))
#v(0.35em)
== The Journal of Bori Andon, Day Twelve
I write this in a dingy lodger’s house at the edge of the quarter. I returned to find intruders in my lab. They captured me in the foyer and continued ransacking my equipment. They wore masks and thick gloves and smelled musty. One had my Hypno-Oblatrix and kept demanding information on how it worked.
They meant to kidnap me. Maybe kill me. But I was saved by a pair of Boros investigators who showed up at my door, and the two sides got into a fight. In the midst of the chaos and fire, I grabbed my journal and my spare imager and scurried out the secret door.
I have set myself up here in this pitiful room meant for thieves and miscreants. I’ve already resourced the components I need to recalibrate my device. Now that my enemies have showed their hand, I realize what I was meant to do all along. Niv-Mizzet didn’t just want me to find the channels. He wanted me to destroy them.
#v(0.35em)
#line(length: 100%, stroke: rgb(90%, 90%, 90%))
#v(0.35em)
#figure(image("011_The Seven Bells, Part 2/04.jpg", width: 100%), caption: [Perilous Shadow | Art by <NAME>], supplement: none, numbering: none)
Watcher’s Report
We were in Andon’s flat, but some Boros investigating the mass mind attack arrived unexpectedly. While we were silencing them, Andon slipped out a secret door. Who knows what he’s planning to do? All operatives must focus on finding him. And will somebody please get an enforcer? This is getting out of control.
#v(0.35em)
#line(length: 100%, stroke: rgb(90%, 90%, 90%))
#v(0.35em)
#figure(image("011_The Seven Bells, Part 2/06.jpg", width: 100%), caption: [Art by <NAME>], supplement: none, numbering: none)
== The Journal of Bori Andon, Day Thirteen
I write this from a filthy pipe near an entrance to the undercity. My enemies are everywhere. They mean to kill me. I fear they are in my head twisting my every thought. I must write down the events of the past few hours so I will never forget. I can’t allow myself to be manipulated. The council must hear this news. Yes, <NAME>ay himself. I must warn them that we’re all in danger.
I completed a new device designed to destroy the Æther channels—to scourge them from the world with fire. In the moments after I activated the device, the channels glowed, flickered, and then exploded. Threads of fire zinged through the streets. Unexpectedly, the sky darkened with storm clouds. The earth trembled under my boots. In the distance, the glorious Dome of the Black Dove crumbled like sand. The pain inside my brain was intolerable, so I turned and fled.
As I ran, something slammed into my back and I fell to the ground. I was dragged to my feet again by a man with yellow eyes. He wore a single spiked glove, and his voice sounded hollow and unreal. He said:
You’re causing the Dimir a great deal of trouble.
The man dragged me through a red door and into a dark passage. I attacked him there, but he was strong beyond all expectation. He lifted me off the ground by my throat and slammed me against the wall. When he spoke, I could see my enemy’s true nature—a vampire.
And then he invaded my mind. He told me to listen to the bells. The bells would give me my orders. But before he could completely infiltrate my consciousness, I flipped the switch on the imager strapped to my chest and blasted sonus-ripples at the monster. Startled, he reeled backwards and let go of my throat. I fled the burning streets, where I managed to elude him in the chaos. I will spend the night in an ashbin on the highest roof I can find. I pray that I am high enough to avoid the bells. I must avoid the bells. I must avoid the bells.
#v(0.35em)
#line(length: 100%, stroke: rgb(90%, 90%, 90%))
#v(0.35em)
Watcher’s Report
Andon escaped the enforcer. But it’s for the best, because the enforcer might have eaten him. Now that news about the capabilities of Andon’s device has reached the highest levels, our orders have changed again. Do not kill! I repeat, do not kill! We are to subdue and capture. Bori Andon is going to become one of us. I enjoy the thought of a sleeper agent in the highest echelons of the Izzet. The crazy ones are always the most easily manipulated.
#v(0.35em)
#line(length: 100%, stroke: rgb(90%, 90%, 90%))
#v(0.35em)
#figure(image("011_The Seven Bells, Part 2/08.jpg", width: 100%), caption: [Supreme Verdict | Art by <NAME>ley], supplement: none, numbering: none)
== The Journal of Bori Andon, Day Thirteen
I write this from inside an ashbin on the roof of a tenement building. Forgive my poor penmanship, for the light coming through the slats is dim and the air is clogged with dust.
They’re following me. Their shadows are codes to one another. There’s a sentry mocking me on every rooftop. But they can’t make their move because my ears are filled with pebbles and glue! They cannot lure me with those damnable bells. I must make it to Nivix. I must find the dragon and tell him that I’ve been corrupted. But the streets no longer make sense. I keep finding myself on a bridge that I can’t cross. No matter which way I turn, I find myself facing the other direction and walking the way that I’ve just come. It’s the edge of an hour. Time for the bells to start ringing. If I hear the chimes, my enemies will take over my mind, and I will lose myself forever.
#v(0.35em)
#line(length: 100%, stroke: rgb(90%, 90%, 90%))
#v(0.35em)
== Azorius Incident Report
Every bell in the vicinity of the Kalnika Quarter rang at once. There were widespread complaints of hearing loss, headaches, and disorientation. The bells rang with such force that several were sundered in half. Three of the great bell towers were damaged by the reverberations. Under Article 90903.35b, this is classified as an act of arcane maliciousness, not a random event.
#v(0.35em)
#line(length: 100%, stroke: rgb(90%, 90%, 90%))
#v(0.35em)
== The Journal of Bori Andon, Day One
I have begun a new journal. I searched my lab thoroughly, and while everything is in order, my old journal is nowhere to be found.
Today at the council, I was greeted by a standing ovation. The seven great bells had rung at once, thereby solving the Theorem of Simultaneous Discordance. They wanted to know how I did it, but I demurred. It wasn’t me, I insisted. I can tell they think I’m just being modest. Maybe I did ring them, but I honestly can’t recall.
A few moments ago, a courier knocked on my door. He brought a package sent by an anonymous person. It’s a present, of course, because today is my birthday. Inside was a delicate silver bell with the most delightful tone. It now hangs in my foyer. I hope that one day I get to thank my secret admirer.
// Source repo: https://github.com/lyzynec/orr-go-brr
// File: 10/main.typ (typst)
#knowledge[
#question(name: [Explain and demonstrate how we can describe _uncertainty in
a linear model of a dynamical system in frequency domain_ (using a
_weighting filter_ $W$ and the $Delta$ term). Discuss the _additive_ and
_multiplicative_ (input and output) models of uncertainty and their
inverses.])[
    The most basic model of uncertainty is the block $Delta(s)$ with
    $
      max_omega |Delta(j omega)| <= 1, Delta "stable"
    $
    i.e., we can picture $Delta(s)$ as an arbitrary stable transfer function
    whose magnitude on the imaginary axis is at most one.

    Since uncertainty typically grows with frequency, we introduce a
    frequency-dependent weight $w(omega)$, approximated by a stable transfer
    function $W(s)$ with $W(j omega) approx w(omega)$.
    The transfer function of the _uncertainty_ is then
$
W(s) Delta(s), max_omega |Delta(j omega)| <= 1, Delta "stable"
$
#part(name: [Additive Model])[
Described as
$
G(s) = underbrace(G_0 (s), "nominal model")
+ underbrace(W(s) Delta(s), "additive uncertainty")
$
with
$
|G(j omega) - G_0 (j omega)| < |W(j omega)|
$
#align(center)[#image("includes/add.png", width: 60%)]
]
#part(name: [Multiplicative Model])[
Described as
$
G(s) = underbrace((1 + W(s) Delta(s)), "multiplicative uncertainty")
G_0 (s)
$
with
$
(|G(j omega) - G_0 (j omega)|)/(|G_0 (j omega|)) < |W(j omega)|
$
#align(center)[#image("includes/mul.png", width: 60%)]
]
#part(name: [Inverse Additive Model])[
You get the picture
#align(center)[#image("includes/inv_add.png", width: 60%)]
]
#part(name: [Inverse Multiplicative Model])[
You get the picture
#align(center)[#image("includes/inv_mul.png", width: 60%)]
]
]
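    A quick one-frequency check of the multiplicative bound (a Python/NumPy sketch, not part of these notes; the complex values chosen for $G_0 (j omega)$ and $W(j omega)$ are arbitrary): since $G = (1 + W Delta) G_0$, the relative error $|G - G_0| slash |G_0| = |W Delta|$ can never exceed $|W|$.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    G0 = 2.0 - 1.0j   # nominal plant evaluated at one frequency (assumed value)
    W = 0.4 + 0.1j    # uncertainty weight at the same frequency (assumed value)

    # sample perturbations Delta with |Delta| <= 1
    deltas = rng.uniform(-1, 1, 400) + 1j * rng.uniform(-1, 1, 400)
    deltas = deltas[np.abs(deltas) <= 1.0]

    # multiplicative model: G = (1 + W*Delta) * G0
    G = (1 + W * deltas) * G0
    rel_err = np.abs(G - G0) / np.abs(G0)
    print(rel_err.max(), abs(W))  # the first number stays below the second
    ```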
  #question(name: [Explain and demonstrate the usage of _Linear Fractional
Transformation_ (LFT) as a general (unifying) framework for modeling
uncertainty.])[
For
$
P, N in RR^((n_1 + n_2) times (m_1 + m_2))
$
where
$
P = mat(
P_(1 1), P_(1 2);
P_(2 1), P_(2 2)
),
N = mat(
N_(1 1), N_(1 2);
N_(2 1), N_(2 2)
)
$
And matricies
$
Delta &in RR^(m_1 times n_1)\
K &in RR^(m_2 times n_2)
$
    The *lower* and *upper* LFTs are
$
cal(F)_l (P, K)
&= P_(1 1) + P_(1 2) K (I - P_(2 2) K)^(-1) P_(2 1)\
cal(F)_u (N, Delta)
&= N_(2 2) + N_(2 1) Delta (I - N_(1 1) Delta)^(-1) N_(1 2)
$
#grid(columns: 2, column-gutter: 3em,
image("includes/lft_pk.png"),
image("includes/lft_dn.png")
)
    As an example, the input-multiplicative uncertainty
#align(center)[#image("includes/lft_input_mul.png", width: 60%)]
    is packed into
$
P = mat(
0, W;
G_0, G_0
)
$
]
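  A numerical sanity check of this packing (a Python/NumPy sketch, not part of the notes; the scalar values $G_0 = 2$, $W = 0.5$, $Delta = 0.3$ are arbitrary): applying the upper LFT to the $P$ above recovers the input-multiplicative model $(1 + W Delta) G_0$.

  ```python
  import numpy as np

  def lft_upper(N11, N12, N21, N22, Delta):
      # F_u(N, Delta) = N22 + N21 Delta (I - N11 Delta)^(-1) N12
      I = np.eye(N11.shape[0])
      return N22 + N21 @ Delta @ np.linalg.inv(I - N11 @ Delta) @ N12

  # input-multiplicative uncertainty packed as P = [[0, W], [G0, G0]]
  G0, W, Delta = 2.0, 0.5, 0.3
  P11, P12, P21, P22 = (np.array([[v]]) for v in (0.0, W, G0, G0))
  G = lft_upper(P11, P12, P21, P22, np.array([[Delta]]))
  print(G[0, 0])  # matches (1 + W*Delta) * G0
  ```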
#question(name: [Define the $cal(H)_oo$ norm of a system. Both SISO and
MIMO. In the MIMO case this is related to SVD decomposition of a matrix,
  therefore be ready to explain the decomposition itself (and, for example,
to explain the difference between SVD and eigen decomposition).])[
#align(center)[#grid(columns: 2,
row-gutter: 10pt, column-gutter: 10pt, align: left,
[SISO], $norm(bold(G))_oo = sup_(omega in RR) |bold(G) (j omega)|$,
[MIMO], $norm(bold(G))_oo = sup_(omega in RR)
overline(sigma) (bold(G) (j omega))$
)]
where $overline(sigma)$ is the largest singular value (from SVD).
In Matlab callable as
```matlab
norm(G, Inf)
```
    For a symmetric positive definite matrix, the singular values coincide with
    the eigenvalues. In general, the singular values of a matrix $X$ are the
    square roots of the eigenvalues of $X^* X$.
]
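  A quick numerical illustration of the last point (a Python/NumPy sketch, not part of the notes): the singular values of a random $X$ match the square roots of the eigenvalues of $X^* X$.

  ```python
  import numpy as np

  rng = np.random.default_rng(0)
  X = rng.standard_normal((4, 3))

  # singular values from the SVD of X (returned in descending order)
  sv = np.linalg.svd(X, compute_uv=False)

  # square roots of the eigenvalues of X^T X, sorted to match
  ev = np.sqrt(np.sort(np.linalg.eigvalsh(X.T @ X))[::-1])

  print(sv)
  print(ev)  # the two printed vectors agree
  ```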
#question(name: [State the conditions of robust (closed-loop) stability in
presence of additive or multiplicative uncertainty in the system. State them
both in the form of a frequency--wise inequality but also in the form
utilizing the $cal(H)_oo$ norm of a closed-loop transfer function.])[
    For multiplicative uncertainty
    $
      |W(j omega) L_0 (j omega)| &< |1 + L_0 (j omega)|, forall omega\
      norm(W T)_oo &< 1
    $
    and for additive uncertainty $norm(W K S)_oo < 1$.
]
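  The two forms of the multiplicative-uncertainty condition can be compared numerically on a frequency grid (a Python/NumPy sketch; the choices of $L_0$ and $W$ below are arbitrary illustrations, not taken from these notes):

  ```python
  import numpy as np

  w = np.logspace(-2, 3, 2000)   # frequency grid (rad/s)
  s = 1j * w
  L0 = 1.0 / (s * (s + 1.0))     # nominal loop transfer function (assumed example)
  W = 0.5 * s / (s + 10.0)       # multiplicative uncertainty weight (assumed)
  T = L0 / (1.0 + L0)            # complementary sensitivity

  # frequency-wise condition: |W L0| < |1 + L0| for all sampled frequencies
  freq_wise = bool(np.all(np.abs(W * L0) < np.abs(1.0 + L0)))
  # H-infinity condition: sup_w |W T| < 1 (approximated on the grid)
  hinf_bound = float(np.max(np.abs(W * T)))
  print(freq_wise, hinf_bound)
  ```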
#question(name: [State the conditions of robust (closed-loop) stability
within the LFT framework (hint: _small gain theorem_).])[
For
$
cal(F)_u (N, Delta) = N_(2 2) + N_(2 1) Delta
(I - N_(1 1) Delta)^(-1) N_(1 2)
$
where
$
M := N_(1 1)
$
$
|1 - M(j omega) Delta (j omega)| > 0, forall omega, forall |Delta| <= 1
$
    By the small gain theorem, the condition is
$
1 - |M(j omega)| > 0, forall omega\
norm(M)_oo < 1
$
]
#question(name: [State the conditions of robust performance in presence of
multiplicative uncertainty and discuss how it could be approximated using
the bound on the $cal(H)_oo$ norm of some related closed--loop transfer
function.])[
#align(center)[#grid(columns: 2,
row-gutter: 10pt, column-gutter: 10pt, align: left, inset: 5pt,
[nominal performance], $norm(W_P S)_oo < 1$,
[robust performance], $
        &|W_P S| + |W T| < 1, forall omega\
&norm(mat(W_P S; W T))_oo = max_omega sqrt(|W_P S|^2 + |W T|^2) < 1
$
)]
]
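  Note that the stacked $cal(H)_oo$ condition approximates the exact frequency-wise one: for any magnitudes $a = |W_P S|$ and $b = |W T|$ we have $sqrt(a^2 + b^2) <= a + b <= sqrt(2) sqrt(a^2 + b^2)$, so the two tests agree to within a factor $sqrt(2)$. A small numerical check (a Python/NumPy sketch with sampled magnitudes, not a real design):

  ```python
  import numpy as np

  rng = np.random.default_rng(2)
  a = rng.uniform(0, 0.6, 500)   # sampled |W_P S| over a frequency grid
  b = rng.uniform(0, 0.6, 500)   # sampled |W T| over the same grid

  exact = np.max(a + b)                   # frequency-wise robust-performance measure
  stacked = np.max(np.sqrt(a**2 + b**2))  # stacked H-infinity norm
  print(exact, stacked)  # stacked <= exact <= sqrt(2) * stacked
  ```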
]
#skills[
#question(name: [Create a model of an uncertain system in frequency domain,
that is, you should provide a (model of a) nominal system and a weighting
filter together with the specification which kind of uncertainty structure
they are representing (additive, input/output multiplicative, inverse...).
Design some decent controller using the techniques that you already master
and check if the resulting feedback interconnection is robustly stable and
if the performance is robust as well.])[]
]
// Source repo: https://github.com/vimkat/typst-ohm
// File: src/templates/document.typ (typst, MIT License)
#import "../lib/vars.typ"
#import "../components/logo.typ": logo
#let document(doc, blue-as-black: false, lang: "de") = {
set text(fill: vars.blue) if blue-as-black
// Text styles
set text(font: vars.font, lang: lang)
set par(leading: 0.75em)
// Headings
show heading: it => [ #v(2em, weak: true) #it #v(1em, weak: true) ]
doc
}
// Source repo: https://github.com/typst/packages
// File: packages/preview/minerva-report-fcfm/0.1.0/departamentos.typ (typst, Apache License 2.0)
#let adh = (
nombre: (
"Universidad de Chile",
"Facultad de Ciencias Físicas y Matemáticas",
"Área de Humanidades"
),
logo: read("logos/adh.svg")
)
#let das = (
nombre: (
"Universidad de Chile",
"Facultad de Ciencias Físicas y Matemáticas",
"Departamento de Astronomía"
),
logo: read("logos/das.svg")
)
#let dcc = (
nombre: (
"Universidad de Chile",
"Facultad de Ciencias Físicas y Matemáticas",
"Departamento de Ciencias de la Computación"
),
logo: read("logos/dcc.svg")
)
#let dfi = (
nombre: (
"Universidad de Chile",
"Facultad de Ciencias Físicas y Matemáticas",
"Departamento de Física"
),
logo: read("logos/dfi.svg")
)
#let dgf = (
nombre: (
"Universidad de Chile",
"Facultad de Ciencias Físicas y Matemáticas",
"Departamento de Geofísica"
),
logo: read("logos/dgf.svg")
)
#let dic = (
nombre: (
"Universidad de Chile",
"Facultad de Ciencias Físicas y Matemáticas",
"Departamento de Ingeniería Civil"
),
logo: read("logos/dic.svg")
)
#let die = (
nombre: (
"Universidad de Chile",
"Facultad de Ciencias Físicas y Matemáticas",
"Departamento de Ingeniería Eléctrica"
),
logo: read("logos/die.svg")
)
#let dii = (
nombre: (
"Universidad de Chile",
"Facultad de Ciencias Físicas y Matemáticas",
"Departamento de Ingeniería Industrial"
),
logo: read("logos/dii.svg")
)
#let dim = (
nombre: (
"Universidad de Chile",
"Facultad de Ciencias Físicas y Matemáticas",
"Departamento de Ingeniería Matemática"
),
logo: read("logos/dim.svg")
)
#let dimec = (
nombre: (
"Universidad de Chile",
"Facultad de Ciencias Físicas y Matemáticas",
"Departamento de Ingeniería Mecánica"
),
logo: read("logos/dimec.svg")
)
#let dimin = (
nombre: (
"Universidad de Chile",
"Facultad de Ciencias Físicas y Matemáticas",
"Departamento de Ingeniería de Minas"
),
logo: read("logos/dimin.svg")
)
#let diqbm = (
nombre: (
"Universidad de Chile",
"Facultad de Ciencias Físicas y Matemáticas",
"Departamento de Ingeniería Química, Biotecnología y Materiales"
),
logo: read("logos/diqbm.svg")
)
#let geo = (
nombre: (
"Universidad de Chile",
"Facultad de Ciencias Físicas y Matemáticas",
    "Departamento de Geología"
),
logo: read("logos/geo.svg")
)
// Source repo: https://github.com/jgm/typst-hs
// File: test/typ/compiler/return-02.typ (typst, Other license)
// Test return with joining and content.
// Ref: true
#let f(text, caption: none) = {
text
if caption == none [\.#return]
[, ]
emph(caption)
[\.]
}
#f(caption: [with caption])[My figure]
#f[My other figure]
// Source repo: https://github.com/diogro/memorial
// File: IB-USP-2024/cv.typ (typst)
#set text(
font: "Skolar PE TEST",
size: 10pt,
  lang: "en"
)
#set page(
paper: "a4",
header:[
#set text(9pt, font: "Skolar PE TEST")
Curriculum vitae
#h(1fr) <NAME>
],
numbering: "1",
)
#show par: set block(spacing: 1em)
#set par(
leading: 1em,
first-line-indent: 1em,
justify: true
)
#show heading: it => {
block[#it.body]
v(0.3em)
}
== Personal information
*<NAME>*
- Email: <EMAIL>
- Google Scholar: #link("https://scholar.google.com/citations?user=ymsxHCMAAAAJ&hl=en")
- ORCID: *0000-0002-7603-0092* #link("https://orcid.org/0000-0002-7603-0092")[(#underline[link])]
- Research-ID: B-9282-2014
== Postdoctoral experience
- 2020-present - *Postdoctoral researcher in the Department of Ecology and Evolutionary Biology*
- Funded by the _Princeton Presidential Postdoctoral Research Fellows Program_
- Princeton University, NJ, USA
- Document @fig-princeton[]
- 2019-2020 - *Postdoctoral researcher in the Department of Genetics and Evolutionary Biology*
- Funded by a FAPESP postdoctoral fellowship
- Instituto de Biociências, Universidade de São Paulo
- Document @fig-outorga_postdoc[]
== Education
- 2014-2019 - *Graduate Program in Genetics and Evolutionary Biology*
- Advisor: <NAME>
- PhD (Doutorado em Ciências) with the thesis "Evolução da covariação genética em caracteres complexos: interação entre o mapa genótipo-fenótipo e seleção natural"
- Funded by a FAPESP doctoral fellowship
- Universidade de São Paulo
- Documents @fig-diploma_doc[] and @fig-outorga_doc[]
- 2014, 2016 - *Visiting PhD student at the University of Bath*
- Supervisor: Dr. <NAME>
- Funded by a BEPE-FAPESP doctoral fellowship (2016)
- Funded by the University of Bath _Future Research Leaders Incubator Scheme_ (2016) and _Global Research Scholarship Scheme_ (2014)
- University of Bath, Somerset, United Kingdom
- Documents @fig-bath_letter[] and @fig-outorga_BEPE[]
- 2010-12 - *Graduate Program in Genetics and Evolutionary Biology*
- Advisor: <NAME>
- MSc (Mestre em Ciências) with the dissertation "Evolução Morfológica e Modularidade"
- Funded by a FAPESP master's fellowship
- Universidade de São Paulo
- Documents @fig-diploma_mest[] and @fig-outorga_mest[]
- 2007-09 - *Biological Sciences Program*
- Bachelor's degree with emphasis in Evolutionary Biology
- Universidade de São Paulo
- Document @fig-diploma_bio[]
- 2003-07 - *Molecular Sciences Program*
- Bachelor's degree with emphasis in biology, applied mathematics, and statistical mechanics
- Universidade de São Paulo
- Document @fig-diploma_ccm[]
== Preprint publications
1. _Reassessing the modularity of gene co-expression networks using the Stochastic Block Model_ #link("https://www.biorxiv.org/content/10.1101/2023.05.31.542906v1")[(#underline[link])] (2023)
*<NAME>*, <NAME>, <NAME>. bioRxiv 2023.05.31.542906. doi: _10.1101/2023.05.31.542906_ #link("https://doi.org/10.1101/2023.05.31.542906")[(#underline[link])]
- Document @fig-SBM[]
2. _Saturating the eQTL map in Drosophila melanogaster: genome-wide patterns of cis and trans regulation of transcriptional variation in outbred populations_ #link("https://www.biorxiv.org/content/10.1101/2023.05.20.541576v3")[(#underline[link])] (2023)
<NAME>, *<NAME>*, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>. bioRxiv. doi: _10.1101/2023.05.20.541576_ #link("https://doi.org/10.1101/2023.05.20.541576")[(#underline[link])]
- Document @fig-eQTL[]
3. _From GWAS to signal validation: An approach for estimating SNP genetic effects while preserving genomic context_ #link("https://www.biorxiv.org/content/10.1101/2023.03.09.531909v3")[(#underline[link])] (2023)
<NAME>., <NAME>, *<NAME>*, <NAME>, <NAME>. bioRxiv. doi: _10.1101/2023.03.09.531909_ #link("https://doi.org/10.1101/2023.03.09.531909")[(#underline[link])]
- Document @fig-FocalSNP[]
== Publications in indexed journals
1. _Characterizing the landscape of gene expression variance in humans_ #link("https://journals.plos.org/plosgenetics/article?id=10.1371/journal.pgen.1010833")[(#underline[link])] (2023)
<NAME>.\*, *<NAME>*\*, <NAME>, <NAME>, <NAME>. _PLoS Genetics_. 19, e1010833. doi: _10.1371/journal.pgen.1010833_ #link("http://dx.doi.org/10.1371/journal.pgen.1010833")[(#underline[link])]
- \* co-first authors
- Document @fig-exp_var[]
2. _Morphological integration during postnatal ontogeny: implications for evolutionary biology_ #link("https://academic.oup.com/evolut/article/77/3/763/6957025")[(#underline[link])] (2023)
<NAME>., <NAME>, *<NAME>*, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>. _Evolution_. 2023 77(3): 763–775. doi: _10.1093/evolut/qpac052_ #link("https://doi.org/10.1093/evolut/qpac052")[(#underline[link])]
- Document @fig-ontogenia[]
3. _Are cats less stressed in homes than in shelters? A study of personality and faecal cortisol metabolites levels_ #link("https://www.sciencedirect.com/science/article/abs/pii/S0168159119301881")[(#underline[link])] (2020)
<NAME>., *<NAME>*, <NAME>, <NAME>, <NAME>. _Applied Animal Behaviour Science_. 2020 224:104919. doi:10.1016/j.applanim.2019.104919
- Document @fig-cats[]
4. _Genomic Perspective on Multivariate Variation, Pleiotropy, and Evolution_ #link("https://academic.oup.com/jhered/article/110/4/479/5463195")[(#underline[link])] (2019)
*<NAME>*, <NAME>, <NAME>. _Journal Of Heredity_. 2019 110(4):479-493. doi:10.1093/jhered/esz011
- Document @fig-qtls[]
5. _Measuring the magnitude of morphological integration: the effect of differences in morphometric representations and the inclusion of size_ #link("https://onlinelibrary.wiley.com/doi/abs/10.1111/evo.13864")[(#underline[link])] (2019)
<NAME>., <NAME>, *<NAME>*, <NAME>, <NAME>. _Evolution_. 2019 73(12):2518-2528. doi:10.1111/evo.13864
- Document @fig-Integration[]
6. _The evolution of phenotypic integration: How directional selection reshapes covariation in mice_ #link("http://onlinelibrary.wiley.com/doi/10.1111/evo.13304/abstract")[(#underline[link])] (2017)
<NAME>.\*, *<NAME>*\*, <NAME>, <NAME>, <NAME>. _Evolution_, 2017 71(10):2370–2380. doi:10.1111/evo.13304
- \* co-first authors
- Document @fig-ratones[]
7. _Insights from Systems Biology in Physiological Studies: Learning from Context_ #link("https://www.karger.com/Article/FullText/478648")[(#underline[link])] (2017)
<NAME>., *<NAME>*, <NAME>. _Cell Physiology and Biochemestry_, 2017 42(3):939-951. doi:10.1159/000478648
- Document @fig-sysbio[]
8. _Modularity: Genes, Development, and Evolution_ #link("http://annualreviews.org/doi/abs/10.1146/annurev-ecolsys-121415-032409")[(#underline[link])] (2016) *<NAME>*\*, <NAME>\*, <NAME>, <NAME>. _Annual Review of Ecology, Evolution, and Systematics_, 2016 47:463-486
- \* co-first authors
- Document @fig-Modularity_Genes[]
9. _Costly learning: preference for familiar food persists despite negative impact on survival_ #link("http://rsbl.royalsocietypublishing.org/content/12/7/20160256")[(#underline[link])] (2016)
<NAME>., <NAME>, *<NAME>*, <NAME>. _Biology Letters_, 2016 20160256. doi:10.1098/rsbl.2016.0256
- Document @fig-Costly[]
10. _A case study of extant and extinct Xenarthra cranium covariance structure: Implications and applications to paleontology_ #link("http://www.bioone.org/doi/abs/10.1017/pab.2015.49")[(#underline[link])] (2016)
<NAME>., *<NAME>*, <NAME>. _Paleobiology_, 2016 42(3):465-488 doi:10.1017/pab.2015.49
- Document @fig-Xenarthra[]
11. _EvolQG - An R package for evolutionary quantitative genetics_ #link("http://f1000research.com/articles/4-925/v3")[(#underline[link])] (2015)
*<NAME>*, <NAME>, <NAME>, <NAME>, <NAME>. _F1000 Research_, 2015 4:925 doi:10.12688/f1000research.7082.3
- Document @fig-evolqg[]
12. _Fitness Trade-offs Result in the Illusion of Social Success_ #link("https://doi.org/10.1016/j.cub.2015.02.061")[(#underline[link])] (2015)
<NAME>., <NAME>, <NAME>, <NAME>, *<NAME>*, <NAME>, <NAME>. _Current Biology_, 2015 25(8):1086–1090 doi:10.1016/j.cub.2015.02.061
- Document @fig-dicty[]
13. _Directional Selection can Drive the Evolution of Modularity in Complex Traits_ #link("http://www.pnas.org/content/112/2/470.abstract")[(#underline[link])] (2015)
*<NAME>*, <NAME>. _PNAS_, 2015 112(2):470-475 doi:10.1073/pnas.1322632112
- Document @fig-DirectionalSelection[]
14. _Modularity, Noise, and Natural Selection_ #link("http://onlinelibrary.wiley.com/doi/10.1111/j.1558-5646.2011.01555.x/abstract")[(#underline[link])] (2012)
<NAME>., *<NAME>*, <NAME>. _Evolution_, 2012 66(5):1506–1524 doi:10.1111/j.1558-5646.2011.01555.x
- Document @fig-Noise[]
15. _Selection Response Decomposition (SRD): A New Tool for Dissecting Differences and Similarities Between Matrices_ #link("http://link.springer.com/article/10.1007%2Fs11692-010-9107-2")[(#underline[link])] (2012)
<NAME>., *<NAME>*, <NAME>, <NAME>, and <NAME>. _Evolutionary Biology_, 2011 38:225-241 doi:10.1007/s11692-010-9107-2
- Document @fig-SRD[]
== Book chapters
1. _How does modularity in the genotype-phenotype map interact with development and evolution?_ #link("https://link.springer.com/chapter/10.1007/978-3-030-18202-1_11")[(#underline[link])] (2019) *<NAME>*. In: _Old Questions and Young Approaches to Animal Evolution_. <NAME>., <NAME>. (eds) Fascinating Life Sciences. Springer, Cham. doi:10.1007/978-3-030-18202-1_11
- Document @fig-cap_oqya[]
2. _Modularity and Integration_ #link("http://www.sciencedirect.com/science/article/pii/B9780128000496000445")[(#underline[link])] (2016) <NAME>., <NAME>, <NAME>, *<NAME>*, <NAME>. _Encyclopedia of Evolutionary Biology_, 2016 vol. 3, pp. 34–40. Oxford: Academic Press. doi:10.1016/B978-0-12-800049-6.00044-5
- Document @fig-cap_mod[]
== Conference presentations
1. *Society for Molecular Biology and Evolution (SMBE) Meeting*, Ferrara, Italy (2023)
- <NAME>., *<NAME>*, <NAME>., <NAME>., <NAME>., <NAME>., <NAME>., <NAME>., <NAME>.
- Longitudinal sequencing reveals polygenic and epistatic nature of genomic response to selection
- Poster
- Document @fig-smbe2023[]
2. *Ecological and Evolutionary Genomics Gordon Research Conference*, Smithfield, RI, United States of America (2023)
- <NAME>., *<NAME>*, <NAME>., <NAME>., <NAME>., <NAME>., <NAME>., <NAME>., <NAME>.
- Longitudinal sequencing reveals polygenic and epistatic nature of genomic response to selection
- Poster
- Document @fig-GRC2023[]
3. *II Joint Congress on Evolutionary Biology*, Montpellier, France (2018)
- *<NAME>*, <NAME>, <NAME>, <NAME>, <NAME>
- Genetic architecture and the evolution of variational modularity
- Poster
- Document @fig-evol2018[]
4. *5th meeting of the European Society for Evolutionary Developmental Biology (EED)*, Uppsala, Sweden (2016)
- *<NAME>*, <NAME>
- The Evolution of Pleiotropy and Modularity
- Invited speaker
- Document @fig-evodevo[]
5. A *Evolution Meeting*, Guarujá, Brazil (2015)
- *<NAME>*, <NAME>, <NAME>, <NAME>, <NAME>
- Changes in multivariate covariance structures under directional selection
- Oral presentation
- Document @fig-evol2015L[]
5. B *Evolution Meeting*, Guarujá, Brazil (2015)
- *<NAME>*, <NAME>
- The Effect of Directional Selection on Pleiotropy and Modularity
- Poster
- Document @fig-evol2015P[]
6. *V Brazilian Meeting on Simulational Physics*, Ouro Preto, Brazil (2017)
- *<NAME>*, <NAME>
- Order-disorder phase transitions on dissociating model lipid membranes
- Poster
- Documento @fig-sims[]
== Courses taught
1. *Programming for Biology* (2023)
  - Undergraduate course at Princeton University
  - Department of Ecology and Evolutionary Biology
  - 50-hour course load
  - Documents @fig-EEB330[] and @fig-genomics[]
2. *Statistical models: linking data to theory* (2023)
  - Outreach course in the Quantitative Ecology Training Program
  - Instituto Serrapilheira
  - Documents @fig-serrapilheira1[] and @fig-serrapilheira2[]
3. *Scientific Writing* (2020, 2021)
  - Course taught in the Graduate Program in Biological Sciences (Zoology)
  - 120-hour course load
  - Document @fig-escrita[]
4. *Short Course in Evolutionary Biology* (2020)
  - Course taught at the 1st Vacation Course in Genetics and Evolution
  - Department of Genetics and Evolutionary Biology, IB-USP
  - 6-hour course load
  - Document @fig-BE[]
5. *Modularity: connecting patterns and processes in multivariate evolution* (2019)
  - Course taught in the Graduate Program in Comparative Biology at FFCLRP/USP
  - 30-hour course load
  - Document @fig-modularidade[]
6. *MCMC: step by step*, IB, USP (2017)
  - Workshop on the _Markov Chain Monte Carlo_ (MCMC) method.
  - 20-hour course load
  - Document @fig-mcmc[]
7. *Introduction to data manipulation with the R language* (2012)
  - Course during the thematic week of USP's Oceanographic Institute
  - 12-hour course load
  - Document @fig-io[]
== Teaching assistantships
1. *Epidemiology: An Ecological and Evolutionary Perspective* (2022)
  - Undergraduate course at Princeton University
  - Profs. <NAME>, <NAME>
  - Document @fig-genomics[]
2. *Human Genomics: The Past, Present, and Future of the Human Genome* (2021)
  - Undergraduate course at Princeton University
  - Profs. <NAME>, <NAME>, <NAME>
  - Document @fig-genomics[]
3. *Evolutionary Biology*, IB, USP (2012, 2013, 2019)
  - Undergraduate biology course taught by Profs. <NAME> and <NAME>
  - Department of Genetics and Evolutionary Biology
  - Document @fig-bioevol[]
4. *Using the R language for ecological data analysis*, IB, USP (2012-14)
  - Graduate course in ecology taught by Profs. Paulo Inácio Prado and <NAME>
  - Department of Ecology
  - Document @fig-R[]
5. *Southern-Summer School on Mathematical Biology*, ICTP, Unesp (2014, 2015, 2016)
  - Graduate course on mathematics applied to biology
  - Organized by <NAME> (Santiago), <NAME> (São Paulo), <NAME> (São Paulo)
  - International Center for Theoretical Physics and Instituto de Física Teórica, Unesp
  - Document @fig-SSSMB[]
6. *Atlantic Forest Ecology Field Course*, IB, USP (2015)
  - Graduate course in ecology taught by Profs. <NAME>, Paulo Inácio Prado and <NAME>
  - Department of Ecology
  - Document @fig-campo[]
7. *Evolutionary Processes*, IB, USP (2011)
  - Undergraduate biology course taught by Profs. <NAME> and <NAME>
  - Teaching improvement program (PAE)
  - Department of Genetics and Evolutionary Biology
  - Document @fig-pae[]
#pagebreak()
#include "certificados.typ" |
|
// packages
#import "@preview/great-theorems:0.1.1": *
#import "@preview/headcount:0.1.0": *
// local
#import "colors.typ": *
#let mathcounter = counter("mathblocks") // counter for mathblocks
#let my_mathblock = mathblock.with(
radius: 0.3em,
inset: 0.8em,
counter: mathcounter,
numbering: dependent-numbering("1.1"),
breakable: false,
titlix: title => [(#title):],
)
// theorem etc. settings
#let theorem = my_mathblock(
blocktitle: "Theorem",
fill: color1.lighten(90%),
stroke: color1.darken(20%),
)
#let proposition = my_mathblock(
blocktitle: "Proposition",
fill: color2.lighten(90%),
stroke: color2.darken(20%),
)
#let corollary = my_mathblock(
blocktitle: "Corollary",
fill: color3.lighten(90%),
stroke: color3.darken(20%),
)
#let lemma = my_mathblock(
blocktitle: "Lemma",
fill: color4.lighten(90%),
stroke: color4.darken(20%),
)
#let definition = my_mathblock(
blocktitle: "Definition",
fill: color5.lighten(95%),
stroke: color5.darken(20%),
)
#let remark = my_mathblock(
blocktitle: "Remark",
fill: color1.lighten(90%),
stroke: color1.darken(20%),
)
#let reminder = my_mathblock(
blocktitle: "Reminder",
fill: color3.lighten(90%),
stroke: color3.darken(20%),
)
#let example = my_mathblock(
blocktitle: "Example",
fill: color2.lighten(90%),
stroke: color2.darken(20%),
)
#let question = my_mathblock(
blocktitle: "Question",
fill: color3.lighten(75%),
stroke: color3.darken(20%),
)
#let proof = proofblock()
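For completeness, here is a minimal usage sketch of the environments defined above. It assumes this file is saved as `great-theorems-customization.typ` (the filename is an assumption) and that headings are numbered, since `dependent-numbering("1.1")` derives the theorem prefix from the current heading:

```typst
#import "great-theorems-customization.typ": theorem, definition, proof

// dependent-numbering("1.1") needs numbered headings to build prefixes like "2.1"
#set heading(numbering: "1.1")

= Basics

#definition(title: "Prime")[
  A natural number greater than 1 with no positive divisors other than 1 and itself.
]

#theorem(title: "Euclid")[
  There are infinitely many prime numbers.
]

#proof[
  Assume there are finitely many primes and consider their product plus one.
]
```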
#show: docu.with(
docutype: "paper",
rp-authors: (
rp-author("Clarkson", "<NAME>"),
rp-author("Hammond", "<NAME>"),
rp-author("May", "<NAME>"),
),
rp-title: "Strange Bugs and Where To Find Them",
rp-subtitle: [A Sample of `jx-style`'s research paper format],
rp-supplement: lorem(10),
rp-supplement2: lorem(6),
rp-school: "Squidward Community College",
rp-subject: "Inquiries, Investigation, and Immersion",
section: "BSIT-1A",
rp-submittedTo: "The BBC",
font: "tex gyre heros"
)
#show "ae": "æ"
#show "oe": "œ"
= Pingas Blast
#let lorems = lorem(50)
#lorem(50) `pingas` #lorem(20)
```cs
namespace yourMom;
public class PINGAS {
public static void Main() {
        Console.WriteLine("SnooPINGAS usual, I see?");
}
}
```
|
# typst-din-5008-letter
A template for [DIN 5008](https://www.edv-lehrgang.de/din-5008) inspired [typst](https://typst.app/home) letter.
Furthermore, there is an envelope template.
## Reference
The reference can be found at the [Documentation](documentation.md).
You should also take a look at the examples to see how this package works in practice. In particular, `example_letter.typ` shows off many package options.
## Usage
Right now, an entry in the public typst package `@preview` repository is not planned. However, you can install it as a local package by downloading the repo into the local package directory (see [typst package README]()).
### local package
This method provides simpler versioning and a cleaner folder structure. However, as this method works only locally, it is not supported in the typst web app.
The usage is principally the same as in the next section, but the first line changes to
```typst
#import "@local/letter:0.1.0": letter
```
### repo copy in letter folder
To use the template, create a file next to `letter.typ` and import it like this:
```typst
#import "letter.typ" : letter
```
You can also specify the package path as root when working locally.
### Tips
- If you repeatedly use the same settings, consider adding a `my_defaults.typ` to your project that bundles your styling, hook, and wording options in one place, and then import it. This can be achieved with a local package.
- Take a look at the examples, which should clearly show some possibilities and their usage.
## Showcase
The resulting pdfs are included in the `examples` directory; the first pages look like this:
| `example_letter` | `nice_defaults` | `template_letter` | `envelope_ex` |
|---|---|---|---|
|  |  |  |  |
## Notable differences from DIN standard
- Normally a `theme-color` is not included, but here the argument does not live in the `extra-options`, but in the `styling-options`. The default color is `navy` - a dark variant of blue. I think it is a nice feature and having it this way, simplifies usage in various places. You can, however, set this to `black`, to obtain DIN behaviour.
- I'm not sure every spacing is exactly the way the standard specifies.
- The default font is set to `Source Sans Pro` with font size `12pt`
- Not all options present in the DIN are available, maybe some will be added later.
- The usage of reference signs is not DIN-conformant, as I understand it; still, it is available as an option / argument.
- In addition to _Subsubjects_, I also added _Subsubsubjects_ and so on.
- Some of the TODO's make up other differences.
## ToDo's
Because certain functionality is not possible in typst right now, here are some things I want to add later on.
- [ ] Footer depending on page number with correct sizing
- [ ] Even and odd pages distinction, so that margin fits (left and right margin)
- [x] More flexible design
- [ ] Option to show page number + margin only if there is more than one page
- [ ] typed options
- [ ] Possibility to specify options in various places?
- [ ] Add some opinionated default block / hooks as extension to load, for easier experience
Furthermore, if someone has any ideas on how to improve this class, feel free to create an issue or, better, a PR.
#import "template.typ": *
#show: thesis.with(
title: "中国科学技术大学学位论文模板示例文档",
author: "张三",
)
// #frontmatter()
// #include("chapters/abstract.typ")
#outline()
// #listoffigures
// #listoftables
// #include("chapters/notation.typ")
// #show: mainmatter
#include("chapters/intro.typ")
#include("chapters/floats.typ")
#include("chapters/math.typ")
#include("chapters/citations.typ")
#bibliography("refs.bib")
// #appendix()
// #include("chapters/complementary.typ")
// #backmatter()
// #include("chapters/acknowledgements.typ")
// #include("chapters/publications.typ")
#import "@preview/tidy:0.3.0"
#import "template.typ": *
#import "../src/lib.typ" as template
#let package-meta = toml("../typst.toml").package
#let date = none
// #let date = datetime(year: ..., month: ..., day: ...)
#show: project.with(
title: "pre-plantuml",
// subtitle: "...",
authors: package-meta.authors.map(a => a.split("<").at(0).trim()),
abstract: [
Extract PlantUML diagrams from Typst documents to be rendered into images.
],
url: package-meta.repository,
version: package-meta.version,
date: date,
)
// the scope for evaluating expressions and documentation
#let scope = (template: template)
= Introduction
This package provides two #link("https://typst.app/universe/package/prequery")[prequeries] for using PlantUML in Typst:
- #ref-fn("plantuml-url()") provides the encoded diagram URL to preprocessors for downloading the diagram file;
- #ref-fn("plantuml-source()") provides the diagram source code to preprocessors for generating the diagram image locally.
See the Prequery library documentation for more general information on how this can be used.
= Module reference
== `template`
#{
let module = tidy.parse-module(
read("../src/lib.typ"),
// label-prefix: "template.",
scope: scope,
)
tidy.show-module(
module,
sort-functions: none,
style: tidy.styles.minimal,
)
}
#import "/lib/theme.typ"
#show: theme.theme.with()
#let ai-data = theme.slide.theme-color.green
#let dev = theme.slide.theme-color.blue
#let devrel = theme.slide.theme-color.red
#let game = theme.slide.theme-color.yellow
#let authors = (
(
title: [
      Write the title here.
],
author: "발표자명",
author-meta: "Chonnam National University",
author-image-path: "/assets/profile01.jpg",
theme: dev
),
(
title: [\
      Do Rogue Metaverse Indie Games\
      Dream of Game Engines?
],
author: "<NAME>",
author-meta: "Chonnam National University",
author-image-path: "/assets/profile01.jpg",
theme: game
),
(
title: [\
      Do Rogue Metaverse Indie Games\
      Dream of Game Engines?
],
author: "<NAME>",
author-meta: "Chonnam National University",
author-image-path: "/assets/profile01.jpg",
theme: devrel
),
(
title: [\
      Do Rogue Metaverse Indie Games\
      Dream of Game Engines?
],
author: "<NAME>",
author-meta: "Chonnam National University",
author-image-path: "/assets/profile01.jpg",
theme: ai-data
),
)
// ------------------------------------------
#for each in authors [
#theme.slide.title(
title: each.title,
author: each.author,
author-meta: each.author-meta,
author-image-path: each.author-image-path,
theme: each.theme
)
]
<h1 align="center">LaPreprint for Typst</h1>
<p align="center">
<a href="https://github.com/rowanc1/LaPreprint/blob/main/LICENSE"><img alt="MIT License" src="https://img.shields.io/github/license/rowanc1/lapreprint?style=flat-square"></a>
<a href="https://github.com/rowanc1/lapreprint"><img alt="Stars" src="https://img.shields.io/github/stars/rowanc1/lapreprint?style=social"></a>
<a href="https://twitter.com/intent/tweet?url=https%3A%2F%2Fgithub.com%2Frowanc1%2FLaPreprint&via=rowancockett&text=Checkout%20the%20LaPreprint%20template%20for%20Typst"><img alt="Twitter" src="https://img.shields.io/twitter/follow/rowancockett"></a>
</p>
<p align="center">
<b>Easily create beautiful preprints in Typst</b>
</p>
<p align="center">
<img width="75%" src="https://raw.githubusercontent.com/rowanc1/LaPreprint/68f5ccbf1e1e26819149bcea1d547dfdfc5ceecd/example/files/screenshot.png">
</p>
```typst
#import "lapreprint.typ": template
#show: template.with(
title: "A beautiful preprint template"
)
```
# Features
With simple options you can enable/disable:
- color schemes of blue, red, purple
- author, ORCID, and affiliation support
- branding and logo support
- left margin with custom headings
- date accepted, published & submitted
- customizable font choices
- running title, affiliation
- multiple abstracts, e.g. plain language summary or english/french!
- keywords, page count, nice headings
- easily customize APA or IEEE citation style
- Optional: full width after first page
# Quick start
The preprint template uses Typst (not LaTeX[^latex]) for typesetting, see [typst docs](https://typst.app/docs).
[^latex]: If you are looking for LaPreprint for LaTeX, please see: https://github.com/roaldarbol/LaPreprint
Copy [`lapreprint.typ`](./lapreprint.typ) to your own folder that you are working in, you only need the single `lapreprint.typ` file:
```typst
#import "lapreprint.typ": template
```
Take a look at the examples in the [GitHub repository](https://github.com/rowanc1/LaPreprint), for example, [this file](https://github.com/rowanc1/lapreprint/blob/main/examples/pixels/main.typ#L5). The basics are to use the template with a `#show` command:
```typst
#show: template.with(
title: "A beautiful preprint template"
// ... all sorts of other options that are explained below!
)
```
## Logos and Branding
The theme of the document can be set to a specific color, which changes the headers and links. The default `theme` is blue, however, the following examples use:
```typst
theme: red.darken(50%),
```
You can also supply a logo, which is either an image file location or content, allowing you to add additional information about the journal or lab-group to the top-right of the document. You can also set the `paper-size`, `heading-numbering` and `font-face`. The default font-face used is Noto Sans, which you may need to [download](https://fonts.google.com/noto/specimen/Noto+Sans).
```typst
logo: "my-logo.png",
theme: purple.darken(20%),
paper-size: "us-letter",
heading-numbering: "1.a.i",
font-face: "Noto Sans",
```

## Title and Subtitle
You can have both a title and a subtitle:
```typst
title: "Pixels and their Neighbours",
subtitle: "A Tutorial on Finite Volume",
```
Will become:

## Authors and Affiliations
You can add both author and affiliations lists, each author should have a `name`, and can optionally add `orcid`, `email`, and `affiliations`. The affiliations are just content that is put in superscript, e.g. `"1,2"`, have corresponding identifiers in the top level `affiliations` list, which requires both an `id` and a `name`. If you wish to include any additional information in the affiliation (e.g. an address, department, etc.), it is content and can have whatever you want in it.
```typst
authors: (
(
name: "<NAME>",
orcid: "0000-0002-7859-8394",
email: "<EMAIL>",
affiliations: "1,2"
),
// Add other authors here...
),
affiliations: (
(id: "1", name: "University of British Columbia"),
(id: "2", name: "Curvenote Inc."),
),
```

Note that the orcid and email icons are actually aligned to the text. Details, details!
For other information that you wish to affiliate with a specific author, you can use the `affiliations` field with any identifier you like (e.g. `†`) and then use the margin content or affiliations fields on the preprint to explain what it means.
## Abstract and Keywords
You can include one or more abstracts as well as keywords. For a simple `abstract` the default title used is "Abstract" and you can include it with:
```typst
abstract: lorem(100),
keywords: ("Finite Volume", "Tutorial", "Reproducible Research"),
```
To include one or more specific abstracts, for example, different languages or a "Plain Language Summary", provide a list, with `title` and `content` in a dictionary:
```typst
abstract: (
(title: "Abstract", content: lorem(100)),
(title: "Plain Language Summary", content: lorem(25)),
),
```

## Margin content
The content on the first page is customizable. The first content is the `kind`, for example, "Original Research", "Review Article", "Retrospective" etc. And then the `date`, which is by default the date you compiled the document.
```typst
kind: "Notebook Tutorial",
date: datetime(year: 2023, month: 08, day: 21),
```
You can also set `date` to be a dictionary or list of dictionaries with `title` and `date` as the two required keys. The first date will be bolded as well as used in the document metadata and auto `short-citation`.
```typst
kind: "Notebook Tutorial",
date: (
(title: "Published", date: datetime(year: 2023, month: 08, day: 21)),
(title: "Accepted", date: datetime(year: 2022, month: 12, day: 10)),
(title: "Submitted", date: datetime(year: 2022, month: 12, day: 10)),
),
```

The rest of the margin content can be set with `margin` property, which takes a `title` and `content`, content is required, however the title is optional.
```typst
margin: (
(
title: "Correspondence to",
content: [
<NAME>\
#link("mailto:<EMAIL>")[<EMAIL>]
],
),
// ... other properties
)
```
You can use the margin property for things like funding, data availability statements, explicit correspondence requests, key points, conflict of interest statements, etc.

### Setting the Margin
The first page has a left hand margin of 25%, if you do nothing this will continue for your whole document. To override this, you can reset the margin after the first page by placing `#set page(margin: auto)` in a place where you want the page break between the first and second pages[^margin].
If you opt for resetting to a full-width margin, you will want to not supply a `bibliography-file` to the template, and instead do this in your content, this is because the template will revert to the first-page margin as well as inserting a page break[^bug]. See the section on bibliography below.
[^margin]: If you know a better way of doing this to automate it in the template, please open an issue or pull-request!!
[^bug]: This seems like a bug to me in typst, but maybe is by design?
## Headers and Footers
You can control the headers and footer by providing the following information:
```typst
open-access: true,
doi: "10.1190/tle35080703.1",
venue: [ar#text(fill: red.darken(20%))[X]iv],
short-title: "Finite Volume Tutorial",
short-citation: auto,
date: datetime.today()
```
The first page will show an open-access statement and the `doi` if available. For DOIs, only include the actual identifier, not the URL portions:

Subsequent pages will show the `short-title` and `short-citation`. If the citation is `auto` (the default) it will be created in APA formatting using the paper authors.

The footers show the `venue` (e.g. the journal or preprint repository) the `date` (which is by default `today()`) as well as the page count.

## Bibliography
The bibliography is only included in the theme if you supply the `bibliography-file` and an optional `bibliography-style`. The default `bibliography-style` is `"apa"`, you can override this if you like, for example, to `"ieee"`.
If you have opted for full-page margins or have an appendix, you may want to place the bibliography yourself as including in the template will default back to the same margin as the first page. You can also handle the bibliography yourself with:
```typst
#{
show bibliography: set text(8pt)
bibliography("main.bib", title: text(10pt, "References"), style: "apa")
}
```
# Acknowledgements
The Typst LaPreprint template (and this Readme!) is inspired by [@roaldarbol LaTeX template](https://github.com/roaldarbol/LaPreprint), which is in the style of eLife and PLoS. Portions of the template were based on the example IEEE template in Typst.
#import "template.typ": *
#import "@preview/codelst:2.0.1": sourcecode, sourcefile
// #import "diagrams/main.typ" as main_algo
// Take a look at the file `template.typ` in the file panel
// to customize this template and discover how it works.
#show: project.with(
title: "Форматы хранения и передачи данных XML и JSON",
authors: (
"<NAME>. ИДз-21-21.",
),
)
#show figure: set block(breakable: true)
#outline()
= Форматы хранения и передачи данных XML и JSON. Основные понятия
XML (eXtensible Markup Language) и JSON (JavaScript Object Notation) — два популярных формата для хранения и передачи данных в веб-приложениях и между системами. XML представляет собой язык разметки, который использует теги для описания структуры данных, в то время как JSON — это текстовый формат, использующий пары "ключ-значение" для представления данных. Оба формата широко используются благодаря своей гибкости и возможности легко передавать данные между различными платформами и языками программирования.
= Области применения
XML и JSON применяются в самых разных областях, включая веб-разработку, обмен данными между серверами и клиентами, конфигурацию программного обеспечения и многое другое.
- *XML*:
- Веб-службы (SOAP)
- Конфигурационные файлы (например, для Java приложений)
- Документы и электронные публикации
- *JSON*:
- Веб-API (RESTful сервисы)
- Конфигурационные файлы (например, для JavaScript и других языков)
- Хранение и передача данных в NoSQL базах данных (например, MongoDB)
= Структура и наборы правил. Примеры кода
== XML
XML использует теги для обозначения элементов данных и их вложенности. Вот простой пример XML-документа:
#figure(
caption: [Простой пример `xml`],
sourcecode()[```xml
<note>
<to>Elena</to>
<from>Ivan</from>
<heading>Reminder</heading>
<body>Please buy me a coffee!</body>
</note>
```]
) <xml-example-simple>
Basic XML rules:
- Every element must have a closing tag.
- Tags are case-sensitive.
- A document must have a single root element.
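To make these rules concrete, here is a minimal sketch in Go (the language used for the web service later in this work) that decodes the `note` document above with the standard `encoding/xml` package. The `Note` struct and the `parseNote` helper are illustrative names, not part of any library:

```go
package main

import (
	"encoding/xml"
	"fmt"
)

// Note mirrors the structure of the <note> document above:
// each struct tag names the child element to read.
type Note struct {
	To      string `xml:"to"`
	From    string `xml:"from"`
	Heading string `xml:"heading"`
	Body    string `xml:"body"`
}

// parseNote decodes an XML note into a Note value.
func parseNote(data []byte) (Note, error) {
	var n Note
	err := xml.Unmarshal(data, &n)
	return n, err
}

func main() {
	doc := []byte(`<note><to>Elena</to><from>Ivan</from><heading>Reminder</heading><body>Please buy me a coffee!</body></note>`)
	n, err := parseNote(doc)
	if err != nil {
		panic(err)
	}
	fmt.Printf("%s -> %s: %s\n", n.From, n.To, n.Body)
}
```

Note that a malformed document, e.g. one with a missing closing tag, makes `xml.Unmarshal` return an error instead of a partially filled struct.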
== JSON
JSON represents data as "key-value" pairs. Here is an example of a JSON document:
#figure(
caption: [A simple `json` example],
sourcecode()[```json
{
"note": {
"to": "Elena",
"from": "Ivan",
"heading": "Reminder",
"body": "Please buy me a coffee!"
}
}
```]
) <json-example-simple>
Basic JSON rules:
- Data is represented as "key-value" pairs.
- Keys are enclosed in double quotes.
- Strings are also enclosed in double quotes.
- Object members are separated by commas.
The official site with the grammar is also worth noting; it describes the rules in more detail, with diagrams @json-official
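The same `note` document can be decoded from its JSON form with Go's standard `encoding/json` package; this is a minimal sketch, and the `document`/`parseDocument` names are illustrative:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// note matches the inner object of the JSON example above.
type note struct {
	To      string `json:"to"`
	From    string `json:"from"`
	Heading string `json:"heading"`
	Body    string `json:"body"`
}

// document wraps the top-level "note" key.
type document struct {
	Note note `json:"note"`
}

// parseDocument decodes the JSON document into Go structs.
func parseDocument(data []byte) (document, error) {
	var d document
	err := json.Unmarshal(data, &d)
	return d, err
}

func main() {
	data := []byte(`{"note": {"to": "Elena", "from": "Ivan", "heading": "Reminder", "body": "Please buy me a coffee!"}}`)
	d, err := parseDocument(data)
	if err != nil {
		panic(err)
	}
	fmt.Println(d.Note.Heading, "for", d.Note.To)
}
```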
As a usage and code example, I wrote a small web service in the Go programming language @golang-official.
#figure(
caption: "Файл с web-сервисом",
sourcefile(
file: "main.go",
read("main.go"),
)
)
To run the program you need the Go compiler installed, which can be downloaded from the official site @golang-official. The language version must be at least 1.22.3.
```bash
go run main.go
```
After startup, the web server is available at `localhost:8080`.
To check that the service works, run the curl command-line utility @curl-official
To get a response in XML format:
```bash
curl --request GET \
--url 'http://localhost:8080/hello?type=xml&value=world'
```
Response from the server:
```xml
<GreetingResp>
<greeting to="Hello world"></greeting>
</GreetingResp>
```
To get a response in JSON format:
```bash
curl --request GET \
--url 'http://localhost:8080/hello?type=json&value=world'
```
Response from the server:
```json
{
"greeting": {
"to": "Hello world"
}
}
```
A good example of a more complex JSON file comes from a project about the Star Wars movie universe @swapi-web, which collects all the characters, planets, starships, and so on; the project makes all of this information downloadable through a REST API in `json` format.
You can check it by running this command in a terminal:
#figure(
caption: [Command requesting 10 characters from the "Star Wars" franchise],
sourcecode()[```bash
curl "https://swapi.dev/api/people?page=1" | yq -oj -p=json > swapi.json
```],
) <curl-swapi>
Usually, when `json` files are downloaded, indentation and line breaks are omitted to save memory, since JSON is a string-based format and every new line requires the characters `\n`#footnote[Moves the cursor to the beginning of the next line.] and \u{23b5}#footnote[A space.].\ That is why the command in @curl-swapi pipes the output of the `curl` utility into the `yq` utility @yq-website. `yq` is useful because it pretty-prints files and can also convert files from one format to another, as I will show later.
The result of this command is a file in `json` format with 10 characters of the franchise.
#figure(
caption: [Result of running the command from @curl-swapi],
sourcefile(
file: "swapi.json",
read("swapi.json"),
),
) <curl-swapi-result>
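To illustrate the size cost of pretty-printing mentioned above, Go's standard library can strip the insignificant whitespace again with `json.Compact`; this is a small sketch, and `compactJSON` is an illustrative helper name:

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
)

// compactJSON removes insignificant whitespace (newlines and
// indentation) from a JSON document without changing its meaning.
func compactJSON(pretty []byte) ([]byte, error) {
	var buf bytes.Buffer
	if err := json.Compact(&buf, pretty); err != nil {
		return nil, err
	}
	return buf.Bytes(), nil
}

func main() {
	pretty := []byte("{\n  \"greeting\": {\n    \"to\": \"Hello world\"\n  }\n}")
	compact, err := compactJSON(pretty)
	if err != nil {
		panic(err)
	}
	// The byte counts show how much the indentation costs.
	fmt.Printf("pretty: %d bytes, compact: %d bytes\n", len(pretty), len(compact))
}
```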
Since the topic of this work is a comparison of the `xml` and `json` formats, I use `yq` to convert the file from @curl-swapi-result into the `xml` format.
#figure(
caption: [Command converting `json` to `xml` using `yq`],
sourcecode()[```bash
yq swapi.json -o=xml > swapi.xml
```]
) <yq-json-2-xml>
#figure(
caption: [Result of running the command from @yq-json-2-xml],
sourcefile(
file: "swapi.xml",
read("swapi.xml"),
),
) <yq-json-2-xml-result>
The differences between these formats are immediately visible here. A more detailed breakdown of both formats is given below.
= Advantages and Disadvantages
== XML
*Advantages*:
- Flexibility: allows building complex hierarchical data structures.
- Standardization: supported by many standards and specifications (e.g., XSLT, XPath).
- Validation: schema support (XSD) for checking the structure and content of the data.
*Disadvantages*:
- Verbosity: XML data can take up more space than other formats.
- Processing complexity: requires more resources to parse and process.
== JSON
*Advantages*:
- Lightness: a more compact format that is easier to read and write.
- Fast processing: requires fewer resources to parse than XML.
- JavaScript compatibility: integrates easily into web applications.
*Disadvantages*:
- Limited schema support: less strict rules for validating the data structure.
- No comments: the format does not support embedding comments in the data.
= Alternatives
Besides XML and JSON, there are other formats for storing and transferring data:
/ YAML: (YAML Ain't Markup Language): a human-readable format, often used for configuration files.
#figure(
caption: [Part of the default configuration file for the lazygit utility @lazygit-repo, which uses the `yaml` format to control the program's behavior.],
sourcefile(
file: "lazy-config.yaml",
read("lazy-config.yaml"),
),
)
/ CSV/TSV: (Comma/Tab-Separated Values): a simple text format for tabular data. In the example below I used the `&` character to separate values within a column, but the data may be corrupted or contain misplaced commas, in which case a delimiter collision occurs @delim-collision. This format is a poor fit for data where relationships between entities must be taken into account.
```csv
Ndex,Name,Type
#0566,Archen,Rock&Flying
#0079,Slowpoke,Water&Psychic
```
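Delimiter collision is usually avoided by quoting fields that contain the delimiter, which CSV writers do automatically. A small sketch with Go's standard `encoding/csv` package (`writeRows` is an illustrative helper name):

```go
package main

import (
	"bytes"
	"encoding/csv"
	"fmt"
)

// writeRows encodes records as CSV; fields that contain the
// delimiter are quoted automatically, avoiding delimiter collision.
func writeRows(rows [][]string) (string, error) {
	var buf bytes.Buffer
	w := csv.NewWriter(&buf)
	if err := w.WriteAll(rows); err != nil {
		return "", err
	}
	return buf.String(), nil
}

func main() {
	out, err := writeRows([][]string{
		{"Ndex", "Name", "Type"},
		{"#0566", "Archen", "Rock, Flying"}, // comma inside a field
	})
	if err != nil {
		panic(err)
	}
	fmt.Print(out)
}
```

Here the field `Rock, Flying` is emitted in double quotes, so its inner comma is not mistaken for a column separator.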
/ Protobuf: (Protocol Buffers): a binary format developed by Google for efficient serialization of structured data. @protobuf-website
#figure(
caption: [A protobuf example that describes a phone book (`example.proto`)],
sourcefile(
file: "example.proto",
read("example.proto"),
),
)
Compiling `.proto` files into `.go` requires `protoc`, the compiler for `.proto` files. To that end, install it on your system as a package by running the command from @install-protoc-gen-go. To generate the code, run the compiler as shown in @compile-protoc-gen-go.
#figure(
caption: [Command installing the `protoc-gen-go` plugin],
sourcecode()[```bash
go install google.golang.org/protobuf/cmd/protoc-gen-go@latest
```]
) <install-protoc-gen-go>
#figure(
caption: [Command running the `protoc` utility to generate `go` files from `.proto` files],
sourcecode()[```bash
protoc --go_out=. example.proto
```]
) <compile-protoc-gen-go>
#figure(
caption: [Result of compiling @install-protoc-gen-go using the command from @compile-protoc-gen-go],
sourcefile(
file: "example.pb.go",
read("example.pb.go"),
),
) <compiled-protoc-go>
#figure(
caption: [Installing packages from the network for the program to work correctly],
sourcecode()[```bash
go mod tidy
```]
) <go-mod-tidy>
Now, through the interface generated in @compiled-protoc-go, I can use these structures in my own program. The required packages also need to be installed, see @go-mod-tidy.
#figure(
caption: [Example of using the `protobuf` interface],
sourcecode()[```go
    p := &pb.Person{
Id: 1234,
Name: "<NAME>",
Email: "<EMAIL>",
Phones: []*pb.Person_PhoneNumber{
{Number: "555-4321", Type: pb.PhoneType_PHONE_TYPE_HOME},
},
}
```]
)
`protobuf` is a convenient and efficient format for working with data; moreover, there are many implementations for different languages, both compiled and interpreted.
/ TOML: (Tom's Obvious, Minimal Language): a configuration-file format distinguished by its simple syntax and human friendliness.
#figure(
caption: [Part of the standard project configuration file of the pip package manager @pip-pyproject-toml, which uses the `toml` format to configure the project's dependencies.],
sourcefile(
file: "pyproject.toml",
read("pyproject.toml"),
),
)
/ BSON: (Binary JSON): a binary data-serialization format used in MongoDB @bson-in-mongodb, designed for fast processing and traversal of data compared to JSON.
#figure(
table(
columns: (auto, auto, auto),
table.header([], [JSON], [BSON]),
[Encoding], [Files are written in text format.], [Files are written in binary format.],
[Speed], [Fast to read, but slower to create.], [Slow to read, but faster to create and scan.],
[Memory], [Smaller in size], [Larger in size],
[Encoding and decoding], [Can be sent over a network, e.g. over HTTP, without encoding or decoding.], [Files are encoded before storage and decoded before display.],
[Parsing], [A human-readable format that needs no conversion.], [Must be converted, since the files are binary and not human-readable.],
[Data types], [Has a fixed set of data types: string, boolean, number, array, object, and null.], [Offers additional data types, such as bindata for binary data and decimal128 for numeric data. @bson-spec],
[Usage], [Used for sending data over a network], [Used in databases],
),
caption: [Differences between BSON and JSON]
)
/ MessagePack: a binary data-serialization format that is more efficient in size and speed than JSON.
Помимо широко известных форматов, существуют менее популярные, но все же полезные альтернативы для хранения и передачи данных:
/ Avro: формат сериализации данных от Apache, используемый в проектах Hadoop. Он поддерживает эволюцию схем данных и эффективное сжатие.
/ CBOR: (Concise Binary Object Representation): бинарный формат, предназначенный для компактного представления данных.
/ Thrift: формат и фреймворк от Apache для разработки и сериализации межплатформенных сервисов.
/ UBJSON: (Universal Binary JSON): формат, направленный на предоставление бинарной версии JSON.
/ FlatBuffers: формат сериализации данных, разработанный Google, ориентированный на высокую производительность и минимальные затраты на десериализацию.
/ Ion: формат данных, разработанный Amazon, который сочетает в себе характеристики JSON и BSON с дополнительными возможностями, такими как поддержка схем и аннотаций.
/ Smile: бинарный формат, основанный на JSON, предназначенный для компактного и быстрого хранения и передачи данных.
Каждый из этих форматов имеет свои уникальные особенности и области применения, что позволяет выбрать наиболее подходящий вариант в зависимости от конкретных требований проекта и условий использования.
= Заключение
В данной работе мы рассмотрели два ключевых формата для хранения и передачи данных — XML и JSON. Оба формата играют важную роль в современной веб-разработке и обмене данными между различными системами.
*XML* (eXtensible Markup Language) обладает высокой гибкостью и позволяет создавать сложные иерархические структуры данных. Он широко используется в веб-службах (SOAP), конфигурационных файлах и электронных публикациях. Преимущества XML включают стандартизацию и поддержку схем валидации (XSD), однако его недостатки заключаются в объемности и сложности обработки данных.
*JSON* (JavaScript Object Notation), с другой стороны, является более легким и компактным форматом, который быстро обрабатывается и легко интегрируется в веб-приложения. JSON широко используется в веб-API (RESTful сервисы), конфигурационных файлах и NoSQL базах данных. Преимущества JSON включают его простоту и совместимость с JavaScript, но он имеет ограниченную поддержку схем и не поддерживает включение комментариев.
Мы также рассмотрели альтернативы XML и JSON, такие как YAML, CSV, Protobuf, TOML и BSON. Каждая из этих альтернатив имеет свои уникальные особенности и области применения, что позволяет выбирать наиболее подходящий формат в зависимости от конкретных требований проекта.
Таким образом, выбор между XML и JSON, а также их альтернативами, зависит от конкретных условий и требований проекта. XML подходит для задач, требующих сложных структур данных и строгой валидации, тогда как JSON предпочтителен для легких и быстрых операций обмена данными в веб-приложениях. Альтернативные форматы могут быть полезны в специфических сценариях, где важны компактность, производительность или удобство для человека.
Изучение и понимание различных форматов данных позволяет разработчикам и инженерам выбирать оптимальные решения для своих проектов, обеспечивая эффективное хранение и передачу данных в современных информационных системах.
Полный код проекта Вы можете найти в репозитории https://github.com/yan-aint-nickname/uni в директории `practice-6-sem`
#bibliography("bibliography.yaml")
|
https://github.com/csimide/SEU-Typst-Template | https://raw.githubusercontent.com/csimide/SEU-Typst-Template/master/seu-thesis/utils/packages.typ | typst | MIT License | // 统一在此文件导入包,方便后续更新
#import "@preview/a2c-nums:0.0.1": int-to-cn-num
#import "@preview/cuti:0.2.1": show-cn-fakebold, fakebold
#import "@preview/i-figured:0.2.4" |
https://github.com/donabe8898/typst-slide | https://raw.githubusercontent.com/donabe8898/typst-slide/main/opc/並行prog/02/syakyo01.typ | typst | MIT License | #show link: set text(blue)
#set text(font: "Noto Sans CJK JP",size:13pt)
#show heading: set text(font: "Noto Sans CJK JP")
#show raw: set text(font: "0xProto Nerd Font")
#show raw.where(block: true): block.with(
fill: luma(245),
inset: 10pt,
radius: 10pt
)
#align(center)[
```go
package main

import (
	"fmt"
	"time"
)

func main() {
	ch1 := make(chan int, 5) // バッファサイズ5のチャネル1を作成
	ch2 := make(chan int, 5) // バッファサイズ5のチャネル2を作成

	fmt.Printf("Start!\n") // スタート

	// 2つのゴルーチン起動
	go r("A", ch1)
	go r("B", ch2)

	// 受信チャネル
	for ch := range ch1 {
		fmt.Println("A channel", ch)
	}
	for ch := range ch2 {
		fmt.Println("B channel", ch)
	}

	fmt.Printf("Finish!\n")
}

// 1秒ごとに経過を表示しながら num+1 回処理する
func process(num int, str string) {
	for i := 0; i <= num; i++ {
		time.Sleep(1 * time.Second)
		fmt.Println(i, str)
	}
}

func r(str string, ch chan int) {
	process(2, str)
	ch <- 1
	fmt.Println(str, "process 1 done")
	process(1, str)
	ch <- 2
	fmt.Println(str, "process 2 done")
	process(2, str)
	ch <- 3
	fmt.Println(str, "process 3 done")
	close(ch)
}
```
]
|
https://github.com/yhtq/Notes | https://raw.githubusercontent.com/yhtq/Notes/main/数学模型/作业/hw7.typ | typst | #import "../../template.typ": proof, note, corollary, lemma, theorem, definition, example, remark, proposition,der, partialDer, Spec
#import "../../template.typ": *
// Take a look at the file `template.typ` in the file panel
// to customize this template and discover how it works.
#show: note.with(
title: "作业7",
author: "YHTQ",
date: none,
logo: none,
withOutlined : false,
withTitle :false,
withHeadingNumbering: false
)
= 1
#let ek(x) = $e^(- (#x)/2)$
不难验证 $n$ 次正交多项式构成线性空间。注意到任取 $p_(n-1), p_(n-2)$ 是 $n-1, n-2$ 次正交多项式,$i = 1, 2, ..., d$ ,下面的递推式都可产生 $n$ 次正交多项式 $p_n$:
$
p_n = (x_i + a)p_(n-1) + b p_(n-2) \
where inner(x_i p_(n-1), p_(n-1)) + a inner(p_(n-1), p_(n-1)) = 0\
inner(x_i p_(n-1), p_(n-2)) + b inner(p_(n-2), p_(n-2)) = 0\
$
先考虑小一点的 $n$
- $n = 1$ 时,多项式 $sum a_i x_i + c$ 是正交多项式就是说:
$
integral_(RR^d)^() (sum a_i x_i + c) ek(norm(x)^2) dif x = 0
$
由对称性,当且仅当 $c = 0$,因此 $a_i$ 可以任取,共同零点当然只有 $0$
- $n = 2$ 时,多项式 $sum a_(i j) x_i x_j + sum_i b_i x_i + c$ 是正交多项式意味着:
$
integral_(RR^d)^() (sum a_(i j) x_i x_j + sum_i b_i x_i + c) ek(norm(x)^2) dif x = 0\
integral_(RR^d)^() x_k (sum a_(i j) x_i x_j + sum_i b_i x_i + c) ek(norm(x)^2) dif x = 0, forall k\
$
仍然利用对称性化简得:
$
integral_(RR^d)^() (sum a_(i i) x_i^2 + c) ek(norm(x)^2) dif x = 0\
integral_(RR^d)^() b_k x_k^2 ek(norm(x)^2) dif x = 0, forall k\
$
当且仅当 $b_k = 0$ 且:
$
sum 2 pi a_(i i) + 2 pi c = 0\
sum a_(i i) + c = 0
$
其余 $a_(i j)$ 自由调整,无共同零点。
我们断言 $n$ 为偶数时无共同零点,为奇数时只有 $0$ 作为共同零点,为此:
#lemmaLinear[][
    设 $f(x)$ 是 $RR$ 上一维 $n$ 次正交多项式(权函数为 $ek(x^2)$),则 $forall i, f(x_i)$ 都是 $RR^d$ 上的 $n$ 次正交多项式
]<lemma1>
#proof[
任取小于 $n$ 次多项式 $p(x_1, x_2, ..., x_n) = sum_i p_i (x_2, ..., x_n) x_1^i$,将有:
$
integral_(RR^d)^() sum_i p_i (x_2, ..., x_n) x_1^i f(x_1) ek(norm(x)^2) dif X\
= sum_i integral_(RR^d)^() p_i (x_2, ..., x_n) x_1^i f(x_1)ek(norm(x)^2) dif X\
= sum_i integral_(RR^d)^() p_i (x_2, ..., x_n) x_1^i f(x_1)ek(x_1^2) ek((x_2^2 + ... + x_n^2)) dif X\
= sum_i integral_(RR^(d-1))^() p_i (x_2, ..., x_n) ek((x_2^2 + ... + x_n^2)) dif x_2 dif x_3 ... dif x_n integral_RR x_1^i f(x_1)ek(x_1^2) dif x_1\
$
当然有 $i < n$,因此上式为零
]
#lemmaLinear[][
设 $f(x)$ 是 $RR$ 上一维 $n >= 2$ 次正交多项式(权函数为 $ek(x^2)$),则:
- $n$ 为奇数时,$f$ 形如 $x g(x^2)$; $n$ 为偶数时,$f$ 形如 $g(x^2)$
- $f$ 以 $0$ 为零点当且仅当 $n$ 为奇数
]<lemmaNonZero>
#proof[
$n = 1, 2$ 时前面已经给出了结论,对于更大的 $n$,注意到有递推式:
$
p_n = (x + a)p_(n-1) + b p_(n-2) \
where inner(x p_(n-1), p_(n-1)) + a inner(p_(n-1), p_(n-1)) = 0\
inner(x p_(n-1), p_(n-2)) + b inner(p_(n-2), p_(n-2)) = 0\
$
递归证明结论:
- 若 $n$ 是偶数,往证 $a = 0$,这是因为:
$
inner(x p_(n-1), p_(n-1)) = inner(x^2 g(x^2), x g(x^2)) = integral_(RR)^() x^3 g(x^2) ek(x^2) dif x = 0
$
(对称性给出)\
继而:
$
p_n = x p_(n-1) + b p_(n-2)
$
容易归纳证明上面结论都成立。
- 若 $n$ 是奇数,同样往证 $a = 0$,这是因为:
$
inner(x p_(n-1), p_(n-1)) = inner(x g(x^2), g(x^2)) = integral_(RR)^() x g(x^2) ek(x^2) dif x = 0
$
(对称性给出)\
继而类似可以论证
]
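#remark[
  作为具体例子:权函数 $ek(x^2)$ 对应的一维正交多项式就是(概率学家的)Hermite 多项式,由上述递推式可算出前几项:
  $
  p_1 = x, quad p_2 = x^2 - 1, quad p_3 = x^3 - 3 x, quad p_4 = x^4 - 6 x^2 + 3
  $
  与引理相符:偶数次的形如 $g(x^2)$ 且 $p_n (0) != 0$;奇数次的形如 $x g(x^2)$,以 $0$ 为零点。
]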
#lemmaLinear[][
设 $U$ 是任意正交变换,$f(X)$ 是 $n$ 次正交多项式,则 $f(U X)$ 也是 $n$ 次正交多项式
]<lemma2>
#proof[
注意到正交变换不改变多项式的次数,做简单的积分换元即可。
]
由一元正交多项式的结论,设 $f_n$ 是 $RR$ 上 $n$ 次正交多项式,有 $n$ 个不同的根 $x_1, x_2, ..., x_n$,令 $Z_1 = {x_1, ..., x_n}$\
假设 $RR^d$ 上 $n$ 次正交多项式的共同零点集为 $S$,断言:
- $S subset Z_1^d$,既然由 @lemma1,$f_n (x_1), f_n (x_2), ..., f_n (x_d)$ 都是 $n$ 次正交多项式,结论是显然的
- 设 $G$ 是 $RR^d$ 上所有正交变换构成的群,则 $G S = S$,这是 @lemma2 的直接推论
然而 $Z_1^d$ 是离散点集,$G$ 在每个非零点的轨道都是一个圆,显然以上两者同时成立蕴含 $S = {0} orC emptyset$\
@lemmaNonZero 表明 $n$ 为偶数时,$f_n (x_i)$ 不以零为零点,当然就有 $S = emptyset$\
而当 $n$ 为奇数时,设 $f(x)$ 是 $n$ 次 $d$ 维正交多项式。对 $n, d$ 作归纳,$n = 1 orC d = 1$ 时已经证明,注意到若设:
$
f(x) = sum_(i= 0)^n p_i (x_2, ..., x_n) x_1^i
$
则任取小于 $n$ 次的多项式 $q(x_2, ..., x_n)$ 将有:
$
0 = inner(q(x_2, ..., x_n), sum_(i= 0)^n p_i (x_2, ..., x_n) x_1^i) \
= sum_(i= 0)^n inner(q(x_2, ..., x_n), p_i (x_2, ..., x_n)) c_i \
= inner(q(x_2, ..., x_n), sum_(i= 0)^n c_i p_i (x_2, ..., x_n)) \
where c_i = integral_(RR)^() x_1^i ek(x_1^2) dif x_1
$
表明 $sum_(i= 0)^n c_i p_i (x_2, ..., x_n)$ 要么恰为 $0$ ,要么是 $d-1$ 维的 $n$ 次正交多项式,无论何者都可得到:
$
sum_(i= 0)^n c_i p_i (0) = 0
$
类似的,任取小于 $n-2$ 次的多项式 $q$,将有:
$
0 = inner(q(x_2, ..., x_n) x_1^2, sum_(i= 0)^n p_i (x_2, ..., x_n) x_1^i)
$
(事实上 $i$ 为奇数时 $c_i = 0$)
类似可以证明:
$
sum_(i= 0)^n c_(i+2) p_i (0) = 0\
sum_(i= 0)^n c_(i+4) p_i (0) = 0\
...
$
不算入 $i$ 是奇数的情形,共计 $(d + 1)/2$ 个变量和方程,并且可以验证系数矩阵满秩,因此 $p_i (0) = 0$,这就证明了 $S = {0}$
= 2
#let sumn0(i) = $sum_(#i = 0)^(n-1)$
由定义:
$
D_n^* (P) = 1/n sumn0(i) 1_(y > x_i) - integral_(0)^(1) 1_(y > x) dif x
$
我们希望上式取最大/最小值。不难发现验证若增大 $k$ 分量 $y_k$,只要前项不发生变化后项便不断减小,进而最大/最小值点处一定有:
$
exists i, y_k = x_(i k)
$
先证明一维情形下,假设 $b^r <= n < b^(r+1)$,上述定义就是:
$
1/n sumn0(i) 1_(x_k > x_i) - x_k\
$
设:
$
k = sum_(s = 0)^infinity a_s (k) b^s\
i = sum_(s = 0)^infinity a_s (i) b^s\
$
则:
$
x_k > x_i <=> sum_(s = 0)^infinity a_s (k) b^(-s -1) > sum_(s = 0)^infinity a_s (i) b^(-s -1)
$
统计使得 $x_k > x_i$ 的所有情况
- $a_0 (k) > a_0 (i)$,也即 $i = 0, 1, ..., a_0 (k) - 1$,这些数的数量在以下两者之间:
$
a_0 (k) [n/b], a_0 (k) ([n/b] + 1)
$
- 否则,$a_0 (k) = a_0 (i), a_1 (k) > a_1 (i)$,则使得上式成立的 $i$ 的个数在以下两者之间:
$
a_1(k) [n/b^2], a_1(k) ([n/b^2] + 1)
$
- 以此类推,最终得到:
$
1/n sumn0(i) 1_(x_k > x_i) - x_k >= 1/n sum_(s = 0)^r a_s (k) ([n/b^(s+1)] - n/b^(s+1)) >= -1/n sum_(s = 0)^r a_s (k) >= -1/n r (b-1)\
1/n sumn0(i) 1_(x_k > x_i) - x_k <= 1/n sum_(s = 0)^r a_s (k) ([n/b^(s+1)] - n/b^(s+1) + 1) <= 1/n sum_(s = 0)^r a_s (k) <= 1/n r (b-1)
$
从而 $abs(D_n^*(P)) <= 1/n r (b-1) <= (b - 1)/n (log n)/(log b) $
对于高维情形,由于 $b_i$ 都互素,因此由中国剩余定理各个维度上是独立的,类似即可证明:
$
abs(D_n^*(P)) <= 1/n product_(i) (b_i - 1) (log n)/(log b_i)
$
|
|
https://github.com/dainbow/MatGos | https://raw.githubusercontent.com/dainbow/MatGos/master/themes/7.typ | typst | #import "../conf.typ": *
= Теорема о равномерной непрерывности функции, непрерывной на компакте
#definition[
*Метрическим пространством* $(X, rho)$ называется множество $X$ такое, что на $X times X$ определена
числовая функция $rho: X times X -> RR$ такая что:
+ $rho(x, y) >= 0$, причём $rho(x, y) = 0 <=> x = y$
+ $rho(x, y) = rho(y, x)$
+ $forall x, y, z in X : space rho(x, z) <= rho(x, y) + rho(y, z)$
]
#definition[
*Компактным* множеством в метрическом пространстве $X$ называется такое
множество $K$, что из любого его открытого покрытия можно выделить конечное
подпокрытие.
]
#definition[
Множество $E subset RR^n$ называется *ограниченным*, если:
#eq[
$exists r >= 0 : forall M in E : |O M| <= r$,
]
где $O = (0, 0, dots, 0)$.
]
#definition[
Точка $x_0 in E$ называется *изолированной*, если:
#eq[
$exists delta > 0 : U_delta (x_0) sect E = {x_0}$.
]
]
#definition[
Точка $x_0 in E$ называется *внутренней*, если:
#eq[
$exists delta > 0 : U_delta (x_0) subset.eq E$.
]
]
#definition[
Точка $x_0 in E$ называется точкой *прикосновения*, если:
#eq[
$forall delta > 0 : U_delta (x_0) sect E != emptyset$.
]
]
#definition[
Множество всех точек прикосновения $E$ называется *замыканием* этого множества $overline(E)$.
]
#definition[
Множество $E$ называется *замкнутым*, если $E = overline(E)$.
]
#definition[
Множество всех внутренних точек $E$ называется *внутренностью* этого множества $"int" E$.
]
#definition[
Множество $E$ называется *открытым*, если $E = "int" E$.
]
#theorem[
Дополнение замкнутого множества -- открытое множество, и наоборот.
]
#theorem[
В $RR^n$ верно следующее утверждение:
#eq[
$\ E - "компактное" <=> cases(E "ограниченное", E "замкнутое")$
]
]
#theorem[
Множество $E$ является компактным в $RR^n <=> forall seq(x) subset E : exists seq(idx: k, x_n) :
exists lim_(k -> oo) x_n_k = x in E$
]
#definition[
Функция $f : X -> RR$, где $X$ -- метрическое пространство, называется
равномерно непрерывной на множестве $X' subset X$, если
#eq[
$forall epsilon > 0 : exists delta > 0 : forall x_1, x_2 in X': rho(x_1, x_2) < delta : space abs(f(x_1) - f(x_2)) < epsilon $
]
]
#theorem(
"Кантора о равномерной непрерывности",
)[
Если $f : K -> RR$ непрерывна на компактном множестве $K subset RR^n$, то она
равномерно непрерывна на $K$.
]
#proof[
От противного, выпишем отрицание равномерной непрерывности:
#eq[
$exists epsilon > 0 : forall delta > 0 : exists x_1, x_2 in K : norm(x_1 - x_2) < delta : space abs(f(x_1) - f(x_2)) >= epsilon$
]
Выбирая $delta := 1, 1 / 2, 1 / 3, ..., 1 / m, ...$ построим последовательность
пар из отрицания непрерывности: ${(x_(1, m), x_(2, m))}_(m = 1)^oo subset K^2$.
Причём
#eq[
$forall m in NN : norm(x_(1, m) - x_(2, m)) < 1 / m : abs(f(x_(1, m)) -f(x_(2, m))) >= epsilon$
]
По одному из определений компактности выделим из последовательности пар
подпоследовательность, у которой сходятся первые координаты:
#eq[
$exists {(x_(1, m_k), x_(2, m_k))}_(k = 1)^oo : lim_(k -> oo) x_(1, m_k) = x_0 in K$
]
Причём заметим, что (комбинируем то, как мы строили последовательность пар и
сходимости первых координат подпоследовательности):
#eq[
    $forall epsilon > 0 : exists K in NN : forall k > K : norm(x_(2, m_k) - x_0) &<=\ norm(x_(1, m_k) - x_0) + norm(x_(1, m_k) - x_(2, m_k)) &< 2epsilon$
]
То есть
#eq[
$lim_(k -> oo) x_(1, m_k) = lim_(k -> oo) x_(2, m_k) = x_0 attach(=>, t: "непрерывность" f)\ lim_(k-> oo) (f(x_(1, m_k)) - f(x_(2, m_k))) = 0$
]
Противоречие!
]
|
|
https://github.com/TomVer99/Typst-checklist-template | https://raw.githubusercontent.com/TomVer99/Typst-checklist-template/main/template/template.typ | typst | MIT License | #let small-text-size = 8pt
#let large-text-size = 14pt
#let abnormals = rgb("FF9A00")
#let style-state = state("style", 0)
#let emer-page(black-and-white: false, body) = {
let size = 0.33in
let color = if black-and-white {
rgb("FFFFFF")
} else {
abnormals
}
let padding = if black-and-white {
1pt
} else {
0pt
}
set page(
"a4",
margin: 0.4in,
background: [
#align(left + top)[
#block(width: size, height: 100%, fill: black)[
#for i in range(18) {
polygon(
(0pt, 0pt),
(size + padding, -(size + padding)),
(size + padding, 0pt),
(0pt, size + padding),
fill: color,
)
v(0.1in)
}]
]
#align(right + bottom)[
#block(width: size, height: 100%, fill: black)[
#for i in range(18) {
polygon(
(0pt, -(size + padding)),
(size + padding, 0pt),
(size + padding, size + padding),
(0pt, 0pt),
fill: color,
)
v(0.1in)
}]
]
],
)
body
set page("a4", margin: 0.4in, background: none)
}
#let topic(
title,
fill-clr: rgb("FFFFFF"),
body,
) = {
box(width: 100%, stroke: 1pt, outset: 4pt, fill: fill-clr)[
#v(5pt)
#align(center)[#text(size: 20pt, upper(strong(title)))]
#v(5pt)
]
columns(2)[
#body
]
}
#let section(
title,
fill-clr: rgb("FFFFFF"),
body,
) = (
context {
if (title != none and title != "") {
box(width: 100%, stroke: 1pt, outset: 4pt, fill: fill-clr)[
#v(5pt)
#align(center)[#upper(strong(title))]
#v(5pt)
]
linebreak()
v(-(small-text-size))
if (style-state.get() == 1) {
box(width: 1fr, stroke: 1pt, outset: 4pt)[
#v(5pt)
#body
#v(5pt)
]
} else {
body
v(-10pt)
ellipse(width: 30pt, height: 20pt, fill: rgb("000000"))[
#align(center + horizon)[#text(size: 8pt, fill: rgb("FFFFFF"), upper(strong("END")))]
]
}
}
}
)
#let step(a, b) = {
if ((a != none and a != "") or (b != none and b != "")) {
a
" "
box(width: 1fr, repeat[.])
" "
b
linebreak()
}
}
#let small-caution(
caution,
) = {
text(size: small-text-size)[
#text("Caution: ", fill: rgb("FF0000"))
]
v(-1.6em)
align(top + right)[#box(width: 85%)[#align(left)[
#text(size: small-text-size)[#caution]
]]]
}
#let note(note) = {
text(size: small-text-size)[Note:]
v(-1.6em)
align(top + right)[#box(width: 90%)[#align(left)[
#text(size: small-text-size)[#note]
]]]
}
#let page-number() = (
context {
counter(page).display()
}
)
#let checklist(
title: none,
disclaimer: none,
style: 0,
body,
) = {
set page(
"a4",
margin: 0.4in,
footer: [
#line(start: (0pt, -15pt), length: 100%)
#place(
left,
dy: -10pt,
text(size: 8pt, fill: rgb("000000"))[
#datetime.today().display()
],
)
#place(
center,
dy: -10pt,
text(size: 8pt, fill: rgb("000000"))[
#page-number()
],
)
#place(
right,
dy: -10pt,
text(size: 8pt, fill: rgb("000000"))[
#title
],
)
],
)
set text(size: large-text-size, font: "Open Sans")
style-state.update(style)
if title != none {
box(width: 100%, stroke: 1pt, outset: 4pt, fill: rgb("FFFFFF"))[
#v(5pt)
#align(center)[#text(size: 20pt, upper(strong(title)))]
#v(5pt)
]
}
if disclaimer != none {
box(width: 100%, stroke: 1pt, outset: 4pt, fill: rgb("FF1100").lighten(40%))[
#v(5pt)
#align(center)[#text(size: 20pt, upper(strong(disclaimer)))]
#v(5pt)
]
}
body
}
|
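
// Hypothetical usage sketch (not part of the template): it assumes this file
// is saved as `template.typ` next to the document, and every checklist item
// below is an invented placeholder, not a real procedure.
//
// #import "template.typ": *
//
// #show: checklist.with(title: "NORMAL PROCEDURES", style: 1)
//
// #topic("Before Start")[
//   #section("Cockpit", fill-clr: luma(230))[
//     #step("Parking brake", "SET")
//     #step("Avionics master", "OFF")
//     #note[All items above are placeholders.]
//   ]
// ]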
https://github.com/pklaschka/typst-hidden-bib | https://raw.githubusercontent.com/pklaschka/typst-hidden-bib/main/lib.typ | typst | MIT License | #let truly-hidden = (body) => hide( // make sure it's invisible
place(
center, float: false, // Stop adding lines and stuff
body
)
)
#let hidden-bibliography = (b) => {
set bibliography(style: "chicago-notes")
set heading(outlined: false)
truly-hidden(b)
}
#let hidden-cite = (..args) => truly-hidden(
cite(..args) // "Print" the hidden citation
)
#let hidden-citations(body) = {
// Within this block, all citations should be hidden
show cite: it => truly-hidden(it)
body
} |
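
// Hypothetical usage sketch (not part of the library): it assumes this file is
// imported as `lib.typ`, and that `refs.bib` and the `netwok2020` entry exist —
// both names are invented for illustration.
//
// #import "lib.typ": hidden-bibliography, hidden-cite, hidden-citations
//
// #hidden-cite(<netwok2020>)        // cite a work without a visible marker
//
// #hidden-citations[
//   Every citation in this block, like @netwok2020, is hidden.
// ]
//
// #hidden-bibliography(bibliography("refs.bib"))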
https://github.com/ticks-tan/wtu-typst | https://raw.githubusercontent.com/ticks-tan/wtu-typst/main/demo.typ | typst | #import "template/wtu-essays.typ": *
// 使用模板
#show: doc => conf(
// 中文论文标题
zhTitle: "关于XXX的研究",
colleges: "化学与化工学院",
major: "应用化学",
class: "应化xxx班",
studentId: "2000000000",
zhAuthor: "杰克",
enAuthor: "Jack",
teacher: "张三",
date: "二零二三年十二月七日",
zhAbstract: [
本文简单介绍了Typst语法,帮助人们快速入门。
],
zhKeywords: ("Typst", "Wtu Essays"),
enAbstract: [
This article provides a brief introduction to the Typst syntax to help people get started quickly.
],
enKeywords: ("Typst", "Wtu Essays"),
doc
)
/* ---------------- 正文 ----------------------*/
= Typst简单语法
== 标题
#CodeBlock[```typst
一级标题:
= XXX
二级标题
== XXX
三级标题
=== XXX
```]
== 文字效果
#CodeBlock[```typst
加粗:
*中间的是加粗文字*
斜体:
_中间的是斜体_
下划线:
#underline[下划线内容]
```]
*加粗*文字使用:`*XXX*`
_斜体_文字使用:`_XXX_`
#underline[下划线]
== 图片
#CodeBlock[```typst
#image(path, width?)
path: 图片路径,支持相对路径比如:./xxx 表示当前目录下文件
width: 可选,指定图片宽度,默认为100%
如果需要本主题的标号,需要使用 figure,比如:
#figure(
image("./template/wtu_txt.png", width: 60%),
caption: "图片标题",
)
```]
#image("./template/wtu_txt.png")
#figure(
image("./template/wtu_txt.png", width: 60%),
caption: "图片标题",
)
== 表格
#CodeBlock[```typst
自带表格用到不多,格式也很复杂,该主题为了方便使用,
使用了网上的包 tablem ,支持 Markdown 格式的表格,比如带标题版本:
#Table(caption: "班级姓名表")[
| Name | Age | Score |
| --- | --- | --- |
| Jack | 18 | 90 |
| Rose | 21 | 88 |
]
你也可以使用不带标题的版本:
#Table[
| Name | Age | Score |
| --- | --- | --- |
| Jack | 18 | 90 |
| Rose | 21 | 88 |
]
或者
#tablem[
| Name | Age | Score |
| --- | --- | --- |
| Jack | 18 | 90 |
| Rose | 21 | 88 |
]
```]
#Table(caption: "班级姓名表")[
| Name | Age | Score |
| --- | --- | --- |
| Jack | 18 | 90 |
| Rose | 21 | 88 |
]
#Table[
| Name | Age | Score |
| --- | --- | --- |
| Jack | 18 | 90 |
| Rose | 21 | 88 |
]
#tablem[
| Name | Age | Score |
| --- | --- | --- |
| Jack | 18 | 90 |
| Rose | 21 | 88 |
]
== 代码块
#CodeBlock[```typst
代码块为了美观也使用了第三方包 sourcecode ,支持Markdown 格式代码,
使用示例由于无法正常展示,请参阅 Markdown 格式代码和Typst官方
代码标题和序号版本的:
#CodeBlock(caption: "标题")[<Markdown格式代码>]
#CodeBlock[<Markdown格式代码语法>]
不带版本:
#sourcecode[<Markdown格式代码>]
```]
#CodeBlock[```cpp
import std.print;
int main() {
return 0;
}
```]
== 公式
#CodeBlock[```typst
行内公式:$x + y^2 = a / c$ 。
整行公式(自带标号):
$ x^2 = a * sum_(alpha=0)^(100)(sqrt(2alpha)/(a*b)) $
```]
行内公式:$x + y^2 = a / c$ 。
整行公式:
$ x^2 = a * sum_(alpha=0)^(100)(sqrt(2alpha)/(a*b)) $
#bibliography("./demo.bib", ..GBTBib)
= Typst高级功能
/*----------------- 附录 ----------------------*/
#StartAppendix()
= 关于本模板
- 附录必须手动使用 `#StartAppendix()` 。
- 参考文献使用:`#bibliography(文件路径, ..GBTBib)` 以使用 GB/T国标 |
|
https://github.com/gomazarashi/typst_showybox | https://raw.githubusercontent.com/gomazarashi/typst_showybox/main/Typstで式やテキストを装飾する【Showyboxパッケージ】.md | markdown | # はじめに
今回は、Typstで式やテキストを装飾する**showybox**パッケージを紹介します。[Typst Universe](https://typst.app/universe)上のページは[こちら](https://typst.app/universe/package/showybox/)、GitHub上のリポジトリは[こちら](https://github.com/Pablo-Gonzalez-Calderon/showybox-package)で、2024年8月17日現在、最新バージョンとして**2.0.1**がリリースされています。
この記事では基本的に公式リポジトリの[Showybox's Manual.pdf](https://github.com/Pablo-Gonzalez-Calderon/showybox-package/blob/main/Showybox's%20Manual.pdf)を参考にしながら、showyboxパッケージの使い方を解説します。誤訳などが含まれている場合はコメント等でご指摘頂けると非常にありがたいです。
# 環境
2024年8月22日の時点での最新バージョンを利用しています。
- Typst: 0.11.1
- showybox: 2.0.1
# 導入
パッケージの導入は、Typst Universeのページに存在する他のパッケージと同様に行えます。
```typ
#import "@preview/showybox:2.0.1": showybox
```
# 基本的な記述
showyboxパッケージを利用する際は、基本的に次のように記述します。
```typ
#showybox()[これがshowyboxパッケージの\ 基本的な記述です。]
```
このように記述すると、次のように表示されます。見やすさのために改行を行っています。

記法としては、`#showybox()[]`で`showybox`関数を呼び出し、丸括弧`()`に各種パラメーター、角括弧`[]`に装飾したいテキスト等を記述する、という形になります。
デフォルトでは上記のような、
- No title (タイトルなし)
- No shadow (影なし)
- Not breakable (ページおよびコンテナの終端での改行なし)
- Black borders (黒い枠線)
- White background (白い背景)
- 5pt of border radius (角丸の半径5pt)
- 1pt of border thickness (枠線の太さ1pt)
というパラメーターが設定されています。
# パラメーター
バージョン`2.0.1`では、次のパラメーターが利用可能です。
- title
- footer
- frame
- title-style
- body-style
- footer-style
- sep
- shadow
- width
- align
- breakable
- spacing
- above
- below
## title(タイトル)
`title`パラメーターを利用することで、showyboxのタイトルを設定することができます。
```typ
#showybox(title:"これはタイトルです")[これは本文です]
```

タイトルに設定できるのは[str(文字列)](https://typst.app/docs/reference/foundations/str/)と[content(コンテンツ)](https://typst.app/docs/reference/foundations/content/)です。何も設定しなかった場合(デフォルト)は`""`(空文字列)が設定されます。
## footer(フッター)
`footer`パラメーターを利用することで、showyboxのフッターを設定することができます。
```typ
#showybox(title:"これはタイトルです",footer:"これはフッターです")[これは本文です]
```

こちらもタイトルと同様、設定できるのは`str`と`content`です。何も設定しなかった場合(デフォルト)は`""`(空文字列)が設定されます。
## frame
このパラメーターには、showyboxのフレームの外観を指定するためのすべてのプロパティが含まれています。その内容を以下に示します。
- title-color
- body-color
- footer-color
- border-color
- radius
- thickness
- dash
- inset
- title-inset
- body-inset
- footer-inset
### title-color, body-color, footer-color, border-color(背景色)
それぞれの名称の通り、タイトルの背景、本文の背景、フッターの背景、枠線の色を指定します。指定できる色の種類や指定方法は[color](https://typst.app/docs/reference/visualize/color/)を参照してください。
一例は以下の通りです。
```typ
#showybox(title: "Green's Theorem", frame: (
border-color: olive,
title-color: olive.lighten(10%),
body-color: olive.lighten(95%),
footer-color: olive.lighten(80%),
), footer: "証明は省略する。")[
閉曲線$C$で囲まれた領域$D$において、$C^1$級関数$P(x,y)$と$Q(x,y)$に対して、以下が成り立つ。
$ integral.cont_C (P dif x + Q dif y ) = integral.double_D ((diff Q)/(diff x)-(diff P)/(diff y)) dif x dif y $
]
```

それぞれのプロパティは独立して指定できますが、ここでは見た目の統一感を出すためにあえて`olive`を基調としています。それに対して`lighten`メソッドを利用することで、それぞれの色を明るくしています。このような色の指定方法も、先程紹介した[color](https://typst.app/docs/reference/visualize/color/)のページが参考になると思います。
### radius(角丸)
`radius`プロパティは、角丸の半径を指定します。デフォルトは`5pt`です。
```typ
#showybox(title: "これはタイトルです", frame: (radius: 10pt), footer: "これはフッターです")[これは本文です]
```

値は`relative-length`や`dictionary`の形式で指定できます。
### thickness(枠線の太さ)
`thickness`プロパティは、枠線の太さを指定します。デフォルトは`1pt`です。
```typ
#showybox(title: "これはタイトルです", frame: (thickness: 2pt), footer: "これはフッターです")[これは本文です]
```

こちらも`relative-length`や`dictionary`の形式で指定できます。`dictionary`の場合は、`top`, `bottom`, `left`, `right`, `x`, `y`, `rest`のプロパティを指定できます。
ちなみに`box`内を分割している線(`separator`)の太さは、`thickness`プロパティではなく後述する`sep`プロパティで指定します。
### dash(破線)
`dash`プロパティは、枠線の破線の形状を`str`で指定します。デフォルトの値は `"solid"`(実線)です。
```typ
#showybox(title: "これはタイトルです", frame: (dash: "dashed"), footer: "これはフッターです")[これは本文です]
```

指定できる値は
- **solid**: 実線
- **dotted**: 点線
- **densely-dotted**: 密な点線
- **loosely-dotted**: 疎な点線
- **dashed**: 破線
- **densely-dashed**: 密な破線
- **loosely-dashed**: 疎な破線
- **dash-dotted**: 破線と点線の組み合わせ
- **densely-dash-dotted**: 密な破線と点線の組み合わせ
- **loosely-dash-dotted**: 疎な破線と点線の組み合わせ
です。
### inset, title-inset, body-inset, footer-inset(余白)
`inset`プロパティは、枠線の内側の余白を指定します。`title-inset`, `body-inset`, `footer-inset`は、それぞれタイトル、本文、フッターの内側の余白を指定します。
```typ
#showybox(title: "これはタイトルです", frame: (inset: 20pt, ), footer: "これはフッターです")[これは本文です]
```

値は`relative-length`や`dictionary`の形式で指定できます。`dictionary`の場合は、`top`, `bottom`, `left`, `right`, `x`, `y`, `rest`のプロパティを指定できます。デフォルトの値は`(x: 1em, y: 0.65em)`です。
`title-inset`, `body-inset`, `footer-inset`の値を指定した場合は、これらのプロパティはそれぞれタイトル、本文、フッターの余白を個別に設定します。このとき、`inset` プロパティは無視され、個別のインセットが優先されます。
## Title style
`title-style`パラメーターは、タイトルのスタイルを指定します。指定できるプロパティは次の通りです。
- color
- weight
- align
- sep-thickness
- boxed-style
### color
タイトルに用いるテキストの色を指定します。デフォルトは`white`です。
### weight
タイトルに用いるテキストの太さを指定します。指定できる値は`100`から`900`までの100刻み、または事前に定義されたキーワード("thin"、"extralight"、"light"、"regular"、"medium"、"semibold"、"bold"、
"extrabold"、"black")です。デフォルトは`regular`です。
### align
タイトルのテキストの配置を指定します。指定できる値は`"left"`(左寄せ)、`"center"`(中央寄せ)、`"right"`(右寄せ)です。デフォルトは`"left"`です。
### sep-thickness
タイトルと本文を分割する線の太さを指定します。デフォルトは`1pt`です。
```typ
#showybox(
title-style: (weight: 800, color: teal.darken(40%), sep-thickness: 0pt, align: center),
frame: (title-color: teal.lighten(80%), border-color: teal.darken(40%), thickness: (left: 2pt), radius: 0pt),
title: "自己情報量",
)[
事象$E$が起こる確率を$P(E)$とするとき、事象$E$の自己情報量$I(E)$は次のように定義される。
$ I(E) = log 1/(P(E)) = -log P(E) $
]
```

### boxed-style
このプロパティが`None`でない場合(すなわち辞書型の場合)、タイトルはfloating boxとして、本文の上部に表示されます。詳細は後述します。
## Boxed style
`boxed-style`で指定できるのは次のプロパティです。
- anchor
- offset
- radius
### anchor
`anchor`は、boxedtitle のアンカーをどこに配置するかを示す、キー`x`と`y`を持つ辞書型の値です。各方向の値の一覧は以下の通りです:
x方向のアンカー:
`left`: boxed-title の左側にアンカーを設定する。
`center`: boxed-title の水平中央にアンカーを設定する。
`right`: boxed-title の右側にアンカーを設定する。
y方向のアンカー:
`top`: boxed-title の上部にアンカーを設定する。
`horizon`: boxed-title の垂直中央にアンカーを設定する。
`bottom`: boxed-title の下部にアンカーを設定する。
デフォルトは `(x: left, y: horizon)` です。
### offset
`offset`はキー `x` と `y` を持つ辞書型の値で、boxed-title の x方向およびy方向へのオフセット量を指定します。 デフォルトは` (x: 0pt, y: 0pt)` です。
### radius
`radius`は、boxed-title の角丸の半径を指定します。デフォルトは `5pt` です。辞書型として設定した場合、`top-left`, `top-right`, `bottom-left`, `bottom-right` の各角に異なる半径を設定することができます。
```typ
#showybox(
title-style: (boxed-style: (anchor: (x: center, y: horizon), radius: (top-left: 10pt, bottom-right: 10pt, rest: 0pt))),
frame: (
title-color: green.darken(40%),
body-color: green.lighten(80%),
footer-color: green.lighten(60%),
border-color: green.darken(60%),
radius: (top-left: 10pt, bottom-right: 10pt, rest: 0pt),
),
title: "ラプラス変換",
)[
実数$t gt.eq 0$について定義された関数$f(t)$のラプラス変換とは
$ F(s)=integral_0^oo f(t)e^(-s t) dif t $
で定義される$s$の関数$F(s)$のことである。
]
```

## Body style
このパラメータには、`showybox` の本文に対して一般的なスタイルのプロパティを設定するのに役立つすべてのオプションが含まれています。
- color
- align
### color
本文のテキストの色を指定します。デフォルトは`black`です。
### align
本文のテキストの配置を指定します。デフォルトは`"left"`です。
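
以下は最小限の記述例です(タイトルや本文の内容は説明用の仮のものです)。

```typ
#showybox(
  title: "Body style の例",
  body-style: (align: center, color: purple),
)[
  本文のテキストが中央寄せ・紫色で表示されます。
]
```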
## Footer style
このパラメータには、`showybox` のフッターのプロパティを設定するのに役立つすべてのオプションが含まれています。これらのプロパティの一部は `footer` パラメータを設定する際に指定することもできますが、似たスタイルの複数の `showybox` を作成する際に特に便利です。
- color
- weight
- align
- sep-thickness
### color
フッターのテキストの色を指定します。デフォルトは`luma(85)`です。詳しくは[luma](https://typst.app/docs/reference/visualize/color/#definitions-luma)を参照してください。
### weight
フッターのテキストの太さを指定します。デフォルトは`regular`です。
### align
フッターのテキストの配置を指定します。デフォルトは`"left"`です。
### sep-thickness
フッターと本文を分割する線の太さを指定します。デフォルトは`1pt`です。
```typ
#showybox(
footer-style: (sep-thickness: 0pt, align: right, color: black),
title: "シグモイド関数",
footer: [
シグモイド関数はニューラルネットワークにおける活性化関数として広く用いられる。
],
)[
比較的単純な非線形関数であるシグモイド関数は、以下のように定義される。
$ phi(x) =sigma.alt_1(x) =1/(1+e^(-x)) =(tanh(x/2)+1)/2 $
]
```

## Separator properties (sep)
セパレーターに関するプロパティを指定します。セパレーターの使い方は後述します。
- thickness
- dash
- gutter
### thickness
`thickness`プロパティは、セパレーターの太さを指定します。デフォルトは`1pt`です。
### dash
`dash`プロパティは、セパレーターの破線の形状を指定します。デフォルトの値は `"solid"`(実線)です。
### gutter
セパレーターの上下の余白を指定します。デフォルトは`0.65em`です。
## Shadow properties
`shadow`プロパティは、showyboxに影を付けるためのプロパティを指定します。このプロパティを指定しないか、あるいは`none`を指定した場合、影は設定されません。
- color
- offset
### color
影の色を指定します。デフォルトは`luma(128)`です。
### offset
影の位置のオフセットを相対的な長さ、あるいは`x`と`y`をキーとする辞書型で指定します。デフォルトは`4pt`です。
```typ
#showybox(
shadow: (color: aqua.lighten(55%), offset: 3pt),
frame: (title-color: blue.darken(30%), border-color: blue.darken(30%), body-color: aqua.lighten(80%)),
title: "ガウスの発散定理",
title-style: (weight: 600)
)[
ガウスの発散定理は次のように表される。
$ integral_S bold(A) dot bold(n) dif S = integral_V nabla dot bold(A) dif V $
]
```

## Width
showyboxの幅を指定します。デフォルトは`100%`です。
## Align
showyboxの配置を指定します。デフォルトは`left`です。
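
以下は `width` と `align` を併用した最小限の例です(内容は説明用の仮のものです)。

```typ
#showybox(
  width: 50%,
  align: center,
  title: "幅50%の例",
)[
  中央に配置された、幅50%のshowyboxです。
]
```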
## Breakable
このパラメーターは、showyboxがページまたはコンテナの終端で分割(改ページ)できるかどうかを指定します。デフォルトは`false`です。
## Spacing, above, and below
spacing は、`above` または `below` が指定されていない場合に、showyboxの上下にどれだけのスペースを挿入するかを設定します。デフォルトでは、ドキュメント内のブロックのデフォルトのスペースが適用されます。
# Separators
同じshowybox内において、コンテンツを2つ以上に分割したい場合、`showybox()`関数の括弧内にコンテンツをコンマで区切って記述するか、閉じ括弧の直後にコンテンツを並べることで実現できます。
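
マニュアルに沿った最小限の例を挙げます(内容は説明用の仮のものです)。次のように角括弧を続けて書くと、1つのshowybox内が2つの領域に分割され、間にセパレーターが描画されます。

```typ
#showybox(
  title: "セパレーターの例",
  sep: (dash: "dashed"),
)[
  最初のブロック
][
  2つ目のブロック
]
```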
# Encapsulation
showyboxの中に別のshowyboxを入れ子にすることもできます。
```typ
= Encapsulation
#showybox(title: "Parent container", lorem(10), columns(2)[
#showybox(title-style: (boxed-style: (:)), title: "Child 1", lorem(10))
#colbreak()
#showybox(title-style: (boxed-style: (:)), title: "Child 2", lorem(10))
])
```

# まとめ
今回はshowyboxパッケージの使い方について解説しました。showyboxパッケージを利用することで、式やテキストを装飾することができ、見やすくなるだけでなく、情報を整理しやすくなります。ぜひ、Typstでの執筆時に活用してみてください。
また、誤訳などが含まれている場合はコメント等でご指摘頂けると非常にありがたいです。ここまで読んでいただき、ありがとうございました。
# 参考文献
- [showybox 2.0.1](https://typst.app/universe/package/showybox/)
- [Pablo-Gonzalez-Calderon/showybox-package](https://github.com/Pablo-Gonzalez-Calderon/showybox-package)
- [Showybox's Manual.pdf](https://github.com/Pablo-Gonzalez-Calderon/showybox-package/blob/main/Showybox's%20Manual.pdf) |
|
https://github.com/lucafluri/typst-templates | https://raw.githubusercontent.com/lucafluri/typst-templates/master/README.md | markdown | # Typst Templates
- [IEEE Submission Template (Adjusted)](./IEEE/)
- [FHNW MSE Project Report](./Project_Report_FHNW_MSE/) |
|
https://github.com/storopoli/Bayesian-Statistics | https://raw.githubusercontent.com/storopoli/Bayesian-Statistics/main/slides/09-robust_regression.typ | typst | Creative Commons Attribution Share Alike 4.0 International | #import "@preview/polylux:0.3.1": *
#import themes.clean: *
#import "utils.typ": *
#import "@preview/cetz:0.1.2": canvas, plot
#new-section-slide("Robust Regression")
#slide(title: "Recommended References")[
- #cite(<gelman2013bayesian>, form: "prose") - Chapter 17: Models for robust
inference
- #cite(<mcelreath2020statistical>, form: "prose") - Chapter 12: Monsters and
Mixtures
- #cite(<gelman2020regression>, form: "prose"):
- Chapter 15, Section 15.6: Robust regression using the t model
- Chapter 15, Section 15.8: Going beyond generalized linear models
]
#focus-slide(background: julia-purple)[
#align(center)[#image("images/memes/not_normal_transparent.png")]
]
#slide(title: [Robust Models])[
  Almost always, data from the real world are really strange.
#v(1em)
  For the sake of convenience, we use simple models. But always ask yourself:
  how might the posterior inference depend on the following:
#v(1em)
- extreme observations (outliers)?
- unrealistic model assumptions?
]
#slide(title: [Outliers])[
#v(2em)
  Models based on the *normal distribution are notoriously "non-robust" against
  outliers*, in the sense that a *single observation can greatly affect the
  inference of all of the model's parameters*, even those that have only a weak
  relationship with it.
]
#slide(title: [Overdispersion])[
  Overdispersion and underdispersion #footnote[
rarer to find in the real world.
]
refer to data that have more or fewer variation than expected under a
probability model @gelman2020regression.
#v(2em)
For each one of the models we covered, there is a *natural extension* in which
*a single parameter* is added to allow for overdispersion @gelman2013bayesian.
]
#slide(title: [Overdispersion Example])[
Suppose you are analyzing data from car accidents. The model we generally use for
this type of phenomenon is *Poisson regression*.
#v(1em)
Poisson distribution has the same parameter for both the mean and variance: the
rate parameter $λ$.
#v(1em)
Hence, if you find higher variability than the Poisson likelihood allows,
you probably won't be able to model the phenomenon
properly.
]
#slide(title: [Student's $t$ instead of Normal])[
Student's $t$ distribution has *wider #footnote[or "fatter".] tails* than the
Normal distribution.
#v(1em)
This makes it a good candidate to *fit outliers without instabilities in the
parameters' inference*.
#v(1em)
From the Bayesian viewpoint, there is nothing special or magical in the
Gaussian/Normal likelihood.
It is just another distribution specified in a statistical model. We can make
our model robust by using the Student's $t$ distribution as a likelihood
function.
]
#slide(title: [Student's $t$ instead of Normal])[
#align(center)[
#canvas(
length: 0.9cm,
{
plot.plot(
size: (16, 9),
x-label: none,
y-label: "PDF",
x-tick-step: 1,
y-tick-step: 0.1,
y-min: -0.01,
y-max: 0.42,
{
plot.add(
domain: (-4, 4), samples: 200,
// label: "Normal", // FIXME: depends on unreleased cetz 2.0.0
x => gaussian(x, 0, 1),
)
plot.add(
domain: (-4, 4), samples: 200,
// label: [Student's $t$ with $ν = 3$], // FIXME: depends on unreleased cetz 2.0.0
x => student(x, 3),
)
},
)
},
)
]
]
#slide(title: [Student's $t$ instead of Normal])[
#text(size: 16pt)[
By using a Student's $t$ distribution instead of the Normal distribution as
likelihood functions, the model's error $σ$ does _not_ follow a Normal
distribution, but a Student's $t$ distribution:
$
bold(y) &tilde "Student"(ν, α + bold(X) bold(β), σ) \
α &tilde "Normal"(μ_α, σ_α) \
bold(β) &tilde "Normal"(μ_bold(β), σ_bold(β)) \
ν &tilde "Log-Normal"(2, 1) \
σ &tilde "Exponential"(λ_σ)
$
Note that we are including an extra parameter $ν$, which represents the
Student's $t$ distribution degrees of freedom, to be estimated by the model
@gelman2013bayesian.
This controls how wide or narrow the "tails" of the distribution will be. A
heavy-tailed, positive-only prior is advised.
]
]
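As a quick numerical illustration of this robustness claim, here is a Python/SciPy sketch (not the course's Stan/Turing code; the dataset, the fixed $ν = 3$, and the fixed scale are illustrative assumptions) comparing maximum-likelihood location estimates under a Normal versus a Student's $t$ likelihood when one extreme outlier is present:

```python
import numpy as np
from scipy import optimize, stats

# Bulk of the data near 0, plus one extreme outlier at 50 (illustrative data).
data = np.concatenate([np.linspace(-1.0, 1.0, 20), [50.0]])

# Under a Normal likelihood the MLE of the location is the sample mean,
# which a single outlier drags away from the bulk of the data.
normal_loc = data.mean()

# Under a Student's t likelihood (nu = 3, scale fixed at 1 for simplicity)
# we find the location by minimizing the negative log-likelihood.
def neg_loglik(mu):
    return -stats.t.logpdf(data, df=3, loc=mu, scale=1.0).sum()

t_loc = optimize.minimize_scalar(neg_loglik, bounds=(-10, 10), method="bounded").x

print(normal_loc, t_loc)  # the mean is pulled toward 50; t_loc stays near 0
```

The sample mean is dragged toward the outlier, while the Student's $t$ location estimate stays with the bulk of the data, which is exactly the wide-tails effect described above.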
#slide(title: [Beta-Binomial instead of the Binomial])[
The binomial distribution has a practical limitation: we only have one free
parameter to estimate #footnote[since $n$ already comes from data.] ($p$). This
implies that the *variance is determined by the mean*. Hence, the binomial
distribution *cannot* tolerate overdispersion.
#v(1em)
A robust alternative is the *beta-binomial distribution*, which, as the name
suggests, is a *beta mixture of binomial distributions*. Most importantly, it
*allows the variance to be independent of the mean*, making it *robust
against overdispersion*.
]
#slide(title: [Beta-Binomial instead of Binomial])[
The *beta-binomial distribution* is a binomial distribution, where the
probability of success $p$ is parameterized as a $"Beta"(α, β)$.
#v(1em)
Generally, we use $α$ as the binomial's probability of success $p$, and $β$ #footnote[sometimes specified as $φ$] is
the additional parameter to control and allow for overdispersion.
#v(1em)
Values of $β >= 1$ make the beta-binomial behave the same as a binomial.
]
#slide(title: [Beta-Binomial instead of Binomial])[
#text(size: 18pt)[
$
bold(y) & tilde "Beta-Binomial"(n, p, φ) \
p & tilde "Logistic/Probit"(α + bold(X) bold(β)) \
α & tilde "Normal"(μ_α, σ_α) \
bold(β) & tilde "Normal"(μ_bold(β), σ_bold(β)) \
φ & tilde "Exponential"(1)
$
#v(1em)
It is also proper to include the overdispersion parameter $φ$ as an additional
parameter to be estimated by the model @gelman2013bayesian
@mcelreath2020statistical. A heavy-tailed, positive-only prior is advised.
]
]
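A small NumPy simulation makes the overdispersion mechanism concrete (illustrative values; the Beta is parameterized so that the mean of $p_i$ stays at $p$):

```python
import numpy as np

rng = np.random.default_rng(42)
n, p, draws = 20, 0.3, 100_000

# Plain binomial: the variance is pinned at n * p * (1 - p).
y_binomial = rng.binomial(n, p, size=draws)

# Beta-binomial: first draw p_i ~ Beta(a, b), then y_i ~ Binomial(n, p_i).
# With a = p * c and b = (1 - p) * c the mean of p_i is still p; a small
# concentration c means a lot of extra spread in p_i, i.e. overdispersion.
concentration = 2.0
p_i = rng.beta(p * concentration, (1 - p) * concentration, size=draws)
y_betabinomial = rng.binomial(n, p_i)

print(y_binomial.var(), y_betabinomial.var())  # same mean, much larger variance
```

Both samples share the mean $n p$, but the beta-binomial sample's variance is several times larger, which is the extra variation the plain binomial cannot absorb.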
#slide(title: [Student's $t$ instead of Binomial])[
#text(size: 15pt)[
Also known as Robit #footnote[there is a great discussion between Gelman, Vehtari and Kurz at
#link("https://discourse.mc-stan.org/t/robit-regression-not-robust/21245/")[
Stan's Discourse
].] @gelman2013bayesian @gelman2020regression. The idea is to make the
logistic regression robust by using a *latent variable* $z$ as the linear
predictor.
$z$'s errors, $ε$, are distributed as a Student's $t$ distribution:
$
y_i &= cases(0 "if" z_i < 0, 1 "if" z_i > 0) \
z_i &= X_i bold(β) + ε_i \
ε_i &tilde "Student"(ν, 0, sqrt((ν - 2) / ν)) \
ν &tilde "Gamma"(2, 0.1) ∈ [2, oo)
$
Here we are using a gamma distribution, truncated to $ν >= 2$, as the prior for
the degrees-of-freedom parameter. Another option would
be to fix $ν = 4$.
]
]
#slide(title: [Negative Binomial instead of Poisson])[
This is the overdispersion example. The Poisson distribution uses a *single
parameter for both its mean and variance*.
#v(1em)
Hence, if you find overdispersion, you'll probably need a robust alternative to
Poisson. This is where the *negative binomial* comes in: an extra parameter $φ$
makes it *robust to overdispersion*.
#v(1em)
$φ$ controls the probability of success $p$, and we generally use a gamma
distribution as its prior.
$φ$ is also known as a "reciprocal dispersion" parameter.
]
#slide(title: [Negative Binomial instead of Poisson])[
$
bold(y) &tilde "Negative Binomial"(e^((α + bold(X) bold(β))), φ) \
φ &tilde "Gamma"(0.01, 0.01) \
α &tilde "Normal"(μ_α, σ_α) \
bold(β) &tilde "Normal"(μ_bold(β), σ_bold(β))
$
#v(2em)
Here we also give a heavy-tailed, positive-only prior to $φ$. Something like the $"Gamma"(0.01, 0.01)$ works.
]
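The extra variance is easy to see by simulating the negative binomial as a gamma mixture of Poissons (a NumPy sketch with illustrative values, not the course's Stan/Turing code):

```python
import numpy as np

rng = np.random.default_rng(0)
mu, phi, draws = 5.0, 2.0, 100_000

# Poisson: the variance is forced to equal the mean.
y_poisson = rng.poisson(mu, size=draws)

# Negative binomial as a gamma mixture of Poissons:
# lambda_i ~ Gamma(shape=phi, scale=mu/phi), y_i ~ Poisson(lambda_i),
# giving mean mu and variance mu + mu**2 / phi.
lam = rng.gamma(shape=phi, scale=mu / phi, size=draws)
y_negbin = rng.poisson(lam)

print(y_poisson.var(), y_negbin.var())  # roughly 5 versus 5 + 25/2 = 17.5
```

Both samples have mean $μ$, but the negative binomial's variance $μ + μ^2 \/ φ$ grows as $φ$ shrinks, which is what lets it absorb overdispersion.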
#slide(title: [Negative Binomial Mixture instead of Poisson])[
Even using a negative binomial likelihood, if you encounter acute
overdispersion, especially when there are a lot of zeros in your data
(zero inflation), your model can still fit the data poorly.
#v(1em)
Another suggestion is to use a mixture of negative binomials
@mcelreath2020statistical.
]
#slide(title: [Negative Binomial Mixture instead of Poisson])[
Here, $S_i$ is a dummy variable, taking value $1$ if the $i$th observation has a
value $≠ 0$.
$S_i$ can be modeled using logistic regression:
$
bold(y) &cases(
= 0 "if" S_i = 0, tilde "Negative Binomial"(e^((α + bold(X) bold(β))), φ ) "if" S_i = 1,
) \
P(S_i = 1) &= "Logistic/Probit"(bold(X) bold(γ)) \
γ &tilde "Beta"(1, 1)
$
#v(1em)
$γ$ is a new coefficient vector to which we give a uniform prior of $"Beta"(1, 1)$.
]
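A NumPy sketch of this zero-inflated mixture (illustrative values; the negative binomial part is again simulated as a gamma mixture of Poissons) shows where the excess zeros come from:

```python
import numpy as np

rng = np.random.default_rng(1)
draws, mu, phi, p_nonzero = 100_000, 5.0, 2.0, 0.6

# S_i = 1: the count comes from a negative binomial;
# S_i = 0: the count is a structural zero.
s = rng.random(draws) < p_nonzero
lam = rng.gamma(shape=phi, scale=mu / phi, size=draws)
y = np.where(s, rng.poisson(lam), 0)

# A plain negative binomial with these parameters puts only about 8% of its
# mass at zero; the mixture produces far more zeros than that.
print((y == 0).mean())
```

Fitting a plain negative binomial to data like `y` would badly underestimate the zero count, which is exactly when the mixture formulation above pays off.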
#slide(title: [Why Use Non-Robust Models?])[
The *central limit theorem* tells us that the *normal distribution* is an
appropriate model for data that arise as a *sum of independent components*.
Even when they are not naturally implicit in a phenomenon's structure, *simpler
non-robust models are computationally efficient*.
Finally, there's *Occam's razor*, also known as the *principle of parsimony*,
which states the preference for simplicity in the scientific method.
Of course, you must always guide the model choice in a *principled manner*,
taking into account the underlying data-generating process. And make
sure to run *posterior predictive checks*.
]
|
https://github.com/RiccardoTonioloDev/Bachelor-Thesis | https://raw.githubusercontent.com/RiccardoTonioloDev/Bachelor-Thesis/main/config/variables.typ | typst | Other | // All reusable variables here
#let myLang = "it"
#let myName = "<NAME>"
#let myMatricola = "2042332"
#let myTitle = "Predizione della Profondità con Deep Learning da Immagini di Telecamera Monoculare"
#let myDegree = "Tesi di laurea"
#let myUni = "Università degli studi di Padova"
#let myDepartment = "Dipartimento di Matematica ''Tullio Levi-Civita''"
#let myFaculty = "Corso di Laurea in Informatica"
#let profTitle = "Prof. "
#let myProf = "<NAME>"
#let myCompany = "VIMP Group, " + myUni
#let myTutor = "<NAME>"
#let myLocation = "Padova"
#let myAA = "2023-2024"
#let myTime = "Settembre 2024" |
https://github.com/ekmanib/curriculum-vitae | https://raw.githubusercontent.com/ekmanib/curriculum-vitae/main/docs/header.typ | typst | #grid(
columns: (3fr, 1fr),
rows: (120pt),
gutter: 3pt,
text(font: "Jost", size: 24pt)[
= <NAME>
== BSc. Economía
],
image("../assets/me3.1.jpg", height: 100%, width:100%)
)
|
|
https://github.com/typst-community/valkyrie | https://raw.githubusercontent.com/typst-community/valkyrie/main/tests/types/version/test.typ | typst | Other | #import "/src/lib.typ" as z
#import "/tests/utility.typ": *
#show: show-rule.with();
#let schema = z.version()
= types/version
== Input types
#let _ = z.parse(version(0, 1, 0), schema) |
https://github.com/pal03377/presentation-feedback-log-template-typst | https://raw.githubusercontent.com/pal03377/presentation-feedback-log-template-typst/main/feedbacklog.typ | typst | MIT License | // Enter your thesis data here:
#let title = "Fancy Thesis Title"
#let degree = "Master"
#let program = "Informatics"
#let supervisor = "Prof. Dr. <NAME>"
#let advisor = "<NAME>, M.Sc."
#let author = "<NAME>"
#let presentation_date = "Sep 11, 2023"
#show heading: set text(size: 1.5em)
// Title section
#align(center)[
#heading[
Presentation Feedbacklog#linebreak()
#degree's Thesis #author#linebreak()
#title
]
#v(2em)
#text(size: 1.5em)[
#datetime.today().display("[month repr:short] [day], [year]")
]
#v(2em)
// Information table
#line(length: 100%, stroke: gray)
#table(
columns: (9em, auto),
align: left,
stroke: none,
[Author:], [#author],
[Supervisor:], [#supervisor],
[Advisor:], [#advisor],
[Presentation date:], [#presentation_date],
)
#line(length: 100%, stroke: gray)
#v(2em)
]
// Helper functions
#let feedback_counter = counter("feedback")
#let feedback(it) = block[
#feedback_counter.step()
*#feedback_counter.display(). Feedback:*
#it
]
#let response(it) = pad(left: 3em)[
#text(fill: blue)[
*Response:*
#it
]
]
// ===========================================
// Content
#feedback[ // Example
Since the attention span of the audience is rather short, try getting to the problem as fast as possible. Several slides in the presentation display redundant or repetitive information and could be deleted.
]
#response[ // Example
To get to the problem earlier in the presentation as well as to present only relevant information, some of the slides are removed (Agenda) and some of them are merged (Context, Motivation and Objectives). Therefore the "Problem" slide is now at slide 4.
]
#feedback[ // Example
The structure of the presentation is rather unique; it would be great if you could help the audience follow it more easily.
]
#response[ // Example
To help the audience to follow the presentation easier, a progress-bar was added at the bottom of the slides to show the milestones of the presentation and highlight the one that indicates the current slide.
]
|
https://github.com/kiwiyou/algorithm-lecture | https://raw.githubusercontent.com/kiwiyou/algorithm-lecture/main/advanced/05-disjoint-set.typ | typst | #import "@preview/cetz:0.1.2"
#import "@preview/algorithmic:0.1.0"
#import "../slide.typ"
#show: slide.style
#show link: slide.link
#show footnote.entry: slide.footnote
#let algorithm(..args) = text(
font: ("linux libertine", "Pretendard"),
size: 17pt,
)[#algorithmic.algorithm(..args)]
#let func(body) = text(font: ("linux libertine", "Pretendard"))[#smallcaps[#body]]
#align(horizon + center)[
= Intermediate Algorithm Seminar
05: Disjoint Set
#text(size: 0.8em)[
전우제#super[kiwiyou], Yonsei University \
2023.01.15.r1
]
]
#slide.slide[Disjoint Set][
- Given $V$ vertices, process $Q$ queries of the following kinds
- `1 u v`: add an undirected edge between $u$ and $v$
- `2 u v`: output whether it is possible to travel between $u$ and $v$
- With DFS, query 1 takes $cal(O)(1)$ and query 2 takes $cal(O)(V)$
- Can we do it faster?
]
#slide.slide[Disjoint Set][
- Given $N$ sets with representative elements $1, 2, dots, N$, process the following operations
- Find the representative of the set containing element $a$: `find(a)`
- Replace sets $A$ and $B$ with their union: `union(a, b)`
- When $A$ absorbs $B$, set $B$'s parent to $A$, representing the whole collection as a tree structure
- `find(a)` finds the root of a tree
- `union(a, b)` adds an edge between two roots
#pagebreak()
- Optimizing `find`
- For convenience, let set number = representative element, and define a node as a root when its parent is itself
#algorithm({
import algorithmic: *
Function([Find], args: ($a$, ), {
Assign[$r$][$a$]
While(cond: $"parent"(r) eq.not r$, {
Assign[$r$][$"parent"(r)$]
})
Return[$r$]
})
})
- We have to climb to the root every time, so the worst case is $cal(O)(N)$
#pagebreak()
- Cache the result of climbing to the root
#algorithm({
import algorithmic: *
Function([Find], args: ($a$, ), {
Assign[$r$][$a$]
While(cond: $"parent"(r) eq.not r$, {
Assign[$r$][$"parent"(r)$]
})
Assign[$"parent"(a)$][$r$]
Return[$r$]
})
})
- Still amortized $cal(O)(N)$
- What if we use recursion to store the result at every step?
#pagebreak()
- Path compression: amortized $cal(O)(log N)$ (proof omitted)
#algorithm({
import algorithmic: *
Function([Find], args: ($a$, ), {
If(cond: $"parent"(a) eq.not a$, {
Assign[$"parent"(a)$][$"find"("parent"(a))$]
})
Return[$"parent"(a)$]
})
})
#algorithm({
import algorithmic: *
Function([Find], args: ($a$, ), {
Assign[$r$][$a$]
While(cond: $"parent"(r) eq.not r$, {
Assign[$"parent"(r)$][$"parent"("parent"(r))$]
Assign[$r$][$"parent"(r)$]
})
Return[$r$]
})
})
#pagebreak()
- Optimizing `union`
- Union by size
#algorithm({
import algorithmic: *
Function([Union], args: ($a$, $b$ ), {
If(cond: $"size"(a) < "size"(b)$, {
Call([Union], [$b$, $a$])
Return[]
})
Assign[$"size"(a)$][$"size"(a) + "size"(b)$]
Assign[$"parent"(b)$][$a$]
})
})
- Amortized $cal(O)(log N)$ without path compression, amortized $cal(O)(alpha(N))$ with path compression
#pagebreak()
- Union by rank
#algorithm({
import algorithmic: *
Function([Union], args: ($a$, $b$ ), {
If(cond: $"rank"(a) < "rank"(b)$, {
Call([Union], [$b$, $a$])
Return[]
})
If(cond: $"rank"(a) = "rank"(b)$, {
Assign[$"rank"(a)$][$"rank"(a) + 1$]
})
Assign[$"parent"(b)$][$a$]
})
})
- Amortized $cal(O)(log N)$ without path compression, amortized $cal(O)(alpha(N))$ with path compression
- The value of rank is at most $log N$
#pagebreak()
- Cycles can be detected online
- The number of connected components can be tracked online
- Finding the next empty slot
]
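The `find`/`union` routines above can be sketched in Python (an illustrative implementation, not part of the slide deck, using path halving and union by size, with an online component counter):

```python
class DisjointSet:
    def __init__(self, n):
        self.parent = list(range(n))
        self.size = [1] * n
        self.components = n          # number of connected components, tracked online

    def find(self, a):
        # Path halving: point every visited node to its grandparent.
        while self.parent[a] != a:
            self.parent[a] = self.parent[self.parent[a]]
            a = self.parent[a]
        return a

    def union(self, a, b):
        a, b = self.find(a), self.find(b)
        if a == b:
            return False             # already connected: edge (a, b) would close a cycle
        if self.size[a] < self.size[b]:
            a, b = b, a              # union by size: attach the smaller tree to the larger
        self.parent[b] = a
        self.size[a] += self.size[b]
        self.components -= 1
        return True

ds = DisjointSet(5)
ds.union(0, 1)
ds.union(3, 4)
print(ds.find(0) == ds.find(1), ds.find(0) == ds.find(3), ds.components)
# True False 3
```

The boolean returned by `union` is the online cycle test, and `components` answers connectivity-count queries in $cal(O)(1)$.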
#slide.slide[Homework][
- #slide.problem("11085", "군사 이동")
- #slide.problem("14595", "동방 프로젝트 (Large)")
- #slide.problem("29618", "미술 시간")
]
#slide.slide[Minimum Spanning Tree][
- Spanning tree: a tree that contains every vertex of an undirected connected graph
- Minimum spanning tree: a spanning tree whose total edge weight is minimum
- There are several algorithms, including Prim's and Kruskal's
]
#slide.slide[Prim's Algorithm][
- Start from a single vertex and keep attaching the edge of smallest weight
- Managing the list of attachable edges with a priority queue gives $cal(O)(E log V)$
]
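A compact Python sketch of this idea (illustrative; `heapq` holds the candidate edges keyed by weight, and the graph is assumed connected):

```python
import heapq

def prim(n, adj):
    """adj[u] = list of (weight, v); returns the total MST weight."""
    seen = [False] * n
    heap = [(0, 0)]              # (edge weight, vertex); start from vertex 0
    total = 0
    while heap:
        w, u = heapq.heappop(heap)
        if seen[u]:
            continue             # stale entry: u was already attached via a cheaper edge
        seen[u] = True
        total += w
        for wv, v in adj[u]:
            if not seen[v]:
                heapq.heappush(heap, (wv, v))
    return total

adj = {
    0: [(1, 1), (3, 2)],
    1: [(1, 0), (2, 2)],
    2: [(3, 0), (2, 1), (4, 3)],
    3: [(4, 2)],
}
print(prim(4, adj))  # 1 + 2 + 4 = 7
```

Each edge is pushed at most once per direction, and each pop costs $cal(O)(log E)$, matching the $cal(O)(E log V)$ bound on the slide.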
#slide.slide[Kruskal's Algorithm][
- Going through the edges in increasing order of weight, add each one to the tree if it does not form a cycle
- $cal(O)(E log E)$ for sorting, $cal(O)(E alpha(V))$ for the cycle checks
]
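A Python sketch of Kruskal's algorithm (illustrative; an inline union-find with path halving does the cycle check):

```python
def kruskal(n, edges):
    """edges: list of (weight, u, v); returns the total MST weight."""
    parent = list(range(n))

    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]  # path halving
            a = parent[a]
        return a

    total = 0
    picked = 0
    for w, u, v in sorted(edges):          # edges in increasing order of weight
        ru, rv = find(u), find(v)
        if ru != rv:                       # different components: no cycle is formed
            parent[rv] = ru
            total += w
            picked += 1
            if picked == n - 1:            # the spanning tree is complete
                break
    return total

edges = [(1, 0, 1), (2, 1, 2), (3, 0, 2), (4, 2, 3)]
print(kruskal(4, edges))  # 1 + 2 + 4 = 7
```

Sorting dominates at $cal(O)(E log E)$; the union-find work is the $cal(O)(E alpha(V))$ term from the slide.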
#slide.slide[Homework][
- #slide.problem("27945", "슬슬 가지를 먹지 않으면 죽는다")
- #slide.problem("28473", "도로 위의 표지판")
] |
|
https://github.com/frectonz/the-pg-book | https://raw.githubusercontent.com/frectonz/the-pg-book/main/book/020.%20hundred.html.typ | typst |
The Hundred-Year Language
April 2003

(This essay is derived from a keynote talk at PyCon 2003.)

It's hard to predict what
life will be like in a hundred years. There are only a few
things we can say with certainty. We know that everyone will
drive flying cars,
that zoning laws will be relaxed to allow buildings
hundreds of stories tall, that it will be dark most of the
time, and that women will all be trained in the martial arts.
Here I want to zoom in on one detail of this
picture. What kind of programming language will they use to
write the software controlling those flying cars?This is worth thinking about not so
much because we'll actually get to use these languages as because,
if we're lucky, we'll use languages on the path from this
point to that.

I think that, like species, languages will form evolutionary trees,
with dead-ends branching off all over. We can see this
happening already.
Cobol, for all its sometime popularity, does not seem to have any
intellectual descendants. It is an evolutionary dead-end-- a
Neanderthal language.

I predict a similar fate for Java. People
sometimes send me mail saying, "How can you say that Java
won't turn out to be a successful language? It's already
a successful language." And I admit that it is, if you
measure success by shelf space taken up by books on it
(particularly individual books on it), or by
the number of undergrads who believe they have to
learn it to get a job. When I say Java won't
turn out to be a successful language, I mean something more
specific: that Java
will turn out to be an evolutionary dead-end, like Cobol.

This is just a guess. I may be wrong. My point here is not to dis Java,
but to raise the issue of evolutionary
trees and get people asking, where on the tree is language X?
The reason to ask this question isn't just so that
our ghosts can say, in a
hundred years, I told you so. It's because staying close to
the main branches is a useful heuristic for finding languages that will
be good to program in now.

At any given time, you're probably happiest on
the main branches of an evolutionary tree.
Even when there were still plenty of Neanderthals,
it must have sucked to be one. The
Cro-Magnons would have been constantly coming over and
beating you up and stealing your food.

The reason I want to
know what languages will be like in a hundred years is so that
I know what branch of the tree to bet on now.

The evolution of languages differs from the evolution of species
because branches can converge. The Fortran branch, for example,
seems to be merging with the descendants
of Algol. In theory this is possible for species too, but it's
not likely to have happened to any bigger than a cell.

Convergence
is more likely for languages partly because the space of
possibilities is smaller, and partly because mutations
are not random. Language designers deliberately incorporate
ideas from other languages.

It's especially useful for language designers to think
about where the evolution of programming languages is likely
to lead, because they can steer accordingly.
In that case, "stay on a main branch" becomes more than a
way to choose a good language.
It becomes a heuristic for making the right decisions about
language design.

Any programming language can be divided into
two parts: some set of fundamental operators that play the role
of axioms, and the rest of the language, which could in principle
be written in terms of these fundamental operators.

I think the fundamental operators are the most important factor in a
language's long term survival. The rest you can change. It's
like the rule that in buying a house you should consider
location first of all. Everything else you can fix later, but you
can't fix the location.

I think it's important not just that the axioms be well chosen,
but that there be few of them. Mathematicians have always felt
this way about axioms-- the fewer, the better-- and I think they're
onto something.

At the very least, it has to be a useful exercise to look closely
at the core of a language to see if there are any axioms that
could be weeded out. I've found in my long career as a slob that
cruft breeds cruft, and I've seen this happen in software as
well as under beds and in the corners of rooms.

I have a hunch that
the main branches of the evolutionary tree pass through the languages
that have the smallest, cleanest cores.
The more of a language you can write in itself,
the better.

Of course, I'm making a big assumption in even asking what
programming languages will be like in a hundred years.
Will we even be writing programs in a hundred years? Won't
we just tell computers what we want them to do?

There hasn't been a lot of progress in that department
so far.
My guess is that a hundred years from now people will
still tell computers what to do using programs we would recognize
as such. There may be tasks that we
solve now by writing programs and which in a hundred years
you won't have to write programs to solve, but I think
there will still be a good deal of
programming of the type that we do today.

It may seem presumptuous to think anyone can predict what
any technology will look like in a hundred years. But
remember that we already have almost fifty years of history behind us.
Looking forward a hundred years is a graspable idea
when we consider how slowly languages have evolved in the
past fifty.

Languages evolve slowly because they're not really technologies.
Languages are notation. A program is a formal description of
the problem you want a computer to solve for you. So the rate
of evolution in programming languages is more like the
rate of evolution in mathematical notation than, say,
transportation or communications.
Mathematical notation does evolve, but not with the giant
leaps you see in technology.

Whatever computers are made of in a hundred years, it seems
safe to predict they will be much faster than
they are now. If Moore's Law continues to put out, they will be 74
quintillion (73,786,976,294,838,206,464) times faster. That's kind of
hard to imagine. And indeed, the most likely prediction in the
speed department may be that Moore's Law will stop working.
Anything that is supposed to double every eighteen months seems
likely to run up against some kind of fundamental limit eventually.
But I have no trouble believing that computers will be very much
faster. Even if they only end up being a paltry million
times faster, that should change the ground rules for programming
languages substantially. Among other things, there
will be more room for what
would now be considered slow languages, meaning languages
that don't yield very efficient code.

And yet some applications will still demand speed.
Some of the problems we want to solve with
computers are created by computers; for example, the
rate at which you have to process video images depends
on the rate at which another computer can
generate them. And there is another class of problems
which inherently have an unlimited capacity to soak up cycles:
image rendering, cryptography, simulations.

If some applications can be increasingly inefficient while
others continue to demand all the speed the hardware can
deliver, faster computers will mean that languages have
to cover an ever wider range of efficiencies. We've seen
this happening already. Current implementations of some
popular new languages are shockingly wasteful by the
standards of previous decades.

This isn't just something that happens with programming
languages. It's a general historical trend. As technologies improve,
each generation can do things that the previous generation
would have considered wasteful. People thirty years ago would
be astonished at how casually we make long distance phone calls.
People a hundred years ago would be even more astonished that
a package would one day travel from Boston to New York via Memphis.

I can already tell you what's going to happen to all those extra
cycles that faster hardware is going to give us in the
next hundred years. They're nearly all going to be wasted.

I learned to program when computer power was scarce.
I can remember taking all the spaces out of my Basic programs
so they would fit into the memory of a 4K TRS-80. The
thought of all this stupendously inefficient software
burning up cycles doing the same thing over and over seems
kind of gross to me. But I think my intuitions here are wrong. I'm
like someone who grew up poor, and can't bear to spend money
even for something important, like going to the doctor.

Some kinds of waste really are disgusting. SUVs, for example, would
arguably be gross even if they ran on a fuel which would never
run out and generated no pollution. SUVs are gross because they're
the solution to a gross problem. (How to make minivans look more
masculine.)
But not all waste is bad. Now that we have the infrastructure
to support it, counting the minutes of your long-distance
calls starts to seem niggling. If you have the
resources, it's more elegant to think of all phone calls as
one kind of thing, no matter where the other person is.

There's good waste, and bad waste. I'm interested
in good waste-- the kind where, by spending more, we can get
simpler designs. How will we take advantage of the opportunities
to waste cycles that we'll get from new, faster hardware?

The desire for speed is so deeply engrained in us, with
our puny computers, that it will take a conscious effort
to overcome it. In language design, we should be consciously seeking out
situations where we can trade efficiency for even the
smallest increase in convenience.

Most data structures exist because of speed. For example,
many languages today have both strings and lists. Semantically, strings
are more or less a subset of lists in which the elements are
characters. So why do you need a separate data type?
You don't, really. Strings only
exist for efficiency. But it's lame to clutter up the semantics
of the language with hacks to make programs run faster.
Having strings in a language seems to be a case of
premature optimization.

If we think of the core of a language as a set of axioms,
surely it's gross to have additional axioms that add no expressive
power, simply for the sake of efficiency. Efficiency is
important, but I don't think that's the right way to get it.

The right way to solve that problem, I think, is to separate
the meaning of a program from the implementation details.
Instead of having both lists and strings, have just lists,
with some way to give the compiler optimization advice that
will allow it to lay out strings as contiguous bytes if
necessary.

Since speed doesn't matter in most of a program, you won't
ordinarily need to bother with
this sort of micromanagement.
This will be more and more true as computers get faster.

Saying less about implementation should also make programs
more flexible.
Specifications change while a program is being written, and this is not
only inevitable, but desirable.

The word "essay" comes
from the French verb "essayer", which means "to try".
An essay, in the original sense, is something you
write to try to figure something out. This happens in
software too. I think some of the best programs were essays,
in the sense that the authors didn't know when they started
exactly what they were trying to write.

Lisp hackers already know about the value of being flexible
with data structures. We tend to write the first version of
a program so that it does everything with lists. These
initial versions can be so shockingly inefficient that it
takes a conscious effort not to think about what they're
doing, just as, for me at least, eating a steak requires a
conscious effort not to think where it came from.

What programmers in a hundred years will be looking for, most of
all, is a language where you can throw together an unbelievably
inefficient version 1 of a program with the least possible
effort. At least, that's how we'd describe it in present-day
terms. What they'll say is that they want a language that's
easy to program in.

Inefficient software isn't gross. What's gross is a language
that makes programmers do needless work. Wasting programmer time
is the true inefficiency, not wasting machine time. This will
become ever more clear as computers get faster.

I think getting rid of strings is already something we
could bear to think about. We did it in Arc, and it seems
to be a win; some operations that would be awkward to
describe as regular expressions can be described
easily as recursive functions.

How far will this flattening of data structures go? I can think
of possibilities that shock even me, with my conscientiously broadened
mind. Will we get rid of arrays, for example? After all, they're
just a subset of hash tables where the keys are vectors of
integers. Will we replace hash tables themselves with lists?

There are more shocking prospects even than that. The Lisp
that McCarthy described in 1960, for example, didn't
have numbers. Logically, you don't need to have a separate notion
of numbers, because you can represent them as lists: the integer
n could be represented as a list of n elements. You can do math this
way. It's just unbearably inefficient.

No one actually proposed implementing numbers as lists in
practice. In fact, McCarthy's 1960 paper was not, at the time,
intended to be implemented at all. It was a theoretical exercise,
an attempt to create a more elegant alternative to the Turing
Machine. When someone did, unexpectedly, take this paper and
translate it into a working Lisp interpreter, numbers certainly
weren't represented as lists; they were represented in binary,
as in every other language.

Could a programming language go so far as to get rid of numbers
as a fundamental data type? I ask this not so much as a serious
question as as a way to play chicken with the future. It's like
the hypothetical case of an irresistible force meeting an
immovable object-- here, an unimaginably inefficient
implementation meeting unimaginably great resources.
I don't see why not. The future is pretty long. If there's
something we can do to decrease the number of axioms in the core
language, that would seem to be the side to bet on as t approaches
infinity. If the idea still seems unbearable in a hundred years,
maybe it won't in a thousand.

Just to be clear about this, I'm not proposing that all numerical
calculations would actually be carried out using lists. I'm proposing
that the core language, prior to any additional notations about
implementation, be defined this way. In practice any program
that wanted to do any amount of math would probably represent
numbers in binary, but this would be an optimization, not part of
the core language semantics.

Another way to burn up cycles is to have many layers of
software between the application and the hardware. This too is
a trend we see happening already: many recent languages are
compiled into byte code. <NAME> once told me that,
as a rule of thumb, each layer of interpretation costs a
factor of 10 in speed. This extra cost buys you flexibility.

The very first version of Arc was an extreme case of this sort
of multi-level slowness, with corresponding benefits. It
was a classic "metacircular" interpreter written
on top of Common Lisp, with a definite family resemblance
to the eval function defined in McCarthy's original Lisp paper.
The whole thing was only a couple hundred lines of
code, so it was very easy to understand and change. The
Common Lisp we used, CLisp, itself runs on top
of a byte code interpreter. So here we had two levels of
interpretation, one of them (the top one) shockingly inefficient,
and the language was usable. Barely usable, I admit, but
usable.

Writing software as multiple layers is a powerful technique
even within applications. Bottom-up programming means writing
a program as a series of layers, each of which serves as a
language for the one above. This approach tends to yield
smaller, more flexible programs. It's also the best route to
that holy grail, reusability. A language is by definition
reusable. The more
of your application you can push down into a language for writing
that type of application, the more of your software will be
reusable.

Somehow the idea of reusability got attached
to object-oriented programming in the 1980s, and no amount of
evidence to the contrary seems to be able to shake it free. But
although some object-oriented software is reusable, what makes
it reusable is its bottom-upness, not its object-orientedness.
Consider libraries: they're reusable because they're language,
whether they're written in an object-oriented style or not.

I don't predict the demise of object-oriented programming, by the
way. Though I don't think it has much to offer good programmers,
except in certain specialized domains, it is irresistible to
large organizations. Object-oriented programming
offers a sustainable way to write spaghetti code. It lets you accrete
programs as a series of patches.
Large organizations
always tend to develop software this way, and I expect this
to be as true in a hundred years as it is today.
As long as we're talking about the future, we had better
talk about parallel computation, because that's where this
idea seems to live. That is, no matter when you're talking, parallel
computation seems to be something that is going to happen
in the future.

Will the future ever catch up with it? People have been
talking about parallel computation as something imminent
for at least 20
years, and it hasn't affected programming practice much so far.
Or hasn't it? Already
chip designers have to think about it, and so must
people trying to write systems software on multi-cpu computers.

The real question is, how far up the ladder of abstraction will
parallelism go?
In a hundred years will it affect even application programmers? Or
will it be something that compiler writers think about, but
which is usually invisible in the source code of applications?

One thing that does seem likely is that most opportunities for
parallelism will be wasted. This is a special case of my more
general prediction that most of the extra computer power we're
given will go to waste. I expect that, as with the stupendous
speed of the underlying hardware, parallelism will be something
that is available if you ask for it explicitly, but ordinarily
not used. This implies that the kind of parallelism we have in
a hundred years will not, except in special applications, be
massive parallelism. I expect for
ordinary programmers it will be more like being able to fork off
processes that all end up running in parallel.

And this will, like asking for specific implementations of data
structures, be something that you do fairly late in the life of a
program, when you try to optimize it. Version 1s will ordinarily
ignore any advantages to be got from parallel computation, just
as they will ignore advantages to be got from specific representations
of data.

Except in special kinds of applications, parallelism won't
pervade the programs that are written in a hundred years. It would be
premature optimization if it did.

How many programming languages will there
be in a hundred years? There seem to be a huge number of new
programming languages lately. Part of the reason is that
faster hardware has allowed programmers to make different
tradeoffs between speed and convenience, depending on the
application. If this is a real trend, the hardware we'll
have in a hundred years should only increase it.

And yet there may be only a few widely-used languages in a
hundred years. Part of the reason I say this
is optimism: it seems that, if you did a really good job,
you could make a language that was ideal for writing a
slow version 1, and yet with the right optimization advice
to the compiler, would also yield very fast code when necessary.
So, since I'm optimistic, I'm going to predict that despite
the huge gap they'll have between acceptable and maximal
efficiency, programmers in a hundred years will have languages
that can span most of it.

As this gap widens, profilers will become increasingly important.
Little attention is paid to profiling now. Many people still
seem to believe that the way to get fast applications is to
write compilers that generate fast code. As the gap between
acceptable and maximal performance widens, it will become
increasingly clear that the way to get fast applications is
to have a good guide from one to the other.

When I say there may only be a few languages, I'm not including
domain-specific "little languages". I think such embedded languages
are a great idea, and I expect them to proliferate. But I expect
them to be written as thin enough skins that users can see
the general-purpose language underneath.

Who will design the languages of the future? One of the most exciting
trends in the last ten years has been the rise of open-source
languages like Perl, Python, and Ruby.
Language design is being taken over by hackers. The results
so far are messy, but encouraging. There are some stunningly
novel ideas in Perl, for example. Many are stunningly bad, but
that's always true of ambitious efforts. At its current rate
of mutation, God knows what Perl might evolve into in a hundred
years.

It's not true that those who can't do, teach (some of the best
hackers I know are professors), but it is true that there are a
lot of things that those who teach can't do. Research imposes
constraining caste restrictions. In any academic
field there are topics that are ok to work on and others that
aren't. Unfortunately the distinction between acceptable and
forbidden topics is usually based on how intellectual
the work sounds when described in research papers, rather than
how important it is for getting good results. The extreme case
is probably literature; people studying literature rarely
say anything that would be of the slightest use to those
producing it.

Though the situation is better in the sciences,
the overlap between the kind of work you're allowed to do and the
kind of work that yields good languages is distressingly small.
(<NAME> has grumbled eloquently
about this.) For example, types seem to be an inexhaustible source
of research papers, despite the fact that static typing
seems to preclude true macros-- without which, in my opinion, no
language is worth using.

The trend is not merely toward languages being developed
as open-source projects rather than "research", but toward
languages being designed by the application programmers who need
to use them, rather than by compiler writers. This seems a good
trend and I expect it to continue.
Unlike physics in a hundred years, which is almost necessarily
impossible to predict, I think it may be possible in principle
to design a language now that would appeal to users in a hundred
years.

One way to design a language is to just write down the program
you'd like to be able to write, regardless of whether there
is a compiler that can translate it or hardware that can run it.
When you do this you can assume unlimited resources. It seems
like we ought to be able to imagine unlimited resources as well
today as in a hundred years.

What program would one like to write? Whatever is least work.
Except not quite: whatever would be least work if your ideas about
programming weren't already influenced by the languages you're
currently used to. Such influence can be so pervasive that
it takes a great effort to overcome it. You'd think it would
be obvious to creatures as lazy as us how to express a program
with the least effort. In fact, our ideas about what's possible
tend to be so limited by whatever language we think in that
easier formulations of programs seem very surprising. They're
something you have to discover, not something you naturally
sink into.

One helpful trick here
is to use the length of the program as an approximation for
how much work it is to write. Not the length in characters,
of course, but the length in distinct syntactic elements-- basically,
the size of the parse tree. It may not be quite true that
the shortest program is the least work to write, but it's
close enough that you're better off aiming for the solid
target of brevity than the fuzzy, nearby one of least work.
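This size-in-syntactic-elements measure is easy to approximate mechanically. As a rough sketch (the two example programs are invented), Python's own ast module can count parse-tree nodes:

```python
import ast

def syntactic_size(source: str) -> int:
    """Approximate program length as the number of parse-tree nodes."""
    return sum(1 for _ in ast.walk(ast.parse(source)))

verbose = "total = 0\nfor x in xs:\n    total = total + x\n"
terse = "total = sum(xs)\n"

# The shorter formulation really is smaller by this measure.
assert syntactic_size(terse) < syntactic_size(verbose)
```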
Then the algorithm for language design becomes: look at a program
and ask, is there any way to write this that's shorter?

In practice, writing programs in an imaginary hundred-year
language will work to varying degrees depending
on how close you are to the core. Sort routines you can
write now. But it would be
hard to predict now what kinds of libraries might be needed in
a hundred years. Presumably many libraries will be for domains that
don't even exist yet. If SETI@home works, for example, we'll
need libraries for communicating with aliens. Unless of course
they are sufficiently advanced that they already communicate
in XML.

At the other extreme, I think you might be able to design the
core language today. In fact, some might argue that it was already
mostly designed in 1958.

If the hundred-year language were available today, would we
want to program in it? One way to answer this question is to
look back. If present-day programming languages had been available
in 1960, would anyone have wanted to use them?

In some ways, the answer is no. Languages today assume
infrastructure that didn't exist in 1960. For example, a language
in which indentation is significant, like Python, would not
work very well on printer terminals. But putting such problems
aside-- assuming, for example, that programs were all just
written on paper-- would programmers of the 1960s have liked
writing programs in the languages we use now?

I think so.
Some of the less imaginative ones,
who had artifacts of early languages built into their ideas of
what a program was, might have had trouble. (How can you manipulate
data without doing pointer arithmetic? How can you implement
flow charts without gotos?) But I think the smartest programmers
would have had no trouble making the most of present-day
languages, if they'd had them.

If we had the hundred-year language now, it would at least make a
great pseudocode. What about using it to write software?
Since the hundred-year language
will need to generate fast code for some applications, presumably
it could generate code efficient enough to run acceptably well
on our hardware. We might have to give more optimization advice
than users in a hundred years, but it still might be a net win.

Now we have two ideas that, if you combine them, suggest interesting
possibilities: (1) the hundred-year language could, in principle, be
designed today, and (2) such a language, if it existed, might be good to
program in today. When you see these ideas laid out like that,
it's hard not to think, why not try writing the hundred-year language
now?

When you're working on language design, I think it is good to
have such a target and to keep it consciously in mind. When you
learn to drive, one of the principles they teach you is to
align the car not by lining up the hood with the stripes painted
on the road, but by aiming at some point in the distance. Even
if all you care about is what happens in the next ten feet, this
is the right answer. I
think we can and should do the same thing with programming languages.
Notes

I believe Lisp Machine Lisp was the first language to embody
the principle that declarations (except those of dynamic variables)
were merely optimization advice,
and would not change the meaning of a correct program. Common Lisp
seems to have been the first to state this explicitly.

Thanks to <NAME>, <NAME>, and <NAME> for
reading drafts of this, and to <NAME>, <NAME>, and the
rest of the Python crew for inviting me to speak at PyCon.
You'll find this essay and 14 others in
Hackers & Painters.
|
|
https://github.com/lublak/typst-ctxjs-package | https://raw.githubusercontent.com/lublak/typst-ctxjs-package/main/typst-package/lib.typ | typst | MIT License | #let ctxjs = plugin("ctxjs.wasm")
#let string-to-bytes(data) = {
let data = data
if type(data) == str {
data = bytes(data)
} else if type(data) == array {
data = bytes(data)
} else if type(data) == content {
data = bytes(data.text)
}
data
}
#let create-context(ctxname) = {
ctxjs.create_context(string-to-bytes(ctxname))
}
#let eval(ctxname, js) = {
cbor.decode(ctxjs.eval(string-to-bytes(ctxname), string-to-bytes(js)))
}
#let call-function(ctxname, fnname, args) = {
cbor.decode(ctxjs.call_function(string-to-bytes(ctxname), string-to-bytes(fnname), cbor.encode(args)))
}
#let define-vars(ctxname, vars) = {
cbor.decode(ctxjs.define_vars(string-to-bytes(ctxname), cbor.encode(vars)))
}
#let eval-format(ctxname, js, args) = {
cbor.decode(ctxjs.eval_format(string-to-bytes(ctxname), string-to-bytes(js), cbor.encode(args)))
}
#let load-module-bytecode(ctxname, bytecode) = {
ctxjs.load_module_bytecode(string-to-bytes(ctxname), bytecode)
}
#let load-module-js(ctxname, modulename, module) = {
ctxjs.load_module_js(string-to-bytes(ctxname), string-to-bytes(modulename), string-to-bytes(module))
}
#let call-module-function(ctxname, modulename, fnname, args) = {
cbor.decode(ctxjs.call_module_function(string-to-bytes(ctxname), string-to-bytes(modulename), string-to-bytes(fnname), cbor.encode(args)))
}
#let get-module-properties(ctxname, modulename) = {
cbor.decode(ctxjs.get_module_properties(string-to-bytes(ctxname), string-to-bytes(modulename)))
} |
https://github.com/noahjutz/AD | https://raw.githubusercontent.com/noahjutz/AD/main/notizen/komplexitaet/start_value.typ | typst | #import "@preview/cetz:0.2.2"
#import "/config.typ": theme
#cetz.canvas(length: 100%, {
import cetz.draw: *
import cetz.plot
plot.plot(
axis-style: none,
y-max: 25,
name: "plot", {
plot.add(
domain: (0, 15),
style: (stroke: black),
x => x + 2 * calc.sin(x)
)
plot.add(
domain: (0, 9.6),
style: (stroke: gray),
x => calc.pow(x, 2) / 10
)
plot.add(
domain: (9.6, 15),
style: (stroke: theme.primary),
x => calc.pow(x, 2) / 10
)
plot.add-vline(
style: (stroke: (paint: gray, dash: "dashed")),
9.6
)
plot.add-anchor(
"n0",
(9.6, 0)
)
})
content("plot.n0", anchor: "north")[$n_0$]
}) |
|
https://github.com/Student-Smart-Printing-Service-HCMUT/ssps-docs | https://raw.githubusercontent.com/Student-Smart-Printing-Service-HCMUT/ssps-docs/main/contents/categories/task3/3.1.typ | typst | Apache License 2.0 | = Architecture design
== Layered architecture
#block(inset:(left:1cm))[
The HCMUT-SSPS system will use a layered architecture.
- Advantages: reduced coupling, since each layer is independent of the others, which makes extending and debugging the system simpler.
- Disadvantages: completely separating the layers is relatively hard to achieve in practice, and data flows passing through many different layers can reduce the system's processing speed.
]
=== Interface strategy
#block(inset:(left:1cm))[
The user interface is divided into multiple components, each responsible for specific functions:
- User Interaction: includes Get Order List, Upload Files, Configuring Orders, Executing Orders, Log In.
- Manager Interaction: Managing Printers, Get Logs, Log In.
The user interface is implemented with React and TypeScript, chosen for being lightweight and user-friendly, and for being approachable and type-safe for developers.
]
=== Data storage approach
#block(inset:(left:1cm))[
Data is stored on two servers:
- PostgreSQL server: stores data about print orders (Order), printers (Printer), and history logs.
- MinIO server: stores the files uploaded by users.
]
=== API management
The APIs are managed with Swagger. The system's APIs are divided into two groups:
#block(inset:(left:1cm))[
- 3rd Party: users call these APIs to use payment/top-up services (Momo, PayPal, ...) and the file storage service (MinIO).
- PostgreSQL server: students call these APIs to track their orders and their location; managers call them to edit information, record order history, and so on.
]
#pagebreak()
=== Layer diagrams
*Box-line diagram*
#figure(caption: "Box-line diagram", image("../../images/Box_line_diagram.png", fit: "contain"))
_Box-line diagram description:_
#block()[
- Built following a microservice model: it shows how customers and managers interact with each service, and how those services affect the data stores.
- Customer interactions:
#block(inset:(left:1cm))[
- Retrieve the order list by querying the database.
- Log in via third-party authentication.
- Upload files from their machine to MinIO.
- Configure the orders in the order list through the user interface and pay for them via a third-party service. After an order is paid, the payment information is stored in the database.
- Buy coins (the system's payment medium) through a third-party service (Momo, PayPal, ...).
]
- Manager interactions:
#block(inset:(left:1cm))[
- Manage printers: update, delete, and add printers, storing the adjusted printer information in the database.
- Record order history in the database.
]
]
#pagebreak()
*Deployment diagram*:
#figure(caption: "Deployment Diagram", image("../../images/Deployment_diagram.png", fit: "contain"))
_Deployment diagram description:_
#block()[
- Built to describe how the system's application and services are deployed to the Internet and reached by customers through the defined protocols.
- Customers and managers access the application interface over HTTP on port 3000.
- From the application interface, users consume the system's services, such as printing, history tracking, login, and printer management, over HTTP on port 8080.
- Those services call APIs to the data stores over HTTP, with ports assigned as follows:
#block(inset:(left:1cm))[
\u{2218} MinIO: 9000
#linebreak()
\u{2218} PostgreSQL: 5432
#linebreak()
\u{2218} 3rd Authentication: 802
#linebreak()
\u{2218} 3rd Party Payment Service: 443
]
]
|
https://github.com/TGM-HIT/typst-diploma-thesis | https://raw.githubusercontent.com/TGM-HIT/typst-diploma-thesis/main/template/chapters/conclusio.typ | typst | MIT License | #import "../lib.typ": *
= Conclusio
A final summary of the thesis is given here.
|
https://github.com/typst/packages | https://raw.githubusercontent.com/typst/packages/main/packages/preview/scholarly-tauthesis/0.4.0/template/content/A.typ | typst | Apache License 2.0 | /** A.typ
*
* This is an example appendix chapter in a multi-file typst project.
*
***/
#import "../preamble.typ": *
#import "@preview/scholarly-tauthesis:0.4.0" as tauthesis
= Example attachment <example-attachment>
The point of attachments is to let the author of a scientific work include
research-related material that would interrupt the natural flow of the main
matter or deviate from its main points. For example, large tables, long
derivations of mathematical formulae, and detailed implementations of
algorithms in a specific programming language are better placed in the
attachments.
In the main matter, it is better to present algorithmic descriptions in
pseudo-code, as this allows the presentation to omit details related to the
specific programming language being used, which makes the description more
portable. In the case of mathematical theorems, the main matter presentation
should contain at least the pre- and post-conditions of the theorem, but unless
its proof is a main point of the work, it is better to put it into the
attachments, and tell the reader which attachment it can be found in via a
cross-reference.
|
https://github.com/tiankaima/typst-notes | https://raw.githubusercontent.com/tiankaima/typst-notes/master/f4610b-ml_hw/main.typ | typst | #import "@preview/colorful-boxes:1.3.1": *
#set text(font: ("New computer Modern", "Source Han Serif SC", "Source Han Serif"))
#show raw: set text(font: "New Computer Modern Mono")
#set page(
header: [
USTC-ML-24 书面作业 #h(1fr) PB21000030 马天开
],
footer: context [
#let headings = query(selector(heading.where(level: 1)).before(here()))
#if headings.len() > 0 {
text(size: 8pt, headings.last().body)
}
#h(1fr)
#counter(page).display(
"1/1",
both: true,
)
],
)
#set heading(numbering: "1.")
#set par(first-line-indent: 1em)
#let problem = (title, it) => outline-colorbox(
title: title,
color: "blue",
radius: 2pt,
width: auto,
centering: true,
it,
)
#align(center)[
#text(weight: "bold", size: 16pt)[
Homework 1
]
]
#problem("1. Poisson Regression")[
In this question we will construct another kind of a commonly used GLM, which is called Poisson
Regression. In a GLM, the choice of the exponential family distribution is based on the kind of problem at hand. If we are solving a classification problem, then we use an exponential family distribution with support over discrete classes (such as Bernoulli or Categorical). Similarly, if the output is real valued, we can use Gaussian or Laplace (both are in the exponential family). Sometimes the desired output is to predict counts, e.g., predicting the number of emails expected in a day, or the number of customers expected to enter a store in the next hour, etc. based on input features (also called covariates). You may recall that a probability distribution with support over integers (i.e. counts) is the Poisson distribution, and it also happens to be in the exponential family.
In the following sub-problems, we will start by showing that the Poisson distribution is in the exponential family, derive the functional form of the hypothesis, derive the update rules for training models, and finally using the provided dataset train a real model and make predictions on the test set.
]
#problem("1.1")[
Consider the Poisson distribution parameterized by $lambda$:
$
p(y;lambda) = (e^(-lambda) lambda^y) / (y!)
$
(Here $y$ takes non-negative integer values and $y!$ is the factorial of $y$.) Show that the Poisson distribution is in the exponential family, and clearly state the values of $b(y), eta, T(y)$, and $alpha(eta)$.
]
Recall, an exponential family distribution is of the form:
$
p(y;bold(eta)) = b(y) exp(bold(eta) ^T T(y) - alpha(eta))
$
in this case, we have:
$
p(
y;lambda
) = underbrace(1 / (y!), b(y)) exp(underbrace(ln(lambda), eta) dot underbrace(y, T(y)) - underbrace(lambda, alpha(eta)))
$
thus, we have:
$
cases(
display(b(y) &= 1 / (y!)),
eta &= ln(lambda),
T(y) &= y,
alpha(eta) &= lambda = e^(eta)
)
$
#problem("1.2")[
Consider performing regression using a GLM with a Poisson response variable. What is the canonical response function for the family? (You may use the fact that a Poisson random variable with parameter $lambda$ has mean $lambda$.)
]
$
g(eta) = E[T(y); eta] = lambda = e^(eta)
$
#problem("1.3")[
For a training set ${(x^(i), y^(i)); i = 1, ... n}$, let the log-likelihood of an example be $log(p(y^(i) mid(|) x^(i); theta))$. By taking the derivative of the log-likelihood with respect to $theta_j$, derive the stochastic gradient descent update rule when using a GLM with Poisson responses $y$ and the canonical response function.
]
To start with, we rewrite $p(y; lambda)$ to $p(y|x; theta)$ using $lambda = e^eta = e^(theta^T x)$:
$
p(y|x; theta) = (e^(-e^(theta^T x)) (e^(theta^T x))^y) / (y!)
$
Therefore:
$
ln p(y|x; theta) = (y dot theta^T x - e^(theta^T x)) - ln(y!)
$
Considering the log-likelihood of the entire dataset:
$
partial / (partial theta_j) sum_(i=1)^(n) ln p(y^((i))|x^((i)); theta) = sum_(i=1)^(n) (
y^((i)) - e^(theta^T x^((i)))
) dot x_j^((i))
$
For stochastic gradient descent, we update the parameters as follows:
$
theta_j <- theta_j + alpha (y^((i)) - e^(theta^T x^((i)))) x_j^((i))
$
or simply put:
$
theta_j <- theta_j + alpha (y^((i)) - h_theta (x^((i)))) x_j^((i))
$
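As a sanity check, the derived update can be run directly. Below is a minimal Python sketch on a tiny synthetic dataset; the data, step size, and number of passes are all invented for illustration:

```python
from math import exp, log

def rate(theta, x):
    # canonical response: lambda = exp(theta . x)
    return exp(sum(t * xi for t, xi in zip(theta, x)))

def nll(theta, data):
    # negative log-likelihood, dropping the constant ln(y!) terms
    return sum(rate(theta, x) - y * log(rate(theta, x)) for x, y in data)

# features are (1, z); the counts roughly follow exp(0.5 + z)
data = [([1.0, 0.0], 2), ([1.0, 0.5], 3), ([1.0, 1.0], 4),
        ([1.0, 1.5], 7), ([1.0, 2.0], 12)]

theta = [0.0, 0.0]
alpha = 0.01
before = nll(theta, data)
for _ in range(500):  # passes of stochastic updates
    for x, y in data:
        lam = rate(theta, x)
        theta = [t + alpha * (y - lam) * xi for t, xi in zip(theta, x)]
after = nll(theta, data)
assert after < before  # the training NLL decreased
```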
#problem("2. Convexity of Generalized Linear Models")[
In this question we will explore and show some key properties of Generalized Linear Models, specifically those related to the use of Exponential Family Distributions to model the output.
Most commonly, GLMs are trained by using the negative log-likelihood (NLL) as the loss function. This is broadly equivalent to Maximum Likelihood Estimation (i.e., maximizing the likelihood is the same as minimizing the negative log-likelihood). In this problem, our goal is to show that the NLL loss of a GLM is a convex function w.r.t. the model parameters. As a reminder, this is convenient because a convex function is one for which any local minimum is also a global minimum, and there is extensive research on how to optimize various convex loss functions efficiently with various algorithms such as gradient descent or stochastic gradient descent.
To recap, an exponential family distribution is one whose probability density can be represented
$
p(y;eta) = b(y) exp(eta^T y - alpha(eta))
$
where $eta$ is the natural parameter of the distribution. Moreover, in a Generalized Linear Model, $eta$ is modeled as $eta^T x$, where $x in RR^d$ are the input features of the example, and $theta in RR^d$ are learnable parameters. In order to show that the NLL loss is convex for GLMs, we break down the process into sub-parts, and approach them one at a time. Our approach is to show that the second derivative (i.e., Hessian) of the loss w.r.t. the model parameters is Positive Semi-Definite (PSD) at all values of the model parameters. We will also show some nice properties of Exponential Family distributions as intermediate steps.
For the sake of convenience we restrict ourselves to the case where η is a scalar. Assume
$
p(Y|X;theta) ~ "ExponentialFamily"(eta)
$
where $eta in RR$ is a scalar and $T(y)=y$. This makes the exponential family representation take the form
$
p(y;eta) = b(y) exp(eta y - alpha(eta))
$
]
#problem("2.1")[
Derive an expression for the mean of the distribution. Show that $EE[y;eta]=partial/(partial eta) alpha(eta)$ (note that $EE[Y;eta]=EE[Y|X;eta]$ since $eta=theta^T x$). In other words, show that the mean of an exponential family distribution is the first derivative of the log-partition function with respect to the natural parameter ($eta$).
*Hint*: Start by observing that:
$
partial / (partial eta) integral p(y;eta)dif y = integral partial / (partial eta) p(y;eta)dif y
$
]
Notice that $forall eta, integral p(y;eta) dif y = 1$, thus:
$
partial / (partial eta) integral p(y;eta) dif y = 0
$
// I'm still a bit confused on interchanging the derivative and integral sign, but I'll try to work it out.
For the right hand side, just plug in $p(y; eta)$:
$
integral partial / (partial eta) p(y;eta) dif y &= integral partial / (partial eta) (
b(y) exp(eta^T y - alpha(eta))
) dif y \
&= integral p(y; eta) (y - (partial / (partial eta) alpha(eta))) dif y \
&= integral y dot p(y; eta) dif y - integral (partial / (partial eta) alpha(eta) p(y; eta)) dif y \
&= EE[y; eta] - partial / (partial eta) alpha(eta)\
&= 0 quad qed
$
#problem("2.2")[
Next, derive an expression for the variance of the distribution. In particular, show that $display("Var"(Y) = (partial^2 alpha(eta))/(partial eta^2))$ (again, note that $"Var"(Y) = "Var"(Y|X; theta)$). In other words, show that the variance of an exponential family distribution is the second derivative of the log-partition function with respect to the natural parameter ($eta$).
*Hint*: Building upon the result of the previous sub-problem can simplify the derivation.
]
// Thinking it's relationship to a generating function:
Using the same idea as before:
$
(partial^2) / (partial eta^2) integral p(y; eta) dif y = integral (partial^2) / (partial eta^2) p(y; eta) dif y
$
For the right hand side, we have:
$
partial^2 / (partial eta^2) p(y; eta) &= partial / (partial eta) (p(y; eta) (y - partial / (partial eta) alpha(eta))) \
&=p(y; eta)(y - partial / (partial eta) alpha(eta))^2 - p(y; eta) partial^2 / (partial eta^2) alpha(eta) \
&=p(y;eta)(
y^2 - 2y partial / (partial eta) alpha(eta) + (partial / (partial eta) alpha(eta))^2 - partial^2 / (partial eta^2) alpha(eta)
)\
$
Integrate both sides with respect to $y$:
$
0 & = EE[y^2;eta] - 2 EE[y;eta] partial / (partial eta) alpha(eta) + (
partial / (partial eta) alpha(eta)
)^2 - partial^2 / (partial eta^2) alpha(eta) \
$
Plug in $display(EE[y;eta] = (partial)/(partial eta) alpha(eta))$:
$
EE[y^2;eta] = (partial / (partial eta) alpha(eta))^2 + (partial^2) / (partial eta^2) alpha(eta)
$
Subtracting $(EE[y;eta])^2$, the variance is $display("Var"(Y) = (partial^2) / (partial eta^2) alpha(eta))$.
#problem("2.3")[
Finally, write out the loss function $cal(l)(theta)$, the NLL of the distribution, as a function of $theta$. Then, calculate the Hessian of the loss with respect to $theta$, and show that it is always positive semi-definite. This concludes that the NLL loss of a GLM is convex.
*Hint1*: Use the chain rule of calculus along with the results of the previous parts to simplify your derivations.
*Hint2*: Recall that variance of any probability distribution is always non-negative.
]
The loss function is:
$
cal(l)(theta) = -sum_(i=1)^(n) ln p(y^((i))|x^((i)); theta)
$
Let's first calculate $display((partial)/(partial theta_j)p(y^((i))|x^((i));theta))$:
$
(partial) / (partial theta_j)p(
y^((i))|x^((i));theta
) &= (partial p) / (partial eta) (partial eta) / (partial theta_j) \
&= p(y^((i))|x^((i)); theta)(y^((i))-((partial alpha(eta)) / (partial eta))_(eta = theta^T x^((i))))x_j^((i))
$
Notice here $eta = theta^T x^((i))$. Thus,
$
(partial cal(l)) / (partial theta_j) &= - sum_(i=1)^n 1 / (p(y^((i))|x^((i));theta)) dot p(y^((i))|x^((i));theta) (
y^((i)) - lr((partial alpha(eta)) / (partial eta)|)_(eta = theta^T x^((i)))
)x_j^((i)) \
&= - sum_(i=1)^n (y^((i)) - lr((partial alpha(eta)) / (partial eta)|)_(eta = theta^T x^((i)))) x_j^((i))
$
One step further, we calculate $display((partial^2 cal(l))/(partial theta_j partial theta_k))$:
$
(partial^2 cal(l)) / (partial theta_j partial theta_k) = sum_(i=1)^n lr(
(partial^2 alpha(eta)) / (partial eta^2)|
)_(eta = theta^T x^((i))) x_j^((i)) x_k^((i))
$
The Hessian matrix is:
$
H = (H_(j k)) = (partial^2 cal(l)) / (partial theta_j partial theta_k)
$
To show it's always positive semi-definite, we define $X = (x_(i j)) = (x_j^((i)))$, and $display(A = (a_(i j)) = lr((partial^2 alpha(eta))/(partial eta^2)|)_(eta = theta^T x^((i))) dot delta_(i j))$, then we have:
$
H = X^T A X
$
Notice that $A_(i i) = "Var"(Y^((i))) >= 0$ (here $Y^((i))$ is a random variable with distribution $p(y^((i))|x^((i));theta)$), so the diagonal matrix $A$ is positive semi-definite. To show that $H = X^T A X$ is also positive semi-definite, use the definition: for all $y in RR^n$,
$
y^T H y = y^T X^T A X y = (X y)^T A (X y) >= 0
$
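The conclusion can also be checked numerically. Below is a small Python sketch for the Poisson case, where the second derivative of the log-partition function is exp(eta), so every diagonal weight of A is non-negative; the data, parameters, and test vectors are invented:

```python
from math import exp

X = [[1.0, 0.5], [1.0, -1.0], [1.0, 2.0]]  # one row per example
theta = [0.3, -0.2]

def hessian(X, theta):
    # H = X^T A X with A_ii = exp(theta . x_i) = Var(Y_i) >= 0
    n = len(theta)
    H = [[0.0] * n for _ in range(n)]
    for x in X:
        w = exp(sum(t * xi for t, xi in zip(theta, x)))
        for j in range(n):
            for k in range(n):
                H[j][k] += w * x[j] * x[k]
    return H

H = hessian(X, theta)
# z^T H z = sum_i w_i (x_i . z)^2 >= 0 for every direction z
for z in ([1.0, 0.0], [0.0, 1.0], [1.0, -2.0], [-0.7, 0.3]):
    quad = sum(z[j] * H[j][k] * z[k] for j in range(2) for k in range(2))
    assert quad >= 0.0
```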
#problem("3. Multivariate Least Squares")[
So far in class, we have only considered cases where our target variable y is a scalar value. Suppose that instead of trying to predict a single output, we have a training set with multiple outputs for each example:
$
{(x^((i)), y^((i))), i = 1, ... m}, quad x^((i)) in RR^n, y^((i)) in RR^p
$
Thus for each training example, $y^((i))$ is vector-valued, with $p$ entries. We wish to use a linear model to predict the outputs, as in least squares, by specifying the parameter matrix $Theta$ in:
$
y = Theta^T x
$
where $Theta in RR^(n times p)$.
]
#problem("3.1")[
The cost function for this case is:
$
J(Theta) = 1 / 2 sum_(i=1)^m sum_(j=1)^p ((Theta^T x^((i)))_j - y^((i))_j)^2
$
Write $J(Theta)$ in matrix-vector notation (i.e., without any summations).
*Hint*: Start with the $m times n$ design matrix
$ X = mat((x^((1)))^T; (x^((2)))^T; ...; (x^((m)))^T) $
and the $m times p$ target matrix:
$ Y = mat((y^((1)))^T; (y^((2)))^T; ...; (y^((m)))^T) $
and then work out how to express $J(Theta)$ in terms of these matrices.
]
Let's begin by "guessing" the form:
$
J(Theta) &= 1 / 2 sum_(i=1)^m norm(Theta^T x^((i)) - y^((i)))_2^2\
&= 1 / 2 sum_(i=1)^m (Theta^T x^((i)) - y^((i)))^T (Theta^T x^((i)) - y^((i)))\
&= (...)\
&= 1 / 2 tr((X Theta - Y) (X Theta - Y)^T)\
$
Now prove the guess:
$
J(Theta) &= 1 / 2 tr((X Theta - Y) (X Theta - Y)^T)\
&= 1 / 2 sum_(i=1)^m ((X Theta - Y) (X Theta - Y)^T)_(i i)\
&= 1 / 2 sum_(i=1)^m (sum_(j=1)^p (X Theta - Y)_(i j) (X Theta - Y)_(j i)^T)\
&= 1 / 2 sum_(i=1)^m sum_(j=1)^p (X Theta - Y)_(i j)^2 \
$
Notice:
$
Y_(i j) &= y^((i))_j \
(X Theta)_(i j) &= sum_(k=1)^n X_(i k) Theta_(k j) = sum_(k=1)^n x_k^((i)) Theta_(k j) \
&= (Theta^T x^((i)))_j
$
Thus, we have:
$
J(Theta) &= 1 / 2 sum_(i=1)^m sum_(j=1)^p ((Theta^T x^((i)))_j - y^((i))_j)^2 quad qed
$
#problem("3.2")[
Find the closed form solution for $Theta$ which minimizes $J(Theta)$. This is equivalent to the normal equations for the multivariate case.
]
Note that $tr(A^T A) = tr(A A^T)$, thus:
$
J(Theta) = 1 / 2 tr((X Theta - Y) (X Theta - Y)^T) = 1 / 2 tr((Theta^T X^T - Y^T) (X Theta - Y))
$
Take the gradient with respect to $Theta$:
$
nabla_Theta J(Theta) &= 1 / 2 nabla_Theta tr(Theta^T X^T X Theta - Theta^T X^T Y - Y^T X Theta + Y^T Y)\
&=1 / 2 nabla_Theta tr(Theta^T X^T X Theta - 2 Theta^T X^T Y + Y^T Y)\
&=1 / 2(X^T X Theta + X^T X Theta - 2 X^T Y)\
&=X^T X Theta - X^T Y\
$
Setting $nabla_Theta J(Theta) = 0$, we obtain:
$
Theta = (X^T X)^(-1) X^T Y
$
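The closed form can be verified on a tiny example. Below is a pure-Python sketch (the dataset is invented) that forms the solution and checks that the gradient X^T X Theta - X^T Y vanishes at it:

```python
def transpose(A):
    return [list(col) for col in zip(*A)]

def matmul(A, B):
    Bt = transpose(B)
    return [[sum(a * b for a, b in zip(row, col)) for col in Bt] for row in A]

def inv2(M):
    # closed-form inverse of a 2x2 matrix
    (a, b), (c, d) = M
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

X = [[1.0, 0.0], [1.0, 1.0], [1.0, 2.0]]  # m = 3 examples, n = 2 features
Y = [[1.0, 2.0], [2.0, 3.0], [2.0, 5.0]]  # p = 2 outputs per example

Xt = transpose(X)
Theta = matmul(inv2(matmul(Xt, X)), matmul(Xt, Y))

# At the minimizer, the gradient X^T X Theta - X^T Y is (numerically) zero.
lhs = matmul(matmul(Xt, X), Theta)
rhs = matmul(Xt, Y)
grad = [[lhs[i][j] - rhs[i][j] for j in range(2)] for i in range(2)]
```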
#problem("3.3")[
Suppose that instead of considering the multivariate vectors $y^((i))$ all at once, we compute each variable $y_j^((i))$ separately for each $j=1,...,p$. In this case, we have $p$ individual linear models, of the form
$
y_j^((i)) = theta_j^T x^((i)), quad j = 1, ..., p
$
(So here, each $theta_j in RR^n$.) How do the parameters from these $p$ independent least squares problems compare to the multivariate soulution?
]
For each subproblem $j$, we contrust a vector $y_j$ as the following:
$
y_j = mat(y_j^((1)); y_j^((2)); ...; y_j^((m))); quad j = 1, ..., p
$
which then leads us to the $j$-th least squares problem:
$
J(theta_j) = 1 / 2 norm(X theta_j - y_j)_2^2
$
which has the closed form solution:
$
theta_j = (X^T X)^(-1) X^T y_j
$
Lining up all the $theta_j$, we have:
$
mat(theta_1, theta_2, ..., theta_p) &= mat((X^T X)^(-1) X^T y_1, (X^T X)^(-1) X^T y_2, ..., (X^T X)^(-1) X^T y_p) \
&= (X^T X)^(-1) X^T mat(y_1, y_2, ..., y_p) \
&= (X^T X)^(-1) X^T Y\
&= Theta
$
#box[
#outline-colorbox(color: "green", title: "Note")[
Another way of looking at this is to look at the overall $J(Theta)$. This was skipped in the previous discussion, but $J(Theta)$ is just the sum of $(X Theta - Y)_(i j)^2$ over all $i$ and $j$ (which is why $tr(A A^T) = tr(A^T A)$ was mentioned: the two traces are just different orderings of the same squared sums).
With this perspective, the independence of each $theta_j$ is easy to see: we can re-write the sum to run over $j = 1, ..., p$ first instead of $i$, which is to say:
$
J(Theta) = 1 / 2 sum_(j=1)^p sum_(i=1)^m ((Theta^T x^((i)))_j - y^((i))_j)^2
$
which is essentially:
$
J(Theta) = sum_(j=1)^p J(theta_j)
$
Then you'll easily see that:
$
nabla_(theta_i) nabla_(theta_j) J(Theta) = 0, quad i != j
$
For a $Theta$ to satisfy $nabla_Theta J(Theta) = 0$, one must have $nabla_(theta_j) J(theta_j) = 0$ for all $j = 1, ..., p$, which gives exactly the same solution as the multivariate case.
This doesn't count as a serious proof, but it's a good way to understand the problem.
]
]
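The equivalence between the stacked per-column solutions and the multivariate solution is also easy to confirm numerically. This sketch uses made-up random data; the loop solves the $p$ single-output problems independently:

```python
import numpy as np

rng = np.random.default_rng(1)
m, n, p = 60, 5, 4
X = rng.normal(size=(m, n))
Y = rng.normal(size=(m, p))

# Multivariate solution: all p outputs at once.
Theta = np.linalg.solve(X.T @ X, X.T @ Y)

# p independent single-output least-squares problems, one per column of Y.
Theta_cols = np.column_stack(
    [np.linalg.solve(X.T @ X, X.T @ Y[:, j]) for j in range(p)]
)

# Stacking the per-column solutions recovers the multivariate one.
assert np.allclose(Theta, Theta_cols)
```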
#problem("4. Incomplete, Positive-Only Labels")[
In this problem we will consider training binary classifiers in situations where we do not have full access to the labels. In particular, we consider a scenario, which is not too infrequent in real life, where we have labels only for a subset of the positive examples. All the negative examples and the rest of the positive examples are unlabelled.
We formalize the scenario as follows: Let ${x^((i)), t^((i))}_(i=1)^n$ be a standard dataset of i.i.d. examples. Here $x^((i))$'s are the inputs/features and $t^((i))$'s are the labels. Now consider the situation where $t^((i))$'s are not observed by us. Instead, we only observe the labels of some of the positive examples. Concretely, we assume that we observe $y^((i))$'s that are generated by
$
forall x,quad p(y^((i))=1 | t^((i))=1, x^((i))=x)=alpha\
forall x,quad p(y^((i))=0 | t^((i))=1, x^((i))=x)=1-alpha\
forall x,quad p(y^((i))=0 | t^((i))=0, x^((i))=x)=1\
forall x,quad p(y^((i))=1 | t^((i))=0, x^((i))=x)=0
$
where $alpha in (0,1)$ is some unknown scalar. In other words, if the unobserved "true" label $t^((i))$ is $1$, then with $alpha$ chance we observe a label $y^((i))=1$. On the other hand, if the unobserved "true" label $t^((i))$ is $0$, then we always observe $y^((i))=0$.
Our final goal in the problem is to construct a binary classifier $h$ of the true label $t$, with only access to the partial label $y$. In other words, we want to construct $h$ such that $h(x^((i))) approx p(t^((i))=1 | x^((i)))$ as closely as possible, using only $x$ and $y$.
In the following sub-questions, we will attempt to solve the problem with only partial observations. That is, we only have access to ${(x^((i)), y^((i)))}_(i=1)^n$, and will try to predict $p(t^((i))=1 | x^((i)))$.
]
#problem("4.1")[
Show that under our assumptions, for any $i$,
$
p(t^((i)) = 1| y^((i))=1, x^((i))) = 1
$
That is, observing a positive partial label $y^((i))=1$ tells us for sure the hidden true label is $1$. Use Bayes rule to derive this (an informal explanation will not earn credit).
]
Using Bayes rule:
$
p(t^((i))=1 | y^((i))=1, x^((i))) &= (p(t^((i)) =1 , y^((i)) = 1, x^((i)))) / (p(y^((i))=1 , x^((i))))\
&= (p(t^((i)) =1 , y^((i)) = 1, x^((i)))) / (p(
t^((i)) =1 , y^((i)) = 1, x^((i))
) + underbrace(p(
t^((i)) = 0 , y^((i)) = 1, x^((i)))
, "0")) \
&= 1
$
Note:
$
p(t^((i)) = 0, y^((i)) = 1, x^((i))) = p(y^((i))=1 | t^((i)) = 0, x^((i))) dot p(t^((i)) = 0, x^((i))) = 0
$
#problem("4.2")[
Show that for any example, the probability that the true label $t^((i))$ is positive is $1\/alpha$ times the probability that the partial label is positive. That is, show that:
$
p(t^((i)) = 1 | x^((i))) = 1 / alpha p(y^((i)) = 1 | x^((i)))
$
Note that the equation above suggests that if we know the value of $alpha$, then we can convert a function $h(dot)$ that approximately predicts the probability $h(x^((i))) approx p(y^((i))=1 | x^((i)))$ into a function that approximately predicts $p(t^((i))=1 | x^((i)))$ by multiplying by the factor $1\/alpha$.
]
$
p(y^((i)) = 1 | x^((i))) &= p(t^((i)) = 1, y^((i)) = 1 | x^((i))) \
&= p(t^((i)) = 1 | x^((i))) dot p(y^((i)) = 1 | t^((i)) = 1, x^((i))) \
&= p(t^((i)) = 1 | x^((i))) dot alpha quad qed
$
#problem("4.3")[
The solution to estimate $p(t^((i)) | x^((i)))$ outlined in the previous sub-question requires the knowledge of $alpha$ which we don't have. Now we will design a way to estimate $alpha$ based on the function $h(dot)$ that approximately predicts $p(y^((i)) = 1 | x^((i)))$.
To simplify the analysis, let's assume that we have magically obtained a function $h(x)$ that perfectly predicts the value of $p(y^((i)) = 1 | x^((i)))$, that is, $h(x^((i))) = p(y^((i)) = 1 | x^((i)))$.
We make the crucial assumption that $p(t^((i)) = 1 | x^((i))) in {0,1}$. This assumption means the process of generating the "true" label $t^((i))$ is a noise-free process. This assumption is not unreasonable to make. Note, we are NOT assuming that the observed label $y^((i))$ is noise-free, which would be an unreasonable assumption!
Now we will show that:
$
alpha = EE[h(x^((i))) | y^((i)) = 1]
$
To show this, prove that $h(x^((i))) = alpha$ when $y^((i)) = 1$.
(The above result motivates the following algorithm to estimate $alpha$ by estimating the RHS of the equation above using samples: Let $V_+$ be the set of labeled (and hence positive) examples in the validation set $V$, given by $V_+ = {x^((i)) in V | y^((i)) = 1}$. Then we use $alpha approx 1 / abs(V_+) sum_(x^((i)) in V_+) h(x^((i)))$.)
]
This requires some probability theory techniques. Recall that for any random variable $Z$ and any event $A$ with $p(A) > 0$:
$
EE[Z | A] = EE[Z bb(1)_A] / p(A)
$
$
EE[h(x^((i))) | y^((i)) = 1] &= EE[h(x^((i))) bb(1)_(y^((i)) = 1)] / p(y^((i)) = 1) \
&= EE[EE[bb(1)_(y^((i))= 1) | x^((i))] dot h(x^((i)))] / EE[p(y^((i)) = 1 | x^((i)))] \
&=EE[h(x^((i))) dot h(x^((i)))] / EE[h(x^((i)))] \
$
Note that:
$
h(x^((i))) = alpha dot p(t^((i)) = 1 | x^((i))) in {0, alpha}
$
Assuming $p(h(x^((i))) = alpha) = beta$, we have:
$
EE[h(x^((i))) | y^((i)) = 1] = (beta dot alpha^2) / (beta dot alpha) = alpha quad qed
$ |
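To make the identities from 4.2 and 4.3 concrete, here is a small simulation. The data-generating process, the threshold rule for $t$, and the oracle $h$ are all illustrative assumptions (in practice $h$ would be a trained classifier):

```python
import numpy as np

rng = np.random.default_rng(2)
alpha = 0.3
n = 200_000

x = rng.normal(size=n)
t = (x > 0).astype(int)            # noise-free true label: p(t=1|x) in {0, 1}
y = t * rng.binomial(1, alpha, n)  # a positive is labelled with probability alpha

# Oracle h(x) = p(y=1|x) = alpha * p(t=1|x), plugged in exactly to match
# the assumption h(x^(i)) = p(y^(i)=1 | x^(i)) above.
h = alpha * t

# 4.3: averaging h over the labelled positives recovers alpha.
alpha_hat = h[y == 1].mean()
assert np.isclose(alpha_hat, alpha)

# 4.2: p(t=1) = (1/alpha) * p(y=1), checked on empirical frequencies.
assert np.isclose(t.mean(), y.mean() / alpha, atol=0.01)
```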
|
https://github.com/jgm/typst-hs | https://raw.githubusercontent.com/jgm/typst-hs/main/test/typ/layout/par-indent-03.typ | typst | Other | #set par(hanging-indent: 1em)
Welcome \ here. Does this work well?
|
https://github.com/QuadnucYard/pigeonful-typ | https://raw.githubusercontent.com/QuadnucYard/pigeonful-typ/main/examples/kfc.typ | typst | MIT License | #import "../src/lib.typ": pigeonful
#set page(width: auto, height: auto, margin: 4pt)
#pigeonful(
entries: (
"层次": [个人练习生],
"单位": [加里敦大学],
"院系": [肯德基学院],
"专业": [疯狂星期四工程(Vme50)],
"学习方式": [全日制],
"研究方向": [吮指原味鸡烹饪自动化],
"厨师": [不区分厨师],
"专项计划类型": [非专项计划],
"就业类型": [非定向就业],
),
notifier: [加里敦大学 招生办 2022-09-28 12:42],
notice-body: [加里敦大学肯德基学院已在教育部推免服务系统中完成待录取通知发送工作,请您尽快在推免服务系统中确认录取。逾期未确认者,将视为放弃我院的待录取资格。],
acceptance: [您于9月28日 12:44接受了加里敦大学的待录取通知],
width: 810pt,
)
|
https://github.com/jijinbei/typst_template | https://raw.githubusercontent.com/jijinbei/typst_template/main/manual/example/example.typ | typst | #import "../template.typ": *
#show: doc => conf(title: [トリセツ], doc)
= はじめに
#grid(columns: (50%, 50%), gutter: 3%, align: horizon, [
この度は当製品をお選びいただき、誠にありがとうございます。 ご使用前に、以下の取扱説明書を必ずお読みください。
適切かつ優しくご使用いただくことで、製品を長くお使いいただけます。
], image("image\setsumeisyo.png"))
#encloseText[商品の特徴]
当製品は一点物であり、返品や交換はお受けできません。予めご了承ください。
= 注意事項
#caution[
- 急に不機嫌になることがあります。具体的な理由を尋ねても答えない場合がありますが、放置すると怒りが増すことがあります。常に丁寧に対応してください。
- 定期的に褒めることで、製品の状態が良好に保たれます。些細な変化にも注意を払い、適切なフィードバックを行ってください。ただし、否定的なコメントは避けてください。
]
= メンテナンス
- 長期間ご愛用いただくためには、初めて製品と出逢った日のことを思い出し、感謝の気持ちを忘れないでください。
= 保証
- 当製品は永久保証が付いています。長期間にわたって安心してご使用いただけます。
#v(3em)
= FAQ
#QandA(Q: [不機嫌になったときの対処法は?], A: [
意外と単純なことで喜ぶことがあります。 非日常的なプレゼントやサプライズが効果的です。 時には手紙を添えることで、さらに喜ばれます。
])
= その他の推奨事項
== 感情的になってしまった場合
+ 優しく対応して、強く抱きしめてください。これにより、状態が改善されることがあります。
== 新しい環境への連れ出し
+ 特別な日にはおしゃれなディナーでお楽しみください。
+ 広い心と深い愛で、すべてを受け止めてください。
= おわりに
どうぞこれからもよろしくお願い申し上げます。
|
|
https://github.com/normanlorrain/mdtypst | https://raw.githubusercontent.com/normanlorrain/mdtypst/main/mdtypst/README.md | markdown | Apache License 2.0 | # NOT FOR RELEASE. Work In Progress (porting from mdpdf project)
# Some notes
- [CommonMark](https://commonmark.org/)
- [Python CommonMark parser](https://github.com/readthedocs/commonmark.py), [documentation](https://commonmarkpy.readthedocs.io/en/latest/?badge=latest)
- [design of commonmark library](https://github.com/commonmark/commonmark.js)
- [Markdown Cheatsheet](https://github.com/adam-p/markdown-here/wiki/Markdown-Cheatsheet)
- [dingus](https://spec.commonmark.org/dingus/)
- [PDF Reference](https://www.adobe.com/content/dam/acom/en/devnet/acrobat/pdfs/pdf_reference_1-7.pdf)
# From the cookiecutter:
* a starter [Click](http://click.pocoo.org/5/) command-line application
* automated unit tests you can run with [pytest](https://docs.pytest.org/en/latest/)
It created a readthedocs link, not functional:
The project's documentation contains a section to help you
[get started](https://mdtypst.readthedocs.io/en/latest/getting_started.html) as a developer or
user of the library.
Below are some handy resource links.
* [Project Documentation](http://mdtypst.readthedocs.io/)
* [Click](http://click.pocoo.org/5/) is a Python package for creating beautiful command line interfaces in a composable way with as little code as necessary.
* [Sphinx](http://www.sphinx-doc.org/en/master/) is a tool that makes it easy to create intelligent and beautiful documentation, written by <NAME> and licensed under the BSD license.
* [pytest](https://docs.pytest.org/en/latest/) helps you write better programs.
* [GNU Make](https://www.gnu.org/software/make/) is a tool which controls the generation of executables and other non-source files of a program from the program's source files.
* [cookiecutter-click](https://github.com/patdaburu/cookiecutter-click) Used to start this project.
|
https://github.com/JanEhehalt/typst-demo | https://raw.githubusercontent.com/JanEhehalt/typst-demo/main/Deckblatt.typ | typst |
#set page(margin: (
top: 20mm,
bottom: 10mm,
left: 20mm,
right: 20mm
),
numbering: none
)
#image("provadis.png", width: 53.1mm)
#set align(center)
#v(64pt)
Bachelor Thesis
#v(32pt)
#set text(font: "Times New Roman", size: 14pt)
*Template for scientific work*
*Title and Subtitle*
#set text(font: "Times New Roman", size: 12pt)
#v(68pt)
Zur Erlangung des akademischen Grades
'Bachelor of Science' B.Sc.
im Studiengang 'Informatik'
#v(32pt)
vorgelegt dem Fachbereich Informatik und Wirtschaftlsinformatik der
Provadis School of International Management and Technology
von
#v(18pt)
<NAME>
D211
BIN-T21-F-2
<EMAIL>
015156567570
#v(18pt)
#set align(left)
#table(
stroke: none,
columns: 2,
[Erstgutachter], [Dr. <NAME>],
[Zweitgutachter], [---],
[Ende der Bearbeitungsfrist], [XX.08.2024]
) |
|
https://github.com/cyp0633/hnu-bachelor-thesis-typst-template | https://raw.githubusercontent.com/cyp0633/hnu-bachelor-thesis-typst-template/master/frontpage.typ | typst | // 封面,包含基本信息等
#let frontpage(content) = [
#page(background: [
#align(center + top)[
#box(height: 5cm)
#image("assets/hnu-logo.png", height: 9.14cm)
]
])[#content]
]
#let tleft(content) = {
set text(size: 14pt, font: "Source Han Sans", tracking: 4pt)
grid.cell(align: bottom + right)[
#content
#h(0.05em)
]
}
#let tright(content) = [
#set text(size: 15pt, font: ("Times New Roman", "Source Han Serif"))
#grid.cell(align: bottom + left)[
#content
]
#v(0.2em)
]
#let basic_info(title_1, title_2, name, id, class, college, supervisor) = [
#grid(columns: (6cm, 7.6cm),rows: 1.05cm,stroke: none,
tleft[#text(size: 18pt, tracking: 0pt, weight: "bold")[论文(设计)题目:]], tright[#title_1],
grid.hline(start: 1),
tleft[],tright[#title_2],
grid.hline(start: 1),
tleft[学生姓名:], tright[#name],
grid.hline(start: 1),
tleft[学生学号:],tright[#id],
grid.hline(start: 1),
tleft[专业班级:],tright[#class],
grid.hline(start: 1),
tleft[学院名称:],tright[#college],
grid.hline(start: 1),
tleft[指导老师:],tright[#supervisor],
grid.hline(start: 1),
)
]
#let date(year, month, day) = [
// 默认为当前日期
#text(size: 14pt, font: ("Times New Roman", "Source Han Sans"))[
#if year != "" and month != "" and day != "" [
#year 年 #month 月 #day 日
] else [
#datetime.today().display("[year] 年 [month] 月 [day] 日")
]
]
]
#let date2(date: datetime.today()) = [
#text(size: 14pt, font: ("Times New Roman", "Source Han Sans"))[
#date.display("[year] 年 [month] 月 [day] 日")
]
] |
|
https://github.com/chamik/gympl-skripta | https://raw.githubusercontent.com/chamik/gympl-skripta/main/cj-dila/15-kytice.typ | typst | Creative Commons Attribution Share Alike 4.0 International | #import "/helper.typ": dilo, hrule, poezie
#dilo("Kytice z pověstí národních", "kytice", "<NAME>", "", "19. st; romantismus, národní obrození", "Česko", "1853", "lyricko-epický", "baladické básně")
#columns(2, gutter: 1em)[
*Téma*\
Kolekce lidové slovesnosti, hřích a trest.
*Motivy*\
hřích a trest, tragédie, nacionalismus
*Časoprostor* různě, často symbolicky
*Kompozice* \
#underline[zrcadlová], protější balady spolu vždy nějak souvisí; 13 balad
*Obsah*\
_KYTICE_ -- mateřídouška na hrobě, matka symbolizuje Čechy, děti nás;
_POKLAD_ -- na Velký pátek matka je okouzlena zlatem a nechává dítě ve skále, zlato se promění v kamení. Po roce pokání se jí skála opět zjeví a odnáší si své dítě.
_SVATEBNÍ KOŠILE_ -- Dívka 3 roky čeká na návrat svého milého z vojny. Modlí se za něj, nebo za svoji smrt. Umrlec ji dovede až na hřbitov, ale modlitby ji zachrání.
_POLEDNICE_ -- Matka zavolá polednici na své zlobivé dítě, sama jej ve strachu zardousí.
_ZLATÝ KOLOVRAT_ -- Král potká Doru, kterou si chce vzít. Její macecha ji zabije a rozčtvrtí. Král si nevědomky vezme nevlastní dceru. Kouzelný stařeček vyměňuje kusy zlatého kolovratu za kusy těla Dory. Dora je oživena, král macechu i sestru nechá zabít stejným způsobem.
_ŠTĚDRÝ DEN_ -- Tradice o Vánoce, Hana a Marie jdou k jezeru zjistit svůj osud. Hana vidí svého milého, Marie pohřeb. Osud se vyplní.
_HOLOUBEK_ -- Žena zapomene na svého mrtvého milého (holoubka) a provdá se. Holoubek smutně zpívá a jistá dívka se kvůli tomu utopí.
_ZÁHOŘOVO LOŽE_ -- Poutník vypráví loupežníkovi Záhořovi o loži v pekle, co je pro něj přichystané. Následně se 90 let poká, než se poutník znovu vrátí; společně umírají.
_VODNÍK_ -- Vodník má dítě s dívkou, která se vrátí k matce. Matka ji nepustí zpět do jezera a vodník dítě zabije.
_VRBA_ -- Duše ženy je ve vrbě, kterou setne její manžel. Vrba na něj promluví, ať z ní udělá kolébku a píšťaly pro dítě.
_LILIE_ -- Dívka zemře a na jejím hrobě vykvete lilie. Pán ji dá vykopat a stará se o ni, až dívka obživne. Jede do války ale matka se o ni nestará a opět umírá. Pán matku prokleje.
_DCEŘINA KLETBA_ -- Dívka zabila své dítě; proklíná svého svůdce a svoji matku, že nezabránila neštěstí.
_VĚŠTKYNĚ_ -- Předpovídá budoucnost národa, lidé stále nepochopili slova Libuše a Svatopluka.
*Literárně historický kontext*\
vrcholná tvorba národního obrození
]
#pagebreak()
#show: poezie
*Ukázka* -- Zlatý kolovrat, I.
Okolo lesa pole lán,\
hoj jede, jede z lesa pán,\
na vraném bujném jede koni,\
vesele podkovičky zvoní,\
jede sám a sám.
A před chalupou s koně hop\
a na chalupu: klop, klop, klop!\
„Hola hej! otevřte mi dvéře,\
zbloudil jsem při lovení zvěře,\
dejte vody pít!“
[...]
*Ukázka* -- Záhořovo lože, V.
Devadesáte let přeletělo světem;\
mnoho se zvrátilo zatím od té chvíle:\
kdo onoho času býval nemluvnětem,\
jest nyní starcem, do hrobu se chýle.
Avšak málo jich dozrálo k té době,\
ostatní všickni jsou schováni v hrobě.\
Jiné pokolení -- cizí obličeje --\
vše ve světě cizí, kam se člověk děje;\
jen to slunéčko modravého nebe,\
jenom to nižádné proměny nevzalo;\
a jako před věky lidi těšívalo,\
tak i nyní ještě vždy blaží tebe!
[...]
#pagebreak() |
https://github.com/jgm/typst-hs | https://raw.githubusercontent.com/jgm/typst-hs/main/test/typ/layout/page-03.typ | typst | Other | // Test page fill.
#set page(width: 80pt, height: 40pt, fill: eastern)
#text(15pt, font: "Roboto", fill: white, smallcaps[Typst])
#page(width: 40pt, fill: none, margin: (top: 10pt, rest: auto))[Hi]
|