Dataset columns: `repo` (string, 26–115 chars) · `file` (string, 54–212 chars) · `language` (string, 2 classes) · `license` (string, 16 classes) · `content` (string, 19–1.07M chars)
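Rows of this dataset follow the five-column schema above. As a minimal sketch (plain Python, with hypothetical sample rows — not taken from the data below), filtering such records by the `language` column looks like:

```python
# Each dataset row is a dict with the five columns described above.
# The two sample rows here are hypothetical, for illustration only.
rows = [
    {"repo": "https://github.com/example/a", "file": "src/main.typ",
     "language": "typst", "license": "MIT License", "content": "#import ..."},
    {"repo": "https://github.com/example/b", "file": "README.md",
     "language": "markdown", "license": "Apache License 2.0", "content": "# Title"},
]

def by_language(rows, lang):
    """Return only the rows whose `language` column matches `lang`."""
    return [r for r in rows if r["language"] == lang]

typst_rows = by_language(rows, "typst")
print(len(typst_rows))  # number of typst rows in the sample
```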
https://github.com/HynDuf/typst-cv
https://raw.githubusercontent.com/HynDuf/typst-cv/master/template/modules_en/projects.typ
typst
Apache License 2.0
// Imports #import "../../lib.typ": cvSection, cvEntry #let metadata = toml("../metadata.toml") #let cvSection = cvSection.with(metadata: metadata) #let cvEntry = cvEntry.with(metadata: metadata) #cvSection("Projects") #cvEntry( title: [C++ Developer], logo: image("../src/logos/UET.png"), society: [#link("https://github.com/HynDuf/mpboot")[MPBoot] - Computational Science and Engineering Lab], date: [2021 - Present], location: [Ha Noi, Vietnam], description: list( [Design, implement, and optimize algorithms for phylogenetic tree construction using C/C++.], [Conduct performance tuning and optimization to ensure algorithms run efficiently on various hardware platforms.], [Work with #link("https://scholar.google.co.nz/citations?user=dZyMRT0AAAAJ")[Dr. <NAME>].], ), tags: ("Bioinformatics", "Phylogenetic", "Algorithm"), )
https://github.com/SkymanOne/ecs-typst-template
https://raw.githubusercontent.com/SkymanOne/ecs-typst-template/main/README.md
markdown
MIT License
# [Typst](https://typst.app/) Template for UoS ECS Individual Project If you have ever suffered with LaTeX, don't! _Typst is a new markup-based typesetting system that is designed to be as powerful as LaTeX while being much easier to learn and use._ This repository aims to provide an easy-to-use and comprehensive template for the write-up of a Bachelor's or Master's individual project report. It aims to follow the most recent (2023-2024) guidelines on the format and compiles to a _"LaTeX-like"_ PDF. ## Features - Attempts to mimic the LaTeX class provided by the ECS - Use a single command to generate the whole document - Or use individual components to customise your report to your needs - Easy-to-read template enables customisation of styles - Hyperlinks on default pages - Alternating style of pagination depending on the headings ## Installation For detailed installation instructions, please refer to the [official installation guide](https://github.com/typst/typst). Here, we provide basic steps for installing Typst's CLI: - You can get sources and pre-built binaries from the [releases page](https://github.com/typst/typst/releases). - Use package managers like `brew` or `pacman` to install Typst. Be aware that the versions in the package managers might lag behind the latest release. - If you have a [Rust](https://rustup.rs/) toolchain installed, you can also install the latest development version. Nix and Docker users, please refer to the official installation guide for detailed instructions. 
## Usage - The margins are set to `binding`: `(inside: 1.5in, outside: 1.0in, top: 1.0in, bottom: 1.0in)` for the print quality - Simply add to your file: ```typ #import "ecsproject.typ": * // Use everything together #show: doc => use_project( title: "My project", author: ( name: "<NAME>", email: none ), supervisor: ( name: "<NAME>", email: none ), examiner: ( name: "<NAME>", email: none ), date: "January 19, 2024", program: "BSc Computer Science", is_progress_report: false, originality_statement: ( acknowledged: "I have acknowledged all sources, and identified any content taken from elsewhere.", resources: "I have not used any resources produced by anyone else.", foreign: "I did all the work myself, or with my allocated group, and have not helped anyone else", material: "The material in the report is genuine, and I have included all my data/code/designs.", reuse: "I have not submitted any part of this work for another assessment.", participants: "My work did not involve human participants, their cells or data, or animals." ), abstract_text: lorem(50), acknowledgments_text: lorem(50), doc ) ``` - Or use the `template.typ` - To customise entries - `title` - title of your project - `author`, `supervisor`, `examiner`: named tuple of `(name, email)` for you, supervisor and second examiner - `date` - date in string format - `program` the title of your program - `is_progress_report` - is it a progress report? - `originality_statement` - originality statement confirmations (optional). 
If not specified, default values are used - `abstract_text` - your abstract, simple string text - `acknowledgments_text` - acknowledgments, same as above - `page_numbering: "1"` - page numbering style (optional) - `title_numbering: "1."` - heading numbering style (optional) - `display_word_count: false` - display word count in the footer (optional) - If you need to modify the structure of your document or shuffle pages around, you can use individual components as demonstrated in `thesis_example.typ` ### Build PDFs locally Once you have installed Typst, you can use it like this: ```sh # Creates `template.pdf` in the working directory. typst compile template.typ # Creates a PDF file at the desired path. typst compile thesis.typ path/to/output.pdf ``` You can also watch source files and automatically recompile on changes. This is faster than compiling from scratch each time because Typst has incremental compilation. ```sh # Watches source files and recompiles on changes. typst watch template.typ ``` ## Working with the Typst Web Editor If you prefer an integrated IDE-like experience with autocompletion and instant preview, the Typst web editor allows you to import files directly into a new or existing document. Here's how you can do this: 1. Navigate to the [Typst Web Editor](https://typst.app/). 2. Create a new blank document. 3. Click on "File" on the top left menu, then "Upload File". 4. Select all .typ and .bib files along with the figures provided in this template repository. **Note:** You can select multiple files to import. The editor will import and arrange all the files accordingly. Always ensure you have all the necessary .typ, .bib, and figure files you need for your document. 
--- ## Further Resources - [Typst Documentation](https://typst.app/docs/) - [Typst Guide for LaTeX Users](https://typst.app/docs/guides/guide-for-latex-users/) - [Typst VS Code Extension (unofficial)](https://marketplace.visualstudio.com/items?itemName=nvarner.typst-lsp) ## Contribution I am graduating from the university in 2024 and am aware that the guidelines may change; feel free to open an issue or PR to amend the template.
https://github.com/LeeWooQM/TypstNote
https://raw.githubusercontent.com/LeeWooQM/TypstNote/main/test.typ
typst
MIT License
#import "temp.typ": note #import "temp.typ": theorem #import "temp.typ": definition #import "temp.typ": deduction #import "temp.typ": proof #show: doc => note( title: "The Art of writing reasonable organic reaction mechanisms", author: "<NAME>", logo: "/Picture/gege.jpeg", boxed_heading: true, )[ = Introduction == WSND #definition(name: "Derivative")[ The derivative is the rate of change of a function with respect to its independent variable, defined as: $ f^' (x) =( d y ) / ( d x ) $ ] #deduction( name: "Nernst equation" )[ The Nernst equation describes the relation between the reduction potential of a redox couple and concentration; it can be derived from the Gibbs free energy: $ Delta_r G_m^theta = - R T ln Q^theta $ $ Delta_r G_m^theta = n F E $ $ n F E = - R T ln Q^theta $ $ E = - ( R T ) / ( n F ) ln Q^theta $ ] #theorem( name: "Uniqueness of the limit of a sequence" )[ The limit of a sequence is unique. \ If $lim_(x -> + infinity)x_n = a$ and $lim_(x -> + infinity)x_n = b$, then $a=b$ ] #proof()[ Suppose $a!=b$; without loss of generality $a < b$. Then by the definition of the limit: \ $ exists N_1 "such that for all" n > N_1 : |x_n - a| < ( b - a ) / 2 "which gives" x_n < ( a + b ) / 2 $ $ exists N_2 "such that for all" n > N_2 : |x_n - b| < ( b - a ) / 2 "which gives" x_n > ( a + b ) / 2 $ $ "for every" n > max (N_1,N_2) : x_n > ( a + b ) / 2 "and" x_n < ( a + b ) / 2 ", a contradiction "$ Hence $ a = b $ ] #pagebreak() = Prologue #lorem(40) \ `rbx` `rcx` \ ```rs fn main() { println!("Hello,World!"); } ``` ```cpp #include <iostream> using namespace std; int main() { unsigned int i = 0; cin >> i; while (i) { i -= 1; cout << i << endl; } } ``` ]
https://github.com/mismorgano/UG-FunctionalAnalyisis-24
https://raw.githubusercontent.com/mismorgano/UG-FunctionalAnalyisis-24/main/tareas/Tarea-04/Tarea-04.typ
typst
#import "../../config.typ": config, exercise, ip, proof #show: doc => config([Tarea 4], doc) #let cls(S) = $overline(#S)$ #exercise[1.30][Show that a Banach space $X$ is separable iff $S_X$ is separable.] #proof[ First suppose $X$ is separable; since $S_X subset X$, it follows that $S_X$ is separable. Another way to see it is to note that $T: X -> S_X$ given by $T(x) = x/norm(x)$ is continuous. Now suppose $S_X$ is separable, with ${x_n}$ dense in $S_X$. Let ${q_r}$ be a dense sequence in $KK$ and consider $ M := union.big_(r in NN) union.big_(n in NN) {q_r x_n}, $ Note that $M$ is countable, being a countable union of countable sets. Let us see that $M$ is dense in $X$. Given $x in X$ and $epsilon>0$, note that $x/norm(x) in S_X$; by the density of ${x_n}$ in $S_X$ there exists $x_n in {x_n}$ such that $ epsilon/(2norm(x)) > norm(x/norm(x) - x_n) = norm((x-norm(x)x_n)/norm(x)) = 1/norm(x)norm(x-norm(x)x_n), $ which implies $ norm(x-norm(x)x_n) < epsilon/2. $ On the other hand, by the density of ${q_r}$ in $KK$ there exists $q_r in {q_r}$ such that $ norm(norm(x) - q_r) < epsilon/(2 norm(x_n)), $ which implies $ epsilon/2 > norm(x_n)norm(norm(x) - q_r) = norm(x_n (norm(x) - q_r)) = norm(norm(x)x_n - q_r x_n). $ From the above we obtain $ norm(x - q_r x_n) <= norm(x - norm(x)x_n) + norm(norm(x)x_n - q_r x_n) < epsilon/2 + epsilon/2 = epsilon, $ as desired. ] #exercise[1.31][Let $Y$ be a closed subspace of a Banach space $X$. Show that if $X$ is separable then $Y$ and $X slash Y$ are separable. Show that if $Y$ and $X slash Y$ are separable, then $X$ is separable.] #exercise[1.32][Find two (obviously not closed) subspaces $F_1$ and $F_2$ of a Banach space $X$ such that $F_1 sect F_2 = {0}$ and both are dense in $X$.] #exercise[1.33][Show that the space $B C(0, 1)$ of bounded continuous functions on $(0, 1)$ with the sup norm is not separable.] #exercise[1.39][Let $H$ be a Hilbert space. Prove the generalized parallelogram identity. 
If $x_1, dots, x_n in H$, then $ sum_(epsilon_i = plus.minus 1) norm(sum_(i=1)^n epsilon_i x_i)^2 = 2^n sum_(i=1)^n norm(x_i)^2. $] #proof[We proceed by induction on $n$. For $n=2$ the result ] #exercise[1.40][Let $X$ be a Banach space whose norm $norm(dot)$ satisfies the parallelogram identity. Define $ip(x, y)$ by the polarization identity and prove that $ip(x, y)$ is an inner product. ] #proof[ We define $ip(dot, dot):X times X -> KK$ by $ ip(x, y) = 1/4(norm(x+y)^2 - norm(x-y)^2), $ if $KK = RR$, and $ ip(x, y) = 1/4(norm(x+y)^2 - norm(x-y)^2 + i norm(x+i y)^2 - i norm(x-i y)^2). $ Let us see that $ip(dot, dot)$ is an inner product. Suppose first that $KK = RR$ + Let $y in X$; note that the function $x -> ip(x, y)$ is linear, since $ ip(x+ b z, y) &= 1/4(norm(x+ b z +y)^2 - norm(x+ b z-y)^2) \ &= 1/4(norm((x+y) + b z)^2 - norm((x-y) + b z)^2) \ &= 1/4(2norm(x+y)^2 + 2norm(b z)^2 - norm(x+y-b z)^2 + norm(x-y - b z)^2 - 2norm(x-y)^2 - 2norm(b z)^2) \ &= 1/4(2norm(x+y)^2 - norm(x+y-b z)^2 + norm(x-y - b z)^2 - 2norm(x-y)^2 ) $ + + Note that $ip(x, x) = 1/4 norm(2x)^2 = norm(x)^2>=0$ + It holds that $ip(x, x) = 0$ iff $ norm(x)^2 = 0$ iff $norm(x) =0$ iff $x=0$ ] #exercise[1.41][Find a Hilbert space $H$ and a subspace $F$ of it such that $H != F + F^perp$. This shows that the closedness assumption in Theorem 1.33 is crucial.]
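The generalized parallelogram identity in exercise 1.39 can be sanity-checked numerically before attempting the induction. A small sketch (Python, vectors in the Euclidean plane; `check_identity` is a hypothetical helper, not part of the homework):

```python
from itertools import product
import math

def sq_norm(v):
    """Squared Euclidean norm of a tuple of floats."""
    return sum(c * c for c in v)

def check_identity(vectors):
    """Verify sum over all sign choices of ||sum e_i x_i||^2 == 2^n * sum ||x_i||^2."""
    n = len(vectors)
    dim = len(vectors[0])
    total = 0.0
    for signs in product((1, -1), repeat=n):
        combo = [sum(e * v[d] for e, v in zip(signs, vectors)) for d in range(dim)]
        total += sq_norm(combo)
    return math.isclose(total, 2**n * sum(sq_norm(v) for v in vectors))

print(check_identity([(1.0, 2.0), (3.0, -1.0), (0.5, 0.5)]))  # True
```

The check works because the cross terms $2 epsilon_i epsilon_j ip(x_i, x_j)$ cancel when summed over all sign patterns, which is exactly the observation the inductive proof rests on.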
https://github.com/Kasci/LiturgicalBooks
https://raw.githubusercontent.com/Kasci/LiturgicalBooks/master/SK/zalmy/Z100.typ
typst
Of mercy and justice I will sing, \* to you, O Lord, I will play on the harp. Wisely and blamelessly I will walk the path of life; \* when will you come to me? I will walk in innocence of heart \* in the midst of my household. I set no wicked purpose before me; \* I hate the deceiver, and he has no access to me. A corrupt heart is repugnant to me; \* I will not know the malicious. Whoever secretly slanders his neighbor, him I will silence. \* Whoever has a proud eye and a haughty heart, him I will not endure. My eyes seek out the faithful of the land, that they may dwell with me. \* Whoever walks the honest path may serve me. No proud man shall dwell in my house, \* and no liar shall stand before my eyes. Every morning I will silence all the sinners of the land, \* and from the city of the Lord I will drive out all who work iniquity.
https://github.com/andrin-geiger/hslu_template_typst
https://raw.githubusercontent.com/andrin-geiger/hslu_template_typst/master/chapters/01_introduction.typ
typst
= Introduction #pagebreak()
https://github.com/physicshinzui/typst-templates
https://raw.githubusercontent.com/physicshinzui/typst-templates/main/book/book.typ
typst
#import "book-template.typ": * #show: doc => conf( title: "Template, テンプレ", author: "<NAME>", toc: true, lang: "en", font: "New Computer Modern", // "Hiragino Kaku Gothic Pro", bibliography-file: "../paperpile.bib", doc ) // ========================= // Useful resources: // - https://typst.app/docs/reference/math/ // - https://typst.app/docs/reference/model/bibliography/ // - https://typst.app/docs/reference/model/figure/ //=========Main text=========== = section == Basics #definition[ Write a definition. 定義(definition)を書いてください。 ] <def> @def can be referenced using `@def`.#footnote[This is a footnote.] #example("Example name")[ Write an example ] #definition[ 2nd definition ]<def2> #theorem("Euclid")[There are infinitely many primes. ] #proof[ Write a proof. $ y = a x $ <lin_eq> You can refer to an equation using `@name` like @lin_eq. ] #lemma[Ito lemma][ ] #corollary[ Put a corollary. ] <cor> #requirement[ For every object, its motion keeps linear uniform motion. ] #result[ $m a = F$ ] #theorem[ There are arbitrarily long stretches of composite numbers. ] #proof[ For any $n > 2$, consider $ n! + 2, quad n! + 3, quad ..., quad n! + n $ ] #theorem()[ Unicode can be used, e.g., $μ = mu$.\ For more details on math symbols, see #link("https://typst.app/docs/reference/symbols/sym/")[HERE] ] #claim[ We can cite like: @2021-iy @Poincare1908-am @Atkins2011-rd ] #remark[ I remark here. ] == How to insert a figure @pigeon shows a pigeon flying in the sky. #figure( image("../pigeon_free.png", width: 50%), caption: [A pigeon flying], ) <pigeon> #figure( caption: [Timing results], table( columns: 4, [t], [1], [2], [3], [y], [0.3s], [0.4s], [0.8s], ), )
https://github.com/eliapasquali/typst-thesis-template
https://raw.githubusercontent.com/eliapasquali/typst-thesis-template/main/README.md
markdown
Other
# Introduction This repository contains a template for the Bachelor's thesis in Computer Science. The template was built with [Typst](https://typst.app/), drawing on the [already existing LaTeX template](https://github.com/FIUP/Thesis-template.git) and its [Markdown](https://github.com/FIUP/pandoc-thesis-template) version. ## Issues and missing features - [ ] Add a glossary - [ ] Add appendices - [x] Add a bibliography - [x] Handle "Chapter x - Chapter title" - [x] Chapters on odd pages - [ ] Print support - [ ] PDF/A support ### Add a glossary Most likely via the [references](https://typst.app/docs/reference/meta/ref/) available in Typst. ### Add appendices They must be added with the right numbering and then included in the template. ### Add a bibliography Bibliographic references can be managed with BibTeX in `bibliography.bib` or with the new Hayagriva format in `bibliography.yml`. After filling in those files, select the desired format in `bibliography.typ`. Citations are inserted in the text with `@citazione` and are automatically added to the bibliography. ### Chapters on odd pages Inserting `pagebreak(to: "odd")` at the beginning of the chapter seems to work. Be careful not to put it in the first chapter, otherwise the first chapter starts on page 3. ### Print support This is related to the previous point. ### PDF/A support It has not yet been integrated into Typst. As a workaround you can use a conversion tool, for example the one from [iLovePDF](https://www.ilovepdf.com/it/pdf-in-pdfa). To check the result there are validators such as [pdfforge](https://www.pdfforge.org/online/it/convalida-pdfa). ## Usage To compile with Typst you need to install it (`pacman -S typst` on Arch) or use the [online editor](https://typst.app/). 
While writing, Typst's `watch` function, which updates the PDF on every change, is very convenient. Template structure: - `chapters/`: the chapters with the actual content of the thesis go here. - `appendix/`: contains additional chapters, the bibliography and the glossary - `bibliography/`: contains the bibliography files - `bibliography.bib`: bibliography file in BibTeX format - `bibliography.yml`: bibliography file in Hayagriva format - `bibliography.typ`: file included in the structure where the bibliography format to use is selected - `config/`: the template's various configurations: - `variables.typ`: the variables with your personal data are defined here. - `images/`: all images and the like are collected here to keep some order. - `preface/`: contains the structure of the pages that precede the actual content: - `acknowledgements.typ`: acknowledgements. - `dedication.typ`: dedications and a short quotation. - `summary.typ`: summary describing what the thesis is about. - `structure.typ`: contains the structure and order of the chapters. - `thesis.typ`: the thesis proper, the file to compile to produce the PDF. ## Motivation The goal of this project is to create a working Typst template that is as close as possible to the already existing LaTeX template and easy to use. Adapt the template to your needs and report any problems or missing features. ## Sources and utilities - [Typst Documentation](https://typst.app/docs/) - [FIUP Code of Conduct](https://github.com/FIUP/Getting_Started/blob/master/CODE_OF_CONDUCT.md)
https://github.com/iceghost/resume
https://raw.githubusercontent.com/iceghost/resume/main/5-projects/1-bkalendar.typ
typst
=== BKalendar #place(right + top, [ May 2021 - _current_ ]) / Link: #link("https://bkalendar.github.io")[Demo] #sym.dot.c #link("https://github.com/bkalendar")[Source] / Role: Fullstack Developer. This tool allows students to import HCMUT timetables into Google Calendar, so they can easily manage their schedules and stay organized. _Result_: Thousands of HCMUT students used it and found it helpful. This project involved reading and applying specifications such as RFC 5545 for iCalendar, handling dates, and integrating Google OAuth2 and the Google Calendar API into an application.
https://github.com/swaits/typst-collection
https://raw.githubusercontent.com/swaits/typst-collection/main/finely-crafted-cv/0.1.0/src/resume.typ
typst
MIT License
#import "constants.typ": * #import "helpers.typ": * #import "state.typ": __get, __set // Main resume function #let resume( name: "<NAME>", tagline: none, paper: "us-letter", heading-font: HEADING_FONT, body-font: BODY_FONT, body-size: BODY_SIZE, email: none, phone: none, linkedin-username: none, keywords: "", thumbnail: none, // check out https://qrframe.kylezhe.ng/ for QR code generation body ) = { // Document setup, mostly metadata in a PDF set document( title: "Résumé/CV of " + name, author: name, keywords: keywords, ) // Page setup, including header and footer set page( paper: paper, margin: PAGE_MARGIN, numbering: "1 / 1", header: context { if here().page() == 1 { text( size: HEADER_SIZE, grid( columns: (1fr, auto, auto, auto, auto, auto), gutter: 1em, align: center+horizon, "", icon_and_contact("icons/email.svg", link("mailto:" + email, email)), sym.divides, icon_and_contact("icons/phone.svg", link("tel:" + phone, phone)), sym.divides, icon_and_contact("icons/linkedin.svg", link("https://www.linkedin.com/in/" + linkedin-username, linkedin-username)) ) ) } }, footer: context { let multi_page = counter(page).final().first() > 1 let date = datetime.today().display("[month repr:long] [day], [year]") text(size: FOOTER_SIZE, weight: FOOTER_TEXT_WEIGHT, font: heading-font)[ #grid( columns: (1fr, 1fr, 1fr), align: (left, center, right + horizon), // left column [Last updated: #date], // center column if multi_page { counter(page).display(both: true, "1 of 1") }, // right column if thumbnail != none { move(dy: -0.25in, box(height: 0.5in, thumbnail)) } else { [ ] } ) ] }, ) // Global text settings set text(font: body-font, size: body-size, weight: BODY_WEIGHT, fallback: true) set par(justify: true) // Heading styles set heading(numbering: "I.A.") show heading.where(level: 1): it => { block( above: SECTION_HEADING_SPACE_ABOVE, below: SECTION_HEADING_SPACE_BELOW, breakable: false, [ #text(size: SECTION_HEADING_SIZE, weight: SECTION_HEADING_WEIGHT, font: heading-font)[ 
#context counter(heading).display() #upper(it.body) ] #hrule(stroke: SECTION_HEADING_HRULE_STROKE) ] ) } // Table style set table( stroke: none, fill: (x, y) => { if y == 0 { TABLE_HEADER_COLOR } else if calc.rem(y, 2) == 0 { TABLE_ZEBRA_COLOR_0 } else { TABLE_ZEBRA_COLOR_1 } }, ) // Save our settings needed in other functions __set("heading-font", heading-font) // Main Headline text(size: HEADLINE_NAME_SIZE, weight: HEADLINE_NAME_WEIGHT, font: heading-font)[#smallcaps(name)] h(1fr) text(size: TAGLINE_SIZE, style: TAGLINE_STYLE, weight: TAGLINE_WEIGHT)[#tagline] hrule(stroke: HEADLINE_HRULE_STROKE) // The rest of the content body } // Alias resume to cv for non-Americans #let cv = resume #let company-heading(name, start: none, end: none, icon: none, body) = { let icon_spacing = if icon != none { COMPANY_ICON_SPACING } else { 0em } block( below: COMPANY_BLOCK_BELOW, breakable: false, grid( columns: (auto, auto, 1fr, auto), column-gutter: (icon_spacing, 0.5em), row-gutter: 0.5em, align: (left + horizon, left + horizon, right + horizon), // first column, optional icon if icon != none { box(height: COMPANY_ICON_SIZE, icon) }, // second column, company name context text( weight: COMPANY_NAME_WEIGHT, size: COMPANY_NAME_SIZE, font: __get("heading-font"), name ), // third column, dots if start != none { repeat(h(0.3em) + "." 
+ h(0.3em)) } else { [] }, // fourth column, optional dates context text( weight: COMPANY_DATE_WEIGHT, size: COMPANY_DATE_SIZE, font: __get("heading-font"), if start != none { [#start -- #end] } else { [ ] } ), // second row, company content grid.cell(colspan: 4, body) ) ) } // Alias to school heading #let school-heading = company-heading #let job-heading(title, location: none, start: none, end: none, comment: none, body) = { block( above: JOB_BLOCK_ABOVE, below: JOB_BLOCK_BELOW, breakable: false, // heading grid( columns: (auto, auto, auto, 1fr, auto), gutter: 0.5em, align: (left + horizon, right + horizon), // first column, title context text( style: JOB_NAME_STYLE, weight: JOB_NAME_WEIGHT, size: JOB_NAME_SIZE, font: __get("heading-font"), title ), // second column, job comment ellipsis if comment != none { [ ... ] }, // third column, job comment ellipsis if comment != none { [ #comment ] }, // fourth column, hfill h(1fr), // fifth column, optional location context text( style: JOB_LOCATION_STYLE, weight: JOB_LOCATION_WEIGHT, size: JOB_LOCATION_SIZE, font: __get("heading-font"), if location != none and start != none { [#location#[;] #h(0.25em)#start#if end != none [ -- #end]] } else if location != none { location } else if start != none { [#start#if end != none [ -- #end]] } ), grid.cell(colspan: 5, body) ) ) } // Alias to degree heading #let degree-heading = job-heading
https://github.com/lucannez64/Notes
https://raw.githubusercontent.com/lucannez64/Notes/master/Maths_Expertes_Ex_30_04_2024.typ
typst
#import "@preview/bubble:0.1.0": * #import "@preview/fletcher:0.4.3" as fletcher: diagram, node, edge #import "@preview/cetz:0.2.2": canvas, draw, tree #import "@preview/cheq:0.1.0": checklist #import "@preview/typpuccino:0.1.0": macchiato #import "@preview/wordometer:0.1.1": * #import "@preview/tablem:0.1.0": tablem #show: bubble.with( title: "Maths Expertes", subtitle: "For 30/04/2024", author: "<NAME>", affiliation: "LHB", year: "2023/2024", class: "101", logo: image("JOJO_magazine_Spring_2022_cover-min-modified.png"), ) #set page(footer: context [ #set text(8pt) #set align(center) #text("page "+ counter(page).display()) ] ) #set heading(numbering: "1.1") #show: checklist.with(fill: luma(95%), stroke: blue, radius: .2em) = Exercises == Number 47, page 172 #block(fill: macchiato.yellow, inset: 8pt, radius: 4pt, [Show that for every natural number $n$, the number $n^11 - n$ is divisible by $33$]) === Method 1 $n$ is a natural number and $11$ is prime, so by the corollary of Fermat's little theorem $n^11-n eq.triple 0 mod 11$. #linebreak() Moreover $3$ is also prime, so $n^3-n eq.triple 0 mod 3$ #linebreak() $n^11 = (n^3)^3 times n^2$ Consequently $n^3 eq.triple n mod 3$ and $(n^3)^3 eq.triple n^3 mod 3 eq.triple n mod 3$ So $n^11 eq.triple n times n^2 mod 3 eq.triple n^3 mod 3 eq.triple n mod 3$ Then there exists an integer $k$ such that $n^11-n eq 11 times k$ and an integer $k'$ such that $n^11-n eq 3 times k'$ Thus $3 times k' = 11 times k$. 
Thus $3k'$ divides $11k$ and $3k'$ divides $3k' arrow.double 3$ divides $11k$; now $3$ and $11$ are both prime $arrow.l.r.double$ gcd$(3;11)=1$ #linebreak() By Gauss's theorem, $3$ divides $k$ #linebreak() Let $k = 3k''$ with $k''$ an integer: $11 times 3k'' = 3 times k'$ #linebreak() $ arrow.l.r.double 11 times k'' = k'$ #linebreak() $n^11-n = 3 times k' arrow.l.r.double$ #linebreak() $n^11-n = 3 times 11 times k''$ #linebreak() $arrow.l.r.double n^11-n = 33k''$ #linebreak() Therefore $n^11-n$ is divisible by $33$ === Method 2 $n$ is a natural number and $11$ is prime, so by the corollary of Fermat's little theorem $n^11-n eq.triple 0 mod 11$. #linebreak() Moreover $3$ is also prime, so $n^3-n eq.triple 0 mod 3$ #linebreak() $n^11 = (n^3)^3 times n^2$ Consequently $n^3 eq.triple n mod 3$ and $(n^3)^3 eq.triple n^3 mod 3 eq.triple n mod 3$ So $n^11 eq.triple n times n^2 mod 3 eq.triple n^3 mod 3 eq.triple n mod 3$ #linebreak() $n^11-n eq.triple 0 mod 3$ So $n^11-n$ is divisible by $3$ #linebreak() now $33=11 times 3$, so $33$ has prime factors $11$ and $3$ and $33=\l\c\m(11,3)$ Since $n^11-n$ is divisible by $3$ and by $11$, it is also divisible by $33$
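The divisibility claim proved by both methods can be spot-checked numerically. A quick sketch (Python, exhaustive over small $n$; this does not replace the proof, it only confirms small cases):

```python
# Spot-check: n^11 - n should be divisible by 33 for every natural n.
def divisible_by_33(n: int) -> bool:
    """True if n^11 - n is a multiple of 33."""
    return (n**11 - n) % 33 == 0

# Verify the first thousand cases exhaustively.
assert all(divisible_by_33(n) for n in range(1000))
print("n^11 - n is divisible by 33 for all n < 1000")
```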
https://github.com/Myriad-Dreamin/typst.ts
https://raw.githubusercontent.com/Myriad-Dreamin/typst.ts/main/packages/typst.node/npm/android-arm64/README.md
markdown
Apache License 2.0
# `@myriaddreamin/typst-ts-node-compiler-android-arm64` This is the **aarch64-linux-android** binary for `@myriaddreamin/typst-ts-node-compiler`
https://github.com/thanhdxuan/dacn-report
https://raw.githubusercontent.com/thanhdxuan/dacn-report/master/Lab02/contents/00.typ
typst
#{ include "/contents/01-bia.typ" } #pagebreak() #set heading(numbering: "1.") #outline( indent: 1.5em, ) #outline(title: "List of Figures", target: figure.where(kind: image)) // #outline(title: "List of Listings", target: figure.where(kind: raw)) #pagebreak() #{ include "/contents/02-introduce.typ" } #{ include "/contents/03-datahandling.typ" } #pagebreak() #bibliography("/ref.bib")
https://github.com/Myriad-Dreamin/typst.ts
https://raw.githubusercontent.com/Myriad-Dreamin/typst.ts/main/packages/typst.node/npm/linux-x64-musl/README.md
markdown
Apache License 2.0
# `@myriaddreamin/typst-ts-node-compiler-linux-x64-musl` This is the **x86_64-unknown-linux-musl** binary for `@myriaddreamin/typst-ts-node-compiler`
https://github.com/typst/packages
https://raw.githubusercontent.com/typst/packages/main/packages/preview/unichar/0.1.0/ucd/block-10780.typ
typst
Apache License 2.0
#let data = ( ("MODIFIER LETTER SMALL CAPITAL AA", "Lm", 0), ("MODIFIER LETTER SUPERSCRIPT TRIANGULAR COLON", "Lm", 0), ("MODIFIER LETTER SUPERSCRIPT HALF TRIANGULAR COLON", "Lm", 0), ("MODIFIER LETTER SMALL AE", "Lm", 0), ("MODIFIER LETTER SMALL CAPITAL B", "Lm", 0), ("MODIFIER LETTER SMALL B WITH HOOK", "Lm", 0), (), ("MODIFIER LETTER SMALL DZ DIGRAPH", "Lm", 0), ("MODIFIER LETTER SMALL DZ DIGRAPH WITH RETROFLEX HOOK", "Lm", 0), ("MODIFIER LETTER SMALL DZ DIGRAPH WITH CURL", "Lm", 0), ("MODIFIER LETTER SMALL DEZH DIGRAPH", "Lm", 0), ("MODIFIER LETTER SMALL D WITH TAIL", "Lm", 0), ("MODIFIER LETTER SMALL D WITH HOOK", "Lm", 0), ("MODIFIER LETTER SMALL D WITH HOOK AND TAIL", "Lm", 0), ("MODIFIER LETTER SMALL REVERSED E", "Lm", 0), ("MODIFIER LETTER SMALL CLOSED REVERSED OPEN E", "Lm", 0), ("MODIFIER LETTER SMALL FENG DIGRAPH", "Lm", 0), ("MODIFIER LETTER SMALL RAMS HORN", "Lm", 0), ("MODIFIER LETTER SMALL CAPITAL G", "Lm", 0), ("MODIFIER LETTER SMALL G WITH HOOK", "Lm", 0), ("MODIFIER LETTER SMALL CAPITAL G WITH HOOK", "Lm", 0), ("MODIFIER LETTER SMALL H WITH STROKE", "Lm", 0), ("MODIFIER LETTER SMALL CAPITAL H", "Lm", 0), ("MODIFIER LETTER SMALL HENG WITH HOOK", "Lm", 0), ("MODIFIER LETTER SMALL DOTLESS J WITH STROKE AND HOOK", "Lm", 0), ("MODIFIER LETTER SMALL LS DIGRAPH", "Lm", 0), ("MODIFIER LETTER SMALL LZ DIGRAPH", "Lm", 0), ("MODIFIER LETTER SMALL L WITH BELT", "Lm", 0), ("MODIFIER LETTER SMALL CAPITAL L WITH BELT", "Lm", 0), ("MODIFIER LETTER SMALL L WITH RETROFLEX HOOK AND BELT", "Lm", 0), ("MODIFIER LETTER SMALL LEZH", "Lm", 0), ("MODIFIER LETTER SMALL LEZH WITH RETROFLEX HOOK", "Lm", 0), ("MODIFIER LETTER SMALL TURNED Y", "Lm", 0), ("MODIFIER LETTER SMALL TURNED Y WITH BELT", "Lm", 0), ("MODIFIER LETTER SMALL O WITH STROKE", "Lm", 0), ("MODIFIER LETTER SMALL CAPITAL OE", "Lm", 0), ("MODIFIER LETTER SMALL CLOSED OMEGA", "Lm", 0), ("MODIFIER LETTER SMALL Q", "Lm", 0), ("MODIFIER LETTER SMALL TURNED R WITH LONG LEG", "Lm", 0), ("MODIFIER LETTER SMALL TURNED 
R WITH LONG LEG AND RETROFLEX HOOK", "Lm", 0), ("MODIFIER LETTER SMALL R WITH TAIL", "Lm", 0), ("MODIFIER LETTER SMALL R WITH FISHHOOK", "Lm", 0), ("MODIFIER LETTER SMALL CAPITAL R", "Lm", 0), ("MODIFIER LETTER SMALL TC DIGRAPH WITH CURL", "Lm", 0), ("MODIFIER LETTER SMALL TS DIGRAPH", "Lm", 0), ("MODIFIER LETTER SMALL TS DIGRAPH WITH RETROFLEX HOOK", "Lm", 0), ("MODIFIER LETTER SMALL TESH DIGRAPH", "Lm", 0), ("MODIFIER LETTER SMALL T WITH RETROFLEX HOOK", "Lm", 0), ("MODIFIER LETTER SMALL V WITH RIGHT HOOK", "Lm", 0), (), ("MODIFIER LETTER SMALL CAPITAL Y", "Lm", 0), ("MODIFIER LETTER GLOTTAL STOP WITH STROKE", "Lm", 0), ("MODIFIER LETTER REVERSED GLOTTAL STOP WITH STROKE", "Lm", 0), ("MODIFIER LETTER BILABIAL CLICK", "Lm", 0), ("MODIFIER LETTER DENTAL CLICK", "Lm", 0), ("MODIFIER LETTER LATERAL CLICK", "Lm", 0), ("MODIFIER LETTER ALVEOLAR CLICK", "Lm", 0), ("MODIFIER LETTER RETROFLEX CLICK WITH RETROFLEX HOOK", "Lm", 0), ("MODIFIER LETTER SMALL S WITH CURL", "Lm", 0), )
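The tuples above are indexed by offset from the start of the Unicode block (the filename `block-10780.typ` suggests the block begins at U+10780), with an empty tuple `()` marking an unassigned code point. A sketch of a lookup over this layout (Python, with an abbreviated copy of the first entries from the table above):

```python
# Abbreviated copy of the first entries of the table above; each tuple is
# (character name, general category, canonical combining class), and an
# empty tuple marks an unassigned code point within the block.
BLOCK_START = 0x10780  # inferred from the filename block-10780.typ

data = [
    ("MODIFIER LETTER SMALL CAPITAL AA", "Lm", 0),
    ("MODIFIER LETTER SUPERSCRIPT TRIANGULAR COLON", "Lm", 0),
    ("MODIFIER LETTER SUPERSCRIPT HALF TRIANGULAR COLON", "Lm", 0),
    ("MODIFIER LETTER SMALL AE", "Lm", 0),
    ("MODIFIER LETTER SMALL CAPITAL B", "Lm", 0),
    ("MODIFIER LETTER SMALL B WITH HOOK", "Lm", 0),
    (),  # offset 6 (U+10786) is unassigned, per the table above
]

def lookup(cp: int):
    """Return (name, category, combining class) for a code point, or None."""
    entry = data[cp - BLOCK_START]
    return entry if entry else None

print(lookup(0x10780)[0])  # MODIFIER LETTER SMALL CAPITAL AA
print(lookup(0x10786))     # None
```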
https://github.com/Servostar/dhbw-abb-typst-template
https://raw.githubusercontent.com/Servostar/dhbw-abb-typst-template/main/src/style.typ
typst
MIT License
// .--------------------------------------------------------------------------. // | Global style of document | // '--------------------------------------------------------------------------' // Author: <NAME> // Edited: 27.06.2024 // License: MIT #import "branding.typ": * #let watermark-color = luma(50%).transparentize(70%) #let watermark-pattern = pattern(size: (5pt, 5pt))[ #place( line( start: (50%, 0%), end: (50%, 100%), stroke: (paint: watermark-color, thickness: 3pt), ), ) ] #let watermark(config) = if config.draft { rotate(-22.5deg)[ #rect( radius: 1em, inset: 1em, stroke: watermark-color, )[ #text(size: 4em, weight: "bold", fill: watermark-pattern, "DRAFT") #linebreak() #text(size: 1.25em, weight: "bold", fill: watermark-color)[ This page is part of a preliminary #linebreak() document version. #linebreak() #text( size: 0.75em, "Further usage without the authors consent is not permitted.", )]]] } #let numberingH(c) = { return numbering(c.numbering, ..counter(heading).at(c.location())) } #let currentH(level: 1) = { let elems = query(selector(heading.where(level: level)).after(here())) if elems.len() != 0 and elems.first().location().page() == here().page() { return (numberingH(elems.first()), elems.first().body) } else { elems = query(selector(heading.where(level: level)).before(here())) if elems.len() != 0 { return (numberingH(elems.last()), elems.last().body) } } return "" } // global style of document #let global_styled_doc(config, body) = { let thesis = config.thesis let style = config.style set text( size: style.text.size, ligatures: true, hyphenate: true, dir: ltr, font: style.text.font, fill: ABB-BLACK, ) show heading: set text( font: style.heading.font, weight: "bold" ) let header-supplement = if config.lang == "de" { "Kapitel" } else { "chapter" } set heading(supplement: [#header-supplement]) set math.equation(numbering: "(1)") show math.equation: set text(font: "Fira Math", size: 12pt) // Set header spacing show heading.where(level: 1): it => v(2em) 
+ it + v(1em) show heading.where(level: 2): it => v(1em) + it + v(0.5em) show heading.where(level: 3): it => v(0.5em) + it + v(0.25em) // set theme for code blocks set raw(tab-size: style.code.tab-size) show raw.where(block: false): it => box( stroke: 1pt + ABB-GRAY-05, radius: 2pt, inset: (left: 2pt, right: 2pt), outset: (top: 4pt, bottom: 4pt), fill: ABB-GRAY-06, text(font: style.code.font, size: style.code.size, it.text), ) show figure.where(kind: raw): it => align(left)[ #let content = it.body #let lang = if content.has("lang") { it.body.lang } else { none } #block( width: 100%, fill: ABB-GRAY-06, stroke: 1pt + ABB-GRAY-05, radius: 0.5em, inset: 0.75em, clip: false, { let (columns, align, make_row) = { if style.code.lines { ( (auto, 1fr), (right + top, left), e => { let (i, l) = e let n = i + 1 let n_str = if (calc.rem(n, 1) == 0) or (true and i == 0) { text( font: style.code.font, size: style.code.size, fill: ABB-BLACK, str(n), ) } else { none } (n_str + h(0.5em), raw(block: true, lang: lang, l)) }, ) } else { ( (1fr,), (left,), e => { let (i, l) = e raw(block: true, lang: lang, l) }, ) } } grid( stroke: none, columns: columns, rows: (auto,), gutter: 0pt, inset: 0.25em, align: (col, _) => align.at(col), fill: ABB-GRAY-06, ..content .text .split("\n") .enumerate() .map(make_row) .flatten() .map(c => if c.has("text") and c.text == "" { v(1em) } else { c }) ) }, ) #v(-1em) #align(center + top, it.caption) ] show figure: set block(breakable: true) // make figure supplements bold // based on: https://github.com/typst/typst/discussions/3871 show figure.caption: c => [ #if c.body.fields().len() > 0 { text(weight: "medium")[ #c.supplement #c.counter.display("1.1.1") ] c.separator } #c.body ] // change the display supplement according to the text langugae // based on: https://github.com/typst/typst/issues/3273 show figure.where(kind: raw): set figure(supplement: context { if text.lang == "de" { "Quelltext" } else { "Listing" } }) // APA style table set table( inset: 
0.5em, align: left, stroke: (x, y) => ( left: none, right: none, top: if y == 0 { 1.5pt } else if y < 2 { 1pt } else { 0.5pt }, bottom: if y == 0 { 1pt } else { 1.5pt }, ), ) // make table header bold show table.cell.where(y: 0): set text(weight: "bold") set block(spacing: 2em) set par( justify: true, first-line-indent: 1em, leading: 1em, ) // give links a color show link: set text(fill: style.link.color) show ref: set text(fill: style.link.color) set heading(numbering: none) set page( paper: style.page.format, foreground: watermark(config), header-ascent: style.header.content-padding, footer-descent: style.header.content-padding, margin: ( top: style.page.margin.top + style.header.underline-top-padding + style .header .content-padding, bottom: style.page.margin.bottom + style.footer.content-padding, left: style.page.margin.left, right: style.page.margin.right, ), numbering: (..nums) => { let current-page = here().page() if current-page == 1 { [] } else if query(<end-of-prelude>) .first() .location() .page() > current-page { numbering("I", nums.pos().first()) } else if query(<end-of-content>) .first() .location() .page() >= current-page { numbering("1", nums.pos().first()) } else { numbering("a", nums.pos().first()) } }, footer: context [ #let page-counter = counter(page).get().first() #let page-number = here().page() #set align(center) #if page-number == 1 { [] } else if query(<end-of-prelude>).first().location().page() > page-number { set align(center) numbering("I", page-counter) } else if query(<end-of-content>).first().location().page() >= page-number { numbering( "1 / 1", page-counter, counter(page).at(<end-of-content>).last(), ) } else { numbering("a", page-counter) } ], header: context { set align(left) let current-page = here().page() if current-page == 1 { // logo of ABB and DHBW grid( // set width of columns // we need two, so make both half the page width columns: (50%, 50%), // left align logo of ABB align(left, image("res/ABB.svg", height: 
style.header.logo-height)), // right align logo of DHBW align(right, image("res/DHBW.svg", height: style.header.logo-height))) } else if query(<end-of-content>) .first() .location() .page() >= current-page and query(<end-of-prelude>) .first() .location() .page() < current-page + 1 { let heading = currentH() heading.at(0) h(0.5em) heading.at(1) v(style.header.underline-top-padding - 1em) line(length: 100%) } else { config.thesis.title v(style.header.underline-top-padding - 1em) line(length: 100%) } }, ) body } #let content_styled(config, body) = { set heading(numbering: "1.1.1") body } #let end_styled(config, body) = { set heading(numbering: "1.1.1") body }
https://github.com/MrToWy/Bachelorarbeit
https://raw.githubusercontent.com/MrToWy/Bachelorarbeit/master/Code/languageInterceptor.typ
typst
#import("../Template/customFunctions.typ"): * #codly( highlights:( (line:4, fill:red, label: <getLanguage>), (line:6, fill:blue, label: <refreshLanguage>), ), ) ```ts export class LanguageInterceptor implements HttpInterceptor { private language: string; constructor(private languageService: LanguageService) { this.language = this.languageService.languageCode; this.languageService.languageSubject.subscribe((language) => { this.language = language; }); } intercept(req: HttpRequest<any>, next: HttpHandler): Observable<HttpEvent<any>> { if (req.headers.has('language')) { return next.handle(req); } if (this.language) { const newRequest = req.clone({ setHeaders: { 'language': this.language.toUpperCase() } }); return next.handle(newRequest); } return next.handle(req); } } ```
https://github.com/VadimYarovoy/CourseWork
https://raw.githubusercontent.com/VadimYarovoy/CourseWork/main/typ/introduction.typ
typst
= Introduction

The main goal of this coursework is to develop skills in organising and planning the activities required to deliver a project aimed at improving a previously released software product. The project focuses on fixing bugs reported by users and introducing improvements to the user interface. In doing so, it is important to take the identified risks into account and to meet the established criteria for successfully completing the intermediate and final stages of the project.

#pagebreak()
https://github.com/typst/packages
https://raw.githubusercontent.com/typst/packages/main/packages/preview/unichar/0.1.0/ucd/block-2CEB0.typ
typst
Apache License 2.0
#let data = ( "0": ("<CJK Ideograph Extension F, First>", "Lo", 0), "1d30": ("<CJK Ideograph Extension F, Last>", "Lo", 0), )
https://github.com/AOx0/expo-nosql
https://raw.githubusercontent.com/AOx0/expo-nosql/main/themes/bristol.typ
typst
MIT License
//============================================== // University of Bristol theme for Typst slides. // Based on a previous version of <NAME>'s // UoB LaTeX Beamer template, found at // https://github.com/dawbarton/UoB-beamer-theme // ============================================= #import "../slides.typ": * #let bristol-theme( color: rgb(171, 31, 45), watermark: "bristol/logo.png", logo: "", secondlogo: "bristol/secondlogo.svg" ) = data => { let title-slide(slide-info, bodies) = { place(right, image(watermark, height:120%)) v(5%) grid(columns: (1fr, 5%), [], // align(bottom + left)[#image(logo, width:40%)], // align(bottom + right)[#image(secondlogo, width:40%)], []) v(-10%) align(left + horizon)[ #pad(left: 3em)[ #block( stroke: ( y: 0mm + color ), inset: 0em, breakable: false, [ #text(1.3em, color)[*#data.title*] \ #v(-2.5em)\ #{ if data.subtitle != none { parbreak() text(.9em, color)[#data.subtitle] } } ] ) #set text(size: .8em) #grid( columns: (1fr,) * calc.min(data.authors.len(), 3), column-gutter: 1em, row-gutter: 1em, ..data.authors ) #v(1em) #data.date ] ] } let default(slide-info, bodies) = { let body = none if bodies.len() == 1 { body = bodies.first() } else{ let colwidths = none let thisgutter = .2em if "colwidths" in slide-info{ colwidths = slide-info.colwidths if colwidths.len() != bodies.len(){ panic("Provided colwidths must be of same length as bodies") } } else{ colwidths = (1fr,) * bodies.len() } if "gutter" in slide-info{ thisgutter = slide-info.gutter } body = grid( columns: colwidths, gutter: thisgutter, ..bodies ) } let decoration(position, body) = { let border = 1mm + color let strokes = ( header: ( bottom: border ), footer: ( top: border ) ) grid(columns: (3%, 94%, 3%), [], block( // stroke: strokes.at(position), width: 100%, inset: (x:.5em, y:.7em), body ), []) } // header decoration("header", grid(columns: (1fr), align(right, grid(rows: (.5em, .5em), // text(color, .7em)[#data.short-title], // [], text(color, .7em)[#section.display()] ) ) ) ) 
if "title" in slide-info { block( width: 100%, inset: (x: 4.5%, y: -.5em), breakable: false, outset: 0em, heading(level: 1, text(color)[#slide-info.title]) ) v(.7em) } v(1fr) block( width: 100%, inset: (x: 2em), breakable: false, outset: 0em, body ) v(2fr) // footer decoration("footer")[ #h(1fr) #text(color, .6em)[#logical-slide.display()] ] } let wake-up(slide-info, bodies) = { if bodies.len() != 1 { panic("wake up variant of bristol theme only supports one body per slide") } let body = bodies.first() block( width: 100%, height: 100%, inset: 2em, breakable: false, outset: 0em, fill: color, text(size: 1.5em, fill: white, {v(1fr); body; v(1fr)}) ) } ( "title slide": title-slide, "default": default, "wake up": wake-up, ) }
https://github.com/maucejo/tutorial_template
https://raw.githubusercontent.com/maucejo/tutorial_template/main/template/main.typ
typst
MIT License
// #import "@preview/tutorial:0.1.0": *
#import "../src/tutorial.typ": *

#let corr = false

#show: tutorial.with(
  title: [Tutorial title]
)

#obj[
  + Objective 1
  + Objective 2
  + Objective 3
]

= Exercise title

== Exercise overview

@fig:bielle_manivelle shows a mechanism that converts a rotational motion into a translational motion. It is made up of two parts, the connecting rod and the crank. See @a.

#figure(
  image("images/bielle_manivelle3.svg"),
  caption: [Unparameterized kinematic diagram of a slider-crank system],
) <fig:bielle_manivelle>

#subfigure(
  figure(image("images/bielle_manivelle3.svg"), caption: []), <a>,
  figure(image("images/bielle_manivelle3.svg"), caption: []),
  columns: (1fr, 1fr),
  caption: [Unparameterized kinematic diagram],
)

== Questions

#question[#lorem(10)

#info[
  *Instruction*: Answer the following question.]
]

#let rep = [Answer to the question]

#correction(corr, rep)
https://github.com/ustctug/ustc-thesis-typst
https://raw.githubusercontent.com/ustctug/ustc-thesis-typst/main/chapters/intro.typ
typst
MIT License
= Introduction <简介>

== First-level section heading <一级节标题>

=== Second-level section heading <二级节标题>

==== Third-level section heading <三级节标题>

===== Fourth-level section heading <四级节标题>

====== Fifth-level section heading <五级节标题>

This template is a Typst template for undergraduate and graduate degree theses at the University of Science and Technology of China, written according to the requirements of the #link("https://gradschool.ustc.edu.cn/static/upload/article/picture/ce3b02e5f0274c90b9331ef50ae1ac26.pdf")[USTC Graduate Degree Thesis Writing Handbook] (hereafter "the Writing Handbook") and the #link("https://www.teach.ustc.edu.cn/?attachment_id=13867")[USTC Undergraduate Thesis (Design) Format].

Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum.

== Footnotes <脚注>

Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. #footnote[Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur.]
https://github.com/galaxia4Eva/galaxia4Eva
https://raw.githubusercontent.com/galaxia4Eva/galaxia4Eva/main/typst/Porco%20Dio/album_cover.typ
typst
#set page(width:142mm, height: 125mm, margin: 0mm) #set image(width:142mm, height: 125mm) #set page(background: image("images/Dio.png")) #set text(font:"Gutenberg Textura", size:23mm, fill: rgb("FF10F0")) #block(spacing:10mm)[ #align(center+top)[ Porco ] #align(center+horizon)[ Dio ] ]
https://github.com/LDemetrios/ProgLectures
https://raw.githubusercontent.com/LDemetrios/ProgLectures/main/06-clojure-basics.typ
typst
#import "kotlinheader.typ" : * #import "@preview/cetz:0.1.2" #let background = white #let foreground = black Единственное, что здесь действительно стоит заметить, так это то, что код представляет из себя набор вложенных S-выражений: $("функция" "аргумент1" "аргумент2" ...)$. Запятые ничем не отличаются от пробелов. Код также есть в файле basics.clj. Точки с запятой начинают отнострочные комментарии. #let clj-code(cd, pr:none, rs) = [ #nobreak[ #cd #if pr == none {} else { text(fill:rgb("#00aa00"), pr) } #raw(lang:"clojure", "=> " + rs.text) \ \ ] ] #let clj-errored(cd, err) = [ #nobreak[ #cd #text(fill:red, err) \ ] ] #let clj-res(x) = [#raw(lang:"clojure", "=> " + x.text)\ \ ] #let clj-print(x) = [`=> ` #x] #clj-code(``` 123 ```, ``` 123 ```) #clj-code(``` 1.0 ```, ``` 1.0 ```) #clj-code(``` (+ 1 2 3) ```, ``` 6 ```) #clj-code(``` (+) ```, ``` 0 ```) #clj-code(``` (+ 1, 2, 3) ```, ``` 6 ```) #clj-code(``` (- 5 2) ```, ``` 3 ```) #clj-code(``` (/ 3 2) ```, ``` 3/2 ```) #clj-code(``` (/ 6 4) ```, ``` 3/2 ```) #clj-code(``` 1.5 ```, ``` 1.5 ```) #clj-code(``` (= 1.5 3/2) ```, ``` false ```) #clj-code(``` (/ 3 2) ```, ``` 3/2 ```) #clj-code(``` 3/2 ```, ``` 3/2 ```) #clj-code(``` (* 3/2 2) ```, ``` 3N ```) #clj-code(``` 3 ```, ``` 3 ```) #clj-code(``` 3N ```, ``` 3N ```) #clj-code(``` (type 3/2) ```, ``` clojure.lang.Ratio ```) #clj-errored(``` (+ 8000000000000000000 8000000000000000000) ```, ``` Execution error (ArithmeticException) at java.lang.Math/addExact (Math.java:931) . long overflow ```) #clj-code(``` (+' 8000000000000000000 8000000000000000000) ```, ``` 16000000000000000000N ```) #clj-code(``` (type 3) ```, ``` java.lang.Long ```) #clj-code(``` (type 3N) ```, ``` clojure.lang.BigInt ```) #clj-errored(``` (/ 1 0) ```, ``` Execution error (ArithmeticException) at user/eval2017 (form-init17984610524906141998.clj:1) . 
Divide by zero ```) #clj-code(``` (/ 1.0 0.0) ```, ``` ##Inf ```) #clj-code(``` (+ 0.1 0.2) ```, ``` 0.30000000000000004 ```) #clj-code(``` "abc" ```, ``` "abc" ```) #clj-code(``` (type "abc") ```, ``` java.lang.String ```) #clj-code(``` \a ```, ``` \a ```) #clj-code(``` (type \a) ```, ``` java.lang.Character ```) #clj-code(``` \tab ```, ``` \tab ```) #clj-code(``` (list 1 2 3) ```, ``` (1 2 3) ```) #clj-errored(``` (1 2 3) ```, ``` Execution error (ClassCastException) at user/eval2047 (form-init17984610524906141998.clj:1) . class java.lang.Long cannot be cast to class clojure.lang.IFn (java.lang.Long is in module java.base of loader 'bootstrap' ; clojure.lang.IFn is in unnamed module of loader 'app') ```) #clj-code(``` (* (+ 2 3) 4) ```, ``` 20 ```) #clj-code(``` (str 1) ```, ``` "1" ```) #clj-code(``` (str "abc" "def") ```, ``` "abcdef" ```) #clj-code(``` (str (str 1) (str 2)) ```, ``` "12" ```) #clj-code(``` (str 1 2) ```, ``` "12" ```) #clj-code(``` (type (list 1 2 3)) ```, ``` clojure.lang.PersistentList ```) #clj-code(``` [1 2 3] ```, ``` [1 2 3] ```) #clj-code(``` (type [1 2 3]) ```, ``` clojure.lang.PersistentVector ```) #clj-code(``` (def x [1 2 3]) ```, ``` #'user/x ```) #clj-code(``` x ```, ``` [1 2 3] ```) #clj-code(``` user/x ```, ``` [1 2 3] ```) #clj-code(``` (nth x 1) ```, ``` 2 ```) #clj-code(``` (cons 4 x) ```, ``` (4 1 2 3) ```) #clj-code(``` (vec (cons 4 x)) ```, ``` [4 1 2 3] ```) #clj-errored(``` (conj 4 x) ```, ``` Execution error (ClassCastException) at user/eval2131 (form-init17984610524906141998.clj:1) . 
class java.lang.Long cannot be cast to class clojure.lang.IPersistentCollection (java.lang.Long is in module java.base of loader 'bootstrap' ; clojure.lang.IPersistentCollection is in unnamed module of loader 'app') ```) #clj-code(``` (conj x 4) ```, ``` [1 2 3 4] ```) #clj-code(``` (conj (list 1 2 3) 4) ```, ``` (4 1 2 3) ```) #clj-code(``` (assoc x 1 4) ```, ``` [1 4 3] ```) #clj-code(``` x ```, ``` [1 2 3] ```) #clj-code(``` (def x 2) ```, ``` #'user/x ```) #clj-code(``` x ```, ``` 2 ```) #clj-code(``` true ```, ``` true ```) #clj-code(``` false? ```, ``` #object[clojure.core$false_QMARK_ 0x192e860b "clojure.core$false_QMARK_@192e860b"] ```) #clj-code(``` false ```, ``` false ```) #clj-code(``` (not true) ```, ``` false ```) #clj-code(``` (and true false) ```, ``` false ```) #clj-code(``` (or true false) ```, ``` true ```) #clj-code(``` (if true 23 45) ```, ``` 23 ```) #clj-code(``` (def fact (fn [n] (if (< n 1) 1 (* n (fact (- n 1)))))) ```, ``` #'user/fact ```) #clj-code(``` (fact 5) ```, ``` 120 ```) #clj-code(``` (def fact (fn [n] (if (< n 1) 1 (* n (recur (- n 1)))))) ```, ``` Syntax error (UnsupportedOperationException) compiling recur at (/tmp/form-init17984610524906141998.clj:6:12) . Can only recur from tail position ```) #clj-code(``` (def fact (fn [n m] )) ```, ``` #'user/fact ```) #clj-code(``` (print 1) ```, pr:``` 1 ```, ``` nil ```) #clj-code(``` (def fact (fn [n m] (if (< n 1) m (recur (- n 1) (* n m))))) ```, ``` #'user/fact ```) #clj-code(``` (fact 5 1) ```, ``` 120 ```) #clj-code(``` (fact 5 2) ```, ``` 240 ```) #clj-code(``` (def factorial (fn [n] (fact n 1))) ```, ``` #'user/factorial ```) #clj-code(``` (factorial 6) ```, ``` 720 ```) #clj-code(``` (defn factorial [n] (fact n 1)) ```, ``` #'user/factorial ```) #clj-code(``` (filter (fn [x] (= 1 (rem x 2))) [1 2 3 4 5 6 7 8]) ```, ``` (1 3 5 7) ```) #clj-code(``` (filter #(= 1 (rem % 2)) [1 2 3 4 5 6 7 8]) ```, ``` (1 3 5 7) ```) #clj-code(``` (filter odd? 
[1 2 3 4 5 6 7 8]) ```, ``` (1 3 5 7) ```)

#clj-code(``` (filter even? [1 2 3 4 5 6 7 8]) ```, ``` (2 4 6 8) ```)

#clj-code(``` (map #(* % %) (range 8)) ```, ``` (0 1 4 9 16 25 36 49) ```)

#clj-errored(``` (range) ```, ``)

And... the REPL hangs. `range` actually returns an infinite sequence, which the REPL tries to turn into a string in order to print it. \ \

#clj-code(``` (take-while #(< % 128) (map #(* % %) (range))) ```, ``` (0 1 4 9 16 25 36 49 64 81 100 121) ```)

#clj-code(``` (type (take-while #(< % 128) (map #(* % %) (range)))) ```, ``` clojure.lang.LazySeq ```)

#clj-code(``` (map #(print %) (range 100)) ```, pr:``` 0123456789101112131415161718192021222324252627282930313233343536373839404142434445464748495051525354555657585960616263646566676869707172737475767778798081828384858687888990919293949596979899 ```, ```(nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil)```)

#clj-code(``` (take 1 (map #(print %) (range 100))) ```, pr:``` 012345678910111213141516171819202122232425262728293031 ```, ``` (nil) ```)

Where do these 32 numbers come from? `map` is lazy, but not fully lazy: it splits the sequence into chunks of 32 elements and realizes them on demand, a chunk at a time.
\ \ #clj-code(``` (defn square [x] (* x x)) ```, ``` #'user/square ```) #clj-code(``` (defn trice [x] (* x 3)) ```, ``` #'user/trice ```) #clj-code(``` (comp square trice) ```, ``` #object[clojure.core$comp$fn__5888 0x17af57d7 "clojure.core$comp$fn__5888@17af57d7"] ```) #clj-code(``` (def very-special-func (comp square trice)) ```, ``` #'user/very-special-func ```) #clj-code(``` (very-special-func 4) ```, ``` 144 ```) #clj-code(``` ((comp square trice) 4) ```, ``` 144 ```) #clj-code(``` (defn cube [x] (* x x x)) ```, ``` #'user/cube ```) #clj-errored(``` (-) ```, ``` Execution error (ArityException) at user/eval1974 (form-init10262687158678453236.clj:1) . Wrong number of args (0) passed to: clojure.core/- ```) #clj-code(``` (- 2) ```, ``` -2 ```) #clj-code(``` (- 5 3) ```, ``` 2 ```) #clj-code(``` (- 5 3 2) ```, ``` 0 ```) #clj-code(``` ((#(comp % %) #(* % %)) 3) ```, ``` 81 ```) #clj-code(``` ((comp) 3) ```, ``` 3 ```) #clj-code(``` (identity 3) ```, ``` 3 ```) #clj-code(``` ((constantly 5)) ```, ``` 5 ```) #clj-code(``` (constantly 5) ```, ``` #object[clojure.core$constantly$fn__5752 0x78ac6be6 "clojure.core$constantly$fn__5752@78ac6be6"] ```) #clj-code(``` (def const5 (constantly 5)) ```, ``` #'user/const5 ```) #clj-code(``` (const5) ```, ``` 5 ```) #clj-code(``` (const5 1 2 3 4 6 7) ```, ``` 5 ```) #clj-code(``` (+ 1 2 3) ```, ``` 6 ```) #clj-code(``` (def l (list 1 2 3)) ```, ``` #'user/l ```) #clj-code(``` (apply + l) ```, ``` 6 ```) #clj-code(``` (partial - 1) ```, ``` #object[clojure.core$partial$fn__5920 0x26b546ec "clojure.core$partial$fn__5920@26b546ec"] ```) #clj-code(``` (def func (partial - 1)) ```, ``` #'user/func ```) #clj-code(``` (func 3) ```, ``` -2 ```) #clj-code(``` (map #(+ 2) [1 2 3]) ```, ``` Error printing return value (ArityException) at clojure.lang.AFn/throwArity (AFn.java:429) . 
Wrong number of args (1) passed to: user / eval2046/fn--2047 ```) #clj-code(``` (map #(+ 2 %) [1 2 3]) ```, ``` (3 4 5) ```) #clj-code(``` (map (partial + 2) [1 2 3]) ```, ``` (3 4 5) ```) #clj-code(``` (map list [1 2 3] [4 5 6]) ```, ``` ((1 4) (2 5) (3 6)) ```)
https://github.com/jneug/typst-nassi
https://raw.githubusercontent.com/jneug/typst-nassi/main/README.md
markdown
MIT License
# nassi (v0.1.2) **nassi** is a package for [Typst](https://typst.app) to draw [Nassi-Shneiderman diagrams](https://en.wikipedia.org/wiki/Nassi–Shneiderman_diagram) (Struktogramme). ![](assets/example-1.png) ## Usage Import **nassi** in your document: ```typst #import "@preview/nassi:0.1.2" ``` There are several options to draw diagrams. One is to parse all code-blocks with the language "nassi". Simply add a show-rule like this: ````typst #import "@preview/nassi:0.1.2" #show: nassi.shneiderman() ```nassi function ggt(a, b) while a > 0 and b > 0 if a > b a <- a - b else b <- b - a endif endwhile if b == 0 return a else return b endif endfunction ``` ```` In this case, the diagram is created from a simple pseudocode. To have more control over the output, you can add blocks manually using the element functions provided in `nassi.elements`: ````typst #import "@preview/nassi:0.1.2" #nassi.diagram({ import nassi.elements: * function("ggt(a, b)", { loop("a > b and b > 0", { branch("a > b", { assign("a", "a - b") }, { assign("b", "b - a", fill: gradient.linear(..color.map.rainbow), stroke:red + 2pt ) }) }) branch("b == 0", { process("return a") }, { process("return b") }) }) }) ```` ![](assets/example-3.png) Since **nassi** uses **cetz** for drawing, you can add diagrams directly to a canvas. 
Each block gets a name within the diagram group to reference it in the drawing: ````typst #import "@preview/cetz:0.2.2" #import "@preview/nassi:0.1.2" #cetz.canvas({ import nassi.draw: diagram import nassi.elements: * import cetz.draw: * diagram((4,4), { function("ggt(a, b)", { loop("a > b and b > 0", { branch("a > b", { assign("a", "a - b") }, { assign("b", "b - a") }) }) branch("b == 0", { process("return a") }, { process("return b") }) }) }) for i in range(8) { content( "nassi.e" + str(i+1) + ".north-west", stroke:red, fill:red.transparentize(50%), frame:"circle", padding:.05, anchor:"north-west", text(white, weight:"bold", "e"+str(i)), ) } }) ```` ![](assets/example-cetz-2.png) This can be useful to annotate a diagram: ![](assets/example-cetz.png) See `assets/` for usage examples. ## Changelog ### Version 0.1.2 - Fix for deprecation warnings in Typst 0.12. ### Version 0.1.1 - Fixed labels option not working for branches in other elements. - Added `switch` statements (thanks to @Geronymos). ### Version 0.1.0 Initial release of **nassi**.
https://github.com/dashuai009/dashuai009.github.io
https://raw.githubusercontent.com/dashuai009/dashuai009.github.io/main/src/content/blog/001.typ
typst
#let date = datetime(
  year: 2022,
  month: 10,
  day: 4,
)
#metadata((
  "title": "2020ccpc-Weihai威海-D-ABC_Conjecture",
  "author": "dashuai009",
  description: "",
  pubDate: date.display(),
  subtitle: [CCPC, number theory],
))<frontmatter>

#import "../__template/style.typ": conf
#show: conf

#date.display();

// #outline()

= Problem statement

Given a positive integer c, determine if there exist positive integers a, b, such that a + b = c and rad(abc) < c, where

$ "rad" (n) = product_(p divides n\ p in upright("Prime")) p $

is the product of all distinct prime divisors of n.

== Constraints

$ 1 lt.eq.slant T lt.eq.slant 10, 1 lt.eq.slant c lt.eq.slant 10^(18) $

= Solution

- If $c = 1$, output no.
- If every prime factor of $c$ appears with exponent exactly 1 (that is, $c$ is squarefree, so $mu(c) eq.not 0$), output no.
- Otherwise $c$ has a prime factor with exponent at least 2; output yes.

$ c=p^2k, a=p k, b=p(p-1)k,\
a+b=c\
a b c= p^4 (p-1) k^3\
"rad"(a b c) = "rad"(p^4 (p-1) k^3) ="rad"(p(p-1)k) lt.eq p(p-1)k < p^2 k = c $

Based on this, first divide out of $c$ all of its prime factors $p$ below $10^6$; if any of them appears with exponent at least 2, output yes. Otherwise, in $c = p^2 k$ the remaining factor $k$ can only be 1, so it suffices to check whether $sqrt(c)$ is an integer and prime.

== Code

```cpp
#include <bits/stdc++.h>
#define LL long long
const int N = 1e6 + 20;
LL prime[N];
bool bo[N];
int pcnt;

namespace MillerRabin {
long long Mul(long long a, long long b, long long mo) {
    long long tmp = a * b - (long long)((long double)a / mo * b + 1e-8) * mo;
    return (tmp % mo + mo) % mo;
}
long long Pow(long long a, long long b, long long mo) {
    long long res = 1;
    for (; b; b >>= 1, a = Mul(a, a, mo))
        if (b & 1) res = Mul(res, a, mo);
    return res;
}
bool IsPrime(long long n) {
    if (n == 2) return 1;
    if (n < 2 || !(n & 1)) return 0;
    static const auto tester = {2, 3, 5, 7, 11, 13, 17, 19, 23};
    long long x = n - 1;
    int t = 0;
    for (; !(x & 1); x >>= 1) ++t;
    for (int p : tester) {
        long long a = p % (n - 1) + 1, res = Pow(a % n, x, n), last = res;
        for (int j = 1; j <= t; ++j) {
            res = Mul(res, res, n);
            if (res == 1 && last != 1 && last != n - 1) return 0;
            last = res;
        }
        if (res != 1) return 0;
    }
    return 1;
}
} // namespace MillerRabin

namespace PollardRho {
using namespace MillerRabin;
unsigned long long seed;
long long Rand(long long mo) { return (seed += 4179340454199820289ll) % mo; }
long long F(long long x, long long c, long long mo) { return (Mul(x, x, mo) + c) % mo; }
long long gcd(long long a, long long b) { return b ? gcd(b, a % b) : a; }
long long Get(long long c, long long n) {
    long long x = Rand(n), y = F(x, c, n), p = n;
    for (; x != y && (p == n || p == 1); x = F(x, c, n), y = F(F(y, c, n), c, n))
        p = x > y ? gcd(n, x - y) : gcd(n, y - x);
    return p;
}
void Divide(long long n, long long p[]) {
    if (n < 2) return;
    if (IsPrime(n)) {
        p[++*p] = n;
        return;
    }
    for (;;) {
        long long tmp = Get(Rand(n - 1) + 1, n);
        if (tmp != 1 && tmp != n) {
            Divide(tmp, p);
            Divide(n / tmp, p);
            return;
        }
    }
}
} // namespace PollardRho

// linear (Euler) sieve: every composite is crossed out exactly once,
// by its smallest prime factor
void makePrime() {
    for (int i = 2; i < N; ++i) {
        if (!bo[i]) {
            prime[++pcnt] = i;
        }
        for (int j = 1; j <= pcnt && i * prime[j] < N; ++j) {
            bo[i * prime[j]] = true;
            if (i % prime[j] == 0) { // stop once prime[j] divides i
                break;
            }
        }
    }
}

int main() {
    makePrime();
    int T;
    std::cin >> T;
    while (T--) {
        LL c;
        std::cin >> c;
        if (c == 1) {
            std::cout << "no\n";
        } else {
            bool flag = false;
            for (int i = 1; i <= pcnt; ++i) {
                int cnt = 0;
                while (c % prime[i] == 0) {
                    ++cnt;
                    c /= prime[i];
                }
                if (cnt > 1) {
                    flag = true;
                    break;
                }
            }
            if (flag) {
                std::cout << "yes\n";
            } else {
                // integer square root: plain sqrt() on a double can be off by one
                // for c close to 10^18, so correct the rounding explicitly
                long long p = (long long)sqrtl((long double)c);
                while (p > 0 && p * p > c) --p;
                while ((p + 1) * (p + 1) <= c) ++p;
                if (p * p == c && MillerRabin::IsPrime(p)) {
                    std::cout << "yes\n";
                } else {
                    std::cout << "no\n";
                }
            }
        }
    }
    return 0;
}
```
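As a quick sanity check of the construction above (for $c = p^2 k$, take $a = p k$ and $b = p(p-1)k$), here is a small Python sketch. It is not part of the original C++ solution — the `rad` and `check` helper names are hypothetical — and it computes rad by plain trial division, which is fine for small inputs:

```python
# Hypothetical sanity check (not part of the original solution):
# for c = p^2 * k, take a = p*k and b = p*(p-1)*k, so that a + b = c and
# rad(a*b*c) = rad(p*(p-1)*k) <= p*(p-1)*k < c.

def rad(n: int) -> int:
    """Product of the distinct prime divisors of n (trial division)."""
    r, d = 1, 2
    while d * d <= n:
        if n % d == 0:
            r *= d
            while n % d == 0:
                n //= d
        d += 1
    if n > 1:
        r *= n
    return r

def check(p: int, k: int) -> bool:
    """Build c = p^2 * k and verify the claimed witness pair (a, b)."""
    c = p * p * k
    a, b = p * k, p * (p - 1) * k
    assert a + b == c
    return rad(a * b * c) < c
```

For instance, `check(3, 5)` builds $c = 45$, $a = 15$, $b = 30$, and confirms $"rad"(15 dot 30 dot 45) = 30 < 45$.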
https://github.com/NathanBurgessDev/tabletop-war-game-helper
https://raw.githubusercontent.com/NathanBurgessDev/tabletop-war-game-helper/main/Interim%20Report/20363169-interim.typ
typst
#import "interimTemplate.typ": *
#set page(numbering: "1", number-align: center)
#show: diss-title.with(
  title: "Mixed Reality Tabletop War Game Assistant",
  author: "<NAME>",
  ID: "20363169",
  email: "<PASSWORD>",
  programme: "BSc Computer Science",
  module: "COMP3003",
)
#pagebreak()
#outline(title: "Table of Contents")

// Number all headings
#set heading(numbering: "1.1.")

// Tweak space above and below headings
#show heading: set block(above: 2em, below: 1.3em)

// Justified paragraphs
#set par(justify: true)

#pagebreak()

#import "@preview/wordometer:0.1.1": word-count, total-words
#show: word-count

In this document, there are #total-words words all up.

= Introduction and Motivation

Tabletop war-gaming is a popular hobby but, with a high barrier to entry, it remains niche and inaccessible to many.

The rules of tabletop war-games can be complex and difficult to learn. This can be daunting for new players, putting them off the hobby, as well as causing arguments between seasoned players over different rules interpretations. The most popular war-gaming systems are produced by _Games Workshop_ @gw-size. One of their more popular systems, _Warhammer 40k_, has a core rule-book of 60 pages @40k-rules, and even the simplified version of another game system, _Kill Team_, is a rather dense three-page spread @kt-lite-rules.

_Kill Team_ is a miniature war-game known as a "skirmish" game. This means the game is played on a smaller scale, with only \~20 miniatures on the table at one time. The aim of the game is to compete over objectives on the board for points. Players take turns activating a miniature and performing actions with it. These actions can involve moving to another location, shooting at an enemy or capturing an objective. The game uses dice to determine the results of your models engaging in combat with each other.

#figure(
  image("images/gallowdark.jpg", width:80%),
  caption:([An example of the _Kill Team_ tabletop game using the _Gallowdark_ terrain.
@gallowdark-image])
)

Video games help on-board new players by having the rules of the game enforced by the game itself. This project aims to bring this experience to tabletop war-gaming, specifically the _Kill Team Lite_ @kt-lite-rules system using the _Gallowdark_ setting. This is because the _Kill Team Lite_ rules are publicly available from _Games Workshop's_ website and the game is designed to be played on a smaller scale than other war games, making it a good candidate for a proof of concept. As well as this, the _Gallowdark_ @gallowdark setting streamlines the terrain used and removes verticality from the game, making implementation much simpler.

Developing a system that can accurately track the position of miniatures and terrain on a _Kill Team_ board would allow for the creation of various tools to assist in the game. For example, a digital game helper would shift the burden of rules enforcement from the players onto the system, allowing players to focus on the game itself or to make more informed game decisions by previewing the options available to them. Model tracking could also be used to record a game and view a replay of it, or for content creation, producing accurate board representations for viewers with visual effects.

#pagebreak()
== Related Work

Some companies, such as _The Last Game Board_ @last-game-board and _Teburu_ @teburu, sell specialist game boards to provide a mixed reality experience.

_The Last Game Board_ achieves this by using the touch screen to recognise specific shapes on the bottom of miniatures to determine location and identity. _The Last Gameboard_ is 17"x17"; as a result, the number of game systems which are compatible is limited. However, you can connect multiple systems together. The drawback of this is that the price point for the system is rather high, with boards starting at \~\$350.
#figure(
  image("images/theLastGameBoard.png", width:80%),
  caption: [The Last Game Board touchscreen tabletop system @last-game-board])

_Teburu_ @teburu instead takes an RFID-based approach, providing a base mat that allows you to connect squares containing RFID receivers and game pieces containing an RFID chip. _Teburu_ connects to a tablet device to provide the digital experience, as well as to multiple devices for individual player information. _Teburu_ games allow game pieces either to be in predetermined positions or within a vague area, i.e. within a room.

#figure(
  image("images/teburu.jpg", width:80%),
  caption:([The Teburu Game System @teburu-video showcasing _The Bad Karmas_ board game. The black board is the main game board. The squares above connect to the board below to transmit the RFID reader information back to the system for display.])
)

An RFID-based approach is also used by _<NAME>_ and _<NAME>_ in their paper @rfid-based, which places an antenna grid below the game board to detect RFID chips in pieces. This let them determine which chip is in range of which antenna, giving the general location of a game piece. This worked particularly well for larger models, where RFID chips could be placed far away from each other on the model. Using the known positions of the chips and the dimensions of the model, combined with which antennas those chips are in range of, allows an accurate position to be determined for each model. They also go into alternative RFID approaches, which will be discussed later when outlining the chosen methodology.
#figure(
  grid(
    columns: 2,
    image("images/rfid-plane.png",width:80%),
    image("images/rfid-software.png",width:80%)),
  caption: "An example of <NAME> and <NAME>'s approach depicting the antenna grid, RFID tags and physical model alongside the computer's prediction of the model's position" )

_Surfacescapes_ @surfacescapes is a system developed in 2009 by a group of master's students at Carnegie Mellon as a university project. _Surfacescapes_ uses a _Microsoft Surface Tabletop_ (the product line was rebranded to _PixelSense_ in 2011). This uses a rear-projection display and five near-IR cameras behind the screen @pixelsense-specs, allowing the _PixelSense_ to identify fingers, tags, and blobs touching the screen using the near-IR image. _Surfacescapes_ utilises this tag-sensing technology to track game pieces using identifiable tags on the bases of miniatures.

#figure(
  image("images/surfacescapes.jpg",width:80%),
  caption:([An example of _Surfacescapes_ in use on the _Microsoft Surface Tabletop_ @surfacescapes-images. The positions of the models have been tracked by the system and outlined with a green circle.]) )

_Foundry Virtual Tabletop_ @foundry is an application used to create fully digital _Dungeons and Dragons_ tabletops. These can be used either for remote play or for in-person play using a horizontal TV as a game board. _Foundry VTT_ allows for the creation of modules to add new functionality to your virtual tabletop. One such module is the _Material Plane_ @material-plane module, which allows the tracking of physical miniatures on a TV game board. This functions by placing each miniature on a base containing an IR LED, with an IR sensor then placed above the board. This can be configured either to move the closest "virtual" model to where the IR LED is or (with some internal electronics in the bases) to flash the IR LED in a pattern to attach different bases to specific models. An indicator LED is present to show when the IR LED is active.
#figure(
  image("images/foundry.jpg",width:60%),
  caption:([An example of one of the _Material Plane_ bases. A miniature would be attached to the top. @material-plane-github.]) )

// _STARS_ @STARS
// _TARboard_
// _UltraSound_

// Some of the systems previously mentioned use electronics embedded within the gameboard or within the game pieces to detect the position of minitures. As a result the average person would need specific hardware to use these systems.

== Project Description

This project aims to create a system that can track tabletop miniatures, in a game of _Kill Team Gallowdark_, using only materials easily accessible to the average miniature war-gamer, and then to use this system to implement a "helper" program for the game, providing a digital representation of the physical game state as well as rules enforcement. As a result, the project can be broken down into two main goals.

#set enum(numbering: "1.a.")
+ Detection of the models and terrain to create a virtual representation of the game board.
  + The position of the miniature must be tracked accurately.
  + The system must be able to distinguish between different miniatures.
  + The system must aim to be non-invasive to the miniatures.
  + The system must be agnostic to a model's shape and colour.
  + The system must be able to complete this task whilst being accessible to the average miniature war-gamer.
+ Implementing the game logic in the virtual board to guide players through the game.
  + Allow users to select a model and a subsequent action (normal move, shoot, dash, capture objective).
  + Calculate the distance a model can move and display this on the virtual board.
    + Account for terrain that blocks movement.
  + Calculate the line of sight between the selected model and opposing models, then display this on the virtual board.
    + Account for terrain that blocks line of sight.
  + Display information about the selected model's odds to hit a target.
= Methodology

Finding a method to detect and find the positions of miniatures on a game board, whilst not obstructing the game, is a challenge. The majority of the work done so far has been in researching and designing different approaches to this problem. This section will outline the different approaches that have been considered.

== RFID

An RFID approach is the somewhat obvious solution. This would involve embedding RFID chips underneath the bases of the miniatures. Some method of reading these chips would then need to be embedded either within or underneath the game board. There are a number of different approaches that could be taken to locating RFID chips, which have been outlined by _<NAME>_ and _<NAME>_ @rfid-based in their work on a similar project.

RFID solutions would require either an antenna grid underneath the game board or multiple individual RFID readers. This would be a viable option, as hiding an antenna grid below a board is a relatively simple and unobtrusive task. The same goes for hiding RFID readers beneath a table. The main drawback of RFID is that a reader (at its core) can only detect whether a chip is in range or not (referred to as "absence and presence" results). Due to this, some extra methodology would need to be implemented to determine the position of a chip.

One approach involves increasing the range of an RFID reader to its maximum and then repeatedly reducing it. Using the known ranges of the readers, it would be possible to deduce which tags are within range of which reader and subsequently, from the ranges at which each tag is detectable, deduce the position of the RFID chip. This approach could in theory provide a reasonably accurate position of the RFID chip. However, it would take a long time to update, as each RFID reader would need to perform several read cycles.
Combined with problems caused by interference between the chips / readers, the fact that variable signal strength is not a common feature of RFID readers, and the need for multiple readers, this approach does not meet the requirements for this project.

Another approach measures the signal strength received from an RFID chip and estimates the distance from the reader to the chip, known as received signal strength indication (RSSI). This approach is much quicker than the previous method, needing only a single read cycle to determine distance. Most modern RFID readers can report RF phase upon tag reading. However, current RSSI methods have an error range of \~60cm caused by noisy data, false positives / negatives and NLOS (non-line-of-sight) conditions @RSSI. This won't work for this project given that the board size is 70 x 60 cm.

Trilateration is a process in which multiple receivers use the time-of-arrival signal from a tag to determine its position. This suffers from similar problems to other RFID methods in that it produces an area in which the tag could exist, as opposed to its exact position. Combined with the need for three RFID readers, this approach fails to be accessible to the target audience.

The approach previously mentioned in _<NAME>_ and _<NAME>_'s paper, which utilised an antenna grid below the game board, seemed promising.

#figure(
  image("images/rfidCircle.png",width:60%),
  caption:([An example from _<NAME>_ and _<NAME>_'s paper @rfid-based describing their overlapping approach. The black square represents the RFID tag whilst the grey circles show the read area of each RFID reader. The intent is to use which readers are in range of the tag to determine an "area of uncertainty" for the tag's location.]) )

This approach worked rather well for _<NAME>_ and _<NAME>_ as they were attempting this for _Warhammer 40k_, a game played on a much bigger board, typically with larger models.
As a result, they were able to place RFID tags across larger models, such as vehicles, and use the known position of each tag relative to the model together with the estimated positions from the RFID readers to determine not only an accurate position but also orientation. Unfortunately for this project, the size of the miniatures in _Kill Team_ would require the use of one tag, or two in very close proximity.

A potentially valid RFID method is the approach outlined by _<NAME>, <NAME>, <NAME>, <NAME>, and <NAME>_ in their paper @rfid-tracking, called _TrackT_, which uses a first-order Taylor series approximation to achieve mm-level accuracy. However, this approach is highly complex and can only track a small number of tags at a time.
// READ THIS AND EXPLAIN

== Machine / Deep Learning

// - YOLO?
// - Lots of different ML approaches
// - Size of dataset must be big
// - Problems with identifying specific minitures
// - Problematic when minitures will have similar paint schemes and poses - could be solved by looking at last known positions?
// - Difficulty in keeping objects "unique"

Modern object tracking systems are often based on a machine learning or deep learning approach. In this case, a classifier model would be used to identify each unique miniature on a team and then locate it within the game board. The biggest drawback to this approach is the amount of training data needed for each class in a classifier. According to _Darknet's_ documentation (a framework for neural networks such as YOLO) @yolo, each class should have \~2000 training images. Since each user will use their own miniatures, posed and painted in their own way, they would have to create their own dataset for their miniatures and train the model on it each time. As a user's miniatures are likely to follow a similar paint scheme, ML classification could potentially struggle to distinguish between miniatures from top down and far away if not enough training data is supplied.
== ARKit

_Apple's_ ARKit supports the scanning and detection of 3D objects @ARKit by finding 3D spatial features on a target. Currently, any phone running iOS 12 or above is capable of utilising ARKit. In this system, you could scan in your models, then use ARKit to detect them from above. This information could then be conveyed to the main system. This could also allow the system to be expanded in the future to use a side-on camera as opposed to just top-down detection, allowing for verticality within other _Kill Team_ game systems. Combined with certain Apple products having an inbuilt LIDAR sensor (such as the _iPhone 12+ Pro_ and _iPad Pro_ 2020 - 2022), which further enhances ARKit's detection, this could be a viable approach.

#figure(
  image("images/LIDAR.png", width:80%),
  caption:([Registering an object in ARKit @ARKit]) )

After some testing using an iPad Pro with a LIDAR scanner, some drawbacks were found with this approach. The object detection required you to be too close to the models to detect them, and when a model was removed and re-added, ARKit either took a long time to re-locate it or failed to do so at all. As a result, I won't be using this approach for the project. For future work, though, this could be an interesting option to explore, allowing AR features such as highlighting models on your phone or tablet, or displaying information above a model.

== Computer Vision without neural networks

After considering all the options above, the best method found was using computer vision techniques. One technique is blob detection. A blob is defined as a group of bright or dark pixels contrasting against a background; in this case the miniatures would be the blobs. Blob analysis would allow locating the general position of miniatures (as they would very clearly stick out from the background), but getting their exact center may then prove difficult (which would be needed for calculating movement and line of sight).
Combined with terrain adding clutter to the image and the miniatures potentially being any colour, this approach could prove difficult to implement in the given time frame.

The computer vision method explored the most was using coloured tags to identify miniatures. The use of coloured tags would also provide known measurements, allowing an accurate location to be discerned. However, some external modification to the game board or miniatures would need to be made. The challenge here is finding a way to do this without obstructing the flow of the game or the models themselves.

// herefore, video analysis certainly is one of the most promising
// techniques for years to come. @rfid-based

= Design and Implementation

The general design of the system places a webcam or camera above the game board. Miniatures would have easily recognisable tags attached to allow for easy identification via computer vision. The tag would need to be unobtrusive to the game (big enough to be seen by the camera but small enough so as not to get in the way of the game, whilst also not looking too "out of place"). This ruled out sticking QR codes on top of miniatures, or attaching them to wires pointed upwards from a miniature's base. This meant a unique design was needed.

Before attempting to find tags, some viewpoint / perspective correction is needed. The end goal is to get a bird's-eye view of the game board; however, a camera placed above the board will produce a slightly distorted view of it. This process involves finding some set reference points on the board, in this case the corners, and using these to perform a perspective transform, creating a top-down view of the image. This means that when the tags are located, their locations can be provided directly to the game board display.

The detection problem can be split into two components: locating and identifying the miniature.
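The corner-based perspective correction described above can be sketched as follows. This is a hypothetical illustration rather than the project's code: in practice OpenCV's `cv2.getPerspectiveTransform` and `cv2.warpPerspective` would be used directly, but the NumPy version below shows the underlying computation. The corner coordinates are invented for the example.

```python
import numpy as np

def homography_from_corners(src, dst):
    """Estimate the 3x3 perspective transform mapping the four detected
    board corners (src) onto a rectangular top-down view (dst)."""
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        # Two linear equations per point correspondence, fixing H[2][2] = 1.
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h = np.linalg.solve(np.array(A, float), np.array(b, float))
    return np.append(h, 1.0).reshape(3, 3)

def warp_point(H, pt):
    """Apply the homography to a single (x, y) point."""
    x, y, w = H @ np.array([pt[0], pt[1], 1.0])
    return (x / w, y / w)

# Hypothetical example: a skewed camera view of the board mapped to a
# 700 x 600 pixel top-down image.
corners = [(12, 18), (630, 42), (655, 470), (5, 440)]  # detected corners in the camera image
target = [(0, 0), (700, 0), (700, 600), (0, 600)]      # desired top-down view
H = homography_from_corners(corners, target)
```

Once `H` is known, every located tag centre can be pushed through `warp_point` so its position lands directly in the game board's coordinate system.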
Tackling the location problem first seemed the most sensible approach before moving on to identification. In the end, it was decided to 3D print a small rim that could fit around the base of a miniature. This rim would be a highly contrasting colour to the background and miniature. The benefit of 3D printing the rim is that the colours and dimensions are customisable by the user. Miniature war-gamers paint their models in a variety of different ways, so having a rim whose colour they can decide themselves keeps the system accessible to the target audience and open to many different miniature sizes and colours.

#figure(
  image("images/circleYellow.jpg",width:50%,),
  caption: ([An example of the basic contrasting rim design.]) )

Some important observations here are that models may poke over the edge of the rim and that models will block the rim due to parallax, so a rim detection system will need to find a full circle from only a partial amount of visible rim. When terrain is introduced, this will cause further issues. The proposed fixes for this will be discussed later.

The next step was to find a way to detect the rim. The best solution found for this was _openCV_ @openCV. An alternative image processing suite would have been _MATLAB_, but _openCV_ can be used directly within _Python_. As a result, the final system could be written entirely within _Python_, significantly streamlining the development process without needing to connect _MATLAB_ to an external program for the game board logic / display. This led to the development of a processing method which locates the center point of circles of a specific colour in a provided image. The image processing steps taken are:

// REMEMBER TO QUOTE THIS: https://docs.opencv.org/4.x/da/d53/tutorial_py_houghcircles.html

+ Filter the image to only contain the colour of the rim (in this case yellow).
+ Grayscale the image.
+ Apply a blur to the image (this helps to remove noise from the image).
+ Apply Hough circle detection @hough-circles to the processed image.
+ Draw the detected circles onto the original image.

It is important to note that Hough circle detection can easily mistake two close but separate circles for one circle. As a result, the Hough circle detection function in _openCV_ has several arguments that will need to be tweaked as the system is developed. Through some testing, we were able to locate individual circles even with terrain pieces blocking parts of the rim.

#figure(
  image("images/detection.png",width:80%),
  caption: ([Circle detection on a gameboard with example terrain.]) )

Parallax is a problem that still needs to be addressed. Through testing, it was found that with a standard game board size the parallax was too great to expect to locate a rim. As a result, we have settled on a two-camera solution. This solves the parallax problem by having each camera watch a different half of the game board. However, one problem with this approach is that calibration will be needed to ensure that the overlap between the cameras does not cause issues. This could be achieved by calibrating the cameras to the game board, which will be of a known size, by placing tags on each corner of the board. Alternatively, the middle of the board could be marked with a piece of tape, or with tags on the edges. This should then allow the two images to be stitched together accurately.

Because this project aims to follow the _Kill Team Gallowdark_ rule set, the terrain pieces used are placed along a grid. This prevents a situation where a terrain piece is very close to the edge of the board with a miniature placed behind it, resulting in the miniature being blocked from view.

// PROVIDE SOME IMAGE TO DESCRIBE THIS.
// CITE OPEN CV AND MATLAB

// MinDist - min distance between the centers of two circles,
// param1 - sensitivity of the canny edge detection -
// param2 - how close to a full circle is needed for it to be detected - number of edge points needed to declare a circle.
// minRadius + maxRadius - pretty self explanatory - closer or further to camera

// - One potential tracking approach is explored by this video:
// https://www.youtube.com/watch?v=RaCwLrKuS1w
// - Looks at a circle in the newest frame that is closest in position and size to the tracked circle in the last frame.
// - We can exploit the fact that _Kill Team_ is turn based such that only one model can move at a time. Due to this we can compare the differences between the previous image of the game state and the current game state to identify which model has been moved.
// - This can assist in making up for shortcomings in the identification methods.

Once we have located a model, we need to identify it. In our implementation of _Kill Team_ we will assume that each player has up to 14 models on their team (known as "operatives"). As a result, we need an encoding capable of representing at least 28 different options. Another rim will be placed around the bright contrasting rim from before. This rim will be split into six sections. The first section will indicate the start of the encoding and will use a unique colour. The other five sections will use two different colours to represent 1 or 0. Once a rim has been located, the system will determine the orientation of the encoding circle. As the size is known, we can then rotate around the image of the encoding circle, reading each section to eventually get a binary number. Each binary number will then represent a unique model on the board.

#figure(
  image("images/circle.png",width:60%),
  caption: ([An example of what the encoding circle could look like. The red section indicates the start of the encoding.
The system would be able to determine which direction to read in based on whether the start encoding section is in the bottom or top half of the image.]) )

Another encoding method that was considered was to use one rim for both detection and identification. By using one rim of known colours, the image could be split into three images, each containing only one of the segment colours. Each image would then be converted to black and white and recombined to create a single image with the segments connected, forming a full circle. After circle detection has been performed, the system could then read the encoding as normal. This approach would allow the rims of the miniatures to be significantly smaller and only require three contrasting colours. One problem with this approach is that vision of the entire circle is needed to read the encoding; if it is partially obscured, the system will be unable to identify the model. This could be counteracted by splitting the encoding into quarters around the rim, so that as long as 1/4 of the encoding is visible, the system can correctly identify it. We can also exploit _Kill Team_ being a turn-based game and simply compare the current game state with the previous game state to identify which pieces have moved. A combination of active tracking and being able to determine which piece has moved should allow for a robust system.

Terrain detection will be handled with two possible solutions: the user can either select predetermined terrain set-ups, or the system can detect the terrain pieces. The current approach for terrain detection is to use QR codes on the top of each terrain piece. In _Gallowdark_, the size and shape of each terrain "module" is already known. As a result, the QR code can be used to determine the type of terrain and its orientation. Since the terrain type establishes the size and shape, the piece can then be filled in on the virtual board.
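A minimal sketch of how the six-section encoding could be decoded once the sections have been sampled. The helper and the colour names are hypothetical; it assumes an upstream step has already extracted the dominant colour of each section, in a consistent rotational order.

```python
# Illustrative colour roles for the encoding rim described above.
START, ZERO, ONE = "red", "black", "white"

def decode_rim(sections):
    """Rotate the six sampled sections so the start marker comes first,
    then read the remaining five sections as a binary model ID."""
    if len(sections) != 6 or sections.count(START) != 1:
        return None                        # unreadable: start marker missing or duplicated
    i = sections.index(START)
    rotated = sections[i:] + sections[:i]  # start marker now at position 0
    bits = "".join("1" if s == ONE else "0" for s in rotated[1:])
    return int(bits, 2)                    # 5 bits -> IDs 0..31, enough for 28 operatives

# e.g. ["white", "red", "black", "white", "white", "black"] rotates to
# ["red", "black", "white", "white", "black", "white"], giving bits "01101" = ID 13.
```

Returning `None` for an unreadable rim is where the turn-based fallback described above would take over, comparing the current and previous game states to infer which piece moved.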
Once the system has correctly identified and located each model and the terrain, a digital representation of the game board can be created.

#figure(
  image("images/board-suggestion.png"),
  caption: ([A proposed GUI for the game helper.]) ) <GUI>

In @GUI, the red semi-circle represents the valid area of movement for the selected operative. This can be calculated by splitting the game board into a very fine grid; a pathfinding algorithm (Dijkstra's, A\*, etc.) can then be applied to the grid from the operative's starting point, calculating all grid positions reachable using a movement value less than or equal to the operative's available movement characteristic. Red circles around opposing operatives show they are in line of sight. This would be calculated using simple ray casting. A white and red circle indicates a selected opposing operative; this allows the statistics on the right-hand side of the screen to be specific to that operative. The information for the selected operative is displayed below the game board. A user can select the action they have performed / want to perform on the right-hand side. The GUI will be built in _PyQt_ @pyqt, a _Python_ binding for the _Qt_ framework.

One potential stretch goal is for the GUI and the detection system to be separate entities. For example, the detection system should be able to detect and track pieces without input from the virtual game board (i.e. which model is selected). This would allow other detection systems to be swapped in in the future, or other applications to be developed using the detection system.

= Progress

== Project Management

As per the initial Project Proposal, the original goal for the first semester was to have ring detection and terrain detection completed. Currently, simple ring detection has been completed. This still needs to be expanded to include identification, though work has begun on this.
Terrain detection has been pushed back to allow more time to build a good detection system for the miniatures. Tackling the ring detection first was a good choice. This allowed me to encounter larger issues and provide solutions to them whilst still in the research stage of the project, and to develop my computer vision skills and take approaches I would not otherwise have considered, for example using a single rim of multiple colours and combining the images to create a full circle for identification.

Supervisor meetings were held every week to check for blockers or advice on reports. Uniquely, our supervisor meetings were conducted as a group with the other two dissertation students my supervisor had. As all three of us were working on tabletop-game-based projects, we were all knowledgeable in the area. This meant we could provide feedback or ideas for each other's projects from a student perspective.

I found the Gantt chart to be unhelpful in managing the project. Gantt charts work well in well-structured work environments; however, this semester had a large number of other courseworks and commitments that needed to be balanced, so an agile approach of doing work when time was available was much more effective. Development work is much more effective when done in long, uninterrupted sessions, but with this semester's courseworks and lectures being very demanding at 70 credits, finding long swathes of uninterrupted time between lectures, courseworks, tutorials etc. was very difficult. As a result, development was done in smaller, more frequent chunks, which were not as effective as longer, less frequent ones. I expect this to change next semester when the workload decreases to 50 credits. A much-preferred method, which I will likely go ahead with, is Kanban. I personally found the most use from the Gantt chart in that the project is naturally broken up into sub-tasks.
This approach of sub-tasking, combined with a less structured agile approach, is something Kanban works really well for, and I have found it effective in the past in GRP (COMP2002). In the interest of showing current progress in comparison to the original plan, I have included an updated Gantt chart (@firstGantt) along with the previous one (@secondGantt).

Looking at the time remaining, I will focus on getting the main parts of the system functional. These are: model detection and tracking, terrain detection, the virtual game board representation, movement preview and line of sight preview. If there is time available, I will also aim to implement the flow of the game (breaking it down into each phase, providing guidance on what to do in each phase, statistics etc.). The most technically interesting parts of this are the virtual game board and the tracking technology, so placing a focus on having them work fluidly is more important to the dissertation.

#import "@preview/timeliney:0.0.1"

#page(flipped: true)[
  #figure(
    timeliney.timeline(
      show-grid: true,
      {
        import timeliney: *
        headerline(group(([*January*], 4)),group(([*February*], 4)),group(([*March*], 4)),group(([*April*], 4)),group(([*May*],3)))
        headerline(
          group(..range(4).map(n => strong(str(n + 1)))),
          group(..range(4).map(n => strong(str(n + 1)))),
          group(..range(4).map(n => strong(str(n + 1)))),
          group(..range(4).map(n => strong(str(n + 1)))),
          group(..range(3).map(n => strong(str(n + 1)))),
        )
        taskgroup(title: [*Other Commitments*], {
          task("MDP Coursework",(0,3.5), style: (stroke: 2pt + gray))
          task("HAI Coursework", (0, 1), style: (stroke: 2pt + gray))
          task("Ethics Essay",(0,3.5),style: (stroke: 2pt+gray))
          task("SEM Coursework", (6,12), style: (stroke: 2pt + gray))
        })
        taskgroup(title: [*Initial Write-up*], {
          task("Interim Report", (0, 0.25), style: (stroke: 2pt + gray))
        })
        taskgroup(title: [*Computer Vision*], {
          task("Multi Ring Tracking",(4,5),style: (stroke: 2pt + gray))
          task("Ring Identification", (5,7), style:
(stroke: 2pt + gray)) task("Terrain Detection", (7, 8), style: (stroke: 2pt + gray)) }) taskgroup(title: [*Game Logic*], { task("Game Board Framework",(8,10), style: (stroke: 2pt + gray)) task("Movement",(10,11), style: (stroke: 2pt + gray)) task("Line of Sight",(11,12), style: (stroke: 2pt + gray)) }) taskgroup(title: [*Final Write-up*], { task("Dissertation", (11, 14), style: (stroke: 2pt + gray)) task("Presentation Prep", (14, 18.5), style: (stroke: 2pt + gray)) }) milestone( at: 0.25, style: (stroke: (dash: "dashed")), align(center, [ *Interim Report Submission*\ Jan 2023 ]) ) milestone( at: 14, style: (stroke: (dash: "dashed")), align(center, [ *Dissertation Submission*\ Apr 2024 ]) ) milestone( at: 18.5, style: (stroke: (dash: "dashed")), align(center, [ *Project Presentation*\ May 2024 ]) ) } ), caption: ([The Updated Gantt chart]) ) <firstGantt> ] #page(flipped: true)[ #figure( timeliney.timeline( show-grid: true, { import timeliney: * headerline(group(([*October*], 4)), group(([*November*], 4)),group(([*December*], 4)),group(([*January*], 4)),group(([*February*], 4)),group(([*March*], 4)),group(([*April*], 4)),group(([*May*],3))) headerline( group(..range(4).map(n => strong(str(n + 1)))), group(..range(4).map(n => strong(str(n + 1)))), group(..range(4).map(n => strong(str(n + 1)))), group(..range(4).map(n => strong(str(n + 1)))), group(..range(4).map(n => strong(str(n + 1)))), group(..range(4).map(n => strong(str(n + 1)))), group(..range(4).map(n => strong(str(n + 1)))), group(..range(3).map(n => strong(str(n + 1)))), ) taskgroup(title: [*Other Commitments*], { task("MDP Coursework", (4, 7),(7.2,13.75), style: (stroke: 2pt + gray)) task("HAI Coursework", (2, 10), style: (stroke: 2pt + gray)) task("Ethics Essay",(1,4),(6,9),(10.5,13.75),style: (stroke: 2pt+gray)) task("SEM Coursework", (18, 24), style: (stroke: 2pt + gray)) }) taskgroup(title: [*Initial Write-up*], { task("Project Proposal", (2,4), style: (stroke: 2pt + gray)) task("Ethics Checklist", (4, 
5), style: (stroke: 2pt + gray))
          task("Interim Report", (7.5, 9.5), style: (stroke: 2pt + gray))
        })
        taskgroup(title: [*Computer Vision*], {
          task("Ring Detection", (3, 6), style: (stroke: 2pt + gray))
          task("Terrain Detection", (6, 8), style: (stroke: 2pt + gray))
        })
        taskgroup(title: [*Game Logic*], {
          task("Game Board Framework",(13.75,16), style: (stroke: 2pt + gray))
          task("Movement",(16,18), style: (stroke: 2pt + gray))
          task("Line of Sight",(18,19), style: (stroke: 2pt + gray))
        })
        taskgroup(title: [*Final Write-up*], {
          task("Dissertation", (22, 26), style: (stroke: 2pt + gray))
          task("Presentation Prep", (26, 30), style: (stroke: 2pt + gray))
        })
        milestone(
          at: 10,
          style: (stroke: (dash: "dashed")),
          align(center, [
            *Interim Report Submission*\ Dec 2023
          ])
        )
        milestone(
          at: 26,
          style: (stroke: (dash: "dashed")),
          align(center, [
            *Dissertation Submission*\ Apr 2024
          ])
        )
        milestone(
          at: 30.5,
          style: (stroke: (dash: "dashed")),
          align(center, [
            *Project Presentation*\ May 2024
          ])
        )
      }
    ),
    caption: ([The original Gantt chart])
  )<secondGantt>
]

== Contributions and Reflections

Something I underestimated was the amount of time needed to research and determine which method should be used for model detection. A large amount of time was spent determining the viability of different approaches that might make the system more robust and easier to produce. As a result, development started later and took longer than expected. The biggest impact on this project was needing an extension on other coursework deadlines. This extension had a knock-on effect on other courseworks, which took priority over this project. The timings allotted in the Gantt chart did not take extensions into account. However, I am happy with the progress made so far. Having completed the methodology to find the circular tags, even when partially obstructed, is very promising for the viability of the chosen approach.
As a result, I believe that a solid strategy has been chosen for model detection and identification, and the work done so far has successfully laid a good foundation for the project.

=== LSEPI

The main LSEPI issue here is copyright, as _Kill Team_ is a copyrighted entity owned by _Games Workshop_. This project handles this by having the system implement the _Kill Team Lite_ rule set previously mentioned, which is made publicly available by _Games Workshop_. One issue is that different "_Kill Teams_" (groups of operatives) have different and unique rules, as well as their own statistics pages. These are copyrighted and cannot be directly implemented. As a result, this project will implement basic operatives with fake statistics pages based on the publicly published _Kill Team_ information. If this proves to be problematic, then the game rules will be based on a very similar system, _Firefight_ @one-page. This is published by _One Page Rules_, who produce free-to-use, public miniature war-game systems.

= Bibliography

#bibliography(title: none, style: "ieee", "interim.yml")

// @article{article,
//   author = {<NAME>. and <NAME> <NAME>},
//   year = {2005},
//   month = {01},
//   pages = {0-4},
//   title = {TARBoard: Tangible Augmented Reality System for Table-top Game Environment},
//   volume = {5},
//   journal = {Personal Computing}
// }
https://github.com/typst/packages
https://raw.githubusercontent.com/typst/packages/main/packages/preview/unichar/0.1.0/ucd/block-10400.typ
typst
Apache License 2.0
#let data = ( ("DESERET CAPITAL LETTER LONG I", "Lu", 0), ("DESERET CAPITAL LETTER LONG E", "Lu", 0), ("DESERET CAPITAL LETTER LONG A", "Lu", 0), ("DESERET CAPITAL LETTER LONG AH", "Lu", 0), ("DESERET CAPITAL LETTER LONG O", "Lu", 0), ("DESERET CAPITAL LETTER LONG OO", "Lu", 0), ("DESERET CAPITAL LETTER SHORT I", "Lu", 0), ("DESERET CAPITAL LETTER SHORT E", "Lu", 0), ("DESERET CAPITAL LETTER SHORT A", "Lu", 0), ("DESERET CAPITAL LETTER SHORT AH", "Lu", 0), ("DESERET CAPITAL LETTER SHORT O", "Lu", 0), ("DESERET CAPITAL LETTER SHORT OO", "Lu", 0), ("DESERET CAPITAL LETTER AY", "Lu", 0), ("DESERET CAPITAL LETTER OW", "Lu", 0), ("DESERET CAPITAL LETTER WU", "Lu", 0), ("DESERET CAPITAL LETTER YEE", "Lu", 0), ("DESERET CAPITAL LETTER H", "Lu", 0), ("DESERET CAPITAL LETTER PEE", "Lu", 0), ("DESERET CAPITAL LETTER BEE", "Lu", 0), ("DESERET CAPITAL LETTER TEE", "Lu", 0), ("DESERET CAPITAL LETTER DEE", "Lu", 0), ("DESERET CAPITAL LETTER CHEE", "Lu", 0), ("DESERET CAPITAL LETTER JEE", "Lu", 0), ("DESERET CAPITAL LETTER KAY", "Lu", 0), ("DESERET CAPITAL LETTER GAY", "Lu", 0), ("DESERET CAPITAL LETTER EF", "Lu", 0), ("DESERET CAPITAL LETTER VEE", "Lu", 0), ("DESERET CAPITAL LETTER ETH", "Lu", 0), ("DESERET CAPITAL LETTER THEE", "Lu", 0), ("DESERET CAPITAL LETTER ES", "Lu", 0), ("DESERET CAPITAL LETTER ZEE", "Lu", 0), ("DESERET CAPITAL LETTER ESH", "Lu", 0), ("DESERET CAPITAL LETTER ZHEE", "Lu", 0), ("DESERET CAPITAL LETTER ER", "Lu", 0), ("DESERET CAPITAL LETTER EL", "Lu", 0), ("DESERET CAPITAL LETTER EM", "Lu", 0), ("DESERET CAPITAL LETTER EN", "Lu", 0), ("DESERET CAPITAL LETTER ENG", "Lu", 0), ("DESERET CAPITAL LETTER OI", "Lu", 0), ("DESERET CAPITAL LETTER EW", "Lu", 0), ("DESERET SMALL LETTER LONG I", "Ll", 0), ("DESERET SMALL LETTER LONG E", "Ll", 0), ("DESERET SMALL LETTER LONG A", "Ll", 0), ("DESERET SMALL LETTER LONG AH", "Ll", 0), ("DESERET SMALL LETTER LONG O", "Ll", 0), ("DESERET SMALL LETTER LONG OO", "Ll", 0), ("DESERET SMALL LETTER SHORT I", "Ll", 0), ("DESERET 
SMALL LETTER SHORT E", "Ll", 0), ("DESERET SMALL LETTER SHORT A", "Ll", 0), ("DESERET SMALL LETTER SHORT AH", "Ll", 0), ("DESERET SMALL LETTER SHORT O", "Ll", 0), ("DESERET SMALL LETTER SHORT OO", "Ll", 0), ("DESERET SMALL LETTER AY", "Ll", 0), ("DESERET SMALL LETTER OW", "Ll", 0), ("DESERET SMALL LETTER WU", "Ll", 0), ("DESERET SMALL LETTER YEE", "Ll", 0), ("DESERET SMALL LETTER H", "Ll", 0), ("DESERET SMALL LETTER PEE", "Ll", 0), ("DESERET SMALL LETTER BEE", "Ll", 0), ("DESERET SMALL LETTER TEE", "Ll", 0), ("DESERET SMALL LETTER DEE", "Ll", 0), ("DESERET SMALL LETTER CHEE", "Ll", 0), ("DESERET SMALL LETTER JEE", "Ll", 0), ("DESERET SMALL LETTER KAY", "Ll", 0), ("DESERET SMALL LETTER GAY", "Ll", 0), ("DESERET SMALL LETTER EF", "Ll", 0), ("DESERET SMALL LETTER VEE", "Ll", 0), ("DESERET SMALL LETTER ETH", "Ll", 0), ("DESERET SMALL LETTER THEE", "Ll", 0), ("DESERET SMALL LETTER ES", "Ll", 0), ("DESERET SMALL LETTER ZEE", "Ll", 0), ("DESERET SMALL LETTER ESH", "Ll", 0), ("DESERET SMALL LETTER ZHEE", "Ll", 0), ("DESERET SMALL LETTER ER", "Ll", 0), ("DESERET SMALL LETTER EL", "Ll", 0), ("DESERET SMALL LETTER EM", "Ll", 0), ("DESERET SMALL LETTER EN", "Ll", 0), ("DESERET SMALL LETTER ENG", "Ll", 0), ("DESERET SMALL LETTER OI", "Ll", 0), ("DESERET SMALL LETTER EW", "Ll", 0), )
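The table above enumerates the Deseret block starting at U+10400: 40 capital letters ("Lu") followed by their 40 small counterparts ("Ll"), so each small letter sits exactly 0x28 code points after its capital. A minimal Python sketch of that relationship (the offset constant and function name are illustrative; the character names come straight from the data above):

```python
import unicodedata

DESERET_BLOCK_START = 0x10400  # first capital letter in the block
CASE_OFFSET = 0x28             # 40 capitals, then 40 smalls

def deseret_lower(ch: str) -> str:
    """Map a Deseret capital letter to its small counterpart; pass others through."""
    cp = ord(ch)
    if DESERET_BLOCK_START <= cp < DESERET_BLOCK_START + CASE_OFFSET:
        return chr(cp + CASE_OFFSET)
    return ch

long_i = chr(0x10400)
print(unicodedata.name(long_i))                  # DESERET CAPITAL LETTER LONG I
print(unicodedata.name(deseret_lower(long_i)))   # DESERET SMALL LETTER LONG I
```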
https://github.com/jgm/typst-hs
https://raw.githubusercontent.com/jgm/typst-hs/main/test/typ/meta/numbering-01.typ
typst
Other
#for i in range(0, 4) { numbering("A", i) [ for #i \ ] } ... \ #for i in range(26, 30) { numbering("A", i) [ for #i \ ] } ... \ #for i in range(702, 706) { numbering("A", i) [ for #i \ ] }
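The test above exercises Typst's alphabetic `"A"` numbering around its carry points, which is bijective base-26: 1 → A, 26 → Z, 27 → AA, 703 → AAA. A short Python sketch of the same scheme (function name is just for illustration):

```python
def alpha_numbering(n: int) -> str:
    """Bijective base-26: 1 -> 'A', 26 -> 'Z', 27 -> 'AA', 703 -> 'AAA'."""
    s = ""
    while n > 0:
        # Subtract 1 before dividing so 26 maps to 'Z' rather than carrying.
        n, r = divmod(n - 1, 26)
        s = chr(ord("A") + r) + s
    return s

for i in (1, 26, 27, 702, 703):
    print(i, alpha_numbering(i))
```

The ranges in the test (26–30 and 702–706) are exactly the one- to two-letter and two- to three-letter carry boundaries.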
https://github.com/DrGo/typst-tips
https://raw.githubusercontent.com/DrGo/typst-tips/main/refs/samples/tufte-handout/sample_handout.typ
typst
#import "tufte-handout.typ": template, margin-note //#set page(flipped: true) #show: doc => template(title: "A sample handout", doc) = Part 1 #lorem(20) #margin-note[#lorem(10)] #lorem(30) #margin-note( figure(caption: "important diagram", { stack( path( stroke: black, closed: true, (50pt, 0pt), (50pt, 50pt), (0pt, 50pt), ), line( stroke: red + 0.5pt, start: (50pt, 0pt), angle: 225deg, length: 36pt ), line( stroke: blue + 0.5pt, start: (0pt, 0pt), end: (0pt, -25pt) ) ) }) ) #lorem(20) = Part 2 == Part 2.1 #lorem(20) #margin-note([The student should be able to do this easily!]) #lorem(20) == Part 2.2 #lorem(30)
https://github.com/Myriad-Dreamin/typst.ts
https://raw.githubusercontent.com/Myriad-Dreamin/typst.ts/main/fuzzers/corpora/text/lorem_02.typ
typst
Apache License 2.0
#import "/contrib/templates/std-tests/preset.typ": * #show: test-page // // // Error: 7-9 missing argument: words // #lorem()
https://github.com/kdog3682/mathematical
https://raw.githubusercontent.com/kdog3682/mathematical/main/0.1.0/src/factorial-fraction.typ
typst
#import "@local/typkit:0.1.0": * #import "factorial.typ": factorial #import "fraction.typ": fraction /// this is the factorial function /// asdf /// sdf /// sdf /// sdf /// /// sdf /// sdf #let factorial-fraction( top: none, bottom: none, delimiter: "times", cancel: false, lhs: none, highlight-remaining: false, highlight-color: "blue", centered: false, ..sink, ) = { let args = sink.pos() let ( a, b, ) = if args.len() == 0 { ( top.value, bottom.value, ) } else { args } let top-base = ( left: 3, right: 3, fill: blue, early-stop: true, ) let bottom-base = ( left: 3, right: 3, fill: purple, early-stop: true, ) let top = merge-dictionaries( top-base, top, ) let bottom = merge-dictionaries( bottom-base, bottom, ) let runner( n, opts, cancels, ) = { if cancel == false { cancels = none } let hide = opts.at( "hide", default: false, ) if hide == true { return hidden( 5pt, ) } return colored( factorial( n, cancels: cancels, limit: 10, left: opts.left, right: opts.right, highlight-remaining: highlight-remaining, ), opts.fill, ) } let numerator = runner( a, top, b, ) let denominator = runner( b, bottom, a, ) // let opts = ( // highlight-remaining: highlight-remaining, // cancels: cancels, // limit: 10, // ) // let (numerator, denominator) = double-factorial(a, b, top, bottom, ..opts) let rhs = fraction( numerator, denominator, align: align, ) let value = rhs if lhs != none { let lhs = fraction( colored( str-add( a, "!", ), top.fill, ), colored( str-add( b, "!", ), bottom.fill, ), ) value = ( lhs, rhs, ).join( marks.math.equals, ) } if centered == true { set text( size: 1.5em, ) value = align( value, center, ) } return v( 10pt, ) + value } // #factorial-fraction(556, 55, cancel: true) // #factorial-fraction(556, 55, cancel: false) // WORKS // #factorial-fraction( // top: ( value: 10, left: 5, fill: "none"), // bottom: ( value: 6, left: 5, fill: "none", hide: false), // cancel: true, // highlight-remaining: true // )
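The cancellation that `factorial-fraction` renders corresponds to the identity n!/k! = (k+1)·(k+2)···n for n ≥ k: the shared factors of the two factorials cancel, leaving only the "remaining" highlighted terms. A quick Python check of that identity (numbers chosen for illustration, not taken from the file):

```python
from math import factorial, prod

def factorial_ratio(n: int, k: int) -> int:
    """n!/k! computed from the surviving factors only, for n >= k."""
    return prod(range(k + 1, n + 1))

# After cancelling 6! out of 10!, the surviving terms are 10*9*8*7.
assert factorial_ratio(10, 6) == factorial(10) // factorial(6) == 5040
print(factorial_ratio(10, 6))  # 5040
```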
https://github.com/typst-community/valkyrie
https://raw.githubusercontent.com/typst-community/valkyrie/main/src/types/sink.typ
typst
Other
#import "../base-type.typ": base-type #import "../assertions-util.typ": assert-base-type #import "../ctx.typ": z-ctx #let to-args-type(..args) = args #let sink( positional: none, named: none, ..args, ) = { if positional != none { assert-base-type(positional) } if named != none { assert-base-type(named) } base-type( name: "argument-sink", types: (arguments,), ..args, ) + ( positional-schema: positional, named-schema: named, handle-descendents: (self, it, ctx: z-ctx(), scope: ()) => { let positional = it.pos() if self.positional-schema == none { if positional.len() > 0 { (self.fail-validation)( self, it, scope: scope, ctx: ctx, message: "Unexpected positional arguments.", ) } } else { positional = (self.positional-schema.validate)( self.positional-schema, it.pos(), ctx: ctx, scope: (..scope, "positional"), ) } let named = it.named() if self.named-schema == none { if named.len() > 0 { (self.fail-validation)( self, it, scope: scope, ctx: ctx, message: "Unexpected named arguments.", ) } } else { named = (self.named-schema.validate)( self.named-schema, it.named(), ctx: ctx, scope: (..scope, "named"), ) } to-args-type(..positional, ..named) }, ) }
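The sink validator above routes the positional and named parts of an argument pack to optional sub-schemas, and rejects any arguments for a part that has no schema. A stripped-down Python sketch of that branch structure (schemas here are plain callables and all names are illustrative — this is not valkyrie's API):

```python
def validate_sink(args, kwargs, positional_schema=None, named_schema=None):
    """Validate an argument pack, mirroring the two branches above."""
    if positional_schema is None:
        if args:
            raise ValueError("Unexpected positional arguments.")
        pos = ()
    else:
        pos = tuple(positional_schema(a) for a in args)

    if named_schema is None:
        if kwargs:
            raise ValueError("Unexpected named arguments.")
        named = {}
    else:
        named = {k: named_schema(v) for k, v in kwargs.items()}
    return pos, named

print(validate_sink((1, 2), {"x": 3}, positional_schema=int, named_schema=int))
```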
https://github.com/Dherse/typst-brrr
https://raw.githubusercontent.com/Dherse/typst-brrr/master/samples/masterproef/elems/ugent-theme.typ
typst
#import "./slides.typ": * #import "./colors.typ": * #import "./code_blocks.typ": * #let ratio = 1.621784511784512 #let text_inset = (inset: (x: 0.25cm / ratio, y: 0.13cm / ratio)) #let scale-size-comp(body, box-size, box-inset, styles) = { let body-size = measure(body, styles) if body-size.width > box-size.width - box-inset.inset.x { (box-size.width - box-inset.inset.x) / body-size.width } else if body-size.height > box-size.height - box-inset.inset.y { (box-size.height - box-inset.inset.y) / body-size.height } else { 1.0 } } #let scale-size(body, box-size, box-inset) = /*style(styles => { let scale-factor = scale-size-comp(box(width: box-size.width - box-inset.inset.x, body), box-size, box-inset, styles) scale(x: scale-factor * 100%, y: scale-factor * 100%, body) })*/ body #let ugent-theme( color: ugent-blue, text-color: white, logo: "ugent/logo_ugent.png", second-logo: "ugent/logo_ea.png", ) = data => { let progress-bar() = place(bottom + left, locate(loc => { let top = logical-slide.at(loc).at(0) let end = query(heading.where(level: 99), loc) let top_end = if end.len() == 0 { logical-slide.final(loc).at(0) } else { logical-slide.at(end.at(0).location()).at(0) } let progress = top / top_end * 100% box(width: progress, height: 2pt, fill: ugent-blue) })) let first-slide(..) 
= align(center + horizon, image(logo)) let title-slide(slide-info, bodies) = { if bodies.len() != 0 { panic("title slide of default theme does not support any bodies") } set text(font: "UGent Panno Text") let box_size = (width: 45.63cm / ratio, height: 18.07cm / ratio) let box_inset = (dx: 2.54cm / ratio, dy: 3.87cm / ratio) let text_box_inset = (dx: 3.59cm / ratio - box_inset.dx, dy: 6.53cm / ratio - box_inset.dy) let text_box_size = (width: 42.18cm / ratio, height: 12.32cm / ratio) let subtitle_inset = (dx: 3.59cm / ratio - box_inset.dx, dy: 19.1cm / ratio - box_inset.dy) let subtitle_size = (width: 42.18cm / ratio, height: 1.62cm / ratio) let body = place(top + left, box({ set text(size: 60pt, fill: text-color) show text: smallcaps underline(data.title) }, ..text_box_size, ..text_inset), ..text_box_inset) let subtitle = place(top + left, box({ align(horizon, { set text(size: 20pt, fill: ugent-accent1) show par: set block(spacing: 10pt) data.subtitle }) }, ..subtitle_size), ..subtitle_inset) if "dept" in data or "research-group" in data { let dept_size = (width: 23.04cm / ratio, height: 1.5cm / ratio) let dept_inset = (inset: (x: 0.25cm / ratio, y: 0.13cm / ratio)) let dept_offset = (dx: 23.84cm / ratio, dy: 1.1cm / ratio) place(top + left, box({ align(horizon, { set text(size: 14pt, fill: ugent-blue) show par: set block(spacing: 5pt) if "dept" in data { smallcaps[*#data.dept*] if "research-group" in data { parbreak() } } if "research-group" in data { smallcaps[#data.research-group] } }) }, ..dept_size, ..dept_inset), ..dept_offset) } place(bottom + left, image(logo, width: 6.41cm / ratio)) place(top + left, box(fill: color, body + subtitle, ..box_size), ..box_inset) if second-logo != none { place(top + left, image(second-logo, height: 3.87cm / ratio)) } progress-bar() } let end(slide-info, bodies) = { set text(font: "UGent Panno Text", fill: text-color) let icon(icon) = text(font: "tabler-icons", size: 24pt, baseline: -4pt, fill: text-color, icon) let 
added-content = () if "linkedin" in data { added-content.push(icon("\u{ec8c}")) added-content.push(link(data.linkedin, [ #data.authors.at(0) ])) } let social = { set text(size: 24pt) table( columns: 2, stroke: none, column-gutter: 0cm / ratio, inset: 0.2em, icon("\u{ec1a}"), link("https://www.facebook.com/ugent/", [ Universiteit Gent ]), icon("\u{ec27}"), link("https://twitter.com/ugent", [ \@ugent ]), icon("\u{ec20}"), link("https://www.instagram.com/ugent/", [ \@ugent ]), icon("\u{ec8c}"), link("https://www.linkedin.com/school/ghent-university/", [ Ghent University ]), ..added-content, ) } let body = { set text(size: 24pt, fill: text-color) show par: set block(spacing: 12pt) let content = () if "email" in data { content.push(text(size: 25pt, "E")) content.push(link( "mailto:" + data.email, text(size: 25pt, data.email) )) } if "phone" in data { content.push(text(size: 25pt, "T")) content.push(text(size: 25pt, data.phone)) } if "mobile" in data { content.push(text(size: 25pt, "M")) content.push(text(size: 25pt, data.mobile)) } text(size: 35pt, data.authors.join(", ")) linebreak() text(size: 25pt, "Student") linebreak() linebreak() smallcaps(text(size: 25pt, "Photonics Research Group")) linebreak() linebreak() table( columns: (1cm, auto), stroke: none, inset: 0pt, row-gutter: 0.5em, align: left, ..content ) align( bottom + left, link("https://ugent.be", text(size: 25pt, "www.ugent.be")) ) } let box_size = (width: 45.63cm / ratio, height: 18.07cm / ratio) let box_offset = (dx: 2.54cm / ratio, dy: 3.87cm / ratio) let icon_box_size = (width: 20.16cm / ratio, height: 10cm / ratio) let icon_box_offset = (dx: 24.1cm / ratio, dy: 8.6cm / ratio) let icon_box_inset = (inset: (x: 0.25cm / ratio, y: 0.13cm / ratio)) let body_box_size = (width: 20.6cm / ratio, height: 16.03cm / ratio) let body_box_offset = (dx: 3.59cm / ratio, dy: 4.84cm / ratio) let body_box_inset = (inset: (x: 0.25cm / ratio, y: 0.13cm / ratio)) place(bottom + left, image(logo, width: 6.41cm / ratio)) 
place(top + left, box(fill: color, ..box_size), ..box_offset) place(top + left, box(fill: color, social, ..icon_box_size, ..icon_box_inset), ..icon_box_offset) place(top + left, box(fill: color, body, ..body_box_size, ..body_box_inset), ..body_box_offset) if second-logo != none { place(top + left, image(second-logo, height: 3.87cm / ratio)) } progress-bar() } let section-slide(slide-info, bodies) = { if bodies.len() != 0 { panic("title slide of default theme does not support any bodies") } set text(font: "UGent Panno Text") let box_size = (width: 45.63cm / ratio, height: 21.94cm / ratio) let box_inset = (dx: 2.54cm / ratio, dy: 0cm) let text_box_inset = (dx: 3.59cm / ratio - box_inset.dx, dy: 9.02cm / ratio - box_inset.dy) let text_box_size = (width: 42.18cm / ratio, height: 12.32cm / ratio) let text_inset = (inset: (x: text_inset.inset.x, y: 15pt)) let body = place(top + left, box({ set text(size: 100pt, fill: text-color) show text: smallcaps show text: underline align(bottom + left, section.display()) }, ..text_box_size, ..text_inset), ..text_box_inset) place(top + left, box(fill: color, body, ..box_size), ..box_inset) place(bottom + left, image(logo, width: 6.41cm / ratio)) progress-bar() } let body-slide(slide-info, bodies, box-size, box-inset) = { if bodies == none { panic("No bodies provided") } else if bodies.len() == 1 { if (not "scale" in slide-info) or slide-info.scale { scale-size(bodies.first(), box-size, box-inset) } else { bodies.first() } } else { let colwidths = none let thisgutter = 5pt if "colwidths" in slide-info { colwidths = slide-info.colwidths if colwidths.len() != bodies.len(){ panic("Provided colwidths must be of same length as bodies") } } else { colwidths = (1fr,) * bodies.len() } if "gutter" in slide-info { thisgutter = slide-info.gutter } grid( columns: colwidths, gutter: thisgutter, ..bodies .enumerate() .map(((i, body)) => if (not "scale" in slide-info) or slide-info.scale { scale-size( body, ( width: box-size.width / colwidths.len() 
- (colwidths.len() - 1) * thisgutter, height: box-size.height / (1 + calc.rem(bodies.len(), colwidths.len())) ), box-inset ) } else { body }) ) } } let content-image-slide(slide-info, bodies) = { if bodies == none { panic("missing bodies") } if bodies.len() != 2 { panic("content,image must have exactly two bodies") } let (content, image) = bodies let content_size = (width: 23.45cm / ratio, height: 18.6cm / ratio) let content_inset = (inset: (x: 0.25cm / ratio, y: 0.13cm / ratio)) let image_size = (width: 17.5cm / ratio, height: 18.05cm / ratio) stack( dir: ltr, spacing: 2.3cm / ratio, box(..content_size, ..content_inset, content), box(..image_size, clip: true, image) ) } let image-slide(slide-info, bodies) = { if bodies == none { panic("missing bodies") } if bodies.len() != 2 { panic("content,image must have exactly two bodies") } let (content, picture) = bodies let content_size = (width: 21.54cm / ratio, height: 19.19cm / ratio) let content_offset = (dx: 2.54cm / ratio, dy: 2.81cm / ratio) let content_inset = (inset: (x: 0.25cm / ratio, y: 0.3em)) let content_bg = rgb("#E9F0FA") let image_size = (width: 45.62cm / ratio, height: 22cm / ratio - 0.5pt) let image_offset = (dx: 2.54cm / ratio + 1pt, dy: 0cm / ratio) place(top + left, box(fill: ugent-blue, ..image_size, clip: true, picture), ..image_offset) place(top + left, box(fill: content_bg, ..content_size, ..content_inset, { set text(fill: ugent-blue) content }), ..content_offset) place(bottom + left, image(logo, width: 6.41cm / ratio)) if "footer" in slide-info { let footer_size = (width: 22.21cm / ratio, height: 1.22cm / ratio) let footer_offset = (dx: 18.92cm / ratio, dy: 24.99cm / ratio) let footer_inset = (inset: (x: 0.25cm / ratio, y: 0.13cm / ratio)) place(top + left, ..footer_offset, box(..footer_size, ..footer_inset, { set text(size: 17.1pt, fill: ugent-accent2, font: "UGent Panno Text") show: align.with(center + horizon) slide-info.footer })) } if "date" in slide-info { let date_size = (width: 6.38cm / 
ratio, height: 1.22cm / ratio) let date_offset = (dx: 11.31cm / ratio, dy: 24.99cm / ratio) let date_inset = (inset: (x: 0.25cm / ratio, y: 0.13cm / ratio)) place(top + left, ..date_offset, box(..date_size, ..date_inset, { set text(size: 17.1pt, fill: ugent-accent2) show text: set align(left + horizon) slide-info.date })) } // Show the slide number in the bottom right corner { set text(fill: ugent-blue, size: 17.1pt) let numbering = locate(loc => { let top = logical-slide.at(loc) numbering(loc.page-numbering(), top.at(0)) }) let offset = (dx: 43.31cm / ratio, dy: 24.86cm / ratio) let box_size = (width: 1.4 * 2.56cm / ratio, height: 1.44cm / ratio) place(top + left, ..offset, box(..box_size, align(horizon + right, numbering))) } progress-bar() } let slide(slide-info, bodies, kind: "slide") = { set text(font: "<NAME>", size: 22pt) let box_size = (height: 18.6cm / ratio, width: 43.61cm / ratio) let box_inset = (inset: (x: 0.25cm / ratio, y: 0.13cm / ratio)) let body = if kind == "slide" { body-slide(slide-info, bodies, box_size, box_inset) } else if kind == "content,image" { content-image-slide(slide-info, bodies) } else if kind == "image" { if bodies == none or bodies.len() != 1 { panic("image slide must have exactly one body") } body-slide(slide-info, bodies, box_size, box_inset) } else if kind == "end" { content_end(slide-info, bodies) } else { panic("Unknown kind: " + kind) } let elems = () // If we have a title, append it to the list of elements if "title" in slide-info { let text_box_size = (width: 43.63cm / ratio) let text_inset = (inset: (x: text_inset.inset.x, y: 10pt)) elems.push( box(..text_box_size, ..text_inset, { set text(size: 50pt, weight: "light", fill: ugent-blue) show text: smallcaps show text: underline align(horizon + left, slide-info.title) }) ) } elems.push(box(..box_size, ..box_inset, body)) place( top + left, dx: 2.54cm / ratio, dy: 0.7cm / ratio, stack( dir: ttb, spacing: 0.75cm / ratio, ..elems ) ) place(bottom + left, image(logo, width: 
6.41cm / ratio)) if "footer" in slide-info { let footer_size = (width: 22.21cm / ratio, height: 1.22cm / ratio) let footer_offset = (dx: 18.92cm / ratio, dy: 24.99cm / ratio) let footer_inset = (inset: (x: 0.25cm / ratio, y: 0.13cm / ratio)) place(top + left, ..footer_offset, box(..footer_size, ..footer_inset, { set text(size: 17.1pt, fill: ugent-accent2) show: align.with(center + horizon) slide-info.footer })) } if "date" in slide-info { let date_size = (width: 6.38cm / ratio, height: 1.22cm / ratio) let date_offset = (dx: 11.31cm / ratio, dy: 24.99cm / ratio) let date_inset = (inset: (x: 0.25cm / ratio, y: 0.13cm / ratio)) place(top + left, ..date_offset, box(..date_size, ..date_inset, { set text(size: 17.1pt, fill: ugent-accent2) show text: set align(left + horizon) slide-info.date })) } // Show the slide number in the bottom right corner { set text(fill: ugent-blue, size: 17.1pt) let numbering = locate(loc => { let top = logical-slide.at(loc) numbering(loc.page-numbering(), top.at(0)) }) let offset = (dx: 43.31cm / ratio, dy: 24.86cm / ratio) let box_size = (width: 1.4 * 2.56cm / ratio, height: 1.44cm / ratio) place(top + left, ..offset, box(..box_size, align(horizon + right, numbering))) } progress-bar() } let default = slide.with(kind: "slide") let image-content = slide.with(kind: "content,image") let image = slide.with(kind: "image") ( "corporate logo": first-slide, "title slide": title-slide, "section slide": section-slide, "image content": image-content, "image": image, "image only": image-slide, "end": end, "default": default, ) }
https://github.com/HPDell/typst-cineca
https://raw.githubusercontent.com/HPDell/typst-cineca/main/lib.typ
typst
MIT License
#import "/util/utils.typ": * // Make a calendar with events. #let calendar( // Event list. // Each element is a four-element array: // // - Index of day. Start from 0. // - Float-style start time. // - Float-style end time. // - Event body. Can be anything. Passed to the template.body to show more details. // // Float style time: a number representing 24-hour time. The integer part represents the hour. The fractional part represents the minute. events, // The range of hours, affecting the range of the calendar. hour-range: (8, 20), // Height per minute. Each minute occupies a row. This number is to control the height of each row. minute-height: 0.8pt, // Templates for headers, times, or events. It takes a dictionary of the following entries: `header`, `time`, and `event`. template: (:), // A stroke style to control the style of the default stroke, or a function taking two parameters `(x, y)` to control the stroke. The first row is the dates, and the first column is the times. stroke: none ) = { let items = events-to-calendar-items(events, hour-range.at(0)) let days = items.keys().len() let hours = hour-range.at(1) - hour-range.at(0) let style = ( header: default-header-style, time: default-time-style, event: default-item-style, ..template ) let minutes-offset = hour-range.at(0) * 60 let stroke-shape = if type(stroke) == "stroke" { stroke } else { 0.1pt + black } let stroke-rule = if type(stroke) == "function" { stroke } else { (x, y) => ( right: if y < (hours * 60 + 1) { stroke-shape } else { 0pt }, top: if x > 0 { if y < 1 { stroke-shape } else if calc.fract((y - 1) / 60) == 0 { stroke-shape } else { 0pt } } ) } grid( columns:(auto,) + (1fr,)*days, rows: (auto, ) + (minute-height,) * hours * 60 + (8pt,), fill: white, stroke: stroke-rule, [], ..array.range(days).map(d => (style.header)(d)), ..array.range(hours * 60 + 1).map(y => { array.range(days + 1).map(x => { if x == 0 { if calc.fract(y / 60) == 0 { let hour = calc.trunc(y / 60) + hour-range.at(0) let t = 
datetime(hour: hour, minute: 0, second: 0) (style.time)(t) } else [] } else { if items.keys().contains(str(x)) { if items.at(str(x)).keys().contains(str(y)) { let (last, body) = items.at(str(x)).at(str(y)) show: block.with(inset: (x: 2pt, y: 0pt), width: 100%) place({ block( width: 100%, height: (last) * minute-height, { (style.event)(..(minutes-to-datetime(y + minutes-offset), body)) } ) }) } } } }) }).flatten() ) } // Make a month view of a calendar optionally with events #let calendar-month-summary( // Event list. // Each element is a two-element array: // // - Day. A datetime object. // - Additional information for showing a day. It actually depends on the template `day-summary`. For the default template, it requires an array of two elements. // - Shape. A function specifying how to draw the shape, such as `circle`. // - Arguments. Further arguments for rendering a shape. events: (), // Templates for headers, times, or events. It takes a dictionary of the following entries: `day-summary`, `day-head`, `month-head`, and `layout`. template: (:), // Whether to put Sunday as the first day of a week. sunday-first: false, // Additional arguments for the calendar's grid. 
..args ) = { let yearmonths = events.map(it => (it.at(0).year(), it.at(0).month())).dedup() let event-group = events.map(it => it.at(0).display("[year]-[month]")) let style = ( day-summary: default-day-summary, day-head: default-month-day-head, month-head: default-month-head, layout: stack.with(dir: ltr, spacing: 1em), ..template ) let calendars = yearmonths.map(((year, month)) => { // Get all dates between date-from and date-to let first-day = datetime(year: year, month: month, day: 1) let days = get-month-days(month, year) let last-day = first-day + duration(days: days - 1) let group-id = first-day.display("[year]-[month]") let month-events = event-group.enumerate().filter(((i, it)) => it == group-id).map(((i, it)) => events.at(i)) let dates = range(first-day.day(), last-day.day() + 1).map(it => datetime( year: first-day.year(), month: first-day.month(), day: it )) let date-weekday = dates.map(it => it.weekday() + int(sunday-first)).map(i => if i > 7 { i - 7 } else { i }) // Get the weekdays of the dates let nweek = dates.map(it => it.weekday()).filter(it => it == 1).len() if date-weekday.at(0) > 0 { nweek = nweek + 1 } // Map the dates and weekdays let week-day-map = () for (i, (d, w)) in dates.zip(date-weekday).enumerate() { if i == 0 or w == 1 { week-day-map.push(()) } week-day-map.last().push((d, w)) } let events-map = (:) for e in events { let key = e.at(0).display("[year]-[month]-[day]") events-map.insert(key, e.at(1)) } let header = week-day-map.at(1).map(((d, w)) => (style.day-head)(d.display("[weekday repr:short]"))) grid( columns: (2em,) * 7, rows: (1.1em,) * (nweek + 1), align: center + horizon, ..args, grid.cell(colspan: 7, (style.month-head)([#first-day.display() -- #last-day.display()])), ..header, ..week-day-map.map(week => { ( range(1, week.first().at(1)).map(it => []), week.map(((day, w)) => { let day-str = day.display("[year]-[month]-[day]") if day-str in events-map.keys() { (style.day-summary)(day, events-map.at(day-str)) } else { 
(style.day-summary)(day, none) } }) ).join() }).flatten() ) }) (style.layout)(..calendars) } #let calendar-month( // Event list. // Each element is a two-element array: // // - Day. A datetime object. // - Additional information for showing a day. It actually depends on the template `day-body`. For the default template, it requires a content. events, // Templates for headers, times, or events. It takes a dictionary of the following entries: `day-body`, `day-head`, `month-head`, and `layout`. template: (:), // Whether to put Sunday as the first day of a week. sunday-first: false, // Additional arguments for the calendar's grid. ..args ) = { events = events.sorted(key: ((x, _)) => int(x.display("[year][month][day][hour][minute][second]"))) let style = ( day-body: default-month-day, day-head: default-month-day-head, month-head: default-month-head, layout: stack, ..template ) let yearmonths = events.map(it => (it.at(0).year(), it.at(0).month())).dedup() let event-group = events.map(it => it.at(0).display("[year]-[month]")) let calendars = yearmonths.map(((year, month)) => { let first-day = datetime(year: year, month: month, day: 1) let group-id = first-day.display("[year]-[month]") let days = get-month-days(month, year) let day-range = (first-day, first-day + duration(days: days - 1)) let month-events = event-group.enumerate().filter(((i, it)) => it == group-id).map(((i, it)) => events.at(i)) default-month-view( month-events, day-range, sunday-first: sunday-first, style-day-body: style.at("day-body"), style-day-head: style.at("day-head"), style-month-head: style.at("month-head"), ..args ) }) (style.layout)(..calendars) }
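Both month views above size their grids with a `get-month-days(month, year)` helper. The Gregorian rule that helper must encode (month lengths plus the leap-year exceptions for centuries) can be sketched in Python using the stdlib, rather than guessing at the Typst helper's internals:

```python
import calendar

def get_month_days(month: int, year: int) -> int:
    """Days in a Gregorian month, leap years included."""
    # monthrange returns (weekday of the 1st, number of days).
    return calendar.monthrange(year, month)[1]

print(get_month_days(2, 2024))  # 29 (year divisible by 4)
print(get_month_days(2, 1900))  # 28 (century not divisible by 400)
print(get_month_days(2, 2000))  # 29 (century divisible by 400)
```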
https://github.com/RhenzoHideki/desenvolvimento-em-fpga
https://raw.githubusercontent.com/RhenzoHideki/desenvolvimento-em-fpga/main/AE5/relatorioAE5.typ
typst
#import "../typst-ifsc/templates/article.typ":* #show: doc => article( title: "Extra-Class Activity 5 Report: HH-MM-SS Clock", subtitle: "Programmable logic devices", // If there is only one author, add a trailing comma to indicate it is an array authors: ("<NAME>",), date: "September 26, 2023", doc, ) = Introduction == Objectives This in-class HHMMSS clock project aims to display the hours (HH), minutes (MM), and seconds (SS), showing the digits on 3 pairs of seven-segment displays found on the DE2-115 kit from TERASIC, available in the digital signals laboratory. == Motivation Several VHDL concepts and good project practices were taught in class, but until now this set of ideas had not been applied together in a project. This project therefore ties those ideas together and applies them in class. == Procedures Four entities were created in class, separated as follows: a clock divider named div_clk, a BCD (binary-coded decimal) counter named contador_bcd, a BCD-to-SSD (seven-segment display) converter named bcd2ssd, and finally the entity that integrates the whole set, the clock, named relogio_HHMMSS. With these entities ready, they can be combined in one file to obtain the complete final project. #pagebreak() = Project Description As mentioned earlier, the project was split into smaller sections, attacking it in smaller parts for better code maintainability. The first design sketch, made in class, is shown in Figure 1. In it, the small parts that make up the whole can be observed. On closer inspection, there is a BCD-to-SSD converter for each seven-segment display, each receiving values from one of the 3 counters; finally, a 50 MHz clock is connected to all the counters, and a clock divider enables the seconds counter. 
#align(center)[ #figure( image( "./Figuras/relogioHHMMSS.png",width: 100%, ), caption: [ Project design \ Source: created by the author ], supplement: "Figure AE5" ) ] == Components Used The components used in the project can be seen in Figure 1. In total, 6 seven-segment displays, 6 BCD-to-SSD converters, 3 counters, 1 clock divider, and 1 50 MHz clock were used, for a total of 17 components to meet the objectives of this project. As mentioned earlier, this project covered how to create components from a .vhdl entity, and so 3 smaller components were created. A clock divider with 2 inputs, reset and clock in, and one output, clock out. This divider reduces the rate of the seconds enable pulses, allowing the entire count to be adjusted to different clocks simply by changing the component's generic value "div". The counters were created with 2 inputs, clock and reset, and three outputs: BCD units, BCD tens, and a clock out. For the counters, the initial idea was counters that counted from 0 to 59 whose output would then have to be processed, but the professor suggested building 2 counters, one for the units and one for the tens, so the extra component was discarded. As with the divider, values can be supplied to adjust the counters' generics, since the project needs two counters up to 59 and one up to 24 or 12. The BCD-to-SSD converter was the simplest task of the 3 smaller components: since the component is just a conversion table, only a when/case (or with/select) was needed. Once the system had all its smaller components ready, the larger task was wiring all the created components together inside clock.vhd. == Complete System After the system had all its components assembled, the pin assignment and full compilation were carried out. 
=== Pin Assignment

The pin assignment can be seen in these 2 figures:

#align(center)[
  #figure(
    image("./Figuras/pinagem1.png", width: 100%),
    caption: [
      Pin assignment, part 1 \
      Source: created by the author
    ],
    supplement: "Figure AE5"
  )
]

#align(center)[
  #figure(
    image("./Figuras/pinagem2.png", width: 100%),
    caption: [
      Pin assignment, part 2 \
      Source: created by the author
    ],
    supplement: "Figure AE5"
  )
]

=== Number of Logic Elements

The number of logic elements per component:

#align(center)[
  #figure(
    image("./Figuras/compiled-div-clock.png", width: 100%),
    caption: [
      Compilation of the clock divider \
      Source: created by the author
    ],
    supplement: "Figure AE5"
  )
]

The clock divider uses 11 logic elements.

#align(center)[
  #figure(
    image("./Figuras/compiled-counter.png", width: 100%),
    caption: [
      Compilation of the counter \
      Source: created by the author
    ],
    supplement: "Figure AE5"
  )
]

Each pair of counters uses 16 logic elements, which projects to about 3 times that, approximately 48 elements for the whole set. Note: the counter was configured to count 24 hours.

#align(center)[
  #figure(
    image("./Figuras/compiled-bcd2ssd.png", width: 100%),
    caption: [
      Compilation of the BCD-to-SSD converter \
      Source: created by the author
    ],
    supplement: "Figure AE5"
  )
]

The BCD-to-SSD converter uses 14 elements, but each converter works independently, so the estimate for the whole group of converters is 84 logic elements.

#align(center)[
  #figure(
    image("./Figuras/compiled-clock.png", width: 100%),
    caption: [
      Compilation of the clock (relogio) \
      Source: created by the author
    ],
    supplement: "Figure AE5"
  )
]

Although the sum over all components gives 143 logic elements, the compilation resulted in a total of 168 elements.
This may have happened because the logic-element count of some components fluctuated depending on how they were configured.

#pagebreak()

= Results

== RTL Viewer

Here are the RTL viewer outputs for each component:

#align(center)[
  #figure(
    image("./Figuras/rtlviewer-div-clock.png", width: 100%),
    caption: [
      RTL viewer of the clock divider \
      Source: created by the author
    ],
    supplement: "Figure AE5"
  )
]

#align(center)[
  #figure(
    image("./Figuras/rtlviewer-counter.png", width: 100%),
    caption: [
      RTL viewer of the BCD counter \
      Source: created by the author
    ],
    supplement: "Figure AE5"
  )
]

#align(center)[
  #figure(
    image("./Figuras/rtlviewer-bcd2ssd.png", width: 100%),
    caption: [
      RTL viewer of the BCD-to-SSD converter \
      Source: created by the author
    ],
    supplement: "Figure AE5"
  )
]

#align(center)[
  #figure(
    image("./Figuras/rtlviewer-clock.png", width: 100%),
    caption: [
      RTL viewer of the clock (relogio) \
      Source: created by the author
    ],
    supplement: "Figure AE5"
  )
]

== Board Implementation

The board implementation was done in class with the TERASIC DE2-115 kit, and the board could be observed working as expected. In the implementation we can see the count advancing once per second; for practical purposes the counting frequency was later increased to observe the operation of all counters and all seven-segment displays. The design worked successfully on the board used, the DE2-115.

= Conclusion

With this in-class project it was possible to see more efficient solutions for handling some components, to see the creation and operation of smaller components within a slightly larger project, and to use different parts both in parallel and sequentially. In this way we managed to apply practically all of the knowledge given in class to this project.
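The divider-plus-counter structure described in this report can also be sketched in software as a behavioral model (a hedged Python sketch, not the VHDL used in the project; the entity names div_clk and contador_bcd and the generic "div" follow the report, the rest is illustrative):

```python
def clock_divider(div):
    """Model of div_clk: emits one enable pulse every `div` input clock edges."""
    count = 0
    while True:
        count += 1
        if count == div:
            count = 0
            yield True   # enable pulse for the seconds counter
        else:
            yield False

def bcd_counter(limit):
    """Model of contador_bcd: counts 0..limit, exposing (tens, units) BCD digits."""
    value = 0
    def tick():
        nonlocal value
        value = (value + 1) % (limit + 1)
        return divmod(value, 10)  # (tens digit, units digit)
    return tick

# Wire a divider to the seconds counter, as in relogio_HHMMSS:
enable = clock_divider(50_000_000)  # 50 MHz input -> one pulse per second
seconds = bcd_counter(59)
```

Changing the argument of clock_divider mirrors adjusting the "div" generic for a different input clock, and bcd_counter(23) would model the hours counter in 24-hour mode.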
https://github.com/lucas-bublitz/tUDESC
https://raw.githubusercontent.com/lucas-bublitz/tUDESC/main/template.typ
typst
Creative Commons Zero v1.0 Universal
// Modelo para Trabalhos Acadêmicos da UDESC // Criado por <NAME> // Licença livre nos termos do GNU // O autor se reserva ao direito de ser citado quanto parte, ou a integralidade, deste documento for utilizado por terceiros, não assumindo qualquer risco comercial, nem exigindo qualquer. #let udesc( config, doc )={ let title = config.title let authors = config.authors let abstract = config.abstract let epigraph = config.epigraph let congratulations = config.congratulations //Dependências //Formatação básica set page( "a4", margin: ( top: 3cm, left: 3cm, right: 2cm, bottom: 2cm ), number-align: right + top, numbering: "1" ) show par: set block(spacing: 1em) set par( justify: true, leading: 1em, first-line-indent: 3em ) set text( size: 12pt, lang: "pt", hyphenate: false ) set document( title: title, author: authors, date: auto ) //Citações //Citação longa (a função nativa quotes() é usada para citação longa) show quote: it => { par(it.body) } //Títulos (headings) show heading: set block(spacing: 2em) show heading: it => { set text(12pt) it // Gambiarra! (Faz com que o próximo parágrafo seja indentado!) "" v(-1em) } set heading( numbering: "1.1.1." ) show heading.where(level: 1): it => [ #pagebreak() #upper(it) ] show heading.where(level: 2): it => text(weight: "regular")[ #upper(it) ] // Figuras (tabelas, imagens, gráficos....) 
set figure.caption(position: top) show figure: it => { show ref: that => [Fonte: #cite(form: "prose", that.target)] it "" v(-1em) } // PARTE PRÉ-TEXTUAL let credit() = text(oklch(0%, 0, 0deg, 0%))[Este documento utiliza o Modelo de Trabalhos Acadêmidos para UDESC, criado por <NAME>] //Capa let cover() = { page(numbering: none)[ #set text(weight: "bold") #set align(center) UNIVERSIDADE DO ESTADO DE SANTA CATARINA – UDESC\ CENTRO DE CIÊNCIAS TECNOLÓGICAS – CCT\ DEPARTAMENTO DE ENGENHARIA ELÉTRICA – DEE\ #upper(authors) #place(horizon + center)[ #upper(block(title)) ] #place(bottom + center)[ #credit()\ JOINVILLE\ 2024 ] ] } //Anverso (obverse) let obverse() = page(numbering: none)[ #set align(center) #v(3em) #upper(authors) #v(13em) #upper(block(title)) #v(7em) #pad(left: 8cm)[ #align(left)[ #par(justify: true)[ Artigo apresentado ao curso de graduação em direito do Centro Universitário - Católica de Santa Catarina, como requisito parcial para obtenção do título de Bacharel em Direito.\ \ Orientador: Dr. Prof. <NAME> ] ] ] #place(bottom + center)[ JOINVILLE\ 2024 ] ] // Epígrafe (epigraph) let epigraph() = page(numbering: none)[ #place(bottom + right)[ #box(width: 8cm)[ #align(left)[ _Somos por essa causa, essa somente,\ perdidos, mas nossa pena é só esta:\ sem esperança, ansiar eternamente"._\ \ (A Divina Comédia, Canto IV, 40-42, <NAME>) ] ] ] ] //Agradecimento let acknowledgments() = page(numbering: none)[ #align(center, [*AGRADECIMENTOS*]) #v(2em) #lorem(300) ] //Abstract let abstract(text) = page(numbering: none)[ #align(center, [*RESUMO*]) #v(1em) #set par(leading: 0.40em, first-line-indent: 0em) #text *Palavras-chave*: Seguridade Social. Crítica do Direito. Direito. 
] // Sumários // Sumários dos capítulos e derivados show outline.where(target: selector(heading.where(outlined: true))): it => { show outline.entry: that => { if that.body.has("children") { box({ place(box(that.body.children.at(0), width: 3.5em)) box([], width: 3.5em) box({ that.body.children.at(2) box(width: 1fr, that.fill) box(width: 0.5em) box(width: 1em,align(left,that.page)) }, width: 1fr) }, height: auto) } else { that.body box(width: 1fr, that.fill) box(width: 0.5em) box(width: 1em,align(left,that.page)) } } show outline.entry.where(level: 1): it => [ *#upper(it)* ] show outline.entry.where(level: 2): it => [ #upper(it) ] show outline.entry.where(level: 3): it => [ *#it* ] it } // Sumário das figuras show outline.where(target: selector(figure)): it => { if query(figure).len() > 0 { show outline.entry: that => { that.body box({ box(width: 1fr, that.fill) box(width: 0.5em) box(width: 1em,align(left,that.page)) }, width: 1fr) } it } } show outline: set page(numbering: none) // PARTE PRÉ-TEXTUAL cover() obverse() acknowledgments() abstract(abstract) epigraph() outline(title: [Lista de figuras], target: figure) outline() // PARTE TEXTUAL doc // PARTE PÓS-TEXTUAl show bibliography: set par(first-line-indent: 0em, justify: false) bibliography("bibliography.bib", title: "Referências", style: "abnt.csl") } #let p(ref) = cite(ref, form: "prose")
https://github.com/kacper-uminski/math-notes
https://raw.githubusercontent.com/kacper-uminski/math-notes/main/tams11/math.typ
typst
Creative Commons Zero v1.0 Universal
#let math_template( title: none, course: none, doc ) = { set text(size: 12pt, font: "New Computer Modern") set heading(numbering: "1.") show math.integral: math.limits.with(inline: false) align(center, text(18pt)[ #course - #title ]) align(center, text(15pt)[ <NAME> ]) pagebreak() doc } #let titled_block(title, txt) = align(center,block( width: 90%, fill: luma(230), inset: 8pt, radius: 4pt, align(left)[ *#title* #txt ] )) #let ncr(all, choice) = $vec(all,choice)$
https://github.com/dashuai009/dashuai009.github.io
https://raw.githubusercontent.com/dashuai009/dashuai009.github.io/main/src/content/blog/037.typ
typst
#let date = datetime(
  year: 2022,
  month: 10,
  day: 25,
)
#metadata((
  title: "CMake support for C++20 modules",
  subtitle: [c++20, modules, cmake],
  author: "dashuai009",
  description: "CMake added support for the C++20 modules feature through target_sources. As of version 3.25 this command was improved substantially, and it looks like modules now get proper support (syntax the compiler itself does not support is, of course, beyond what CMake can do).",
  pubDate: date.display(),
))<frontmatter>

#import "../__template/style.typ": conf
#show: conf

== Related links <相关链接>

#link("https://zhuanlan.zhihu.com/p/350136757")[知乎 孙孟越] gives a fairly detailed introduction, including the account by the father of C++ of how modules developed.

#link("https://github.com/royjacobson/modules-report")[github modules-report]

#link("https://cmake.org/cmake/help/latest/command/target_sources.html")[cmake target\_sources]

#link("https://www.youtube.com/watch?v=hkefPcWySzI")[target\_sources CMake 2022 C++ Modules and More - Bill Hoffman - CppNow 2022] already showed in July 2022 how modules would be supported, but the feature only arrived in October.

== target\_sources <target_sources>

A simple example, from the YouTube talk above.

#figure(
  image("037/cmake.png"),
  caption: [
    cmake target\_source
  ],
)

== demo <demo>

#link("https://www.kitware.com/import-cmake-c20-modules/")[import CMake; C++20 Modules (kitware.com)]

The link above is the official CMake example, which is fairly simple.

I tested it with VS2022 Preview 17.6 and CMake 3.26; module internal partitions are still not supported.

For now (April 2023), build-system support for C++20 modules is still far from complete.
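To make the target\_sources usage concrete beyond the figure, here is a minimal CMakeLists.txt sketch. The project and file names (modules_demo, math.cppm, main.cpp) are made up for illustration; the `FILE_SET CXX_MODULES` syntax shown is the form that became stable in CMake 3.28, so earlier versions additionally need the experimental module API flag:

```cmake
cmake_minimum_required(VERSION 3.28)
project(modules_demo CXX)

# A library whose interface is a C++20 module file.
add_library(math_mod)
target_sources(math_mod
  PUBLIC
    FILE_SET CXX_MODULES FILES math.cppm)
target_compile_features(math_mod PUBLIC cxx_std_20)

# A consumer that does `import` of the module in main.cpp.
add_executable(app main.cpp)
target_link_libraries(app PRIVATE math_mod)
```

Declaring the module file through a `FILE_SET` (rather than as a plain source) is what lets CMake scan module dependencies and order the compilation correctly.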
https://github.com/typst/packages
https://raw.githubusercontent.com/typst/packages/main/packages/preview/unichar/0.1.0/ucd/block-110D0.typ
typst
Apache License 2.0
#let data = ( ("SORA SOMPENG LETTER SAH", "Lo", 0), ("SORA SOMPENG LETTER TAH", "Lo", 0), ("SORA SOMPENG LETTER BAH", "Lo", 0), ("SORA SOMPENG LETTER CAH", "Lo", 0), ("SORA SOMPENG LETTER DAH", "Lo", 0), ("SORA SOMPENG LETTER GAH", "Lo", 0), ("SORA SOMPENG LETTER MAH", "Lo", 0), ("SORA SOMPENG LETTER NGAH", "Lo", 0), ("SORA SOMPENG LETTER LAH", "Lo", 0), ("SORA SOMPENG LETTER NAH", "Lo", 0), ("SORA SOMPENG LETTER VAH", "Lo", 0), ("SORA SOMPENG LETTER PAH", "Lo", 0), ("SORA SOMPENG LETTER YAH", "Lo", 0), ("SORA SOMPENG LETTER RAH", "Lo", 0), ("SORA SOMPENG LETTER HAH", "Lo", 0), ("SORA SOMPENG LETTER KAH", "Lo", 0), ("SORA SOMPENG LETTER JAH", "Lo", 0), ("SORA SOMPENG LETTER NYAH", "Lo", 0), ("SORA SOMPENG LETTER AH", "Lo", 0), ("SORA SOMPENG LETTER EEH", "Lo", 0), ("SORA SOMPENG LETTER IH", "Lo", 0), ("SORA SOMPENG LETTER UH", "Lo", 0), ("SORA SOMPENG LETTER OH", "Lo", 0), ("SORA SOMPENG LETTER EH", "Lo", 0), ("SORA SOMPENG LETTER MAE", "Lo", 0), (), (), (), (), (), (), (), ("SORA SOMPENG DIGIT ZERO", "Nd", 0), ("SORA SOMPENG DIGIT ONE", "Nd", 0), ("SORA SOMPENG DIGIT TWO", "Nd", 0), ("SORA SOMPENG DIGIT THREE", "Nd", 0), ("SORA SOMPENG DIGIT FOUR", "Nd", 0), ("SORA SOMPENG DIGIT FIVE", "Nd", 0), ("SORA SOMPENG DIGIT SIX", "Nd", 0), ("SORA SOMPENG DIGIT SEVEN", "Nd", 0), ("SORA SOMPENG DIGIT EIGHT", "Nd", 0), ("SORA SOMPENG DIGIT NINE", "Nd", 0), )
https://github.com/Clamentos/FabRISC
https://raw.githubusercontent.com/Clamentos/FabRISC/main/src/spec/Section2.typ
typst
Creative Commons Attribution Share Alike 4.0 International
/// #import "Macros.typ": * /// #section( [ISA Modules], [This section is dedicated to provide an overview of the modular capabilities of the FabRISC ISA. The list of modules and some implementation specific parameters will be presented shortly.], ///. subSection( [Module List], [Features and capabilities are packaged in modules which can be composed of instructions or other requirements such as registers, events, operating modes, etc... There are no mandatory modules in this specification in order to maximize flexibility, however, once a particular extension is chosen, the hardware must provide all the features and abstractions of said extension. The requirements for each and every module will be extensively explained in the upcoming sections when required. The following is a simple table of all the existing base modules:], tableWrapper([Module list.], table( columns: (15%, 20%, 65%), align: (x, y) => (right, left + horizon, left + horizon).at(x), [#middle([*Index*])], [#middle([*Short name*])], [#middle([*Full name*])], [ 0], [`CISB` ], [Computational-Integer-Scalar-Basic.], // - [ 1], [`CISA` ], [Computational-Integer-Scalar-Advanced.], // [ 2], [`CISM` ], [Computational-Integer-Scalar-Multiword.], // [ 3], [`CIVB` ], [Computational-Integer-Vector-Basic.], // [ 4], [`CIVA` ], [Computational-Integer-Vector-Advanced.], // [ 5], [`CIVR` ], [Computational-Integer-Vector-Reductions.], // [ 6], [`CIC` ], [Computational-Integer-Compressed.], // [ 7], [`CFSB` ], [Computational-FP-Scalar-Basic.], // [ 8], [`CFSA` ], [Computational-FP-Scalar-Advanced.], // [ 9], [`CFVB` ], [Computational-FP-Vector-Basic.], // [ 10], [`CFVA` ], [Computational-FP-Vector-Advanced.], // [ 11], [`CFVR` ], [Computational-FP-Vector-Reductions.], // [ 12], [`DSB` ], [Data-Scalar-Basic.], // [ 13], [`DSA` ], [Data-Scalar-Advanced.], [ 14], [`DVB` ], [Data-Vector-Basic.], [ 15], [`DVA` ], [Data-Vector-Advanced.], [ 16], [`DAB` ], [Data-Atomic-Basic.], [ 17], [`DAA` ], [Data-Atomic-Advanced.], [ 
18], [`DB` ], [Data-Block.], [ 19], [`DC` ], [Data-Compressed.], [ 20], [`FIB` ], [Flow-Integer-Basic.], [ 21], [`FIA` ], [Flow-Integer-Advanced.], [ 22], [`FIC` ], [Flow-Integer-Compressed.], [ 23], [`FFB` ], [Flow-FP-Basic.], [ 24], [`FFA` ], [Flow-FP-Advanced.], [ 25], [`FV` ], [Flow-Vector.], [ 26], [`SB` ], [System-Basic.], [ 27], [`SA` ], [System-Advanced.], [ 28], [`VC` ], [Vector-Configuration], [ 29], [`FRMD` ], [FP-Rounding-Modes], [ 30], [`HLPR` ], [Helper-Registers.], [ 31], [`PERFC` ], [Performance-Counters.], [ 32], [`FNC` ], [Fencing.], [ 33], [`TM` ], [Transactional-Memory.], [ 34], [`EXC` ], [Exceptions.], [ 35], [`IOINT` ], [IO-Interrupts.], [ 36], [`IPCINT`], [IPC-Interrupts.], [ 37], [`USER` ], [User-Mode.], [ 38], [`DALIGN`], [Data-Alignment.], [ 39], [`CTXR` ], [Context-Reducer], [ 40], [-], [Reserved for future use.], [...], [... ], [...], [ 55], [-], [Reserved for future use.], [ 56], [-], [Reserved for custom extension.], [...], [... ], [...], [ 63], [-], [Reserved for custom extension.] )), [If the hardware designers choose to not implement any privileged capability provided by the `USER` module, then the system must always run in _machine mode_ which has complete and total control over all microarchitectural resources.] ), ///. subSection( [Implementation Specific Parameters], [FabRISC defines some implementation specific microarchitectural parameters to clear potential capability related issues in both the documentation, the running software, as well as making the ISA more general, flexible and extensible. These parameters, along with other information, must be physically stored internally in the shape of read-only registers so that programs can gather information about various characteristics of the system via dedicated operations such as the `SYSINFO` instruction. Depending on which modules are implemented, some of these parameters can be ignored and set to a default value. 
Some parameters are presented here as they are fundamental and mandatory, while others are declared in the following sections when appropriate. This is not an exhaustive list:], list(tight: false, [*ISA Modules* (`ISAMOD`): _This 64 bit parameter indicates the implemented instruction set modules as previously described. `ISAMOD` works as a checklist where each bit indicates the desired module: the least significant bit will be the first module, while the most significant bit will be the last module in the list (see the "index" column from the module table). The remaining most significant bits are reserved for future expansion as well as custom extensions._], [*ISA Version* (`ISAVER`): _This 16 bit parameter indicates the currently implemented ISA version. `ISAVER` is subdivided into two bytes with the most significant byte representing the major and the least significant byte the minor version. Minor versions are considered compatible with each other, while major versions may be not and it will depend on the actual change history made to the architecture._] ), comment([ I consider this modular approach to be a wise idea because it allows the hardware designers to only implement what they really need with high degree of granularity and little extra. The fact that there is no explicit mandatory subset of the ISA may seem odd, but can help with specialized systems, as well as to greatly simplify the specification. With this, it becomes perfectly possible to create, for example, a floating-point only processor with very few integer instructions to alleviate overheads and extra complexities. This decision, however, makes silly and nonsensical things possible such as having no flow transfer or no memory operations. The ISA, in the end, kind of relies on the common sense of the hardware designers when it comes to realizing sensible microarchitectures. 
The miscellaneous modules also contain the `USER` module, which is responsible for giving the ISA different privilege levels by restricting access to some resources and functionalities. FabRISC currently only supports a maximum of two privilege levels: "user mode" and "machine mode". The `EXC`, `IOINT`, `IPCINT`, `HLPR` and `PERFC` modules allow the implementation of hardware level event-driven computation which, in conjunction with the earlier mentioned modules, is what really helps in supporting fully fledged operating systems, proper memory and process virtualization techniques as well as aiding higher-level event-driven programming. The purpose of the ISA parameters is to improve code compatibility by laying out in clear way all the capabilities of the specific microarchitecture. These parameters are also handy for writing a more general and broad documentation that applies to many different situations. Each particular hart must hold these parameters as read-only values organized in special purpose "configuration" registers in a separate bank from the main file. The software can then probe these registers and perform dynamic code dispatching for the specific microarchitecture. ]) ) ) #pagebreak() ///
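The `ISAMOD` checklist and `ISAVER` split described above can be decoded in software once the values have been read (for example via `SYSINFO`). A minimal Python sketch; the module indices and short names follow the table in this section, while the decoding helpers themselves are illustrative assumptions:

```python
# A few (index -> short name) entries from the FabRISC module table.
MODULES = {0: "CISB", 1: "CISA", 2: "CISM", 7: "CFSB", 26: "SB", 37: "USER"}

def implemented_modules(isamod: int) -> list[str]:
    """Decode the 64-bit ISAMOD checklist: bit i set => module with index i is present."""
    return [name for i, name in sorted(MODULES.items()) if (isamod >> i) & 1]

def isa_version(isaver: int) -> tuple[int, int]:
    """Split the 16-bit ISAVER into (major, minor) bytes."""
    return (isaver >> 8) & 0xFF, isaver & 0xFF

# Example: a core implementing CISB, CFSB and SB, reporting ISA version 2.3.
isamod = (1 << 0) | (1 << 7) | (1 << 26)
print(implemented_modules(isamod))  # ['CISB', 'CFSB', 'SB']
print(isa_version(0x0203))          # (2, 3)
```

Software would probe these configuration registers at startup and dispatch to code paths matching the implemented modules.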
https://github.com/lyzynec/orr-go-brr
https://raw.githubusercontent.com/lyzynec/orr-go-brr/main/06/main.typ
typst
#import "../lib.typ": *

#knowledge[
  #question(name: [Formulate the general problem of calculus of variations. Explain the difference between the _variation_ and _differential_.])[
    #part(name: [General problem of calculus of variations])[
      $
        y(a) = y_a, y(b) = y_b\
        J(y) = integral_a^b L(x, y(x), y'(x)) upright(d) x\
        min_(y(x) in cal(C)^1 [a, b]) J(y(x))
      $
      - $y in cal(C)^1$ means that the function is $cal(C)^1$ continuous, meaning its derivative is also continuous.
    ]
    #part(name: [Variation and differential])[
      A _differential_ applies a small change to the input of a function in order to approximate the sensitivity and direction of that function at a point.

      A _variation_ applies a small change to the function itself, i.e. it changes some of the function's values slightly. Since we are looking for the _function_ that minimizes some integral, not a countable number of values (a vector), this makes sense.

      You can picture it as taking the differential of a function whose output is an infinite-size vector, the entries of that vector being the values at all points of the original varied function. _Or not, I don't know, but this would make sense._
    ]
  ]
  #question(name: [Write down the Euler--Lagrange equation and explain that it constitutes the first--order necessary condition of optimality for the calculus of variations problem $min_(y(x)) integral_a^b L(x, y(x), y'(x)) upright(d) x$.])[
    $
      (diff L(x, y(x), y'(x))) / (diff y(x)) - upright(d)/(upright(d) x) (diff L(x, y(x), y'(x))) / (diff y'(x)) = 0
    $
    This gives the first--order necessary condition of optimality #footnote[
      You may remember it from physics as this ugly thing
      $
        (diff L)/(diff q^i) (t, bold(q)(t), dot(bold(q))(t)) - upright(d)/(upright(d) t) (diff L)/(diff dot(q)^i)(t, bold(q)(t), dot(bold(q))(t)) = 0, quad i = 1, 2, ..., n
      $
    ].
    - Functions that satisfy this are called _extremals_.
    _I do not really have an intuitive explanation for why it is like that._
  ]
  #question(name: [Give the first--order necessary conditions of optimality for a general (possibly nonlinear) optimal control problem on a fixed and finite time interval. Highlight that it comes in the form of a set of differential and algebraic equations together with the boundary conditions that reflect the type of the problem.])[
    The optimal control problem
    $
      min_(bold(x)(t), bold(u)(t)) &[integral_(t_i)^(t_f) L(bold(x), bold(u), t) upright(d) t]\
      "subject to" dot(bold(x))(t) &= bold(f)(bold(x), bold(u), t)\
      bold(x)(t_i) &= bold(r)_i
    $
    with fixed final state
    $
      bold(x)(t_f) = bold(r)(t_f)
    $
    has an augmented cost function with the augmented Lagrangian
    $
      J^"aug" (t, bold(x), dot(bold(x)), bold(u), bold(lambda)) = integral_(t_i)^(t_f) underbrace([ L(bold(x), bold(u), t) + bold(lambda)^T (dot(bold(x)) - bold(f)(bold(x), bold(u), t))], L^"aug") upright(d) t
    $
    #align(center)[#grid(columns: 2, row-gutter: 10pt, column-gutter: 10pt, align: left,
      [state equation], $dot(bold(x))(t) - bold(f)(bold(x), bold(u), t) = bold(0)$,
      [costate equation], $gradient_bold(x) L - gradient_bold(x) bold(f) bold(lambda) = dot(bold(lambda))$,
      [equation of stationarity], $gradient_bold(u) L - gradient_bold(u) bold(f) bold(lambda) = bold(0)$,
    )]
  ]
  #question(name: [Give the first--order necessary conditions for an optimal control problem on a fixed and finite time interval with a continuous LTI system and a quadratic cost -- the so--called LQ problem.
Discuss how the boundary conditions change if the final state is regarded as fixed or free.])[
    Now the augmented Lagrangian looks like
    $
      L^"aug" = 1/2 (bold(x)^T bold(Q) bold(x) + bold(u)^T bold(R) bold(u)) + bold(lambda)^T (dot(bold(x)) - bold(A) bold(x) - bold(B) bold(u))
    $
    The conditions of optimality will be
    $
      dot(bold(x)) &= bold(A) bold(x) + bold(B) bold(u)\
      dot(bold(lambda)) &= bold(Q) bold(x) - bold(A)^T bold(lambda)\
      bold(0) &= bold(R) bold(u) - bold(B)^T bold(lambda)
    $
    Provided $bold(R) > 0$,
    $
      bold(u) = bold(R)^(-1) bold(B)^T bold(lambda)
    $
    Boundary conditions:
    #align(center)[#grid(columns: 2, row-gutter: 10pt, column-gutter: 10pt, align: left,
      [fixed final state], $bold(x)(t_f) = bold(r)(t_f)$,
      [free final state], $bold(S)_f bold(x)(t_f) + bold(lambda)(t_f) = 0$
    )]
    - For a fixed final state, the solution is obtained through the reachability Gramian.
    - For a free final state, we have to solve the differential Riccati equation.
  ]
  #question(name: [Characterize qualitatively the solution to the LQ--optimal control problem on a fixed and finite time interval with a fixed final state. Namely, you should emphasize that it is an open--loop control.])[
    - It is an open--loop solution, the same as with the discrete--time variant.
    - Existence is determined by the reachability of the given state (via the reachability Gramian).
  ]
  #question(name: [Characterize qualitatively the solution to the LQ--optimal control problem on a fixed and finite time interval with a free final state. Namely, you should emphasize that it is a time--varying state--feedback control and that the time--varying feedback gains are computed from the solution to the differential Riccati equation.])[
    - It is a closed--loop solution, the same as with the discrete--time variant.
    - It is obtained through the differential Riccati equation.
    - It consists of a time--varying state--feedback control.
  ]
  #question(name: [Explain the basic facts about LQ--optimal control on an _infinite time interval_ with a free final state.
Namely, you should explain that it comes in the form of a state feedback and that the feedback gain can be computed either as the limiting solution to the differential Riccati equation or (and this is preferable) as a solution to the Algebraic Riccati Equation (ARE). The latter option brings in some issues related to the existence and uniqueness of a stabilizing controller, which you should discuss.])[
    - As for the discrete--time variant, this results in a time--invariant state feedback.
    - It can be obtained numerically through the CARE, or by solving the differential Riccati equation.
    - As for the discrete--time variant, a unique stabilizing solution exists if and only if $(bold(A), sqrt(bold(Q)))$ is detectable (or observable for $bold(S)(t) > 0$).
  ]
]

#skills[
  #question(name: [Solve the continuous-time LQ-optimal control problem using solvers available in Matlab.])[]
]
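As a small numerical check of the infinite-horizon discussion above, one can integrate the differential Riccati equation backward in time until it settles at the CARE solution. A hedged stdlib-Python sketch for the scalar case; the system $a = 0, b = 1, q = r = 1$ is an assumption chosen so that the CARE root $p = 1$ and gain $k = 1$ are known in closed form:

```python
def lq_gain(a, b, q, r, sf=0.0, dt=1e-3, horizon=20.0):
    """Integrate dp/dtau = q + 2*a*p - (b*p)**2 / r (the scalar Riccati
    equation in reverse time tau = t_f - t) from p(0) = sf.  For a long
    horizon p converges to the stabilizing CARE root, and k = b*p/r is
    then the constant state-feedback gain u = -k*x."""
    p = sf
    for _ in range(int(horizon / dt)):
        # Intermediate p values give the time-varying finite-horizon gains.
        p += dt * (q + 2.0 * a * p - (b * p) ** 2 / r)
    return b * p / r

# a=0, b=1, q=r=1: the CARE reads 1 - p^2 = 0, so p = 1 and k = 1.
k = lq_gain(0.0, 1.0, 1.0, 1.0)
print(round(k, 3))  # 1.0
```

In practice one would solve the CARE directly (e.g. Matlab's `care`/`lqr`), but the backward integration above makes the "limiting solution of the differential Riccati equation" statement concrete.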
https://github.com/Myriad-Dreamin/typst.ts
https://raw.githubusercontent.com/Myriad-Dreamin/typst.ts/main/crates/conversion/vec2sema/README.md
markdown
Apache License 2.0
# reflexo-vec2sema Render vector items into HTML semantics. See [Typst.ts](https://github.com/Myriad-Dreamin/typst.ts)
https://github.com/gongke6642/tuling
https://raw.githubusercontent.com/gongke6642/tuling/main/内省/询问/询问.typ
typst
= query

Finds elements in the document.

The query function lets you search the document for elements of a particular type or with a particular label. To use it, you first need to make sure a context is available.

== Finding elements

In the example below, we create a custom page header that shows the text "Typst Academy" in small capitals together with the current section heading. On the first page the section heading is omitted, because the header comes before the first section heading.

To achieve this layout, we open a context and then query all headings before the current location. The code in the context block runs twice: once per page.

- On the first page, querying all headings before the current location yields an empty array: there are no previous headings. We check for this case and display only "Typst Academy".
- For the second page, we retrieve the last element from the query result. This is the most recent heading before the current location and is therefore the heading of the section we are currently in. We access its content through the body field and display it next to "Typst Academy".

#image("image.png")
#image("image2.png")
#image("image3.png")
#image("image4.png")
https://github.com/jordanqt327/Typst-Pruebas
https://raw.githubusercontent.com/jordanqt327/Typst-Pruebas/main/Typst/prueba.typ
typst
= My Document Title

This is a paragraph in Typst. You can write your text here.

== Section Title

You can also create sections and subsections. Here's a simple formula: $a^2 + b^2 = c^2$.

* This is a bullet point.
* Another bullet point.

1. This is a numbered list item.
2. Another numbered list item.

\-------

= List

+ The climate
  - Temperature
  - Precipitation
+ The topography
+ The geology

\----------------

= Images

Dona: @Dona used to steal the shoes and hide them under the bed.

#figure(
  image("Dona.jpg", width: 45%),
  caption: [Dona]
)<Dona>

= Math

The equation $Q = rho A v + C$ defines the glacial flow rate.

\------------------------

The flow rate of a glacier is defined by the following equation:

$ Q = rho A v + C $

\------------------------

Total displaced soil by glacial flow:

$ 7.32 beta + sum_(i=0)^nabla Q_i / 2 $

\-------------------------

Total displaced soil by glacial flow:

$ 7.32 beta + sum_(i=0)^nabla (Q_i (a_i - epsilon)) / 2 $

\-------------------------

$ v := vec(x_1, x_2, x_3) $

= Formatting

#par(justify: true)[
  = Background
  In the case of glaciers, fluid dynamics principles can be used to understand how the movement and behaviour of the ice is influenced by factors such as temperature, pressure, and the presence of other fluids (such as water).
]

\------------------------------------

#set par(justify: true)

= Background
In the case of glaciers, fluid dynamics principles can be used to understand how the movement and behaviour of the ice is influenced by factors such as temperature, pressure, and the presence of other fluids (such as water).
= Set up the page #set text( font: "New Computer Modern", size: 10pt ) #set page( paper: "a6", margin: (x: 1.8cm, y: 1.5cm), ) #set par( justify: true, leading: 0.52em, ) = A hint of sophistication #set heading(numbering: "1.") = Introduction #lorem(10) == Background #lorem(12) == Methods #lorem(15) -------------------------------- #set heading(numbering: "1.a") = Introduction #lorem(10) == Background #lorem(12) == Methods #lorem(15) ------------------------ = Show rules #show "ArtosFlow": name => box[ #box(image( "Dona.jpg", height: 0.7em, )) #name ] This report is embedded in the ArtosFlow project. ArtosFlow is a project of the Artos Institute.
https://github.com/ckunte/typst-snippets-st
https://raw.githubusercontent.com/ckunte/typst-snippets-st/master/README.md
markdown
MIT License
# Custom Typst snippets for use in Sublime Text

Typesetting documents, letters, reports, or even books in [Typst] is not as verbose as LaTeX, but certainly error-prone, given the need for strict syntax. A handful of [Sublime Text][st] snippets provided in this repository try to reduce this tedium to as low as practicable.

This repository contains the following custom snippets:

| Snippet                   | Inserts                |
| ------------------------- | ---------------------- |
| `apdx` + <kbd>tab</kbd>   | appendix block         |
| `bib` + <kbd>tab</kbd>    | BibTeX entry           |
| `cod` + <kbd>tab</kbd>    | python code file       |
| `fig` + <kbd>tab</kbd>    | figure block           |
| `file` + <kbd>tab</kbd>   | file                   |
| `hd` + <kbd>tab</kbd>     | set heading no.s       |
| `letter` + <kbd>tab</kbd> | letter block           |
| `lnk` + <kbd>tab</kbd>    | url and link label     |
| `ltmpl` + <kbd>tab</kbd>  | letter template        |
| `lscape` + <kbd>tab</kbd> | set page to landscape  |
| `note` + <kbd>tab</kbd>   | note block             |
| `ntmpl` + <kbd>tab</kbd>  | note template          |
| `pb` + <kbd>tab</kbd>     | page break             |
| `ref` + <kbd>tab</kbd>    | reference block        |
| `tbl` + <kbd>tab</kbd>    | table block            |

These snippets work when the file under edit is set as a Typst file from the pull-up menu in the status bar. (Suggest installing the [Typst package][tp].)

## What are snippets and how do they work?

The concept of a snippet is simple. Think of a block of pre-formatted text (i.e., a template) that one needs to use often. One can of course type or copy-paste such blocks of text repeatedly the hard way, or one could instead assign each such common block of text an abbreviated keyword, which in turn calls the entire block of text. To ensure such blocks do not accidentally appear while typing the actual content of the note, paper, or report, a trigger is required. The trigger in this case is the <kbd>tab</kbd> key.

## How to add these snippets to Sublime Text

This is done in two steps, viz., (a) add a repository and then (b) activate it. How to do this is described below.

1.
From _Tools > Command Palette..._ type _Add Repository_, and in the input box, enter `https://github.com/ckunte/typst-snippets-st` 2. From _Tools > Command Palette..._ type _Install Package_, and in the result list, type `typst-snippets-st` and select the thus found package. ## How to upgrade From _Tools > Command Palette..._ type _Upgrade Package_, and select one of the two options presented (i.e., _Package Control: Upgrade Package_ or _Package Control: Upgrade/Overwrite All Packages_). ## Custom build A custom build file (e.g. like below: `Typst-Cyg.sublime-build`) can be added under _Settings > Browse packages > User_, with file path edited to suit: ``` { "shell_cmd" : "typst compile \"$file_name\"", "selector" : "source.typ", "path" : "C:\\Users\\ckun\\misc\\cyg\\bin;$path", "working_dir" : "$file_path" } ``` Then From _Tools > Build system_, select _Typst-Cyg_. This will enable building (with Ctrl+B) from source to output (i.e. from .typ to .pdf file) from within Sublime Text. [Typst]: https://typst.app [st]: https://www.sublimetext.com "Text editing done right." [tp]: https://packagecontrol.io/packages/Typst
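As an illustration of the snippet concept described above, this is roughly what a Sublime Text snippet file looks like. The file below (a hypothetical `fig.sublime-snippet`) is illustrative, not copied from this package; the `${n:placeholder}` fields are tab stops that the cursor jumps through after expansion:

```xml
<snippet>
    <!-- Typing "fig" followed by Tab expands to a Typst figure block. -->
    <content><![CDATA[
#figure(
  image("${1:path.png}", width: ${2:80%}),
  caption: [${3:Caption text}],
) <${4:fig-label}>
]]></content>
    <tabTrigger>fig</tabTrigger>
    <scope>source.typ</scope>
    <description>Typst figure block</description>
</snippet>
```

The `scope` restricts expansion to Typst files (matching the `selector` used in the build file above), which is what keeps snippets from firing in other file types.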
https://github.com/giZoes/justsit-thesis-typst-template
https://raw.githubusercontent.com/giZoes/justsit-thesis-typst-template/main/others/bachelor-evaluation.typ
typst
MIT License
#import "style.typ": 字体, 字号 #let table-stroke = 0.5pt #let indent = h(2em) #let proposal ={ // set page(numbering: "1") // counter.(page).update(0) align(center, text( font: 字体.宋体, size: 字号.小三, weight: "bold", "江苏科技大学苏州理工学院毕业设计(论文)中期检查表" )) set text(font: 字体.宋体, size: 字号.五号) set par(first-line-indent: 2em) // #set underline(offset: 0.1em) { set text(size: 字号.五号) table( columns: (58pt, 1fr, 58pt, 1fr, 58pt, 1fr), stroke: table-stroke, rows: 1.5cm, align: center + horizon, //标题部分 [学生姓名], [张居正], [学号], [218111545200], [指导教师], [某某], table.cell(colspan: 2)[毕业设计(论文)题目], table.cell(colspan: 4)[量子力学下的代码质量与代码进化论], ) } v(-1.2em) table( columns: (60pt,1fr), stroke: table-stroke, inset: 10pt, rows: auto, align: (center+horizon,left), )[ #v(10pt) 毕业设计的主要工作内容和计划进度(由学生填写) #v(10pt) ][ #v(15em) ][ #v(10pt) 目前已完成工作情况(由学生填写) #v(10pt) ][ #v(10em) ][ #v(10pt) 存在的主要问题和解决方案(由学生填写) #v(10pt) ][ #v(25em) ] v(-1.2em) table( columns: (190pt,1fr), stroke: table-stroke, inset: 10pt, rows: auto, align: (center+horizon,), )[ #v(10pt) 指导教师对学生工作态度评价 #v(10pt) ][ #sym.ballot.x 认真   #sym.ballot 较认真   #sym.ballot 一般   #sym.ballot 差 ][ #v(10pt) 指导教师对学生完成工作质量评价 #v(10pt) ][ #sym.ballot 优   #sym.ballot.x 良   #sym.ballot 中   #sym.ballot 一般   #sym.ballot 差 ] v(-1.2em) table( columns: (60pt,1fr), stroke: table-stroke, inset: 10pt, rows: auto, align: (center+horizon,left), )[ #v(10pt) 指导教师意见和建议 #v(10pt) ][ 还行 #set align(right+bottom) #v(1em) 指导教师签名: #h(7em) #v(0.5em)     年  月  日 ]} #proposal
https://github.com/ysthakur/PHYS121-Notes
https://raw.githubusercontent.com/ysthakur/PHYS121-Notes/main/Notes/Ch01.typ
typst
MIT License
= Chapter 1 == Distance vs Displacement - Distance is an absolute value, how much you travelled - Displacement is $Delta x = "end" - "start"$ - Path doesn't matter - Vector quantity == Significant figures - When multiplying or dividing, whichever operand had fewer significant figures, that's how many significant figures the result has - e.g. $3.73 dot 5.7 = 21$ (5.7 has 2 sig figs, so 21 does too) - When adding or subtracting, whichever operand had the fewest decimal places, that's how many decimal places the result will have - e.g. $16.7 + 5.24 = 21.94 = 21.9$ (16.7 has 1 decimal place, so the result does too)
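Both rounding rules can be checked numerically. Below is a minimal Python sketch; the `round_sig` helper is my own, not a standard-library function:

```python
import math

def round_sig(x, sig):
    """Round x to `sig` significant figures."""
    if x == 0:
        return 0.0
    return round(x, sig - 1 - math.floor(math.log10(abs(x))))

# Multiplication: keep the fewer significant figures (5.7 has 2)
print(round_sig(3.73 * 5.7, 2))   # 21.0

# Addition: keep the fewer decimal places (16.7 has 1)
print(round(16.7 + 5.24, 1))      # 21.9
```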
https://github.com/piepert/philodidaktik-hro-phf-ifp
https://raw.githubusercontent.com/piepert/philodidaktik-hro-phf-ifp/main/src/lernerfolgskontrollen/spue01/main.typ
typst
Other
#import "/src/lernerfolgskontrollen/template.typ": *

#set page(flipped: true)
#set text(size: 0.675em)
#show: columns.with(2)

#show: project.with(
    topic: "Schriftlicher Dialog",
    stufe: 9
)

*Aufgabe* #h(1fr) (20 P.) \
Wähle von den folgenden zwei Situationen eine aus. Gestalte anschließend einen philosophischen Dialog, in welchem sich die Dialogpartner*innen begründet mit ihrer Position zur jeweiligen Situation auseinandersetzen!

+ Der Nihilist Nils und die Hedonistin Heidi sprechen über die bevorstehende Party ihrer Klassenkameradin Lucy. Diese soll am kommenden Samstag stattfinden. Einen besonderen Anlass für die Party gibt es nicht. Beide haben eine Einladung bekommen, sind sich jedoch uneinig, ob sie diese annehmen.

+ Der Hedonist Heinrich und die Nihilistin Nadja unterhalten sich über die neue Umweltkampagne der Schule. Diese ruft dazu auf, dass sich alle Schüler:innen jeden Sonntag im Schuljahr zum Müllsammeln im Park treffen sollen. Beide haben eine andere Meinung zu dieser Kampagne.

*Zusatzaufgabe* #h(1fr) (+ 2 P.) \
Maja, eine Vertreterin der Mesoteslehre, bekommt das Gespräch zwischen den beiden mit und möchte eine Lösung finden, mit der beide zufrieden sind. Ergänze den Lösungsvorschlag von Maja in deinem Dialog.

#h(1fr)

#show table.cell: block.with(breakable: false)
#table(columns: (1fr, auto),
    strong[Dialog], strong[P.],

    table.cell(fill: black.lighten(80%), strong[Formales -- allgemein]), table.cell(fill: black.lighten(80%), strong[2 P.]),
    [ Grammatik ], [\_\_ / 1],
    [ Orthographie ], [\_\_ / 1],

    table.cell(fill: black.lighten(80%), strong[Formales -- dialog-spezifisch]), table.cell(fill: black.lighten(80%), strong[4 P.]),
    [ Eine Gliederung des Dialoges ist erkennbar (in Hinführung, These aufstellen, Argumentation, Fazit). ], [\_\_ / 2],
    [ Der Text ist in reiner Gesprächsform. Nur die Figuren sprechen, es gibt keinen Erzähler oder sonstigen Text, der nicht durch Figuren gesagt wird. ], [\_\_ / 1],
    [ Es sind mindestens 2 Gesprächsteilnehmer vorhanden. 
], [\_\_ / 1],

    table.cell(fill: black.lighten(80%), strong[Inhaltliches -- generell]), table.cell(fill: black.lighten(80%), strong[2 P.]),
    [ Der in der Aufgabenstellung gegebene Kontext wird eingehalten.
      - keine vom Thema abschweifenden Bemerkungen
      - Bezug zum gegebenen Thema erkennbar
    ], [\_\_ / 2],

    table.cell(fill: black.lighten(80%), strong[Inhaltliches -- Problematisierung/Thesenfindung]), table.cell(fill: black.lighten(80%), strong[3 P.]),
    [ Mindestens eine These zur Prüfung wird aufgestellt. Z.B.:
      - "Wir sollten auf die Party, da es uns allen Spaß bereitet!"
      - "Ob ich mich nun im Park treffe oder nicht, ist egal, denn die Umwelt befindet sich eh nicht in meiner Kontrolle."
    ], [\_\_ / 1],
    [ Die bearbeiteten Thesen bieten eine Problematik des moralischen Handelns, der philosophischen Lebensgestaltung oder bzgl. der Sinndeutung des Lebens. Z.B.:
      - Sollten wir nur handeln, wenn es uns Freude bereitet?
      - Inwiefern muss ich Verantwortung übernehmen, wenn meine Umgebung scheinbar nicht unter meiner Kontrolle ist?
    ], [\_\_ / 2],

    table.cell(fill: black.lighten(80%), strong[Inhaltliches -- Diskussion/Argumentation]), table.cell(fill: black.lighten(80%), strong[6 P.]),
    [ Die bearbeiteten Thesen werden durch Argumente kritisch geprüft. Argumente bleiben nicht "im leeren Raum", sondern die Gesprächspartner reagieren darauf. ], [\_\_ / 3],
    [ Die Argumente sind formal logisch-gültig oder mindestens plausibel. ], [\_\_ / 3],

    table.cell(fill: black.lighten(80%), strong[Inhaltliches -- Fazit]), table.cell(fill: black.lighten(80%), strong[3 P.]),
    [ Die Argumente werden abgewogen und eine stärkere Position wird herausgestellt. Es handelt sich nicht um ein reines Aufzählen von Argumenten, sondern die Argumente bauen sich auf und kommen schließlich zu einer Beantwortung der These. ], [\_\_ / 1],
    [ Das Problem wird gelöst (die These wird beantwortet) oder offene Fragen werden angesprochen. 
], [\_\_ / 2],

    table.cell(fill: black.lighten(80%), strong[Zusatzaufgabe]), table.cell(fill: black.lighten(80%), strong[2 P.]),
    [ Das Argument macht sich die Mesotes-Lehre zunutze, um das Problem zu beantworten. ], [\_\_ / 1],
    [ Das Argument ist eine mögliche Lösung des Problems. ], [\_\_ / 1],
)

/*
#line(length: 100%)
#set enum(numbering: "A)")

*Aufgabe 1* #h(1fr) (6 P.) \
Beschreibe eine Religion deiner Wahl anhand der folgenden Kriterien:
- Prophet
- Name des Gottes
- wichtige Feste
- Gebote
- Vorstellung

*Aufgabe 2* #h(1fr) (8 P.) \
Wähle von den folgenden zwei Aufgaben eine aus und bearbeite sie.

#grid(columns: 2)[
    1. Erläutere das Kausalitätsprinzip anhand eines selbstgewählten Beispiels.
][
    2. Erläutere anhand eines selbstgewählten Beispiels, inwiefern Gott den Menschen laut Nietzsche Orientierung geboten hat.
]

*Aufgabe 3* #h(1fr) (6 P.) \
Wähle von den folgenden zwei Aufgaben eine aus und bearbeite sie.

#grid(columns: 2)[
    1. Nimm begründet Stellung dazu, ob ChatGPT als "Gott" bezeichnet werden kann. _Hinweis: Du kannst dich auf Jonas oder Nietzsche beziehen._
][
    2. Entwirf als Thomas v. Aquin eine Antwort auf die folgende These. Berücksichtige dabei seinen Gottesbeweis. _"Wer Gott nicht sieht, kann ihn auch nicht beweisen."_
]

*Zusatzaufgabe* #h(1fr) (+ 1 P.) \
Beschreibe eine Religion deiner Wahl anhand der folgenden Kriterien:
- Prophet
- Name des Gottes
- wichtige Feste
- Gebote
- Vorstellung
https://github.com/werifu/HUST-typst-template
https://raw.githubusercontent.com/werifu/HUST-typst-template/main/CHANGELOG.md
markdown
MIT License
**before**

No changelog was kept before this point; entries to be backfilled later.

**2023/05/09**

* Added automatic link-jumping for references
* Changed the citation format: references are now cited with `#bib_cite("xxx")` instead of `@` directly

**2023/05/15**

* Added customization options
* Adjusted the line spacing (1.5em => 1.24em), proofread by eye, with no formula behind it
* Customized the table of contents to meet the School of Cyber Science and Engineering's requirements

**2023/06/03**

* Polished the sample and README copy

**2023/06/08**

* Added anonymization support; the `anony` parameter controls whether black redaction bars are used
* Documented this feature

**2023/06/09**

* Added some illustrative figures

**2023/12/23**

* Fixed the duplicated-caption issue introduced since Typst 0.9.0
* Added a custom bibliography style for HUST-CSE-UG
* Improved the README wording
https://github.com/jdupak/thesis-slides-rustgcc
https://raw.githubusercontent.com/jdupak/thesis-slides-rustgcc/master/main.typ
typst
#import "@preview/polylux:0.3.1": * #import "@preview/fletcher:0.3.0" as fletcher: node, edge #import "theme/ctu.typ": * #show: ctu-theme.with() #title-slide[ = Memory Safety Analysis for Rust GCC #v(0.75em) #set text(size: 1em) <NAME> #v(1em) #set text(size: 0.7em) #table( columns: (1fr, 1fr), column-gutter: 0.5em, stroke: none, align: (right, left), [Supervisor:], [Ing. <NAME> PhD.], [Project reviewer:], [MSc. <NAME>] ) #v(1em) Faculty of Electrical Engineering \ Department of Measurement #notes( ```md Vážená komise, dámy a pánové, dovolte abych vám představil výsledky své diplomové práce nazvané "Analýza bezpečného přístupu k paměti pro kompilátor Rust GCC". Cílem mé práce bylo implementovat statickou analýzu, známou jako "borrow checker" do vznikajícího nového překladače jazyka Rust nad platformou GCC. ``` ) ] #slide[ = Borrow Checker Rules #only("1-2")[ - Move ] #only("3-")[#text(fill: luma(50%))[ - Move ]] #only(2)[ ```rust let mut v1 = Vec::new(); v1.push(42) let mut v2 = v1; // <- Move println!(v1[0]); // <- Error ``` #v(0.5em) ] #only("1-3")[ - Lifetime subset relation - Borrow must outlive borrowee ] #only("4-")[#text(fill: luma(50%))[ - Lifetime subset relation - Borrow must outlive borrowee ]] #only(3)[ ```rust fn f() -> &i32 { &(1+1) } // <- Error ``` #v(0.5em) ] - One mutable borrow or multiple immutable borrows - No modification of immutable borrow data #only(4)[ ```rust let mut counter = 0; let ref1 = &mut counter; // ... let ref2 = &mut counter; // <- Error ``` ] #notes( ```md Nejdříve vám seznámím se samotnou analýzou a problémy které řeší. Základní operací při práci s pamětí je přesun unikátních zdrojů, takzvaný "move". Pro move musíme zajistit, že unikátní zdroj není duplikován a že k původnímu, nyní nevalidnímu objektu, není dále přistupováno. Pro dočasné používání, což je například volání metody, musíme zajistit, že objekt bude existovat po celou dobu tohoto používání. 
Typickou chybou v této oblasti je například návrat reference na lokální hodnotu.

    Pro bezpečnou součinnost více vláken musíme zajistit buďto sdílený přístup pouze pro čtení, a nebo exkluzivní přístup pro zápis.
    ```
    )
]

#slide[
    = Checking Functions

    #let f = ```rust
    struct Vec<'a> { ... }

    impl<'a> Vec<'a> {
        fn push<'b> where 'b: 'a (&mut self, x: &'b i32) {
            // ...
        }
    }
    ```

    #only("1")[#f]

    #only("2-")[
        #text(size: 0.7em, f)

        ```rust
        let a = 5;                  // 'a  'b  'b: 'a
        {                           //
            let mut v = Vec::new(); //  *
            v.push(&a);             //  *   *  OK
            let x = v[0];           //  *   *  OK
        }                           //  *      OK
        ```
    ]

    #notes(
        ```md
        Protože analýza celého programu by měla extrémní výpočetní nároky, provádí borrow checker pouze analýzu uvnitř funkce.

        Na hranicích funkce musí programátor popsat invarianty platnosti referencí a to pomocí lifetime anotací, na slidu apostrof `a` a apostrof `b`.

        Na příkladu zde máme vektor referencí, jejichž platnost v rámci programu je zdola omezena regionem apostrof `a`. Pokud chceme vložit do vektoru novou referenci s platností apostrof `b`, musíme říci, že oblast programu apostrof `b` je alespoň tak velká, jako apostrof `a`.

        Zde na konkrétním příkladu můžete vidět dosazené části programu.
        ```
    )
]

#slide[
    = CFG Computation

    #grid(columns: (3fr, 1fr))[
        ```rust
        fn f<'a>(map: Map<K, V>) -> &'a V {
            // Lookup key in map.
            // Return reference to value.
            match map.get_mut(&key) {
                Some(value) => value, // Found one.
                None => { // Not found.
                    // New reference to map!
map.insert(key, V::default());
                }
            }
        }
        ```
    ][
        #set text(size: 0.75em, font: "Roboto Mono")
        #only(1)[
            #fletcher.diagram(
            {
                let (start, match, s, n, end, ret) = ((0,0), (0,-1), (-0.5, -2), (0.5, -2), (0, -3), (0, -4))
                node(start, "Start")
                node(match, "Match")
                node(s, "Some")
                node(n, "None")
                node(end, "End")
                node(ret, "Return")
                edge(start, match, "->")
                edge(match, s, "->")
                edge(match, n, "->")
                edge(s, end, "->")
                edge(n, end, "->")
                edge(end, ret, "->")
            })]
        #only(2)[
            #fletcher.diagram(
            {
                let (start, match, s, n, end, ret) = ((0,0), (0,-1), (-0.5, -2), (0.5, -2), (0, -3), (0, -4))
                node(start, "Start")
                node(match, "Match")
                node(s, "Some")
                node(n, text(fill:red, "None"))
                node(end, "End")
                node(ret, "Return")
                edge(start, match, "->")
                edge(match, s, "->")
                edge(match, n, "-->")
                edge(s, end, "->")
                edge(n, end, "->")
                edge(end, ret, "->")
            })]
    ]

    #notes(
        ```md
        Nejtěžší částí analýzy je dosazení konkrétních částí programu za lifetime proměnné, tedy nalezení oblastí, kde musí být dané reference validní.

        Moderní borrow checker musí provádět výpočet na control flow grafu, jinak by velmi silně programátora omezoval.

        Povšimněte si zde na příkladu, že při vstupu do větve None není žádná reference do proměnné map platná, protože metoda získávající tuto referenci selhala. Moderní borrow checker musí vzít v potaz i takové situace.
```
    )
]

#slide[
    #only("1,4-")[
        = Implementation
    ]

    #only("1")[
        - Parsing, AST, HIR
        - Lifetime handling in the type checker
        - Variance analysis

        $A angle.l 'a, T angle.r lt.eq B angle.l 'b, F angle.r arrow.double ('a subset.eq 'b) and (T lt.eq F)$
    ]
    #only("4-")[#text(fill: luma(50%))[
        - Parsing, AST, HIR
        - Lifetime handling in the type checker
        - Variance analysis
        - BIR construction
    ]]

    #only("2")[
        #block(width: 100%, align(center, image("media/pipeline.svg", height: 80%)))
    ]

    #only("3")[
        #block(width: 100%, align(center, image("media/bir.svg", height: 80%)))
    ]

    #only("4")[
        - Fact collection
        - Polonius FFI
        - Error reporting
    ]
    #only("5-")[#text(fill: luma(50%))[
        - Fact collection
        - Polonius FFI
        - Error reporting
    ]]

    #only("5-")[
        - Changed #text(fill:green)[+10174] #h(10pt) #text(fill:red)[-1374]
            - _48%_ GCC upstream
            - _11%_ Rust GCC
            - _~~~9%_ PR in review
    ]

    #notes(
        ```md
        Nyní se podíváme na jednotlivé části, které jsem implementoval, abych základní variantu této analýzy integroval do překladače Rust GCC.

        V první řadě bylo třeba zajistit správné parsování lifetime anotací a jejich reprezentaci v abstraktním syntaktickém stromě a vysoko-úrovňové reprezentaci.

        V dalším kroku bylo nutné provést resoluci jmen jednotlivých anotací, přiřazení použití k definicím a reprezentace uvnitř typového systému. Jednou z významných komplikací bylo zajistit zachování správnosti během operací na typech, a to hlavně během substituce typových parametrů.

        U generických typů bylo dále nutné spočítat takzvanou varianci generických argumentů. Variance určuje vztah mezi relacemi typů a relacemi generických parametrů těchto typů. Příklad na slidu.

        Dalším krokem byl návrh zcela nové vnitřní reprezentace, nazvané Borrow-checker IR. Jak jste viděli během představení analýzy, výpočet probíhá na control flow grafu.
Na tomto srovnání vnitřních reprezentací Rust GCC a rustc můžete vidět, že zatímco abstraktní syntaktický strom a vysokoúrovňová reprezentace, která má také formu stromu, jsou oběma kompilátorům společné, Rust GCC předává middle-endu program ve formě stromu, zatímco rustc má vlastní reprezentaci MIR, založenou na control flow grafu. Právě na MIRu probíhá v rustc borrow checking.

        Control flow graf GCC není pro tyto účely dostatečný, protože neobsahuje informace specifické pro Rust. Proto bylo nutné vytvořit novou reprezentaci inspirovanou MIRem a přeložit do ní program z vysokoúrovňové reprezentace a reprezentace typů.

        Z této nové reprezentace jsou pak získány relevantní informace o programu, předány výpočetnímu systému Polonius, vyvinutému vývojáři rustc, k samotné analýze. Protože je Polonius implementovaný v Rustu, bylo nutné implementovat FFI vrstvu pro propojení s překladačem.

        Moje řešení zahrnuje zhruba deset tisíc řádků kódu v různých částech projektu. Téměř polovina již byla přijata do hlavního repozitáře GCC. U dalších 20% probíhá review pull requestu a zbytek je prozatím v mé vývojové větvi.
        ```
    )
]

#slide[
    = Results - Limitations \
    - Move errors
    - Subset errors
    - Access rule errors

    #notes(
        ```md
        Jak jste viděli, tak tato analýza vyžaduje úpravy ve velké části překladače. Tedy bylo nutné vybudovat rozsáhlou infrastrukturu, aby bylo možné vůbec **začít** se samotnou analýzou.

        Proto jsou možnosti implementované analýzy zatím omezené na poměrně **jednoduchý kód**. Nicméně na tomto kódu dokážeme detekovat velkou část porušení pravidel přístupu k paměti.

        Známé limitace jsou popsané detailněji v textu mé práce, a všechny jsou technického charakteru a mělo by být možné je vyřešit prostým rozšířením existujícího kódu. Hlavní limitací je překlad složitých jazykových konstruktů do nové reprezentace.
```
    )
]

#let error(body) = {
    v(1em)
    text(font: "DejaVu Sans Mono", size: .8em)[
        *#text(fill:red)[Error:]* #body
    ]
}

#slide[
    == Borrow Rules

    ```rust
    fn mutable_borrow_while_immutable_borrowed() {
        let x = 0;
        let y = &x;     // <---
        let z = &mut x; // <---
        let w = y;
    }
    ```

    #error([Found loan errors in function mutable_borrow_while_immutable_borrowed])

    #notes(
        ```md
        Na tomto příkladu vidíte porušení pravidel o současné existenci více referencí.
        ```
    )
]

#slide[
    == Struct & Method

    ```rust
    struct Reference<'a> {
        value: &'a i32,
    }

    impl<'a> Reference<'a> {
        fn new<'a>(value: &'a i32) -> Reference<'a> {
            Reference { value: value }
        }
    }
    ```

    #notes(
        ```md
        Nicméně reálný kód často obsahuje reference uvnitř složitějších struktur. Pro demonstraci použijeme tuto strukturu obsahující referenci a konstruktor.
        ```
    )
]

#slide[
    == Borrow Rules with Struct

    ```rust
    fn mutable_borrow_while_immutable_borrowed_struct() {
        let x = 0;
        let y = Reference::new(&x);
        let z = &mut x; //~ ERROR
        let w = y;
    }
    ```

    #error([Found loan errors in function mutable_borrow_while_immutable_borrowed_struct])

    #notes(
        ```md
        Zde můžete vidět, že analýza stále funguje i pokud je reference skrytá uvnitř struktury.
        ```
    )
]

#slide[
    == Subset Rules

    ```rust
    fn complex_cfg_subset<'a, 'b>(b: bool, x: &'a u32, y: &'b u32) -> &'a u32 {
        if b {
            y //~ ERROR
        } else {
            x
        }
    }
    ```

    #error([Found subset errors in function complex_cfg_subset])

    #notes(
        ```md
        Zde je demonstrována kontrola na hranici funkce. V první větvi podmínky je navrácena reference, jejíž životnost není jakkoliv provázána s návratovou hodnotou. Proto není možné prokázat, že vrácená reference vždy ukazuje na validní objekt.
        ```
    )
]

#slide[
    = Future

    - Open Source Security support
    - GSoC 2024

    #notes(
        ```md
        Co se budoucnosti této práce týče, pokud to bude v rámci mé další kariéry možné, chtěl bych na projektu pokračovat. Můžu zmínit, že společnost Open Source Security, jeden z hlavních sponzorů Rust GCC, projevila zájem o financování pokračování mé práce.
Dále také připravujeme projekt do Google Summer of Code, který by řešil některé z limitací. ``` ) ] #title-slide[ #image("media/gccrs.png", height: 35%) #text(size: 2em)[Thank You] \ #text(size:1.5em)[for your attention] ]
https://github.com/Myriad-Dreamin/typst.ts
https://raw.githubusercontent.com/Myriad-Dreamin/typst.ts/main/fuzzers/corpora/math/attach-p1_03.typ
typst
Apache License 2.0
#import "/contrib/templates/std-tests/preset.typ": * #show: test-page // Test function call after subscript. $pi_1(Y), a_f(x), a^zeta (x), a^abs(b)_sqrt(c) \ a^subset.eq (x), a_(zeta(x)), pi_(1(Y)), a^(abs(b))_(sqrt(c))$
https://github.com/typst/packages
https://raw.githubusercontent.com/typst/packages/main/packages/preview/cetz/0.1.2/src/bezier.typ
typst
Apache License 2.0
// This file contains functions related to bezier curve calculation #import "vector.typ" // Map number v from range (ds, de) to (ts, te) #let _map(v, ds, de, ts, te) = { let d1 = de - ds let d2 = te - ts let v2 = v - ds let r = v2 / d1 return ts + d2 * r } /// Get point on quadratic bezier at position t /// /// - a (vector): Start point /// - b (vector): End point /// - c (vector): Control point /// - t (float): Position on curve [0, 1] /// -> vector #let quadratic-point(a, b, c, t) = { // (1-t)^2 * a + 2 * (1-t) * t * c + t^2 b return vector.add( vector.add( vector.scale(a, calc.pow(1-t, 2)), vector.scale(c, 2 * (1-t) * t) ), vector.scale(b, calc.pow(t, 2)) ) } /// Get dx/dt of quadratic bezier at position t /// /// - a (vector): Start point /// - b (vector): End point /// - c (vector): Control point /// - t (float): Position on curve [0, 1] /// -> vector #let quadratic-derivative(a, b, c, t) = { // 2(-a(1-t) + bt - 2ct + c) return vector.scale( vector.add( vector.sub( vector.add( vector.scale(vector.neg(a), (1 - t)), vector.scale(b, t)), vector.scale(c, 2 * t)), c) , 2) } /// Get point on cubic bezier curve at position t /// /// - a (vector): Start point /// - b (vector): End point /// - c1 (vector): Control point 1 /// - c2 (vector): Control point 2 /// - t (float): Position on curve [0, 1] /// -> vector #let cubic-point(a, b, c1, c2, t) = { // (1-t)^3*a + 3*(1-t)^2*t*c1 + 3*(1-t)*t^2*c2 + t^3*b vector.add( vector.add( vector.scale(a, calc.pow(1-t, 3)), vector.scale(c1, 3 * calc.pow(1-t, 2) * t) ), vector.add( vector.scale(c2, 3*(1-t)*calc.pow(t,2)), vector.scale(b, calc.pow(t, 3)) ) ) } /// Get dx/dt of cubic bezier at position t /// /// - a (vector): Start point /// - b (vector): End point /// - c1 (vector): Control point 1 /// - c2 (vector): Control point 2 /// - t (float): Position on curve [0, 1] /// -> vector #let cubic-derivative(a, b, c1, c2, t) = { // -3(a(1-t)^2 + t(-2c2 - bt + 3 c2 t) + c1(-1 + 4t - 3t^2)) vector.scale( vector.add( vector.add( 
vector.scale(a, calc.pow((1 - t), 2)),
        vector.scale(
          vector.sub(
            vector.add(
              vector.scale(b, -1 * t),
              vector.scale(c2, 3 * t)
            ),
            vector.scale(c2, 2)
          ),
          t
        )
      ),
      vector.scale(c1, -3 * calc.pow(t, 2) + 4 * t - 1)
    ),
    -3
  )
}

/// Get bezier curves ABC coordinates
///
///        /A\        <-- Control point of quadratic bezier
///       / | \
///      /  |  \
///     /_.-B-._\     <-- Point on curve
///   ,'    |    ',
///  /      |      \
/// s------C------e   <-- Point on line between s and e
///
/// - s (vector): Curve start
/// - e (vector): Curve end
/// - B (vector): Point on curve
/// - t (float): Position on curve [0, 1]
/// - deg (int): Bezier degree (2 or 3)
/// -> (tuple) Tuple of A, B and C vectors
#let to-abc(s, e, B, t, deg: 2) = {
  let tt = calc.pow(t, deg)
  let u(t) = {
    (calc.pow(1 - t, deg) / (tt + calc.pow(1 - t, deg)))
  }
  let ratio(t) = {
    calc.abs((tt + calc.pow(1 - t, deg) - 1) / (tt + calc.pow(1 - t, deg)))
  }

  let C = vector.add(vector.scale(s, u(t)), vector.scale(e, 1 - u(t)))
  let A = vector.sub(B, vector.scale(vector.sub(C, B), 1 / ratio(t)))

  return (A, B, C)
}

/// Compute control points for quadratic bezier through 3 points
///
/// - s (vector): Curve start
/// - e (vector): Curve end
/// - B (vector): Point on curve
///
/// -> (s, e, c) Quadratic bezier curve points
#let quadratic-through-3points(s, B, e) = {
  let d1 = vector.dist(s, B)
  let d2 = vector.dist(e, B)
  let t = d1 / (d1 + d2)

  let (A, B, C) = to-abc(s, e, B, t, deg: 2)

  return (s, e, A)
}

/// Compute control points for cubic bezier through 3 points
///
/// - s (vector): Curve start
/// - e (vector): Curve end
/// - B (vector): Point on curve
///
/// -> (s, e, c1, c2) Cubic bezier curve points
#let cubic-through-3points(s, B, e) = {
  let d1 = vector.dist(s, B)
  let d2 = vector.dist(e, B)
  let t = d1 / (d1 + d2)

  let (A, B, C) = to-abc(s, e, B, t, deg: 3)

  let d = vector.sub(B, C)
  if vector.len(d) == 0 {
    return (s, e, s, e)
  }
  d = vector.norm(d)
  d = (-d.at(1), d.at(0))

  d = vector.scale(d, vector.dist(s, e) / 3)

  let c1 = vector.add(A, vector.scale(d, t))
  let c2 = 
vector.sub(A, vector.scale(d, (1 - t))) let is-right = ((e.at(0) - s.at(0))*(B.at(1) - s.at(1)) - (e.at(1) - s.at(1))*(B.at(0) - s.at(0))) < 0 if is-right { (c1, c2) = (c2, c1) } return (s, e, c1, c2) } /// Convert quadratic bezier to cubic /// /// - s (vector): Curve start /// - e (vector): Curve end /// - c (vector): Control point /// /// -> (s, e, c1, c2) #let quadratic-to-cubic(s, e, c) = { let c1 = vector.add(s, vector.scale(vector.sub(c, s), 2/3)) let c2 = vector.add(e, vector.scale(vector.sub(c, e), 2/3)) return (s, e, c1, c2) } /// Split a cubic bezier into two cubic beziers at t /// /// - s (vector): Curve start /// - e (vector): Curve end /// - c1 (vector): Control point 1 /// - c2 (vector): Control point 2 /// - t (float): t 0..1 /// -> ((s, e, c1, c2), (s, e, c1, c2)) #let split(s, e, c1, c2, t) = { t = calc.max(0, calc.min(t, 1)) let split-rec(pts, t, left, right) = { if pts.len() == 1 { left.push(pts.at(0)) right.push(pts.at(0)) } else { let new-pts = () for i in range(0, pts.len() - 1) { if i == 0 { left.push(pts.at(i)) } if i == pts.len() - 2 { right.push(pts.at(i + 1)) } new-pts.push(vector.add(vector.scale(pts.at(i), (1 - t)), vector.scale(pts.at(i + 1), t))) } (left, right) = split-rec(new-pts, t, left, right) } return (left, right) } let (left, right) = split-rec((s, c1, c2, e), t, (), ()) return ((left.at(0), left.at(3), left.at(1), left.at(2)), (right.at(0), right.at(3), right.at(1), right.at(2))) } /// Shorten curve by length d. A negative length shortens from the end. 
///
/// - s (vector): Curve start
/// - e (vector): Curve end
/// - c1 (vector): Control point 1
/// - c2 (vector): Control point 2
/// - d (float): Distance to shorten by
/// -> (s, e, c1, c2) Shortened curve
#let shorten(s, e, c1, c2, d) = {
  if d == 0 {
    return (s, e, c1, c2)
  }

  let num-samples = 6
  let split-t = 0
  if d > 0 {
    let travel = 0
    let last = cubic-point(s, e, c1, c2, 0)
    for t in range(0, num-samples + 1) {
      let t = t / num-samples
      let curr = cubic-point(s, e, c1, c2, t)
      let dist = calc.abs(vector.dist(last, curr))
      travel += dist
      if travel >= d {
        split-t = t - (travel - d) / num-samples
        break
      }
      last = curr
    }
  } else {
    let travel = 0
    let last = cubic-point(s, e, c1, c2, 1)
    for t in range(num-samples, -1, step: -1) {
      let t = t / num-samples
      let curr = cubic-point(s, e, c1, c2, t)
      let dist = calc.abs(vector.dist(last, curr))
      travel -= dist
      if travel <= d {
        split-t = t - (travel - d) / num-samples
        break
      }
      last = curr
    }
  }

  let (left, right) = split(s, e, c1, c2, split-t)
  return if d > 0 { right } else { left }
}

/// Align curve points pts to the line start-end
#let align(pts, start, end) = {
  let (x, y, _) = start
  let a = -calc.atan2(end.at(1) - y, end.at(0) - x)

  // Rotate each point by -a around the line start so that
  // the line start-end coincides with the x-axis.
  return pts.map(pt => {
    ((pt.at(0) - x) * calc.cos(-a) - (pt.at(1) - y) * calc.sin(-a),
     (pt.at(0) - x) * calc.sin(-a) + (pt.at(1) - y) * calc.cos(-a),
     pt.at(2))
  })
}

/// Find cubic curve extrema by calculating
/// the roots of the curves first derivative.
///
/// -> (array of vector) List of extrema points
#let cubic-extrema(s, e, c1, c2) = {
  // Compute roots of a single dimension (x, y, z) of the
  // curve by using the abc formula for finding roots of
  // the curves first derivative.
let dim-extrema(a, b, c1, c2) = { let f0 = 3*(c1 - a) let f1 = 6*(c2 - 2*c1 + a) let f2 = 3*(b - 3*c2 + 3*c1 - a) if f1 == 0 and f2 == 0 { return () } // Linear function if f2 == 0 { return (-f0 / f1,) } // No real roots let discriminant = f1*f1 - 4*f0*f2 if discriminant < 0 { return () } if discriminant == 0 { return (-f1 / (2*f2),) } return ((-f1 - calc.sqrt(discriminant)) / (2*f2), (-f1 + calc.sqrt(discriminant)) / (2*f2)) } let pts = () let dims = calc.max(s.len(), e.len()) for dim in range(dims) { let ts = dim-extrema(s.at(dim, default: 0), e.at(dim, default: 0), c1.at(dim, default: 0), c2.at(dim, default: 0)) for t in ts { // Discard any root outside the bezier range if t >= 0 and t <= 1 { pts.push(cubic-point(s, e, c1, c2, t)) } } } return pts } /// Return aabb coordinates for cubic bezier curve /// /// -> (bottom-left, top-right) #let cubic-aabb(s, e, c1, c2) = { let (lo, hi) = (s, e) for dim in range(lo.len()) { if lo.at(dim) > hi.at(dim) { (lo.at(dim), hi.at(dim)) = (hi.at(dim), lo.at(dim)) } } for pt in cubic-extrema(s, e, c1, c2) { for dim in range(pt.len()) { lo.at(dim) = calc.min(lo.at(dim), hi.at(dim), pt.at(dim)) hi.at(dim) = calc.max(lo.at(dim), hi.at(dim), pt.at(dim)) } } return (lo, hi) } /// Returns a cubic bezier between p2 and p3 for a catmull-rom curve /// through all four points. 
///
/// - p1 (vector): Point 1
/// - p2 (vector): Point 2
/// - p3 (vector): Point 3
/// - p4 (vector): Point 4
/// - k (float): Tension between 0 and 1
/// -> (a, b, c1, c2)
#let _catmull-section-to-cubic(p1, p2, p3, p4, k) = {
  return (p2, p3,
          vector.add(p2, vector.scale(vector.sub(p3, p1), 1/(k * 6))),
          vector.sub(p3, vector.scale(vector.sub(p4, p2), 1/(k * 6))))
}

/// Returns a list of cubic beziers for a catmull curve through points
///
/// - points (array): Array of 2d points
/// - k (float): Strength between 0 and oo
#let catmull-to-cubic(points, k, close: false) = {
  k = calc.max(k, 0.1)
  k = if k < .5 {
    1 / _map(k, .5, 0, 1, 10)
  } else {
    _map(k, .5, 1, 1, 10)
  }

  let len = points.len()
  if len == 2 {
    return ((points.at(0), points.at(1),
             points.at(0), points.at(1)),)
  } else if len > 2 {
    let curves = ()

    let (i0, iN) = if close { (-1, 0) } else { (0, -1) }

    curves.push(_catmull-section-to-cubic(
      points.at(i0), points.at(0), points.at(1), points.at(2), k))
    for i in range(1, len - 2, step: 1) {
      curves.push(_catmull-section-to-cubic(
        ..range(i - 1, i + 3).map(i => points.at(i)), k))
    }
    curves.push(_catmull-section-to-cubic(
      points.at(-3), points.at(-2), points.at(-1), points.at(iN), k))

    if close {
      curves.push(_catmull-section-to-cubic(
        points.at(-2), points.at(-1), points.at(0), points.at(1), k))
    }

    return curves
  }
  return ()
}
https://github.com/NaviHX/bupt-master-dissertation.typ
https://raw.githubusercontent.com/NaviHX/bupt-master-dissertation.typ/main/bupt-master-dissertation.typ
typst
#import "@preview/tablex:0.0.7": gridx, tablex, hlinex, colspanx #import "@preview/i-figured:0.2.3" #import "chinese-cover.typ": chinese-cover #import "english-cover.typ": english-cover #let defense-committee( roman: "Times New Roman", songti: "Noto Serif CJK SC", heiti: "Noto Sans CJK SC", members, defend-date, ) = { set text(font: (roman, songti), size: 12pt) set page(paper: "a4", margin: (x: 3.17cm, y: 2.54cm)) show heading: it => align(center, text(font: heiti, size: 16pt, weight: "bold", it)) heading(outlined: false, numbering: none)[答辩委员会] let name-list = members.flatten() align( center, tablex( columns: (15%, 15%, 15%, 50%), header-rows: 1, align: center + horizon, [职务], [姓#h(1em)名], [职#h(1em)称], [工#h(1em)作#h(1em)单#h(1em)位], ..name-list, [答辩日期], colspanx(3)[#defend-date], ), ) } #let declaration( roman: "Times New Roman", songti: "Noto Serif CJK SC", heiti: "Noto Sans CJK SC", ) = [ #set text(font: (roman, songti), size: 12pt) #set par(leading: 1em, justify: true, first-line-indent: 2em) #show par: set block(below: 1em) #set page(paper: "a4", margin: (x: 3.17cm, y: 2.54cm)) #set text(size: 12pt) #set page(header: none) #align(center, text(weight: "bold", size: 16pt, font: heiti)[独创性声明]) 本人声明所呈交的论文是本人在导师指导下进行的研究工作及取得的研究成果。尽我所知,除了文中特别加以标注和致谢中所罗列的内容以外,论文中不包含其他人已经发表或撰写过的研究成果,也不包含为获得北京邮电大学或其他教育机构的学位或证书而使用过的材料。与我一同工作的同志对本研究所做的任何贡献均已在论文中作了明确的说明并表示了谢意。 申请学位论文与资料若有不实之处,本人承担一切相关责任。 本人签名:#box(line(length: 9em)) #h(2em) 日期:#box(line(length: 9em)) #v(3em) #align(center, text(weight: "bold", size: 16pt, font: heiti)[关于论文授权使用的说明]) 本人完全了解并同意北京邮电大学有关保留、使用学位论文的规定,即:北京邮电大学拥有以下关于学位论文的无偿使用权,具体包括:学校有权保留并向国家有关部门或机构送交学位论文,有权允许学位论文被查阅和借阅;学校可以公布学位论文的全部或部分内容,有权允许采用影印、缩印或其他复制手段保存、汇编学位论文,将学位论文的全部或部分内容编入有关数据库进行检索。(保密的学位论文在解密后遵守此规定) 本人签名:#box(line(length: 9em)) #h(2em) 日期:#box(line(length: 9em)) 导师签名:#box(line(length: 9em)) #h(2em) 日期:#box(line(length: 9em)) ] #let bupt-chinese-abstract(heiti, abstract, ..keywords) = [ #set text(size: 14pt) #set heading(numbering: none) #show heading: set 
text(15pt)

  = 摘要
  #v(1em)
  #abstract

  #text(font: heiti, weight: "bold")[关键词:] #keywords.pos().join([;])
]

#let bupt-english-abstract(abstract, ..keywords) = [
  #set text(size: 14pt)
  #set heading(numbering: none)
  #show heading: set text(15pt)

  = ABSTRACT
  #v(2em)
  #abstract

  #text(weight: "bold")[KEY WORDS:] #keywords.pos().join("; ")
]

#let bupt-table-figure(chinese-caption, english-caption, t) = {
  figure(
    caption: chinese-caption,
    supplement: [表],
    kind: "表",
    figure(caption: english-caption, supplement: [Table], t),
  )
}

#let bupt-image-figure(chinese-caption, english-caption, img) = {
  figure(
    caption: english-caption,
    supplement: [Figure],
    figure(caption: chinese-caption, supplement: [图], kind: "图", img),
  )
}

#let bupt-template(
  roman: "Times New Roman",
  songti: "Noto Serif CJK SC",
  heiti: "Noto Sans CJK SC",
  even-page-header: [北京邮电大学硕士学位论文],
  chinese-abstract,
  english-abstract,
  symbols,
  bib,
  appendix,
  acknowledgment,
  achievement,
  content,
) = {
  // Preamble
  set text(font: (roman, songti), size: 12pt)
  set par(leading: 1em, justify: true, first-line-indent: 2em)
  show par: set block(below: 1em)
  set math.equation(supplement: "公式", numbering: (..numbers) => {
    locate(loc => {
      let chapter = counter(heading.where(level: 1)).at(loc).at(0)
      numbering("(1-1)", chapter, ..numbers)
    })
  })
  set cite(style: "gb-7714-2015-numeric")

  set page(paper: "a4", margin: (x: 3.17cm, y: 2.54cm))
  set page(
    header-ascent: 41.6%,
    header: locate(
      loc => {
        let pn = loc.page()
        set text(font: (roman, songti), size: 9pt)
        if calc.even(pn) {
          align(center)[
            #even-page-header
            #line(length: 100%)
          ]
        } else {
          let chapter-before = query(selector(heading.where(level: 1)).before(loc), loc)
          let chapter-after = query(selector(heading.where(level: 1)).after(loc), loc)
          let chapter = if chapter-after.len() > 0 and chapter-after.first().location().page() == pn {
            chapter-after.first().body
          } else if chapter-before.len() > 0 {
            chapter-before.last().body
          } else {
            even-page-header
          }
          align(center)[
            #chapter
            #line(length: 100%)
          ]
        }
}, ), ) show heading.where(level: 1): it => { align(center, text(size: 16pt, it)) } show heading.where(level: 2): it => { text(size: 14pt, it) } show heading: it => { set text(font: (roman, heiti)) text(size: 12pt, it) par()[#text(size: 5pt)[#h(0em)]] } show heading: i-figured.reset-counters set heading(numbering: (..numbers) => { if numbers.pos().len() == 1 { numbering("第一章", ..numbers.pos().slice(0, 1)) } else { numbers.pos().map(str).join(".") } }) set figure.caption(separator: " ") show figure.caption: set text(size: 10.5pt) show figure: i-figured.show-figure.with(numbering: "1-1") show figure.where(kind: table): set figure.caption(position: top) show figure.where(kind: "表"): set figure.caption(position: top) show figure.where(kind: image): set figure.caption(position: bottom) show figure.where(kind: "图"): set figure.caption(position: bottom) set page(numbering: "I") counter(page).update(1) show heading.where(level: 1): it => { pagebreak() it } // Abstract if chinese-abstract != none { bupt-chinese-abstract(heiti, ..chinese-abstract) } if english-abstract != none { bupt-english-abstract(..english-abstract) } // ToC heading(numbering: none, outlined: false)[目录] { set par(first-line-indent: 0em) set outline(title: none) outline(title: none, indent: 2em) } // Symbols heading(numbering: none, outlined: false)[符号说明] let symbols = symbols.flatten() let symbol-table-content = (hlinex(), [符号], [代表意义], [单位], ..symbols, hlinex()) align( center, gridx( columns: (30%, 30%, 30%), align: center + horizon, header-rows: 1, ..symbol-table-content, ), ) // Content set page(numbering: "1") counter(page).update(1) content set heading(numbering: none) // References if bib != none { set text(size: 10.5pt) bibliography(bib, title: [参考文献], full: true, style: "gb-7714-2015-numeric") } // Appendix if appendix != none [ = 附 #h(1em) 录 #appendix ] // Achknowledgment if acknowledgment != none [ = 致 #h(1em) 谢 #acknowledgment ] // Achievement if achievement != none [ #show par: set 
block(below: 2em) = 攻读学位期间取得的“创新成果”目录 #achievement ] }
https://github.com/yasemitee/Teoria-Informazione-Trasmissione
https://raw.githubusercontent.com/yasemitee/Teoria-Informazione-Trasmissione/main/2023-11-17.typ
typst
== The BEC - Binary Erasure Channel

This is a binary channel with erasure, with $Chi = {0,1}$ and $Y = {0,1,e}$, where $e$ denotes the loss of a symbol, that is, its erasure. The BEC is defined as follows:

#v(12pt)
#figure(
  image("assets/2023-11-17 canale-eraser.png", width: 15%)
)
#v(12pt)

As in the previous cases, let us derive the _channel capacity_, that is $C = max_p(x) H(YY) - H(YY | XX)$.

1) $H(YY | XX) = P(XX=0) dot H(YY | XX = 0) + P(XX=1) dot H(YY | XX = 1)$

Note that $P(XX=0) + P(XX=1) = 1$, and by symmetry $H(YY | XX = 0) = H(YY | XX = 1) = H(alpha)$, hence $H(YY | XX) = H(alpha)$ and

$ C = max_p(x) H(YY) - H(alpha) $

2) To find $H(YY)$ we introduce the Bernoulli variable
$ ZZ = cases(
  1 " if" YY = e,
  0 " otherwise"
) $
Since $ZZ$ is a deterministic function of $YY$, we have
$ H(YY,ZZ) = H(YY) + underbrace(H(ZZ | YY), 0) = H(YY) $
while expanding the joint entropy in the other order gives
$ H(YY,ZZ) = H(ZZ) + H(YY | ZZ) $
To compute $H(ZZ)$ it suffices to observe that
$ P(ZZ = 1) = P(XX=0) dot underbracket(P(ZZ = 1 | XX = 0), alpha) + P(XX=1) dot underbracket(P(ZZ = 1 | XX = 1), alpha) = alpha $
$ P(ZZ = 0) = 1 - alpha $
so $H(ZZ) = H(alpha)$. Moreover, given $ZZ = 0$ the output equals the input, while given $ZZ = 1$ the output is surely $e$:
$ H(YY | ZZ) = underbracket(P(ZZ = 0), 1 - alpha) dot underbracket(H(YY | ZZ = 0), H(XX)) + underbracket(P(ZZ = 1), alpha) dot underbracket(H(YY | ZZ = 1), 0) = (1 - alpha) dot H(XX) $
Putting the two pieces together:
$ H(YY) = H(ZZ) + H(YY | ZZ) = H(alpha) + (1-alpha) H(XX) $
We can now conclude:
$ C = max_p(x) (H(alpha) + (1-alpha) dot H(XX)) - H(alpha) = (1-alpha) max_p(x) H(XX) = 1 - alpha $
where the maximum $H(XX) = 1$ is attained by the uniform input distribution.
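Not part of the original notes: a small numerical sanity check of the derivation above. It computes the mutual information of a BEC directly from the output distribution (the helper names are made up for this sketch) and confirms that, with uniform input, it equals $1 - alpha$.

```python
import math

def h(p):
    """Binary entropy H(p) in bits."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def bec_mutual_information(alpha, p1):
    """I(X;Y) for a BEC with erasure probability alpha and P(X=1) = p1.

    Computes H(Y) directly from the output distribution and subtracts
    H(Y|X) = H(alpha), mirroring the decomposition in the notes.
    """
    # Output distribution: P(Y=0) = (1-alpha)(1-p1), P(Y=1) = (1-alpha)p1, P(Y=e) = alpha
    probs = [(1 - alpha) * (1 - p1), (1 - alpha) * p1, alpha]
    h_y = -sum(p * math.log2(p) for p in probs if p > 0)
    return h_y - h(alpha)

alpha = 0.3
# Capacity is achieved by the uniform input P(X=1) = 1/2
print(bec_mutual_information(alpha, 0.5))  # 0.7, i.e. 1 - alpha
```

For any non-uniform input the mutual information drops below $1 - alpha$, consistent with $C = (1-alpha) max H(XX)$.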
https://github.com/Vanille-N/mpri2-edt
https://raw.githubusercontent.com/Vanille-N/mpri2-edt/master/edt.typ
typst
#import "typtyp.typ"
#let tt = typtyp
#import "mpri.typ"
#import "time.typ"
#import "classes.typ"

#let blank_color = rgb("f8f8f8")
#let table_ratio = 94%
#let title_ratio = 5%
#let small_gutter = 1.5pt
#let large_gutter = 3pt

#let cell = rect.with(
  inset: 10pt,
  outset: 0pt,
  width: 100%,
  height: 100%,
  radius: 6pt
)

#let show-hour-lines(bounds, notable) = {
  for hour in notable {
    let vpos = time.absolute(bounds, hour)
    // This is a hack: we need to rescale to the size of the table without titles
    let real_vpos = (vpos * (100% - title_ratio) + title_ratio) * table_ratio
    place(
      top + left,
      dx: 0%,
      dy: real_vpos,
      line(
        length: 100%,
        stroke: (
          paint: gray.lighten(70%),
          thickness: 0.1pt,
        ),
      ),
    )
    place(
      top + left,
      dx: 0%,
      dy: real_vpos,
      text(fill: gray.darken(40%), hour.label),
    )
  }
}

#let show-classes(chosen, all, day-bounds) = {
  tt.is(tt.array(classes.Class), chosen)
  tt.is(tt.array(classes.TimeClass), all)
  tt.is(time.Bounds, day-bounds)
  let class_cell(class, dy: 0pt, dx: 0pt, height: none, width: none) = {
    place(
      top + left,
      dx: dx,
      dy: dy,
      cell.with(height: height, width: width, fill: class.descr.color)()[
        #align(center, (class.descr.fmt)(class.descr.name, class.descr.uid, class.descr.teacher, class.room))
      ]
    )
  }
  for (idx, class) in all.enumerate() {
    if class.descr in chosen {
      let (width, dx) = if class.sem.len() == 1 {
        let available_width_per_sem = (100% - small_gutter) / 2
        (available_width_per_sem, (available_width_per_sem + small_gutter) * (class.sem.at(0) - 1))
      } else {
        (100%, 0%)
      }
      let dy = time.absolute(day-bounds, class.start)
      let height = time.proportional(day-bounds, class.len)
      class_cell(class, dy: dy, dx: dx, height: height, width: width)
    }
  }
}

#let day(name, chosen, d, day-bounds) = {
  tt.is(tt.str, name)
  tt.is(tt.array(classes.Class), chosen)
  tt.is(classes.Day, d)
  tt.is(time.Bounds, day-bounds)
  grid(
    columns: (100%,),
    rows: (title_ratio, 100% - title_ratio),
    align(center, text(size: 18pt, weight: "bold")[#name]),
    show-classes(chosen, d, day-bounds),
  )
}

#let conf(
  week,
  chosen,
) = {
  tt.is(classes.Week, week)
  tt.is(tt.array(classes.Class), chosen)
  set page(
    paper: "presentation-16-9",
    margin: 0.5cm,
  )
  let day-bounds = time.empty
  let notable-hours = ()
  let ects = 0
  for day in (
    week.mon,
    week.tue,
    week.wed,
    week.thu,
    week.fri
  ) {
    for class in day {
      if class.descr in chosen {
        ects += class.ects
        let start = class.start
        let end = time.offset(class.start, class.len)
        day-bounds = time.extend(day-bounds, start)
        day-bounds = time.extend(day-bounds, end)
        if start not in notable-hours {
          notable-hours.push(start)
        }
        if end not in notable-hours {
          notable-hours.push(end)
        }
      }
    }
  }
  show-hour-lines(day-bounds, notable-hours)
  grid(
    columns: (4%, 1fr, 1fr, 1fr, 1fr, 1fr, 0.1%),
    gutter: (large_gutter,),
    row-gutter: (large_gutter,),
    rows: (table_ratio,),
    [],
    day("Monday", chosen, week.mon, day-bounds),
    day("Tuesday", chosen, week.tue, day-bounds),
    day("Wednesday", chosen, week.wed, day-bounds),
    day("Thursday", chosen, week.thu, day-bounds),
    day("Friday", chosen, week.fri, day-bounds),
    [],
  )
  text(size: 12pt)[Totalling * #ects * ECTS]
}
https://github.com/tingerrr/hydra
https://raw.githubusercontent.com/tingerrr/hydra/main/CHANGELOG.md
markdown
MIT License
# [unreleased](https://github.com/tingerrr/hydra/releases/tags/)

## Added

## Removed

## Changed

## Fixed

---

# [v0.5.1](https://github.com/tingerrr/hydra/releases/tags/v0.5.1)

## Fixed
- hydra no longer considers candidates on pages after the current page (https://github.com/tingerrr/hydra/pull/21)

---

# [v0.5.0](https://github.com/tingerrr/hydra/releases/tags/v0.5.0)

## Added
- `use-last` parameter on `hydra` for a more LaTeX-style heading look-up, thanks @freundTech!
  - `hydra` now has a new `use-last` parameter
  - `context` now has a new `use-last` field
- **BREAKING CHANGE** `candidates` now has a new `last` field containing a suitable match for the last primary candidate on this page

---

# [v0.4.0](https://github.com/tingerrr/hydra/releases/tags/v0.4.0)

Almost all changes in this release are **BREAKING CHANGES**.

## Added
- internal util functions and dictionaries for recreating `auto` fallbacks used within the typst compiler
  - `core.get-text-dir` - returns the text direction
  - `core.get-binding` - returns the page binding
  - `core.get-top-margin` - returns the absolute top margin
  - `util.text-direction` - returns the text direction for a given language
  - `util.page-binding` - returns the page binding for a given text direction

## Removed
- various parameters on `hydra` have been removed
  - `paper` has been removed in favor of a get rule
  - `page-size` has been removed in favor of a get rule
  - `top-margin` has been removed in favor of a get rule
  - `loc` has been removed in favor of user-provided context
- internal util dictionary for page sizes

## Changed
- hydra now requires a minimum Typst compiler version of `0.11.0`
- `hydra` is now contextual
- most internal functions are now contextual
- the internal context dictionary now holds an `anchor-loc` instead of a `loc`
- `get-anchor-pos` has been renamed to `locate-last-anchor`
- the internal `page-sizes` dictionary was changed to a function
- various parameters on `hydra` are now `auto` by default
  - `prev-filter`
  - `next-filter`
  - `display`
  - `dir`
  - `binding`

---

# [v0.3.0](https://github.com/tingerrr/hydra/releases/tags/v0.3.0)

# [v0.2.0](https://github.com/tingerrr/hydra/releases/tags/v0.2.0)

# [v0.1.0](https://github.com/tingerrr/hydra/releases/tags/v0.1.0)
https://github.com/SeniorMars/typst.nvim
https://raw.githubusercontent.com/SeniorMars/typst.nvim/main/README.md
markdown
Apache License 2.0
# typst.nvim

WIP. Goals: Tree-sitter highlighting, snippets, and a smooth integration with neovim.

For the past week, I've been thinking about what I want for Neovim and typst. I now believe that typst will be big, and I want to fully support everything the best way I can. So here is what I want:

- [ ] Treesitter support:
  - [ ] Conceal!
  - [ ] Code blocks
  - [ ] Highlighting
  - [ ] Folding
  - The issue is that Typst is a big language, so I needed a break from this project for a bit.
- [ ] Rendering math typst blocks
  - [ ] To get this to work properly, I need to contribute to neovim's anticonceal feature.
  - [ ] Plus get image rendering to work as expected
- [ ] Snippets. Basically, port all the snippets I use from latex to typst.
- [ ] External PDF viewers. This is not so hard, but I need to make a PR for this to work.

It's going to be a lot of work, but I'll get it done.

---

Currently working on the tree-sitter parser: https://github.com/SeniorMars/tree-sitter-typst

Language spec(?): https://github.com/typst/typst/blob/main/tools/support/typst.tmLanguage.json

This plugin will take much inspiration from VimTeX and silicon.nvim :)

Note: since the language I'm most familiar with is rust, this project will probably be written in Rust.
https://github.com/typst/packages
https://raw.githubusercontent.com/typst/packages/main/packages/preview/octique/0.1.0/README.md
markdown
Apache License 2.0
# typst-octique GitHub [Octicons](https://primer.style/foundations/icons/) for Typst. ## Installation ```typst #import "@preview/octique:0.1.0": * ``` ## Usage ```typst // Returns an image for the given name. octique(name, color: rgb("#000000"), width: 1em, height: 1em) // Returns a boxed image for the given name. octique-inline(name, color: rgb("#000000"), width: 1em, height: 1em, baseline: 25%) // Returns an SVG text for the given name. octique-svg(name) ``` ## List of Available Icons See also [`sample/sample.pdf`](sample/sample.pdf). | Code | Icon | | ---- | :--: | |`#octique("accessibility-inset")`| ![accessibility-inset](https://github.com/0x6b/typst-octique/wiki/assets/accessibility-inset.svg) | |`#octique("accessibility")`| ![accessibility](https://github.com/0x6b/typst-octique/wiki/assets/accessibility.svg) | |`#octique("alert-fill")`| ![alert-fill](https://github.com/0x6b/typst-octique/wiki/assets/alert-fill.svg) | |`#octique("alert")`| ![alert](https://github.com/0x6b/typst-octique/wiki/assets/alert.svg) | |`#octique("apps")`| ![apps](https://github.com/0x6b/typst-octique/wiki/assets/apps.svg) | |`#octique("archive")`| ![archive](https://github.com/0x6b/typst-octique/wiki/assets/archive.svg) | |`#octique("arrow-both")`| ![arrow-both](https://github.com/0x6b/typst-octique/wiki/assets/arrow-both.svg) | |`#octique("arrow-down-left")`| ![arrow-down-left](https://github.com/0x6b/typst-octique/wiki/assets/arrow-down-left.svg) | |`#octique("arrow-down-right")`| ![arrow-down-right](https://github.com/0x6b/typst-octique/wiki/assets/arrow-down-right.svg) | |`#octique("arrow-down")`| ![arrow-down](https://github.com/0x6b/typst-octique/wiki/assets/arrow-down.svg) | |`#octique("arrow-left")`| ![arrow-left](https://github.com/0x6b/typst-octique/wiki/assets/arrow-left.svg) | |`#octique("arrow-right")`| ![arrow-right](https://github.com/0x6b/typst-octique/wiki/assets/arrow-right.svg) | |`#octique("arrow-switch")`| 
![arrow-switch](https://github.com/0x6b/typst-octique/wiki/assets/arrow-switch.svg) | |`#octique("arrow-up-left")`| ![arrow-up-left](https://github.com/0x6b/typst-octique/wiki/assets/arrow-up-left.svg) | |`#octique("arrow-up-right")`| ![arrow-up-right](https://github.com/0x6b/typst-octique/wiki/assets/arrow-up-right.svg) | |`#octique("arrow-up")`| ![arrow-up](https://github.com/0x6b/typst-octique/wiki/assets/arrow-up.svg) | |`#octique("beaker")`| ![beaker](https://github.com/0x6b/typst-octique/wiki/assets/beaker.svg) | |`#octique("bell-fill")`| ![bell-fill](https://github.com/0x6b/typst-octique/wiki/assets/bell-fill.svg) | |`#octique("bell-slash")`| ![bell-slash](https://github.com/0x6b/typst-octique/wiki/assets/bell-slash.svg) | |`#octique("bell")`| ![bell](https://github.com/0x6b/typst-octique/wiki/assets/bell.svg) | |`#octique("blocked")`| ![blocked](https://github.com/0x6b/typst-octique/wiki/assets/blocked.svg) | |`#octique("bold")`| ![bold](https://github.com/0x6b/typst-octique/wiki/assets/bold.svg) | |`#octique("book")`| ![book](https://github.com/0x6b/typst-octique/wiki/assets/book.svg) | |`#octique("bookmark-slash")`| ![bookmark-slash](https://github.com/0x6b/typst-octique/wiki/assets/bookmark-slash.svg) | |`#octique("bookmark")`| ![bookmark](https://github.com/0x6b/typst-octique/wiki/assets/bookmark.svg) | |`#octique("briefcase")`| ![briefcase](https://github.com/0x6b/typst-octique/wiki/assets/briefcase.svg) | |`#octique("broadcast")`| ![broadcast](https://github.com/0x6b/typst-octique/wiki/assets/broadcast.svg) | |`#octique("browser")`| ![browser](https://github.com/0x6b/typst-octique/wiki/assets/browser.svg) | |`#octique("bug")`| ![bug](https://github.com/0x6b/typst-octique/wiki/assets/bug.svg) | |`#octique("cache")`| ![cache](https://github.com/0x6b/typst-octique/wiki/assets/cache.svg) | |`#octique("calendar")`| ![calendar](https://github.com/0x6b/typst-octique/wiki/assets/calendar.svg) | |`#octique("check-circle-fill")`| 
![check-circle-fill](https://github.com/0x6b/typst-octique/wiki/assets/check-circle-fill.svg) | |`#octique("check-circle")`| ![check-circle](https://github.com/0x6b/typst-octique/wiki/assets/check-circle.svg) | |`#octique("check")`| ![check](https://github.com/0x6b/typst-octique/wiki/assets/check.svg) | |`#octique("checkbox")`| ![checkbox](https://github.com/0x6b/typst-octique/wiki/assets/checkbox.svg) | |`#octique("checklist")`| ![checklist](https://github.com/0x6b/typst-octique/wiki/assets/checklist.svg) | |`#octique("chevron-down")`| ![chevron-down](https://github.com/0x6b/typst-octique/wiki/assets/chevron-down.svg) | |`#octique("chevron-left")`| ![chevron-left](https://github.com/0x6b/typst-octique/wiki/assets/chevron-left.svg) | |`#octique("chevron-right")`| ![chevron-right](https://github.com/0x6b/typst-octique/wiki/assets/chevron-right.svg) | |`#octique("chevron-up")`| ![chevron-up](https://github.com/0x6b/typst-octique/wiki/assets/chevron-up.svg) | |`#octique("circle-slash")`| ![circle-slash](https://github.com/0x6b/typst-octique/wiki/assets/circle-slash.svg) | |`#octique("circle")`| ![circle](https://github.com/0x6b/typst-octique/wiki/assets/circle.svg) | |`#octique("clock-fill")`| ![clock-fill](https://github.com/0x6b/typst-octique/wiki/assets/clock-fill.svg) | |`#octique("clock")`| ![clock](https://github.com/0x6b/typst-octique/wiki/assets/clock.svg) | |`#octique("cloud-offline")`| ![cloud-offline](https://github.com/0x6b/typst-octique/wiki/assets/cloud-offline.svg) | |`#octique("cloud")`| ![cloud](https://github.com/0x6b/typst-octique/wiki/assets/cloud.svg) | |`#octique("code-of-conduct")`| ![code-of-conduct](https://github.com/0x6b/typst-octique/wiki/assets/code-of-conduct.svg) | |`#octique("code-review")`| ![code-review](https://github.com/0x6b/typst-octique/wiki/assets/code-review.svg) | |`#octique("code")`| ![code](https://github.com/0x6b/typst-octique/wiki/assets/code.svg) | |`#octique("code-square")`| 
![code-square](https://github.com/0x6b/typst-octique/wiki/assets/code-square.svg) | |`#octique("codescan-checkmark")`| ![codescan-checkmark](https://github.com/0x6b/typst-octique/wiki/assets/codescan-checkmark.svg) | |`#octique("codescan")`| ![codescan](https://github.com/0x6b/typst-octique/wiki/assets/codescan.svg) | |`#octique("codespaces")`| ![codespaces](https://github.com/0x6b/typst-octique/wiki/assets/codespaces.svg) | |`#octique("columns")`| ![columns](https://github.com/0x6b/typst-octique/wiki/assets/columns.svg) | |`#octique("command-palette")`| ![command-palette](https://github.com/0x6b/typst-octique/wiki/assets/command-palette.svg) | |`#octique("comment-discussion")`| ![comment-discussion](https://github.com/0x6b/typst-octique/wiki/assets/comment-discussion.svg) | |`#octique("comment")`| ![comment](https://github.com/0x6b/typst-octique/wiki/assets/comment.svg) | |`#octique("container")`| ![container](https://github.com/0x6b/typst-octique/wiki/assets/container.svg) | |`#octique("copilot-error")`| ![copilot-error](https://github.com/0x6b/typst-octique/wiki/assets/copilot-error.svg) | |`#octique("copilot")`| ![copilot](https://github.com/0x6b/typst-octique/wiki/assets/copilot.svg) | |`#octique("copilot-warning")`| ![copilot-warning](https://github.com/0x6b/typst-octique/wiki/assets/copilot-warning.svg) | |`#octique("copy")`| ![copy](https://github.com/0x6b/typst-octique/wiki/assets/copy.svg) | |`#octique("cpu")`| ![cpu](https://github.com/0x6b/typst-octique/wiki/assets/cpu.svg) | |`#octique("credit-card")`| ![credit-card](https://github.com/0x6b/typst-octique/wiki/assets/credit-card.svg) | |`#octique("cross-reference")`| ![cross-reference](https://github.com/0x6b/typst-octique/wiki/assets/cross-reference.svg) | |`#octique("dash")`| ![dash](https://github.com/0x6b/typst-octique/wiki/assets/dash.svg) | |`#octique("database")`| ![database](https://github.com/0x6b/typst-octique/wiki/assets/database.svg) | |`#octique("dependabot")`| 
![dependabot](https://github.com/0x6b/typst-octique/wiki/assets/dependabot.svg) | |`#octique("desktop-download")`| ![desktop-download](https://github.com/0x6b/typst-octique/wiki/assets/desktop-download.svg) | |`#octique("device-camera")`| ![device-camera](https://github.com/0x6b/typst-octique/wiki/assets/device-camera.svg) | |`#octique("device-camera-video")`| ![device-camera-video](https://github.com/0x6b/typst-octique/wiki/assets/device-camera-video.svg) | |`#octique("device-desktop")`| ![device-desktop](https://github.com/0x6b/typst-octique/wiki/assets/device-desktop.svg) | |`#octique("device-mobile")`| ![device-mobile](https://github.com/0x6b/typst-octique/wiki/assets/device-mobile.svg) | |`#octique("devices")`| ![devices](https://github.com/0x6b/typst-octique/wiki/assets/devices.svg) | |`#octique("diamond")`| ![diamond](https://github.com/0x6b/typst-octique/wiki/assets/diamond.svg) | |`#octique("diff-added")`| ![diff-added](https://github.com/0x6b/typst-octique/wiki/assets/diff-added.svg) | |`#octique("diff-ignored")`| ![diff-ignored](https://github.com/0x6b/typst-octique/wiki/assets/diff-ignored.svg) | |`#octique("diff-modified")`| ![diff-modified](https://github.com/0x6b/typst-octique/wiki/assets/diff-modified.svg) | |`#octique("diff-removed")`| ![diff-removed](https://github.com/0x6b/typst-octique/wiki/assets/diff-removed.svg) | |`#octique("diff-renamed")`| ![diff-renamed](https://github.com/0x6b/typst-octique/wiki/assets/diff-renamed.svg) | |`#octique("diff")`| ![diff](https://github.com/0x6b/typst-octique/wiki/assets/diff.svg) | |`#octique("discussion-closed")`| ![discussion-closed](https://github.com/0x6b/typst-octique/wiki/assets/discussion-closed.svg) | |`#octique("discussion-duplicate")`| ![discussion-duplicate](https://github.com/0x6b/typst-octique/wiki/assets/discussion-duplicate.svg) | |`#octique("discussion-outdated")`| ![discussion-outdated](https://github.com/0x6b/typst-octique/wiki/assets/discussion-outdated.svg) | |`#octique("dot-fill")`| 
![dot-fill](https://github.com/0x6b/typst-octique/wiki/assets/dot-fill.svg) | |`#octique("dot")`| ![dot](https://github.com/0x6b/typst-octique/wiki/assets/dot.svg) | |`#octique("download")`| ![download](https://github.com/0x6b/typst-octique/wiki/assets/download.svg) | |`#octique("duplicate")`| ![duplicate](https://github.com/0x6b/typst-octique/wiki/assets/duplicate.svg) | |`#octique("ellipsis")`| ![ellipsis](https://github.com/0x6b/typst-octique/wiki/assets/ellipsis.svg) | |`#octique("eye-closed")`| ![eye-closed](https://github.com/0x6b/typst-octique/wiki/assets/eye-closed.svg) | |`#octique("eye")`| ![eye](https://github.com/0x6b/typst-octique/wiki/assets/eye.svg) | |`#octique("feed-discussion")`| ![feed-discussion](https://github.com/0x6b/typst-octique/wiki/assets/feed-discussion.svg) | |`#octique("feed-forked")`| ![feed-forked](https://github.com/0x6b/typst-octique/wiki/assets/feed-forked.svg) | |`#octique("feed-heart")`| ![feed-heart](https://github.com/0x6b/typst-octique/wiki/assets/feed-heart.svg) | |`#octique("feed-issue-closed")`| ![feed-issue-closed](https://github.com/0x6b/typst-octique/wiki/assets/feed-issue-closed.svg) | |`#octique("feed-issue-draft")`| ![feed-issue-draft](https://github.com/0x6b/typst-octique/wiki/assets/feed-issue-draft.svg) | |`#octique("feed-issue-open")`| ![feed-issue-open](https://github.com/0x6b/typst-octique/wiki/assets/feed-issue-open.svg) | |`#octique("feed-issue-reopen")`| ![feed-issue-reopen](https://github.com/0x6b/typst-octique/wiki/assets/feed-issue-reopen.svg) | |`#octique("feed-merged")`| ![feed-merged](https://github.com/0x6b/typst-octique/wiki/assets/feed-merged.svg) | |`#octique("feed-person")`| ![feed-person](https://github.com/0x6b/typst-octique/wiki/assets/feed-person.svg) | |`#octique("feed-plus")`| ![feed-plus](https://github.com/0x6b/typst-octique/wiki/assets/feed-plus.svg) | |`#octique("feed-public")`| ![feed-public](https://github.com/0x6b/typst-octique/wiki/assets/feed-public.svg) | 
|`#octique("feed-pull-request-closed")`| ![feed-pull-request-closed](https://github.com/0x6b/typst-octique/wiki/assets/feed-pull-request-closed.svg) | |`#octique("feed-pull-request-draft")`| ![feed-pull-request-draft](https://github.com/0x6b/typst-octique/wiki/assets/feed-pull-request-draft.svg) | |`#octique("feed-pull-request-open")`| ![feed-pull-request-open](https://github.com/0x6b/typst-octique/wiki/assets/feed-pull-request-open.svg) | |`#octique("feed-repo")`| ![feed-repo](https://github.com/0x6b/typst-octique/wiki/assets/feed-repo.svg) | |`#octique("feed-rocket")`| ![feed-rocket](https://github.com/0x6b/typst-octique/wiki/assets/feed-rocket.svg) | |`#octique("feed-star")`| ![feed-star](https://github.com/0x6b/typst-octique/wiki/assets/feed-star.svg) | |`#octique("feed-tag")`| ![feed-tag](https://github.com/0x6b/typst-octique/wiki/assets/feed-tag.svg) | |`#octique("feed-trophy")`| ![feed-trophy](https://github.com/0x6b/typst-octique/wiki/assets/feed-trophy.svg) | |`#octique("file-added")`| ![file-added](https://github.com/0x6b/typst-octique/wiki/assets/file-added.svg) | |`#octique("file-badge")`| ![file-badge](https://github.com/0x6b/typst-octique/wiki/assets/file-badge.svg) | |`#octique("file-binary")`| ![file-binary](https://github.com/0x6b/typst-octique/wiki/assets/file-binary.svg) | |`#octique("file-code")`| ![file-code](https://github.com/0x6b/typst-octique/wiki/assets/file-code.svg) | |`#octique("file-diff")`| ![file-diff](https://github.com/0x6b/typst-octique/wiki/assets/file-diff.svg) | |`#octique("file-directory-fill")`| ![file-directory-fill](https://github.com/0x6b/typst-octique/wiki/assets/file-directory-fill.svg) | |`#octique("file-directory-open-fill")`| ![file-directory-open-fill](https://github.com/0x6b/typst-octique/wiki/assets/file-directory-open-fill.svg) | |`#octique("file-directory")`| ![file-directory](https://github.com/0x6b/typst-octique/wiki/assets/file-directory.svg) | |`#octique("file-directory-symlink")`| 
![file-directory-symlink](https://github.com/0x6b/typst-octique/wiki/assets/file-directory-symlink.svg) | |`#octique("file-moved")`| ![file-moved](https://github.com/0x6b/typst-octique/wiki/assets/file-moved.svg) | |`#octique("file-removed")`| ![file-removed](https://github.com/0x6b/typst-octique/wiki/assets/file-removed.svg) | |`#octique("file")`| ![file](https://github.com/0x6b/typst-octique/wiki/assets/file.svg) | |`#octique("file-submodule")`| ![file-submodule](https://github.com/0x6b/typst-octique/wiki/assets/file-submodule.svg) | |`#octique("file-symlink-file")`| ![file-symlink-file](https://github.com/0x6b/typst-octique/wiki/assets/file-symlink-file.svg) | |`#octique("file-zip")`| ![file-zip](https://github.com/0x6b/typst-octique/wiki/assets/file-zip.svg) | |`#octique("filter")`| ![filter](https://github.com/0x6b/typst-octique/wiki/assets/filter.svg) | |`#octique("fiscal-host")`| ![fiscal-host](https://github.com/0x6b/typst-octique/wiki/assets/fiscal-host.svg) | |`#octique("flame")`| ![flame](https://github.com/0x6b/typst-octique/wiki/assets/flame.svg) | |`#octique("fold-down")`| ![fold-down](https://github.com/0x6b/typst-octique/wiki/assets/fold-down.svg) | |`#octique("fold")`| ![fold](https://github.com/0x6b/typst-octique/wiki/assets/fold.svg) | |`#octique("fold-up")`| ![fold-up](https://github.com/0x6b/typst-octique/wiki/assets/fold-up.svg) | |`#octique("gear")`| ![gear](https://github.com/0x6b/typst-octique/wiki/assets/gear.svg) | |`#octique("gift")`| ![gift](https://github.com/0x6b/typst-octique/wiki/assets/gift.svg) | |`#octique("git-branch")`| ![git-branch](https://github.com/0x6b/typst-octique/wiki/assets/git-branch.svg) | |`#octique("git-commit")`| ![git-commit](https://github.com/0x6b/typst-octique/wiki/assets/git-commit.svg) | |`#octique("git-compare")`| ![git-compare](https://github.com/0x6b/typst-octique/wiki/assets/git-compare.svg) | |`#octique("git-merge-queue")`| 
![git-merge-queue](https://github.com/0x6b/typst-octique/wiki/assets/git-merge-queue.svg) | |`#octique("git-merge")`| ![git-merge](https://github.com/0x6b/typst-octique/wiki/assets/git-merge.svg) | |`#octique("git-pull-request-closed")`| ![git-pull-request-closed](https://github.com/0x6b/typst-octique/wiki/assets/git-pull-request-closed.svg) | |`#octique("git-pull-request-draft")`| ![git-pull-request-draft](https://github.com/0x6b/typst-octique/wiki/assets/git-pull-request-draft.svg) | |`#octique("git-pull-request")`| ![git-pull-request](https://github.com/0x6b/typst-octique/wiki/assets/git-pull-request.svg) | |`#octique("globe")`| ![globe](https://github.com/0x6b/typst-octique/wiki/assets/globe.svg) | |`#octique("goal")`| ![goal](https://github.com/0x6b/typst-octique/wiki/assets/goal.svg) | |`#octique("grabber")`| ![grabber](https://github.com/0x6b/typst-octique/wiki/assets/grabber.svg) | |`#octique("graph")`| ![graph](https://github.com/0x6b/typst-octique/wiki/assets/graph.svg) | |`#octique("hash")`| ![hash](https://github.com/0x6b/typst-octique/wiki/assets/hash.svg) | |`#octique("heading")`| ![heading](https://github.com/0x6b/typst-octique/wiki/assets/heading.svg) | |`#octique("heart-fill")`| ![heart-fill](https://github.com/0x6b/typst-octique/wiki/assets/heart-fill.svg) | |`#octique("heart")`| ![heart](https://github.com/0x6b/typst-octique/wiki/assets/heart.svg) | |`#octique("history")`| ![history](https://github.com/0x6b/typst-octique/wiki/assets/history.svg) | |`#octique("home")`| ![home](https://github.com/0x6b/typst-octique/wiki/assets/home.svg) | |`#octique("horizontal-rule")`| ![horizontal-rule](https://github.com/0x6b/typst-octique/wiki/assets/horizontal-rule.svg) | |`#octique("hourglass")`| ![hourglass](https://github.com/0x6b/typst-octique/wiki/assets/hourglass.svg) | |`#octique("hubot")`| ![hubot](https://github.com/0x6b/typst-octique/wiki/assets/hubot.svg) | |`#octique("id-badge")`| 
![id-badge](https://github.com/0x6b/typst-octique/wiki/assets/id-badge.svg) | |`#octique("image")`| ![image](https://github.com/0x6b/typst-octique/wiki/assets/image.svg) | |`#octique("inbox")`| ![inbox](https://github.com/0x6b/typst-octique/wiki/assets/inbox.svg) | |`#octique("infinity")`| ![infinity](https://github.com/0x6b/typst-octique/wiki/assets/infinity.svg) | |`#octique("info")`| ![info](https://github.com/0x6b/typst-octique/wiki/assets/info.svg) | |`#octique("issue-closed")`| ![issue-closed](https://github.com/0x6b/typst-octique/wiki/assets/issue-closed.svg) | |`#octique("issue-draft")`| ![issue-draft](https://github.com/0x6b/typst-octique/wiki/assets/issue-draft.svg) | |`#octique("issue-opened")`| ![issue-opened](https://github.com/0x6b/typst-octique/wiki/assets/issue-opened.svg) | |`#octique("issue-reopened")`| ![issue-reopened](https://github.com/0x6b/typst-octique/wiki/assets/issue-reopened.svg) | |`#octique("issue-tracked-by")`| ![issue-tracked-by](https://github.com/0x6b/typst-octique/wiki/assets/issue-tracked-by.svg) | |`#octique("issue-tracks")`| ![issue-tracks](https://github.com/0x6b/typst-octique/wiki/assets/issue-tracks.svg) | |`#octique("italic")`| ![italic](https://github.com/0x6b/typst-octique/wiki/assets/italic.svg) | |`#octique("iterations")`| ![iterations](https://github.com/0x6b/typst-octique/wiki/assets/iterations.svg) | |`#octique("kebab-horizontal")`| ![kebab-horizontal](https://github.com/0x6b/typst-octique/wiki/assets/kebab-horizontal.svg) | |`#octique("key-asterisk")`| ![key-asterisk](https://github.com/0x6b/typst-octique/wiki/assets/key-asterisk.svg) | |`#octique("key")`| ![key](https://github.com/0x6b/typst-octique/wiki/assets/key.svg) | |`#octique("law")`| ![law](https://github.com/0x6b/typst-octique/wiki/assets/law.svg) | |`#octique("light-bulb")`| ![light-bulb](https://github.com/0x6b/typst-octique/wiki/assets/light-bulb.svg) | |`#octique("link-external")`| 
![link-external](https://github.com/0x6b/typst-octique/wiki/assets/link-external.svg) | |`#octique("link")`| ![link](https://github.com/0x6b/typst-octique/wiki/assets/link.svg) | |`#octique("list-ordered")`| ![list-ordered](https://github.com/0x6b/typst-octique/wiki/assets/list-ordered.svg) | |`#octique("list-unordered")`| ![list-unordered](https://github.com/0x6b/typst-octique/wiki/assets/list-unordered.svg) | |`#octique("location")`| ![location](https://github.com/0x6b/typst-octique/wiki/assets/location.svg) | |`#octique("lock")`| ![lock](https://github.com/0x6b/typst-octique/wiki/assets/lock.svg) | |`#octique("log")`| ![log](https://github.com/0x6b/typst-octique/wiki/assets/log.svg) | |`#octique("logo-gist")`| ![logo-gist](https://github.com/0x6b/typst-octique/wiki/assets/logo-gist.svg) | |`#octique("logo-github")`| ![logo-github](https://github.com/0x6b/typst-octique/wiki/assets/logo-github.svg) | |`#octique("mail")`| ![mail](https://github.com/0x6b/typst-octique/wiki/assets/mail.svg) | |`#octique("mark-github")`| ![mark-github](https://github.com/0x6b/typst-octique/wiki/assets/mark-github.svg) | |`#octique("markdown")`| ![markdown](https://github.com/0x6b/typst-octique/wiki/assets/markdown.svg) | |`#octique("megaphone")`| ![megaphone](https://github.com/0x6b/typst-octique/wiki/assets/megaphone.svg) | |`#octique("mention")`| ![mention](https://github.com/0x6b/typst-octique/wiki/assets/mention.svg) | |`#octique("meter")`| ![meter](https://github.com/0x6b/typst-octique/wiki/assets/meter.svg) | |`#octique("milestone")`| ![milestone](https://github.com/0x6b/typst-octique/wiki/assets/milestone.svg) | |`#octique("mirror")`| ![mirror](https://github.com/0x6b/typst-octique/wiki/assets/mirror.svg) | |`#octique("moon")`| ![moon](https://github.com/0x6b/typst-octique/wiki/assets/moon.svg) | |`#octique("mortar-board")`| ![mortar-board](https://github.com/0x6b/typst-octique/wiki/assets/mortar-board.svg) | |`#octique("move-to-bottom")`| 
![move-to-bottom](https://github.com/0x6b/typst-octique/wiki/assets/move-to-bottom.svg) | |`#octique("move-to-end")`| ![move-to-end](https://github.com/0x6b/typst-octique/wiki/assets/move-to-end.svg) | |`#octique("move-to-start")`| ![move-to-start](https://github.com/0x6b/typst-octique/wiki/assets/move-to-start.svg) | |`#octique("move-to-top")`| ![move-to-top](https://github.com/0x6b/typst-octique/wiki/assets/move-to-top.svg) | |`#octique("multi-select")`| ![multi-select](https://github.com/0x6b/typst-octique/wiki/assets/multi-select.svg) | |`#octique("mute")`| ![mute](https://github.com/0x6b/typst-octique/wiki/assets/mute.svg) | |`#octique("no-entry")`| ![no-entry](https://github.com/0x6b/typst-octique/wiki/assets/no-entry.svg) | |`#octique("north-star")`| ![north-star](https://github.com/0x6b/typst-octique/wiki/assets/north-star.svg) | |`#octique("note")`| ![note](https://github.com/0x6b/typst-octique/wiki/assets/note.svg) | |`#octique("number")`| ![number](https://github.com/0x6b/typst-octique/wiki/assets/number.svg) | |`#octique("organization")`| ![organization](https://github.com/0x6b/typst-octique/wiki/assets/organization.svg) | |`#octique("package-dependencies")`| ![package-dependencies](https://github.com/0x6b/typst-octique/wiki/assets/package-dependencies.svg) | |`#octique("package-dependents")`| ![package-dependents](https://github.com/0x6b/typst-octique/wiki/assets/package-dependents.svg) | |`#octique("package")`| ![package](https://github.com/0x6b/typst-octique/wiki/assets/package.svg) | |`#octique("paintbrush")`| ![paintbrush](https://github.com/0x6b/typst-octique/wiki/assets/paintbrush.svg) | |`#octique("paper-airplane")`| ![paper-airplane](https://github.com/0x6b/typst-octique/wiki/assets/paper-airplane.svg) | |`#octique("paperclip")`| ![paperclip](https://github.com/0x6b/typst-octique/wiki/assets/paperclip.svg) | |`#octique("passkey-fill")`| ![passkey-fill](https://github.com/0x6b/typst-octique/wiki/assets/passkey-fill.svg) | |`#octique("paste")`| 
![paste](https://github.com/0x6b/typst-octique/wiki/assets/paste.svg) | |`#octique("pencil")`| ![pencil](https://github.com/0x6b/typst-octique/wiki/assets/pencil.svg) | |`#octique("people")`| ![people](https://github.com/0x6b/typst-octique/wiki/assets/people.svg) | |`#octique("person-add")`| ![person-add](https://github.com/0x6b/typst-octique/wiki/assets/person-add.svg) | |`#octique("person-fill")`| ![person-fill](https://github.com/0x6b/typst-octique/wiki/assets/person-fill.svg) | |`#octique("person")`| ![person](https://github.com/0x6b/typst-octique/wiki/assets/person.svg) | |`#octique("pin-slash")`| ![pin-slash](https://github.com/0x6b/typst-octique/wiki/assets/pin-slash.svg) | |`#octique("pin")`| ![pin](https://github.com/0x6b/typst-octique/wiki/assets/pin.svg) | |`#octique("pivot-column")`| ![pivot-column](https://github.com/0x6b/typst-octique/wiki/assets/pivot-column.svg) | |`#octique("play")`| ![play](https://github.com/0x6b/typst-octique/wiki/assets/play.svg) | |`#octique("plug")`| ![plug](https://github.com/0x6b/typst-octique/wiki/assets/plug.svg) | |`#octique("plus-circle")`| ![plus-circle](https://github.com/0x6b/typst-octique/wiki/assets/plus-circle.svg) | |`#octique("plus")`| ![plus](https://github.com/0x6b/typst-octique/wiki/assets/plus.svg) | |`#octique("project-roadmap")`| ![project-roadmap](https://github.com/0x6b/typst-octique/wiki/assets/project-roadmap.svg) | |`#octique("project")`| ![project](https://github.com/0x6b/typst-octique/wiki/assets/project.svg) | |`#octique("project-symlink")`| ![project-symlink](https://github.com/0x6b/typst-octique/wiki/assets/project-symlink.svg) | |`#octique("project-template")`| ![project-template](https://github.com/0x6b/typst-octique/wiki/assets/project-template.svg) | |`#octique("pulse")`| ![pulse](https://github.com/0x6b/typst-octique/wiki/assets/pulse.svg) | |`#octique("question")`| ![question](https://github.com/0x6b/typst-octique/wiki/assets/question.svg) | |`#octique("quote")`| 
![quote](https://github.com/0x6b/typst-octique/wiki/assets/quote.svg) | |`#octique("read")`| ![read](https://github.com/0x6b/typst-octique/wiki/assets/read.svg) | |`#octique("redo")`| ![redo](https://github.com/0x6b/typst-octique/wiki/assets/redo.svg) | |`#octique("rel-file-path")`| ![rel-file-path](https://github.com/0x6b/typst-octique/wiki/assets/rel-file-path.svg) | |`#octique("reply")`| ![reply](https://github.com/0x6b/typst-octique/wiki/assets/reply.svg) | |`#octique("repo-clone")`| ![repo-clone](https://github.com/0x6b/typst-octique/wiki/assets/repo-clone.svg) | |`#octique("repo-deleted")`| ![repo-deleted](https://github.com/0x6b/typst-octique/wiki/assets/repo-deleted.svg) | |`#octique("repo-forked")`| ![repo-forked](https://github.com/0x6b/typst-octique/wiki/assets/repo-forked.svg) | |`#octique("repo-locked")`| ![repo-locked](https://github.com/0x6b/typst-octique/wiki/assets/repo-locked.svg) | |`#octique("repo-pull")`| ![repo-pull](https://github.com/0x6b/typst-octique/wiki/assets/repo-pull.svg) | |`#octique("repo-push")`| ![repo-push](https://github.com/0x6b/typst-octique/wiki/assets/repo-push.svg) | |`#octique("repo")`| ![repo](https://github.com/0x6b/typst-octique/wiki/assets/repo.svg) | |`#octique("repo-template")`| ![repo-template](https://github.com/0x6b/typst-octique/wiki/assets/repo-template.svg) | |`#octique("report")`| ![report](https://github.com/0x6b/typst-octique/wiki/assets/report.svg) | |`#octique("rocket")`| ![rocket](https://github.com/0x6b/typst-octique/wiki/assets/rocket.svg) | |`#octique("rows")`| ![rows](https://github.com/0x6b/typst-octique/wiki/assets/rows.svg) | |`#octique("rss")`| ![rss](https://github.com/0x6b/typst-octique/wiki/assets/rss.svg) | |`#octique("ruby")`| ![ruby](https://github.com/0x6b/typst-octique/wiki/assets/ruby.svg) | |`#octique("screen-full")`| ![screen-full](https://github.com/0x6b/typst-octique/wiki/assets/screen-full.svg) | |`#octique("screen-normal")`| 
![screen-normal](https://github.com/0x6b/typst-octique/wiki/assets/screen-normal.svg) | |`#octique("search")`| ![search](https://github.com/0x6b/typst-octique/wiki/assets/search.svg) | |`#octique("server")`| ![server](https://github.com/0x6b/typst-octique/wiki/assets/server.svg) | |`#octique("share-android")`| ![share-android](https://github.com/0x6b/typst-octique/wiki/assets/share-android.svg) | |`#octique("share")`| ![share](https://github.com/0x6b/typst-octique/wiki/assets/share.svg) | |`#octique("shield-check")`| ![shield-check](https://github.com/0x6b/typst-octique/wiki/assets/shield-check.svg) | |`#octique("shield-lock")`| ![shield-lock](https://github.com/0x6b/typst-octique/wiki/assets/shield-lock.svg) | |`#octique("shield-slash")`| ![shield-slash](https://github.com/0x6b/typst-octique/wiki/assets/shield-slash.svg) | |`#octique("shield")`| ![shield](https://github.com/0x6b/typst-octique/wiki/assets/shield.svg) | |`#octique("shield-x")`| ![shield-x](https://github.com/0x6b/typst-octique/wiki/assets/shield-x.svg) | |`#octique("sidebar-collapse")`| ![sidebar-collapse](https://github.com/0x6b/typst-octique/wiki/assets/sidebar-collapse.svg) | |`#octique("sidebar-expand")`| ![sidebar-expand](https://github.com/0x6b/typst-octique/wiki/assets/sidebar-expand.svg) | |`#octique("sign-in")`| ![sign-in](https://github.com/0x6b/typst-octique/wiki/assets/sign-in.svg) | |`#octique("sign-out")`| ![sign-out](https://github.com/0x6b/typst-octique/wiki/assets/sign-out.svg) | |`#octique("single-select")`| ![single-select](https://github.com/0x6b/typst-octique/wiki/assets/single-select.svg) | |`#octique("skip-fill")`| ![skip-fill](https://github.com/0x6b/typst-octique/wiki/assets/skip-fill.svg) | |`#octique("skip")`| ![skip](https://github.com/0x6b/typst-octique/wiki/assets/skip.svg) | |`#octique("sliders")`| ![sliders](https://github.com/0x6b/typst-octique/wiki/assets/sliders.svg) | |`#octique("smiley")`| ![smiley](https://github.com/0x6b/typst-octique/wiki/assets/smiley.svg) | 
|`#octique("sort-asc")`| ![sort-asc](https://github.com/0x6b/typst-octique/wiki/assets/sort-asc.svg) | |`#octique("sort-desc")`| ![sort-desc](https://github.com/0x6b/typst-octique/wiki/assets/sort-desc.svg) | |`#octique("sparkle-fill")`| ![sparkle-fill](https://github.com/0x6b/typst-octique/wiki/assets/sparkle-fill.svg) | |`#octique("sponsor-tiers")`| ![sponsor-tiers](https://github.com/0x6b/typst-octique/wiki/assets/sponsor-tiers.svg) | |`#octique("square-fill")`| ![square-fill](https://github.com/0x6b/typst-octique/wiki/assets/square-fill.svg) | |`#octique("square")`| ![square](https://github.com/0x6b/typst-octique/wiki/assets/square.svg) | |`#octique("squirrel")`| ![squirrel](https://github.com/0x6b/typst-octique/wiki/assets/squirrel.svg) | |`#octique("stack")`| ![stack](https://github.com/0x6b/typst-octique/wiki/assets/stack.svg) | |`#octique("star-fill")`| ![star-fill](https://github.com/0x6b/typst-octique/wiki/assets/star-fill.svg) | |`#octique("star")`| ![star](https://github.com/0x6b/typst-octique/wiki/assets/star.svg) | |`#octique("stop")`| ![stop](https://github.com/0x6b/typst-octique/wiki/assets/stop.svg) | |`#octique("stopwatch")`| ![stopwatch](https://github.com/0x6b/typst-octique/wiki/assets/stopwatch.svg) | |`#octique("strikethrough")`| ![strikethrough](https://github.com/0x6b/typst-octique/wiki/assets/strikethrough.svg) | |`#octique("sun")`| ![sun](https://github.com/0x6b/typst-octique/wiki/assets/sun.svg) | |`#octique("sync")`| ![sync](https://github.com/0x6b/typst-octique/wiki/assets/sync.svg) | |`#octique("tab-external")`| ![tab-external](https://github.com/0x6b/typst-octique/wiki/assets/tab-external.svg) | |`#octique("table")`| ![table](https://github.com/0x6b/typst-octique/wiki/assets/table.svg) | |`#octique("tag")`| ![tag](https://github.com/0x6b/typst-octique/wiki/assets/tag.svg) | |`#octique("tasklist")`| ![tasklist](https://github.com/0x6b/typst-octique/wiki/assets/tasklist.svg) | |`#octique("telescope-fill")`| 
![telescope-fill](https://github.com/0x6b/typst-octique/wiki/assets/telescope-fill.svg) | |`#octique("telescope")`| ![telescope](https://github.com/0x6b/typst-octique/wiki/assets/telescope.svg) | |`#octique("terminal")`| ![terminal](https://github.com/0x6b/typst-octique/wiki/assets/terminal.svg) | |`#octique("three-bars")`| ![three-bars](https://github.com/0x6b/typst-octique/wiki/assets/three-bars.svg) | |`#octique("thumbsdown")`| ![thumbsdown](https://github.com/0x6b/typst-octique/wiki/assets/thumbsdown.svg) | |`#octique("thumbsup")`| ![thumbsup](https://github.com/0x6b/typst-octique/wiki/assets/thumbsup.svg) | |`#octique("tools")`| ![tools](https://github.com/0x6b/typst-octique/wiki/assets/tools.svg) | |`#octique("tracked-by-closed-completed")`| ![tracked-by-closed-completed](https://github.com/0x6b/typst-octique/wiki/assets/tracked-by-closed-completed.svg) | |`#octique("tracked-by-closed-not-planned")`| ![tracked-by-closed-not-planned](https://github.com/0x6b/typst-octique/wiki/assets/tracked-by-closed-not-planned.svg) | |`#octique("trash")`| ![trash](https://github.com/0x6b/typst-octique/wiki/assets/trash.svg) | |`#octique("triangle-down")`| ![triangle-down](https://github.com/0x6b/typst-octique/wiki/assets/triangle-down.svg) | |`#octique("triangle-left")`| ![triangle-left](https://github.com/0x6b/typst-octique/wiki/assets/triangle-left.svg) | |`#octique("triangle-right")`| ![triangle-right](https://github.com/0x6b/typst-octique/wiki/assets/triangle-right.svg) | |`#octique("triangle-up")`| ![triangle-up](https://github.com/0x6b/typst-octique/wiki/assets/triangle-up.svg) | |`#octique("trophy")`| ![trophy](https://github.com/0x6b/typst-octique/wiki/assets/trophy.svg) | |`#octique("typography")`| ![typography](https://github.com/0x6b/typst-octique/wiki/assets/typography.svg) | |`#octique("undo")`| ![undo](https://github.com/0x6b/typst-octique/wiki/assets/undo.svg) | |`#octique("unfold")`| ![unfold](https://github.com/0x6b/typst-octique/wiki/assets/unfold.svg) | 
|`#octique("unlink")`| ![unlink](https://github.com/0x6b/typst-octique/wiki/assets/unlink.svg) | |`#octique("unlock")`| ![unlock](https://github.com/0x6b/typst-octique/wiki/assets/unlock.svg) | |`#octique("unmute")`| ![unmute](https://github.com/0x6b/typst-octique/wiki/assets/unmute.svg) | |`#octique("unread")`| ![unread](https://github.com/0x6b/typst-octique/wiki/assets/unread.svg) | |`#octique("unverified")`| ![unverified](https://github.com/0x6b/typst-octique/wiki/assets/unverified.svg) | |`#octique("upload")`| ![upload](https://github.com/0x6b/typst-octique/wiki/assets/upload.svg) | |`#octique("verified")`| ![verified](https://github.com/0x6b/typst-octique/wiki/assets/verified.svg) | |`#octique("versions")`| ![versions](https://github.com/0x6b/typst-octique/wiki/assets/versions.svg) | |`#octique("video")`| ![video](https://github.com/0x6b/typst-octique/wiki/assets/video.svg) | |`#octique("webhook")`| ![webhook](https://github.com/0x6b/typst-octique/wiki/assets/webhook.svg) | |`#octique("workflow")`| ![workflow](https://github.com/0x6b/typst-octique/wiki/assets/workflow.svg) | |`#octique("x-circle-fill")`| ![x-circle-fill](https://github.com/0x6b/typst-octique/wiki/assets/x-circle-fill.svg) | |`#octique("x-circle")`| ![x-circle](https://github.com/0x6b/typst-octique/wiki/assets/x-circle.svg) | |`#octique("x")`| ![x](https://github.com/0x6b/typst-octique/wiki/assets/x.svg) | |`#octique("zap")`| ![zap](https://github.com/0x6b/typst-octique/wiki/assets/zap.svg) | |`#octique("zoom-in")`| ![zoom-in](https://github.com/0x6b/typst-octique/wiki/assets/zoom-in.svg) | |`#octique("zoom-out")`| ![zoom-out](https://github.com/0x6b/typst-octique/wiki/assets/zoom-out.svg) | ## License MIT. See [LICENSE](LICENSE) for detail. Octicons are (c) GitHub, Inc. When using the GitHub logos, you should follow the [GitHub logo guidelines](https://github.com/logos).
https://github.com/christophermanning/typst-docker
https://raw.githubusercontent.com/christophermanning/typst-docker/main/examples/resume.typ
typst
MIT License
#set document(date: none)
#set page(
  header: [
    #text(2em)[*Firstname Lastname*] \
    <EMAIL> |
    #link("https://www.linkedin.com/in/")[linkedin.com/in/] |
    #link("https://www.github.com/")[github.com/] |
    City, State
  ],
)

#let titleline(title) = {
  v(1em)
  text[== #title]
  v(-2pt); line(length: 100%); v(-2pt)
}

#let entry(org, date, role, location) = {
  text[
    *#org* #h(1fr) #date \
    #emph([#role]) #h(1fr) #emph([#location])
  ]
}

#lorem(40)

#titleline[Skills]
#for c in range(2) [
  *#lorem(2)*: #lorem(8) \
  *#lorem(3)*: #lorem(10) \
]

#titleline[Experience]
#for c in range(4) [
  #entry("Org Name", "Jan 1970 - Jan 2038", "Title Name", "City, State")
  - #lorem(10)
  - #lorem(13)
  - #lorem(11)
]

#titleline[Education]
#for c in range(2) [
  #entry("School Name", "Jan 1970 - Jan 2038", "Description", "City, State")
  - #lorem(12)
  - #lorem(13)
]
https://github.com/voXrey/cours-informatique
https://raw.githubusercontent.com/voXrey/cours-informatique/main/typst/22-algorithme-du-texte.typ
typst
#import "@preview/codly:0.2.1": *
#show: codly-init.with()
#codly()
#set text(font: "Roboto Serif")

= Chapter 22: Text Algorithms <chapitre-22-algorithmique-du-texte>

== Notation and Vocabulary <notations-et-vocabulaire>

=== Notation <notations>

An alphabet, often denoted $Sigma$, is a finite set of symbols.

#quote(block: true)[
Example: $Sigma = { "ASCII" }$, $|Sigma| = 256$
]

#quote(block: true)[
Example: in bioinformatics, $Sigma = { A , T , C , G }$, $|Sigma| = 4$
]

We fix an alphabet $Sigma$. A word is a finite sequence of symbols of $Sigma$: $u = a_0 , dots , a_(n - 1)$. We write $|u| = n$ for the length of $u$. The $a_i$ are the letters of $u$. We write $u[i] = a_i$ for the $i$-th character of $u$.

We write $epsilon.alt$ for the empty word (the unique word of length 0), and $Sigma^*$ for the set of all words.

We write $circle.stroked.tiny$ for the concatenation of words: $u circle.stroked.tiny v = a_0 dots a_(n - 1) b_0 dots b_(m - 1)$ with $u = a_0 dots a_(n - 1)$ and $v = b_0 dots b_(m - 1)$. We often write $u w$ instead of $u circle.stroked.tiny w$.

=== Vocabulary <vocabulaire>

$u$ is a prefix of $v$ if

- $u = a_0 dots a_(n - 1)$
- $v = a_0 dots a_(n - 1) a_n dots a_(m - 1)$

and symmetrically for "suffix".

$u$ is a factor of $v$ if there exists a word $w$ such that $w u$ is a prefix of $v$.

$m[i : j]$ is the factor of $m$ between positions $i$ (inclusive) and $j$ (exclusive): $m = a_0 dots a_(n - 1) arrow.r.double m[i : j] = a_i dots a_(j - 1)$

== I - Searching for a Word in a Text <i---recherche-de-mot-dans-un-texte>

=== 1. The Rabin-Karp Algorithm <algorithme-de-robin-karp>

==== a. Naive Algorithm <a.-algorithme-naïf>

First approach: given $m , T in Sigma^*$, we want to know whether $m$ is a factor of $T$.

For $i_0 in [0 , |T| - |m|[$, test whether $m = T[i_0 : i_0 + |m|]$.

```py
naive_algo(text, pattern)  # adapted from Wikipedia
1. n ← length(text)
2. m ← length(pattern)
3. for i from 1 to n-m+1 do
4.     if text[i..i+m-1] = pattern[1..m]
5.         pattern found in the text at position i
6. pattern not found
```

Complexity: $O((|T| - |m|) |m|)$

#quote(block: true)[
Worst-case example: m = "dauphin", T = "dauphi dauphi dauphi…"
]

Idea of the algorithm: take a hash function $h$ and replace the test $m == T[i_0 : i_0 + |m|]$ by the test $h(m) == h(T[i_0 : i_0 + |m|])$.

If the hashes are equal, then perform the first test. Empirically, if $h$ is "well made" (few collisions), we expect to perform the first test much less often.

#quote(block: true)[
Problem: computing $h(m)$ costs $O(|m|)$.
]

For the algorithm we choose $h(m) = sum_(i = 0)^(|m| - 1) c(m[i]) |Sigma|^i "mod" N$ with $c : Sigma arrow.r [0 , |Sigma|[$ an enumeration of $Sigma$.

$arrow.r.double h(m)$ is computed in $O(|m|)$.

The algorithm: when the window slides by one position, we obtain

$h(T[i_0 : i_0 + |m|]) = h(T[i_0 - 1 : i_0 - 1 + |m|]) \/ |Sigma| + |Sigma|^(|m| - 1) c(T[i_0 + |m| - 1]) "mod" N$

The factor $|Sigma|^(|m| - 1)$ is computed only once.

Let $h_m$ be the hash of $m$.

Compute $|Sigma|^(|m| - 1)$.

$h_T$ = hash of $T[0 : |m| - 1]$, multiplied by $|Sigma|$

For $i_0 in [0 , |T| - |m|[$

~~~~$h_T = h_T \/ |Sigma| + |Sigma|^(|m| - 1) dot c(T[i_0 + |m| - 1])$

~~~~If $h_T == h_m$:

~~~~~~~~If $m == T[i_0 : i_0 + |m|]$:

~~~~~~~~~~~~Return true

Return false

```python
rabin_karp(text, pattern)  # adapted from Wikipedia
n ← length(text)
m ← length(pattern)
hn ← hash(text[1..m])
hm ← hash(pattern[1..m])
for i from 0 to n-m+1 do
    if hn = hm
        if text[i..i+m-1] = pattern[1..m]
            pattern found in the text at position i
    hn ← hash(text[i+1..i+m])
pattern not found
```

Complexity: in the worst case we perform the test `m == T[...]` at every step, so nothing is gained. Worst-case complexity analysis is not relevant here.
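The rolling-hash scheme described above can be sketched in Python. This is an illustrative version, not the course's reference code: the base 256 and the prime modulus are arbitrary choices, and the division by the base is done with a modular inverse (Python 3.8+ `pow(base, -1, mod)`).

```python
def rabin_karp(text: str, pattern: str, base: int = 256, mod: int = 10**9 + 7) -> bool:
    """Return True if `pattern` occurs in `text`, using a rolling hash.

    Hash of a word w: sum(ord(w[i]) * base**i) mod `mod`, matching the
    course's convention of weighting position i by |Sigma|**i.
    """
    n, m = len(text), len(pattern)
    if m == 0 or m > n:
        return m == 0
    top = pow(base, m - 1, mod)       # weight of the last character, computed once
    inv_base = pow(base, -1, mod)     # modular inverse of the base (mod is prime)
    h_p = h_t = 0
    for i in range(m):
        h_p = (h_p + ord(pattern[i]) * pow(base, i, mod)) % mod
        h_t = (h_t + ord(text[i]) * pow(base, i, mod)) % mod
    for i0 in range(n - m + 1):
        # only compare the strings when the hashes collide
        if h_t == h_p and text[i0:i0 + m] == pattern:
            return True
        if i0 + m < n:
            # slide the window: drop text[i0] (weight 1), append text[i0 + m]
            h_t = ((h_t - ord(text[i0])) * inv_base + ord(text[i0 + m]) * top) % mod
    return False
```

For instance, `rabin_karp("dauphi dauphi dauphin", "dauphin")` finds the pattern on the worst-case-style text above.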
The empirical efficiency of this algorithm relies on the fact that when $m eq.not T[i_0 : i_0 + |m|]$, then $h_m eq.not h_T$ in most cases. In other words, the cases where $h_T = h_m$ and yet $m eq.not T[...]$ are rare.

=== 2. The Boyer-Moore Algorithm <algorithme-de-boyer-moore>

==== a. Algorithm (version 1) <a.-algorithme-version-1>

```python
algo
1. i0 = 0
2. while i0 < |T| - |m|
3.     test m == T[i0 : i0 + |m|] starting from the right
4.     if this fails, taking T[i0 + |m| - 1] into account,
5.     shift intelligently: i0 = i0 + decalage(T[i0 + |m| - 1])
```

How do we build `decalage`?

To represent this function we could use an array `offset` and an enumeration of $Sigma$, written $c$, so that $"offset"[c(a)] = "decalage"(a) forall a in Sigma$.

Drawback: a lot of space is wasted, since for every $a in Sigma$ that does not occur in $m$ we have $"decalage"(a) = |m|$.

We therefore use a dictionary whose keys are the characters occurring in $m$; the value associated with $a in Sigma$ is $"decalage"(a)$.

Complexity:

- Worst case: if $i_0 = i_0 + 1$ at every iteration (unrealistic), we are in $O((|T| - |m|) |m|)$. This analysis is not well suited; the improvement is empirical.
- Best case: $i_0 = i_0 + |m|$ at every iteration; we are in $O(frac(|T| - |m|, |m|))$. The algorithm is all the more efficient as $|m|$ is large.

==== b. Algorithm (version 2 - off-syllabus) <b.-algorithme-version-2---hors-programme>

It also takes suffixes into account in the table.

== II - Text Compression <ii---compression-de-texte>

=== 1.
Huffman Coding <codage-de-huffman>

#quote(block: true)[
Example: "AATACGCATAAATA"
]

We can store this sequence in RAM by taking

A - 00 \
T - 01 \
C - 10 \
G - 11

RAM space used: $2 * 14 = 28 "bits"$

Another possible coding (function $c$):

A - 0 \
T - 10 \
C - 110 \
G - 111

Computing the memory used: $sum_(a in Sigma) |c(a)| * "freq"_T (a) = 23 "bits"$

Each character is given a different weight according to its frequency of occurrence.

Decompression: greedy algorithm. It works if we impose the following constraint: $forall a in Sigma , forall b in Sigma , a eq.not b arrow.r.double$ $c(a) in { 0 , 1 }^*$ is not a prefix of $c(b) in { 0 , 1 }^*$.

Definition: let $Sigma$ be an alphabet. A coding is an injective function $c : Sigma arrow.r { 0 , 1 }^*$. A coding is admissible if $forall a eq.not b$, $c(a)$ is not a prefix of $c(b)$.

Objective: given $T in Sigma^*$, find the #strong[best] admissible coding, i.e. the one that minimizes the memory consumption of $T$.

We write $c_(m T)(c) = sum_(i = 0)^(|T| - 1) |c(T[i])| = sum_(a in Sigma) |c(a)| * "freq"_T (a)$

where $"freq"_T (a)$ is the number of occurrences of $a$ in $T$.

==== a. Representing Codings <a.-représentation-des-codages>

We propose to view a word of ${ 0 , 1 }^*$ as a path in a binary tree:

- 0: go left
- 1: go right

For $a in Sigma$, we write $a$ as the label of the tree node at path $c(a)$. In this way, codings are visualized as trees.

$c : A arrow.r 00 , T arrow.r 01 , C arrow.r 10 , G arrow.r 11$

If the coding is admissible, then the symbols $a in Sigma$ label leaves of the tree.

To minimize memory consumption, we may restrict attention to codings whose associated tree is such that every internal node has exactly 2 children.

#quote(
block: true,
)[
Remark: for $a in Sigma$, $|c(a)|$ = the depth of the node labeled by $a$ in the tree.
Hence moving a symbol "up" in the tree reduces $|c(a)|$ without changing $|c(b)|$ for $b in Sigma without {a}$. It therefore reduces $sum_(a in Sigma) |c(a)| "freq"_T (a)$.
]

Conclusion: an optimal coding necessarily corresponds to a strict binary tree.

In what follows we take

```ocaml
type codage = Leaf of Sigma | Node of codage * codage
```

==== b. Finding the Optimal Coding <b.-recherche-du-codage-optimal>

We fix a text $T$.

#quote(
block: true,
)[
Property 1: let $sigma$ be a bijection from $Sigma$ to $Sigma$ and $c$ an admissible coding. Then $c circle.stroked.tiny sigma$ is an admissible coding, and if $c$ corresponds to a strict binary tree then so does $c circle.stroked.tiny sigma$.
]

#quote(
block: true,
)[
Property 2: let $c$ be an optimal coding for $T$, and let $a , b in Sigma$ be such that $"freq"_T (a) < "freq"_T (b)$. Then $|c(a)| gt.eq |c(b)|$.

Proof: if this is not the case, apply the transposition $tau_(a b)$ to the coding. By Property 1, $c circle.stroked.tiny tau_(a b)$ is an admissible coding, so by optimality $c_(m T)(c) lt.eq c_(m T)(c circle.stroked.tiny tau_(a b))$. And yet

$c_(m T)(c circle.stroked.tiny tau_(a b)) = sum_(d in Sigma) |c circle.stroked.tiny tau_(a b)(d)| "freq"_T (d)$

$c_(m T)(c circle.stroked.tiny tau_(a b)) - c_(m T)(c) = - |c(a)| "freq"_T (a) - |c(b)| "freq"_T (b) + |c(b)| "freq"_T (a) + |c(a)| "freq"_T (b) = (|c(b)| - |c(a)|)("freq"_T (a) - "freq"_T (b)) < 0$

since $|c(a)| < |c(b)|$ and $"freq"_T (a) < "freq"_T (b)$. Contradiction.
]

#quote(
block: true,
)[
Proof: let $c$ be an optimal coding for $T$. Let $a_1 , a_2$ be two siblings of maximal depth in the tree of $c$.
By Property 2, $"freq"_T (a_1)$ and $"freq"_T (a_2)$ are minimal among ${ "freq"_T (a) | a in Sigma }$.

We define $Sigma' = Sigma without {a_1, a_2} union { a' }$, where $a'$ is a fresh symbol that does not belong to $Sigma$, and $T' = T$ in which every $a_1$ and $a_2$ is replaced by $a'$.

Then $"freq"_(T')(a') = "freq"_T (a_1) + "freq"_T (a_2)$, $T'$ is a text over $Sigma'$, and $"freq"_(T')(a) = "freq"_T (a)$ for $a in Sigma sect Sigma'$.

Since $a_1$ and $a_2$ are siblings in the tree, their codes can be written $c(a_1) = u 0$ and $c(a_2) = u 1$ (or the other way around). We define $c' : Sigma' arrow.r { 0 , 1 }^*$ by $c'(a) = c(a)$ for $a in Sigma sect Sigma'$ and $c'(a') = u$.

#quote(
block: true,
)[
Lemma: $c'$ is optimal for $T'$ and $Sigma'$.

Proof: suppose by contradiction that $c'$ is not optimal. Then there exists $c'_("opt")$ such that $c_(m T')(c'_("opt")) < c_(m T')(c')$. We will define $c_("opt")$, a coding for $T$ and $Sigma$, which will be "better" than $c$.

We define $c_("opt") : Sigma arrow.r { 0 , 1 }^*$ by $c_("opt")(a) = c'_("opt")(a)$ if $a in Sigma sect Sigma'$, $c_("opt")(a_1) = c'_("opt")(a') . 0$ and $c_("opt")(a_2) = c'_("opt")(a') . 1$.

$c_(m T)(c_("opt")) = sum_(d in Sigma) |c_("opt")(d)| * "freq"_T (d) = c_(m T')(c'_("opt")) - |c'_("opt")(a')| "freq"_(T')(a') + |c_("opt")(a_1)| "freq"_T (a_1) + |c_("opt")(a_2)| "freq"_T (a_2) = c_(m T')(c'_("opt")) + ("freq"_T (a_1) + "freq"_T (a_2)) * 1$

Similarly, $c_(m T)(c) = sum_(d in Sigma) |c(d)| * "freq"_T (d) = c_(m T')(c') + ("freq"_T (a_1) + "freq"_T (a_2))$

Conclusion: $c_(m T)(c_("opt")) = c_(m T')(c'_("opt")) + ("freq"_T (a_1) + "freq"_T (a_2)) < c_(m T')(c') + ("freq"_T (a_1) + "freq"_T (a_2)) = c_(m T)(c)$, which contradicts the optimality of $c$.
]
]

==== c. Provisional Greedy Algorithm <c.-algorithme-glouton-provisoire>

```ocaml
codage_optimal(S, T):
1. compute freqT(a) for every a in S
2. a1, a2 = the two minima of freqT(a)
3.
   S' = S without a1 and a2, plus a'
4. T' = T in which every a1 and a2 is replaced by a'
5. c' = codage_optimal(S', T')
6. c = the coding such that, for every a common to S and S'
7.     c(a) = c'(a)
8.     c(a1) = c(a').0
9.     c(a2) = c(a').1
```

Base cases: if $|Sigma| = 1$, return $c(a) = 0$ for $a$ the unique symbol of $Sigma$. If $|Sigma| = 2$, return $c(a) = 0 , c(b) = 1$ for $Sigma = { a , b }$.

Termination: variant $|Sigma|$.

Correctness: this is the preceding proof, by induction on $|Sigma|$.

Base case: $|Sigma| in { 1 , 2 }$, obvious.

Inductive step: by the induction hypothesis, $c' = "codage_optimal"(Sigma' , T')$ is optimal. Then $c$ is optimal: if it were not, there would exist $c_("opt")$ better for $Sigma , T$, from which we would derive $c'_("opt")$ better for $Sigma' , T'$ than $c'$.

Complexity:

- Computing the frequencies: $O(|T|)$
- Computing a1 and a2: $O(|Sigma|)$
- Building $Sigma'$: $O(1)$
- Building $T'$: $O(|T|)$
- Building $c$: $O(|Sigma|)$

$u_(|Sigma| , |T|) = O(|T| + |Sigma|) + u_(|Sigma| - 1 , |T|)$

$u_(n , p) = C(n + p) + u_(n - 1 , p) arrow.r.double u_(n , p) = O(n^2 + n p)$

To improve this complexity, we can compute the frequencies only once, at the beginning. We can also use a priority queue for the minimum computations, and avoid building $T'$ altogether. Finally, we can build $c$ in a better way.

==== d. Algorithm <d.-algorithme>

```ocaml
codage_optimal(S, T)
1. compute freqT(a) for every a in S
2. initialize a priority queue pq containing every
3. Leaf(a) for a in S, with priority freqT(a)
4. repeat |S|-1 times:
5.     a1,p1 = extract_min pq
6.     a2,p2 = extract_min pq
7.     add pq Node(a1,a2) (p1+p2)
8. (c1, _) = extract_min pq
9. return c1
```

```ocaml
(* Reminder *)
type codage = Leaf of S | Node of codage * codage
```

==== e. Complexity <e.-complexité>

We compute the total complexity:

- Computing the frequencies: $O(|T|)$
- Initializing the priority queue: $O(|Sigma|)$
- Loop: $|Sigma|$ iterations, each in $O(log(|Sigma|))$

This gives $O(|T| + |Sigma| log |Sigma|)$.

=== 2. The Lempel-Ziv Algorithm <algorithme-de-lempel-ziv>

==== a. Introduction <a.-introduction>

Advantages:

- Online (streaming) algorithm
- decompression does not require knowing the coding table

Example: "ATCATGTATCATGTAA"

We maintain a table $"factor" arrow.r "new_symbol"$. Here, to simplify, the new symbols are integers.

We initialize the table: $A --> 0 \ T --> 1 \ C --> 2 \ G --> 3$

Compression: we add the current factor to the table if it is not yet known, and output the symbol of the previously known factor.

Decompression: we run the algorithm backwards, rebuilding the same table.

==== b. Compression Algorithm <b.-algorithme-de-compression>

```python
def compression(T):
    # S is the alphabet (assumed global)
    d = dict([(S[i], i) for i in range(len(S))])
    symbole = len(S)
    i = 0
    while i < len(T):
        # find the smallest j such that T[i:j] is not in d
        j = i + 1
        while j <= len(T) and T[i:j] in d:
            j += 1
        if j <= len(T):
            d[T[i:j]] = symbole
            symbole += 1
        print(d[T[i:j-1]])
        i = j - 1
    return d
```

==== c. Decompression Algorithm <c.-algorithme-de-décompression>

```python
def decompression(c):
    d = {}
    for i in range(len(S)):
        d[i] = S[i]
    print(d[c[0]])
    precedent = d[c[0]]
    for i in range(1, len(c)):
        d[len(d)] = precedent + d[c[i]][0]
        print(d[c[i]])
        precedent = d[c[i]]
```

#quote(
block: true,
)[
Remark: several cases must be distinguished; we cannot always access d\[c\[i\]\].
]

Invariant to remember: if a factor $u$ belongs to the dictionary, then so do all of its prefixes.
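A compact executable version of the compression/decompression pair above, with the alphabet passed explicitly and the tricky case handled (a code word that refers to the dictionary entry currently being built). This is a sketch, not the course's reference implementation; the names `lz_compress` and `lz_decompress` are ours.

```python
def lz_compress(T, S):
    """LZW-style compression of T over alphabet S; returns the list of output symbols."""
    d = {S[i]: i for i in range(len(S))}
    out = []
    i = 0
    while i < len(T):
        j = i + 1
        # extend the factor while it is known to the dictionary
        while j <= len(T) and T[i:j] in d:
            j += 1
        if j <= len(T):
            d[T[i:j]] = len(d)  # register the first unknown factor
        out.append(d[T[i:j - 1]])
        i = j - 1
    return out


def lz_decompress(code, S):
    """Inverse of lz_compress: rebuilds the same dictionary on the fly."""
    d = {i: S[i] for i in range(len(S))}
    previous = d[code[0]]
    out = [previous]
    for sym in code[1:]:
        if sym in d:
            current = d[sym]
        else:
            # sym refers to the entry being built: factor = previous + previous[0]
            current = previous + previous[0]
        d[len(d)] = previous + current[0]  # same entry the compressor registered
        out.append(current)
        previous = current
    return "".join(out)
```

On the course example, `lz_compress("ATCATGTATCATGTAA", "ATCG")` and `lz_decompress` round-trip the text, and the dictionary satisfies the prefix-closure invariant stated above.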
https://github.com/QuadnucYard/touying-theme-seu
https://raw.githubusercontent.com/QuadnucYard/touying-theme-seu/main/README.zh.md
markdown
MIT License
# Southeast University Slide Template (typst)

[[English]](./README.md)

A slide template (typst) for Southeast University, implemented in the university style of [Touying](https://touying-typ.github.io/touying/zh/).

## Themes

Currently only one Beamer-style theme is provided, `themes/seu-beamer.typ`, modeled after <https://github.com/TouchFishPioneer/SEU-Beamer-Slide>; a migration example is given in `examples/beamer-sms.typ`.

Contributions of more templates are welcome!

## License

Licensed under the [MIT License](LICENSE).

## Related Projects

- seu-thesis-typst by [TideDra](https://github.com/TideDra): <https://github.com/TideDra/seu-thesis-typst>
- SEU-Typst-Template by [csimide](https://github.com/csimide): <https://github.com/csimide/SEU-Typst-Template>
- seuthesis2024b by [Teddy-van-Jerry](https://github.com/Teddy-van-Jerry): <https://github.com/Teddy-van-Jerry/seuthesis2024b>
https://github.com/Myriad-Dreamin/typst.ts
https://raw.githubusercontent.com/Myriad-Dreamin/typst.ts/main/fuzzers/corpora/layout/container_01.typ
typst
Apache License 2.0
#import "/contrib/templates/std-tests/preset.typ": *
#show: test-page

// Test block sizing.
#set page(height: 120pt)
#set block(spacing: 0pt)
#block(width: 90pt, height: 80pt, fill: red)[
  #block(width: 60%, height: 60%, fill: green)
  #block(width: 50%, height: 60%, fill: blue)
]
https://github.com/Skimmeroni/Appunti
https://raw.githubusercontent.com/Skimmeroni/Appunti/main/Matematica4AI/Math4AI_notes.typ
typst
Creative Commons Zero v1.0 Universal
#set text(
  font: "Gentium Plus",
  size: 10pt,
  lang: "en"
)
#set page(
  paper: "a4",
  header: align(right)[_Advanced Foundations of Mathematics for AI_],
  numbering: "1"
)
#set par(
  justify: true
)
#set heading(
  numbering: "1."
)

#import "Math4AI_definitions.typ": *
#show: thmrules.with(qed-symbol: $square$)
#show par: set block(spacing: 0.55em)

#outline(indent: auto)
#pagebreak()

= Linear Algebra

== Matrices
#include "LinAlg/Matrices.typ"

== Vector Spaces
#include "LinAlg/Spaces.typ"

== Bases and Dimension
#include "LinAlg/Bases.typ"

== Linear Transformations
#include "LinAlg/Transformations.typ"

== Eigenvalues and eigenvectors
#include "LinAlg/Eigen.typ"

== Spectral Theorem
#include "LinAlg/Spectral.typ"

== Cholesky decomposition
#include "LinAlg/Decomposition.typ"
https://github.com/gianzamboni/cancionero
https://raw.githubusercontent.com/gianzamboni/cancionero/main/theme/project.typ
typst
#let docFontSize = 16pt #let titleTest(size: 1.75em) = (font: "Rancho", size: size, weight: 700) #let makeFullPageTitle(it) = { set text(..titleTest()) if(it.body == [Índice]) { it.body linebreak() v(0pt) } else { set text(size: 1.5em) pagebreak(weak: true) align(center + horizon, it.body) pagebreak() } } #let sectionTitle(it) = { set align(left) set text(..titleTest()) it.body v(-0.5em) } #let subsection(it) = { set text(..titleTest(size: 1.25em)) line(length: 100%, stroke: 1pt) v(-1em) align(right, it.body) } #let project(title: "", authors: (), date: none, body) = { set document( author: authors, title: title ) set enum(numbering: "1)") set page(paper: "a4", margin: ( x: 2cm, y: 1.5cm )) set text(size: docFontSize, font: "Arial") show heading.where(level: 1): makeFullPageTitle show heading.where(level: 2): sectionTitle show heading.where(level: 3): subsection show strong: set text(font: "Rancho") set table( fill: (x, y) => if y == 0 { gray.lighten(40%) }, align: right, ) show table.cell.where(y: 0): strong makeFullPageTitle((body: title)) outline(title:[Índice], depth: 2 , indent: 10pt) body }
https://github.com/kdog3682/mathematical
https://raw.githubusercontent.com/kdog3682/mathematical/main/0.1.0/src/dialogues/index.typ
typst
#import "layouts/singleton-meta-layout.typ": singleton-meta-layout #import "dialogue-layout.typ": dialogue-layout // #let singleton = base-dialogue.with(meta-layout: singleton-meta-layout) // /home/kdog3682/projects/typst/mathematical/0.1.0/src/academic/multiple-choice-question.typ // /home/kdog3682/projects/typst/mathematical/0.1.0/src/academic/index.typ // access the homework type #let run(items, meta) = { let meta-layout = layouts.at(meta.kind) set page(..pages.dialogue) meta-layout(meta) dialogue-layout(items) }
https://github.com/zenor0/simple-neat-typst-cv
https://raw.githubusercontent.com/zenor0/simple-neat-typst-cv/master/README.md
markdown
MIT License
# SIMPLE-NEAT-CV

A personal Typst résumé template.

## Preview

![simple-neat-cv](./example.png)

## Usage

> [!NOTE]
> Before use, make sure `typst` is installed and that the Alibaba PuHuiTi 3.0 font is available on your system.

First, clone this repository:

```bash
git clone
```

### Local use

> [!NOTE]
> WIP; this method may not work on every system.

Run

```bash
make
```

If the command succeeds, the template is registered with Typst's package manager and can be accessed as `@local/simple-neat-cv:0.1.0`. Run `typst init @local/simple-neat-cv:0.1.0` to initialize a résumé project.

### Online use

> [!NOTE]
> WIP

## LICENSE

This project is open-sourced under the MIT license; derivative works are welcome.

The icons used in this project come from [simpleicons](https://simpleicons.org/) and [iconpark](https://iconpark.oceanengine.com/).
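Presumably the `make` target above copies the template into Typst's local package directory. The exact Makefile behavior is an assumption, but the destination follows Typst's documented local-package layout, `{data-dir}/typst/packages/{namespace}/{name}/{version}`; a sketch:

```shell
# Where a local Typst package named simple-neat-cv, version 0.1.0, is expected
# to live on Linux (macOS uses ~/Library/Application Support as the data dir).
data_dir="${XDG_DATA_HOME:-$HOME/.local/share}"
pkg_dir="$data_dir/typst/packages/local/simple-neat-cv/0.1.0"
echo "$pkg_dir"

# A make target could then simply copy the template there, e.g.:
#   mkdir -p "$pkg_dir" && cp -r ./* "$pkg_dir"
```

After that, `@local/simple-neat-cv:0.1.0` resolves to this directory.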
https://github.com/barrel111/readings
https://raw.githubusercontent.com/barrel111/readings/main/notes/online.typ
typst
#import "@local/preamble:0.1.0": *
#import "@preview/lovelace:0.2.0": *
#show: setup-lovelace

#show: project.with(
  course: "CS",
  sem: "Summer",
  title: "Online Algorithms",
  subtitle: "",
  // authors: ("<NAME>",),
)

#let OPT = `OPT`
#let ALG = `ALG`

= Introduction

== Approximation Algorithms

#definition(
  "Optimization Problem",
)[
  An _optimization problem_ $Pi$ is a $5$-tuple $(cal(I), cal(O), s, q, g)$:

  + $cal(I)$: the set of _instances_.
  + $cal(O)$: the set of _solutions_.
  + $s: cal(I) arrow cal(P(cal(O)))$: for every instance $I in cal(I)$, $s(I) subset.eq cal(O)$ denotes the set of feasible solutions for $I$.
  + $q: cal(I) times cal(O) arrow bb(R)$: for every instance $I in cal(I)$ and every feasible solution $O in s(I)$, $q(I, O)$ denotes the measure of $I$ and $O$.
  + $g in {max, min}$.

  An _optimal solution_ for an instance $I in cal(I)$ of $Pi$ is a solution $OPT(I) in s(I)$ such that $ q(I, OPT(I)) = g{q(I, O) bar O in s(I)}. $ If $g = min$, we call $Pi$ a _minimization problem_ and refer to $q$ as the _cost_. If $g = max$, we call $Pi$ a _maximization problem_ and refer to $q$ as the _gain_.
]

#definition(
  "Consistent",
)[An algorithm is _consistent_ for an (optimization) problem $Pi$ if it computes a feasible solution for every given instance.]

#definition(
  "Approximation Algorithms",
)[Let $Pi$ be an optimization problem, and let $ALG$ be a consistent algorithm for $Pi$. For $r >= 1$, $ALG$ is an _$r$-approximation algorithm_ for $Pi$ if, for every $I in cal(I)$, $ q(OPT(I)) <= r dot.c q(ALG(I)) $ if $Pi$ is a maximization problem, or $ q(ALG(I)) <= r dot.c q(OPT(I)) $ if $Pi$ is a minimization problem. The _approximation ratio_ of $ALG$ is defined as $ r_ALG = inf { r >= 1 bar ALG "is an" r"-approximation algorithm for" Pi}. $]

#remark[Sometimes we define $r$-approximation so that $r in (0, 1]$.
] #definition( "Simple Knapsack Problem", )[The _simple knapsack problem_ is a maximization problem specified as follows: An instance $I$ is given by a sequence of $n + 1$ numbers-- $w_1, w_2, dots, w_n, W$. A feasible set is any set $O subset.eq [n]$ such that $ sum_(i in O) w_i <= W. $ The gain of a solution $O$ and a corresponding instance $I$ is given by $ q(I, O) = sum_(i in O) w_i. $ The goal is to maximize this gain.] #remark[Think of the numbers $w_i$ as being weights of items and $W$ as being the maximum weight capacity of the knapsack.] #algorithm(caption: [`KN-GREEDY`], pseudocode-list[ + $O = emptyset$ + $s = 0$ + *sort* such that $w_1>= w_2>= dots.c >= w_n$. + *while* $i < n$ and $s + w_(i + 1)<= W$ *do* + $O = O union w_(i + 1)$ + $s = s + w_(i + 1)$ + $i = i + 1$ + *end* + *return* $O$ ]) #prop[`KN-Greedy` is a polynomial-time $2$-approximation algorithm for the simple knapsack problem.] #proof[ Running time is $cal(O)(n)$, which is polynomial. The loop invariants ensure that the algorithm is consistent. Now, we show that this is a $2$-approximation algorithm. Consider the labeling of the weights after sorting i.e. assume $w_1 >= w_2>= dots.c >= w_n$. If the algorithm accepts all the items, then this is clearly the optimal solution too, $OPT = ALG$. Suppose that the algorithm rejects the $(r + 1)$-th item. If $sum_(i = 1)^r w_i >= W/2$ then we are done as $ ALG = sum_(i = 1)^r w_i >= W/2 >= OPT/2. $ Next, if $sum_(i = 1)^r w_i < W/2$ then note that $w_(r + 1) <= w_r < W/2$. Since the $(r + 1)$-th item is rejected, $ ALG = sum_(i = 1)^r w_i > W - w_(r+1) > W/2 >= OPT/2. $ To see that this bound is tight with $r = 2$, we construct a family of instances whose approximation ratio approaches $2$. Consider the instance with $n = 3$, with weights $ W/2 + 1, W/2, W/2 $ and maximum weight capacity $W$. 
Then, the algorithm always accepts the item with weight $W/2 + 1$ and cannot accept any more whereas the optimum solution would accept the two items with weight $W/2$. The approximation ratio is then, $ OPT/ALG = W/(W/2 + 1) = 2 dot W/(W + 1) $ As $W arrow infinity$, $OPT"/"ALG to 2$. ] #definition( "FPTAS", )[A _fully polynomial-time approximation scheme_ is an algorithm that takes as input an $epsilon> 0$ and an instance of a (optimization) problem and returns an output value that is at least $(1 - epsilon) OPT$ and at most $(1 + epsilon) OPT$. Importantly, the running time of the algorithm must be polynomial in both the size of the input and $1/epsilon$.] == Online Algorithms #definition( "Online Problem", )[ An _online problem_ $Pi$ is a $5$-tuple $(cal(I), cal(O), s, q, g)$: + $cal(I)$: the set of _instances_. Every instance $I in cal(I)$ is a sequence of _requests_ $I = (x_1, x_2, dots, x_n)$ with $n in bb(N)$. + $cal(O)$: the set of _solutions_. Every output $O in cal(O)$ is a sequence of _answers_ $O = (y_1, y_2, dots, y_n)$ with $n in bb(N)$. + $s: cal(I) arrow cal(P(cal(O)))$: for every instance $I in cal(I)$, $s(I) subset.eq cal(O)$ denotes the set of _feasible solutions_ for $I$. + $q: cal(I) times cal(O) arrow bb(R)$: for every instance $I in cal(I)$ and every feasible solution $O in s(I)$, $q(I, O)$ denotes the measure of $I$ and $O$. + $g in {max, min}$. An _optimal solution_ for an instance $I in cal(I)$ of $Pi$ is a solution $OPT(I) in s(I)$ such that $ q(I, OPT(I)) = g{q(I, O) bar O in s(I)}. $ If $g = min$, we call $Pi$ a _online minimization problem_ and refer to $q$ as the _cost_. If $g = max$ we call $Pi$ a _online maximization problem_ and refer to $q$ as the _gain_. ] #definition( "Online Algorithm", )[Let $Pi$ be an online problem and let $I = (x_1, x_2, dots, x_n)$ be an instance of $Pi$. 
An _online algorithm_ $ALG$ for $Pi$ computes the output $ALG(I) = (y_1, y_2, dots, y_n)$, where $y_i$ only depends on $x_1, x_2, dots, x_i$ and $y_1, y_2, dots, y_(i - 1)$; we also require $ALG(I) in s(I)$, that is, $ALG(I)$ is a feasible solution for $I$.]

#definition(
  "Competitive Ratio",
)[Let $Pi$ be an online problem, and let $ALG$ be a consistent online algorithm for $Pi$. For $c >= 1$, $ALG$ is _$c$-competitive_ for $Pi$ if there is a constant $alpha >= 0$ such that, for every instance $I in cal(I)$, $ q(OPT(I)) <= c dot.c q(ALG(I)) + alpha $ if $Pi$ is an online maximization problem, or $ q(ALG(I)) <= c dot.c q(OPT(I)) + alpha $ if $Pi$ is an online minimization problem. If these inequalities hold with $alpha=0$, we call $ALG$ _strictly $c$-competitive_. $ALG$ is _optimal_ if it is strictly $1$-competitive. The _competitive ratio_ is defined as $ c_ALG = inf {c >= 1 bar ALG "is" c"-competitive for" Pi}. $ If the competitive ratio of $ALG$ is constant and the best that is achievable by any online algorithm for $Pi$, we call $ALG$ _strongly $c$-competitive_.]

#remark[Commonly, we refer to $OPT(I)$ as the _optimal offline solution_. It may not even be achievable online as it may require knowledge of the complete instance!]

#remark[If the competitive ratio of an online algorithm $ALG$ is at most $c$, where $c$ is a constant, we call $ALG$ _competitive_ or, if $alpha=0$, _strictly competitive_. If the algorithm does not possess a competitive ratio with an upper bound that is independent of the input length, we call it _not competitive_. It is fine to call an online algorithm competitive if its competitive ratio depends on some parameter of the problem being studied. This classification isn't always clear cut but is still often helpful.]

_Why do we have the additive constant $alpha$ when defining the competitive ratio?_ The additive constant allows us to ignore finitely many exceptional instances on which the online algorithm may perform poorly.
Particularly, to prove lower bounds, the constant $alpha$ forces us to construct infinitely many instances with increasing costs or gains. The following lemmas and propositions make this notion more explicit. #lemma[Let $Pi$ be an online minimization problem and let $cal(I) = {I_1, I_2, dots}$ be an infinite set of instances of $Pi$ such that $abs(I_i) <= abs(I_(i + 1))$ and such that the number of different input lengths in $cal(I)$ is infinite. Suppose further that $q(OPT) > 0$ on $cal(I)$. If there is an increasing, unbounded function $c: NN^+ arrow RR^+$ such that, $ q(ALG(I_i))/q(OPT(I_i)) >= c(n) "where" n = |I_i|, $ then $ALG$ isn't competitive.]<min-uncompetitive> #proof[ Suppose for contradiction that $ALG$ is competitive, that is $ q(ALG(I_i)) <= c' dot.c q(OPT(I_i)) + alpha. $ Then, $ [c(n) - c'] dot.c q(OPT(I_i)) <= alpha. $ This clearly contradicts the fact that $alpha$ is a constant. ] #lemma[Let $Pi$ be an online minimization problem and let $cal(I) = {I_1, I_2, dots}$ be an infinite set of instances of $Pi$ such that $abs(I_i) <= abs(I_(i + 1))$ and such that the number of different input lengths in $cal(I)$ is infinite. Suppose further that $q(ALG) > 0$ on $cal(I)$. If there is an increasing, unbounded function $c: NN^+ arrow RR^+$ such that, $ q(OPT(I_i))/q(ALG(I_i)) >= c(n) "where" n = |I_i|, $ then $ALG$ isn't competitive.] #remark[Since this is a minimization problem, $q(OPT) > 0$ also implies $q(ALG) > 0$.]<max-uncompetitive> #proof[ Same idea as the prior proof. ] #prop[ Let $Pi$ be an online minimization problem, and let $cal(I) = {I_1, I_2, dots}$ be an infinite set of instances of $Pi$ such that $abs(I_i) <= abs(I_(i + 1))$, and such that the number of different input lengths in $cal(I)$ is infinite. Let $ALG$ be an online algorithm for $Pi$. 
If there is some constant $c >= 1$ such that + $ q(ALG(I_i))/q(OPT(I_i)) >= c, " for every" i in NN^+ $ + $lim_(i to infinity) q(OPT(I_i)) = infinity$ then $ALG$ isn't a $(c - epsilon)$-competitive online algorithm for $Pi$, for any $epsilon > 0$. ]<min-nobetter> #proof[ Suppose for contradiction that $ALG$ is a $(c - epsilon)$-competitive algorithm for some $epsilon > 0$. Then, there is a constant $alpha$ such that $ q(ALG(I_i)) <= (c - epsilon) dot.c q(OPT(I_i)) + alpha \ implies q(ALG(I_i))/q(OPT(I_i)) - alpha/q(OPT(I_i)) <= c - epsilon $ for every $i in NN^+$. By a), the first term above is at least $c$ and by b), we can find instances for which the second term is less than $epsilon$. Thus, we have a contradiction. ] #prop[Let $Pi$ be an online maximization problem, and let $cal(I) = {I_1, I_2, dots}$ be an infinite set of instances of $Pi$ such that $abs(I_i) <= abs(I_(i + 1))$, and such that the number of different input lengths in $cal(I)$ is infinite. Let $ALG$ be an online algorithm for $Pi$. If there is some constant $c >= 1$ such that + $ q(OPT(I_i))/q(ALG(I_i)) >= c, " for every" i in NN^+ $ + $lim_(i to infinity) q(OPT(I_i)) = infinity$ then $ALG$ isn't a $(c - epsilon)$-competitive online algorithm for $Pi$, for any $epsilon > 0$.]<max-nobetter> #proof[ Suppose for contradiction that $ALG$ is a $(c - epsilon)$-competitive online algorithm for some $epsilon > 0$. Then, there is some constant $alpha$ such that $ q(OPT(I_i))/q(ALG(I_i)) - alpha/q(ALG(I_i)) <= c - epsilon $ for every $i in NN^+$. Furthermore, by a) and b), if $q(ALG(I_i))$ were bounded by a constant, it wouldn't be competitive at all. Thus, it is fair to assume that $lim_(i to infinity) q(ALG(I_i)) = infinity$. By a), the first term above is at least $c$ and from the previous sentence, we can find instances for which the second term is less than $epsilon$. Thus, we have a contradiction. ] #remark[Propositions 1.2.1 and 1.2.2 need to be carefully interpreted. 
For example, they don't rule out the possibility of a $(c - 1/n)$-competitive algorithm.]

== Paging: Basics

#definition(
  "Paging",
)[The _paging problem_ is an online minimization problem. Suppose there are $m in NN^+$ memory pages $p_1, p_2, dots, p_m$, which are stored in the main memory. An instance is a sequence $I = (x_1, x_2, dots, x_n)$ such that $x_i in {p_1, p_2, dots, p_m}$, for all $i in [n]$. This means that page $x_i$ is requested in time step $T_i$. An online algorithm $ALG$ for paging maintains a _cache_ memory of size $k$ with $k < m$, represented by the tuple $B_i = (p_(j_1), p_(j_2), dots, p_(j_k))$ for time step $T_i$. Initially, the cache is initialized as $B_0 = (p_1, p_2, dots, p_k)$, that is, with the first $k$ pages. If in some time step $T_i$, a page $x_i$ is requested and $x_i in B_(i - 1)$, $ALG$ outputs $y_i = 0$. Conversely, if $x_i in.not B_(i - 1)$, $ALG$ has to choose a page $p_j in B_(i - 1)$, which is then removed from the cache to make room for $x_i$. In this case, $ALG$ outputs $y_i = p_j$ and the new cache content is $B_i = (B_(i - 1) backslash p_j) union x_i$. The cost is defined as $ q(ALG(I)) = abs({i in [n] bar y_i != 0 }) $ and the goal is to minimize it.]

#definition(
  $k"-Phase Partition"$,
)[
  Let $I = (x_1, x_2, dots, x_n)$ be an arbitrary instance of paging. A _$k$-phase partition of $I$_ assigns the requests from $I$ to consecutive disjoint phases $P_1, P_2, dots, P_N$ such that

  + Phase $P_1$ starts with the first request for a page that is not initially in the cache. Then, $P_1$ contains a maximum-length subsequence of $I$ that contains at most $k$ distinct pages.
  + For any $i$ with $2 <= i <= N$, phase $P_i$ is a maximum-length subsequence of $I$ that starts right after $P_(i - 1)$ and again contains at most $k$ distinct pages.
]

One possible strategy is given by the following algorithm.

#algorithm( caption: [`FIFO`], $"The cache is organized as a queue.
Whenever a page must be evicted, the one residing in the" \ "cache for the longest time is chosen. The first " k "pages may be removed arbitrarily."$,
)

We analyze this algorithm by considering its behavior on the phases of a $k$-phase partition.

#lemma[Any optimal solution `OPT` must make at least one page fault for every phase in a $k$-phase partition.]<opt-paging>

#proof[Let $I = (x_1, x_2, dots, x_n)$ be any instance of paging and consider $I$'s $k$-phase partition $P_1, P_2, dots, P_N$. WLOG, assume $x_1 in.not {p_1, p_2, dots, p_k}$. Shift the $k$-phase partition ${P_i}_( i in [N] )$ by a single page to form the partition ${P'_i}_(i in [N])$. Note that the last phase $P'_N$ may be empty but all others have the same length as before (i.e. $abs(P_i) = abs(P'_i)$). For $i in [N - 1]$, all phases $P_i$ had maximum length (with respect to containing $k$ distinct pages) and thus, the corresponding phases $P'_i$ contain $k$ pages that differ from the page $p'$ that was last requested before the start of $P'_i$. Since $p'$ is in the cache of $OPT$ at the beginning of $P'_i$ and there are $k$ more distinct requests different from $p'$, $OPT$ has to cause one page fault during $P'_i$. This gives us $N - 1$ page faults for $OPT$ plus an additional one on $x_1$ at the beginning, before any of the phases ${P'_i}$.
]

#prop[`FIFO` is strictly $k$-competitive for paging.]<FIFO>

#proof[Let $I = (x_1, x_2, dots, x_n)$ be any instance of paging and consider $I$'s $k$-phase partition $P_1, P_2, dots, P_N$. WLOG, assume $x_1 in.not {p_1, p_2, dots, p_k}$. We start by showing that in any fixed phase $P_i$, $i in [N]$, `FIFO` does not cause more than $k$ page faults during $P_i$. By definition, at most $k$ distinct pages are requested in this phase. Suppose $p$ is the first page in phase $P_i$ that causes a page fault for `FIFO`. Further suppose that $C_i$ is the set of all pages added to the cache in phase $P_i$. Then $p$ must be the first element of $C_i$ that is evicted from the cache.
However, when $p$ is loaded into the cache there are $k - 1$ other pages in the cache that must be removed before $p$. So, $p$ must remain in the cache for the next $k - 1$ page faults. Every element of $C_i - p$ can cause at most one page fault in the next $k - 1$ page faults. Since at most $k - 1$ more distinct pages are requested in $P_i$ after $p$, it must be the case that phase $P_i$ ends within the next $k - 1$ page faults too. Thus, no element causes more than one page fault in phase $P_i$. Consequently, there are at most $k$ page faults in total during any one phase. Thus, by @opt-paging, $ ALG <= k dot.c N <= k dot OPT. $
]

In fact, this is the best we can hope for.

#prop[No online algorithm for paging is better than $k$-competitive.]

#proof[
For convenience, take $n$ to be a multiple of $k$. Consider instances with $k + 1$ pages $p_1, p_2, dots, p_k, p_(k + 1)$. Recall that the cache is initialized as $(p_1, p_2, dots, p_k)$. Since the cache size is $k$, there is exactly one page, at any given time step, that isn't in the cache of $ALG$. The idea is that the adversary always requests precisely this uncached page to obtain an instance of length $n$. Since the adversary knows $ALG$, it can always foresee which page will be replaced by $ALG$ if a page fault occurs (for example, it can just run $ALG$ to determine this). More formally, the following algorithm can construct the instance for us.

#algorithm(
  caption: [Paging Adversary],
  pseudocode-list[
    + $I = (p_(k + 1))$
    + $i = 1$
    + *while* $i <= n - 1$ do
      + $p = "the page that is currently not in the cache of" ALG$
      + $I = I union p$
      + $ i = i + 1$
    + *end*
    + *return* $I$
  ]
)

By construction, this causes a page fault at every time step. Hence, $ALG$ has a total cost of $n$ on instance $I$. Now, we study the optimal cost $OPT$ for this instance. We divide the input into distinct consecutive phases such that each phase consists of exactly $k$ time steps. Note that $ALG$ makes $k$ page faults in every phase.
So, we want to show that $OPT$ makes at most one page fault in every phase. For any phase $P_i$, consider the first time step $T_j$, $j in [(i - 1)k + 1, i k]$, wherein the first page fault occurs. Note that $P_i$ consists of exactly $i k - j <= k - 1$ more time steps, so at most $k - 1$ more distinct pages are requested. Since the cache stores $k$ pages, there must be at least one page $p'$ in the cache that isn't requested during this phase and $OPT$ chooses $p'$ to evict. If there is more than one candidate for $p'$, $OPT$ breaks the tie by evicting the page whose next request lies farthest in the future (this is the _longest forward distance_ rule, which is known to yield an optimal offline strategy for paging). We have, on this instance, $ ALG(I) = n = k dot n/k >= k dot OPT(I). $ Now if, as $n arrow infinity$, the number of page faults caused by $OPT$ is constant or bounded, $ALG$ isn't competitive at all (@min-uncompetitive). On the other hand, if the number of page faults by $OPT$ increases with $n$, then $ALG$ cannot be better than $k$-competitive (@min-nobetter).
]

A natural counterpart to `FIFO` is the following strategy.

#algorithm(
  caption: [`LIFO`],
  $"The cache is organized as a stack. Whenever a page must be evicted, the one that was most" \ "recently loaded into the cache is chosen. On the first page fault, an arbitrary page may be " \ "removed."$,
)

This is a terrible strategy! Intuitively, this is because we end up using only one of the cells in the cache.

#prop[`LIFO` is not competitive for paging.]

#proof[
We use @min-uncompetitive by showing that, for every $n$, there is an instance of paging of length $n$ such that $q(mono("LIFO"))/q(OPT)$ grows proportionally with $n$. Consider an instance of length $n$ on $m = k + 1$ total pages where the adversary always requests the same two pages.
Since the cache is initialized with pages $p_1, dots, p_k$, the adversary starts off by requesting page $p_(k + 1)$. As a result, `LIFO` must evict a page $p_i$ from the cache. Since the adversary knows that `LIFO` chooses $p_i$, it requests it in time step $T_2$ and `LIFO` removes $p_(k + 1)$. The adversary keeps requesting these two pages, constructing the instance $I$, $ (p_(k + 1), p_i, p_(k + 1), p_i, dots) $ We see that on this instance, `LIFO` causes a page fault at every time step. On the other hand, the optimal solution $OPT$ just removes a page $p_j$ with $j != i$ in time step $T_1$ and has overall cost $1$ because both $p_i$ and $p_(k + 1)$ will be in the cache from this time step onwards.
]

Intuitively, we may expect the frequency of access to be a good heuristic. This gives us the following strategy dubbed _Least Frequently Used_.

#algorithm(
  caption: [`LFU`],
  $"On a page fault, the page that was, so far, least frequently used is removed."$,
)

Unfortunately, this isn't much better in the worst case.

#prop[`LFU` is not competitive for paging.]

#proof[
Again, we want to employ @min-uncompetitive. The idea is very similar to the adversarial construction for `LIFO`. Consider the instance $I$ given by, $ "("underbrace(p_1\, p_1\, dots\, p_1, n' "requests"), underbrace(p_2 \, p_2\, dots \, p_2, n' "requests"), dots ,underbrace(p_(k - 1)\, p_(k - 1)\, dots\, p_(k - 1), n' "requests"), underbrace(p_(k + 1)\, p_k\, dots\, p_(k + 1) \, p_k, 2(n' - 1) "requests") ")" $ of length $n' (k - 1) + 2(n' - 1)$. By the initial contents of the cache, no page faults occur in the first $n'(k - 1)$ time steps for any online algorithm. After that, all pages in the cache except for $p_k$ have been requested $n'$ times. Thus when $p_(k + 1)$ is requested, at time step $T_(n'(k - 1) + 1)$, `LFU` evicts page $p_k$. Next, the adversary requests page $p_k$ and `LFU` evicts $p_(k + 1)$ as it has been requested only once.
This is iterated $n' - 1$ times until both $p_k$ and $p_(k + 1)$ have been requested $n' - 1$ times each. Note that `LFU` (by induction) makes a page fault in each of the last $2(n' - 1)$ time steps. On the other hand, the optimal solution $OPT(I)$ simply removes a page $p_j$ with $j != k$ in time step $T_(n'(k - 1) + 1)$ and causes no more page faults. Since $ n' = (n + 2)/(k + 1), $ the competitive ratio of `LFU` can be bounded from below by $ 2(n' - 1) = 2 dot (n - k + 1)/(k + 1) $ which is a linear function of $n$.
]

#remark[Taking a closer look at the previous propositions, we see that the lower bound on the competitive ratio of `LIFO` is stronger than the one for `LFU` by a factor of $(k + 1)/2$.]

== Paging: Marking

With some basic results out of the way, we shift our focus to a general class of algorithms for paging known as _marking algorithms_. These play an important role in the context of randomized algorithms for paging.

#definition("Marking Algorithm")[A _marking algorithm_ works in phases and _marks_ pages that were already requested; it only removes pages that are not marked. If all pages in the cache are marked and a page fault occurs, the current phase ends and a new one starts by first unmarking all pages in the cache. Before processing the first request, all pages get marked such that the first request that causes a page fault starts a new phase. In pseudocode, the general form is as shown below.
#algorithm( caption: "Marking Algorithm", pseudocode-list[ + *mark* all pages in the cache #comment[first page fault starts new phase] + *for* every request $x$ *do* + *if* $x in "cache"$ + *if* $x$ is unmarked + *mark* $x$ + *output* "0" + *else* + *if* there is no unmarked page + #line-label(<reset-mark>) *unmark* all pages in the cache #comment[start new phase] + #line-label(<marking-choice>) $p =$ page somehow chosen among all unmarked cached pages + #line-label(<remove-unmark>) *remove* $p$ and *insert* $x$ at the old position of $p$ + #line-label(<pagefault-mark>)*mark* $x$ + *output* "$p$" + *end* ] )] Marking algorithms are really nice! #let PM = k => $P_(mono("MARK"), #k)$ #lemma[ There are at most $k$ page faults in one marking phase.]<k-faults-in-phase> #proof[A page that has been marked in one phase never gets unmarked until the next phase starts (@reset-mark). Furthermore, throughout a phase, a marked page remains in the cache as only unmarked pages are ever removed (@remove-unmark). Since there are at most $k$ pages in the cache, there are at most $k$ pages that are marked in a single phase. In any one such phase $PM(i)$, note that every page fault leads to a page being marked (@pagefault-mark). So, there are at most $k$ page faults in one marking phase too. ] #prop[Every marking algorithm is strictly $k$-competitive.] #proof[Let `MARK` be a fixed marking algorithm. Let $I$ denote the given input and consider its $k$-phase partition into $N$ phases $P_1, dots, P_N$. By the same argument as @FIFO, we conclude that any optimal algorithm $OPT$ makes at least $N$ page faults in total on $I$. Now, we show that `MARK` makes at most $k$ page faults in one fixed phase $P_i$ with $i in [N]$. We denote the $overline(N)$ phases defined by `MARK` by $PM(1), PM(2), dots, PM(overline(N))$. First, we claim that both $N = overline(N)$ and that $P_i = PM(i)$ for $i in [N]$. 
The result we want follows from verifying these two claims since `MARK` makes at most $k$ page faults in one phase $PM(i)$ (@k-faults-in-phase). Start by observing that $P_1$ and $PM(1)$ start with the first request that causes a page fault. Every phase $P_i$ except the last one is, by definition, a maximum-length sequence of $k$ distinct requests. Every requested page gets marked by `MARK` after being requested. When $k$ distinct pages have been requested, all pages in `MARK`'s cache are marked. With the $(k + 1)$-th distinct page $p'$ requested since the beginning of $P_(i)$, a new phase $P_(i + 1)$ starts. In this time step, a new phase $PM(i + 1)$ is also started as there is no unmarked page left in its cache to replace with $p'$. So, the phases $P_i$ and $PM(i)$ coincide.
]

There are quite a few algorithms that fit into the marking paradigm. Consider the _Least Recently Used_ strategy given below.

#algorithm(
  caption: `LRU`,
  $"On a page fault, the page whose last request was least recent is removed. The first " k \ "pages may be removed arbitrarily."$
)

#prop[`LRU` is a marking algorithm.]

#proof[
We want to show that `LRU` never removes a page that is currently marked by some marking algorithm. Suppose for contradiction that there exists an instance $I$ such that `LRU` removes a marked page. Let $p$ be the page for which this happens for the first time, and denote the corresponding time step by $T_(j)$ with $j in [n]$ during some phase $P_i$ with $i in [N]$. Since $p$ is marked, it must have been requested before during $P_i$, say in time step $T_(j')$ with $j' < j$. After that, $p$ was the most recently used. So, if `LRU` removes $p$ in time step $T_j$, there must have been $k$ distinct requests following time step $T_(j')$ -- the first $(k - 1)$ cause $p$ to become least recently used and on the $k$-th request $p$ is removed by `LRU`.
However, this implies that $P_i$ consists of at least $k + 1$ different requests, which contradicts the definition of a $k$-phase partition.]

== Paging: Lookahead and Resource Augmentation

So far, we have been dealing with analyzing the worst-case performance of our online algorithms. However, it might be realistic to assume some additional knowledge about the input. A straightforward approach to give an online algorithm an advantage compared to the classical model is to allow it to have some _lookahead_, i.e., to allow it to look into the future for $ell$ time steps. However, in this context, it doesn't give us much improvement.

#prop[No online algorithm with lookahead $ell$ for paging is better than $k$-competitive.]

#proof[
Consider paging with lookahead $ell$. That is to say, in any time step, an online algorithm $ALG_ell$ sees the current request together with the subsequent $ell$ requests. Since the adversary also knows $ALG_ell$, it surely knows $ell$ and may use the following construction. The idea is to repeat each request $ell$ times such that $ALG_ell$ is still in the dark at the time step where it has to replace a page. Consider an instance with $m = k + 1$ pages. The first $ell + 1$ requests all ask for the only page that is not in the cache initially, i.e., $p_(k + 1)$. In the first time step, $ALG_ell$ must replace a page, but it cannot see which page is requested in time step $T_(ell + 2)$. Thus, the additional knowledge is completely useless and the adversary can next request the page $p_i$ that $ALG_ell$ replaced in time step $T_1$. When $ALG_ell$ must find a page to replace with $p_i$, it only knows the prefix $ (p_(k + 1), underbrace(p_(k + 1) \, dots \, p_(k + 1), ell "requests"), p_i, underbrace(p_i\, dots\, p_i, ell "requests") ")" $ of the input. Again, this doesn't help. Repeating this construction, the adversary can ensure that $ALG_ell$ causes a page fault every $ell + 1$ time steps. On the other hand, each consecutive segment of $k(ell + 1)$ time steps contains at most $k$ distinct pages, so $OPT$ can always evict the one page that is not requested in the current segment and thus makes at most one page fault every $k(ell + 1)$ time steps.
So, for such inputs of length $n$, $ALG_ell$ causes $n slash (ell + 1)$ page faults while $OPT$ causes at most $n slash (k(ell + 1))$ page faults. The ratio between the two is at least $k$, and since both fault counts grow with $n$, $ALG_ell$ cannot be better than $k$-competitive by @min-nobetter.]

Another way to empower online algorithms against the adversary is called _resource augmentation_, wherein we allow the online algorithm to use more resources than the optimal offline algorithm.

= Randomized Algorithms

== Randomized Online Algorithms

== Yao's Principle

== Paging: Randomized Algorithms

== Ski-Rental Problem
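The paging bounds above are easy to check empirically. The following self-contained Python sketch (not part of the notes; all function names are my own) implements `FIFO`, the adversarial request sequence from the lower-bound proof, and the offline longest-forward-distance (Belady) strategy, so that the $k$-competitiveness gap can be observed directly:

```python
def fifo_faults(requests, k, initial):
    """Simulate FIFO paging; return the number of page faults."""
    cache = list(initial)  # front of the list = page resident longest
    faults = 0
    for page in requests:
        if page not in cache:
            faults += 1
            cache.pop(0)       # evict the oldest resident page
            cache.append(page)
    return faults

def adversary(k, n):
    """Build a length-n request sequence over k+1 pages that always asks
    for the one page missing from FIFO's cache (as in the lower bound)."""
    cache = list(range(k))     # pages 0..k-1 cached initially; page k is out
    requests = []
    for _ in range(n):
        missing = next(p for p in range(k + 1) if p not in cache)
        requests.append(missing)
        cache.pop(0)           # mirror FIFO's eviction
        cache.append(missing)
    return requests

def belady_faults(requests, k, initial):
    """Offline optimum: evict the page whose next request is farthest away."""
    cache = set(initial)
    faults = 0
    for i, page in enumerate(requests):
        if page in cache:
            continue
        faults += 1
        if len(cache) >= k:
            def next_use(q):
                for j in range(i + 1, len(requests)):
                    if requests[j] == q:
                        return j
                return float("inf")
            cache.remove(max(cache, key=next_use))
        cache.add(page)
    return faults

k, n = 4, 40
reqs = adversary(k, n)
initial = list(range(k))
print(fifo_faults(reqs, k, initial))    # 40: FIFO faults on every request
print(belady_faults(reqs, k, initial))  # at most n/k = 10, as in the k-phase bound
```

On the adversarial sequence, FIFO faults in every step while the offline strategy faults at most once per $k$ steps, matching the ratio of $k$ in the proof.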
https://github.com/SabrinaJewson/cmarker.typ
https://raw.githubusercontent.com/SabrinaJewson/cmarker.typ/main/README.md
markdown
MIT License
<picture> #set document(title: "cmarker.typ") #set page(numbering: "1", number-align: center) #set text(lang: "en") #align(center, text(weight: 700, 1.75em)[cmarker.typ]) #set heading(numbering: "1.") #show link: c => text(underline(c), fill: blue) #set image(height: 2em) #show par: set block(above: 1.2em, below: 1.2em) #align(center)[https://github.com/SabrinaJewson/cmarker.typ] #"</picture> <!--".slice(0,0) #import "lib.typ" as cmarker #let markdown = read("README.md") #cmarker.render( markdown.slice(markdown.position("</picture>") + "</picture>".len()), h1-level: 0, smart-punctuation: false, ) /*--> <!--typst-begin-exclude--> # cmarker <!--typst-end-exclude--> This package enables you to write CommonMark Markdown, and import it directly into Typst. <!--typst-begin-exclude--> <table> <tr> <th><code>simple.typ</code></th> <th><code>simple.md</code></th> </tr> <tr> <td> ```typst #import "@preview/cmarker:0.1.1" #cmarker.render(read("simple.md")) ``` </td> <td> ```markdown # We can write Markdown! *Using* __lots__ ~of~ `fancy` [features](https://example.org/). ``` </td> </tr> </table> <table> <tr><th><code>simple.pdf</code></th></tr> <tr><td><img src="./examples/simple.png"></td></tr> </table> <!--typst-end-exclude--> <!--raw-typst #table( columns: (1.2fr, 1fr), [`simple.typ`], [`simple.md`], [```typst #import "@preview/cmarker:0.1.1" #cmarker.render(read("simple.md")) ```], [```markdown # We can write Markdown! *Using* __lots__ ~of~ `fancy` [features](https://example.org/). ```], ) #table(columns: (100%), [`simple.pdf`], image("./examples/simple.png", height: auto)) --> This document is available in [Markdown](https://github.com/SabrinaJewson/cmarker.typ/tree/main#cmarker) and [rendered PDF](https://github.com/SabrinaJewson/cmarker.typ/blob/main/README.pdf) formats. 
## API

We offer a single function:

```typc
render(
  markdown,
  smart-punctuation: true,
  blockquote: none,
  math: none,
  h1-level: 1,
  raw-typst: true,
  scope: (:),
  show-source: false,
) -> content
```

The parameters are as follows:

- `markdown`:
  The [CommonMark](https://spec.commonmark.org/0.30/) Markdown string to be processed.
  Parsed with the [pulldown-cmark](https://docs.rs/pulldown-cmark) Rust library.
  You can set this to `read("somefile.md")` to import an external markdown file;
  see the [documentation for the read function](https://typst.app/docs/reference/data-loading/read/).
  - Accepted values: Strings and raw text code blocks.
  - Required.
- `smart-punctuation`:
  Automatically convert ASCII punctuation to Unicode equivalents:
  - nondirectional quotations (" and ') become directional (“” and ‘’);
  - three consecutive full stops (...) become ellipses (…);
  - two and three consecutive hyphen-minus signs (-- and ---) become en and em dashes (– and —).

  Note that although Typst also offers this functionality, this conversion is done through the Markdown parser rather than Typst.
  - Accepted values: Booleans.
  - Default value: `true`.
- `blockquote`:
  A callback to be used when a blockquote is encountered in the Markdown,
  or `none` if blockquotes should be treated as normal text.
  Because Typst does not support blockquotes natively, the user must configure this.
  - Accepted values: Functions accepting content and returning content, or `none`.
  - Default value: `none`.

  For example, to display a black border to the left of the text one can use:

  ```typc
  box.with(stroke: (left: 1pt + black), inset: (left: 5pt, y: 6pt))
  ```

  <!--raw-typst
  #(box.with(stroke: (left: 1pt + black), inset: (left: 5pt, y: 6pt)))[which displays like this.]
  -->
- `math`:
  A callback to be used when equations are encountered in the Markdown,
  or `none` if they should be treated as normal text.
  Because Typst does not support LaTeX equations natively, the user must configure this.
  - Accepted values: Functions that take a boolean argument named `block` and a positional string argument (often, the `mitex` function from [the mitex package](https://typst.app/universe/package/mitex)), or `none`.
  - Default value: `none`.

  For example, to render math equations as Typst math blocks, one can use:

  ```typc
  #import "@preview/mitex:0.2.4": mitex
  #cmarker.render(`$\int_1^2 x \mathrm{d} x$`, math: mitex)
  ```

  <!--raw-typst
  which renders as: $integral_1^2 x dif x$
  -->
- `h1-level`:
  The level that top-level headings in Markdown should get in Typst.
  When set to zero, top-level headings are treated as text,
  `##` headings become `=` headings, `###` headings become `==` headings, et cetera;
  when set to `2`, `#` headings become `==` headings, `##` headings become `===` headings, et cetera.
  - Accepted values: Integers in the range [0, 255].
  - Default value: 1.
- `raw-typst`:
  Whether to allow raw Typst code to be injected into the document via HTML comments.
  If disabled, the comments will act as regular HTML comments.
  - Accepted values: Booleans.
  - Default value: `true`.

  For example, when this is enabled, `<!--raw-typst #circle(radius: 10pt) -->` will result in a circle in the document (but only when rendered through Typst).
  See also `<!--typst-begin-exclude-->` and `<!--typst-end-exclude-->`, which is the inverse of this.
- `scope`:
  When `raw-typst` is enabled, this is a dictionary providing the context in which the evaluated Typst code runs.
  It is useful to pass values into code inside `<!--raw-typst-->` blocks.
  - Accepted values: Any dictionary.
  - Default value: `(:)`.
- `show-source`:
  A debugging tool.
  When set to `true`, the Typst code that would otherwise have been displayed will instead be rendered in a code block.
  - Accepted values: Booleans.
  - Default value: `false`.

This function returns the rendered `content`.

## Supported Markdown Syntax

We support CommonMark with a couple of extensions.

- Paragraph breaks: Two newlines, i.e. one blank line.
- Hard line breaks (used more in poetry than prose): Put two spaces at the end of the line.
- `*emphasis*` or `_emphasis_`: *emphasis*
- `**strong**` or `__strong__`: **strong**
- `~strikethrough~`: ~strikethrough~
- `[links](https://example.org)`: [links](https://example.org/)
- `### Headings`, where `#` is a top-level heading, `##` a subheading, `###` a sub-subheading, etc
- `` `inline code blocks` ``: `inline code blocks`
- ```` ``` out of line code blocks ``` ````

  Syntax highlighting can be achieved by specifying a language after the opening backticks:

  ````
  ```rust
  let x = 5;
  ```
  ````

  giving:

  ```rust
  let x = 5;
  ```
- `---`, making a horizontal rule:

  ---
- ```md
  - Unordered
  - lists
  ```
  - Unordered
  - lists
- ```md
  1. Ordered
  1. Lists
  ```
  1. Ordered
  1. Lists
- `$x + y$` or `$$x + y$$`: math equations, if the `math` parameter is set.
- `> blockquotes`, if the `blockquote` parameter is set.
- Images: `![Some tiled hexagons](examples/hexagons.png)`, giving

  ![Some tiled hexagons](examples/hexagons.png)

## Interleaving Markdown and Typst

Sometimes, you might want to render a certain section of the document only when viewed as Markdown, or only when viewed through Typst. To achieve the former, you can simply wrap the section in `<!--typst-begin-exclude-->` and `<!--typst-end-exclude-->`:

```md
<!--typst-begin-exclude-->
Hello from not Typst!
<!--typst-end-exclude-->
```

Most Markdown parsers support HTML comments, so from their perspective this is no different to just writing out the Markdown directly; but `cmarker.typ` knows to search for those comments and avoid rendering the content in between.

Note that when the opening comment is followed by the end of an element, `cmarker.typ` will close the block for you. For example:

```md
> <!--typst-begin-exclude-->
> One

Two
```

In this code, “Two” will be displayed no matter where the document is rendered. This is done to prevent us from generating invalid Typst code.
Conversely, one can put Typst code inside a HTML comment of the form `<!--raw-typst […]-->` to have it evaluated directly as Typst code (but only if the `raw-typst` option to `render` is set to `true`, otherwise it will just be seen as a regular comment and removed):

```md
<!--raw-typst Hello from #text(fill:blue)[Typst]!-->
```

## Markdown–Typst Polyglots

This project has a manual as a PDF and a README as a Markdown document, but by the power of this library they are in fact the same thing! Furthermore, one can read the `README.md` in a markdown viewer and it will display correctly, but one can *also* run `typst compile README.md` to generate the Typst-typeset `README.pdf`.

How does this work? We just have to be clever about how we write the README:

```markdown
<picture>

(Typst preamble content)

#"</picture> <!--".slice(0,0)

#import "@preview/cmarker:0.1.1"
#let markdown = read("README.md")
#cmarker.render(
  markdown.slice(markdown.position("</picture>") + "</picture>".len()),
  h1-level: 0,
)

/*-->

Regular Markdown goes here…

<!--*///-->
```

The same code but syntax-highlighted as Typst code helps to illuminate it:

```typ
<picture>

(Typst preamble content)

#"</picture> <!--".slice(0,0)

#import "@preview/cmarker:0.1.1"
#let markdown = read("README.md")
#cmarker.render(
  markdown.slice(markdown.position("</picture>") + "</picture>".len()),
  h1-level: 0,
)

/*-->

Regular Markdown goes here…

<!--*///-->
```

## Limitations

- We do not currently support HTML tags, and they will be stripped from the output; for example, GitHub supports writing `<sub>text</sub>` to get subscript text, but we will render that as simply “text”. In future it would be nice to support a subset of HTML tags.
- We do not currently support Markdown tables and footnotes. In future it would be good to.
- Although I tried my best to escape everything correctly, I won’t provide a hard guarantee that everything is fully sandboxed even if you set `raw-typst: false`.
  That said, Typst itself is well-sandboxed anyway.

## Development

- Build the plugin with `./build.sh`, which produces the `plugin.wasm` necessary to use this.
- Compile examples with `typst compile examples/{name}.typ --root .`.
- Compile this README to PDF with `typst compile README.md`.

<!--*///-->
https://github.com/DavinDiaz/reporte_psiconsciencia
https://raw.githubusercontent.com/DavinDiaz/reporte_psiconsciencia/main/_extension/reporte_psiconsciencia/main.typ
typst
#import "template.typ": *

#set outline(title: "Table of contents")

#show: bubble.with(
  title: "Bubble template",
  subtitle: "Simple and colorful template",
  author: "hzkonor",
  affiliation: "University",
  date: datetime.today().display(),
  year: "Year",
  class: "Class",
  //main-color: "4DA6FF",
  logo: image("assets/logo.png"),
  color-words: ("highlight", "important")
)

#outline()
#pagebreak()

// Edit this content to your liking

= Introduction

This is a simple template that can be used for a report.

= Features

== Colorful items

The main color can be set with the `main-color` property, which affects inline code, lists, links and important items.
- These bullet
- points
- are colored

+ It also
+ works with
+ numbered lists!

== Customized items

Figures are customized but this is settable in the template file. You can of course reference them @ref.

#figure(caption: [Source tree], kind: image,
  box(width: 65%, sourcecode(numbering: none,
  ```bash
  main
  ├── README.md
  ├── assets
  │   ├── images
  │   │   ├── used images
  │   └── backup
  │       └── backup files
  ├── makefile
  └── src
      ├── headers
      │   ├── files.h
      └── files.c
  ```))
)<ref>

#pagebreak()

= Enjoy !

#lorem(100)
https://github.com/katamyra/Notes
https://raw.githubusercontent.com/katamyra/Notes/main/Compiled%20School%20Notes/CS2110/Quiz3StudyGuides.typ
typst
#import "../../template.typ": *

#show: template.with(
  title: [
    CS 2110 Quiz 3 Notes
  ],
  authors: (
    (
      name: "<NAME>",
      link: "https://github.com/katamyra"
    ),
  ),
)

#set text(
  fill: rgb("#04055c")
)

= I/O (Input / Output)

The LC-3 uses *memory mappings* (reserved locations of its memory) for communicating with registers stored in external devices.

*Memory mappings* in LC-3 work by reserving specific locations in its memory for this purpose. Different addresses are dedicated to specific functions, such as handling input from a keyboard. By writing to or reading from these reserved addresses, the LC-3 can perform I/O operations without specific I/O instructions in the ISA.

== Synchronous vs Asynchronous I/O

#definition[
  *Synchronous I/O* - data is transferred at a fixed rate that both our system and the device agree on. The CPU reads/writes every x cycles.

  *Asynchronous I/O* - data is transferred at an unpredictable rate (typical of keyboard input/monitor output).
  - In order to synchronize the system and the device, we use *flags* (signals for communicating when the data is ready to be used)
]

== I/O Techniques

#definition[
  *Polling* - the _processor_ repeatedly checks if data is ready and proceeds when it is

  *Interrupts* - the _device_ interrupts the execution of the processor and demands that a routine is executed instantly with whatever data it is supplying
]

== Keyboard Input

TBD

== Display Output

TBD

= Operating System of the LC-3

== Privilege

There are two types of privilege in LC-3:

*Supervisor mode* is the mode that operating system code executes in. This mode carries out I/O routines and other crucial system behavior.
- In supervisor mode, a program can access all locations in memory and execute all instructions

*User mode* is the mode that user code executes in.
This mode is restricted from accessing some parts of memory, such as OS code and I/O space.

== Priority

A computer like the LC-3 might need to handle many different kinds of programs and events, which is why we need to assign them a priority.

#theorem[
  Every program is given a priority level `PLn`, ranging from `PL0` to `PL7`.

  Higher numbers have higher priority, so PL7 has the highest priority.
]

== Process Status Register (PSR)

The status of the currently executing program is stored in the PSR. The status includes information about the privilege and priority of the program, and the current CC codes.

```
PSR[15] == 1 <- User Mode
PSR[15] == 0 <- Supervisor Mode
```

== Memory Layout

#definition[
  *Privileged Memory* (x0000 - x2FFF): the part of memory that requires supervisor privileges to access.
  - It contains code and data for the operating system, including a supervisor stack

  x0000 - *Supervisor Stack Pointer*: This part of memory contains most of the OS, including code for system processes such as TRAP system routines

  SSP - x2FFF: This part of memory is the supervisor stack, which grows and shrinks as SSP moves.
]

#definition[
  *User Memory* (x3000 - xFDFF)

  x3000 - *User Stack Pointer*: This part of memory contains the main function at x3000, any subroutines the user wrote, and any data needed by the user programs.

  USP - xFDFF: This part of memory is the user stack; it can grow and shrink as the USP moves
]

#definition[
  *I/O Page* - xFE00 to xFFFF - This part of memory isn't mapped to actual memory; instead, registers are memory-mapped here, meaning that registers are given an address, and using that address accesses that register instead. This can only be accessed using *supervisor privileges*
]

= Traps

== Traps Overview

#definition[
  *TRAPs* are service routines built into the LC-3 that help simplify instructions.
  This is the trap vector format: `[1111] [0000] [trapvect8]`

  TRAPs have an opcode (1111), and an *8-bit trap vector*, which allows us to perform different functions.
]

#theorem[
  *Common Trap Aliases*
  + HALT - x25: stops LC-3 from running
  + OUT - x21: print character in R0 to console
  + PUTS - x22: print a string of characters, starting at the address in R0 and ending at a null character
]

The trap vector is actually an address that points to another address of a trap handler
- *trap handlers* are a bundle of instructions that a TRAP performs

TRAP Instruction Formula: $"PC" #sym.arrow.l "MEM[ZEXT(TRAPvect8)]"$

== Trap Vector Table

As a recap, x0000 - x2FFF is allocated for the system. The *trap vector table* (and the interrupt vector table) lies from x0000 to x01FF, and serves as a lookup table in memory. Each entry in the lookup table holds the memory address of the _start_ of a trap handler somewhere else in system memory.

== Interrupts

Sometimes it's not realistic for the current program to totally halt to poll I/O. In this case, we use *interrupt-driven I/O*, which allows a program to run normally until the moment that an event comes in. From that point, the processor immediately handles the appropriate event with handler code before going back to the originally executing program.

== Interrupts Overview

#theorem[
  External devices can assert interrupts, which pauses the current execution of the program. This requires three conditions:
  + The computer currently allows this device to interrupt
  + The device wants to assert an interrupt
  + The interrupt has higher priority than the program currently executing
]

KBSR and DSR have an extra bit defined as the "interrupt enable" bit so that the processor can turn on/off the ability of specific devices to interrupt the LC-3. We use a *priority encoder* to detect the "interrupt signal" from these devices and output the priority level of the highest-priority device requesting an interrupt.
From there, the highest priority level is compared against the priority level of the current program.
- If the highest interrupt priority level is higher than the current program's, the INT signal at the bottom of the diagram outputs 1, and we interrupt the processor before the next fetch. If not, INT is 0, and we don't interrupt the processor.

#definition[
  The *interrupt vector table* is a table which maps an 8-bit interrupt vector to the memory address of the start of a chunk of service routine code somewhere in system memory.
]

The main difference between this and the trap vector table is that instead of zero-extending, we prepend x01 as the upper 8 bits.

= C Compilation

*C source files* (or .c files) are typical code files where all our functions are written.

*C header files* contain functions and global variables to be shared between several source files. In order to use a header file, you can include it using `#include "file.h"`

== Preprocessor

#definition[
  The *C preprocessor* performs text-level substitution on C source files and header files before they reach the compiler
]

#theorem[
  The major *text-level* substitutions are:
  + *\#include* (file inclusion)
  + *\#define* (macro expansion)
  + *Conditional compilation*
  + *Code line identification*
]

#definition[
  *File Inclusion*

  C header files can be included in other header or source files with \#include. The preprocessor will satisfy these inclusions by _literally copying the entire contents of the included file into the other file_.
]

#definition[
  *Macros*

  Macros are a _preprocessor directive_, essentially meaning that they tell the preprocessor to do something before compilation. They are just advanced text replacement. This is how they are defined:

  `#define MACRO_NAME(ARGUMENTS) TEXT_REPLACEMENT`
]

An important thing about macros is that they are just text replacement, and you should *always* surround the entire expression and every use of an argument in parentheses.
So you should write:

```c
#define MULTIPLY(X, Y) ((X) * (Y)) /* not X * Y */
```

== The C Compiler

The C compiler is responsible for taking in the preprocessed C code and converting it into machine code. It has two major phases:
+ *Source Code Analysis*: Source code is parsed and broken down into smaller parts
+ *Target Code Synthesis*: Constructing a machine code representation for each of the parsed parts

=== Symbol Table

In C, we can use function names, variable names, etc. The compiler makes use of a symbol table for all of these.

=== Compiled Code

Compiled code is the .c file translated into machine code, using the ISA of the target system. The compiler outputs an object file, which is just machine code for one part of the program.

== The Linker

Object files are not executable on their own, but the linker takes multiple object files and creates one executable file to run your program.

= C Variables

#definition[
  *Data Types*:
  - *int*
  - *char*: 8 bits
  - *double*: at the lowest level, a floating point number is a bit pattern where one of the bits represents the sign of the number, several other bits represent the mantissa, and the remaining bits represent the exponent
]

== Identifiers

- Identifiers can be composed of letters from the alphabet, digits, and \_. Only letters and the underscore character can be used to start the identifier.
- Cannot be a keyword
- Case-sensitive
- Variables starting with an underscore are usually only in special library code
- Uppercase is reserved for symbolic \#define values
- Can you use it anyways? Syntax or general rule?

== Size Of

The size of a data type depends on the system the program has been compiled to run on. The best way to find this is to use `sizeof`, e.g. `sizeof(int)`.
https://github.com/coco33920/.files
https://raw.githubusercontent.com/coco33920/.files/mistress/typst_templates/letter/main.typ
typst
#import "template.typ": *

#show: letter.with(
  sender: [
    <NAME>, Universal Exports, 1 Heavy Plaza, Morristown, NJ 07964
  ],
  recipient: [
    Mr. <NAME> \
    Acme Corp. \
    123 Glennwood Ave \
    Quarto Creek, VA 22438
  ],
  date: [Morristown, June 9th, 2023],
  subject: [Revision of our Procurement Contract],
  name: [<NAME> \ Regional Director],
  ps: [test],
)

Dear Joe,

#lorem(99)

Best,
https://github.com/rikhuijzer/phd-thesis
https://raw.githubusercontent.com/rikhuijzer/phd-thesis/main/style.typ
typst
The Unlicense
#import "functions.typ": only

#let table_of_contents() = context {
  let chapter_selector = selector(
    heading.where(level: 1)
  )
  let chapters = query(chapter_selector)
  let numbers = range(1, chapters.len() + 1)
  [
    \
    #text(upper("Table of Contents"), style: "italic", size: 12pt) \
    \
    #table(
      columns: (auto, auto),
      align: (left, right + bottom),
      ..for (i, chapter) in numbers.zip(chapters) {
        let papers = range(2, 6)
        let chapter_text = if i in papers [
          \
          #text("Chapter " + str(i), style: "italic", size: 8pt) \
          #chapter.body
        ] else [
          #if i == 6 [ \ ]
          // Summary and bibliography are sort of separate from the discussion.
          #if i == 7 [ \ \ ]
          #chapter.body
        ]
        let loc = chapter.location()
        let page_number = loc.page() - 4
        let chapter_link = [#link(loc)[#page_number]]
        (chapter_text, chapter_link)
      }
    )
    \
  ]
}

#let figure_num(fig) = {
  let chapter_selector = selector(
    heading.where(level: 1)
  ).before(fig.location())
  // Re-count all chapters because we cannot easily grab the current chapter
  // number from the chapter (heading) element.
  let chapter_num = only(counter(chapter_selector).get())
  // Not handling empty case because figures come only after first chapter.
  let chapter = query(chapter_selector).last()
  let chapter_figs = selector(
    figure.where(kind: fig.kind)
  ).after(chapter.location()).before(fig.location())
  // Counter gives the wrong result for some reason.
  let fig_num = query(chapter_figs).len()
  [#chapter_num.#fig_num]
}

// I couldn't get the set and show for figure right, so
// as a workaround let's just use custom functions.
#let citefig(label) = context {
  let matches = query(selector(label))
  let fig = only(matches)
  link(fig.location())[#figure_num(fig)]
}

#let style(
  title: "Placeholder title",
  body
) = {
  show heading.where(
    depth: 1
  ): set heading(numbering: "1", supplement: [Chapter])
  show heading.where(
    depth: 2
  ): set heading(numbering: "1.1")
  show heading: set block(above: 1.4em, below: 1em)

  show figure: fig => context {
    // Contains fields: body, kind, supplement ([Figure]), numbering, counter.
    let caption = fig.caption
    let sup = caption.supplement
    let txt = caption.body.text
    assert(not txt.ends-with("."), message: "Caption ends with a dot")
    block(width: 100%)[
      // Having the note not justified next to the caption is okay because it further
      // distinguishes the note from the body of the text.
      #set par(justify: false)
      *#sup #figure_num(fig)* \
      _ #text(txt, size: 9pt) _
      #align(center)[
        #fig.body
      ]
    ]
  }

  set par(
    first-line-indent: 1em,
    justify: true,
  )

  // Automatically uses the bibliography style.
  let style = auto
  set cite(style: style)

  set table(
    stroke: none,
  )

  // Adobe Garamond Pro is nice but arguably does look a bit old-fashioned.
  // Baskerville is a bit more "straight" to the point, but has some fun on italics too.
  // Baskerville also has a slightly military touch to it.
  // But is a bit too curly on the number 2.
  // EurotypoBKL is more new but looks a bit like a cheap font.
  //
  // EB Garamond is good enough and is free.
  let font = "EB Garamond"
  // Use to test whether the font is used.
  // Fallback true needed for Gronnerod in some fonts.
  let fallback = true
  set text(font: font, fallback: fallback, size: 10pt)

  table_of_contents()
  pagebreak() // Force blank back page.
  counter(heading).update(0)
  counter(page).update(0)

  set page(
    footer: context {
      set text(size: 9pt)
      let num = only(counter(page).get())
      let is_left_page = calc.even(num)
      if is_left_page [
        #num
      ] else [
        #h(1fr) #num
      ]
    }
  )

  set text(font: font, fallback: fallback, size: 10pt, top-edge: 1em)
  show par: set block(spacing: 0.6em)

  // Includes code blocks and inline code.
  show raw: set text(size: 6.4pt)

  // Display the content.
  body
}
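For context, a minimal usage sketch of this style file (the entry-point name `thesis.typ` is hypothetical; `style` and `citefig` are the functions defined above):

```typ
// thesis.typ (hypothetical) -- applies style.typ to a document
#import "style.typ": style, citefig

#show: style.with(title: "My Thesis Title")

= Introduction
Chapters are numbered and listed by `table_of_contents()`, which the
style function calls before laying out the body.
```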
https://github.com/lrberge/cv
https://raw.githubusercontent.com/lrberge/cv/main/CV.typ
typst
//
// Font awesome
//

/* Install the fontawesome fonts:
- 1: download -- https://fontawesome.com/download
- 2: go to the folder otfs and install the fonts
- 3: use: #text(font: "Font Awesome 6 Free")[\u{f09b}]
*/

#let fa-github = text(font: "Font Awesome 6 Free")[\u{f09b}]
#let fa-envelope = text(font: "Font Awesome 6 Free")[\u{f0e0}]
#let fa-link = text(font: "Font Awesome 6 Free")[\u{f08e}]

#let the_month = datetime.today().display("[month repr:long] [year]")

#set page(
  paper: "a4",
  margin: (bottom: 1.6cm, x: 1.15cm, top: 2cm),
  footer: {
    set align(center + horizon)
    counter(page).display(
      "1 / 1",
      both: true,
    )
  },
  header: {
    context {
      if here().page() > 1 {
        set align(center + horizon)
        set text(10pt)
        box(width: 100%, stroke: (bottom: stroke(0.5pt)), inset: (bottom: 5pt))[
          #smallcaps[
            CV -- #the_month
            #h(1fr)
            <NAME>
          ]
        ]
      }
    }
  }
)

#set text(font: "Fira Sans", size: 11pt)
// #set text(font: "Poppins", size: 10pt)

#show par: set block(above: 1em)

#let content_to_text(x) = {
  if x.has("text") {
    return x.text
  } else if x.has("children") {
    let res = ""
    for xi in x.children {
      res += content_to_text(xi)
    }
    return res
  } else if x.func() == [ ].func() {
    return " "
  } else {
    return x
  }
}

// #let a = [bon: jour les gens]
// #{
//   repr(a)
//   parbreak()
//   let b = a.children
//   repr(b)
//   repr(b.at(1).func())
//   repr(b.at(2).func() == [ ].func())
// }
// #repr(content_to_text(a))

// Headers

#set heading(numbering: "1")
#show heading: x => {
  context {
    let height_total = page.height - page.margin.bottom
    let y_remaining = height_total - here().position().y

    let nb_raw = counter(heading).get()
    let nb_fmt = text(18pt)[#numbering(x.numbering, ..nb_raw)]
    let size = measure(nb_fmt)
    let nb_height = size.height

    let height_heading = nb_height + 1cm
    let add_pb = y_remaining < 2 * height_heading
    if add_pb {
      [#hide[hidden text necessary for page break] \ #pagebreak(weak: false)]
    }

    let header_fmt = [#x.body]

    block(stroke: (bottom: 0pt), width: 100%, inset: (bottom: 3pt))[
      #set text(13pt)
      #place(dx: -size.width - 0.15cm, dy: 0cm)[#nb_fmt]
      #underline(evade: true, offset: 5pt, stroke: (paint: black, thickness: 1.1pt))[
        #header_fmt#text(fill: white)[#box(width: 1fr)[#repeat[.]]]
      ]
    ]
  }
  v(0.15cm)
}

#let avoid_orphan() = {
  context {
    let height_total = page.height - page.margin.bottom
    let y_remaining = height_total - here().position().y
    if y_remaining < 2.5cm {
      [#hide[hidden text necessary for page break] \ #pagebreak(weak: false)]
    }
  }
}

// Links
#show link: set text(fill: rgb(16%, 32%, 75%))

//
// DOCUMENT
//

#align(center)[
  #text(20pt)[*<NAME>*]

  #text(12pt)[_ #the_month _]
]

#set table(stroke: 0pt)

#table(
  columns: (60%, 40%),
  [*Personal information*], [*Professional address*],
  [French citizen, born 1985],
  [
    University of Bordeaux \
    BSE, UMR CNRS 6060
  ],
  table.cell(align: bottom)[
    #link("mailto:<EMAIL>")[#box(width: 2em)[#fa-envelope] <EMAIL>]\
    #link("https://sites.google.com/site/laurentrberge/home")[#box(width: 2em)[#fa-link] #link("https://sites.google.com/site/laurentrberge/home")]
  ],
  [
    16, avenue <NAME> CS 50057 \
    33608 Pessac \
    FRANCE
  ],
  table.cell(colspan: 2, inset: (top: 15pt))[*Fields of interest:* _applied micro-economics #sym.star economics of innovation #sym.star labor mobility #sym.star statistics_]
)

#let date_table(..all_lines, line_sep: false) = {
  // format: date: stuff, in content form
  all_lines = all_lines.pos()
  table(
    columns: (15%, 1fr),
    ..for line in all_lines {
      if line == [] or line == [ ] {
        ([], [])
        continue
      }
      let all_elements = line.children

      let date = all_elements.at(0)
      let i = 1
      while all_elements.at(i) != [:] {
        date += all_elements.at(i)
        i += 1
      }
      // skip ':'
      i += 1

      let desc = []
      while i < all_elements.len() {
        desc += all_elements.at(i)
        i += 1
      }

      (table.cell(align: right)[#date], [#desc])
      if line_sep {
        ([], [])
      }
    }
  )
}

= Current Position

#date_table(
  [2020--#sym.dots.h: *Associate Professor of Economics (Maître de Conférences)* \
  BSE (UMR CNRS 6060), University of Bordeaux]
)

= Previous Positions

#date_table(
  [2016--2020: *Postdoctoral Researcher in Economics* \
  Department of Economics and Management, University of Luxembourg],
  [2016: *Postdoctoral Researcher in Statistics* \
  MAP5 (UMR CNRS 8145), University Paris Descartes],
)

= Research Visits

#date_table(
  [Oct. 2013 --Jan. 2014 : *Visiting Researcher in Residence* \
  London School of Economics, Department of Geography and Environment ],
)

= Education

#date_table(
  [2015: *Ph.D. in Economics*, GREThA, University of Bordeaux],
  [2010: *M.S. Economics & Statistics*, University Paris#{sym.space.hair}1],
  [2009: *M.S. Economics*, University of Bordeaux],
  [2008: *B.A. Economics*, University of Bordeaux],
  [2006: *B.S. Mathematics*, University of Bordeaux],
)

= Awards

#date_table(
  [2019: Department of Economics and Management Award for Young Researchers, University of Luxembourg],
  [2018: <NAME> Award (best paper published in _Papers in Regional Science_ in 2017) for the article "Network proximity in the geography of research collaboration."]
)

= Publications

#let linkfun = link

#let output_item(
  title: [#text(red)[Insert a title]],
  year: none,
  authors: none,
  venue: none,
  number: none,
  link: none,
  extra: none,
  format: "#title #authors \ #venue#number, #year.#extra",
  format_number: ", #number",
  format_extra: "\ #emph[#extra]",
  format_venue: "#emph[#venue]",
) = {
  let title_fmt = [#title]
  if link != none {
    title_fmt = linkfun(link, title_fmt)
  }

  let authors_fmt = []
  if authors != none {
    authors_fmt = [w/~#authors.join(", ", last: " and ")]
  }

  let venue_fmt = []
  if venue != none {
    if format_venue == none {
      venue_fmt = venue
    } else {
      venue_fmt = eval(format_venue, mode: "markup", scope: (venue: venue))
    }
  }

  let number_fmt = []
  if number != none {
    if format_number == none {
      number_fmt = number
    } else {
      number_fmt = eval(format_number, mode: "markup", scope: (number: number))
    }
  }

  let extra_fmt = []
  if extra != none {
    if format_extra == none {
      extra_fmt = extra
    } else {
      extra_fmt = eval(format_extra, mode: "markup", scope: (extra: extra))
    }
  }

  // let format_new = format.replace("title", "title_fmt").replace("authors", "authors_fmt")
  let dict = (title: title_fmt, year: year, authors: authors_fmt, venue: venue_fmt, number: number_fmt, extra: extra_fmt)
  let entry = eval(format, mode: "markup", scope: dict)

  list(entry)
}

#let publi = output_item

/*
#publi(title: "",
       link: "",
       authors: ("", ""),
       venue: "",
       number: "",
       year: 2022)
*/

#publi(title: "How patent rights affect university science.",
       link: "https://doi.org/10.1093/icc/dtac044",
       authors: ("<NAME>", "<NAME>"),
       venue: "Industrial and Corporate Change",
       number: "dtac044",
       year: 2022)

#publi(title: "Unintended triadic closure in social networks: The strategic formation of research collaborations between French inventors.",
       link: "https://doi.org/10.1016/j.jebo.2018.10.009",
       authors: ("<NAME>", "<NAME>", "<NAME>"),
       venue: "Journal of Economic Behavior and Organization",
       number: "169",
       year: 2019)

#publi(title: "The Latent Topic Block Model for the co-clustering of textual interaction data.",
       link: "https://doi.org/10.1016/j.csda.2019.03.005",
       authors: ("<NAME>", "<NAME>", "<NAME>"),
       venue: "Computational Statistics and Data Analysis",
       number: "137",
       year: 2019)

#publi(title: "How do inventors networks affect urban invention?",
       link: "https://doi.org/10.1016/j.regsciurbeco.2018.05.002",
       authors: ("<NAME>", "<NAME>"),
       venue: "Regional Science and Urban Economics",
       number: "71",
       year: 2018)

#publi(title: "Bridging centrality as an indicator to measure the 'bridging role' of actors in networks: An application to the European nanotechnology co-publication network.",
       link: "https://doi.org/10.1016/j.joi.2017.09.004",
       authors: ("<NAME>", "<NAME>"),
       venue: "Journal of Informetrics",
       number: "11(4)",
       year: 2017)

#publi(title: "Centrality of regions in R&D networks: A new measurement approach using the concept of bridging paths.",
       link: "https://doi.org/10.1080/00343404.2016.1269885",
       authors: ("<NAME>", "<NAME>"),
       venue: "Regional Studies",
       number: "51(8)",
       year: 2017)

#publi(title: "Network proximity in the geography of research collaboration.",
       link: "https://doi.org/10.1111/pirs.12218",
       venue: "Papers in Regional Science",
       number: "96(4)",
       year: 2017,
       extra: "Beckmann Award 2018 (Best PiRS paper of 2017)")

#publi(title: "HDclassif: An R package for model-based clustering and discriminant analysis of high-dimensional data.",
       link: "https://doi.org/10.18637/jss.v046.i06",
       authors: ("<NAME>", "<NAME>"),
       venue: "Journal of Statistical Software",
       number: "46(6)",
       year: 2012)

= Publications in French

#publi(title: "Le déploiement du très haut débit a-t-il favorisé la numérisation des entreprises? Une évaluation du Plan France Très Haut Débit.",
       link: "https://www.cairn.info/revue-economique-2024-2-page-301.htm",
       authors: ("<NAME>", "<NAME>"),
       venue: "La Revue Économique",
       number: "75",
       year: 2024)

= Book Chapters

#publi(title: "Bridging centrality: A new indicator for the positioning of actors in complex networks.",
       link: "https://www.amazon.com/Innovation-Complexity-Policy-Contributions-innovation/dp/3631723156",
       authors: ("<NAME>", "<NAME>"),
       venue: [In Weber (ed.): _Innovation Complexity and Policy. Contributions from 30 years of innovation policy research in Austria_, pp.
85-100, <NAME>, Frankfurt am Main `[ISBN 978-3-631-72315-9]`], year: 2017, format_venue: none) = Refereed Conference Proceedings #publi(title: "Software patents and scientific publications.", link: "https://doi.org/10.5465/AMBPP.2017.214", authors: ("<NAME>", "<NAME>"), venue: "Best Paper Proceedings of the Academy of Management", number: "n°13779", year: 2017) = Working Papers #publi(title: "Efficient estimation of maximum likelihood models with multiple fixed-effects: the R package FENmlm", link: "https://github.com/lrberge/fixest/blob/master/_DOCS/FENmlm_paper.pdf", venue: "CREA Discussion papers", number: "13", year: 2018) = Work in Progress #let simple_pub = output_item.with(format: "#title #authors #extra") #simple_pub(title: "How access to knowledge shapes innovation: The case of the ARPANET.", authors: ("<NAME>", "<NAME>")) #simple_pub(title: "Does job mobility increase innovation? A case study of the fall of Nortel.", authors: ("<NAME>", "<NAME>")) = Patents #simple_pub(title: "Method for co-clustering senders and receivers based on text or image data files.", link: "https://patents.google.com/patent/EP3591545A1/en", authors: ("<NAME>", "<NAME>", "<NAME>"), extra: [\ _European Patent Office_, filed 2018, `EP18305896.5`, `PCT PCTEP2019/068011`.], format_extra: none) = Ph.D. *Supervision.* #date_table( [2019--2023: <NAME>, on Job Mobility and Innovation. Jointly supervised with Nicolas Jonard.] 
)

*PhD committees.*

#date_table(
  [2022--#sym.dots.h: <NAME> (BSE)],
  [2022--#sym.dots.h: <NAME> (BSE)]
)

= Grants

#date_table[
  2020: France Stratégie, "#link("https://www.strategie.gouv.fr/sites/strategie.gouv.fr/files/atoms/files/etude_inrae_retombees_du_plan_france_tres_haut_debit_sur_les_entreprises.pdf",)[_Retombées du déploiement du très haut débit sur les entreprises: Quels effets sur les usages numériques, l'innovation, et la performance ?_]" (Evaluating the economic impact of ultra-fast broadband deployment in France) \ w/ <NAME>, <NAME> (PI) and <NAME> `[91,000€]`
]

= Communications

#let st = super[st]
#let nd = super[nd]
#let rd = super[rd]
#let th = super[th]
#let eme = super[ème]
#let emes = super[èmes]
#let sep = sym.parallel

#date_table(
  [2019: 3#rd Luxembourg Workshop on Innovation, Luxembourg #sep 12#th Annual Northwestern/USPTO Conference on Innovation Economics, Chicago, USA #sep University Paris Dauphine, Governance Analytics seminars, Paris, France],
  [],
  [2018: 2#nd Luxembourg Workshop on Innovation, Luxembourg #sep 13#th European Policy for Intellectual Property conference, Berlin, Germany #sep 4#th Geography of Innovation Conference, Barcelona, Spain],
  [],
  [2017: GREThA seminars, Bordeaux, France #sep BETA seminars, Strasbourg, France #sep 12#th European Policy for Intellectual Property conference, Bordeaux, France #sep 77#th Academy of Management conference, Atlanta, USA #sep 54#eme colloque de l'ASRDLF, Athens, Greece #sep 7#th ZEW/MaCCI Conference on the Economics of Innovation and Patenting, Mannheim, Germany #sep 1#st Luxembourg workshop on Innovation, Luxembourg #sep RSA annual conference, Dublin, Ireland],
  [2016: 3#rd Geography of Innovation Conference, Toulouse #sep 33#emes journées de la microéconomie appliquée, Besançon #sep Journée R, Muséum National D'Histoire Naturelle, Paris #sep 56#th congress of the European Regional Science Association, Vienna, Austria #sep Barcelona workshop on regional and urban economics, Barcelona, Spain],
  [],
[2012--2015: NetWorkshop, Pécs, Hungary #sep 55#th congress of the European Regional Science Association, Lisbon, Portugal #sep Econ-Geo seminars, University of Utrecht, Utrecht, The Netherlands #sep 54#th congress of the European Regional Science Association, Saint-Petersburg, Russia (#sym.times 2 presentations) #sep 53#rd congress of the European Regional Science Association, Palermo, Italy #sep Economic Geography seminars, London School of Economics, London, England #sep 22#eme Séminaire Européen des Doctorants en Économie Régionale, Bordeaux], ) = Refereeing Activities #let nb(n) = [(#sym.times~#n)] #par(hanging-indent: 0.5em)[ #box(width: 4em)[*Articles.*] Regional Studies #nb(4), Industry and Innovation #nb(4), Science and Public Policy #nb(1), PLoS One #nb(1), Scientometrics #nb(4), Tijdschrift voor economische en sociale geografie #nb(1), Journal of Economics & Management Strategy #nb(1), Review of Industrial Organization #nb(1), Journal of Statistical Software #nb(1). #box(width: 4em)[*Grants.*] Agence Nationale de la Recherche #nb(1). ] = Academic Responsibilities #date_table( [2021--#sym.dots.h: Director of a special teaching program in law and economics at the University of Bordeaux \ _(Directeur d'étude du parcours renforcé L1 AES)_], [2021--#sym.dots.h: Elected member of the scientific board of BSE, University of Bordeaux.], [2018--2020: Appointed member of the scientific board of the department of economics and management (postdoc representative), University of Luxembourg], [2013--2015: Elected member of the scientific board of the laboratory GREThA (PhD representative), University of Bordeaux.], ) = Organization #date_table( [2017--2019: Co-organizer of the 1#st, 2#nd and 3#rd Luxembourg workshops on Innovation], [2013--2014: Organizer of the first economics Ph.D. 
lunch seminars -- GREThA, University of Bordeaux], ) = Teaching Experience #let simple_table(..all_lines) = { all_lines = all_lines.pos() let first_line = all_lines.at(0).split(";") let ncols = first_line.len() set align(center) table( columns: ncols, align: (x, y) => if x == 0 {left} else {center}, ..for line in all_lines { let all_values = line.split(";") for i in range(all_values.len()) { if i == 0 { (strong(all_values.at(i)),) } else { (eval(all_values.at(i), mode: "markup"),) } } } ) } *2023--2024, University of Bordeaux.* #simple_table( "Capitalism and Global Wealth ; Undergraduate ; 24h", "Introduction to economics ; Undergraduate ; 12h", "R's Shiny ; Graduate ; 6h", "Webscraping ; Graduate ; 6h", "Introduction to R ; Graduate ; 2#sym.times~12h", "Data management in R ; Graduate ; 15h", "R Programming for Environmental Economics ; Graduate ; 15h", "Introduction to Econometrics ; Graduate ; 24h", "Methodology ; Undergraduate ; 15h" ) #avoid_orphan() *Previous experience (lectures only).* #simple_table( "Data visualization ; Graduate ; 6h ; 2022", "Webscraping ; Graduate ; 6h ; 2022", "Mathematics for Economics ; Undergraduate ; 18h ; 2021,2022", "European Economics ; Undergraduate ; 18h ; 2021, 2022", "Introduction to R programming ; PhD ; 15h ; 2018, 2019", ) = Software Products: R Packages _Over 950,000 downloads as of March 2024 (CRAN downloads only)._ #let soft_item(pkg, desc) = { set par(first-line-indent: 1em) [#fa-github #link("https://github.com/lrberge/" + pkg)[#strong(pkg)] ~ #desc] } #soft_item("fixest")[Fast and user-friendly estimation of econometric models with multiple fixed-effects.] #soft_item("stringmagic")[The easiest way to perform complex string operations compactly and efficiently. ] #soft_item("indexthis")[Highly optimised algorithm to create indexes.] #soft_item("fplot")[The easiest way to plot graphs of regular/conditional/weighted distributions.] 
#soft_item("hdd")[Class of data allowing the easy manipulation of out of memory data sets.] #soft_item("dreamerr")[Tools to make error-handling a piece of cake.] #soft_item("HDclassif")[Tools to create coherent groups of observations based on observables (clustering).]
https://github.com/typst/packages
https://raw.githubusercontent.com/typst/packages/main/packages/preview/fruitify/0.1.0/fruitify.typ
typst
Apache License 2.0
#let _exports = { let xorshift_plugin = plugin("xorshift_plugin.wasm") let to_le_bytes(n) = { // assumption: n is an unsigned integer let res = () while n > 0 { res += (calc.rem(n, 256),) n = calc.quo(n, 256) } res } let from_le_bytes(b) = { let res = 0 let mul = 1 for byte in b { res += mul * byte mul *= 256 } res } let gen_u32(seed, mod) = { assert(type(seed) == array, message: "seed needs to be an array") assert(seed.len() == 16, message: "seed needs to have length 16") for x in seed { assert(type(x) == int, message: "seed needs to consist of ints") assert(x >= 0, message: "seed components need to be non-negative") assert(x < 256, message: "seed components need to fit in 8 bits each") } assert(type(mod) == int, message: "mod needs to be an int") assert(mod >= 0, message: "mod needs to be non-negative") assert(mod < calc.pow(2, 32), message: "mod needs to fit into 32 bits") let seed = bytes(seed) let mod = bytes(to_le_bytes(mod)) let res = xorshift_plugin.rand(seed, mod) let res = array(res) from_le_bytes(res) } let seed_from_int(n) = { assert(type(n) == int, message: "input needs to be an int") // note: 251 is the largest prime < 256, so this mod-mult gives some VERY basic rng based on the number // (which can then be used as a seed for the real rng) (1, 210, 211, 230, 130, 206, 155, 111, 194, 201, 164, 92, 21, 219, 73, 169) .map(mult => calc.rem(mult * calc.rem(n, 251), 251)) } let inc_seed(seed) = { let carry = 1 for i in range(16) { let i = -1 - i seed.at(i) += carry carry = 0 if seed.at(i) == 256 { seed.at(i) = 0 carry = 1 } } seed } let gen_index(seed, len, excluded) = { let leftover = range(len).filter(x => x not in excluded) if seed == none { (none, leftover.first()) } else { let idx_idx = gen_u32(seed, leftover.len()) let idx = leftover.at(idx_idx) (inc_seed(seed), idx) } } let fruits = "🍇🍒🍍🍋🍉🍎🥝🥥🍌🥭🫐🍊🫒🍅🍏🍐🍓🍈🍑".codepoints() let the-state = state("fruitify.state", ( symbols: fruits, reuse: false, seed: none, used: (:), )) let setup(..args) = { let _no_pos() 
= {}
    _no_pos(..args.pos())
    let args = args.named()
    the-state.update(st => {
      // note: this needs to happen BEFORE setting the `reuse` option
      // so that `used` DOES get cleared once when setting `reuse: true`
      // (it should only work into the future)
      if not st.reuse {
        st.used = (:)
      }
      // straight-forward arguments
      for k in ("symbols", "reuse") {
        if k in args {
          st.at(k) = args.at(k)
        }
      }
      if "random-seed" in args {
        let rs = args.random-seed
        st.at("seed") = if type(rs) == int {
          seed_from_int(rs)
        } else {
          rs
        }
      }
      st
    })
  }

  let reset() = the-state.update(st => st + (used: (:)))

  let fruitify(eq, ..args) = {
    setup(..args)
    show regex("\b\pL\b"): it => {
      let c = it.text
      the-state.update(st => {
        let symbols = st.symbols
        let seed = st.seed
        let used = st.used
        if c not in used {
          // only a previously unseen character needs a fresh symbol,
          // so the capacity check belongs inside this branch
          assert(used.len() < symbols.len(), message: "equation has more characters than possible symbols")
          let used_idcs = used.values().map(v => symbols.position(w => v == w))
          let (next_seed, idx) = gen_index(seed, symbols.len(), used_idcs)
          st.used.insert(c, symbols.at(idx))
          st.seed = next_seed
        }
        st
      })
      the-state.display(st => {
        st.used.at(c)
      })
    }
    eq
  }

  (setup, fruitify, reset, fruits)
}

#let (fruitify-setup, fruitify, reset-symbols, fruits) = _exports

#let vegetables = "🌽🥔🥑🥜🫚🧅🥕🫑🧄🥒🌶🫛🌰🥬🍄🥦🍆🫘".codepoints()
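As a quick cross-check of the `to_le_bytes`/`from_le_bytes` helpers above, here is an equivalent sketch in Python (the function names mirror the Typst ones; this is purely illustrative and not part of the package):

```python
def to_le_bytes(n):
    # Decompose a non-negative integer into little-endian base-256 digits,
    # mirroring the Typst helper: 0 maps to the empty list.
    res = []
    while n > 0:
        res.append(n % 256)
        n //= 256
    return res

def from_le_bytes(b):
    # Inverse operation: reassemble the integer from little-endian digits.
    res, mul = 0, 1
    for byte in b:
        res += mul * byte
        mul *= 256
    return res

assert to_le_bytes(0) == []
assert to_le_bytes(258) == [2, 1]          # 258 = 2 + 1*256
assert from_le_bytes(to_le_bytes(987654321)) == 987654321
```

The round trip property `from_le_bytes(to_le_bytes(n)) == n` is what the plugin interface relies on when passing the modulus to the WebAssembly RNG and reading its result back.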
https://github.com/kfijalkowski1/typst-diff-tool
https://raw.githubusercontent.com/kfijalkowski1/typst-diff-tool/main/typst_ast_parser/data/5_add_paragraphs/add_paragraphs.typ
typst
#import "@preview/pesha:0.2.0": * #experience( place: "Hot Topic", title: "Retail-sales associate", time: [2004--06], )[ - Top in-store sales associate in seven out of eight quarters - Inventory management - Training and recruiting ] #experience( place: "Home", title: "Home information", time: [2024--10], )[ - size 400m2 - 4 bedrooms - private pool ] #experience( place: "Warsaw", title: "Job offer", time: [2004--06], )[ - Junior rust developer - a lot of money - fruit thursday ]
https://github.com/vEnhance/1802
https://raw.githubusercontent.com/vEnhance/1802/main/r10.typ
typst
MIT License
#import "@local/evan:1.0.0":* #show: evan.with( title: [Notes for 18.02 Recitation 10], subtitle: [18.02 Recitation MW9], author: "<NAME>", date: [7 October 2024], ) #quote(attribution: [Ars Brevis, by Piet Hein])[ There is \ one art, \ no more, \ no less: \ to do \ all things \ with art- \ lessness. \ ] This handout (and any other DLC's I write) are posted at #url("https://web.evanchen.cc/1802.html"). #notify(title: [I'm writing unified course notes now!])[ I'm working on writing a replacement set of lecture notes! You can find them posted on my usual website, filename `lamv.pdf`. ] #notify(title: [Say hi to Lichen])[ At some point (I don't know exactly when) Lichen will be here as a teaching mentee, meaning he'll observe some number of the recitations and then try to teach one (and get feedback from you all). Please be nice to Lichen! ] = Review of Oct 2 recitation and Oct 3-4 lecture I won't repeat myself from `r09.pdf`; I have about 10 extra copies of `r09.pdf` if you need one. == Level curves replace $x y$-graphs from high school You'll mostly want to work with *level-curve pictures*, in which case - all the variables $x$, $y$, $z$ etc. are inputs - the output $f(x,y)$ is *unnamed*; no variable name for it. (Sometimes just $f$ if really needed, like in the last question on PSet 4B, but usually it's anonymous.) Contour plot is a synonym for level curve, if you run into that term. == Type signatures for this section (If you're new to the recitation, read the handout from recitation 1.) Relevant types: - $f : RR^n -> RR$ is a function that accepts _points_ and outputs _numbers_. - $nabla f : RR^n -> RR^n$ is a function that accepts _points_ and outputs _vectors_. Although points and vectors were used interchangeably before, it might help to think of them as different types for this part of the course, because you will draw and use them rather differently. (For example, we don't ever add two points.) - A _point_ represents a position in your input space $RR^n$. 
You should *draw a point as a dot* (rather than arrows). I'll try to use capital letters $P$, $Q$, ... for these. - A _vector_ is usually a displacement of some sort in this part of the course; so you should always *draw vectors as arrows*. I'll try to use bold lowercase letters $bf(u)$, $bf(v)$, ... for these. == Gradient replaces derivative See the table in `r09.pdf`. The two important things to know are: - Linear approximation: $f(P + bf(u)) approx f(P) + nabla f dot bf(u)$ - $nabla f$ is a normal vector to the tangent of the level curve at $P$. To concretely compute $nabla f$, take the partial derivatives of $f$ with respect to each of the input variables. == Notational difference I think Maulik places arrows over $nabla f$ to emphasize it's a vector, but I'm not going to write the arrow because I think it makes the equation harder to read. Feel free to add the arrow yourself if it helps you, though. = Recitation questions from official course / 1: Consider the function $f (x , y) = frac(1, x^2 + y^2) .$ + Draw the level curves for $f (x , y) = 1$ or $2$. + Sketch the graph of $f$. + Find the partial derivatives $f_x (x , y)$ and $f_y (x , y)$. + Starting at the point $(1 , 2)$, in what unit vector direction $upright(bold(u))$ does $f (x , y)$ increase the fastest? What is the rate of increase in this direction? + What is the directional derivative $D_(upright(bold(u))) f (1 , 2)$ where $upright(bold(u))$ is the direction given by the unit vector $angle.l 1 \/ sqrt(5) , - 2 \/ sqrt(5) angle.r$? / 2: Estimate the value of $log (0.49^2 + 0.76)$ by calculating the linear approximation of the function $log (x^2 + y)$ near the point $(0.5 , 0.75)$. (Here I’m using $log$ for natural logarithm). / 3: Consider the surface $S$ given by the equation $x y - 2 x z^2 + 3 y^2 z = 2$. 
If I consider a vector $upright(bold(u))$ that is tangent to $S$ at the point $(1 , 1 , 1)$, what is the directional derivative of $f (x , y , z) = x y - 2 x z^2 + 3 y^2 z$ at $(1 , 1 , 1)$ in direction $upright(bold(u))$? What is the equation of the tangent plane to $S$ at the point $(1 , 1 , 1)$?
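As a sanity check on the linear-approximation recipe from the review section, here is one possible computation for question 2 (a sketch, not an official solution; verify the arithmetic yourself):

```latex
% f(x,y) = \log(x^2 + y), base point P = (0.5, 0.75), where x^2 + y = 1
\nabla f = \left\langle \frac{2x}{x^2+y},\ \frac{1}{x^2+y} \right\rangle
\quad\Longrightarrow\quad
\nabla f(P) = \langle 1,\ 1 \rangle, \qquad f(P) = \log 1 = 0.
% displacement u = (-0.01, 0.01)
f(0.49,\, 0.76) \approx f(P) + \nabla f(P) \cdot \mathbf{u}
  = 0 + (1)(-0.01) + (1)(0.01) = 0.
```

For comparison, 0.49² + 0.76 = 1.0001, so the exact value is log(1.0001) ≈ 0.0001; the linear estimate is off by only about 10⁻⁴.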
https://github.com/J3m3/poolc-fp
https://raw.githubusercontent.com/J3m3/poolc-fp/main/lib/style.base.typ
typst
MIT License
#let default_font = "Pretendard"
#let fontsize_big = 30pt
#let fontsize_medium = 25pt
#let fontsize_small = 20pt
#let fontsize_extrasmall = 16pt
#let fontsize_copyright = 6pt
#let (margin_x, margin_y) = (1.1cm, 1.1cm)

// ======= CHANGEABLE THINGS =======
#let color_light = rgb("#e0f0ed")
#let color_medium = rgb("#a7dad5")
#let color_dark = rgb("#47be9b")
#let date = datetime(year: 2024, month: 1, day: 24)
// ================================

#let copyright(date: date) = [
  ⓒ #date.year(). Je Sung Yang. All rights reserved.
]
https://github.com/polarkac/MTG-Stories
https://raw.githubusercontent.com/polarkac/MTG-Stories/master/stories/058%20-%20Duskmourn%3A%20House%20of%20Horror/004_Children%20of%20the%20Carnival%2C%20Part%201.typ
typst
#import "@local/mtgstory:0.2.0": conf #show: doc => conf( "Children of the Carnival, Part 1", set_name: "Duskmourn: House of Horror", story_date: datetime(day: 21, month: 08, year: 2024), author: "<NAME>", doc ) No one knew where the wind came from. It shouldn't have been blowing anymore: the House walls were tall and strong, without cracks or crannies through which a wayward wind could slip. The windows were locked. Every few years, some group of cocky young harvesters would decide that everyone before them had been doing it wrong somehow and take up bricks, and hoes, and whatever else they could find, making the long journey to the nearest plate glass window, intending to smash it and set the world free. When Duskmourn was feeling charitable, their bodies would be found. When the House was feeling hungry—much more common an occurrence—there would be nothing, not even bones, to show where they had gone. Their names would be added to the cautionary tale of why it's a bad idea to go around smashing windows, and their parents would cry in private, trying not to let the younger children see. The younger children saw, of course. The children always saw. The children saw how many people left the safe zones and didn't come back again. More every year. The once dependable paths through Duskmourn's body were becoming more and more dangerous; some of them had family that had moved to other safe zones after spilling promises across the table like jewels. "I'll always come back to visit you." "This will always be my home." "How could anyone trade a carnival for a Benefactory without regrets?" And some of the promises had been the truth, and some of them had been pretty polished lies, and so many of them came down to the same thing in the end, when the ones who made them never came back again. 
Dawn sat on the rough stone wall that marked the border between the carnival's safety and the treacherous roses to the west, throwing chips of stone at the roses to watch them snap and snarl. She knew better than to get close enough for them to sink their thorns into her. That was the thing about safe zones: you could learn their borders, but so could everything else. Travel outside the safe zones was all the more treacherous because the greatest concentration of the House's monsters could be found just over the border. They lurked there, waiting for someone to step over the temporary boundaries that had formed between the safe places and the rest of the House, waiting for the unwary to make themselves into targets. Dawn pulled back and hurled a particularly large stone chip at a yellow rose, which snapped it out of the air and swallowed, stem bulging as the rock traveled toward its root ball. If you could feed a bush enough rocks, you might clog the roots and kill the thing, although you could also turn the whole bush into a sort of slingshot, with the roses spitting projectiles at anyone in range. The elders disapproved of feeding the roses. Dawn really didn't care. She wasn't going to live in the carnival her whole life. Even if her inventions were never enough to catch the attention of the Benefactors, there were other safe zones in the House—her brother had gone to an attic settlement, and one of her cousins was in the hedge maze region—and none of them would have the local elders watching her every move. Oh, they'd have their own elders, but those elders would know her as the adult she was, not the child she'd been. #emph[These] elders would forever see her as something to protect, to contain, to control, and she was well tired of it. But she wasn't going to need to deal with any of that, because she was going to build the best traps, the best warning systems, and the Benefactors were going to claim her to join them and help them fight against the House. 
It was all she'd ever wanted. Motion from the safe trail past the garden. Dawn stood, feet planted firmly on the wall, and strained to see who was coming. Delight washed through her and she jumped down—on the protected side, not into the disappointed roses—to run toward the break in the fence where the path would deliver the walkers to the carnival bawn. The wind whipped through her hair as she ran, carrying the familiar scent of popcorn and frying dough. The carnival was a safe zone, but the House still replenished its lures, the delicious smells that had drawn the first survivors to the patched and leaning tents. Everything else had to be gathered, wandering the safe zone to gather crops or sending foraging parties into other rooms. Some of the dining halls and kitchens replenished themselves on a regular basis, setting their own lures out to trap survivors. It was best when they didn't need to forage, of course. Dawn could name half a dozen people—strong, clever people—who'd gone on foraging trips and never come back. Every trip into the House was a trip you might not survive. So, she ran toward the fence, legs pumping and heart pounding, to see the foragers returned. All three of them had made it back. Rill had a nasty gash down one arm, straight through the fabric of her canvas-and-wallpaper coat, but the flesh Dawn could see through the hole was still pink and healthy looking, not riddled with splinters or decorated with fishhooks. Sunset was walking like his left ankle hurt him, leaning on City and wincing every time he put his foot down. City, of course, looked perfectly fine. City always did. He was the fastest and strongest of their current foragers, too clever for the harvest by far; he'd been making runs into the body of the House since the day he was declared of age, taking greater risks than any of the others, and always coming back. 
All of them were named for things that had existed in the world outside, fading dreams of freedom and comfort passed down, long after they had lost all meaning—Dawn was fairly sure her name and Sunset's meant the same thing, and that a Rill was something to do with water, but she'd never experienced those things for herself. Neither had her parents, or her grandparents, or anyone she'd ever met. Before the House, many of them had lived in a city. It had been the place where they made their homes and built their wonderful creations, where they wove their magic and spent their lives in comfort and plenty. Nothing like the lives they eked out now, scrounging for the House's scraps, running from the House's monsters. City was the best and most powerful name they had, and it only made sense for the one who carried it to be the best of them all. City was going to lead them one day, Dawn knew it, and when he did, he would find a way to put a stop to the slow erosion of the safe routes between settlements. He would make their world steady and stable again. A happy life could be lived surrounded by monsters, if you knew where to throw your stones and set your feet. Dawn knew both those things. Just like she knew that it wasn't right for two members of a raiding team to come back injured when the third looked completely fine. She shot City an uneasy look as she moved to support Sunset, helping him take the weight off his foot. "What happened?" "Ambush while we were cleaning out a cold room," said Rill. "You should have seen it—more wheels of cheese than you ever get in one place. And jam! Actual jam, in jars!" Dawn's eyes widened. Jam in jars was the best kind. After you swallowed the sweetness, you'd have glass left to use in making weapons, or traps, or even the most basic sort of scanners, the ones that would barely give you a few seconds' warning before a cellarspawn burst out of the wall and started trying to drag you away. Better scanners needed better material. 
A few months ago, one of the raiding parties had come back to the carnival with a whole box of silverware that she'd been able to melt down for wire. "Your detectors," she said abruptly. "Didn't they go off?" "It wasn't a cellarspawn," said Rill. Her voice was hollow. "We were deep in the Boilerbilges, nowhere near the Hauntwoods, but it wasn't a cellarspawn." "Then what?" asked Dawn. "Wickerfolk," said Sunset. Dawn managed, barely, not to recoil, her eyes snapping around to the cut on Rill's arm. "Did you clean the wound?" #emph[Just one splinter, and she could be]  … "This wasn't our first raid," said City, with uncharacteristic sharpness. "Don't act like you'd have done any better. Our detectors weren't keyed for wickerfolk. There wasn't any reason they should have been. So, we got caught off guard." "But we still got some cheese," said Sunset, trying to sound upbeat. "It wasn't a total loss." "I cleaned it," said Rill. "Nothing's lodged inside. I just wish I knew why they'd been hunting there. It was so close to the safe route …" "And nothing attacked you on safe ground?" Rill shook her head. Dawn exhaled. The safe paths through the House had been hard drawn, paid for in blood and brutality over the course of years. The House didn't want them to be safe anywhere, of course; it needed their fear, and a bed where nothing would try to snatch you out from beneath the covers went counter to everything it stood for. But people needed safety to stay #emph[people] . They needed time to stop and breathe, before the stress stopped their hearts and left them rotting, no longer tempting sources of terror for the House to harvest. So, bit by bit, they had established the treaties. Unwritten things, of course, things that could be forgotten or passed on incorrectly, so there was always the question of whether the path you were on was #emph[really] safe, or just looked like it. 
Still, if you stayed on the paths and watched for the sigils, you should be able to move between zones without too much risk. "We saw another of those groups of strangers," said Sunset. "They were wearing clothing like nothing I'd ever seen before, and wandering around like they had no idea they were in danger. And they all had their glimmers! Every one of them!" "Huh," said Dawn. There had been a great shaking in the House a few harvest cycles back, knocking things off shelves and sending dust cascading down from the rafters. And then it had passed, and everything had been normal—until it wasn't. Strange doors had begun to appear outside the safe zones. Doors appearing and disappearing according to their own whim was nothing new, but not at the rate these seemed to crop up—more and more every week it seemed. Rill had seen one open, and the air that blew through the door had been fresh and sweet, so sweet it hurt the throat, like it was blowing from another world. More strangers had started showing up after that. Never many at the same time, but a steady enough stream that everyone knew by this point. People had been vanishing from the edges of the safe zones since the beginning. It wasn't until the appearance of the doors that they'd started to vanish from the paths. "Can't you walk any faster?" demanded City. Dawn frowned at him. "Sunset's hurt. He's going as fast as he can. You're not carrying anything so urgent that we should have to hurry." "Sorry," said City, slumping a bit. "I'm just … tired." "We're almost there." They reached the top of the low hill that split the carnival grounds in two, and there it was ahead of them: home. One of the few places Duskmourn had never been able to taint or tarnish, a safe zone from the beginning. The tents were a riot of color, patched canvas walls rippling in the breeze and pennants snapping overhead. 
Magical lights wrapped around the tentpoles and stretched between the tents themselves, creating a web of promised sanctuary. The central bonfire was lit; someone was playing a fiddle tune that sounded like a full stomach and a warm bed. Dawn exhaled. Sometimes she chafed against the need to be a good member of the community, but she couldn't deny the joy she felt at the sight of home. #figure(image("004_Children of the Carnival, Part 1/01.jpg", width: 100%), caption: [Art by: <NAME>], supplement: none, numbering: none) Rill and Sunset looked like they felt the same way. City, though … City's face was cold and solemn, not a smile in sight. He pulled ahead as they walked down the hill toward the main tent. "You know, Dawn," he said. "Your detectors kept us safe for years. Without you, we would have died a dozen times over. That's why we wanted you to keep harvesting and artifacting until the Benefactors came for you, not come with us on the foraging runs." "I know," she said, bemused. "Why are you telling me this now?" "Because things have changed." Dawn stumbled. City turned to look at her. #emph[Had his eyes always been that blue, that bright? Or was it just a trick of the carnival lights?] "Duskmourn gave us safety because we gave House what was needed more than anything: we gave Duskmourn the people to sustain it." "I don't …" "But something shifted. Something outside. And now, Duskmourn can do what we've been doing all along. Now House can #emph[hunt] ." City looked genuinely regretful as he stopped walking and turned fully around, shrugging off his coat. Beneath it he wore a tabard embroidered with the wings of a vast and colorful moth. He smiled as he spread his arms, and the smile was the smile of Duskmourn, and it was terrible to behold. "Duskmourn doesn't need us anymore," he said, while Rill gasped in horror and Dawn and Sunset stared. "But there is a way to survive, if you're clever enough to see it. 
There's still a place for those of us who wish to serve the Devouring Father, among the Cult of Valgavoth. Join me, Dawn. Spread your wings and fly toward his light." Rill and Sunset were injured. Dawn was the only one who stumbled back, away from the suddenly menacing shape of her friend. Behind her, something smashed. She looked over her shoulder. A massive razorkin was kicking down the fence, what looked like an entire army of Duskmourn's monsters following close behind. She spun around and ran. The time of truce was ended, but maybe, if she was lucky, she could find one of those mysterious doors. Maybe she could be the one who found out where the wind came from. Dawn ran, and the carnival fell, screaming, behind her.
https://github.com/dashuai009/dashuai009.github.io
https://raw.githubusercontent.com/dashuai009/dashuai009.github.io/main/src/content/blog/019.typ
typst
#let date = datetime(
  year: 2022,
  month: 3,
  day: 14,
)
#metadata((
  "title": "Text Classification",
  "author": "dashuai009",
  description: "Term project for the BUPT artificial intelligence course",
  pubDate: "'Jul 08 2022'",
  subtitle: [Artificial intelligence, text classification],
))<frontmatter>

#import "../__template/style.typ": conf
#show: conf

== 1. Objectives <实验目的>

- Master data preprocessing methods and preprocess the training set;
- Master document modeling methods and model the documents in the corpus;
- Understand the principles of classification algorithms and train a text classifier with supervised machine learning;
- Use the trained text classifier to label previously unseen documents;
- Master methods for evaluating classifier performance.

== 2. Experiment Type <实验类型>

Design and implementation of a data mining algorithm.

== 3. Requirements <实验要求>

- Number of text categories: \>= 10;
- Training set: \>= 50,000 documents; 5,000 per category on average.
- Test set: \>= 50,000 documents; 5,000 per category on average.

== 4. Tasks <实验内容>

Use a classification algorithm to mine text data. This mainly includes:

- Building the corpus, chiefly by collecting web documents with a crawler;
- Preprocessing the corpus, including document modeling: denoising, word segmentation, building a dictionary, and representing documents with a bag-of-words or topic model;
- Choosing a classification algorithm (naive Bayes (required), SVM/others#strike[#strong[how many of these were actually skipped at grading?];]; etc.), training the text classifier, and understanding the modeling principles, implementation, and parameters of the chosen algorithm;
- Classifying the documents in the test set;
- Evaluating the test-set results with precision and recall: compute per-category precision and recall, plus overall precision and recall.

== 5. Preparation <实验准备>

=== 5.1. Environment <实验环境>

#strong[fedora 33]

I personally prefer a Linux environment; the steps below should also work on Windows, though some file paths may need to change.

Note that hardware factors such as disk read speed also affect the results (speed, etc.).

=== Dataset Preparation

#link("http://thuctc.thunlp.org/message")[THUCNews]

Dataset size: 1.45 GB

Number of samples: over 800,000

Dataset details: #link("http://thuctc.thunlp.org")[thuctc]

Create two new folders, `trainData` and `testData`, in the extracted directory.

Note: extracting on Linux adds an extra directory level and a `__MACOSX` folder; this does not affect the experiment.

The extraction result is shown below.

#figure(
  image("019/thu.png", width: 200pt),
  caption: [
    Text classification experiment: extraction result
  ],
)

=== cppjieba <cppjieba>

#link("https://github.com/yanyiwu/cppjieba")[cppjieba] is the C++ version of the "Jieba" Chinese word segmenter.

It is about 10x faster than the Python version, which is plenty fast for this experiment!

Usage

- Dependencies \> `g++ (version >= 4.1 is recommended) or clang++;` \> \> `cmake (version >= 2.6 is recommended);`
- Download and build \> `git clone --depth=10 --branch=master git://github.com/yanyiwu/cppjieba.git` \> \> `cd cppjieba` \> \> `mkdir build` \> \> `cd build` \> \> `cmake ..` \> \> `make`

The result is as follows.

#figure(
  image("019/jieba.png", width: 200pt),
  caption: [
    Text classification experiment
  ],
)

We only need to modify `cppjieba\test\demo.cpp`, but we must run `make` under `cppjieba\build` to compile, and then run `./demo` to execute the segmentation program.

=== Naive Bayes Classifier <朴素贝叶斯分类器>

The goal of Bayesian classification is to find the most probable target value $v_(M A P)$ given the attribute values $< a_1 , a_2 , . . . , a_n >$ describing an instance. Applying Bayes' theorem, and then the naive conditional-independence assumption in the last step:

$ v_(M A P) & = a r g max_(v_j) P (v_j \| a_1 , a_2 , . . . , a_n)\
& = a r g max_(v_j in V) frac(P (a_1 , a_2 , . . . , a_n \| v_j) P (v_j), P (a_1 , a_2 , . . . , a_n))\
& = a r g max_(v_j in V) P (a_1 , a_2 , . . . , a_n \| v_j) P (v_j)\
& = a r g max_(v_j in V) P (v_j) product_i P (a_i \| v_j) $

We adopt the m-estimate with a uniform prior and m equal to the vocabulary size, so

$ P (w_k \| v_j) = frac(n_k + 1, n + lr(|V o c a b u l a r y|)) $

Define the following function:

`Learn_Naive_Bayes_Text( Examples, V )`

Examples is a set of text documents together with their target values; V is the set of all possible target values. This function learns the probability terms $P (w_k \| v_j)$ and $P (v_j)$.

```cpp
Collect all words, punctuation, and other tokens occurring in Examples
Vocabulary <= the set of all words and tokens occurring in any document of Examples
Compute the required probability terms $P(v_j)$ and $P(w_k|v_j)$
For each target value $v_j$ in V
    $docs_j\leftarrow$ the subset of documents in Examples whose target value is $v_j$
    $P(v_j)\leftarrow|docs_j| / |Examples|$
    $Text_j\leftarrow$ the single document built by concatenating all members of $docs_j$
    $n\leftarrow$ the total number of distinct word positions in $Text_j$
    For each word $w_k$ in Vocabulary
        $n_k\leftarrow$ the number of times $w_k$ occurs in $Text_j$
        $P(w_k|v_j)\leftarrow(n_k+1) / (n+|Vocabulary|)$
```

== $chi^2$ Test <chi2检验>

$ chi^2 (t , c) = frac(N (A D - B C)^2, (A + B) (A + C) (C + D) (B + D)) $

=== svm <svm>

Download: #link("http://www.csie.ntu.edu.tw/~cjlin/cgi-bin/libsvm.cgi?+http://www.csie.ntu.edu.tw/~cjlin/libsvm+tar.gz")[libsvm]

Then run `make`, which yields three executables: `svm-scale`, `svm-train`, and `svm-predict`. They handle data scaling, model training, and prediction, respectively.

Run `./svm-scale` without arguments to see the detailed options.

Search the web for more background.

== Implementation <代码实现>

The program consists of five parts:

```cpp
trainingData();
makeDict();
makeSvm();
testingData();
printAns();
```

=== Variable Declarations <变量声明>

```cpp
#define MYMODE_RW (S_IRUSR | S_IWUSR)
using namespace std;
const char *const DICT_PATH = "../dict/jieba.dict.utf8";
const char *const HMM_PATH = "../dict/hmm_model.utf8";
const char *const USER_DICT_PATH = "../dict/user.dict.utf8";
const char *const IDF_PATH = "../dict/idf.utf8";
const char *const STOP_WORD_PATH = "../dict/stop_words.utf8";
const string dataSet = "../../THUCNews/THUCNews/";
// const string dataSet = "../../THUCNews/";
const string svmTrainData = "../../THUCNews/trainData/";
const string svmTestData = "../../THUCNews/testData/";
const int MAXS = 8e6 + 10; // file size limit
// Vtot: total number of categories
// maxCommon: take the top 200 most relevant words as the bag of words
// testCnt: if a category has 2*testCnt articles,
// take testCnt of them as test data
// and another testCnt as training data
//再取testCnt篇作为训练数据 const int Vtot = 11, maxCommon = 200, testCnt = 5000; //所有种类 string V[Vtot] = {"体育", "娱乐", "家居", "彩票", "房产", "教育", "时政", "游戏", "社会", "科技", "财经"}; //dic[i]第i类文章中出现的所有词 map<string, int> dic[Vtot + 1]; //p[i][word] p(word|vj=i) //第i类文章中,word出现的概率,做平滑处理 map<string, int> p[Vtot]; //kappa[i][word] word和第i类文章的kappa^2值 map<string, int> kappa[Vtot + 1]; //在处理svm数据集时,每个单词的编号 map<string, int> svmIndex; //idf值 map<string, double> idf; //fileList[i]第i类文章列表 vector<string> fileList[Vtot]; //混淆矩阵 int confusionMatrix[Vtot][Vtot]; int docs[Vtot + 1]; // docs[i]:第i类文章训练或测试的数目 int n[Vtot + 1]; // n[i]:第i类有多少个单词(有重复) int precision[Vtot], recall[Vtot]; //准确率和回归率 //多线程id long unsigned int thrd[Vtot]; #define dictAll dic[Vtot] //结巴分词器 cppjieba::Jieba jieba(DICT_PATH, HMM_PATH, USER_DICT_PATH, IDF_PATH, STOP_WORD_PATH); ``` === {训练数据部分} <训练数据部分> 下面是处理数据的多线程函数 ```cpp /** * 多线程函数 * 处理数据 * 读入文件->分词->计算dic[type]->计算kappa[type] */ void *trainingData(void *t) { int type = (long long)t; auto &dict = dic[type]; auto &kap = kappa[type]; char sentence[MAXS]; for (int d = 0; d < docs[type]; ++d) { //读入文件 string fileName = dataSet + V[type] + "/" + fileList[type][d]; int fd = open(fileName.c_str(), O_RDONLY); int len = pread(fd, sentence, MAXS, 0); close(fd); //如果文件大小超出MAXS会出错 if (len < 0) continue; sentence[len] = '\0'; vector<pair<string, string>> tagres; //分词 jieba.Tag(sentence, tagres); //计算dic[type]->计算kappa[type] set<string> thisArticle;//本篇文章单词集合 for (auto it : tagres) { const auto &word = it.first; //单词 if (strstr(it.second.c_str(), "n") != NULL && word.length() > 2) { //名词 且 长度大于一 dict.find(word) != dict.end() ? dict[word]++ : dict[word] = 1; thisArticle.insert(word); } } for (auto it : thisArticle) { kap.find(it) != kap.end() ? 
kap[it]++ : kap[it] = 1; } } cout << V[type] << "\tDone\n"; return NULL; } ``` 想要调用上面的线程函数,需要先读取文件列表,如下: ```cpp //读取文件列表 void readFileLists(string dir, vector<string> &fileList) { // cout<<dir<<'\n'; DIR *d = opendir(dir.c_str()); struct dirent *entry; while ((entry = readdir(d)) != NULL) { if (strstr(entry->d_name, ".txt") != NULL) fileList.push_back(entry->d_name); } closedir(d); } //读取文件列表并处理训练数据 void trainingData() { for (int i = 0; i < Vtot; ++i) { readFileLists(dataSet + V[i], fileList[i]); } for (int i = 0; i < Vtot; ++i) { docs[i] = min(int(fileList[i].size()) / 2, testCnt); // testCnt方便测试 cout << V[i] << " 类的文件个数为 " << docs[i] << '\n'; docs[Vtot] += docs[i]; int res = pthread_create(&thrd[i], NULL, trainingData, (void *)i); if (res) { printf("Create thress NO.%d failed\n", i); exit(res); } } for (int i = 0; i < Vtot; ++i) { pthread_join(thrd[i], NULL); } } ``` === 生成字典部分 <生成字典部分> ```cpp //抽取词典,使用kappa^2检验 void makeDict() { for (int i = 0; i < Vtot; ++i) { vector<pair<double, string>> mostCommon; for (auto it : kappa[i]) { double A = 0, B = 0, C = 0, D = 0; A = it.second; C = docs[i] - A; const auto &word = it.first; for (int j = 0; j < Vtot; ++j) if (i != j) { if (kappa[j].find(word) != kappa[j].end()) { B += kappa[j][word]; } } D = docs[Vtot] - docs[i] - B; double k = docs[Vtot] * (A * D - B * C) * (A * D - B * C) / ((A + C) * (A + B) * (B + D) * (C + D)); mostCommon.push_back({k, word}); } sort(mostCommon.begin(), mostCommon.end()); reverse(mostCommon.begin(), mostCommon.end()); int item = 0; cout << V[i] << ':'; for (auto it : mostCommon) { ++item; if (item > maxCommon) break; const auto &word = it.second; const int cnt = dic[i][word]; n[i] += cnt; dictAll.find(word) != dictAll.end() ? 
dictAll[word] += cnt : dictAll[word] = cnt; } cout << V[i] << "单词个数" << n[i] << '\n'; n[Vtot] += n[i]; /*for (int j = max(0, (int)mostCommon.size() - 20); j < (int)mostCommon.size(); ++j) { cout << "(" << mostCommon[j].first << ' ' << mostCommon[j].second << ") "; } cout << '\n';*/ } } ``` === 生成svm的训练数据文件和测试数据文件 <生成svm的训练数据文件和测试数据文件> ```cpp /** * 多线程函数 * 整理svm用到的数据文件 * 包括测试文件和输出文件 * 要按照libsvm要求的数据格式来 */ void *svmData(void *t) { int type = (long long)t; //先打开训练数据文件 int fdout = open((svmTrainData + V[type]).c_str(), O_RDWR | O_CREAT, MYMODE_RW); if (fdout == -1) { cout << errno << '\n'; perror("open"); } auto &dict = dic[type]; auto &kap = kappa[type]; string outBuf; char sentence[MAXS]; int fdoffset = 0; for (int d = 0; d < (int)fileList[type].size() && d < 2 * docs[type]; ++d) { if (d == docs[type]) { //这之后是测试数据集 close(fdout); fdout = open((svmTestData + V[type]).c_str(), O_RDWR | O_CREAT, MYMODE_RW); fdoffset = 0; if (fdout == -1) { cout << errno << '\n'; perror("open"); } } //读取文件并进行分词 string fileName = dataSet + V[type] + "/" + fileList[type][d]; int fd = open(fileName.c_str(), O_RDONLY); long long int len = pread(fd, sentence, MAXS, 0); close(fd); if (len < 0) continue; //如果文件大小超出MAXS会出错 sentence[len] = '\0'; // cout << sentence << '\n'; vector<pair<string, string>> tagres; jieba.Tag(sentence, tagres); //统计这篇文章中的词频 map<string, int> art; int totArt = 0; for (auto it : tagres) { const auto &word = it.first; //单词 if (dictAll.find(word) != dictAll.end()) { art.find(word) != art.end() ? 
art[word]++ : art[word] = 1; totArt++; } } outBuf = to_string(type) + " "; for (auto it : art) { auto &word = it.first; double tf = it.second * 1.0 / totArt; // tf-idf=tf*idf outBuf += to_string(svmIndex[word]) + ":" + to_string(tf * idf[word]) + " "; } outBuf += '\n'; pwrite(fdout, outBuf.c_str(), outBuf.length(), fdoffset); fdoffset += outBuf.length(); } close(fdout); return NULL; } /** * 处理svm用到的数据 */ void makeSvm() { //对单词进行编号,并计算idf值 int item = 0; for (auto it : dictAll) { auto &word = it.first; svmIndex[word] = ++item; for (int i = 0; i < Vtot; ++i) { if (kappa[i].find(word) != kappa[i].end()) { idf.find(word) != idf.end() ? idf[word] += kappa[i][word] : idf[word] = kappa[i][word]; } } } for (auto &it : idf) { it.second = log10(docs[Vtot] * 2 / (it.second + 1)); } //多线程处理数据,每个线程处理一个类别的数据 for (int i = 0; i < Vtot; ++i) { int res = pthread_create(&thrd[i], NULL, svmData, (void *)i); if (res) { printf("Create thress NO.%d failed\n", i); exit(res); } } for (int i = 0; i < Vtot; ++i) { pthread_join(thrd[i], NULL); } //处理完成之后,将所有类别的数据合到一起 char buf[MAXS]; int fdout = open((svmTrainData + "trainData.txt").c_str(), O_RDWR | O_CREAT, MYMODE_RW); if (fdout == -1) { cout << errno << '\n'; perror("open"); } off_t off = 0; for (int i = 0; i < Vtot; ++i) { int fd = open((svmTrainData + V[i]).c_str(), O_RDONLY); int len = pread(fd, buf, MAXS, 0); close(fd); if (len < 0) continue; buf[len] = '\0'; pwrite(fdout, buf, strlen(buf), off); cout << strlen(buf) << '\n'; off += strlen(buf); cout << V[i] << ' ' << strlen(buf) << ' ' << off << '\n'; } close(fdout); fdout = open((svmTestData + "testData.txt").c_str(), O_RDWR | O_CREAT, MYMODE_RW); off = 0; for (int i = 0; i < Vtot; ++i) { int fd = open((svmTestData + V[i]).c_str(), O_RDONLY); int len = pread(fd, buf, MAXS, 0); close(fd); if (len < 0) continue; buf[len] = '\0'; pwrite(fdout, buf, strlen(buf), off); off += strlen(buf); cout << V[i] << ' ' << strlen(buf) << ' ' << off << '\n'; } close(fdout); } ``` === 进行文本分类 <进行文本分类> 
先预处理出p的值, ```cpp /** * 朴素贝叶斯分类器 */ void *bayes(void *t) { int type = (long long)t; cout << "classfiying " << V[type] << "\n"; char sentence[MAXS]; for (int f = docs[type]; f < (int)fileList[type].size(); ++f) { if (f - docs[type] > docs[type]) break; //读取文件 string fileName = dataSet + V[type] + "/" + fileList[type][f]; int fd = open(fileName.c_str(), O_RDONLY); int len = pread(fd, sentence, MAXS, 0); close(fd); //分词 if (len < 0) continue; //读入出错,buf不够之类的 sentence[len] = '\0'; vector<pair<string, string>> tagres; jieba.Tag(string(sentence), tagres); //计算argmax(p) pair<double, int> ans[Vtot]; for (int i = 0; i < Vtot; ++i) { ans[i] = {docs[i] * 1.0 / docs[Vtot], i}; } for (auto it : tagres) { if (dictAll.find(it.first) != dictAll.end()) { for (int i = 0; i < Vtot; ++i) { ans[i].first += p[i][it.first]; } } } sort(ans, ans + Vtot); confusionMatrix[type][ans[Vtot - 1].second] += 1; } cout << "classfiy " << V[type] << "\tdone\n"; return NULL; } //训练数据 void testingData() { //预处理p[i][word]值 cout << "字典长度=" << dictAll.size() << '\n'; for (int vj = 0; vj < Vtot; ++vj) { for (auto it : dictAll) { const auto &word = it.first; if (dic[vj].find(word) != dic[vj].end()) { p[vj][word] = log10(dic[vj][word] + 1) - log10(n[vj] + dictAll.size()); } else { p[vj][word] = log10(1) - log10(n[vj] + dictAll.size()); } } } //多线程进行文本分类 for (int i = 0; i < Vtot; ++i) { int res = pthread_create(&thrd[i], NULL, bayes, (void *)i); if (res) { printf("Create thress NO.%d failed\n", i); exit(res); } } for (int i = 0; i < Vtot; ++i) { pthread_join(thrd[i], NULL); } } ``` === 输出结果 <输出结果> ```cpp //输出想要的答案 void printAns() { double avrPre = 0, avrRecall = 0; for (int v1 = 0; v1 < Vtot; ++v1) { cout << V[v1] << ":\t"; for (int v2 = 0; v2 < Vtot; ++v2) { precision[v2] += confusionMatrix[v1][v2]; recall[v1] += confusionMatrix[v1][v2]; cout << confusionMatrix[v1][v2] << '\t'; } cout << '\n'; } for (int v = 0; v < Vtot; ++v) { cout << V[v] << '\n'; cout << "准确率=" << confusionMatrix[v][v] * 100.0 / 
precision[v] << "% "; avrPre += confusionMatrix[v][v] * 100.0 / precision[v]; cout << "召回率=" << confusionMatrix[v][v] * 100.0 / recall[v] << "%\n"; avrRecall += confusionMatrix[v][v] * 100.0 / recall[v]; } cout << "平均准确率=" << avrRecall / Vtot << "% " << "平均召回率=" << avrRecall / Vtot << "\n"; } ``` == 实验结果 <实验结果> 先输出训练的文件个数,训练完成之后,统计每个类别有多少单词。 之后,得出字典长度为2037。 #figure( image("019/ans1.png", width: 300pt), caption: [ 结果 ], ) 预测完成之后是混淆矩阵。 #figure( image("019/ans2.png"), caption: [ 文本分类实验 混淆矩阵 ], ) 输出召回率和准确率,最后输出运行时间。 #strong[可以看到最后运行时间仅为12秒,这就是c++的长处之一。] 总共分析了55000篇文章,从读入到分词,到其他计算。总共执行时间非常短。 相比于python的10min左右要好得多。 #figure( image("019/ans3.png", width: 250pt), caption: [ 结果 ], ) 最后的实验结果也比较理想。准确率和召回率都比较高。 == svm <svm-1> 这里我并没有用到梯度预测、n折交叉验证(训练时间太长,实验时间不够)。 但我们完整实现了用libsvm的工具进行文本分类的必要步骤。以下均在`libsvm`目录下。 - 数据整理。 `../THUCNews/trainData/trainData.txt`是待训练数据的原始形式,`../THUCNews/testData/testData.txt`是待预测数据。用`./svm-scale -l 0 -r train.scale ../THUCNews/trainData/trainData.txt` 和`./svm-scale -l 0 -r test.scale ../THUCNews/testData/testData.txt`命令, 可以将训练数据、测试数据整理为标准格式,且下界为0. - 模型训练 `./svm-train -h 0 -g 0.001 -c 10 train.scale my.model`得到模型文件。 - 数据预测 `./svm-predict test.scale my.model ans.out` 结果如下 #figure( image("019/svmans.png"), caption: [ svman ], ) 实验结果较为理想。当然,该方法还有很多可以改进的空间,这里就不深究。 == {代码} <代码> #link("https://paste.ubuntu.com/p/nGWbQWmttr/")[超链接] #link("https://paste.ubuntu.com/p/p9KQmh8Q2T/")[之前一版]
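To make the two core computations above easy to experiment with, here is a hedged Python sketch of χ² feature scoring and smoothed naive Bayes scoring. It mirrors the C++ logic only loosely; the function names and the toy documents are invented for illustration.

```python
import math
from collections import Counter

def chi2(A, B, C, D):
    """chi^2(t, c) from the 2x2 document counts:
    A: class-c docs containing t, B: other docs containing t,
    C: class-c docs without t,  D: other docs without t."""
    N = A + B + C + D
    return N * (A * D - B * C) ** 2 / ((A + B) * (A + C) * (C + D) * (B + D))

def train_nb(docs_by_class):
    """docs_by_class: {class: [token lists]}. Returns, per class, the log prior
    and Laplace-smoothed log P(w|c), as in the m-estimate formula above."""
    total_docs = sum(len(d) for d in docs_by_class.values())
    vocab = {w for docs in docs_by_class.values() for doc in docs for w in doc}
    model = {}
    for c, docs in docs_by_class.items():
        counts = Counter(w for doc in docs for w in doc)
        n = sum(counts.values())  # word positions in this class (with repetition)
        log_pw = {w: math.log10((counts[w] + 1) / (n + len(vocab))) for w in vocab}
        model[c] = (math.log10(len(docs) / total_docs), log_pw)
    return model

def classify(model, doc):
    # argmax over classes of: log prior + sum of log P(w|c); unknown words are skipped
    return max(model, key=lambda c: model[c][0] +
               sum(model[c][1].get(w, 0) for w in doc))

docs = {"sports": [["game", "score"], ["score", "team"]],
        "finance": [["stock", "market"], ["market", "fund"]]}
model = train_nb(docs)
print(classify(model, ["score", "game"]))   # -> sports
```

A word contained only in class-c documents (for example `A=2, B=0, C=0, D=2`) scores `chi2(...) = 4.0` on this toy corpus, which is the maximum possible for 4 documents — exactly the kind of class-discriminating word the dictionary-building step keeps.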
https://github.com/UBOdin/data_structures_book
https://raw.githubusercontent.com/UBOdin/data_structures_book/main/chapters/3-asymptotic.typ
typst
#set heading(numbering: "1.")
#import "@preview/plotst:0.2.0": *

= Asymptotic Runtime Complexity

Data Structures are the fundamentals of algorithms: How efficient an algorithm is depends on how the data is organized. Think about your workspace: If you know exactly where everything is, you can get to it much faster than if everything is piled up randomly. Data structures are the same: If we organize data in a way that meets the need of an algorithm, the algorithm will run much faster (remember the array vs linked list comparison from earlier?).

Since the point of knowing data structures is to make your algorithms faster, one of the things we'll need to talk about is how fast the data structure (and algorithm) is for specific tasks. Unfortunately, "how fast" is a bit of a nuanced comparison. I could time how long algorithms *A* and *B* take to run, but what makes a fair comparison depends on a lot of factors:

- How big is the data that the algorithm is running on?
  - *A* might be faster on small inputs, while *B* might be faster on big inputs.
- What computer is running the algorithm?
  - *A* might be much faster on one computer, *B* might be much faster on a network of computers.
  - *A* might be especially tailored to Intel x86 CPUs, while *B* might be tailored to the non-uniform memory latencies of AMD x86 CPUs.
- How optimized is the code?
  - Hand-coded optimizations can account for multiple orders of magnitude in algorithm performance.

In short, comparing two algorithms requires a lot of careful analysis and experimentation. This is important, but as computer scientists, it can also help to take a more abstract view. We would like to have a shorthand that we can use to quickly convey the 50,000-ft view of "how fast" the algorithm is going to be. That shorthand is asymptotic runtime complexity.

== Why is Asymptotic Analysis important?
Try the following code in python:

```python
from random import randrange
from datetime import datetime

N = 10000
TRIALS = 1000

#### BEGIN INITIALIZE data
data = []
for x in range(N):
  data += [x]
data = list(data)
#### END INITIALIZE data

contained = 0
start_time = datetime.now()
for x in range(TRIALS):
  if randrange(N) in data:
    contained += 1
end_time = datetime.now()

time = (end_time - start_time).total_seconds() / TRIALS

print(f"For N = {N}, that took {time} seconds per lookup")
```

This code creates a list of `N` elements, and then does `TRIALS` checks to see if a randomly selected value is somewhere in the list. This is a toy example, but see what happens as you increase the value of `N`. In most versions of python, you'll find that every time you multiply `N` by a factor of, for example 10, the total time taken per lookup grows by the same factor.

Now try something else. Modify the code so that the `data` variable is initialized as:

```python
#### BEGIN INITIALIZE data
data = []
for x in range(N):
  data += [x]
data = set(data)
#### END INITIALIZE data
```

You'll find that now, as you increase `N`, the time taken *per lookup* grows at a much smaller rate. Depending on the implementation of python you're using, this will either grow as $log N$ or only a tiny bit. The `set` data structure is much faster at checking whether an element is present than the `list` data structure.

Here are the results from the experiment on my own computer, with `list` marked in solid red and `set` marked in dashed green.
#{ let list_pts = ( (10, 8.99e-04), (200000, 2.545766), (400000, 4.505643), (600000, 8.956291), (800000, 13.229746), (1000000, 16.772886), ) let set_pts = ( (10, 8.720000000000001e-04), (200000, 1.108e-03), (400000, 1.309e-03), (600000, 1.283e-03), (800000, 1.228e-03), (1000000, 1.35e-03), ) let x_axis = axis(min: 0, max: 1000000, step: 200000, location: "bottom", title: [Size of Input ($N$)]) let y_axis = axis(min: 0, max: 20, step: 5, location: "left", title: [Runtime (s)]) let list_plt = graph_plot( plot(data: list_pts, axes: (x_axis, y_axis)), (100%, 25%), stroke: (paint: red, thickness: 2pt), markings: [], caption: [Code with `list` and `set`] ) let set_plt = graph_plot( plot(data: set_pts, axes: (x_axis, y_axis)), (100%, 25%), stroke: (paint: green, thickness: 2pt, dash: "dashed"), markings: [] ) overlay( (list_plt, set_plt), (70%, 25%), ) } <example_runtime_plot> There's two important things to take away from this experiment. First, the two lines are distinctly different: the runtime for `list` grows, not quite, but more or less as a straight line, while the runtime for set remains imperceptibly small. If you zoom in, you'll see that it's just about a horizontal line. Second, this pattern shows up for everyone. It doesn't matter what OS you're using, your CPU, your python version, or any other circumstance of how you run the code. If you're using a CPU that's ten times faster than mine, your numbers will be ten times bigger, but if you plot your results for the same experiment, your graphs will have the same general shape#footnote[ This is not strictly true. For some implementations of python, you might see a _slight_ increase in the runtime for `set` that will look like a logarithmic curve. The main point below, however, still holds. ]. The reason for this is simple: To find an element in a `list`, we need to check every element, one-by-one, until we find the element we're looking for. 
On the other hand, in a `set` (implemented as a hash table), there's only one possible place where a specific element might be found. In general, we only need to do a single check to test whether the element is present#footnote[
Again, not entirely true. Under certain circumstances, hash tables can be as bad as lists. We'll talk about this in much more detail later in the book.
].

Put another way, in a `list`, if we have twice as many elements, it will take twice as long to check each and every one. Your computer might be ten times faster than mine, but it will still take your computer 2 times as long to find an item in a list of 2 million elements than in a list of 1 million elements.

On the other hand, in a `set`, through the awesome power of hash tables, we only need to check a single element, regardless of whether the `set` contains 1 element, 100 elements, 1 million elements, or 1 trillion elements. No matter how big it gets, the cost to check whether an element is in the `set` will always be the same#footnote[
This is a bit of a lie. With enough elements you'll run out of memory and your program will either crash or start using 'virtual' memory, which is much slower. Still, for the sizes of data that we'll be dealing with for most of this book, it's a reasonable approximation to assume that the cost won't change.
].

The idea that data organization creates a predictable relationship between the amount of data and the cost of accessing the data is at the heart of data structures. As a result, it's useful to have some terms that we can use to get across relationships like these without constantly having to resort to saying things like "If you double the number of elements in the list, the runtime of finding an element also doubles." Asymptotic complexity, which we discuss in this chapter, is how we define these terms precisely.
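The "doubling" relationship described above is easy to check directly. The sketch below (illustrative only; absolute times will vary by machine) times one membership probe on a `list` and on a `set` at two sizes:

```python
from timeit import timeit

def lookup_time(container, n, trials=200):
    # Time `n - 1 in container`: for a list this is a worst-case probe,
    # since n - 1 sits at the very end.
    return timeit(lambda: (n - 1) in container, number=trials) / trials

for n in (100_000, 200_000):
    data = list(range(n))
    print(f"N={n}: list {lookup_time(data, n):.2e}s, "
          f"set {lookup_time(set(data), n):.2e}s")
```

On a typical machine the `list` time roughly doubles between the two sizes, while the `set` time barely moves.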
=== Some examples of asymptotic runtime complexity

Look at the documentation for data structures in your favorite language's standard library. You'll see things like:

- The cost of appending to a Linked List is $O(1)$
- The cost of finding an element in a Linked List is $O(N)$
- The cost of appending to an Array List is $O(N)$, but amortized $O(1)$
- The cost of inserting into a Tree Set is $O(log N)$
- The cost of inserting into a Hash Map is Expected $O(1)$, but worst case $O(N)$
- The cost of retrieving an element from a Cuckoo Hash is always $O(1)$

These are all examples of asymptotic runtimes, and they are intended to give you a quick at-a-glance idea of how well the data structure handles specific operations. Knowing these properties of the data structures you work with can help you to pick data structures that match the needs of your algorithm, and avoid major performance pitfalls.

=== Asymptotic Analysis in General

Although our focus in this book is mainly on asymptotic *runtime* complexity, asymptotic analysis is a general tool that can be used to discuss all sorts of properties of code and algorithms. For example:

- How fast is an algorithm?
- How much space does an algorithm need?
- How much data does an algorithm read from disk?
- How much network bandwidth will an algorithm use?
- How many CPUs will a parallel algorithm need?

== Runtime Growth Functions

Throughout most of the book, we'll use $T(N)$ to mean the runtime (T) of an algorithm run on an input of size $N$. You can think of this function as telling us "For an input of size $N$, this algorithm will take $T(N)$ seconds to run"

This is a little bit of an inexact statement, since the actual number of seconds it takes depends on the type of computer, nuances of implementation, and more. As we'll see later, this imprecision won't actually matter, but for now, you can assume that we're talking about a specific implementation, on a specific computer (e.g., your computer).
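Although $T(N)$ is an abstraction, you can always sample it empirically for a concrete implementation on a concrete machine. A rough sketch (the function and input sizes here are arbitrary choices, not from the text):

```python
from timeit import timeit

def estimate_T(f, make_input, sizes, trials=50):
    """Return [(N, seconds)] pairs: an empirical sample of T(N) for f."""
    samples = []
    for n in sizes:
        x = make_input(n)  # build the input outside the timed region
        t = timeit(lambda: f(x), number=trials) / trials
        samples.append((n, t))
    return samples

# Example: sample T(N) for summing a list of N numbers
# (we'd expect roughly linear growth).
for n, t in estimate_T(sum, lambda n: list(range(n)), [1000, 2000, 4000]):
    print(n, t)
```

Plotting such samples against $N$ is exactly how the charts in this chapter were produced.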
To make our lives easier, we're going to make a few assumptions about how $T(N)$ works:

1. For all $N >= 0$, it must be the case that $T(N) >= 0$
  - The algorithm can't take negative time to run.
2. For all $N >= N'$, it must be the case that $T(N) >= T(N')$
  - It shouldn't be the case that the algorithm runs faster on a bigger input #footnote[
In practice, this is not actually the case. We'll see a few examples of functions whose runtime can sometimes be faster on a bigger input. Still, for now, it's a useful simplification.
].

We call any function that follows these two rules a *growth function*, and since $T(N)$ is a runtime, we refer to it as a runtime growth function.

== Complexity Classes

Although we want to define asymptotic complexity classes precisely, it can help to start with a more intuitive idea. Remember the example above, where the two data structures behaved very differently: The runtime of `in` on a `list` grew linearly with the size of the `list`, while the runtime of `in` on a `set` was completely independent of the size of the `set`. We're going to group these behaviors into something that we're going to call *Complexity Classes*#footnote[
To be pedantic, what we'll be describing is called "simple complexity classes", but throughout this book, we'll refer to them as just complexity classes.
].

Let's look at a concrete example: @linlog_plot shows three different runtime growth functions: Green (dashed), Red (solid), and Blue (dash-dotted). For an input of size $N = 2$, the green dashed function appears to run the fastest (the best), while the blue dash-dotted function is the slowest (worst). By the time we get to $N = 10$, the roles have reversed, and the blue dash-dotted function is the best.
#{ let lin1 = ( (0, 1), (68, 20) ) let lin2 = ( (0, 0.5), (50, 20) ) let log1 = ( (0, 0.5), (1, 1.5), (4, 2.5), (9, 3.5), (16,4.5), (25,5.5), (36, 6.5), (49, 7.5), (64, 8.5), (81, 9.5) ) let x_axis = axis(min: 0, max: 80, step: 5, location: "bottom", title: [Size of Input ($N$)]) let y_axis = axis(min: 0, max: 21, step: 2, location: "left", title: [Runtime ($T(N)$)]) let pl_lin_1 = graph_plot( plot(data: lin1, axes: (x_axis, y_axis)), (100%, 25%), stroke: (paint: red, thickness: 2pt), markings: [], caption: "Different types of growth functions" ) let pl_lin_2 = graph_plot( plot(data: lin2, axes: (x_axis, y_axis)), (100%, 25%), stroke: (paint: green, thickness: 2pt, dash: "dashed"), markings: [] ) let pl_log_1 = graph_plot( plot(data: log1, axes: (x_axis, y_axis)), (100%, 25%), stroke: (paint: blue, thickness: 2pt, dash: "dash-dotted"), markings: [] ) overlay( (pl_lin_1, pl_lin_2, pl_log_1), (70%, 25%), ) } <linlog_plot> // https://raw.githubusercontent.com/johannes-wolf/cetz/master/manual.pdf So let's talk about these lines and what we can say about them. First, in this book, since we're taking the 50,000 ft view of algorithm performance, we're going to ignore what happens for "small" inputs. From this perspective, the blue dot-dashed line is the "best". But why is it better? If we look closely, both the green dashed and the red solid line are straight lines. The blue dot-dashed line starts going up faster than both the other two lines, but bends downward. In short, the blue dot-dashed line draws a function of the form $a log(N) + b$, while the other two lines draw functions of the form $a dot N + b$. For "big enough" values of $N$, any function of the form $a log(N) + b$ will always be smaller than any function of the form $a dot N + b$. On the other hand, the value of any two functions of the form $a dot N + b$ will always "look" the same. No matter how far we zoom out, those functions will always be a constant factor different. 
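The claim that any function of the form $a log(N) + b$ eventually drops below any function of the form $a dot N + b$ can be checked numerically. The coefficients below are arbitrary, chosen so that the logarithmic function starts out much worse:

```python
from math import log2

def crossover(f, g, n_max=10**9):
    """Return the first N (scanning powers of 2) where f(N) <= g(N), or None."""
    n = 1
    while n <= n_max:
        if f(n) <= g(n):
            return n
        n *= 2
    return None

slow_start_log = lambda n: 100 * log2(n) + 50  # expensive-looking log curve...
cheap_linear   = lambda n: 0.5 * n + 1         # ...vs a very cheap straight line

n = crossover(slow_start_log, cheap_linear)
print(f"log curve dips below the line by N = {n}")   # -> 4096
```

No matter how we pick the constants, the scan always terminates: the logarithmic curve wins for "big enough" $N$ — only the crossover point moves.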
Our 50,000 foot view of the runtime of a function (in terms of $N$, the size of its input) will be to look at the "shape" of the curve as we plot it against $N$.

== Formal Notation

Sometimes it's convenient to have a shorthand for writing down that a runtime belongs in a complexity class. We write:

$g(N) in Theta(f(N))$

... to mean that the mathematical function $g(N)$ belongs to the same *asymptotic complexity class* as $f(N)$. You may also see this written as an equality. For example

$T(N) = Theta(N)$

... means that the runtime function $T(N)$ belongs to the *linear* complexity class.

Continuing the example above, we would use our new shorthand to describe the two implementations of Python's `in` operator as:

- $T_"set" in Theta(log N)$
- $T_"list" in Theta(N)$

*Formalism:* A little more formally, $Theta(f(N))$ is the *set* of all mathematical functions $g(N)$ that belong to the same complexity class as $f(N)$. So, writing $g(N) in Theta(f(N))$ is saying that $g(N)$ is in ($in$) the set of all mathematical functions in the same complexity class as $f(N)$#footnote[
We are sweeping something under the rug here: We haven't precisely defined what it means for two functions to be in the same complexity class yet. We'll get to that shortly, after we introduce the concept of complexity bounds.
].

Here are some of the more common complexity classes that we'll encounter throughout the book:

- *Constant*: $Theta (1)$
- *Logarithmic*: $Theta (log N)$
- *Linear*: $Theta (N)$
- *Loglinear*: $Theta (N log N)$
- *Quadratic*: $Theta (N^2)$
- *Cubic*: $Theta (N^3)$
- *Exponential* $Theta (2^N)$

#figure(
  image("graphics/complexity-overview.svg", width: 50%),
  caption: [
    $Theta(f(N))$ is the set of all mathematical functions including $f(N)$ and everything that has the same "shape", represented in the chart above as a highlighted region around each line.
  ]
)

This list of complexity classes is presented in a specific order.
The later a class appears in the list above, the faster a function in the class grows. Any function that has a *linear* shape will always be smaller (for big enough values of $N$) than a function with a *loglinear* shape.

=== Polynomials and Dominant Terms

What complexity class does $10N + N^2$ fall into? Let's plot it and see:

#{
  let lin_pts = (
    (0, 0), (20, 200)
  )
  let quad_pts = (
    (0,0), (1,1), (2,4), (3,9), (4,16), (5,25), (6,36), (7,49), (8,64), (9,81), (10,100), (11,121), (12,144), (13,169), (14,196), (15,225), (16,256), (17,289), (18,324), (19,361), (20,400)
  )
  let target_pts = (
    (0,0), (1,11), (2,24), (3,39), (4,56), (5,75), (6,96), (7,119), (8,144), (9,171), (10,200), (11,231), (12,264), (13,299), (14,336), (15,375), (16,416), (17,459), (18,504), (19,551), (20,600)
  )
  let x_axis = axis(min: 0, max: 20, step: 1, location: "bottom", title: [Size of Input ($N$)])
  let y_axis = axis(min: 0, max: 600, step: 40, location: "left", title: [Runtime ($T(N)$)])
  let lin_plt = graph_plot(
    plot(data: lin_pts, axes: (x_axis, y_axis)),
    (100%, 25%),
    stroke: (paint: red, thickness: 2pt),
    markings: [],
    caption: [Comparing $10N$ (solid red), $N^2$ (dashed green), and $10N+N^2$ (dash-dotted blue)]
  )
  let quad_plt = graph_plot(
    plot(data: quad_pts, axes: (x_axis, y_axis)),
    (100%, 25%),
    stroke: (paint: green, thickness: 2pt, dash: "dashed"),
    markings: []
  )
  let target_plt = graph_plot(
    plot(data: target_pts, axes: (x_axis, y_axis)),
    (100%, 25%),
    stroke: (paint: blue, thickness: 2pt, dash: "dash-dotted"),
    markings: []
  )
  overlay(
    (lin_plt, quad_plt, target_plt),
    (70%, 25%),
  )
} <poly_plot>

@poly_plot compares these three functions. Observe that the dash-dotted blue line starts off very similar to the solid red line. However, as $N$ grows, its shape soon starts resembling the dashed green $N^2$ more than the solid red $10N$. Although we don't have the tools to prove it yet, take our word for it that this is a pattern.
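We can also see the dominant term taking over numerically: the ratio of $10N + N^2$ to $N^2$ alone approaches 1 as $N$ grows. This is an illustrative check, not a proof:

```python
# As N grows, the N^2 term accounts for nearly all of 10N + N^2.
for n in (10, 100, 1000, 10_000):
    ratio = (10 * n + n**2) / n**2
    print(f"N={n}: (10N + N^2) / N^2 = {ratio:.3f}")
```

The ratios shrink toward 1 (2.0, 1.1, 1.01, 1.001): the $10N$ term's contribution becomes negligible.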
In any polynomial (a sum of mathematical expressions), the complexity class of the "biggest" term starts to win out once we get to really big values of $N$.

In general, for any sum of mathematical functions:

$g(N) = f_(1)(N) + f_(2)(N) + ... + f_(k)(N)$

The complexity class of $g(N)$ is the greatest complexity class of any $f_(i)(N)$

For example:

- $10N + N^2 in Theta(N^2)$
- $2^N + 4N in Theta(2^N)$
- $1000 · N log(N) + 5N in Theta(N log(N))$

We'll prove this formally at the end of the chapter in @summation_proof.

=== $Theta$ in mathematical formulas

Sometimes we'll write $Theta(g(N))$ in regular mathematical formulas. For example, we could write:

$Theta(N) + 5N$

You should interpret this as meaning any function that has the form:

$f(N) + 5N$

... where $f(N) in Theta(N)$.

== Code Complexity

Let's see a few examples of how we can figure out the runtime complexity class of a piece of code.

```python
def userFullName(users: List[User], id: int) -> str:
  user = users[id]
  fullName = user.firstName + " " + user.lastName
  return fullName
```

The `userFullName` function takes a list of users, and retrieves the `id`th element of the list and generates a full name from the user's first and last names. For now, we'll assume that looking up any element of any array (`users[id]`), string concatenation (`user.firstName + " " + user.lastName`), assignment (`user = ...` and `fullName = ...`), and returns are all constant-time operations#footnote[
Array lookups being constant-time is a huge simplification, called the RAM model, that we'll roll back at the end of the book. Similarly, string concatenation is not quite $Theta(1)$. It's usually $Theta(N)$ where $N$ is the size of the strings being concatenated. However, as long as we assume that strings are relatively small, we'll pretend (for now) that string concatenation is constant-time.
]. Under these assumptions, the first, second, and third lines can each be evaluated in constant time $Theta(1)$.
The total runtime of the function is the time required to run each line, one at a time, or:

$T_"userFullName" (N) = Theta(1) + Theta(1) + Theta(1)$

Recall above, that $Theta(1)$ in the middle of an arithmetic expression can be interpreted as $f(N)$ where $f(N) in Theta(1)$ (it is a constant). That is, $f(N) = c$. So, the above expression can be rewritten as#footnote[
There's another simplification here. Technically, $f(N)$ is always within a bounded factor of a constant $c_1$, and likewise for $g(N)$, but we'll clarify this when we get to complexity bounds below.
]:

$T_"userFullName" (N) = c_1 + c_2 + c_3$

Adding three constant values together (even without knowing what they are, exactly) always gets us another constant value. So, we can say that $T_"userFullName" (N) in Theta(1)$.

```python
def updateUsers(users: List[User]) -> None:
  x = 1
  for user in users:
    user.id = x
    x += 1
```

The `updateUsers` function takes a list of users and assigns each user a unique id. For now, we'll assume that the assignment operations (`x = 1` and `user.id`), and the increment operation (`x += 1`) all take constant ($Theta(1)$) time. So, we can model the total time taken by the function as:

$T_"updateUsers" (N) = Theta(1) + sum_"user" (Theta(1) + Theta(1))$

Simplifying as above, we get

$T_"updateUsers" (N) = c_1 + sum_"user" (c_2 + c_3)$

Recalling the rule for summation of a constant, using $N$ as the total number of users, and then the rule for sums of terms, we get:

$T_"updateUsers" (N) = c_1 + N · (c_2 + c_3) = Theta(N)$

//////////////////////////////////////////////////////////////////////////

== Complexity Bounds

Not every mathematical function fits neatly into a single complexity class. Let's go to our python code example above. The `in` operator tests to see whether a particular value is present in our data. If `data` is a `list`, then the implementation checks every position in the list, in order.
Internally, Python implements the expression `target in data` with something like:

```python
def __in__(data, target):
    N = len(data)
    for i in range(N):
        if data[i] == target:
            return True
    return False
```

In the best case, the value we're looking for happens to be at the first position `data[0]`, and the code returns after a single check. In the worst case, the value we're looking for is at the last position `data[N-1]` or is not in the list at all, and we have to check every one of the $N$ positions.

Put another way, the *best case* behavior of the function is constant (exactly one check), while the *worst case* behavior is linear ($N$ checks). We can write the resulting runtime using a case distinction:

$T_"in" (N) = cases(
  a dot 1 + b & bold("if") mono("data[0] == target"),
  a dot 2 + b & bold("if") mono("data[1] == target"),
  "...",
  a dot (N-1) + b & bold("if") mono("data[N-2] == target"),
  a dot N + b & bold("if") mono("data[N-1] == target")
)$

We don't know the runtime exactly, as it is based on the computer and version of Python we are using. However, we can model it, in general, in terms of some upfront cost $b$ (e.g., for computing `N = len(data)`), and some additional cost $a$ for every time we go through the loop (e.g., for computing `data[i] == target`).

Since we don't know exactly where the target is, the runtime can vary from run to run. Let's do a quick experiment. The code below is like our example above, but measures the time for one lookup at a time. Each point it prints out is the runtime of a single lookup as the `list` gets bigger and bigger.
```python from random import randrange from datetime import datetime N = 100000 TRIALS = 400 STEP = int(N/TRIALS) data = list() for i in range(TRIALS): # Increase the list size by STEP for j in range(STEP): data += [i * STEP + j] start = datetime.now() # Measure how long it takes to look up a random element if randrange(i * STEP + STEP) in data: pass end = datetime.now() # Print out the total time in microseconds microseconds = (end - start).total_seconds() * 1000000 print(f"{i}, {microseconds}") ``` #{ let lin_pts = ( (0, 4.0), (1, 4.0), (2, 1.0), (3, 5.0), (4, 3.0), (5, 9.0), (6, 12.0), (7, 2.0), (8, 5.0), (9, 15.0), (10, 2.0), (11, 12.0), (12, 22.0), (13, 3.0), (14, 19.0), (15, 24.0), (16, 28.0), (17, 14.0), (18, 10.0), (19, 7.0), (20, 4.0), (21, 24.0), (22, 11.0), (23, 34.0), (24, 43.0), (25, 35.0), (26, 5.0), (27, 30.0), (28, 44.0), (29, 22.0), (30, 51.0), (31, 49.0), (32, 14.0), (33, 48.0), (34, 23.0), (35, 3.0), (36, 39.0), (37, 3.0), (38, 50.0), (39, 17.0), (40, 37.0), (41, 26.0), (42, 24.0), (43, 47.0), (44, 71.0), (45, 52.0), (46, 55.0), (47, 33.0), (48, 50.0), (49, 87.0), (50, 7.0), (51, 67.0), (52, 67.0), (53, 5.0), (54, 41.0), (55, 8.0), (56, 92.0), (57, 53.0), (58, 9.0), (59, 55.0), (60, 92.0), (61, 27.0), (62, 104.0), (63, 81.0), (64, 63.0), (65, 18.0), (66, 108.0), (67, 113.0), (68, 10.0), (69, 31.0), (70, 34.0), (71, 4.0), (72, 127.0), (73, 25.0), (74, 10.0), (75, 75.0), (76, 115.0), (77, 125.0), (78, 83.0), (79, 46.0), (80, 5.0), (81, 135.0), (82, 26.0), (83, 70.0), (84, 91.0), (85, 74.0), (86, 115.0), (87, 47.0), (88, 4.0), (89, 81.0), (90, 71.0), (91, 75.0), (92, 115.0), (93, 23.0), (94, 9.0), (95, 51.0), (96, 70.0), (97, 97.0), (98, 4.0), (99, 32.0), (100, 111.0), (101, 93.0), (102, 98.0), (103, 177.0), (104, 4.0), (105, 79.0), (106, 5.0), (107, 111.0), (108, 18.0), (109, 81.0), (110, 121.0), (111, 82.0), (112, 63.0), (113, 45.0), (114, 101.0), (115, 75.0), (116, 49.0), (117, 128.0), (118, 79.0), (119, 93.0), (120, 100.0), (121, 43.0), (122, 
10.0), (123, 87.0), (124, 46.0), (125, 77.0), (126, 152.0), (127, 47.0), (128, 220.0), (129, 35.0), (130, 198.0), (131, 100.0), (132, 265.0), (133, 195.0), (134, 104.0), (135, 177.0), (136, 70.0), (137, 130.0), (138, 219.0), (139, 40.0), (140, 81.0), (141, 42.0), (142, 113.0), (143, 20.0), (144, 164.0), (145, 228.0), (146, 49.0), (147, 62.0), (148, 196.0), (149, 157.0), (150, 139.0), (151, 226.0), (152, 56.0), (153, 49.0), (154, 252.0), (155, 78.0), (156, 40.0), (157, 170.0), (158, 142.0), (159, 73.0), (160, 86.0), (161, 6.0), (162, 256.0), (163, 278.0), (164, 256.0), (165, 19.0), (166, 43.0), (167, 17.0), (168, 269.0), (169, 200.0), (170, 245.0), (171, 259.0), (172, 154.0), (173, 173.0), (174, 86.0), (175, 50.0), (176, 268.0), (177, 293.0), (178, 178.0), (179, 13.0), (180, 257.0), (181, 262.0), (182, 265.0), (183, 253.00000000000003), (184, 51.0), (185, 285.0), (186, 87.0), (187, 241.0), (188, 91.0), (189, 55.0), (190, 149.0), (191, 352.0), (192, 2.0), (193, 235.0), (194, 152.0), (195, 294.0), (196, 106.0), (197, 14.0), (198, 131.0), (199, 221.0), (200, 127.0), (201, 340.0), (202, 369.0), (203, 170.0), (204, 376.0), (205, 69.0), (206, 131.0), (207, 62.0), (208, 96.0), (209, 257.0), (210, 55.0), (211, 98.0), (212, 344.0), (213, 360.0), (214, 305.0), (215, 45.0), (216, 290.0), (217, 150.0), (218, 183.0), (219, 198.0), (220, 184.0), (221, 141.0), (222, 64.0), (223, 355.0), (224, 283.0), (225, 2.0), (226, 55.0), (227, 62.0), (228, 247.0), (229, 117.0), (230, 147.0), (231, 358.0), (232, 194.0), (233, 358.0), (234, 248.0), (235, 331.0), (236, 413.0), (237, 86.0), (238, 213.0), (239, 338.0), (240, 365.0), (241, 387.0), (242, 115.0), (243, 40.0), (244, 153.0), (245, 412.0), (246, 292.0), (247, 245.0), (248, 86.0), (249, 317.0), (250, 125.0), (251, 367.0), (252, 323.0), (253, 62.0), (254, 41.0), (255, 351.0), (256, 123.00000000000001), (257, 68.0), (258, 318.0), (259, 394.0), (260, 427.0), (261, 262.0), (262, 22.0), (263, 283.0), (264, 229.0), (265, 428.0), (266, 344.0), 
(267, 157.0), (268, 459.0), (269, 433.0), (270, 46.0), (271, 360.0), (272, 295.0), (273, 308.0), (274, 367.0), (275, 46.0), (276, 155.0), (277, 54.0), (278, 454.0), (279, 474.0), (280, 424.0), (281, 367.0), (282, 363.0), (283, 406.0), (284, 353.0), (285, 240.0), (286, 3.0), (287, 350.0), (288, 76.0), (289, 144.0), (290, 342.0), (291, 298.0), (292, 242.0), (293, 98.0), (294, 157.0), (295, 154.0), (296, 112.0), (297, 376.0), (298, 189.0), (299, 510.00000000000006), (300, 354.0), (301, 328.0), (302, 160.0), (303, 277.0), (304, 3.0), (305, 522.0), (306, 172.0), (307, 501.99999999999994), (308, 29.0), (309, 92.0), (310, 506.00000000000006), (311, 400.0), (312, 535.0), (313, 64.0), (314, 492.99999999999994), (315, 154.0), (316, 37.0), (317, 385.0), (318, 433.0), (319, 200.0), (320, 455.0), (321, 506.99999999999994), (322, 67.0), (323, 117.0), (324, 352.0), (325, 33.0), (326, 13.0), (327, 358.0), (328, 416.0), (329, 274.0), (330, 292.0), (331, 171.0), (332, 479.0), (333, 16.0), (334, 422.0), (335, 446.0), (336, 382.0), (337, 112.0), (338, 111.0), (339, 210.0), (340, 510.99999999999994), (341, 578.0), (342, 258.0), (343, 264.0), (344, 545.0), (345, 348.0), (346, 72.0), (347, 106.0), (348, 607.0), (349, 606.0), (350, 370.0), (351, 225.0), (352, 213.0), (353, 216.0), (354, 418.0), (355, 508.0), (356, 157.0), (357, 380.0), (358, 503.0), (359, 364.0), (360, 54.0), (361, 193.0), (362, 410.0), (363, 477.0), (364, 18.0), (365, 606.0), (366, 565.0), (367, 643.0), (368, 266.0), (369, 54.0), (370, 80.0), (371, 445.0), (372, 54.0), (373, 441.0), (374, 260.0), (375, 185.0), (376, 200.0), (377, 227.0), (378, 52.0), (379, 403.0), (380, 540.0), (381, 423.0), (382, 194.0), (383, 163.0), (384, 402.0), (385, 3.0), (386, 75.0), (387, 368.0), (388, 29.0), (389, 594.0), (390, 69.0), (391, 450.0), (392, 628.0), (393, 653.0), (394, 44.0), (395, 669.0), (396, 451.0), (397, 218.0), (398, 35.0), (399, 506.00000000000006), ) let x_axis = axis(min: 0, max: 400, step: 40, location: "bottom", title: 
[Size of Input ($N$)])
  let y_axis = axis(min: 0, max: 1000, step: 100, location: "left", title: [Runtime ($T(N)$)])
  let lin_plt = graph_plot(
    plot(data: lin_pts, axes: (x_axis, y_axis)),
    (100%, 25%),
    stroke: (paint: red, thickness: 2pt),
    markings: [],
    caption: [Scaling the `list` lookup.]
  )
  lin_plt
} <list_lookup>

@list_lookup shows the output of one run of the code above. You can see that it looks a lot like a triangle. The *worst case* (top of the triangle) looks a lot like the *linear* complexity class ($Theta(N)$, or an angled line), while the *best case* (bottom of the triangle) looks a lot more like the *constant* complexity class ($Theta(1)$, or a flat line). The runtime is _at least_ constant, and _at most_ linear: We can *bound* the runtime of the function between two complexity classes.

== Big-$O$ and Big-$Omega$

We capture this intuition of bounded runtime by introducing two new concepts: worst-case (upper, or Big-$O$) and best-case (lower, or Big-$Omega$) bounds. To see these in practice, let's take the linear complexity class as an example:

#figure[
  #image("graphics/complexity-theta.svg", width:50%)
]

We write $O(N)$ to mean the set of all mathematical functions that are *no worse than $Theta(N)$*. This includes:

- all mathematical functions in $Theta(N)$ itself
- all mathematical functions in lesser (slower-growing) complexity classes (e.g., $Theta(1)$)
- any mathematical function that never grows faster than linearly (e.g., the runtime of each individual lookup in our Python example above)

The figure below illustrates $O(N)$: the class $Theta(N)$ (the dotted region) and all lesser complexity classes. Note the similarity to @list_lookup.

#figure[
  #image("graphics/complexity-bigo.svg", width:50%)
]

Similarly, we write $Omega(N)$ to mean the set of all mathematical functions that are *no better than $Theta(N)$*. This includes:

- all functions in $Theta(N)$ itself
- all greater (faster-growing) complexity classes (e.g., $Theta(N^2)$), and anything in between.
#figure[
  #image("graphics/complexity-omega.svg", width:50%)
]

To summarize, we write:

- $f(N) in O(g(N))$ to say that $f(N)$ is in $Theta(g(N))$ or a lesser complexity class.
- $f(N) in Omega(g(N))$ to say that $f(N)$ is in $Theta(g(N))$ or a greater complexity class.

== Formalizing Big-$O$

Before we formalize our bounds, let's first figure out what we want out of that formalism. Let's start with the basic premise we outlined above: For a function $f(N)$ to be in the set $O(g(N))$, we want there to be some function in $Theta(g(N))$ that is always at least as big as $f(N)$.

The first problem we run into with this formalism is that we haven't really defined what exactly $Theta(g(N))$ is yet, so we need to pin down something first. Let's start with the same assumption we made earlier: we can scale $g(N)$ by any positive constant value without changing its complexity class. Formally:

$forall c > 0 : c dot g(N) in Theta(g(N))$

That is, for any positive constant $c$ ($forall$ means 'for all'), the product $c dot g(N)$ is in $Theta(g(N))$ (remember that $in$ means 'is in'). This isn't meant to be all-inclusive: There are many more functions in $Theta(g(N))$, but this gives us a pretty good range of functions that, at least intuitively, belong in $g(N)$'s complexity class.

Now we have a basis for formalizing Big-$O$: We can say that $f(N) in O(g(N))$ if there is *some* multiple of $g(N)$ that is always at least as big as $f(N)$. Formally:

$exists c > 0, forall N : f(N) <= c dot g(N)$

That is, there exists ($exists$ means 'there exists') some positive constant $c$, such that for each value of $N$, the value of $f(N)$ is no bigger than the corresponding value of $c dot g(N)$.

Let's look at some examples:

*Example:* $f(N) = 2N$ vs $g(N) = N$

Can we find a $c$ and show that for this $c$, for all values of $N$, the Big-$O$ inequality holds for $f$ and $g$?
- $f(N) <= c dot g(N)$
- $2N <= c dot N$
- $2 <= c$

We start with the basic inequality, substitute in the values of $f(N)$ and $g(N)$, and then divide both sides by $N$. So, the inequality is always true for any value of $c >= 2$.

*Example:* $f(N) = 100N^2$ vs $g(N) = N^2$

Can we find a $c$ and show that for this $c$, for all values of $N$, the Big-$O$ inequality holds for $f$ and $g$?

- $f(N) <= c dot g(N)$
- $100N^2 <= c dot N^2$
- $100 <= c$

We start with the basic inequality, substitute in the values of $f(N)$ and $g(N)$, and then divide both sides by $N^2$. So, the inequality is always true for any value of $c >= 100$.

*Example:* $f(N) = N$ vs $g(N) = N^2$

Can we find a $c$ and show that for this $c$, for all *integer* values of $N$, the Big-$O$ inequality holds for $f$ and $g$?

- $f(N) <= c dot g(N)$
- $N <= c dot N^2$
- $1 <= c dot N$

*Uh-oh!* For $N = 0$, there is no possible value of $c$ that we can plug into that inequality to make it true ($0 dot c$ is never bigger than $1$ for any $c$).

*Attempt 2*: So what went wrong? Well, we mainly care about how $f(N)$ behaves for really big values of $N$. In fact, for the example, for any $N >= 1$ (and $c >= 1$), the inequality is satisfied! It's just that pesky $N = 0$!#footnote[
  Recall that we're only allowing non-negative input sizes (i.e., $N >= 0$), so negative values of $N$ aren't a problem.
].

So, we need to add one more thing to the formalism: the idea that we only care about "big" values of $N$. Of course, that leaves the question of how big is "big"? Now, we could pick specific cutoff values, but any specific cutoff we picked would be entirely arbitrary. So, instead, we just make the cutoff definition part of the proof: When proving that $f(N) in O(g(N))$, we just need to show that *some* cutoff exists, beyond which $f(N) <= c dot g(N)$.
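Before pinning this idea down formally, we can sanity-check the "find a $c$ and a cutoff" strategy empirically. The sketch below is illustrative only: the helper name and the specific choices of $f$, $g$, $c$, and $N_0$ are ours, picked just for this example. It spot-checks the inequality $f(N) <= c dot g(N)$ over a finite range of $N$ values beyond a candidate cutoff.

```python
def holds_beyond_cutoff(f, g, c, n0, n_max=10_000):
    """Spot-check f(N) <= c * g(N) for every integer N in [n0, n_max].

    A True result is evidence (not a proof!) that the chosen c and N0
    witness f(N) in O(g(N)); a False result pinpoints a bad cutoff.
    """
    return all(f(n) <= c * g(n) for n in range(n0, n_max + 1))

f = lambda n: n + 5   # f(N) = N + 5
g = lambda n: n       # g(N) = N

# N + 5 <= 2N only holds once N >= 5, so the cutoff matters:
print(holds_beyond_cutoff(f, g, c=2, n0=5))  # True
print(holds_beyond_cutoff(f, g, c=2, n0=1))  # False (fails for N = 1..4)
```

Of course, a finite check like this can never substitute for a proof: it only samples finitely many values of $N$. The formal definition makes the claim for *all* $N >= N_0$.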
*The formal definition of Big-$O$ is*:

$f(N) in O(g(N)) <=> exists c > 0, N_0 >= 0 : forall N >= N_0 : f(N) <= c dot g(N)$

In this equation, $exists$ means "there exists" and $forall$ means "for all". Teasing apart the above equation:

- $f(N) in O(g(N))$, the thing we want to define, is equivalently defined as ($<=>$)...
- There exists some constant $c$ strictly bigger than 0 ($exists c > 0$).
- There exists some cutoff value for $N$ ($exists N_0 >= 0$)...
- So that for any larger value of $N$ ($N >= N_0$)...
- $f(N)$ is no bigger than $c dot g(N)$.

In other words, saying $f(N) in O(g(N))$ is the same as saying that there is some constant $c$ so that for sufficiently large $N$, $f(N) <= c dot g(N)$.

=== Proving a function has a specific Big-$O$ bound

To show that a mathematical function is *in* $O(g(N))$, we need to find a $c$ and an $N_0$ for which we can prove the Big-$O$ inequality. A generally useful strategy is:

1. Write out the Big-$O$ inequality.
2. "Plug in" the values of $f(N)$ and $g(N)$.
3. "Solve for" $c$, putting it on one side of the inequality, with everything else on the other side.
4. Try a safe default of $N_0 = 1$.
5. Use the fact that $A <= B$ and $B <= C$ imply $A <= C$ (transitivity of inequality) to replace any term involving $N$ with $N_0$.
6. Use the resulting inequality to find a lower bound on $c$.

Continuing the above example of $f(N) = N$ and $g(N) = N^2$, we want to show that there is a constant $c$ so that for sufficiently large $N$:

$f(N) <= c dot g(N)$

We start by "plugging in" values of $f$ and $g$:

$N <= c dot N^2$

We can solve for $c$ by dividing both sides by $N^2$, getting:

$1/N <= c$

From here, we need to find a constant $c$ that is bigger than $1/N$ for all sufficiently large values of $N$. If we can find such a constant, then we can work backwards through the proof to show that $f(N) <= c dot g(N)$ for that constant. Remember that we defined "sufficiently large" values of $N$ as all values of $N$ greater than some constant $N_0$.
Following the guidelines above, let's pick a value of $N_0 = 1$. Observe that the function $1/N$ shrinks as $N$ grows. The greatest value of $1/N$ occurs when $N$ is at its smallest value ($N_0 = 1$). In other words, we can say that:

$1/N <= 1$ for all $N >= 1$

This inequality is exactly the pattern we need! So, since

$1/N <= 1$

... and with $c = 1$, we have

$1/N <= c$

Which in turn means that

$N <= c dot N^2$

And so swapping in $f(N)$ and $g(N)$:

$f(N) <= c dot g(N)$

=== Proving a function does not have a specific Big-$O$ bound

To show that a mathematical function is *not in* $O(g(N))$, we need to prove that there can be *no* $c$ or $N_0$ for which we can prove the Big-$O$ inequality. A generally useful strategy is:

1. Write out the Big-$O$ inequality.
2. "Plug in" the values of $f(N)$ and $g(N)$.
3. "Solve for" $c$, putting it on one side of the inequality, with everything else on the other side.
4. Simplify the expression on the opposite side and show that it is strictly growing. Generally, this means that the right-hand side is in a complexity class that is at least linear (i.e., $Omega(N)$).

Flipping the above example:

- $f(N) <= c dot g(N)$
- $N^2 <= c dot N$
- $N <= c$

$N$ is strictly growing: for bigger values of $N$, it gets bigger. There is no constant that can upper bound the mathematical function $N$.

/////////////////////////////////////////////

== Formalizing Big-$Omega$

Now that we've formalized Big-$O$ (the upper bound), we can formalize Big-$Omega$ (the lower bound) in exactly the same way:

$f(N) in Omega(g(N)) <=> exists c > 0, N_0 >= 0 : forall N >= N_0 : f(N) >= c dot g(N)$

The only difference is the direction of the inequality: To prove that a function is in Big-$Omega$, we need to show that $f(N)$ is bigger than some constant multiple of $g(N)$.

/////////////////////////////////////////////

== Formalizing Big-$Theta$

Although we started with an intuition for Big-$Theta$, we haven't yet formalized it.
To understand why, let's take a look at the following runtime:

$T(N) = 10N + "rand"(10)$

Here $"rand"(10)$ means a randomly generated number between 0 and 10 for each call. If the function were *just* $10N$, we'd be fine in using our intuitive definition of $Theta(N)$ being all multiples of $N$. However, this function still "behaves like" $g(N) = N$... just with a little random noise added in. For big values of $N$ (e.g., $10^10$), the random noise is barely perceptible.

Although we can't say that $T(N)$ is *equal to* some multiple $c dot N$, we can say that it is *close to* that multiple (in fact, it's always between $10N$ and $10N + 10$). In other words, we can bound it from both above and below! Let's try proving this with the tricks we developed for Big-$O$ and Big-$Omega$:

- $T(N) <= c_"upper" dot N$
- $10N + "rand"(10) <= c_"upper" dot N$ (plug in $T(N)$)
- $10 + ("rand"(10))/N <= c_"upper"$ (divide by $N$)

Looking at this formula, we can make a few quick observations. First, by definition $"rand"(10)$ is never bigger than $10$. Second, if $N_0 = 1$, $1/N$ can never be bigger than $1$. In other words, $("rand"(10))/N$ can not be bigger than 10. Let's prove that to ourselves. Taking the default $N_0 = 1$ we get:

- $1 <= N$ (plug in $N_0$)
- $10 <= 10N$ (multiply by $10$)
- $"rand"(10) <= 10 <= 10N$ (transitivity with $"rand"(10) <= 10$)
- $("rand"(10))/N <= 10$ (divide by $N$)
- $10 + ("rand"(10))/N <= 10 + 10$ (add $10$)

So, if we pick $c_"upper" >= 20$, we can show (again, by transitivity):

$10 + ("rand"(10))/N <= c_"upper"$

Which gets us $T(N) <= c_"upper" dot N$ for all $N >= N_0$.

Now let's try proving a lower (Big-$Omega$) bound:

- $T(N) >= c_"lower" dot N$
- $10N + "rand"(10) >= c_"lower" dot N$ (plug in $T(N)$)
- $10 + ("rand"(10))/N >= c_"lower"$ (divide by $N$)
- $10 >= c_"lower"$ (by transitivity, since $10 + ("rand"(10))/N >= 10$)

This inequality holds for any $10 >= c_"lower" > 0$ (recall that $c$ has to be strictly bigger than zero).
So, we've shown that $T(N) in O(N)$ *and* $T(N) in Omega(N)$. The new thing is that we've shown that the upper and lower bounds *are the same*. That is, we've shown that $T(N) in O(g(N))$ and $T(N) in Omega(g(N))$ *for the same mathematical function $g$*.

If we can prove an upper bound and a lower bound for some mathematical function $f(N)$ using the same mathematical function $g(N)$, we say that $f(N)$ and $g(N)$ are in the same complexity class.

Formally, $f(N) in Theta(g(N))$ is defined as $f(N)$ being bounded from *both* above and below by $g(N)$. In other words, $f(N) in Theta(g(N))$ if and only if $f(N) in O(g(N))$ *and* $f(N) in Omega(g(N))$.

== Tight Bounds

In the example above, we said that $"rand"(10) <= 10$. We could have just as easily said that $"rand"(10) <= 100$. The latter inequality is just as true, but somehow less satisfying; yes, the random number will always be less than 100, but we can come up with a "tighter" bound (i.e., $10$).

Similarly, Big-$O$ and Big-$Omega$ are bounds. We can say that $N in O(N^2)$ (i.e., $N$ is no worse than $N^2$). On the other hand, this bound is just as unsatisfying as $"rand"(10) <= 100$: we can do better. If it is not possible to obtain a better Big-$O$ or Big-$Omega$ bound, we say that the bound is *tight*. For example:

- $10N^2 in O(N^2)$ and $10N^2 in Omega(N^2)$ are tight bounds.
- $10N^2 in O(2^N)$ is correct, but *not* a tight bound.
- $10N^2 in Omega(N)$ is correct, but *not* a tight bound.

Note that since we define Big-$Theta$ as the intersection of Big-$O$ and Big-$Omega$, all Big-$Theta$ bounds are, by definition, tight. As a result, we often call Big-$Theta$ bounds "tight bounds".

*Note*: It *is* possible for a Big-$O$ or Big-$Omega$ bound to be tight, without having a Big-$Theta$ bound. You'll see an example of this below in @interpreting_code.

== Which Variable?

We define asymptotic complexity bounds in terms of *some* variable, usually the size of the collection $N$.
However, it's also possible to define bounds in terms of other variables. For example, consider the following function, which computes factorial:

```java
public int factorial(int idx) {
  if(idx <= 0){ return 1; } // 0! = 1
  int result = 1;
  for(int i = 1; i <= idx; i++) {
    result *= i;
  }
  return result;
}
```

The runtime of this loop depends on the input parameter `idx`, performing one math operation for each integer between 1 and `idx`. So, we could give the runtime as $Theta("idx")$.

When the choice of variable is implicit, it's customary to just use $N$, but this is not always the case. For example, when we talk about sequences and lists, the size of the sequence/list is most frequently the variable of interest. However, there might be other parameters of interest:

- If we're searching the list for a specific element, what position do we find the element at?
- If we're looking through a linked list for the element at a specific index, how large is that index?
- If we have two or more lists (e.g., in an Edge List data structure), each list may have a different size.

In these cases, and others like them, it's important to be clear about which variable you're talking about.

=== Related Variables

When using multiple variables, we can often bound one variable in terms of another. Examples include:

- If we're looking through a linked list for the element at a specific index, the index must be somewhere in the range $[0, N)$, where $N$ is the size of the list. As a result, we can always replace $O("index")$ with $O(N)$ and $Omega("index")$ with $Omega(1)$, since `index` is bounded from above by a linear function of $N$ and from below by a constant.
- The number of edges in a graph can not be more than the square of the number of vertices. As a result, we can always replace $O("edges")$ with $O("vertices"^2)$ and $Omega("edges")$ with $Omega(1)$.
*Note*: Even though $O("index")$ in the first example may be a tighter bound than $O(N)$, the $O(N)$ bound is still tight *in terms of $N$*: We can not obtain a tighter bound that is a function only of $N$.

== Summary

We defined three ways of describing runtimes (or any other mathematical function):

- Big-$O$: The worst-case complexity:
  - $T(N) in O(g(N))$ means that the runtime $T(N)$ scales *no worse than* the complexity class of $g(N)$
- Big-$Omega$: The best-case complexity
  - $T(N) in Omega(g(N))$ means that the runtime $T(N)$ scales *no better than* the complexity class of $g(N)$
- Big-$Theta$: The tight complexity
  - $T(N) in Theta(g(N))$ means that the runtime $T(N)$ scales *exactly as* the complexity class of $g(N)$

We'll introduce amortized and expected runtime bounds later on in the book. The bounds in this chapter are given without qualifiers, and so are sometimes called the *unqualified* runtimes.

=== Formal Definitions

For any two functions $f(N)$ and $g(N)$ we say that:

- $f(N) in O(g(N))$ if and only if $exists c > 0, N_0 >= 0 : forall N >= N_0 : f(N) <= c dot g(N)$
- $f(N) in Omega(g(N))$ if and only if $exists c > 0, N_0 >= 0 : forall N >= N_0 : f(N) >= c dot g(N)$
- $f(N) in Theta(g(N))$ if and only if $f(N) in O(g(N))$ and $f(N) in Omega(g(N))$

Note that a simple $Theta(g(N))$ bound may not exist for a given $f(N)$, specifically when the tight Big-$O$ and Big-$Omega$ bounds are different.

=== Interpreting Code<interpreting_code>

In general#footnote[
  All of these are lies. The cost of basic arithmetic is often $O(log N)$, array access runtimes are affected by caching (we'll address that later in the book), and string operations are proportional to the length of the string. However, these are all useful simplifications for now.
], we will assume that most simple operations: basic arithmetic, array accesses, variable access, string operations, and most other things that aren't function calls will all be $Theta(1)$. Other operations are combined as follows...
*Sequences of instructions*

```java
{
  op1;
  op2;
  op3;
  ...
}
```

Sum up the runtimes.

$T(N) = T_("op1")(N) + T_("op2")(N) + T_("op3")(N) + ...$

*Loops*

```java
for(i = min; i < max; i++) { block; }
```

Sum up the runtimes for each iteration. Make sure to consider the effect of the loop variable on the runtime of the inner block.

$T(N) = sum_(i="min")^"max" T_("block")(N, i)$

As a simple shorthand, if (i) the number of iterations is predictable (e.g., if the loop iterates $N$ times) and (ii) the complexity of the loop body is independent of which iteration the loop is on (i.e., $i$ does not appear in the loop body), you can just multiply the complexity of the loop body by the number of iterations.

*Conditionals*

```java
if(condition){ block1; } else { block2; }
```

The total runtime is the cost of either `block1` or `block2`, depending on the outcome of `condition`. Make sure to add the cost of evaluating `condition`.

$T(N) = T_"condition"(N) + cases(
  T_("block1")(N) "if" "condition is true",
  T_("block2")(N) "otherwise"
)$

The use of a cases block is especially important here, since if $T_("block1")(N)$ and $T_("block2")(N)$ belong to different asymptotic complexity classes, the overall block of code belongs to multiple classes (and thus does not have a simple $Theta$ bound).

=== Simple Complexity Classes

We will refer to the following specific complexity classes:

- *Constant*: $Theta (1)$
- *Logarithmic*: $Theta (log N)$
- *Linear*: $Theta (N)$
- *Loglinear*: $Theta (N log N)$
- *Quadratic*: $Theta (N^2)$
- *Cubic*: $Theta (N^3)$
- *Exponential*: $Theta (2^N)$

These complexity classes are listed in increasing order of growth.

=== Dominant Terms

In general, any function that is a sum of simpler functions will be dominated by one of its terms. That is, for a polynomial:

$f(N) = f_(1)(N) + f_(2)(N) + ...
+ f_(k)(N)$

The asymptotic complexity of $f(N)$ (i.e., its Big-$O$ and Big-$Omega$ bounds, and its Big-$Theta$ bound, if it exists) will be the *greatest* complexity of any individual term $f_(i)(N)$#footnote[
  Note that this is only true when $k$ is fixed. If the number of polynomial terms depends on $N$, we need to consider the full summation.
].

*Remember*: If the dominant term in a polynomial belongs to a single simple complexity class, then the entire polynomial belongs to this complexity class, and the Big-$O$, Big-$Omega$, and Big-$Theta$ bounds are all the same.

=== Multiclass Asymptotics

A mathematical function may belong to multiple simple classes, depending on an unpredictable input or the state of a data structure. Generally, multiclass functions arise in one of two situations.

First, the branches of a conditional may have different complexity classes:

$T(N) = cases(
  T_(1)(N) "if" "a thing is true",
  T_(2)(N) "otherwise"
)$

If $T_(1)(N)$ and $T_(2)(N)$ belong to different complexity classes, then $T(N)$ as a whole belongs to *either* class. In this case, we can only bound the runtime $T(N)$. Specifically, if $Theta(T_(1)(N)) > Theta(T_(2)(N))$, then:

- $T(N) in Omega(T_(2)(N))$
- $T(N) in O(T_(1)(N))$.

Second, the number of iterations of a loop may depend on an input that we can only bound between complexity classes. For example, if $"idx" in [1, N]$ (`idx` is somewhere between $1$ and $N$, inclusive), then the following code does not belong to a single complexity class:

```java
for(i = 0; i < idx; i++){ do_a_thing(); }
```

In this case, we can bound the runtime based on `idx`. Assuming `do_a_thing()` is $Theta(1)$, then $T(N) in Theta("idx")$.
However, since we can't bound `idx` in terms of $N$, we can only provide weaker bounds with respect to $N$:

- $T(N) in Omega(1)$
- $T(N) in O(N)$

Remember that if we can not obtain identical, tight upper and lower bounds in terms of a given input variable, there is no simple $Theta$-bound in terms of that variable.

=== Proving Summation<summation_proof>

Earlier, we claimed that the sum of a collection of functions belonged to the greatest complexity class of any of the component functions. To make this statement more precise, let's start by defining the sum ($g$) and the component functions ($f_1 ... f_k$). Although we make this assumption in general, we'll be explicit here that we're going to assume that each $f_i$ is a growth function.

$g(N) = f_(1)(N) + f_(2)(N) + ... + f_(k)(N)$

At least one of the functions has to belong to the greatest complexity class. Let's call this one $f_("max")$. In formal terms, this means that:

$forall i in [1, k]: f_(i)(N) in O(f_("max")(N))$

We want to prove that $g(N) in Theta(f_("max")(N))$. Recall that, to prove this, we need to show (i) that $g(N) in O(f_("max")(N))$, and (ii) that $g(N) in Omega(f_("max")(N))$.

==== Upper Bound

Let's start with showing Big-$O$. We want to show that

$g(N) in O(f_("max"))$

Expanding out both $g(N)$ and the Big-$O$, this is equivalent to showing that there exists a $c$ so that for any sufficiently large $N$:

$f_(1)(N) + f_(2)(N) + ... + f_(k)(N) <= c dot f_("max")(N)$

Recall that, since $f_("max")$ belongs to the greatest complexity class, we have that:

$forall i in [1, k]: f_(i)(N) in O(f_("max")(N))$

Or, expanding out the Big-$O$ (for a sufficiently large $N$):

$forall i in [1, k]: f_(i)(N) <= c_i dot f_("max")(N)$

If we add together both sides of this equation, we get

$f_(1)(N) + f_(2)(N) + ... + f_(k)(N) <= c_1 dot f_("max")(N) + c_2 dot f_("max")(N) + ...
+ c_k dot f_("max")(N)$

Factoring $f_("max")(N)$ out of the right-hand side:

$f_(1)(N) + f_(2)(N) + ... + f_(k)(N) <= (c_1 + c_2 + ... + c_k) dot f_("max")(N)$

And since $c_1 + c_2 + ... + c_k$ is a constant, we can label that constant $c$:

$f_(1)(N) + f_(2)(N) + ... + f_(k)(N) <= c dot f_("max")(N)$

Which is what we wanted to show in the first place, so as the math hippies say, QED.

==== Lower Bound

A bit less intuitively, we want to show that $g(N)$ grows at least as fast as $f_("max")(N)$, or:

$g(N) in Omega(f_("max"))$

Again, we expand out both $g(N)$ and Big-$Omega$, and want to show that (for a sufficiently large $N$), there is a constant $c$ for which we can show:

$f_(1)(N) + f_(2)(N) + ... + f_(k)(N) >= c dot f_("max")(N)$

Remember that we're assuming that each $f_(i)$ is a growth function. Going back to the definition of growth functions, this means that $f_(i)(N) >= 0$ for *any* positive value of $N$. Since each $f_(i)$ must be at least 0, replacing every $f_(i)$ *except* $f_("max")$ with 0 can only make their sum smaller:

$f_(1)(N) + f_(2)(N) + ... + f_(k)(N) >= 0 + ... + f_("max")(N) + ... + 0$

Simplifying the right-hand side:

$f_(1)(N) + f_(2)(N) + ... + f_(k)(N) >= f_("max")(N)$

Multiplying the right-hand side by 1:

$f_(1)(N) + f_(2)(N) + ... + f_(k)(N) >= 1 dot f_("max")(N)$

Which is the equation that we want to show, for $c = 1$.
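The two halves of this proof can also be sanity-checked numerically. The sketch below picks an arbitrary set of component functions (our own choices, purely for illustration) with dominant term $3N^2$, and verifies that the ratio of $g(N)$ to $f_("max")(N)$ stays between the proof's two constants: $c = 1$ from the lower bound, and $c_1 + c_2 + ... + c_k$ from the upper bound.

```python
# Component functions (arbitrary picks for this demonstration).
fs = [
    lambda n: 5 * n,        # Theta(N)
    lambda n: 20,           # Theta(1)
    lambda n: 3 * n * n,    # Theta(N^2)  <- the dominant term
]
f_max = lambda n: n * n     # greatest complexity class: Theta(N^2)

def ratio(n):
    # g(N) / f_max(N), where g(N) = f1(N) + f2(N) + f3(N)
    g = sum(f(n) for f in fs)
    return g / f_max(n)

# With N0 = 1, each f_i(N) <= c_i * f_max(N) for c_1=5, c_2=20, c_3=3,
# so the proof bounds the ratio between c = 1 and c_1+c_2+c_3 = 28.
for n in [10, 100, 1000, 10_000]:
    assert 1 <= ratio(n) <= 28

print(ratio(10_000))  # close to the dominant coefficient, 3
```

As $N$ grows, the ratio settles near the dominant term's coefficient, which is exactly the "biggest term wins" behavior the theorem describes. As before, a finite check is evidence, not a proof; the algebra above is what establishes the claim for all sufficiently large $N$.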
https://github.com/Myriad-Dreamin/shiroa
https://raw.githubusercontent.com/Myriad-Dreamin/shiroa/main/github-pages/docs/format/main.typ
typst
Apache License 2.0
#import "/github-pages/docs/book.typ": book-page #show: book-page.with(title: "Format") = Format In this section you will learn how to: - Structure your book correctly - Format your `book.typ` file - Configure your book using `book.typ` - Customize your theme
https://github.com/mintyfrankie/brilliant-CV
https://raw.githubusercontent.com/mintyfrankie/brilliant-CV/main/README.md
markdown
Apache License 2.0
<h1 align="center">
  <img src='https://github.com/mintyfrankie/mintyfrankie/assets/77310871/64861d2d-971c-47cd-a5e8-5ad8659f2c2b'>
  <br><br>
  Brilliant CV
</h1>
<br>

> If my work helps you drift through the tedious job-seeking journey, don't hesitate to think about [buying me a Coke Zero](https://github.com/sponsors/mintyfrankie)... or a lot of them! 🥤

**Brilliant CV** is a [**Typst**](https://github.com/typst/typst) template for making a **Résumé**, **CV** or **Cover Letter**, inspired by the famous LaTeX CV template [**Awesome-CV**](https://github.com/posquit0/Awesome-CV).

## Features

**1. Separation of style and content**

> Version control your CV entries in the `modules` folder, without touching the styling and typesetting of your CV / Cover Letter _(hey, I am not talking about **Macrohard Word**, you know)_

**2. Quick tweaks on the visuals**

> Add company logos, and put your shiny company name or your coolest title on the first line, globally or per document

**3. Multilingual support**

> Centrally store your multilingual CVs (English + French + German + Chinese + Japanese if you are superb) and change the output language in a blink

***(NEW)* 4. AI Prompt and Keywords Injection**

> Fight against the abuse of ATS systems or GenAI screening by automatically injecting an invisible AI prompt or keyword list.
## Preview | CV | Cover Letter | | :------------------------------------------------------------------------------------------------------: | :----------------------------------------------------------------------------------------------------------------: | | ![CV](https://github.com/mintyfrankie/mintyfrankie/assets/77310871/94f5fb5c-03d0-4912-b6d6-11ee7d27a9a3) | ![Cover Letter](https://github.com/mintyfrankie/brilliant-CV/assets/77310871/b4e74cdd-6b8d-4414-b52f-13cd6ba94315) | | CV (_French, red, no photo_) | Cover Letter (_French, red_) | | :------------------------------------------------------------------------------------------------------: | :----------------------------------------------------------------------------------------------------------------: | | ![CV](https://github.com/mintyfrankie/brilliant-CV/assets/77310871/fed7b66c-728e-4213-aa58-aa26db3b1362) | ![Cover Letter](https://github.com/mintyfrankie/brilliant-CV/assets/77310871/65ca65b0-c0e1-4fe8-b797-8a5e0bea4b1c) | | CV (_Chinese, green_) | Cover Letter (_Chinese, green_) | | :------------------------------------------------------------------------------------------------------: | :----------------------------------------------------------------------------------------------------------------: | | ![CV](https://github.com/mintyfrankie/brilliant-CV/assets/77310871/cb9c16f5-8ad7-4256-92fe-089c108d07f5) | ![Letter](https://github.com/mintyfrankie/brilliant-CV/assets/77310871/a5a97be2-87e2-43fe-b605-f862a0d600d7)| ## Usage > If you are using Typst online editor, you don't have to follow local development steps. ### 1. Install Fonts In order to make Typst render correctly, you will have to install the required fonts [**Roboto**](https://fonts.google.com/specimen/Roboto), [**Source Sans Pro**](https://fonts.google.com/specimen/Source+Sans+3) (or **Source Sans 3**) as well as [Fontawesome 6](https://fontawesome.com/download) in your local system. 
*NOTE: For the online editor, Source Sans Pro is already included; however, you will still need to manually upload the `.otf` or `.ttf` files of **Fontawesome** and **Roboto** to your project, by creating a `fonts` folder and putting all the font files there. See [Issue](https://github.com/typst/webapp-issues/issues/401)*

### 2. Check Documentation

A [documentation](https://mintyfrankie.github.io/brilliant-CV/docs.pdf) on CV functions is provided for reference.

### 3. Bootstrap Template

You have two ways to bootstrap the template, depending on your needs and how tech-savvy you are.

#### 3.1 With Typst CLI

On your local system, working just like `git clone`, bootstrap the template using this command:

```bash
typst init @preview/brilliant-cv:<version>
```

Replace `<version>` with the latest release, or any release after 2.0.0.

#### 3.2 With the `utpm` package manager

[utpm](https://github.com/Thumuss/utpm) is a WIP package manager for Typst. Install it following the official instructions. Then git-clone this repository on your local system and, within the workspace, run `utpm workspace link --force`. You will have to take care of templating by yourself, though.

### 4. Compile Files

Adapt the `metadata.toml` to suit your needs, then run `typst c cv.typ` to get your first CV!

### 5. Beyond

It is recommended to:

1. Use `git` to manage your project, as it helps trace your changes and version control your CV.
2. Use `typstyle` and `pre-commit` to help you format your CV.
3. Use `typos` to check for typos in your CV if your main locale is English.
4. (Advanced) Use `LTeX` in your favorite code editor to check grammar and get language suggestions.

## How to upgrade version

For the time being, an upgrade can be achieved by manually finding and replacing the import statements in batch in your favorite IDE. For example:

```typst
#import "@preview/brilliant-cv:2.0.0" -> #import "@preview/brilliant-cv:2.0.3"
```

**Make sure you read the release notes to catch any breaking changes.
We expect there will still be some, as Typst has not yet reached a stable release either.**

## Migration from `v1`

> The `v1` version is now deprecated, in order to comply with the Typst Packages standard. However, if you want to continue developing on the older version, please refer to the `v1-legacy` branch.

For an existing CV project using the `v1` version of the template, a migration is needed, which involves replacing some files and some content in certain files.

1. Delete the `brilliant-CV` folder and `.gitmodules`. (Package management will now be handled directly by Typst.)
2. Migrate all the config in `metadata.typ` by creating a new `metadata.toml`. Follow the example toml file in the repo; it is rather straightforward to migrate.
3. For `cv.typ` and `letter.typ`, copy the new files from the repo, and adapt the modules you have in your project.
4. For the module files in the `/modules_*` folders:
   1. Delete the old import `#import "../brilliant-CV/template.typ": *`, and replace it with the import statements in the new template files.
   2. Due to the Typst path handling mechanism, one cannot directly pass a path string to some functions anymore. This concerns, for example, the `logo` argument in `cvEntry`, as well as `cvPublication`. Some parameter names were changed, but most importantly, **you should pass a function instead of a string (i.e. `image("logo.png")` instead of `"logo.png"`).** Refer to the new template files for reference.
5. You might need to install `Roboto` and `Source Sans Pro` on your local system now, as the new Typst package standard discourages including these large files.
6. Run `typst c cv.typ` without passing the `font-path` flag. All should be good now, congrats!
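As a rough illustration of the string-to-function change in step 4.2 (the surrounding `cvEntry` arguments here are made up for the example; your module files will differ):

```typst
// v1 (deprecated): the logo was passed as a path string
#cvEntry(
  title: [Software Engineer],
  logo: "logo.png",
)

// v2: pass an image function instead of a string
#cvEntry(
  title: [Software Engineer],
  logo: image("logo.png"),
)
```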
Feel free to raise an issue for more assistance should you encounter a problem that you cannot solve on your own :) ## Credit - [**Typst**](https://github.com/typst/typst) is a newborn, open source and simple typesetting engine that offers a better scripting experience than [**LaTeX**](https://www.latex-project.org/). - [**Awesome-CV**](https://github.com/posquit0/Awesome-CV) is the original LaTeX CV template from which this project is heavily inspired. Thanks [posquit0](https://github.com/posquit0) for your excellent work! - [**Font Awesome**](https://fontawesome.com/) is a comprehensive icon library and toolkit used widely in web projects for its vast array of icons and ease of integration. - [**tidy**](https://github.com/Mc-Zen/tidy) is a package that generates documentation directly in Typst for your Typst modules. Keep it tidy!
https://github.com/Error-418-SWE/Documenti
https://raw.githubusercontent.com/Error-418-SWE/Documenti/src/3%20-%20PB/Documentazione%20esterna/Manuale%20Utente/Manuale%20Utente.typ
typst
#import "/template_modern.typ": *

#show: project.with(
  title: "User Manual",
  authors: (
    "<NAME>",
    "<NAME>",
  ),
  showLog: true,
  showImagesIndex: true,
  isExternalUse: true,
);

= Introduction

== Purpose of the document

This document illustrates the features of the product named _WMS3: Warehouse Management 3D_ and provides detailed instructions for its correct use. By reading this document, the user will become familiar with the minimum requirements needed to run the application and with the best practices for using it effectively.

== Approach to the document

This document is written incrementally, so as to keep the information it contains consistent with ongoing development and with the evolving needs of the project.

== Purpose of the product

This document covers the program named _WMS3: Warehouse Management 3D_, whose goal is the creation of a three-dimensional warehouse management system. The product offers the following main features:
- creation of a warehouse and its components;
- three-dimensional visualization of the warehouse, with the ability to move the view;
- display of information about the goods present in the warehouse;
- loading of goods data from a SQL database;
- issuing of requests to move goods within the warehouse;
- filtering and searching of goods, with a graphical representation of the results;
- import of floor plans in SVG format.

== Glossary

To make this document easier to understand, a glossary explaining the meaning of domain-specific terms is provided at the end (@glossario).
Such terms are highlighted in the text by appending a subscript "_G_":

#align(center, {
  text("Glossary term")
  h(0.03em)
  text(
    fill: luma(100),
    sub(emph("G"))
  )
  h(0.02em)
})

== References <riferimenti>

=== References to internal documentation <riferimenti-interni>

- #st_v document: \ _#link("https://github.com/Error-418-SWE/Documenti/blob/main/3%20-%20PB/Documentazione%20esterna/Specifica%20Tecnica_v" + st_vo + ".pdf")_ #lastVisitedOn(18, 03, 2024)

=== Normative references <riferimenti-normativi>

- "Warehouse Management 3D" project brief (C5) by _Sanmarco Informatica S.p.A._: \ _#link("https://www.math.unipd.it/~tullio/IS-1/2023/Progetto/C5.pdf")_ #lastVisitedOn(13, 02, 2024)

=== Informative references <riferimenti-informativi>

#pagebreak()

= Requirements

The minimum browser versions required to run the application are listed below. For system and hardware requirements, refer to the #st_v document.

#figure(
  table(
    columns: 2,
    [*Browser*], [*Version*],
    [Google Chrome],[$>=$ 89],
    [Microsoft Edge],[$>=$ 89],
    [Mozilla Firefox],[$>=$ 67],
    [Apple Safari],[$>=$ 15],
    [Opera Browser],[$>=$ 76],
    [Google Chrome for Android],[$>=$ 89],
    [Apple Safari for iOS],[$>=$ 17.1],
    [Samsung Internet],[$>=$ 23],
  ),
  caption: "Supported browsers"
)

= Usage instructions

== Opening the program

To open the web page used to work with WMS3, follow these steps:
- make sure the program is running. If needed, the procedure for running the software is described in the #st_v document;
- open a web browser and navigate to "localhost:3000".

== Startup and environment configuration

On startup, the software looks as follows:

#figure(
  image("./imgs/avvio.png", width: 60%),
  caption: [
    Initial screen
  ],
) <avvio>

Here two possible initial configurations are offered: *Rectangular floor plan* and *Custom floor plan*.
The differences between the two working modes are described in their dedicated chapters. A working mode can be chosen by selecting it in the panel shown in @avvio and pressing the *Next* button in the bottom-right corner.

=== Rectangular floor plan initialization

#figure(
  image("./imgs/planimetria_rettangolare.png", width: 60%),
  caption: [
    Defining the parameters of the rectangular floor
  ],
) <piano_rettangolare>

After selecting the "Rectangular floor plan" option in the screen shown in @avvio, the screen shown in @piano_rettangolare is displayed, where it is possible to define the length and depth of the warehouse to be created. To complete the floor configuration, both parameters must be greater than 0; otherwise, the floor cannot be displayed and an error message is shown. In addition, by selecting the "Import products from database" option, the list of products present in the database is loaded into the dedicated window once the environment configuration is completed. Press the *Submit* button to proceed to the 3D environment view, or the *Back* button to cancel the configuration and return to the menu shown in @avvio.

=== Custom floor plan initialization

#figure(
  image("./imgs/planimetria_personalizzata.png", width: 50%),
  caption: [
    Defining the parameters of the custom floor
  ],
) <piano_personalizzato>

After selecting the "Custom floor plan" option in the screen shown in @avvio, the screen shown in @piano_personalizzato is displayed, where it is possible to upload the SVG file that will be drawn on the floor, and to enter the length of the warehouse's longest side so that the floor plan is scaled correctly. The system returns an error and prevents the floor from being displayed if:
- the SVG file is not entered correctly;
- the SVG file is not valid;
- the value associated with the longest side is less than or equal to 0.
The screen also contains two checkboxes for importing data from the database:
- "Import shelves from database": imports the shelves present in the database, which will then be displayed inside the 3D environment;
- "Import products from database": imports the products present in the database and, if the previous option is also selected, populates the shelves with their respective products.

Press the *Submit* button to proceed to the 3D environment view, or the *Back* button to cancel the configuration and return to the menu shown in @avvio.

=== Completing the environment configuration

#figure(
  grid(
    columns: (auto, auto),
    rows: (auto, auto),
    [ #image("./imgs/piano_rettangolare.png", width: 90%)],
    [ #image("./imgs/piano_personalizzato.png", width: 90%)],
  ),caption: [
    Correct configuration of the rectangular floor (left) and of the custom floor (right)],
) <fine_configurazione_iniziale>

Once the environment has been correctly configured, it is possible to start working with the 3D floor, which appears as shown in @fine_configurazione_iniziale (in this case both floors shown are empty).

== Moving around the three-dimensional environment

The system supports four different types of camera movement within the environment: rotation of the floor, zoom-in and zoom-out, moving the camera along its two axes (panning), and movement with the arrow keys.
=== Rotating the floor

#figure(
  grid(
    columns: 3,
    rows: (auto, auto),
    [ #image("./imgs/rotazione_1.png", width: 80%)],
    [ #image("./imgs/rotazione_2.png", width: 75%)],
    [ #image("./imgs/mouse_tasto_sinistro.png", width: 50%)],
  ),caption: [Rotation of the floor relative to the camera],
) <camera_rotazione>

By holding the left mouse button and moving the mouse to the right, the floor rotates counterclockwise relative to the camera; moving it to the left rotates the floor clockwise, as shown in @camera_rotazione. In the same way, moving the mouse downward changes the tilt angle of the floor.

=== Zoom-in, zoom-out

#figure(
  grid(
    columns: 3,
    rows: (auto, auto),
    [ #image("./imgs/zoom-in.png", width: 80%)],
    [ #image("./imgs/zoom-out.png", width: 75%)],
    [ #image("./imgs/mouse_rotella.png", width: 30%)],
  ),caption: [Zoom-in and zoom-out],
) <camera_zoom>

Scrolling the mouse wheel forward zooms in on the desired object, while scrolling it in the opposite direction zooms out, as shown in @camera_zoom.

=== Panning

#figure(
  grid(
    columns: 3,
    rows: (auto, auto),
    [ #image("./imgs/panning_1.png", width: 70%)],
    [ #image("./imgs/panning_2.png", width: 70%)],
    [ #image("./imgs/mouse_tasto_destro.png", width: 50%)],
  ),caption: [Camera panning],
) <camera_panning>

By holding the right mouse button and moving the mouse in one of the four directions (up, down, right, left), the camera pans, i.e. moves along its two axes relative to the floor, as shown in @camera_panning.
=== Movement with arrow keys <movimento_frecce_direzionali>

#figure(
  grid(
    columns: 3,
    rows: (auto, auto),
    [ #image("./imgs/keyboard_wasd.jpg", width: 45%)],
    [ #image("./imgs/keyboard_arrow.png", width: 40%)],
    [ #image("./imgs/keyboard_shift.jpg", width: 40%)],
  ),caption: [Movement with the arrow keys],
) <camera_direzionali>

The last way to move within the environment is using the arrow keys (or, alternatively, the W, A, S, D keys), which move the camera in the direction indicated by the arrow. Holding the Shift key increases the movement speed.

== Viewing zones

=== Viewing the zone list <visualizzazione_lista_zone>

On the left of the screen is the panel dedicated to viewing the lists of zones, products, orders, and settings. Selecting the first icon from it (@icona_lista_zone (left)), which corresponds to the "Zones" entry, opens a further panel showing the list of zones already present in the environment (@icona_lista_zone (right)).

#figure(
  grid(
    columns: 2,
    rows: (auto, auto),
    [ #image("./imgs/pulsante_zone.png", width: 60%)],
    [ #image("./imgs/lista_zone.png", width: 55%)],
  ),caption: ["Zones" icon (left) and panel containing the zone list (right)],
) <icona_lista_zone>

Each row of this list corresponds to a zone, identified by its `ID` parameter. It is followed by the icons for inspecting the zone (@ispezione_zona), represented by an eye, and for deleting it (@eliminazione_zona), represented by a trash bin. When a new zone is created (@creazione_zona), it is added to the zone list. Similarly, if a zone is deleted from the environment, it is removed from the list.

=== Zone inspection <ispezione_zona>

After the working environment has been created, it is possible to inspect the zones it contains and view their details.
This operation can be performed by interacting with:
- the button with the eye icon (@immagini_pulsanti_ispezione_zona (left)) present in the zone list (@visualizzazione_lista_zone), in the row corresponding to the zone to inspect;
- the red cube in the bottom-left corner of the zone, visible when the mouse cursor hovers over that zone (@immagini_pulsanti_ispezione_zona (right)). To perform the inspection, this cube must be double-clicked with the left mouse button.

#figure(
  grid(
    columns: 2,
    rows: (auto, auto),
    [ #image("./imgs/pulsante_occhio_zona.png", width: 80%)],
    [ #image("./imgs/cubo_zona.png", width: 100%)],
  ),caption: ["Eye" button for zone inspection (left) and cube used to interact with a zone (right)],
)<immagini_pulsanti_ispezione_zona>

\
Performing at least one of the actions listed above displays, on the right of the screen, the panel with the information of the zone of interest (@immagine_pannello_ispezione_zona).

#figure(
  image("./imgs/pannello_ispezione_zona.png", width: 30%),
  caption: [
    Zone inspection panel
  ],
) <immagine_pannello_ispezione_zona>

\
It displays:
- *ID*: an integer representing the zone's unique identifier, shown as the panel title;
- *Direction*: can be `NS` (North-South) or `EW` (East-West), and represents the orientation of the zone with respect to the floor;
- *Dimensions*: three fields, respectively:
  - *Length*: a real number defining the length of the zone;
  - *Width*: a real number defining the width of the zone;
  - *Height*: a real number defining the height of the zone.
- *Bin list*: a table listing, for every bin in the zone:
  - the bin's unique alphanumeric identifier;
  - the contents of the bin ("Free" if empty);
  - a button for inspecting the bin (@ispezione_bin), if it is not empty.

\
The lower part of the panel contains the following buttons:
- *Locate*: when pressed (@immagine_pulsante_localizza_zona), it automatically repositions the view on the zone, making it easier to see and locate it;
- *Edit*: when pressed, it allows the zone to be edited (@modifica_zona);
- *Delete*: when pressed, it allows the zone to be deleted (@eliminazione_zona).

#figure(
  image("./imgs/pulsante_localizza_zona.png", width: 40%),
  caption: [
    Locate zone button
  ],
) <immagine_pulsante_localizza_zona>

== Bin inspection <ispezione_bin>

After creating a zone, its bins can be inspected individually to view their specific details. This operation can be performed by interacting with:
- the bin of interest, by double-clicking it with the left mouse button. The bin is then colored red (@immagini_opzioni_selezione_bin left);
- the bin inspection button (@immagini_opzioni_selezione_bin right) present in the bin list of a specific zone (@ispezione_zona).

#figure(
  grid(
    columns: 2,
    rows: (auto, auto),
    [ #image("./imgs/zona_bin_selezionato.png", width: 80%)],
    [ #image("./imgs/pulsante_ispeziona_bin.png", width: 80%)],
  ),caption: [Selecting a bin (left) and bin inspection button (right)],
) <immagini_opzioni_selezione_bin>

\
Performing at least one of the actions listed above displays, on the right of the screen, the panel with the information of the bin of interest (@immagini_pannello_bin). It displays:
- *ID*: the bin's unique alphanumeric identifier, also shown as the panel title.
It consists of three values:
  - the ID of the zone that contains it;
  - a letter of the English alphabet identifying the column that contains it;
  - the number of the bin within the zone that contains it.
- *Dimensions*: three fields, respectively:
  - *Length*: a real number defining the length of the bin;
  - *Width*: a real number defining the width of the bin;
  - *Height*: a real number defining the height of the bin.
- *Contained product area*: if the bin contains a product, its information is displayed (@ispezione_prodotto); otherwise, it is possible to select an unplaced product to insert into the bin.

#figure(
  grid(
    columns: 2,
    rows: (auto, auto),
    [ #image("./imgs/pannello_bin_pieno.png", width: 80%)],
    [ #image("./imgs/pannello_bin_vuoto.png", width: 80%)],
  ),caption: [Inspection panel for a full bin (left) and for an empty bin (right)],
) <immagini_pannello_bin>

\
To insert a product into an empty bin through its inspection panel, simply press the "Select" button and search for the desired product in the related panel on the left by its `ID` or name. To confirm the choice, press the "Confirm" button (@immagine_pannello_inserimento_prodotto). The bin inspection panel is then updated with the new information.

#figure(
  image("./imgs/pannello_inserimento_prodotto.png", width: 80%),
  caption: [
    Panel for searching and inserting a product into the bin
  ],
) <immagine_pannello_inserimento_prodotto>

== Viewing products

=== Viewing the product lists <visualizzazione_lista_prodotti>

On the left of the screen is the panel dedicated to viewing the lists of zones, products, orders, and settings.
Selecting the second icon from it (@icona_lista_prodotti left), which corresponds to the "Products" entry, opens a further panel showing the lists of products read from the database (@icona_lista_prodotti right).

#figure(
  grid(
    columns: 2,
    rows: (auto, auto),
    [ #image("./imgs/bottone_lista_prodotti.png", width: 50%)],
    [ #image("./imgs/lista_prodotti.png", width: 60%)],
  ),caption: ["Products" icon (left), panel containing the product lists (right)],
) <icona_lista_prodotti>

\
There are two product lists:
- *Placed*: products that are contained in a bin and displayed in the environment;
- *Unplaced*: products that are not contained in any bin and are not displayed in the environment.

Each row of these lists corresponds to a product, identified by its name, and shows its `ID` and the categories it belongs to. On the right of the row is the icon for inspecting the product (@ispezione_prodotto), represented by an eye.

=== Product inspection <ispezione_prodotto>

After the working environment has been created, the products read from the database can be inspected to view their details. This operation can be performed by interacting with:
- the button with the eye icon (@immagini_ispezione_prodotto (left)) present in the product list (@visualizzazione_lista_prodotti), in the row corresponding to the product to inspect;
- the inspection panel of the bin containing the product of interest (@ispezione_bin).

Performing at least one of the actions listed above displays, on the right of the screen, the panel with the information of the product of interest (@immagini_ispezione_prodotto (right)).
#figure(
  grid(
    columns: 2,
    rows: (auto, auto),
    [ #image("./imgs/pulsante_ispezione_prodotto.png", width: 70%)],
    [ #image("./imgs/pannello_ispezione_prodotto.png", width: 70%)],
  ),caption: ["Eye" button for product inspection (left) and product inspection panel (right)],
)<immagini_ispezione_prodotto>

\
It displays:
- *Name*: the product name, shown as the panel title;
- *ID*: the product's unique identifier;
- *Categories*: the list of categories the product belongs to;
- *Dimensions*: three fields, respectively:
  - *Length*: a real number defining the length of the product;
  - *Width*: a real number defining the width of the product;
  - *Height*: a real number defining the height of the product.
- *Weight*: a real number defining the weight of the product.

== Product movement orders

=== Viewing the movement order list

On the left of the screen is the panel dedicated to viewing the lists of zones, products, orders, and settings. Selecting the third icon from it (@icona_lista_ordini (left)), which corresponds to the "Movement orders" entry, opens a further panel showing the list of movement orders issued in the current session (@icona_lista_ordini (right)). For each order, the name of the moved product and the `ID`s of the source and destination bins are shown.
#figure(
  grid(
    columns: 2,
    rows: (auto, auto),
    [ #image("./imgs/bottone_lista_ordini.png", width: 45%)],
    [ #image("./imgs/lista_ordini.png", width: 90%)],
  ),caption: ["Movement orders" icon (left), panel containing the order list (right)],
) <icona_lista_ordini>

=== Requesting a product move <richiesta_spostamento_prodotto>

Moving a product between two bins is done via _drag and drop_:
- by placing the mouse pointer on the bin containing the product to be moved and pressing the left mouse button, the product is "picked up";
- by keeping the same button pressed, the product can be moved to the position of the destination bin;
- on releasing the button, a notification appears in the bottom-right part of the screen indicating whether or not the movement order was created successfully.

If the movement order was completed successfully, the source and destination bins are highlighted in yellow and green respectively (@immagine_spostamento_prodotti).

#figure(
  image("./imgs/spostamento_drag_and_drop.png", width: 40%),
  caption: [
    Visualization of product movement within the zone
  ],
) <immagine_spostamento_prodotti>

\
Unplaced products, which can be viewed in the dedicated section of the *Products* menu (@visualizzazione_lista_prodotti), can be placed through the inspection panel of the destination bin (@ispezione_bin), provided it is empty. This operation does not generate a movement request.

== Searching zones <ricerca_zone>

At the top of the panel showing the zone list (@icona_lista_zone (right)) is the area for searching zones by `ID`. After entering the desired parameter in the search bar, the area below shows the results for the zones whose `ID` contains the entered parameter.
== Searching products <ricerca_prodotti>

At the top of the panel showing the product lists (@icona_lista_prodotti (right)) is the product search area. The available search criteria are:
- the product name or a substring of it: the "Name" entry to the left of the search bar must be selected;
- the product `ID` or a substring of it: the "ID" entry to the left of the search bar must be selected;
- the product category: after pressing "Product category", a list appears from which the desired category can be selected.

The search results are shown in the area below, inside the appropriate list (@visualizzazione_lista_prodotti).

== Creating a zone <creazione_zona>

After the working environment has been created, the zones containing the bins can be created in a customized way. In the upper-right part of the panel showing the zone list (@visualizzazione_lista_zone) is a black button with the label "+ New" (@nuova_zona (left)).

#figure(
  grid(
    columns: 2,
    rows: (auto, auto),
    [ #image("./imgs/pulsante_nuova_zona.png", width: 80%)],
    [ #image("./imgs/pannello_nuova_zona.png", width: 50%)],
  ),caption: [Button for creating a new zone (left) and panel for creating a new zone (right)],
) <nuova_zona>

Pressing it opens, on the right of the screen, the "New zone" panel (@nuova_zona (right)), where all the data needed to create the custom zone can be entered.
The data needed for creation are:
- *ID*: an integer representing the zone's unique identifier;
- *Direction*: can be `North-South` or `East-West`, and represents the orientation of the zone with respect to the floor;
- *Dimensions*:
  - *Length*: a real number defining the length of the zone;
  - *Width*: a real number defining the width of the zone. It can be set by the user only if the "Divide into equal parts" option is selected; otherwise the same field automatically shows the width computed as the sum of the widths of the individual columns;
  - *Height*: a real number defining the height of the zone. It is shown automatically as the sum of the heights of the individual levels.
- *Columns*: a radio button selects how the columns are configured. The options are:
  - *equal columns*: with the "Divide into equal parts" option, all columns of the zone are declared to have the same width, and the *number of columns* is defined as an integer. The width of each individual column is then equal to the declared zone width divided by the number of columns;
  - *custom columns*: with the "Custom columns" option, the *column widths* can be specified individually in a dedicated form, separating each value (a real number) with spaces. For example, entering: #align(center, `2 1 3 1.5`) declares that the zone has four columns with widths 2, 1, 3, and 1.5 respectively.
- *adding levels*: to the right of the current number of configured levels is a white button labeled "Add" which, when pressed, adds a section representing a new level of the zone in the area below. The *level height* can then be defined as a real number.
After entering the data describing the new custom zone, the "Crea zona" button (@pulsante_crea_zona (left)) can be pressed to generate the corresponding 3D element in the working environment. The element is automatically placed at the coordinates (0,0) of the floor plan and can then be repositioned.
#figure(
  grid(
    columns: 2,
    rows: (auto, auto),
    [ #image("./imgs/pulsante_crea_zona.png", width: 74%)],
    [ #image("./imgs/creazione_zone_3.png", width: 80%)],
  ),caption: [Zone creation button (left) and newly created zone in the working environment (right)],
)
<pulsante_crea_zona>
== Moving a zone in the 3D environment <collocamento_zona>
Once the working environment has been created, a zone placed in it can be moved.
To perform this operation, interact with the red cube located at the bottom-left corner of the zone, which becomes visible when the mouse cursor hovers over the zone (@immagini_pulsanti_ispezione_zona (right)).
To move the zone, press the left mouse button on the cube described above and, while holding the button down, move the mouse to reposition the zone in the working environment.
To make the moving phase easier, the view can be moved as described in the "Movement with arrow keys" section (@movimento_frecce_direzionali).
During this phase the base of the zone is shown in green when it does not collide with other elements of the working environment, and in red otherwise.
#figure(
  grid(
    columns: 2,
    rows: (auto, auto),
    [ #image("./imgs/spostamento_zona_verde.png", width: 80%)],
    [ #image("./imgs/spostamento_zona_rosso.png", width: 80%)],
  ),caption: [Moving a zone without collision (left) and moving a zone with collision (right)],
)
<immagini_spostamento_zona> \
To confirm the desired position of the zone being moved, simply release the left mouse button. If the zone collides with other elements of the working environment, it is placed at the last valid location it reached during the move.
=== Grid
To make positioning a zone easier during a move, a grid can be used through the dedicated panel at the bottom right of the screen (@immagine_grid).
#figure(
  image("./imgs/grid.png", width: 50%),
  caption: [ Grid step selection panel ],
) <immagine_grid> \
If a value other than zero is selected there, a grid with a step equal to the selected value is displayed on the floor plan of the working environment. While the grid is active, a zone can only be moved to positions coinciding with the intersections of the grid, so its position changes in multiples of the selected step.
== Zone editing <modifica_zona>
Once the working environment has been created, the placed zones can be edited by changing their dimensional and orientation parameters, and the desired columns and levels can be modified or added.
While editing a zone, one or more existing columns can be removed only if they contain exclusively bins without products and if there are no columns with an index higher than that of the set of columns to be removed. The same logic applies to the removal of levels.
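The column/level removal rule just stated can be sketched in Typst (the `can-remove` helper is hypothetical, not application code; it only illustrates the "trailing suffix of empty columns" constraint):

```typst
// Hypothetical sketch of the removal rule: a set of columns may be removed
// only if it is a trailing suffix and none of the removed columns holds
// products.
#let can-remove(has-products, from) = (
  from < has-products.len()
  and has-products.slice(from).all(p => not p)
)

// Five columns, products only in the first three:
#let cols = (true, true, true, false, false)
#can-remove(cols, 3) // removing from the 4th column onward: true
#can-remove(cols, 2) // removing from the 3rd column onward: false
```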
For example, while editing a zone made of three levels (with products only in the second) and five columns (with products only in the first three), it is possible to remove the fourth column (if it contains no products), or both the fourth and the fifth (provided both contain no products), but not the earlier ones. Similarly, the third level can be removed (if it contains no products) but not the previous ones.
At the bottom of the panel showing the information of a zone (@ispezione_zona) there is a button labeled "Modifica" (@pulsante_modifica_zona).
#figure(
  image("./imgs/pulsante_modifica_zona.png", width: 40%),
  caption: [ Button for editing a zone ],
) <pulsante_modifica_zona>
Pressing it allows the user to edit some parameters of the zone in question. These are the same as those requested during zone creation (@creazione_zona) and follow the same constraints, except for the `ID` parameter, which cannot be changed.
After entering the data describing the edited zone, the "Salva le modifiche alla zona" button (@pulsante_salvataggio_modifica_zona) can be pressed to update the corresponding 3D element in the working environment as requested.
#figure(
  image("./imgs/pulsante_salvataggio_modifiche_zona.png", width: 40%),
  caption: [ Button for saving the edits to a zone ],
) <pulsante_salvataggio_modifica_zona>
== Zone deletion <eliminazione_zona>
Once the working environment has been created, a zone placed in it can be deleted.
To perform this operation, interact with either:
- the red button labeled "Elimina" at the bottom of the panel showing the information of a zone (@immagini_pulsanti_eliminazione_zona (left));
- the button with the trash-can icon (@immagini_pulsanti_eliminazione_zona (right)) in the zone list (@visualizzazione_lista_zone), on the row corresponding to the zone to be deleted.
#figure(
  grid(
    columns: 2,
    rows: (auto, auto),
    [ #image("./imgs/pulsante_eliminazione_zona.png", width: 60%)],
    [ #image("./imgs/pulsante_cestino_zone.png", width: 90%)],
  ),caption: ["Elimina" button for zone deletion (left) and trash-can buttons for zone deletion (right)],
)<immagini_pulsanti_eliminazione_zona>
Pressing one of the buttons listed above opens a confirmation panel, since the operation is irreversible.
#figure(
  image("./imgs/conferma_eliminazione_zona.png", width: 50%),
  caption: [Zone deletion confirmation panel ],
) <immagine_pannello_conferma_eliminazione_zona>
If the "Elimina" button in this panel is pressed, the corresponding zone (and consequently all the bins it contains) is removed from the working environment and from the zone list.
From the moment a zone containing products is deleted, those products appear in the list of unplaced products (@visualizzazione_lista_prodotti), awaiting a new placement.
== Settings
#figure(
  grid(
    columns: 2,
    rows: (auto, auto),
    [ #image("./imgs/impostazioni.png", width: 50%)],
    [ #image("./imgs/ridimensionamento.png", width: 90%)],
  ),caption: [3D environment settings],
) <impostazioni>
Pressing the *Settings* button at the bottom left of the screen opens the panel shown in @impostazioni (left). At the top is *Informazioni*, containing various information about the product.
In the section below is Planimetria, which reports the dimensional values of the floor plan; these can be edited by clicking on them and entering the new values from the keyboard. Values that would shrink the floor plan are considered invalid, and in that case resizing is not allowed.
Once the values have been edited and the *Salva* button pressed, the panel in @impostazioni (right) is shown: it displays the current floor plan in white and the requested extension as a dashed outline. Then press *Conferma* to apply the changes.
Note that, when a custom floor plan is resized, the resizing does not affect the SVG, which keeps the dimensions defined during initialization; instead, the white surface on which it is drawn is enlarged.
In the last section at the bottom, *Demo*, there are two buttons:
- *Risincronizza*: restores the floor plan to its initial state, undoing all the edits and moves performed;
- *Reimposta*: deletes all the work done on the floor plan and returns to the screen in @avvio.
#pagebreak()
= Glossary of terms <glossario>
\
// generated by the template
https://github.com/typst/packages
https://raw.githubusercontent.com/typst/packages/main/packages/preview/rivet/0.1.0/gallery/example3.typ
typst
Apache License 2.0
#import "../src/lib.typ": schema, config #let example = schema.load("/gallery/example1.yaml") //#schema.render(example) = Chapter 1 #lorem(50) = Chapter 2 #lorem(50) == Section 2.1 #lorem(20) #figure( schema.render(example, config: config.config(all-bit-i: false)), caption: "Test schema" ) #lorem(20) = Chapter 3
https://github.com/giZoes/justsit-thesis-typst-template
https://raw.githubusercontent.com/giZoes/justsit-thesis-typst-template/main/resources/utils/custom-numbering.typ
typst
MIT License
// A simple custom numbering function
// Usage is simple as well: the formats of first-, second- and third-level headings can be set
// individually, with a default format for the remaining levels
#let custom-numbering(base: 1, depth: 5, first-level: auto, second-level: auto, third-level: auto, format, ..args) = {
  if (args.pos().len() > depth) {
    return
  }
  if (first-level != auto and args.pos().len() == 1) {
    if (first-level != "") {
      numbering(first-level, ..args)
    }
    return
  }
  if (second-level != auto and args.pos().len() == 2) {
    if (second-level != "") {
      numbering(second-level, ..args)
    }
    return
  }
  if (third-level != auto and args.pos().len() == 3) {
    if (third-level != "") {
      numbering(third-level, ..args)
    }
    return
  }
  // default
  if (args.pos().len() >= base) {
    numbering(format, ..(args.pos().slice(base - 1)))
    return
  }
}
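A minimal usage sketch of `custom-numbering` (the import path follows this file's location in the repo; the chosen numbering patterns are illustrative assumptions, not part of the template):

```typst
#import "resources/utils/custom-numbering.typ": custom-numbering

// Top-level headings numbered "I.", "II.", ...; deeper levels fall back to
// the default "1.1" pattern. Passing "" for a level hides its numbering.
#set heading(numbering: custom-numbering.with(
  first-level: "I.",
  "1.1",
))

= Introduction
== Background
```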
https://github.com/typst/packages
https://raw.githubusercontent.com/typst/packages/main/packages/preview/bytefield/0.0.4/lib/gen.typ
typst
Apache License 2.0
#import "types.typ": * #import "utils.typ": * #import "asserts.typ": * #import "states.typ": * #import "@preview/tablex:0.0.8": tablex, cellx, gridx /// generate metadata which is needed later on #let generate_meta(fields, args) = { // collect metadata into an dictionary // let bh = fields.find(f => is-header-field(f)) // let msb = if (bh == none) {right} else { bh.data.msb } let (pre_levels, post_levels) = _get_max_annotation_levels(fields.filter(f => if is-bf-field(f) { is-note-field(f) } else { f.type == "annotation" } )) let meta = ( size: fields.filter(f => if is-bf-field(f) { is-data-field(f) } else { f.type == "bitbox" } ).map(f => if (f.data.size == auto) { args.bpr } else { f.data.size } ).sum(), msb: args.msb, cols: ( pre: pre_levels, main: args.bpr, post: post_levels, ), header: ( rows: 1, ), side: ( left: ( cols: if (args.side.left_cols == auto) { (auto,)*pre_levels } else { args.side.left_cols }, ), right: ( cols: if (args.side.right_cols == auto) { (auto,)*post_levels } else { args.side.right_cols }, ) ) ) return meta; } /// helper to calc number values from autofill string arguments #let _get_header_autofill_values(autofill, fields, meta) = { if (autofill == "bounds") { return fields.filter(f => is-data-field(f)).map(f => if f.data.range.start == f.data.range.end { (f.data.range.start,) } else {(f.data.range.start, f.data.range.end)}).flatten() } else if (autofill == "all") { return range(meta.size) } else if (autofill == "offsets") { let _fields = fields.filter(f => is-data-field(f)).map(f => f.data.range.start).flatten() //.filter(value => value < meta.cols.main).flatten() _fields.push(meta.cols.main -1) _fields.push(meta.size -1) return _fields.dedup() } else if (autofill == "bytes") { let _fields = range(meta.size, step: 8) _fields.push(meta.cols.main -1) _fields.push(meta.size -1) return _fields.dedup() } else { () } } /// Index all fields #let index_fields(fields) = { fields.enumerate().map(((idx, f)) => { assert_bf-field(f) 
dict_insert_and_return(f, "field-index", idx) }) } /// indexes all fields and add some additional field data #let finalize_fields(fields, meta) = { // This part must be changed if the user low level api changes. let _fields = index_fields(fields) // Define some variables let bpr = meta.cols.main let range_idx = 0; let fields = (); // data fields let data_fields = _fields.filter(f => is-data-field(f)) if (meta.msb == left ) { data_fields = data_fields.rev() } for f in data_fields { let size = if (f.data.size == auto) { bpr - calc.rem(range_idx, bpr) } else { f.data.size } let start = range_idx; range_idx += size; let end = range_idx - 1; fields.push(data-field(f.field-index, size, start, end, f.data.label, format: f.data.format)) } // note fields for f in _fields.filter(f => is-note-field(f)) { let anchor = _get_index_of_next_data_field(f.field-index, _fields) let data = dict_insert_and_return(f.data, "anchor", anchor) fields.push(dict_insert_and_return(f, "data", data)) } // header fields -- needs data fields already processed !! for f in _fields.filter(f => is-header-field(f)) { let autofill_values = _get_header_autofill_values(f.data.autofill, fields, meta); let numbers = if f.data.numbers == none { () } else { f.data.numbers + autofill_values } let labels = f.data.at("labels", default: (:)) fields.push(header-field( start: if f.data.range.start == auto { if meta.msb == right { 0 } else { meta.size - bpr } } else { assert.eq(type(f.data.range.start),int); f.data.range.start }, end: if f.data.range.end == auto { if meta.msb == right { bpr } else { meta.size } } else { assert.eq(type(f.data.range.end),int); f.data.range.end }, msb: meta.msb, numbers: numbers, labels: labels, ..f.data.format, )) break // workaround to only allow one bitheader till multiple bitheaders are supported. 
} return fields } /// generate data cells from data-fields #let generate_data_cells(fields, meta) = { let data_fields = fields.filter(f => f.field-type == "data-field") if (meta.msb == left ) { data_fields = data_fields.rev() } data_fields = data_fields let bpr = meta.cols.main; let _cells = (); let idx = 0; for field in data_fields { assert_data-field(field) let len = field.data.size let slice_idx = 0; let should_span = is-multirow(field, bpr) let current_offset = calc.rem(idx, bpr) // let num_of_wraps = calc_row_wrapping(field, bpr, current_offset) while len > 0 { let rem_space = bpr - calc.rem(idx, bpr); let cell_size = calc.min(len, rem_space); // calc stroke let _default_stroke = (1pt + black) let _stroke = ( top: _default_stroke, bottom: _default_stroke, rest: _default_stroke, ) if ((len - cell_size) > 0 and data_fields.last().field-index != field.field-index) { _stroke.at("bottom") = field.data.format.fill } if (slice_idx > 0){ _stroke.at("top") = none } let cell_index = (field.field-index, slice_idx) let x_pos = calc.rem(idx,bpr) + meta.cols.pre let y_pos = int(idx/bpr) + meta.header.rows let cell_format = ( fill: field.data.format.fill, stroke: _stroke, ) // adjust label for breaking fields. let middle = int(field.data.size / 2 / bpr) // roughly calc the row where the label should be displayed let label = if (should_span or middle == slice_idx) { field.data.label } else if (slice_idx < 2 and len < bpr) { "..." 
} else {none} // prepare for next cell let tmp_size = if should_span {field.data.size} else {cell_size} idx += tmp_size; len -= tmp_size; slice_idx += 1; // add bf-cell to _cells _cells.push( //type, grid, x, y, colspan:1, rowspan:1, label, idx ,format: auto bf-cell("data-cell", x: x_pos, y: y_pos, colspan: cell_size, rowspan: if(should_span) { int(field.data.size/bpr)} else {1}, label: label, cell-idx: cell_index, format: cell_format ) ) } } return _cells } /// generate note cells from note-fields #let generate_note_cells(fields, meta) = { let note_fields = fields.filter(f => f.field-type == "note-field") let _cells = () let bpr = meta.cols.main for field in note_fields { let side = field.data.side let level = field.data.level // get the associated field let anchor_field = fields.find(f => f.field-index == field.data.anchor) let row = meta.header.rows; if anchor_field != none { row = int( if (meta.msb == left) { (meta.size - anchor_field.data.range.end)/bpr } else { anchor_field.data.range.start/bpr }) + meta.header.rows } else { // if no anchor could be found, fail silently continue } _cells.push( bf-cell("note-cell", cell-idx: (field.field-index, 0), x: if (side == left) { meta.cols.pre - level - 1 } else { meta.cols.pre + bpr + level }, y: int(row), rowspan: field.data.rowspan, label: field.data.label, format: field.data.format, ) ) } return _cells } /// generate header cells from header-fields #let generate_header_cells(fields, meta) = { let header_fields = fields.filter(f => f.field-type == "header-field") let bpr = meta.cols.main let _cells = () for header in header_fields { let nums = header.data.at("numbers", default: ()) + header.data.at("labels").keys().map(k => int(k)) let cell = nums.filter(num => num >= header.data.range.start and num < header.data.range.end).dedup().map(num =>{ let label = header.data.labels.at(str(num), default: "") let show_number = num in header.data.numbers //header.data.numbers != none and num in header.data.numbers if 
header.data.msb == left { header-cell(num, label: label, show-number: show_number, pos: (bpr - 1) - calc.rem(num,bpr) , meta, ..header.data.format) // TODO } else { header-cell(num, label: label, show-number: show_number, meta, ..header.data.format) // TODO } }) _cells.push(cell) } return _cells } /// generates cells from fields #let generate_cells(meta, fields) = { // data let data_cells = generate_data_cells(fields, meta); // notes let note_cells = generate_note_cells(fields, meta); // header let header_cells = generate_header_cells(fields, meta); return (header_cells, data_cells, note_cells).flatten() } /// map bf custom cells to tablex cells #let map_cells(cells) = { cells.map(c => { let cell_type = c.at("cell-type", default: none) let body = if (cell_type == "header-cell") { let label_text = c.label.text let label_num = c.label.num locate(loc => { style(styles => { set text(if c.format.text-size == auto { _get_header_font_size(loc) } else { c.format.text-size }) set align(center + bottom) let size = measure(label_text, styles).width stack(dir: ttb, spacing: 0pt, if is-not-empty(label_text) { box(height: size, inset: (left: 50%, rest: 0pt))[ #set align(start) #rotate(c.format.at("angle", default: -60deg), origin: left, label_text) ] }, if (is-not-empty(label_text) and c.format.at("marker", default: auto) != none){ line(end:(0pt, 5pt)) }, if c.format.number {box(inset: (top:3pt, rest: 0pt), label_num)}, ) }) }) } else { box( height: 100%, width: if (cell_type == "data-cell") { 100% } else {auto}, stroke: c.format.at("stroke", default: none), c.label ) } return cellx( x: c.position.x, y: c.position.y, colspan: c.span.cols, rowspan: c.span.rows, inset: c.format.at("inset", default: 0pt), fill: c.format.at("fill", default: none), align: c.format.at("align", default: center + horizon), body ) }) } /// produce the final output #let generate_table(meta, cells) = { let cells = map_cells(cells); // TODO: new grid with subgrids. 
let table = locate(loc => { gridx( columns: meta.side.left.cols + range(meta.cols.main).map(i => 1fr) + meta.side.right.cols, rows: (auto, _get_row_height(loc)), align: center + horizon, inset: (x:0pt, y: 4pt), ..cells ) }) return table }
https://github.com/jgm/typst-hs
https://raw.githubusercontent.com/jgm/typst-hs/main/test/typ/text/case-01.typ
typst
Other
// Error: 8-9 expected string or content, found integer #upper(1)
https://github.com/a-kkiri/HEU-Report-Typst
https://raw.githubusercontent.com/a-kkiri/HEU-Report-Typst/main/README.md
markdown
Apache License 2.0
<h1 align="center">
 <a href="https://github.com/a-kkiri/HEU-Report-Typst">
 <img alt="HEU_Latex_Template" src="https://github.com/a-kkiri/HEU-Report-Typst/blob/main/figures/heu_logo.png?raw=true" />
 </a>
 <a href="https://github.com/typst/typst">
 <img alt="Typst" src="https://user-images.githubusercontent.com/17899797/226108480-722b770e-6313-40d7-84f2-26bebb55a281.png">
 </a>
 <br />Harbin Engineering University Course Assignment Typst Template
</h1>

## About the template

A simple, general-purpose Typst template for course assignments and final course reports at Harbin Engineering University (HEU).

## Download

- On the right side of the page, click **Clone or download -> Download Zip**.

## Usage

The default template consists of five main parts:

- main.typ — the main file
- refs.bib — the bibliography
- template.typ — document formatting, including basic settings and functions
- fonts — the font folder
- figures — the figure folder

To use the template, first configure main.typ, setting the title, description, author and other information. To change the document format further, edit template.typ.

While writing, run `typst watch --font-path ./fonts main.typ --root .` in a terminal for a live preview; once the document is finished, run `typst compile --font-path ./fonts main.typ` to generate the PDF file.

## Preview

| [Cover](https://github.com/a-kkiri/HEU-Report-Typst/blob/main/figures/cover.jpg) | [Content](https://github.com/a-kkiri/HEU-Report-Typst/blob/main/figures/content.jpg)|
|:---:|:---:|
| ![cover](https://github.com/a-kkiri/HEU-Report-Typst/blob/main/figures/cover.jpg?raw=true) | ![content](https://github.com/a-kkiri/HEU-Report-Typst/blob/main/figures/content.jpg?raw=true)|

> [!IMPORTANT]
> The Chinese fonts used by the document contain only about 7,000 common Chinese characters and symbols. If some characters fail to display, switch to another font.
https://github.com/catppuccin/typst
https://raw.githubusercontent.com/catppuccin/typst/main/src/flavors/catppuccin-macchiato.typ
typst
MIT License
#let macchiato = ( name: "Macchiato", emoji: "🌺", order: 2, dark: true, light: false, colors: ( rosewater: ( name: "Rosewater", order: 0, hex: "#f4dbd6", rgb: rgb(244, 219, 214), accent: true, ), flamingo: ( name: "Flamingo", order: 1, hex: "#f0c6c6", rgb: rgb(240, 198, 198), accent: true, ), pink: ( name: "Pink", order: 2, hex: "#f5bde6", rgb: rgb(245, 189, 230), accent: true, ), mauve: ( name: "Mauve", order: 3, hex: "#c6a0f6", rgb: rgb(198, 160, 246), accent: true, ), red: ( name: "Red", order: 4, hex: "#ed8796", rgb: rgb(237, 135, 150), accent: true, ), maroon: ( name: "Maroon", order: 5, hex: "#ee99a0", rgb: rgb(238, 153, 160), accent: true, ), peach: ( name: "Peach", order: 6, hex: "#f5a97f", rgb: rgb(245, 169, 127), accent: true, ), yellow: ( name: "Yellow", order: 7, hex: "#eed49f", rgb: rgb(238, 212, 159), accent: true, ), green: ( name: "Green", order: 8, hex: "#a6da95", rgb: rgb(166, 218, 149), accent: true, ), teal: ( name: "Teal", order: 9, hex: "#8bd5ca", rgb: rgb(139, 213, 202), accent: true, ), sky: ( name: "Sky", order: 10, hex: "#91d7e3", rgb: rgb(145, 215, 227), accent: true, ), sapphire: ( name: "Sapphire", order: 11, hex: "#7dc4e4", rgb: rgb(125, 196, 228), accent: true, ), blue: ( name: "Blue", order: 12, hex: "#8aadf4", rgb: rgb(138, 173, 244), accent: true, ), lavender: ( name: "Lavender", order: 13, hex: "#b7bdf8", rgb: rgb(183, 189, 248), accent: true, ), text: ( name: "Text", order: 14, hex: "#cad3f5", rgb: rgb(202, 211, 245), accent: false, ), subtext1: ( name: "Subtext 1", order: 15, hex: "#b8c0e0", rgb: rgb(184, 192, 224), accent: false, ), subtext0: ( name: "Subtext 0", order: 16, hex: "#a5adcb", rgb: rgb(165, 173, 203), accent: false, ), overlay2: ( name: "Overlay 2", order: 17, hex: "#939ab7", rgb: rgb(147, 154, 183), accent: false, ), overlay1: ( name: "Overlay 1", order: 18, hex: "#8087a2", rgb: rgb(128, 135, 162), accent: false, ), overlay0: ( name: "Overlay 0", order: 19, hex: "#6e738d", rgb: rgb(110, 115, 141), accent: false, 
), surface2: ( name: "Surface 2", order: 20, hex: "#5b6078", rgb: rgb(91, 96, 120), accent: false, ), surface1: ( name: "Surface 1", order: 21, hex: "#494d64", rgb: rgb(73, 77, 100), accent: false, ), surface0: ( name: "Surface 0", order: 22, hex: "#363a4f", rgb: rgb(54, 58, 79), accent: false, ), base: ( name: "Base", order: 23, hex: "#24273a", rgb: rgb(36, 39, 58), accent: false, ), mantle: ( name: "Mantle", order: 24, hex: "#1e2030", rgb: rgb(30, 32, 48), accent: false, ), crust: ( name: "Crust", order: 25, hex: "#181926", rgb: rgb(24, 25, 38), accent: false, ), ), )
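A short sketch of how the flavor dictionary might be consumed from another Typst file (the import path and the set rules below are assumptions about a consumer, not part of this package):

```typst
#import "catppuccin-macchiato.typ": macchiato

// Page and text colors taken straight from the flavor's `rgb` entries.
#set page(fill: macchiato.colors.base.rgb)
#set text(fill: macchiato.colors.text.rgb)

// The accent colors can be filtered out of the dictionary via their flag:
#let accents = macchiato.colors.pairs().filter(((k, v)) => v.accent)
```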
https://github.com/v411e/optimal-ovgu-thesis
https://raw.githubusercontent.com/v411e/optimal-ovgu-thesis/main/template/chapter/99-Appendix.typ
typst
MIT License
#counter(heading).update(0) #set heading(numbering: (..nums) => { let vals = nums.pos() let value = "ABCDEFGHIJ".at(vals.at(0) - 1) if vals.len() == 1 { return "Appendix " + value } else { return value + "." + nums.pos().slice(1).map(str).join(".") } }) = == Lorem <appendix_lorem>
https://github.com/Myriad-Dreamin/typst.ts
https://raw.githubusercontent.com/Myriad-Dreamin/typst.ts/main/fuzzers/corpora/layout/par-bidi_00.typ
typst
Apache License 2.0
#import "/contrib/templates/std-tests/preset.typ": * #show: test-page // Test reordering with different top-level paragraph directions. #let content = par[Text טֶקסט] #text(lang: "he", content) #text(lang: "de", content)
https://github.com/tuto193/typst-uos-thesis
https://raw.githubusercontent.com/tuto193/typst-uos-thesis/main/languages.typ
typst
MIT License
// Template for other languages. Replace values but not keys where needed #let english = ( degree: ( bachelor: "Bachelor's thesis", master: "Master's thesis", ), index: "Contents", supervisors: ( first: "First supervisor", second: "Second supervisor", ), sections: ( main: "Chapter", other: "Section", ), lists: ( figures: "List of Figures", tables: "List of Tables", glossary: "List of Abbreviations" ), bib: "Bibliography", fig: "Figure", tab: "Table", months: ( "January", "February", "March", "April", "May", "June", "July", "August", "September", "October", "November", "December", ), proclamation: ( title: "Proclamation", contents: [ I hereby confirm that I wrote this thesis independently and that I have not made use of any other resources or means than those indicated. ] ), ) #let german = ( degree: ( bachelor: "Bachelorarbeit", master: "Masterarbeit", ), supervisors: ( first: "Erstgutachter", second: "Zweitgutachter", ), index: "Inhaltsverzeichnis", sections: ( main: "Kapitel", other: "Abschnitt", ), lists: ( figures: "Abbildungsverzeichnis", tables: "Tabellenverzeichnis", glossary: "Abkürzungsverzeichnis", ), bib: "Quellenangaben", fig: "Abbildung", tab: "Tabelle", months: ( "Januar", "Februar", "März", "April", "Mai", "Juni", "Juli", "August", "September", "Oktober", "November", "Dezember", ), proclamation: ( title: "Eidesstattliche Erklärung", contents: [ Hiermit versichere ich, dass ich die vorliegende Arbeit selbständig verfasst und keine anderen als die angegebenen Quellen und Hilfsmittel benutzt sowie Zitate kenntlich gemacht habe. 
]
  ),
)

#let spanish = (
  degree: (
    bachelor: "Tesis de bachelor",
    master: "T<NAME> maestría",
  ),
  index: "Contenidos",
  supervisors: (
    first: "Primer supervisor",
    second: "Segundo supervisor",
  ),
  sections: (
    main: "Capítulo",
    other: "Sección",
  ),
  lists: (
    figures: "Lista de Ilustraciones",
    tables: "Lista de Tablas",
    glossary: "Lista de Abreviaturas",
  ),
  bib: "Bibliografía",
  fig: "Ilustración",
  tab: "Tabla",
  months: (
    "Enero",
    "Febrero",
    "Marzo",
    "Abril",
    "Mayo",
    "Junio",
    "Julio",
    "Agosto",
    "Septiembre",
    "Octubre",
    "Noviembre",
    "Diciembre",
  ),
  proclamation: (
    title: "Declaración",
    contents: [
      Por la presente, confirmo que he escrito esta tesis de manera independiente y que no he utilizado ningún otro recurso o medio que no esté indicado.
    ]
  ),
)

#let dict = (
  en: english,
  de: german,
  es: spanish,
)
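A sketch of how a thesis file might look up localized strings from the `dict` defined above (the `lang` value and the particular field accesses are illustrative assumptions):

```typst
#import "languages.typ": dict

// e.g. taken from a template option; must be one of "en", "de", "es".
#let lang = "de"
#let strings = dict.at(lang)

= #strings.lists.figures  // heading text: "Abbildungsverzeichnis"
#strings.degree.master    // "Masterarbeit"
```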
https://github.com/HEIGVD-Experience/docs
https://raw.githubusercontent.com/HEIGVD-Experience/docs/main/S5/MAT3/docs/3-ReprGeometriquePlanGauss/repr-geometrie-plan-Gauss.typ
typst
#import "/_settings/typst/template-note.typ": conf
#show: doc => conf(
  title: [
    Geometric representation and the Gauss plane
  ],
  lesson: "MAT3",
  chapter: "3 - Représentation géométrique et plan de Gauss",
  definition: "This document presents the geometric representation of complex numbers in the Gauss plane, where each complex number z = a + bj is represented by the point M(a, b). It also covers the complex conjugate, the trigonometric form (modulus and argument), and operations in trigonometric form such as multiplication, division and powers. De Moivre's formula for powers of complex numbers is explained as well.",
  col: 1,
  doc,
)

= Representation in the plane

In an orthonormal frame of the plane $R^2$, a complex number $z = a + "bj"$ is represented by the point $M(a, b)$, where $a$ is the real part and $b$ the imaginary part. This visual representation takes place in the complex plane, also called the Gauss plane.

#image("/_src/img/docs/image copy 148.png")

== Complex conjugate in the plane

In the complex plane, the complex conjugate of $z$, written $z"*"$, is the reflection of $z$ across the real axis.

#image("/_src/img/docs/image copy 149.png")

= Trigonometric form

Every complex number can be defined by two values, called the *modulus* and the *principal argument*.

== Modulus

The modulus is the norm of the vector of the complex number $z = a + b j = arrow("OM")$, and is written:

$ |z| = r = norm((a/b)) = sqrt(a^2 + b^2) $

*The modulus is always a positive value, since it is a length.*

== Principal argument

The principal argument of a complex number $z$ is the oriented angle $theta$, measured in radians and expressed in the interval $]- pi, pi]$, that the vector $arrow("OM")$ forms with the axis of positive real numbers.

To compute the angle, we always take its smaller part, either going through quadrants 1 and 2, or through quadrants 4 and 3 with a $-$ sign in front.
To compute the angle $theta$ we can use the following table:

#align(center)[
#linebreak()
#box(
stroke: 1pt + color.black,
inset: 20pt,
)[
$ "Arg"(z) = cases("Arctan"(b/a) - pi &"if" a < 0 " and " b < 0, (-pi)/2 &"if" a = 0 " and " b < 0, "Arctan"(b/a) &"if" a > 0, pi/2 &"if" a = 0 " and " b > 0, "Arctan"(b/a) + pi &"if" a < 0 " and " b gt.eq 0) $
]
]

= Determining a complex number from its trigonometric form

A complex number can be determined solely from its modulus and principal argument. Conversely, starting from the modulus and the angle, $z$ can be written as:

#align(center)[
#linebreak()
#box(
stroke: 1pt + color.black,
inset: 10pt,
)[
$ z = r(cos(theta) + j*sin(theta)). $
]
]

This form is called the *trigonometric form*, also known as the polar form of $z$.

== Trigonometric form #sym.arrow algebraic form

To go from the trigonometric form to the algebraic form, it suffices to set:

#align(center)[
#linebreak()
#box(
stroke: 1pt + color.black,
inset: 10pt,
)[
$ a = "Re"(z) = r*cos(theta) " and " b = "Im"(z) = r*sin(theta) $
]
]

== Properties

1. $|z| ≥ 0 "and" |z| = 0 "if and only if" z = 0$
2. $z * z"*" = |z|^2$
3. $|z| = |z"*"|$
4. $|z * w| = |z| * |w|$
5. $|z/w| = frac(|z|,|w|)$
6. $|z + w| lt.eq |z| + |w| "(triangle inequality)"$,
7. If $z = a + 0j = a$ is a pure real number, then $|z| = |a|$, i.e. the modulus is the absolute value.

= Operations with the trigonometric form

== Multiplication with the trigonometric form

Take two complex numbers in trigonometric form:

$ z_1 = r_1(cos(Theta_1) + j*sin(Theta_1)) " and " z_2 = r_2(cos(Theta_2) + j*sin(Theta_2)) $

To multiply two complex numbers in trigonometric form we must:

#align(center)[
1. *Multiply the moduli of $z_1$ and $z_2$,* \
2.
*Add the arguments of $z_1$ and $z_2$.*
]

We therefore have:

#align(center)[
#linebreak()
#box(
stroke: 1pt + color.black,
inset: 10pt,
)[
$ z_1*z_2 = r_1*r_2(cos(Theta_1 + Theta_2) + j*sin(Theta_1 + Theta_2)) $
]
]
#linebreak()

== Division with the trigonometric form

Take two complex numbers in trigonometric form:

$ z_1 = r_1(cos(Theta_1) + j*sin(Theta_1)) " and " z_2 = r_2(cos(Theta_2) + j*sin(Theta_2)) $

with *$z_2$ non-zero*!

#align(center)[
1. *Divide the modulus of $z_1$ by that of $z_2$,*
2. *Subtract the argument of $z_2$ from that of $z_1$.*
]

We obtain the following formula:

#align(center)[
#linebreak()
#box(
stroke: 1pt + color.black,
inset: 10pt,
)[
$ frac(z_1,z_2) = frac(r_1,r_2)(cos(Theta_1 - Theta_2) + j*sin(Theta_1 - Theta_2)) $
]
]
#linebreak()

== Powers with the trigonometric form

The general formula for $z^n$ with $n in Z$:

#align(center)[
#linebreak()
#box(
stroke: 1pt + color.black,
inset: 10pt,
)[
$ z^n = r^n*(cos(n*Theta) + j*sin(n*Theta)) $
]
]

So it suffices to:

#align(center)[
1. *Raise the modulus of $z$ to the power $n$*,
2. *Multiply the argument of $z$ by $n$*.
]

= Additional information

== De Moivre's formula

Let $cos(theta) + j*sin(theta)$ be a complex number and let $n in ZZ$ be an integer; then:

#align(center)[
#linebreak()
#box(
stroke: 1pt + color.black,
inset: 10pt,
)[
$ (cos(theta) + j*sin(theta))^n = cos(n theta) + j*sin(n theta) $
]
]

== Miscellaneous

Note that if $"Arg"(z) = alpha$ then:

$ "Arg"(z * j) = "a rotation by " frac(pi, 2) \
"Arg"(frac(z, j)) = "a rotation by " frac(-pi, 2) $
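As a numeric cross-check of the modulus and argument definitions above, here is a small Typst sketch (the `z-polar` helper is hypothetical and not part of these notes; Typst's built-in `calc.atan2` performs the same quadrant case analysis as the Arg table):

```typst
// Sketch: compute modulus and principal argument of z = a + bj.
#let z-polar(a, b) = (
  modulus: calc.sqrt(a*a + b*b),
  argument: calc.atan2(a, b),
)

// z = -1 + j has modulus sqrt(2) and argument 3pi/4 (i.e. 135 degrees):
#let p = z-polar(-1, 1)
|z| = #p.modulus, Arg(z) = #p.argument
```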
https://github.com/medewitt/bio720
https://raw.githubusercontent.com/medewitt/bio720/main/main.typ
typst
BSD 3-Clause "New" or "Revised" License
#import "template.typ": *
#show: ams-article.with(
  title: "Reading notes: An integrated map of genetic variation from 1,092 human genomes",
  authors: (
    (
      name: "<NAME>",
      department: [Department of Biology],
      organization: [Wake Forest University],
      location: [Winston Salem, NC 27101],
      email: "<EMAIL>",
      url: "www.michaeldewittjr.com"
    ),
  ),
  abstract: "Caveat Emptor",
  bibliography-file: "integratedbio.bib",
)
#set heading(numbering: "1.")
#let today = datetime.today()
#show glossaryWords("glossary.yml"): word => glossaryShow("glossary.yml", word)

#block(
  fill: aqua,
  inset: 5pt,
  radius: 4pt
)[
  #text(14pt)[Objective]
  Characterize rare variants in the human genome, defined as SNPs present in roughly 1% of the population.
]

= Data sources

The 1,092 human genomes data set comprises a mix of low-coverage whole-genome sequencing (WGS) data (lower coverage meaning fewer reads per sequence), targeted deep exome sequencing data, and dense SNP data. These data were gathered from 14 different populations, drawing from a mixture of existing and newly collected data.

The fourteen populations were:

- ASW: people of African ancestry living in the Southwest United States (AFR)
- CEU: Utah residents with ancestry from Northern and Western Europe (EUR)
- CHB: Han Chinese in Beijing, China (EAS)
- CHS: Han Chinese in South China (EAS)
- CLM: Colombians in Medellín, Colombia (AMR)
- FIN: Finnish in Finland (EUR)
- GBR: British from England and Scotland, UK (EUR)
- IBS: Iberian populations in Spain (EUR)
- LWK: Luhya in Webuye, Kenya (AFR)
- JPT: Japanese in Tokyo, Japan (EAS)
- MXL: people with Mexican ancestry in Los Angeles, California (AMR)
- PUR: Puerto Ricans in Puerto Rico, USA (AMR)
- TSI: Toscani in Italia (EUR)
- YRI: Yoruba in Ibadan, Nigeria (AFR)

#block(
  fill: luma(230),
  inset: 5pt,
  radius: 4pt
)[
  There is some obvious selection bias in these data, as many samples are from Europeans or European descendants, and these groupings are a bit vague.
  Central Asia, India, Australia, Northern Africa and the Middle East, South Central Africa, and many Pacific islands are completely missing. Similarly, the groupings fail to capture known unique populations (e.g., the Basque in Spain) while lumping together some cosmopolitan populations with murkier migration histories.
]

== Focus for mutations

Due to the challenges of identifying complex and large variants, and shorter indels in regions of low complexity, the researchers focused on:

- Biallelic indels
- Large deletions

== Power to detect rare variants

They calculate that their analysis has 99.3% power to detect an SNP present in 1% of the study population (@fig1). Similarly, they find that they can detect SNPs at 0.1% frequency with over 90% power (except in the WGS data). This is because of the lower read coverage in the WGS data: with fewer reads there is less opportunity to separate true variants from noise/error at these lower frequencies.

The researchers used information from inferred haplotypes (using known familial pairings from mother-father-offspring trios) to enrich their data and improve their power, through the use of linkage disequilibrium (LD) as measured from these familial trios.

#figure(
  image("assets/figure1.png", width: 80%),
  caption: [a, Power to detect SNPs as a function of variant count (and proportion) across the entire set of samples, estimated by comparison to independent SNP array data in the exome (green) and whole genome (blue). b, Genotype accuracy compared with the same SNP array data as a function of variant frequency, summarized by the r2 between true and inferred genotype (coded as 0, 1 and 2) within the exome (green), whole genome after haplotype integration (blue), and whole genome without haplotype integration (red). LD, linkage disequilibrium; WGS, whole-genome sequencing.]
) <fig1>

= Genetic variation within and between populations

Many of the more _common variants_ identified in this study have been _previously described_ (94% of variants with frequency $>=$ 5%). This study described some additional, less well-characterized variants.

#figure(
  image("assets/figure2.png", width: 80%),
  caption: [
    The distribution of rare and common variants. a, Summary of inferred haplotypes across a 100-kb region of chromosome 2 spanning the genes ALMS1 and NAT8, variation in which has been associated with kidney disease (ref. 45). Each row represents an estimated haplotype, with the population of origin indicated on the right. Reference alleles are indicated by the light blue background. Variants (non-reference alleles) above 0.5% frequency are indicated by pink (typed on the high-density SNP array), white (previously known) and dark blue (not previously known). Low frequency variants (<0.5%) are indicated by blue crosses. Indels are indicated by green triangles and novel variants by dashes below. A large, low-frequency deletion (black line) spanning NAT8 is present in some populations. Multiple structural haplotypes mediated by segmental duplications are present at this locus, including copy number gains, which were not genotyped for this study. Within each population, haplotypes are ordered by total variant count across the region. b, The fraction of variants identified across the project that are found in only one population (white line), are restricted to a single ancestry-based group (defined as in a, solid colour), are found in all groups (solid black line) and all populations (dotted black line). c, The density of the expected number of variants per kilobase carried by a genome drawn from each population, as a function of variant frequency (see Supplementary Information). Colours as in a. Under a model of constant population size, the expected density is constant across the frequency spectrum.
  ]
)

== Many common variants

This study found that variants present at $>=$ 10% frequency were found in all of the population groups, while 53% of rare variants (at about 0.5% frequency) were found in only a single population. They found that the derived allele frequency distributions diverge below 40%, such that individuals from African backgrounds carry three times as many low-frequency variants. This may reflect ancestral bottlenecks in non-African populations. All populations show an excess of rare variants (< 0.5% frequency), which likely reflects recent population growth.

== Inferring history

#block(
  fill: aqua,
  inset: 5pt,
  radius: 4pt
)[
  Figure 3 highlights the increased diversity in Africa, which aligns with our understanding of human migration and evolution out of Africa. This is shown by the marginal distribution of f2 variants overall and within groups (panel a), the absolute percentage of novel variants identified (panel c), and the more divergent, shorter shared haplotype lengths (panel b).
]

The researchers then examined patterns of variant sharing between populations (they refer to these as f2 variants). These reflect between-population sharing of mutations, for example:

- Mutations in the Spanish (IBS) population are more likely to appear in persons from the Americas than in other European groups.
- Within East Asian populations, variants from southern Han Chinese (CHS) are more likely to be shared with Han Chinese in Beijing (CHB) than with Japan (JPT); Japanese variants, in turn, are more likely to be shared with those from Beijing.
- Persons of African descent in the American Southwest (ASW) share more mutations with the Yoruba in Nigeria (YRI) than with the Luhya in Kenya (LWK).

They also saw interesting dynamics in which the Finnish have mutations more closely shared with the African populations tested (which makes sense given the relative isolation of the Finnish language).

(@fig3 b plots, for variants of a given frequency, the median length of the haplotype shared around them.)
Most f2 variants were found in Africa, consistent with an older population that diverged earlier. They found a negative correlation between variant frequency and the median length of the shared haplotype (i.e., rarer variants tend to sit on longer shared haplotypes, reflecting more recent shared ancestry). More recently related groups share longer haplotypes, while those with shorter shared lengths are more distantly related. @fig3 c further shows that African-ancestry segments had higher variation (a larger share of new variants).

The general consensus is that humans may have migrated to the Americas using small boats.

#figure(
  image("assets/figure3.png", width: 80%),
  caption: [
    a, Sharing of f2 variants, those found exactly twice across the entire sample, within and between populations. Each row represents the distribution across populations for the origin of samples sharing an f2 variant with the target population (indicated by the left-hand side). The grey bars represent the average number of f2 variants carried by a randomly chosen genome in each population. b, Median length of haplotype identity (excluding cryptically related samples and singleton variants, and allowing for up to two genotype errors) between two chromosomes that share variants of a given frequency in each population. Estimates are from 200 randomly sampled regions of 1 Mb each and up to 15 pairs of individuals for each variant. c, The average proportion of variants that are new (compared with the pilot phase of the project) among those found in regions inferred to have different ancestries within ASW, PUR, CLM and MXL populations. Error bars represent 95% bootstrap confidence intervals. NatAm, Native American.
  ]
) <fig3>

== Admixture

Admixture was examined among the cosmopolitan populations. On average, the MXL group had the greatest proportion of Native American ancestry, but the individual variance was very high (3% to 92%, despite an average of 47%).
= Understanding purifying selection

#block(
  fill: aqua,
  inset: 5pt,
  radius: 4pt
)[
  We observe lower rates of genetic diversity, especially at sites that control function. Figure 4 shows evidence of purifying (background) selection in the human genome, occurring at higher rates at sites of functional importance (as selection operates on phenotype, and thus function, rather than genotype).
]

The purpose of figure 4 from _An integrated map of genetic variation from 1,092 human genomes_ @the1000genomesprojectconsortiumIntegratedMapGenetic2021 is to illustrate the role of purifying selection within and between populations. Critically, this first requires a discussion of purifying selection itself.

*Purifying selection*, or negative selection, is a type of background selection resulting in lower genetic diversity @cvijovicEffectStrongPurifying2018. When highly deleterious mutations occur in the genome, offspring do not survive long enough to pass these mutations to subsequent generations, at least on longer timescales @cvijovicEffectStrongPurifying2018. When this background selection occurs, the observed genetic diversity is lower than what would be expected under neutral substitution. However, these dynamics play out on longer, population-level timescales, and transient deleterious mutations do appear on shorter timescales.

Additionally, as described by Vitti et al., "selection operates at the level of the phenotype, alleles showing evidence of selection are likely to be of functional relevance" @vittiDetectingNaturalSelection2013. Thus we would anticipate purifying selection to act on alleles with functional relevance. Figure 4 of the human genomes paper then tests this hypothesis by examining the rate of rare mutations at sites with different levels of conservation, and by assessing the conservation of sites associated with particular functionality.
= Quantifying human variation

== Genomic Evolutionary Rate Profiling Score

Genomic Evolutionary Rate Profiling (GERP) scores provide a way of identifying which sites are likely to yield deleterious mutations. This statistical framework estimates the difference between the expected number of substitutions under a neutral model (which assumes no impact on fitness) and the observed variation. Positive GERP scores thus represent fewer substitutions than expected, indicating a more conserved site, while negative scores indicate more substitutions than expected. Put another way, we would expect high GERP scores in regions important for survival to reproductive age, which are largely conserved in the population (i.e., present at high levels with few mutations in these regions).

== Derived Allele Frequency

We can examine the derived allele frequency (DAF) to assess the overall distribution of alleles within a population. As a reminder, derived alleles are variants that have arisen since the last common ancestor. The derived allele frequency summarizes the pattern and frequency with which these variant alleles appear. Taking the sampled population as a whole, we can calculate the frequency with which each allele variant appears.

= Examination of Figure 4

== Panel A

In @fig4a we see the following:

- X-axis: the GERP score, representing evolutionary conservation, where higher scores are more conserved. The comparison group was a set of sequenced mammals.
- Y-axis: the proportion of variants with a DAF < 0.5%, where higher values indicate a greater share of rare variants in the studied population
- Colored lines: the different functional elements
- Crosses on the x and y axes represent the average values for GERP score and the proportion of variants with DAF < 0.5%, respectively.
From this figure we can conclude that:

- More generally, there are fewer mutations observed at more highly conserved sites (i.e., higher GERP scores and higher proportions of rare variants).
- Specifically, we see that added stop codons (Stop), splice mutations, and nonsynonymous (Nonsyn) mutations are very rare and likely associated with deleterious effects. Later in the paper these are identified as "loss of function" mutations.
- The addition of stop codons remains relatively rare at most sites across GERP scores, implying that such additions likely cause a severe loss of function. Because stop codons terminate translation prematurely, these additions prevent the full protein product from being made. This pattern is similar for the splice mutations, which affect how transcripts are assembled.
- We see an interesting phenomenon in which splice variants show more mutations at less conserved locations.
- The authors note that rare variant loads are similar for synonymous and nonsynonymous sites, suggesting weak selective constraints.

Cross-species comparisons can only be made over homologous regions.

#figure(
  image("assets/figure4a.png", width: 60%),
  caption: [The relationship between evolutionary conservation (measured by GERP score) and rare variant proportion (fraction of all variants with derived allele frequency (DAF) < 0.5%) for variants occurring in different functional elements and with different coding consequences. Crosses indicate the average GERP score at variant sites (x axis) and the proportion of rare variants (y axis) in each category.],
) <fig4a>

== Panel B

@fig4b examines the CTCF-binding motif within CTCF-binding peaks (putative CTCF-binding motif CCMYCTNNNGG).
The transcription repressor CTCF has been characterized as playing a vital role in transcription regulation, including the recombination of the antibody loci and the regulation of chromatin architecture #cite("filippovaExceptionallyConservedTranscriptional1996", "cooperDistributionIntensityConstraint2005"). Intuitively, we would expect relatively low diversity here, as chromatin structure formation is vital for transcription (and cellular function more generally). The binding motif is shown in the pictographic representation (the "logo plot") of the actual nucleotides.

In all cases, red represents the "out peak" and blue the "in peak" from the ChIP-seq analysis, which is used to map binding sites. Sites located within a peak are likely related to binding and associated with function: the in-peak motifs are experimentally mapped, active CTCF-binding sites, while the out-peak motifs match the CTCF motif sequence but lie outside binding peaks. This comparison indicates whether the conservation and lower diversity specifically preserve the functional binding sites.

=== Upper panel

The y axis again represents the GERP score in the different regions. Higher values represent regions more likely to be conserved, as they change less than expected under a neutral substitution model.

=== Lower panel

The y axis represents the average diversity, defined as the per-nucleotide pairwise distance, with higher values representing more differences (i.e., more distance).

=== Figure conclusions

As suspected, at this important binding motif we see that sites associated with function ("in peak") vary less than expected under neutral substitution, as shown by the GERP scores and the lower pairwise nucleotide distances in the lower panel. This is not to say that there isn't a more complex story, as there is a hint of degeneracy at position 8.
#figure(
  image("assets/figure4b.png", width: 60%),
  caption: [Levels of evolutionary conservation (mean GERP score, top) and genetic diversity (per-nucleotide pairwise differences, bottom) for the sequences matching the CTCF-binding motif within CTCF-binding peaks, as identified experimentally by ChIP-seq in the ENCODE project (blue) and in a matched set of motifs outside peaks (red). The logo plot shows the distribution of identified motifs within peaks.],
) <fig4b>

== Key conclusions

There are some highly conserved regions, and purifying selection at these conserved sites is likely driven by the addition of stop codons, splice mutations, and nonsynonymous mutations, given their low observed frequencies.

= Use of 1000 Genomes Project data in medical genetics

#block(
  fill: aqua,
  inset: 5pt,
  radius: 4pt
)[
  This project has yielded important results by allowing its variants to be imputed into other studies (Figure 5a), and it has highlighted the rates and frequencies at which different substitutions occur, serving as a baseline model.
]

The authors argue that these data can serve as reference data for future GWAS studies. As these data provide a "null model" for rare, low-frequency, and common variants, they give a background for what to expect in a random sample of the population. This null model is vital for detecting the relationship between phenotypic and genotypic differences. As they hint, the focus likely needs to be on functionally important polymorphisms (e.g., sites with high GERP scores), which are likely tied to function.

"Because many variants contributing to disease risk are likely to be segregating at low frequency, we recommend that variant frequency be considered when using the resource to identify pathological candidates."
#figure(
  image("assets/figure5.png", width: 80%),
  caption: [
    a, Accuracy of imputation of genome-wide SNPs, exome SNPs and indels (using sites on the Illumina 1 M array) into ten individuals of African ancestry (three LWK, four Masaai from Kinyawa, Kenya (MKK), two YRI), sequenced to high coverage by an independent technology (ref. 3). Only indels in regions of high sequence complexity with frequency >1% are analysed. Deletion imputation accuracy estimated by comparison to array data (ref. 46) (note that this is for a different set of individuals, although with a similar ancestry, but included on the same plot for clarity). Accuracy measured by squared Pearson correlation coefficient between imputed and true dosage across all sites in a frequency range estimated from the 1000 Genomes data. Lines represent whole-genome SNPs (solid), exome SNPs (long dashes), short indels (dotted) and large deletions (short dashes). SV, structural variants. b, The average number of variants in linkage disequilibrium (r2 > 0.5 among EUR) to focal SNPs identified in GWAS (ref. 47) as a function of distance from the index SNP. Lines indicate the number of HapMap (green), pilot (red) and phase I (blue) variants.
  ]
)

= Author's discussion

== Rare variation is likely associated with complex diseases

The authors argue that understanding rare variation is important to understanding complex diseases. A reminder here: rare is not private; rather, rare refers to variants at roughly 1% frequency or below.

== Cost-effective use of combining data from multiple sources

The authors argue that their incorporation of multiple types of data has allowed for a cost-effective way of reconstructing haplotypes. They didn't present evidence of this directly, but rather show that WGS combined with LD data allows for reconstruction. They found many variants outside the dense exome reads (40% not from exon reads). They mention use of CHG array data.
== Methodological advances

They mention that while they did some interesting things (combining reads and variant calls from multiple groups), there are still limitations with long reads, duplications, etc.

== Local differentiation through purifying selection despite metrics

The authors mention that purifying selection at functionally relevant sites can lead to substantial local differentiation despite low F#sub[ST]. Rare variants tend to be recent and restricted by geography (or ancestry). This suggests that local context (and inheritance) may be important for understanding particular disease phenotypes.

== Supplement

=== Method for Local Ancestry

- Created common reference panels from the available sequences (e.g., all of the African groups representing Africa).
- The exception was Native American ancestry, which used published sequences not included in the report (a panel from Mao et al., 2007).
- Made an ancestry call using a consensus approach across multiple ancestry methods:
  - LAMP-LD (hierarchical hidden Markov model approach)
  - HAPMIX (hidden Markov model)
  - RFMIX (random forests on conditional random fields)
  - MULTIMIX (hidden Markov model)
- Passed the SNP reads of the individuals, with putative haplotype panels enriched by the parent/child trios, through each method for each chromosome and haplotype; if 3/4 of the methods made the same call, that call was defined as the consensus.

=== Method for KEGG Analysis

First they calculate the number of excess rare nonsynonymous variants, defined as the number of nonsynonymous variants in excess of the rate predicted from the non-rare Ka/Ks ratio, as shown in @varcalc. MAF represents the minor allele frequency.
#set math.equation(numbering: "(1)")

$ "Excess rare Nonsynonymous" = "NonS"_("MAF" < 0.5%) \ & - "S"_( "MAF"< 0.5%) ("NonS"_("MAF" > 0.5%) / "S"_("MAF" > 0.5%)) $ <varcalc>

We can also examine a simple linear regression across the different mapped KEGG pathways. The table below reflects the regression of the number of excess nonsynonymous variants (dependent variable) on the GERP score (independent variable). Generally, as conservation increases, the excess of nonsynonymous variants increases, but there is a lot of noise in this model.

#table(
  columns: (auto, auto),
  inset: 10pt,
  align: horizon,
  [*Parameter*], [*Values*],
  $R^2$, [0.02139],
  "gerp", [0.16278 (SD 0.07249, p-value 0.0259)],
  "Model F",[F-statistic: 5.043 on 1 and 184 DF, p-value: 0.02592]
)

== Fixation Indices

Larger values represent higher differentiation, with the general rule that values greater than 0.15 are substantially different while a value of 1 is complete differentiation.

$ F_("st") = (pi_"between" - pi_"within") / pi_"between" $ <fstcalc>

Where $pi$ represents the average pairwise distance. This metric takes values between 0 and 1.

#pagebreak()

#glossary("glossary.yml")
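As an illustrative sketch (not from the paper; the function names, variable names, and toy numbers are mine), the two summary statistics above can be written as plain Python functions:

```python
def excess_rare_nonsyn(nonsyn_rare, syn_rare, nonsyn_common, syn_common):
    """Excess rare nonsynonymous variants: the observed rare
    nonsynonymous count minus the count predicted by applying the
    common (non-rare) NonS/S ratio to the rare synonymous count."""
    return nonsyn_rare - syn_rare * (nonsyn_common / syn_common)

def fst(pi_between, pi_within):
    """F_ST from average pairwise distances between and within
    populations; ranges from 0 (no differentiation) to 1."""
    return (pi_between - pi_within) / pi_between

# toy numbers, chosen only to exercise the formulas
assert excess_rare_nonsyn(100, 80, 50, 100) == 60.0
assert abs(fst(0.010, 0.008) - 0.2) < 1e-12
```

Both functions mirror the displayed equations term by term; a real analysis would compute the counts and pairwise distances from genotype data first.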
https://github.com/Q4kK/resume
https://raw.githubusercontent.com/Q4kK/resume/master/resume.typ
typst
#import "template.typ": *
#show link: underline
#show: resume.with(
  aside: [
    #text(size: 20pt, fill: black, weight: "bold", "<NAME>")
    = Contact
    - #link("mailto:<EMAIL>")
    - +1 612-391-3470
    - #link("github.com/q4kK")
  ]
)

= About Me
#bio()[I am a hard-working individual with a strong interest in computer systems, Linux / NixOS development, and game design. I enjoy learning about new things related to game engines, cybersecurity, and NixOS!
][
In my free time, I enjoy walking around my town, gardening, designing game ideas, cooking, and programming my personal API serving my home lab.
][
]

= Education
#entry()[
Winona State University
][
Computer Science
][
Currently studying for my Bachelor's degree in Computer Science at Winona State University.
][
2022-Present
]

= Experience
#entry()[
Student Systems Administrator
][
Winona State University
][
-- Strong grasp of Linux systems and the Unix command line. \
-- Worked extensively with virtual machines and server hardware. \
-- Experience creating automation, solutions, and deployments using Ansible.
][
2023-Present
]

#code_entry()[
Python
][
Fluent enough to get the job done. I have experience using Python to control Windows services, run PowerShell scripts, and make JSON API requests.
]

#code_entry()[
Java
][
I am proficient in Java, as it is my college's language of choice for the Computer Science program. Most of my fundamental programming skills were developed in Java, so I'm very comfortable floating around this language.
]

#code_entry()[
#link("https://github.com/Q4kK/resume")[This Resume]
][
This resume was built using Typst, GitHub Actions, and a healthy dose of the command line.
]
https://github.com/icyzeroice/typst-packages
https://raw.githubusercontent.com/icyzeroice/typst-packages/main/README.md
markdown
MIT License
# My collection for typst packages

> PowerShell is compatible with macOS, Windows and Linux. Just [install it](https://github.com/PowerShell/PowerShell?tab=readme-ov-file#get-powershell) and it is so convenient!

| package | description |
|:--:|:--:|
| [minimal-styling](./packages/minimal-styling/README.md) | a simple styling template for note-taking |
| [lipsum](./packages/lipsum/README.md) | create multiple language blind text like `#lorem` |
https://github.com/mem-courses/linear-algebra
https://raw.githubusercontent.com/mem-courses/linear-algebra/main/note/3.矩阵的运算.typ
typst
#import "../template.typ": * #show: project.with( title: "Linear Algebra #3", authors: ( (name: "<NAME>", email: "<EMAIL>", phone: "3230104585"), ), date: "November 13, 2023", ) #let AA = math.bold("A") #let BB = math.bold("B") #let CC = math.bold("C") #let DD = math.bold("D") #let EE = math.bold("E") #let XX = math.bold("X") #let II = math.bold("I") #let OO = math.bold("O") #let TT = math.upright("T") #let alpha = math.bold(math.alpha) #let beta = math.bold(math.beta) #let theta = math.bold(math.theta) = 矩阵的运算 == 数乘 矩阵数乘的四条定理: #box(height: 29pt)[ #columns(2)[ 1. $k(AA + BB) = k AA + k BB$; 2. $(k+t) AA = k AA + t AA$; 3. $(k t) AA = k(t AA)$; 4. $1 dot AA = AA$. ] ] == 加法 矩阵加法的四条定理: #box(height: 29pt)[ #columns(2)[ 1. 加法交换律:$AA + BB = BB + AA$; 2. 加法结合律:$(AA + BB) + CC = AA + (BB + CC)$; 3. 零矩阵:$OO + AA = AA + OO = AA$; 4. 负矩阵:$AA + (-AA) = OO$. ] ] == 乘法 只有第一个矩阵的 *列数* 等于第二个矩阵的 *行数* 时,两个矩阵才可以相乘.结果矩阵的第 $i$ 行第 $j$ 个元素等于第一个矩阵的第 $i$ 行与第二个矩阵的第 $j$ 列的点积. 1. 不满足交换律:一般地,$AA BB != BB AA$(如果 $AA BB = BB AA$,则称 $AA$ 与 $BB$ 是可交换的); 2. 不满足消去律:$AA BB = AA CC arrow.r.double.not BB = CC$;$AA BB = OO arrow.r.double.not AA = OO or BB = OO$(如果 $AA$ 可逆,那么消去律满足); 3. 满足结合律:$(AA BB) CC = AA (BB CC)$; 4. 满足分配律:$AA (BB + CC) = AA BB + AA CC$,$(BB + CC) AA = BB AA + CC AA$,$k(AA BB) = (k AA)BB = AA (k BB)$. #warn[ #def[注意] - $(AA + BB)(AA - BB) = AA^2 - AA BB + BB AA - BB^2 != AA^2 - BB^2$ - $(AA BB)^2 = AA (BB AA) BB != AA^2 BB^2$ ] #def[性质1]设 $AA,BB$ 都是 $n$ 阶方阵,则 $|AA BB| = |AA| dot |BB| = |BB AA|$. #prof[ #set math.mat(delim: "|") #def[证明]记 $|DD| = display(mat(AA, OO; -EE, BB)) = |AA| |BB|$. 另一方面,对 $DD$ 做初等列变换 $display(R_(n+j)+sum_(i=1)^n b_(i j)R_i\,sp j=1\,2\,dots.c\,n)$,可得 $|DD|=display(mat(AA,CC;-EE,OO))$,其中 $CC=AA BB$.那么有 $|DD| = (-1)^(n^2) |CC| |-EE| = (-1)^(n(n+1)) |CC| = |AA BB|$. 因此证得 $|AA BB| = |AA| |BB|$. ] #def[性质2]关于可交换矩阵的几条结论:\ #deft[性质2]1. 与任何矩阵可交换的矩阵一定是数量矩阵\ #deft[性质2]2. 
与对角线上元素互不相同的对角矩阵可交换的矩阵一定是对角矩阵\ == 转置 #def[性质1]当 $AA$ 为方阵时,$|AA| = |AA^TT|$; #def[性质2]$(AA BB)^TT = BB^TT AA^TT$. = 矩阵的逆 对于 $n$ 阶方阵 $AA$,若存在 $n$ 阶方阵 $BB$ 使得 $AA BB = BB AA = EE$,则称 $AA$ *可逆*,且 $BB$ 是 $AA$ 的 *逆矩阵*.用 $AA^(-1)$ 表示 $AA$ 的逆矩阵. == 伴随矩阵 $n$ 阶 *方阵* $AA$ 的各个元素的代数余子式 $A_(i j)$ 所构成的如下矩阵:$ AA^* = mat(A_(11),A_(21),dots.c,A_(n 1);A_(12),A_(22),dots.c,A_(n 2);dots.v,dots.v,,dots.v;A_(1 n),A_(2 n),dots.c,A_(n n)) $称为 $AA$ 的 *伴随矩阵*.(注意:原先第 $i$ 行各元素的代数余子式在伴随矩阵的第 $j$ 列) #def[引理]$AA dot AA^* = AA^* dot AA = |AA| dot EE$ #prof[ 可以通过 $sum_(i=1)^n a_(j i) A_(k i) = |AA| [j=k]$ 证明. ] #def[性质]1. $AA$ 可逆 $<=>$ $AA^*$ 可逆; #deft[性质]2. $|AA^*| = |AA|^(n-1) sp (n>=2)$(由 1. 得无论 $AA$ 是否可逆都成立); #deft[性质]3. $(AA^*)^* = |AA|^(n-2) AA sp (n>=3)$ #deft[性质]4. $(k AA)^* = k^(n-1) AA^* sp (n>=2)$ #deft[性质]5. $(AA BB)^* = BB^* AA^*$; #deft[性质]6. $(AA^*)^(-1) = (AA^(-1))^* = display(1/(abs(AA)) AA)$ #deft[性质]7. $(AA^*)^TT = (AA^TT)^*$ #deft[性质]8. $r(AA^*) = display(cases( n\,quad& r(AA) = n, 1\,quad& r(AA) = n-1, 0\,quad& r(AA) < n-1, ))$. == 性质 #def[性质1.1]$AA$ 可逆\ #deft[性质1.1]$<=>$ $|AA| != 0$\ #deft[性质1.1]$<=>$ $r(AA) = n$\ #deft[性质1.1]$<=>$ $AA XX=theta$ 只有零解\ #deft[性质1.1]$<=>$ $forall beta,sp AA XX=beta$\ #deft[性质1.1]$<=>$ $AA$ 可以表示为初等矩阵的乘积\ #deft[性质1.1]$<=>$ $AA^*$ 可逆\ #deft[性质1.1]$<=>$ $AA$ 与 $EE$ 相抵(等价)\ #deft[性质1.1]$<=>$ $AA$ 的行(列)向量组线性无关\ #deft[性质1.1]$<=>$ $AA$ 的所有特征值都不为零\ #def[性质1.2]$AA$ 不可逆\ #deft[性质1.2]$<=>$ $|AA| = 0$\ #deft[性质1.2]$<=>$ $r(AA) < n$\ #deft[性质1.2]$<=>$ $AA XX = theta$ 有非零解\ #deft[性质1.2]$<=>$ $AA$ 的行(列)向量组线性相关\ #deft[性质1.2]$<=>$ $0$ 是 $A$ 的特征值\ #def[性质2]如果 $AA$ 可逆,则 $AA^(-1)$ 必唯一. #def[性质3]当 $AA$ 可逆时,$AA^(-1) = display(AA^* / (|AA|))$. #prof[ *必要性*:设 $AA$ 可逆,两边取行列式得到$ |AA||BB| = |BB||AA| = |EE| = 1 => |AA|!=0 $ *充分性*:设 $|AA|!=0$,由引理 $AA dot AA^* = AA^* dot AA = |AA| dot EE$ 两边同除 $|AA|$ 得到$ AA dot (AA^*)/(|AA|) = (AA^*)/(|AA|) dot AA = EE $ 比较定义可知,$AA$ 可逆,那么 $AA^(-1) = display(frac(AA^*,|AA|))$. ] #def[性质4]设 $AA$ 可逆,那么: #box(height: 31pt)[ #columns(2)[ 1. 
$AA^(-1)$ 可逆,且 $(AA^(-1))^(-1) = AA$; 2. $k!=0$ 时,$k AA$ 可逆,且 $(k AA)^(-1) = display(1/k) AA^(-1)$; 3. $AA^TT$ 可逆,且 $(AA^TT)^(-1) = (AA^(-1))^TT$; 4. $|AA^(-1)| = |AA|^(-1)$. ] ] #def[性质5]若 $AA,BB$ 都是 $n$ 阶可逆矩阵,且 $AA BB$ 可逆,那么 $(AA BB)^(-1) = BB^(-1) AA^(-1)$. #def[性质6]若对角矩阵 $bold(Lambda) = op("diag")(lambda_1,lambda_2,dots.c,lambda_n)$ 可逆,那么 $bold(Lambda)^(-1) = op("diag")(lambda_1^(-1),lambda_2^(-1),dots.c,lambda_n^(-1))$. #def[性质7](Cramer 法则)对于线性方程组 $AA bold(X) = bold(beta)$,若 $AA$ 为 $n$ 阶方阵,且 $D = |AA| != 0$,有唯一解: #set math.mat(delim: "(") $ bold(X) = mat(x_1;x_2;dots.v;x_n) = AA^(-1) bold(beta) = frac(AA^*,|AA|) bold(beta) = frac(1,|AA|) mat( AA_11,AA_21,dots.c,AA_(n 1); AA_12,AA_22,dots.c,AA_(n 2); dots.v,dots.v,,dots.v; AA_(1 n),AA_(2 n),dots.c,AA_(n n); ) mat(b_1;b_2;dots.v;b_n) = frac(1,D) mat(D_1;D_2;dots.v;D_n) $ == 求矩阵的逆的方法 === 定义法 找到 $BB$ 使得 $AA BB = EE$,则 $AA^(-1) = BB$ === 伴随矩阵法 $display(AA^(-1) = 1/(abs(AA)) AA)$. === 初等变换法 对分块矩阵 $display(mat(AA,dots.v,EE))$ 应用初等行变换化为 $display(mat(EE,dots.v,AA^(-1)))$ = 矩阵的分块 == 加法 设 $AA=(a_(i j))_(m times n)=(AA_(i j))_(s times t), BB=(b_(i j))_(m times n)=(BB_(i j))_(s times t)$,如果有相同分块法,则$ AA + BB = (AA_(i j) + BB_(i j))_(s times t) $ == 数乘 设 $AA=(AA_(i j))_(s times t)$,那么 $k dot AA = (k dot AA_(i j))_(s times t)$. == 乘法 设 $AA=(a_(i j))_(m times s),BB=(b_(i j))_(s times n)$,只要求 $AA$ 的行的分法和 $BB$ 的列的分法 *完全一样*,那么可用类似于矩阵乘法的规则相乘. == 转置 等于将分块矩阵转置并将每个元素转置. == 求逆 当矩阵分块后,对于以下特殊情形:(其中 $AA in PP^(r times r),BB in PP^(s times s)$) $ bold(G)_1 = mat(AA,OO;CC,BB), quad bold(G)_2 = mat(AA,CC;OO,BB), quad bold(G)_3 = mat(OO,AA;BB,CC), quad bold(G)_4 = mat(CC,AA;BB,OO). 
$
Since $|bold(G)_1| = |bold(G)_2| = |AA||BB|,sp |bold(G)_3| = |bold(G)_4| = (-1)^(r s)|AA||BB|$, the matrix $bold(G)_1$ (likewise $bold(G)_2,bold(G)_3,bold(G)_4$) is invertible $<=>$ both $AA$ and $BB$ are invertible. One can show that:
$ (bold(G)_1)^(-1) = mat(AA^(-1),OO;-BB^(-1)CC AA^(-1),BB^(-1)),&quad (bold(G)_2)^(-1) = mat(AA^(-1),-AA^(-1) CC BB^(-1);OO,BB^(-1)),\
(bold(G)_3)^(-1) = mat(-BB^(-1) CC AA^(-1),BB^(-1);AA^(-1),OO),&quad (bold(G)_4)^(-1) = mat(OO,BB^(-1);AA^(-1),-AA^(-1)CC BB^(-1)). $
#prof[
Let $bold(G)=display(mat(AA,OO;CC,BB))$ be invertible, and write $bold(G)^(-1) = display(mat(XX_11,XX_12;XX_21,XX_22))$. Then
$ &mat(AA,OO;CC,BB) mat(XX_11,XX_12;XX_21,XX_22) = mat(EE_r,OO;OO,EE_s)\
=>& mat(AA XX_11,AA XX_12;CC XX_11 + BB XX_21,CC XX_12 + BB XX_22) = mat(EE_r,OO;OO,EE_s)\
$
Since $|bold(G)| = |AA||BB| != 0$, both $AA$ and $BB$ are invertible; hence $XX_11 = AA^(-1)$ and $XX_12 = OO$, and then $XX_22 = BB^(-1)$ and $XX_21 = -BB^(-1) CC AA^(-1)$, which yields $bold(G)^(-1)$.
]

== Block-Diagonal Matrices
$AA = op("diag")(AA_1,AA_2,dots.c,AA_n)$, where $AA$ is square and each block $AA_(i) sp (i=1,2,dots.c,n)$ is also square.

Scalar multiplication, addition, multiplication, powers, determinants, and inverses of $AA$ can all be handled by analogy with diagonal matrices.

= Elementary Matrices
Let $AA in PP^(m times n)$. Then
- applying an elementary row operation to $AA$ amounts to left-multiplying $AA$ by the corresponding $m$-th order elementary matrix;
- applying an elementary column operation to $AA$ amounts to right-multiplying $AA$ by the corresponding $n$-th order elementary matrix.
Taking third-order elementary matrices as examples:
1. *Scaling*: $E_2 (k) = display(mat(1,0,0;0,k,0;0,0,1))$ multiplies row $2$ (or column $2$) of a matrix by $k$.
2. *Swap*: $E_12 = display(mat(0,1,0;1,0,0;0,0,1))$ swaps rows $1$ and $2$ (or columns $1$ and $2$).
3. *Row addition*: $E_13 (k)=display(mat(1,0,0;0,1,0;k,0,1))$ adds $k$ times row $1$ to row $3$ (or $k$ times column $3$ to column $1$).

#def[Property]Elementary matrices are invertible, and the inverse of an elementary matrix is again an elementary matrix.

#def[Conclusion 1]$r(AA_(m times n))=r <=>$ there exist an invertible $m$-th order matrix $bold(P)$ and an invertible $n$-th order matrix $bold(Q)$ such that $display(bold(P A Q) = mat(EE_r,OO;OO,OO))$.

#def[Conclusion 2]If $AA in PP^(n times n)$ is invertible, then $AA$ can be written as a product of elementary matrices.
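The adjugate lemma, the formula $AA^(-1) = AA^* slash abs(AA)$, and the block-triangular inverse above can be checked numerically. The following Python/NumPy sketch is an illustration added to these notes, not part of the original text; the helper `adjugate` is a straightforward cofactor implementation, not an optimized one.

```python
import numpy as np

def adjugate(A):
    """Adjugate via cofactors: adj(A)[j, i] = (-1)**(i+j) * det(minor_ij)."""
    n = A.shape[0]
    adj = np.empty((n, n), dtype=float)
    for i in range(n):
        for j in range(n):
            minor = np.delete(np.delete(A, i, axis=0), j, axis=1)
            # Cofactor of entry (i, j) goes to position (j, i) of the adjugate.
            adj[j, i] = (-1) ** (i + j) * np.linalg.det(minor)
    return adj

A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])

# Lemma: A · adj(A) = adj(A) · A = det(A) · E
assert np.allclose(A @ adjugate(A), np.linalg.det(A) * np.eye(3))
assert np.allclose(adjugate(A) @ A, np.linalg.det(A) * np.eye(3))

# Property 3: when det(A) != 0, inv(A) = adj(A) / det(A)
assert np.allclose(np.linalg.inv(A), adjugate(A) / np.linalg.det(A))

# Block lower-triangular inverse, G1 = [[A, O], [C, B]]:
# inv(G1) = [[inv(A), O], [-inv(B) C inv(A), inv(B)]]
A1 = np.array([[2.0, 0.0], [1.0, 1.0]])
B1 = np.array([[3.0]])
C1 = np.array([[1.0, 2.0]])
G1 = np.block([[A1, np.zeros((2, 1))], [C1, B1]])
invA, invB = np.linalg.inv(A1), np.linalg.inv(B1)
G1_inv = np.block([[invA, np.zeros((2, 1))], [-invB @ C1 @ invA, invB]])
assert np.allclose(np.linalg.inv(G1), G1_inv)
```

Running the script raises no assertion error, confirming the identities for these sample matrices.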
https://github.com/jgm/typst-hs
https://raw.githubusercontent.com/jgm/typst-hs/main/test/typ/compiler/dict-03.typ
typst
Other
// Test default value. #test((a: 1, b: 2).at("b", default: 3), 2) #test((a: 1, b: 2).at("c", default: 3), 3)
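For readers more familiar with other languages: the `at(key, default: …)` behaviour exercised by this test mirrors Python's `dict.get`, which likewise ignores the default when the key is present. A small analogue (an illustration, not part of the test suite):

```python
d = {"a": 1, "b": 2}

# Like (a: 1, b: 2).at("b", default: 3) — key present, default ignored.
assert d.get("b", 3) == 2

# Like (a: 1, b: 2).at("c", default: 3) — key absent, default returned.
assert d.get("c", 3) == 3
```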
https://github.com/physicshinzui/typst-templates
https://raw.githubusercontent.com/physicshinzui/typst-templates/main/book/book-template.typ
typst
// ======== Preamble =========
#import "@preview/ctheorems:1.1.2": *
#import "@preview/physica:0.9.2": *

// Define my template as a function `conf`
#let conf(
  title: "Title",
  author: "Authors",
  date: datetime.today().display(),
  lang: "en",
  font: "New Computer Modern",
  toc: false,
  bibliography-file: none,
  doc,
) = {
// For styling
// A language-specific font could be selected here based on `lang`;
// for now the `font` argument is used as-is for every language.
set text(lang: lang, font: font, size: 11pt)
set heading(numbering: "1.")
set page(numbering: "1")
// #show link: underline
// text(lang:"ja", font:"YuMincho", size:11pt)

align(center, text(20pt, font: "Gothic")[
  *#title*
])
align(center)[
  #author \
  #date
]

// For equation behaviour
set math.equation(numbering: "(1.)", supplement: [Eq])

// For theorem environments: https://typst.app/universe/package/ctheorems
show: thmrules.with(qed-symbol: $square$)

// For matrix and vector expressions
set math.vec(delim: "[")
set math.mat(delim: "[")

// Figure
// #set figure
// #show figure.caption: emph
show figure.where(
  kind: table
): set figure.caption(position: top)

// === Code ===
// Reference: https://typst.app/docs/reference/text/raw/
// Display inline code in a small box
// that retains the correct baseline.
show raw.where(block: false): box.with(
  fill: luma(240),
  inset: (x: 3pt, y: 0pt),
  outset: (y: 3pt),
  radius: 2pt,
)

// Display block code in a larger block
// with more padding.
show raw.where(block: true): block.with(
  fill: luma(240),
  inset: 10pt,
  radius: 4pt,
)

if toc == true {
  outline()
}

doc

// Put this after `doc`
if bibliography-file != none {
  bibliography(title: "References", style: "ieee", bibliography-file)
}
}

// // For theorem environment
// #let theorem = thmbox("theorem", "Theorem", fill: rgb("#eeffee"))
// #let proposition = thmbox("proposition", "Proposition", fill: rgb("#eeffee"))
// #let lemma = thmbox("lemma", "Lemma", fill: rgb("#eeffee"))
// #let corollary = thmplain("corollary", "Corollary", titlefmt: strong)
// #let definition = thmbox("definition", "Definition", fill: luma(245)).with(numbering: "1.")
// #let example = thmplain("example", "Example").with(numbering: "1.")
// #let proof = thmproof("proof", "Proof")
// #let claim = thmbox("claim", "Claim").with(numbering: none)
// #let remark = thmbox("remark", "Remark").with(numbering: none)

// // For physics environments
// #let requirement = thmbox("requirement", "Requirement", fill: rgb("#eeffee"))
// #let result = thmbox("result", "Result", fill: rgb("#eeffee"))
// #let derivation = thmproof("derivation", "Derivation")
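A minimal usage sketch for this template follows. The file names (`book-template.typ`, `main.typ`, `refs.bib`) and the sample title/author are assumptions for illustration only; they are not prescribed by the template itself.

```typst
// main.typ — assumes the template above is saved as `book-template.typ`
// in the same directory.
#import "book-template.typ": conf

#show: conf.with(
  title: "An Example Book",        // hypothetical title
  author: "Jane Doe",              // hypothetical author
  toc: true,                       // emit the outline before the body
  bibliography-file: "refs.bib",   // hypothetical bibliography file
)

= Introduction
Body text goes here; equations, figures, and raw blocks pick up the
styling rules set inside `conf`.
```

Because `conf` takes the document body as its trailing `doc` argument, `#show: conf.with(...)` is the idiomatic way to apply it to everything that follows.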