Columns: repo · file · language · license · content
https://github.com/TypstApp-team/typst
https://raw.githubusercontent.com/TypstApp-team/typst/master/tests/typ/compiler/ops-invalid.typ
typst
Apache License 2.0
// Test invalid operations.
// Ref: false

---
// Error: 4 expected expression
#(-)

---
// Error: 10 expected expression
#test({1+}, 1)

---
// Error: 10 expected expression
#test({2*}, 2)

---
// Error: 3-13 cannot apply unary '+' to content
#(+([] + []))

---
// Error: 3-6 cannot apply '-' to string
#(-"")

---
// Error: 3-9 cannot apply 'not' to array
#(not ())

---
// Error: 3-19 cannot compare relative length and ratio
#(30% + 1pt <= 40%)

---
// Error: 3-14 cannot compare 1em with 10pt
#(1em <= 10pt)

---
// Error: 3-22 cannot compare 2.2 with NaN
#(2.2 <= float("nan"))

---
// Error: 3-12 cannot divide by zero
#(1.2 / 0.0)

---
// Error: 3-8 cannot divide by zero
#(1 / 0)

---
// Error: 3-15 cannot divide by zero
#(15deg / 0deg)

---
// Special messages for +, -, * and /.
// Error: 3-10 cannot add integer and string
#(1 + "2", 40% - 1)

---
// Error: 15-23 cannot add integer and string
#{ let x = 1; x += "2" }

---
// Error: 4-13 cannot divide ratio by length
#( 10% / 5pt )

---
// Error: 3-12 cannot divide these two lengths
#(1em / 5pt)

---
// Error: 3-19 cannot divide relative length by ratio
#((10% + 1pt) / 5%)

---
// Error: 3-28 cannot divide these two relative lengths
#((10% + 1pt) / (20% + 1pt))

---
// Error: 13-20 cannot subtract integer from ratio
#((1234567, 40% - 1))

---
// Error: 3-11 cannot multiply integer with boolean
#(2 * true)

---
// Error: 3-11 cannot divide integer by length
#(3 / 12pt)

---
// Error: 3-10 number must be at least zero
#(-1 * "")

---
// Error: 4-5 unknown variable: x
#((x) = "")

---
// Error: 4-5 unknown variable: x
#((x,) = (1,))

---
// Error: 3-8 cannot mutate a temporary value
#(1 + 2 += 3)

---
// Error: 2:3-2:8 cannot apply 'not' to string
#let x = "Hey"
#(not x = "a")

---
// Error: 7-8 unknown variable: x
#(1 + x += 3)

---
// Error: 3-4 unknown variable: z
#(z = 1)

---
// Error: 3-7 cannot mutate a constant: rect
#(rect = "hi")

---
// Works if we define rect beforehand
// (since then it doesn't resolve to the standard library version anymore).
#let rect = ""
#(rect = "hi")
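For contrast, a short sketch (not part of the test suite) of operations that do type-check under the same rules the errors above encode:

```typst
// Valid counterparts to some of the invalid operations above:
#(-1)              // unary '-' on an integer is fine
#(40% - 10%)       // a ratio can be subtracted from a ratio
#(30% + 1pt)       // a relative length is a valid value...
#((30% + 1pt) / 2) // ...and can be divided by a plain number
#(10pt / 5pt)      // two absolute lengths divide to a float
```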
https://github.com/jonaspleyer/peace-of-posters
https://raw.githubusercontent.com/jonaspleyer/peace-of-posters/main/docs/content/documentation/tips_and_tricks.md
markdown
MIT License
---
title: "Tips and Tricks"
weight: 40
---

# Tips and Tricks

<!-- TODO add screenshots of before and after -->

## Modify Spacing between Boxes

Unfortunately, for technical reasons, setting the spacing between boxes generated by `peace-of-posters` has to be done manually with 3 distinct methods. Suppose we want to set the new spacing to `0.5em` from the [default value](https://typst.app/docs/reference/layout/block/#parameters-spacing) of `1.2em`. Let's capture this in a variable.

```typst
#let box-spacing = 0.5em
```

First, we need to specify the spacing between [blocks](https://typst.app/docs/reference/layout/block/).

```typst
#set block(spacing: box-spacing)
```

Next, we also want to change the spacing between columns. Thus we modify the `gutter` argument of the [`columns` function](https://typst.app/docs/reference/layout/columns/).

```typst
#set columns(gutter: box-spacing)
```

Last but not least, we need to tell `peace-of-posters` to use the new value when calculating how large boxes need to be in order to stretch them to the nearest endpoint.

```typst
#update-layout(
  spacing: box-spacing,
)
```

<!-- TODO add screenshots of before and after -->

## Deeply Modify Heading and Body Boxes

To deeply modify heading and body boxes, we can change the function that draws the box itself. By default, these boxes are simply rectangles drawn with the specified options. We define a new function that takes the heading as a mandatory argument (or the body, when changing the body function) and a range of arguments not specified more closely.

```typst
#let my-custom-heading(heading, ..args) = {
    ...
}
```

This function will receive all the information that the default `rect` function would be given. We can choose to ignore the optional `..args` or reuse them if desired. Let's begin by simply making a smaller, centered box.

```typst
#let my-custom-heading(heading, ..args) = {
    align(center, rect(heading, ..args, width: 100% - 2em))
}
```

Afterwards, we need to supply the new behaviour to `peace-of-posters` by setting the corresponding option in the [layout](/documentation/layout).

```typst
#update-layout(
    heading-box-function: my-custom-heading,
)
```
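All three spacing adjustments described on this page can live together in one preamble. A minimal sketch, assuming `peace-of-posters` is imported as a preview package (the version number in the import line is an assumption):

```typst
// Sketch: the three spacing adjustments collected in one preamble.
// The package version below is an assumption; adjust to the one you use.
#import "@preview/peace-of-posters:0.5.0": *

#let box-spacing = 0.5em
#set block(spacing: box-spacing)      // spacing between blocks
#set columns(gutter: box-spacing)     // spacing between columns
#update-layout(spacing: box-spacing)  // spacing used when stretching boxes
```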
https://github.com/Nrosa01/TFG-2023-2024-UCM
https://raw.githubusercontent.com/Nrosa01/TFG-2023-2024-UCM/main/Memoria%20Typst/capitulos/5.1.Blockly.typ
typst
Blockly is a library from Google, released in 2012, that allows developers to automatically generate code in different programming languages by creating custom blocks. It was initially designed to generate JavaScript code, but due to growing demand it has been adapted to natively support code generation in a wide variety of programming languages: JavaScript, Python, PHP, Lua and Dart.

The library is increasingly well known and used in large projects. It is currently employed in, among others, 'App Inventor' @app_inventor from MIT @MIT, for creating Android applications; 'Blockly Games' @blockly_games, a set of educational games for teaching programming concepts that also belongs to Google; 'Scratch' @scratch, a website that lets young people create digital stories, games or animations; and Code.org @code.org, for teaching basic programming concepts.

Blockly operates on the client side and offers an editor within the application, allowing users to snap together blocks that represent programming instructions. These blocks are translated directly into code according to the developer's specifications. For users, the process is simply dragging and dropping blocks, while Blockly takes care of generating the corresponding code. The application can then execute actions using the generated code.

Below, we describe both the interface shown to the application's user and the way blocks are created and code is generated from them.

A Blockly project has the following structure @blockly-visual:

#figure(
  image("../images/Blockly.jpg", width: 50%),
  caption: [
    basic structure of Blockly
  ],
)<blockly_structure>

// distinguish between user and developer, since both are technically programmers in this context

@blockly_structure shows the structure of a Blockly project. It is divided into two basic parts: the toolbox and the workspace. The toolbox holds all the blocks that the developer has created or included by default. The user can drag these blocks into the workspace to build logic that has been defined by the developer. By default, every block can be instantiated as many times as necessary.

For the user to be able to use a block correctly, three steps are required from the developer's point of view @blockly-block-creation:
- Define its visual appearance. This can be done either with JavaScript code or with JSON. Using JSON is recommended, although particular features may make it necessary to define blocks with JavaScript.
- Specify the code that will be generated once the block has been dragged into the workspace. There must be one code-generation definition per block and per language to be supported.
- Include the block in the toolbox so that it can be used. This can be done with XML or JSON, although Google recommends JSON.

// if this stays in the "state of the art" chapter, this part would not make much sense

Defining the appearance and including the block in the toolbox are fairly straightforward tasks. Code generation, however, requires an interpreter that produces code from the text returned by the generator function. To generate code for a language that is not supported by default, the developer will need to create this interpreter.
https://github.com/exusiaiwei/My-brilliant-CV
https://raw.githubusercontent.com/exusiaiwei/My-brilliant-CV/main/modules/skills.typ
typst
Apache License 2.0
#import "../brilliant-CV/template.typ": *

#cvSection("Skills")

#cvSkill(
  type: [Languages],
  info: [Chinese - Mother tongue #hBar() English - IELTS: 6.5]
)

#cvSkill(
  type: [Tech Stack],
  info: [Python #hBar() R #hBar() Git #hBar() LaTeX #hBar() VPS management]
)

#cvSkill(
  type: [Research Skills],
  info: [Eye-Tracking (Eyelink) #hBar() Statistical Analysis Methods (Clustering, Regression) #hBar() LLMs Utilization (Local Deployment)]
)
https://github.com/Myriad-Dreamin/typst.ts
https://raw.githubusercontent.com/Myriad-Dreamin/typst.ts/main/fuzzers/corpora/layout/par-justify_00.typ
typst
Apache License 2.0
#import "/contrib/templates/std-tests/preset.typ": *
#show: test-page

#set page(width: 180pt)
#set block(spacing: 5pt)
#set par(justify: true, first-line-indent: 14pt, leading: 5pt)

This text is justified, meaning that spaces are stretched so that the text forms a "block" with flush edges at both sides. First line indents and hyphenation play nicely with justified text.
https://github.com/Dherse/masterproef
https://raw.githubusercontent.com/Dherse/masterproef/main/masterproef/parts/0_introduction.typ
typst
#import "../ugent-template.typ": *

= Introduction <intro>

Photonic integrated circuits have become a significant industry in the past few years. However, their design is still difficult and slow, requiring a lot of expertise and time. This thesis provides a novel way of designing photonic circuits using code and a new way of simulating them using a faster simulation paradigm. This should allow for the rapid prototyping and iteration of photonic circuits, as well as their co-simulation with other system components.

This thesis will focus on its applicability to the field of programmable photonics, especially recirculating programmable photonics. However, the concepts and ideas presented in this work apply to the broader field of photonic circuit design.

To enable rapid development and prototyping, several teams around the world, including at the @prg, have been researching ways of creating so-called photonic processors: devices that are generic enough to be reconfigured to meet a variety of use cases. These devices are generally based on the concept of recirculating meshes, which allow the designer to create all kinds of different circuits by simply changing the configuration of the mesh @bogaerts_programmable_2020. As such, these devices are often referred to as programmable @pic[s]. These devices were used as inspirations and will be used as demonstration platforms for the work and mockups presented in this thesis.

This thesis merges aspects from several disciplines, including photonics and computer science. As such, it will first give some background information about photonics that is required to understand this thesis, followed by an in-depth analysis of the computer science concepts and paradigms discussed in this thesis. This analysis will allow for a comparison of existing languages, paradigms, and techniques and will lead to the development of a novel solution for photonic circuit design.
Then, a discussion will be presented on the translation of the user's intent and the requirements that must be met to translate this intent. Finally, a discussion of the language created as part of this thesis, #gloss("phos", short: true), will be presented, as well as a set of examples showing its effectiveness and simulation capabilities.

== Motivation <motivation>

There is a need to develop appropriate abstractions and tools for the design of photonic circuits, especially as it pertains to the programming of so-called photonic @fpga[s] @bogaerts_programmable_2020-1 or photonic processors. As with all other types of circuit design, such as digital electronic circuits, analog electronic circuits, and RF circuits, appropriate abstractions can allow the designer to focus on the functionality of their design rather than the implementation @geiger_vlsi_1990.

One may draw parallels between the abstraction levels used when designing circuits and the abstractions used when designing software, most notably the distinction made in the software-engineering world between imperative and declarative programming. The former is concerned with the "how" of the program, while the latter is focused on the "what" of the program @noauthor_imperative_2020. At a sufficiently high level of abstraction, the designer is no longer focusing on the implementation details (imperative) of their design but rather on the functionality and behavioural expectations of their design (declarative) @noauthor_imperative_2020. In turn, this allows the designer to focus on what truly matters to them: the functionality of their design.

Much of the design work on photonic circuits is currently done at a low level of abstraction, close to the component level @bogaerts_silicon_2018. This lack of abstraction leads to several issues for broader access to the field of photonic circuit design.
Firstly, it requires expertise and understanding of the photonic components, their physics, and the sometimes complex relationship between all of their design parameters. Secondly, designing and simulating a photonic circuit requires a lot of time and effort. Physical simulation of photonic circuits is slow @bogaerts_silicon_2018 @alerstam_parallel_2008, which has led to efforts to simulate them using @spice @ye_spice-compatible_2022. Finally, the design and implementation of photonic circuits are generally expensive, requiring a tape-out of the design and working with a foundry for fabrication. Therefore, the low-level nature of current methods increases the cost and the time to market for the product @bogaerts_programmable_2020.

Due to this, there is considerable interest in constructing new abstractions, simulation methods, and design tools for photonic circuits, especially for rapid prototyping and iteration. This master's thesis aims to find new ways in which users can easily design their photonic circuits and program them onto those programmable @pic[s] @bogaerts_programmable_2020.

Additionally, photonic circuits are often not the only component in a system. They are often used in conjunction with other technologies: analog electronics for driving the photonic components, digital electronics for control and feedback loops, and @rf to benefit from photonics' high bandwidth and high-speed capabilities @marpaung_integrated_2019. Therefore, it is of interest to the user to co-simulate @bogaerts_silicon_2018 @sorace-agaskar_electro-optical_2015 their photonic circuits with the other components of their systems. This problem is partly addressed using @spice simulation @ye_spice-compatible_2022. However, @spice tools are often lacking, especially regarding digital co-simulation, making the process difficult @osti_1488489 and forcing reliance on ill-suited alternatives such as @verilog-a.
This work will offer a comprehensive solution to these problems by introducing a new way of designing photonic circuits using code, a novel way of simulating them, and a complete workflow for designing and programming them. Finally, an extension of the simulation paradigm will be introduced, allowing for the co-simulation of the designs with digital electronics, which could, in time, be extended to analog electronics.

=== Research questions <research-questions>

The main goal of this work is to design a system to program photonic circuits. It entails:

+ How can the user express their intent?
  - Which programming languages and paradigms are best suited?
  - What workflows are best suited?
  - How can the user test and verify their design?
+ How to translate that intent into a @pic configuration?
  - What does a compiler need to do?
  - How to support future hardware platforms?
  - What are the unit operations that the hardware platform must support?
https://github.com/typst/packages
https://raw.githubusercontent.com/typst/packages/main/packages/preview/unichar/0.1.0/ucd/block-0700.typ
typst
Apache License 2.0
#let data = ( ("SYRIAC END OF PARAGRAPH", "Po", 0), ("SYRIAC SUPRALINEAR FULL STOP", "Po", 0), ("SYRIAC SUBLINEAR FULL STOP", "Po", 0), ("SYRIAC SUPRALINEAR COLON", "Po", 0), ("SYRIAC SUBLINEAR COLON", "Po", 0), ("SYRIAC HORIZONTAL COLON", "Po", 0), ("SYRIAC COLON SKEWED LEFT", "Po", 0), ("SYRIAC COLON SKEWED RIGHT", "Po", 0), ("SYRIAC SUPRALINEAR COLON SKEWED LEFT", "Po", 0), ("SYRIAC SUBLINEAR COLON SKEWED RIGHT", "Po", 0), ("SYRIAC CONTRACTION", "Po", 0), ("SYRIAC HARKLEAN OBELUS", "Po", 0), ("SYRIAC HARKLEAN METOBELUS", "Po", 0), ("SYRIAC HARKLEAN ASTERISCUS", "Po", 0), (), ("SYRIAC ABBREVIATION MARK", "Cf", 0), ("SYRIAC LETTER ALAPH", "Lo", 0), ("SYRIAC LETTER SUPERSCRIPT ALAPH", "Mn", 36), ("SYRIAC LETTER BETH", "Lo", 0), ("SYRIAC LETTER GAMAL", "Lo", 0), ("SYRIAC LETTER GAMAL GARSHUNI", "Lo", 0), ("SYRIAC LETTER DALATH", "Lo", 0), ("SYRIAC LETTER DOTLESS DALATH RISH", "Lo", 0), ("SYRIAC LETTER HE", "Lo", 0), ("SYRIAC LETTER WAW", "Lo", 0), ("SYRIAC LETTER ZAIN", "Lo", 0), ("SYRIAC LETTER HETH", "Lo", 0), ("SYRIAC LETTER TETH", "Lo", 0), ("SYRIAC LETTER TETH GARSHUNI", "Lo", 0), ("SYRIAC LETTER YUDH", "Lo", 0), ("SYRIAC LETTER YUDH HE", "Lo", 0), ("SYRIAC LETTER KAPH", "Lo", 0), ("SYRIAC LETTER LAMADH", "Lo", 0), ("SYRIAC LETTER MIM", "Lo", 0), ("SYRIAC LETTER NUN", "Lo", 0), ("SYRIAC LETTER SEMKATH", "Lo", 0), ("SYRIAC LETTER FINAL SEMKATH", "Lo", 0), ("SYRIAC LETTER E", "Lo", 0), ("SYRIAC LETTER PE", "Lo", 0), ("SYRIAC LETTER REVERSED PE", "Lo", 0), ("SYRIAC LETTER SADHE", "Lo", 0), ("SYRIAC LETTER QAPH", "Lo", 0), ("SYRIAC LETTER RISH", "Lo", 0), ("SYRIAC LETTER SHIN", "Lo", 0), ("SYRIAC LETTER TAW", "Lo", 0), ("SYRIAC LETTER PERSIAN BHETH", "Lo", 0), ("SYRIAC LETTER PERSIAN GHAMAL", "Lo", 0), ("SYRIAC LETTER PERSIAN DHALATH", "Lo", 0), ("SYRIAC PTHAHA ABOVE", "Mn", 230), ("SYRIAC PTHAHA BELOW", "Mn", 220), ("SYRIAC PTHAHA DOTTED", "Mn", 230), ("SYRIAC ZQAPHA ABOVE", "Mn", 230), ("SYRIAC ZQAPHA BELOW", "Mn", 220), ("SYRIAC ZQAPHA DOTTED", "Mn", 230), 
("SYRIAC RBASA ABOVE", "Mn", 230), ("SYRIAC RBASA BELOW", "Mn", 220), ("SYRIAC DOTTED ZLAMA HORIZONTAL", "Mn", 220), ("SYRIAC DOTTED ZLAMA ANGULAR", "Mn", 220), ("SYRIAC HBASA ABOVE", "Mn", 230), ("SYRIAC HBASA BELOW", "Mn", 220), ("SYRIAC HBASA-ESASA DOTTED", "Mn", 220), ("SYRIAC ESASA ABOVE", "Mn", 230), ("SYRIAC ESASA BELOW", "Mn", 220), ("SYRIAC RWAHA", "Mn", 230), ("SYRIAC FEMININE DOT", "Mn", 230), ("SYRIAC QUSHSHAYA", "Mn", 230), ("SYRIAC RUKKAKHA", "Mn", 220), ("SYRIAC TWO VERTICAL DOTS ABOVE", "Mn", 230), ("SYRIAC TWO VERTICAL DOTS BELOW", "Mn", 220), ("SYRIAC THREE DOTS ABOVE", "Mn", 230), ("SYRIAC THREE DOTS BELOW", "Mn", 220), ("SYRIAC OBLIQUE LINE ABOVE", "Mn", 230), ("SYRIAC OBLIQUE LINE BELOW", "Mn", 220), ("SYRIAC MUSIC", "Mn", 230), ("SYRIAC BARREKH", "Mn", 230), (), (), ("SYRIAC LETTER SOGDIAN ZHAIN", "Lo", 0), ("SYRIAC LETTER SOGDIAN KHAPH", "Lo", 0), ("SYRIAC LETTER SOGDIAN FE", "Lo", 0), )
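A hypothetical way to consume such a table from another typst file. The field order follows the `#let data` definition above (name, general category, combining class); the helper name `entry` is made up for illustration:

```typst
// Hypothetical accessor: empty tuples () mark unassigned code points.
#let entry(data, i) = if data.at(i) == () { none } else {
  (name: data.at(i).at(0), category: data.at(i).at(1), ccc: data.at(i).at(2))
}
// e.g. entry(data, 0) yields the record for the first code point of the block.
```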
https://github.com/fontlos/FengruCup-template
https://raw.githubusercontent.com/fontlos/FengruCup-template/main/Readme.md
markdown
MIT License
# BUAA FRB Template

This Typst template is made for the main-track paper of the 34th Fengru Cup Competition of Beijing University of Aeronautics and Astronautics (BUAA). It is more powerful than Word, and lighter and easier to use than the LaTeX template.

~~Made this template just because the fxxking Word fxxk me up! :(~~

## Preview

![Preview](./preview.png)

# Typst Overview

Typst is a new markup-based typesetting system for the sciences. It is designed to be an alternative both to advanced tools like LaTeX and simpler tools like Word and Google Docs.

# Usage

All styles have been modelled after the LaTeX template of the main-track paper of the 34th Fengru Cup competition at Beijing University of Aeronautics and Astronautics (BUAA), so users only need to focus on editing the text.

## Environment

### Compiler and necessary files

- Download the compiler for your system from [**Github Release**](https://github.com/typst/typst/releases/) and add it to the PATH environment variable
- Make sure your computer has the following fonts: SimSun (宋体), STZhongsong (华文中宋体), SimHei (黑体), STXinwei (华文新魏), Times New Roman (if you're not sure, you can check by entering the following command: `typst fonts`)
- `git clone` or download this repository

### Editor

We recommend *VSCode* with the **Typst LSP** plugin and the **Typst Preview** plugin.

## Use this template

For details of the basic syntax, refer to the [**Typst Docs**](https://typst.app/docs/reference). You can also check out this [Example](./main.typ).

Here is just a brief description of how to use the template:

```typ
#import "template.typ": frb

#show: frb.with(
  title: "Title",
  subtitle: "Subtitle",
  header: "Header",
  author: none,
  abstract-CN: [中文摘要],
  keyword-CN: [关键词1, 关键词2],
  abstract-EN: [English Abstract],
  keyword-EN: [Keyword1, Keyword2],
  bibliography-file: "refs.bib",
  bibliography-title: "Reference",
  bibliography-style: "gb-7714-2015-numeric",
  auto-num-title: true,
)

= First title
== Second title
=== Third title

Your article
```

- `title`: title of the paper, displayed on the cover page; mandatory
- `subtitle`: subtitle of the paper; can be left blank, no need to write out dashes manually
- `header`: content of the header of the paper
- `author`: the author field is provided even though it should be left blank or `none` according to the Fengru Cup rules
- `abstract-CN`: Chinese abstract; just put it in brackets, the content is written like the main text; the same goes for `abstract-EN`
- `keyword-CN`: Chinese keywords; fill in the brackets directly, separated by commas; the same goes for `keyword-EN`
- `bibliography-file`: the bibliography file, a `.bib` file; see below and `refs.bib` in the root of the repository
- `bibliography-title`: bibliography page title
- `bibliography-style`: bibliography style
- `auto-num-title`: automatically number titles according to the Fengru Cup specification; enabled by default, but you can disable it to number the titles manually if needed

### Styling instructions

For level 1, 2 and 3 headings, the template provides the corresponding styles. The template automatically indents at the beginning of a paragraph, so you only need to enter a blank line to start a new paragraph. However, Typst only indents the first line of consecutive paragraphs (except for the first paragraph), so elements that break the continuity of a paragraph will cause the indentation to fail.

This template restores automatic indentation after the following elements:

- Heading
- List
- Numbered list
- Table
- Math block
- Code block
- `img` function

For other elements, indentation can be added manually via `#indent` or `#h(2em)`.

For math blocks and code blocks, the fonts are built in; if you need to replace them, please modify the template code directly.

Currently Typst's support for Chinese font styles is not perfect, so the template provides some helper functions: the `bold` function for bold text, the `italic` function for italicised text, as well as versions specifically for Chinese characters. If you want to achieve Chinese bold text through the native syntax, you can enable the global rule with `#show: bold-rule`; italic works the same way, but there may be some conflicts and hard-to-trace errors.

PS: the `bold` function may conflict with a function of the same name in math blocks, which can be resolved by aliasing the import.

### Built-in Functions

Only the most basic usage is shown; see the template source code for details.

- `img`: Add an image with a caption
  - usage: `#img(path: "path/to/image")[caption]`
  - optional args:
    - `width`: given as a percentage
    - `num`: whether to add a serial number automatically; off by default
- The following functions are for non-English characters
- `bold`: Bold words
  - usage: `#bold[Text]`
  - optional args:
    - `reg`: regular expression
    - `base-weight`: font weight
- `bold-cn`: Bold Chinese words, same as `bold`
- `bold-rule`: Enables you to bold text with `**`
  - usage: `#show: bold-rule`
- `bold-cn-rule`
- `italic`: Italicise the text
  - usage: `#italic[Text]`
  - optional args:
    - `reg`: regular expression
    - `ang`: angle of inclination, in radians
    - `spacing`: blank space on both sides
- `italic-rule`: Enables you to italicise text with `__`
  - usage: `#show: italic-rule`

### Bibliography File

A `.bib` file usually contains several entries as follows:

```bib
@article{refa,
  author = {Author},
  title = {Title},
  journal = {Journal},
  volume = {1},
  pages = {1--2},
  year = {2024},
}

@book{refb,
  author = {Author},
  title = {Title},
  edition = {Edition},
  address = {Address},
  publisher = {Publisher},
  year = {2024},
}

@phdthesis{refp,
  author = {Author},
  title = {Title},
  school = {School},
  year = {2024},
}
```

These are an article, a book, and a doctoral thesis. `refa` etc. are the names of the citations, which can be referenced at a specific place in the text with `@refa`. The other parameters are easy to understand. Finally, the bibliography is added at the end of the article. For other parameters and types, please refer to the LaTeX BibTeX standard.

---

- The fake bold / fake italic functions refer to the following content:
  - [Typst Chinese Community](https://typst-doc-cn.github.io/docs/chinese/)
  - The [cuti](https://github.com/csimide/cuti/) package
  - Typst Issue [#2749](https://github.com/typst/typst/issues/2749)
- The BUAA logo is from [https://github.com/THN-BUAA/BUAA-Logo](https://github.com/THN-BUAA/BUAA-Logo)
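A minimal body sketch using the helpers documented above. The argument names follow the README; the image path and caption are made up for illustration:

```typ
// Hypothetical usage of the template's helper functions.
#img(path: "figures/result.png", width: 70%, num: true)[Experimental result]

Normal text with #bold[bold words] and #italic[slanted words].

#show: bold-rule
Now **double asterisks** also produce (fake) bold text.
```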
https://github.com/chamik/gympl-skripta
https://raw.githubusercontent.com/chamik/gympl-skripta/main/cj-dila/14-cekani-na-godota.typ
typst
Creative Commons Attribution Share Alike 4.0 International
#import "/helper.typ": dilo, replika

#dilo("Waiting for Godot", "godot", "<NAME>", "<NAME>", "post-war", "Ireland", "1952", "drama", "tragicomedy")

#columns(2, gutter: 1em)[
*Theme*\
waiting for Godot

*Motifs*\
waiting for Godot

*Setting*\
by a tree, two days

*Characters* \
_Estragon_ (_Gogo_) and _Vladimir_ (_Danda_) -- effectively a single schizophrenic character\
_Pozzo_ -- a slave-driver?\
_Lucky_ -- a slave?\
_Boy_ -- communicates with Godot

*Composition* -- chronological, or maybe not, but it doesn't matter; two acts that loosely follow each other

*Language*\
vulgarisms

#colbreak()

*Plot*\
Literally, everyone just waits for Godot. Normal conversation has something called conversational maxims#footnote[Conversational maxims are rules for successful communication, e.g. the maxim of relevance: when describing an event, we should stick to the main idea and the "relevance" of the message.]; this work violates all of them. Trying to analyse the plot in any way is a great way to find out what it must feel like after a lobotomy.

*Literary-historical context*\
It is one of the most frequently staged dramas of the twentieth century.

#image("cekani_na_godota.png")
]

#pagebreak()

*Excerpt*

#table(columns: 2, stroke: none,
..replika("Vladimir", [What do you do when you fall far from any help?]),
..replika("Pozzo", [We wait till we can get up. Then we go on.]),
..replika("Vladimir", [Tell him to sing before he goes.]),
..replika("Pozzo", [Who?]),
..replika("Vladimir", [Lucky.]),
..replika("Pozzo", [To sing?]),
..replika("Vladimir", [Or to think. Or to recite.]),
..replika("Pozzo", [But he is dumb.]),
..replika("Vladimir", [Dumb!]),
..replika("Pozzo", [Completely. He cannot even groan.]),
..replika("Vladimir", [Dumb! Since when?]),
..replika("Pozzo", [_(suddenly furious)_ Won't you stop tormenting me with your stories about time? It's maddening! When! When! One day, isn't that enough for you, one day like any other day he went dumb, one day I went blind, one day we'll go deaf, one day we were born, one day we shall die, the same day, the same instant, isn't that enough for you? _(More gravely)_ They give birth astride of a grave, the light gleams an instant, then it's night once more. _(Pulls the rope)_ On!]),
..replika("", [_(They go out. Vladimir follows them to the edge of the stage and watches them move away. A noise of falling, underlined by Vladimir's mime, announces that they have fallen again. Silence. Vladimir goes up to the sleeping Estragon, watches him for a moment, then wakes him.)_]),
)

#pagebreak()
https://github.com/LDemetrios/Typst4k
https://raw.githubusercontent.com/LDemetrios/Typst4k/master/src/test/resources/suite/scripting/get-rule.typ
typst
--- get-rule-basic ---
// Test basic get rule.
#context test(text.lang, "en")
#set text(lang: "de")
#context test(text.lang, "de")
#text(lang: "es", context test(text.lang, "es"))

--- get-rule-in-function ---
// Test whether context is retained in nested function.
#let translate(..args) = args.named().at(text.lang)
#set text(lang: "de")
#context test(translate(de: "Inhalt", en: "Contents"), "Inhalt")

--- get-rule-in-array-callback ---
// Test whether context is retained in built-in callback.
#set text(lang: "de")
#context test(
  ("en", "de", "fr").sorted(key: v => v != text.lang),
  ("de", "en", "fr"),
)

--- get-rule-folding ---
// Test folding.
#set rect(stroke: red)
#context {
  test(type(rect.stroke), stroke)
  test(rect.stroke.paint, red)
}
#[
  #set rect(stroke: 4pt)
  #context test(rect.stroke, 4pt + red)
]
#context test(rect.stroke, stroke(red))

--- get-rule-figure-caption-collision ---
// We have one collision: `figure.caption` could be both the element and a get
// rule for the `caption` field, which is settable. We always prefer the
// element. It's unfortunate, but probably nobody writes
// `set figure(caption: ..)` anyway.
#test(type(figure.caption), function)
#context test(type(figure.caption), function)

--- get-rule-assertion-failure ---
// Error: 10-31 Assertion failed: "en" != "de"
#context test(text.lang, "de")

--- get-rule-unknown-field ---
// Error: 15-20 function `text` does not contain field `langs`
#context text.langs

--- get-rule-inherent-field ---
// Error: 18-22 function `heading` does not contain field `body`
#context heading.body

--- get-rule-missing-context-no-context ---
// Error: 7-11 can only be used when context is known
// Hint: 7-11 try wrapping this in a `context` expression
// Hint: 7-11 the `context` expression should wrap everything that depends on this function
#text.lang

--- get-rule-unknown-field-no-context ---
// Error: 7-12 function `text` does not contain field `langs`
#text.langs

--- get-rule-inherent-field-no-context ---
// Error: 10-14 function `heading` does not contain field `body`
#heading.body
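The pattern these tests exercise, reduced to a minimal sketch outside the test harness:

```typst
// A get rule reads the *current* value of a settable field,
// so it must run inside a `context` expression.
#set text(lang: "de")
#context text.lang   // evaluates to "de" here
#[
  #set text(lang: "fr")
  #context text.lang // evaluates to "fr" inside this block
]
```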
https://github.com/WalrusGumboot/numerieke-practicum
https://raw.githubusercontent.com/WalrusGumboot/numerieke-practicum/master/verslagLevrauDuwel.typ
typst
#set text(lang: "en")
#set par(justify: true)
#set page(margin: 15mm)

#show raw: set text(8pt, font: "Iosevka NF")
#show raw.where(block: true, lang: "matlab"): it => {
  let v = it.lines.enumerate().map(l => (text(str(l.at(0) + 1)), l.at(1))).flatten();
  grid(
    inset: (x: 6pt, y: 3pt),
    fill: (col, row) => if col == 0 { luma(220) } else { luma(240) },
    columns: (auto, auto),
    // column-gutter: 6pt,
    ..v
  )
}
#show math.equation: it => box(it)
#show link: it => underline(text(fill: blue, it))

#align(
  center,
  rect(
    width: 100%,
    inset: (y: 2em),
    fill: blue.lighten(85%),
    stroke: black,
    radius: 6pt,
    stack(
      spacing: 8pt,
      text("Report for the Numerical Mathematics practicum", 20pt, weight: "bold"),
      text(datetime.today().display(), 14pt),
      text("<NAME>, <NAME>", 14pt),
    )
  )
)

= Assignment 1

*Theorem.* Let $n in NN_0$ and let $A, L, U in RR^(n times n)$ be matrices. Suppose that $L$ is lower triangular, that $U$ is upper triangular, and that $forall i in {1, ..., n}: L_(i i) = 1$. Suppose moreover that $A = L U$. Then

#{
  set math.equation(numbering: "(1)", supplement: "equation")
  [$ forall i in {1, ..., n}: cases(
    U_(i k) = A_(i k) - sum_(j = 1)^(i - 1) L_(i j) U_(j k) quad & forall k in {i, ..., n},
    L_(k i) = (A_(k i) - sum_(j = 1)^(i - 1) L_(k j) U_(j i)) / U_(i i) & forall k in {i + 1, ..., n}
  ) $ <bewijslast>]
}

*Proof.* We argue by induction on the dimension of $A$, $L$ and $U$.

*Base case:* $n = 1$ \
The quantifier $forall i in {1, ..., n}$ simplifies to $i = 1$, and likewise $forall k in {i, ..., n}$ simplifies to $k = 1$. Substituting yields the following proof obligation:
$ U_(1 1) = A_(1 1) - sum_(j = 1)^(0) L_(1 j) U_(j 1) and L_(1 1) = (A_(1 1) - sum_(j = 1)^(0) L_(1 j) U_(j 1))/U_(1 1) $
We use the convention that a sum running from a higher to a lower index equals zero: there are no terms in the sum, since no natural number $j$ can satisfy $j >= 1 and j <= 0$.

It thus remains to show that
$ U_(1 1) = A_(1 1) and L_(1 1) = A_(1 1)/U_(1 1). $
Since all diagonal elements of $L$ equal 1, we know in particular that $L_(1 1) = 1$. Because $A = L U$, the element $A_(1 1)$ must equal $sum_(j = 1)^(1) L_(1 j) U_(j 1) = L_(1 1) dot.op U_(1 1)$. Since $L_(1 1) = 1$, we get $U_(1 1) = A_(1 1)$, and therefore also $L_(1 1) = A_(1 1)/U_(1 1)$. This establishes the base case.

*Induction step:* if @bewijslast holds for $n$, it also holds for $n + 1$. \
Let $A in RR^((n + 1) times (n + 1))$ be the matrix for which we want to prove the claim, and let $A' in RR^(n times n) := A[1, 2, ..., n; 1, 2, ..., n]$ be the submatrix of $A$ for which @bewijslast already holds. Finally, let $L$ and $U$ be two matrices of the same size as $A$, with $L$ unit lower triangular, $U$ upper triangular and $L U = A$.

Note that we only need to prove the proposition for $k = n + 1$: by the induction hypothesis, the statement already holds $forall k in {1, ..., n}$; only the 'outer shell' remains uncertain. Moreover, only for the rightmost column of $U$ and the bottom row of $L$ does it remain to be shown that their elements satisfy the required property: the elements $U[n+1; 1, ..., n]$ are zero by assumption, and likewise for $L[1, ..., n; n + 1]$.

To prove, therefore:
$ forall i in {1, ..., n + 1} : U_(i, n+1) = A_(i, n + 1) - sum_(j = 1)^(i - 1) L_(i, j) U_(j, n+1) $
and also:
$ forall i in {1, ..., n + 1} : L_(n+1, i) = (A_(n + 1, i) - sum_(j = 1)^(i - 1) L_(n+1, j) U_(j, i))/U_(i i) $

We prove the first sub-claim by choosing an arbitrary $i in {1, ..., n + 1}$. Note that we can write $A_(i, n + 1)$ as $(L U)_(i, n+1) = sum_(j = 1)^(n + 1) L_(i, j) U_(j, n+1)$, and note moreover that a term of the sum vanishes when $i < j$, since $L_(i, j)$ is then zero.

We can therefore rewrite the sum as $sum_(j = 1)^(i) L_(i, j) U_(j, n+1)$. When $i = j$, we additionally have $L_(i, j) = L_(i, i) = 1$, so we can split off that index to obtain $sum_(j = 1)^(i) L_(i, j) U_(j, n+1) = U_(i, n+1) + sum_(j = 1)^(i - 1) L_(i, j) U_(j, n+1)$. Hence $A_(i, n + 1) = U_(i, n+1) + sum_(j = 1)^(i - 1) L_(i, j) U_(j, n+1)$, which we rearrange to $U_(i, n + 1) = A_(i, n + 1) - sum_(j = 1)^(i - 1) L_(i, j) U_(j, n+1)$. Since $i$ was chosen arbitrarily, the equality is proven.

We prove the second sub-claim by again choosing an arbitrary $i in {1, ..., n + 1}$. Note that $A_(n + 1, i)$ equals $(L U)_(n + 1, i) = sum_(j = 1)^(n + 1) L_(n + 1, j) U_(j, i)$. Here we can ignore all indices $j > i$, since $U_(j, i)$ is then zero. We again split one element off the sum to obtain $A_(n + 1, i) = L_(n + 1, i) dot.op U_(i, i) + sum_(j = 1)^(i - 1) L_(n + 1, j) U_(j, i)$. This can again be rearranged to $L_(n + 1, i) dot.op U_(i, i) = A_(n + 1, i) - sum_(j = 1)^(i - 1) L_(n + 1, j) U_(j, i)$, and after dividing both sides by $U_(i, i)$ the equality is proven.

So the property also holds for $n + 1$. By the principle of induction, the property holds for all $n in NN_0$. This proves the full theorem. $qed$

= Assignment 3

NB. Following the #link("https://nl.mathworks.com/help/matlab/ref/ismembertol.html", "MATLAB documentation"), we use the default tolerance of $10^(-12)$ when testing whether the expected and computed decompositions are equal. We use the function `ismembertol` for this. In practice, the actual deviation is usually a few orders of magnitude smaller.

= Assignment 4

Where algorithm 5.1 uses an index variable `k` running from `n` down to `1`, forward substitution uses a variable running from `1` to `n`, since we start at the first row.
Verder is het idee achter de oplossingsmethode identiek: omdat de matrices driehoekig zijn kunnen we de rijen op zo'n manier doorlopen dat elke volgende rij extra informatie geeft over één variabele van het stelsel. We nemen dan een lineaire combinatie van de al gekende waarden om die nieuwe variabele te 'isoleren'. Na een deling door het resterende element bekomen we de correcte waarde voor de variabele. = Opdracht 5 De correctheid wordt hier niet formeel bewezen, maar aan de hand van een zgn. _nothing up my sleeve_-test case#footnote[hier is $U$ de getallen van één tot zes, $b_1$ de eerste drie kwadraten, $L$ de rij van Fibonacci en $b_2$ de eerste drie cijfers van $pi$] gemotiveerd. De kern van `solve_Lb` bestaat uit de volgende drie regels: ```matlab for k = 1:n y(k) = (b(k) - L(k, 1:k) * y(1:k)) / L(k, k); end ``` welke we splitsen in ```matlab for k = 1:n y(k) = b(k); y(k) = y(k) - L(k, 1:k-1) * y(1:k-1); % 1 aftrekking, k - 1 verm., k - 2 opt. y(k) = y(k) / L(k, k); % 1 deling end ``` In totaal levert ons dit $sum_(k = 1)^(n) (2k - 1) = n^2$ bewerkingen voor $L in RR^(n times n)$. Voor `solve_Ub` gaan we analoog te werk: ```matlab for k = n:-1:1 y(k) = (b(k) - U(k, k+1:n) * y(k+1:n)) / U(k, k); end ``` wordt ```matlab for k = n:-1:1 y(k) = b(k); y(k) = y(k) - U(k, k+1:n) * y(k+1:n); % 1 aftrekking, n - k verm., n - k - 1 opt. y(k) = y(k) / U(k, k); % 1 deling end ``` Zo bekomen we $sum_(k = 1)^(n) (1 + 2(n - k)) = n^2$ bewerkingen voor $U in RR^(n times n)$. 
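De voorwaartse en achterwaartse substitutie hierboven kunnen ook buiten MATLAB geschetst worden; de volgende Python-versie (een illustratieve, hypothetische vertaling van `solve_Lb` en `solve_Ub`, geen onderdeel van het verslag) maakt de driehoeksstructuur expliciet door enkel over de al gekende onbekenden te sommeren:

```python
# Hypothetical Python sketch of the MATLAB routines solve_Lb / solve_Ub
# discussed above; plain row-major lists of lists, no external dependencies.

def solve_lb(L, b):
    """Forward substitution for a lower-triangular L."""
    n = len(b)
    y = [0.0] * n
    for k in range(n):
        # subtract the contribution of the already-known unknowns y[0..k-1]
        s = sum(L[k][j] * y[j] for j in range(k))
        y[k] = (b[k] - s) / L[k][k]
    return y

def solve_ub(U, b):
    """Back substitution for an upper-triangular U."""
    n = len(b)
    x = [0.0] * n
    for k in range(n - 1, -1, -1):
        # subtract the contribution of the already-known unknowns x[k+1..n-1]
        s = sum(U[k][j] * x[j] for j in range(k + 1, n))
        x[k] = (b[k] - s) / U[k][k]
    return x
```

Per rij kost dit één aftrekking, één deling en een inwendig product over de gekende onbekenden, wat de kwadratische telling hierboven weerspiegelt.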
= Opdracht 6 #rect( ``` L_1 = 1.000000000000000 0 0 0 0 0.009090909090909 1.000000000000000 0 0 0 0.009090909090909 -0.000082651458798 1.000000000000000 0 0 0.009090909090909 -0.000082651458798 -0.000082658290627 1.000000000000000 0 0.009090909090909 -0.000082651458798 -0.000082658290627 -0.000082665123584 1.000000000000000 U_1 = 1.100000000000000 0.010000000000000 0.010000000000000 0.010000000000000 0.010000000000000 0 1.099909090909091 -0.000090909090909 -0.000090909090909 -0.000090909090909 0 0 1.099909083395322 -0.000090916604678 -0.000090916604678 0 0 0 1.099909075880311 -0.000090924119689 0 0 0 0 1.099909068364057 L_2 = 1.000000000000000 0 0 0 0 0 1.000000000000000 0 0 0 0 0 1.000000000000000 0 0 0 0 0 1.000000000000000 0 0.009090909090909 0.009090909090909 0.009090909090909 0.009090909090909 1.000000000000000 U_2 = 1.100000000000000 0 0 0 0.010000000000000 0 1.100000000000000 0 0 0.010000000000000 0 0 1.100000000000000 0 0.010000000000000 0 0 0 1.100000000000000 0.010000000000000 0 0 0 0 1.099636363636364 ``` ) $L_1$ en $U_1$ bevatten ieder tien nullen ($ = 2n$) en is dus niet per se spaars te noemen. \ $L_2$ en $U_2$ bevatten ieder zestien nullen ($ = (n - 1)(n - 2) + (n - 1) = (n - 1)^2$) en is dus volgens de conventie spaars te noemen. = Opdracht 8 `solve_Ub_special` vereist $sum_(k = 1)^(n - 1) 3 = 3(n - 1)$ bewerkingen (één vermenigvuldiging, één aftrekking en één deling), `solve_Lb_special` vereist $2(n - 1)$ (één aftrekking, $n-1$ vermenigvuldigingen, $n-2$ optellingen). In vergelijking met de implementatie uit opdracht 5 is dit verband lineair in plaats van kwadratisch. = Opdracht 9 #figure( image("figuur1.svg"), caption: [#text("Onze bevindingen: spaarse matrices hebben hun nut") \ #text("#cool", fill: white)] ) = Opdracht 11 Aangezien de bovengrens op relatieve fout recht evenredig is met het conditiegetal van de matrix; cfr. paragraaf 5.5.3 uit de cursus. 
= Opdracht 12 Als we beide leden met $M_1$ linksvermenigvuldigen en vervolgens de definitie van $y$ invullen in het linkerlid, bekomen we (gegeven dat $M_2 x = y$): $ M_1 M_1^(-1) A M_2^(-1) (M_2 x) &= M_1 M_1^(-1) b \ <=> A x &= b $ = Opdracht 13 De preconditionering zal een impact hebben op het convergentiegedrag van $B_1$ in `gmres`, en niet op dat van $B_2$. Uit de #link("https://nl.mathworks.com/help/matlab/ref/gmres.html#f84-998579_sep_mw_021450b1-4be9-4d57-b903-c4a2d800c810", [tips over `gmres`]) halen we de informatie dat het conditiegetal van de matrix sterk samenhangt met het convergentiegedrag. Aangezien $B_1$ en $B_2$ in MATLAB als spaarse matrices opgeslagen zijn, gebruiken we `condest` om het conditiegetal te berekenen. Bovendien gebruiken we de $2$-norm. Voor $B_1$ geeft dit een waarde van $3.5481 dot 10^6$, voor $B_2$ verkrijgen we $135.9396$. Na het toepassen van de preconditionering (lees: het rechtsvermenigvuldigen met $M^(-1)$, waarbij $M = L U$) bemerken we dat $kappa(B_1) = 1.0003$, terwijl $kappa(B_2) = 55.1302$. Deze enorme vermindering voor $B_1$ maar relatief kleine daling voor $B_2$ is een verklaring voor de verbetering van het convergentiegedrag. Ook kunnen we de voorwaartse fout naar boven afschatten aan de hand van dezelfde formule die wij reeds uit paragraaf 5.5.3 haalden, namelijk dat $ (||Delta x||)/(||x||) <= kappa(A) (||r||)/(||b||). $ Zo bekomen we voor $B_1$ zonder preconditionering een bovengrens van $3.5481 dot 10^6 dot 51.2195/sqrt(600) approx 7.4193 dot 10^6$ en na preconditionering een bovengrens van $1.000345$. Voor $B_2$ bedragen deze waarden $3.979178 dot 10^2$ respectievelijk $1.095138 dot 10^2$. #figure( image("convergentie.svg", width: 80%), caption: [De convergentiesnelheden van $B_1$ en $B_2$ met en zonder preconditionering.] ) = Opdracht 14 Zij $n in NN$ een willekeurig getal, en zij $A_1$ en $A_2$ dan de resulterende matrices zoals we ze in opdracht 6 definieerden. 
Zij $P in RR^(n times n)$ de matrix met enen op de antidiagonaal en nullen op alle andere posities. We zoeken dan de matrix $Q$ opdat $P A_1 Q = A_2$. Bemerk dat linksvermenigvuldigen met $P$ het effect heeft van de matrix "over de horizontale as" te spiegelen: de eerste rij van $P A_1$ is gelijk aan de laatste rij van $A_1$; de tweede rij van $P A_1$ is gelijk aan de voorlaatste rij van $A_1$, enzovoort. We bemerken dat om van $P A_1$ naar $A_2$ te gaan, dus enkel nog "over de verticale as" gespiegeld moet worden: de eerste kolom van $P A_1$ moet gelijk worden aan de laatste kolom van $A_2$, de tweede kolom van $P A_1$ aan de voorlaatste van $A_2$, etcetera. Beschouw de matrix $P A_1$ als een eigen entiteit; noem deze bijvoorbeeld $B$. We zoeken dus $Q$ zodat $B Q = A_2$, waarbij $Q$ een "kolomspiegeling" bewerkstelligt. Neem nu het getransponeerde van beide leden. We bekomen dan $Q^T B^T = A_2^T$; waarbij $Q^T$ een "rijspiegeling" bewerkstelligt. Maar we hebben al een matrix die bij linksvermenigvuldiging rijen spiegelt, namelijk $P$. Dus geldt dat $Q^T = P$. Echter is $P$ per definitie symmetrisch, dus is $Q = P$. Hierbij is de algemene vorm van $Q$ gevonden. = Opdracht 15 Gegeven dat $A_2 z = P b$, met $A_1 x = b$, kunnen we deze gelijkheid invullen om $A_2 z = P A_1 x$ te bekomen. Verder weten we dat $P A_1 Q = A_2$, dus herschrijven we de vorige gelijkheid als $P A_1 Q z = P A_1 x$. Door links te vermenigvuldigen met $P^(-1)$ en voorts met $A_1^(-1)$ bekomen we $ A_1^(-1) P^(-1) P A_1 Q z &= A_1^(-1) P^(-1) P A_1 x \ <=> Q z &= x $ = Opdracht 16 Weer merken we een kwadratisch-lineair onderscheid. Ook dit is te verwachten, aangezien we gebruik kunnen maken van spaarse matrices en hun lineaire uitvoeringstijden bij de PQ-methode, cfr. opdracht 9. #figure( image("pq-fac.svg") ) = Opdracht 17 De boosdoener is de deling. Wegens het feit dat we zonder pivotering werken, komt het voor dat we door het zeer kleine element $10^(-20)$ delen. 
Zoals we in de cursus kunnen lezen: #align(center, rect(inset: 1em, align(right, [#emph("\"Kleine spilelementen veroorzaken grote afrondingsfouten\"") \ pagina 94, hoofdstuk 5.5.1]))) = Opdracht 18 De oplossing voor dit probleem is het gebruiken van rijpivotering. We gebruiken daarvoor het feit dat we de rijen vrijelijk mogen verwisselen in een stelsel zonder dat de oplossing verandert. Als we dit bijhouden in een permutatiematrix $P$, kunnen we zowel in het linker- als het rechterlid vermenigvuldigen met $P$ om de gelijkheid te behouden. Dit kunnen we in ons voordeel laten werken door steeds een "tactisch" element op de spilpositie te plaatsen -- dat houdt in: we willen een element dat in absolute waarde zo groot mogelijk is, zodat de afrondingsfouten tot een minimum beperkt blijven. De keuze bij uitstek is dus $||a[i, n]||_oo$ waarbij $a$ de $i$-de kolom van $A$ is.
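Het idee van rijpivotering hierboven kan als volgt geschetst worden (een illustratieve Python-schets, niet de MATLAB-code van het verslag): bij elke stap kiezen we het in absolute waarde grootste element in de kolom als spilelement, zodat we nooit door een piepklein getal zoals $10^(-20)$ delen.

```python
# Illustrative sketch of LU factorization with partial (row) pivoting:
# at step i we swap in the row whose entry in column i has the largest
# absolute value, so the pivot is never tiny.

def lu_partial_pivot(A):
    n = len(A)
    A = [row[:] for row in A]      # work on a copy
    perm = list(range(n))          # row permutation (encodes P)
    for i in range(n):
        # pick the row with the largest |entry| in column i, at or below the diagonal
        p = max(range(i, n), key=lambda r: abs(A[r][i]))
        if p != i:
            A[i], A[p] = A[p], A[i]
            perm[i], perm[p] = perm[p], perm[i]
        for r in range(i + 1, n):
            m = A[r][i] / A[i][i]  # multiplier, stored in the L part
            A[r][i] = m
            for c in range(i + 1, n):
                A[r][c] -= m * A[i][c]
    return A, perm                  # A holds L (strictly below) and U (on and above)
```

Met de matrix $mat(0, 1; 2, 3)$ als invoer worden de rijen verwisseld voordat geëlimineerd wordt; zonder pivotering zou de eerste stap door nul delen.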
https://github.com/EGmux/PCOM-2023.2
https://raw.githubusercontent.com/EGmux/PCOM-2023.2/main/lista2/lista2q7.typ
typst
=== A voice signal has a total duration of 10 s. It is sampled at a rate of 8 kHz, quantized and then encoded. The required (quantization) signal-to-noise ratio is 40 dB. Compute the minimum storage capacity needed to hold this digitized signal.\ \ _Given: $(S/N)_(d B) = 1.8 + 6l$, where $l$ is the number of bits per sample._ Since we know the total duration and the sampling rate, we can compute how many samples are needed as a first step: #math.equation( block: true, $ "nSamples" = 8 "k samples/s" dot 10 "s" = 80 "k samples" $, ) How large is each sample in bits? We use the equation given in the statement: #math.equation(block: true, $ 40 "dB" &= 1.8 + 6 l && \ l &= (40 - 1.8)/6 & \ l &= 6.366 tilde.equiv 7 "bits/sample" $) 💡 note that 1.8 is already in dB, just as $6l$ is. Now we multiply nSamples by $l$: #math.equation(block: true, $ "storage" = "nSamples" dot l = 560 "k bits" $) In bytes this is more concrete: #math.equation(block: true, $ ("560 kbits")/(8 "bits/byte") = 70 "kB" $)
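The calculation above is easy to check numerically; below is a small Python sketch (not part of the original exercise) using the values from the statement:

```python
import math

# Numeric check of the storage calculation (values from the exercise).
fs = 8_000          # sampling rate, samples/s
T = 10              # duration, s
snr_db = 40         # required quantization SNR, dB

n_samples = fs * T                    # total number of samples
l = math.ceil((snr_db - 1.8) / 6)     # bits per sample, rounded up to an integer
storage_bits = n_samples * l
storage_bytes = storage_bits // 8

print(n_samples, l, storage_bits, storage_bytes)  # 80000 7 560000 70000
```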
https://github.com/AOx0/expo-nosql
https://raw.githubusercontent.com/AOx0/expo-nosql/main/book/src/section.md
markdown
MIT License
# Sections To give your audience some orientation where you are in your presentation, you can use sections. The [default theme](./theme-gallery/index.html#default) displays the section in the header of each slide, other themes might use other ways to display it. To define the current section, simply use `#new-section`: ```typ #new-section("Introduction") #slide(title: "First slide of introduction")[ Look at the header! ] #slide(title: "Second slide of introduction")[ Header hasn't changed. ] #new-section("Motivation") #slide(title: "And now?")[ Now, we are in the _motivation_ section. ] ```
https://github.com/fufexan/cv
https://raw.githubusercontent.com/fufexan/cv/typst/modules_ro/education.typ
typst
#import "../src/template.typ": * #cvSection("Educație") #cvEntry( title: [Licență în Electronică, Telecomunicații și Tehnologia Informației], society: [Universitatea Tehnică din Cluj-Napoca], date: [2021 - 2025], location: [Cluj-Napoca], logo: "../src/logos/utcn.svg", description: list(), ) #cvEntry( title: [Diplomă de bacalaureat în Matematică și Informatică], society: [Colegiul Național Titu Maiorescu], date: [2017 - 2021], location: [Aiud], logo: "../src/logos/cntm.png", description: list(), )
https://github.com/simon-epfl/notes-ba2-simon
https://raw.githubusercontent.com/simon-epfl/notes-ba2-simon/main/fds/guide_verilog.typ
typst
#set text(font: "DejaVu Sans") #show heading.where(level: 1): contents => text(size: 20pt, contents) #show heading: contents => pad(bottom: 10pt, contents) #set quote(block: true) #set heading(numbering: (ignore_first, ..n) => { if (n.pos().len() != 0) { numbering("1.1.", ..n) } }) #let stick-together(a, threshold: 3em) = { block(a + v(threshold), breakable: false) v(-1 * threshold) } = FDS - Guide de Verilog Ce document décrit les concepts de base pour l'utilisation de Verilog. #outline(title: "") Contact pour tout signalement de typo ou erreur : <EMAIL>. #set text(font: "DejaVu Sans") #pagebreak() == Qu'est-ce que Verilog ? Verilog est un langage de description de matériel utilisé pour la modélisation et la conception de circuits électroniques, tout comme VHDL (l'ancien langage utilisé en BA2 à l'EPFL, avec une courbe d'apprentissage un peu plus rude). == Comment installer Verilog ? Pour exécuter votre code Verilog, vous aurez besoin d'un compilateur. Le compilateur recommandé pour le cours de FDS BA2 au printemps 2024 est *iverilog*, version 11 ou 12. Pour l'installer : - Sur MacOS, via Homebrew - Sur Windows, non. - Sur Linux, via make depuis le repository GitHub est probablement le plus safe == Comment exécuter du code Verilog ? Un projet Verilog est composé de fichiers `.v`, contenant le code source. Dans un premier temps, vous devez compiler vos fichiers `.v` pour pouvoir les rendre exécutables. Supposons que vous ayez écrit un fichier `my_super_test.v`. La commande suivante à taper dans le terminal : ```sh iverilog -o build my_super_test.v ``` vous permettra d'obtenir un fichier exécutable `build` que vous pouvez exécuter en tapant `./build` dans le même terminal. 
#box( stroke: 1pt + rgb(34, 102, 153), width: 100%, fill: rgb(34, 102, 153, 30), inset: 7pt, text(fill : black, [Une fois le terminal ouvert, attention à bien vous déplacer dans le dossier qui contient vos fichiers `.v` (à l'aide de la commande `cd`) pour que la commande de build fonctionne.]) ) Pour compiler plusieurs fichiers, comme dans le cas d'un test bench (voir @testbench), vous pouvez simplement préciser plusieurs fichiers `.v`: ```sh iverilog -o build my_super_test_testbench.v my_super_test.v ``` == Comment écrire un programme Verilog ? Il existe plusieurs façons d'écrire un programme Verilog : - le *Verilog structurel* : nous devons décrire physiquement ce qui se passe (à l'aide de portes logiques comme `or`, `and`, etc.). C'est utile si on nous présente un circuit sans la fonction booléenne associée. - le *Verilog d'affectation continue* : utilise le mot-clef `assign` pour définir des relations continues entre les signaux. C'est utile pour implémenter des fonctions logiques simples et pour connecter des modules ensemble. - le *Verilog comportemental* : se concentre sur ce que le circuit doit faire, et pas sur la façon de l'implémenter (complètement l'inverse du verilog structurel). Il utilise des structures de contrôle de haut niveau (type `for`, `while`, etc.) et se situe typiquement dans un bloc `always` ou `initial`. Nous verrons plus tard quelles sont, en pratique, les différences. == Déclaration de modules La base d'un programme Verilog est un module : une portion de circuit, similaire à une fonction. Chaque déclaration de module commence par le mot-clef `module`, suivi de son nom. === Spécification des entrées Les entrées sont utilisées pour se brancher sur un signal existant envoyé par un autre module (par exemple un autre module qui se charge de tester votre module). Elles sont... - déclarées à l'aide du mot-clef `input`. 
- de type *wire*, câble - d'une taille spécifique (ou 1 si non spécifiée) Les tailles sont spécifiées entre crochet, *[a:b]*, où *a* est l'index du MSB, et *b* l'index du LSB. ==== Exemple #stick-together(```verilog module my_super_module ( input [4:0] A, B, // A and B will have the same size, 5 bit input D // D will have a size of 1 bit ); // your code here endmodule ```) === Spécification des outputs Les sorties sont utilisées pour envoyer un signal à d'autres modules. Elles sont... - déclarées à l'aide du mot-clef `output`. - de type *wire* ou *reg*. - d'une taille spécifique (ou 1 sinon) ==== Exemple #stick-together( ```verilog module my_super_module ( input A, output B, output [2:0] G, output reg K ); // your code here endmodule ``` ) Nous verrons plus tard comment choisir entre reg et wire. == Le Verilog structurel Comme évoqué précédemment, le Verilog structurel décrit physiquement comment est construit le circuit. Ainsi, il faudra généralement déclarer : - des câbles, déclaré en utilisant *wire* suivi du nom du câble. - des portes logiques, déclarée à l'aide des mot-clefs *not*, *or*, *and*, *nor*, *xnor*, etc. suivis du nom de la porte, puis de l'output et des inputs entre parenthèses. 
#box( stroke: 1pt + rgb(34, 102, 153), width: 100%, fill: rgb(34, 102, 153, 30), inset: 7pt, text(fill : black, [En Verilog structurel, déclarez les entrées et les sorties en tant que *wires* (puisque nous connectons physiquement les signaux entre eux !)]) ) === Exemple ```verilog module structural_example ( // 3 signaux d'entrée input a, input b, input c, // un signal de sortie output f ); // on créé nos câbles wire p1; wire p2; wire not_a; wire not_b; // ici on connecte l'entrée A au câble not_a en ajoutant au passage // une porte logique "NOT" pour inverser le signal not gA (not_a, a); not gB (not_b, b); // ici f (la sortie) est connectée aux deux câbles // p1 et p2, eux-mêmes connectés aux entrées ou à d'autres câbles nor g3 (f, p1, p2); and g2 (p1, not_a, b); and g1 (p2, not_b, c); endmodule ``` Pour savoir comment tester ce circuit, allez voir la @testbench Tout fonctionne bien, mais... si on connaît l'expression booléenne derrière ce circuit, l'écrire en Verilog structurel est très long inutilement. == Le Verilog d'affectation continue Comme mentionné précédemment, le Verilog d'affectation continue permet d'utiliser le mot-clef un peu magique `assign` qui permet d'utiliser des expressions booléennes directement dans le module. #box( stroke: 1pt + rgb(34, 102, 153), width: 100%, fill: rgb(34, 102, 153, 30), inset: 7pt, text(fill : black, [En Verilog d'affectation continue, déclarez également les entrées et les sorties en tant que *wires* (on continue de connecter physiquement des câbles).]) ) === Exemple #stick-together(```verilog module structural_example ( // 3 signaux d'entrée input a, input b, input c, // un signal de sortie output f ); assign p1 = !a && b; assign p2 = !b && c; assign f = !(p1 || p2); endmodule ```) == Verilog comportemental Le Verilog comportemental permet de modéliser des circuits plus complexes que ceux vus jusqu'ici. 
Comme énoncé précédemment, on se focalise plus sur le comportement du circuit en lui-même que sur son implémentation hardware. #box( stroke: 1pt + rgb(34, 102, 153), width: 100%, fill: rgb(34, 102, 153, 30), inset: 7pt, text(fill : black, [En Verilog comportemental, déclarez les inputs en tant que *wires* et les outputs en tant que *reg* ! En effet, on veut pouvoir leur assigner une valeur numérique sans les connecter physiquement à un câble.]) ) === Ecrire du code réactif La façon dont fonctionne Verilog là-dessus parlera peut-être aux étudiants qui ont déjà fait du React, Vue, etc. Ici, nous voulons que notre code soit *exécuté quand un signal d'entrée change* (par exemple lorsqu'une variable passe de 0 à 1). Nous devons donc définir une liste de variables de dépendance (_comme `useEffect()` avec React !_). Pour cela, on écrit un bloc `always @` et on écrit entre parenthèses le nom des variables de dépendance, à surveiller. ==== Exemple <codetotest> #stick-together(```verilog module structural_example ( // 3 signaux d'entrée input a, input b, input c, // un signal de sortie output reg f ); // ces variables ne sont plus des câbles ! // elles ne connectent plus physiquement des câbles // elles stockent uniquement des valeurs numériques reg p1; reg p2; always @(a, b, c) begin p1 = !a && b; p2 = !b && c; // on assigne directement la valeur d'output à f f = !(p1 || p2); end endmodule ```) Notez qu'il est possible d'utiliser `always @*` pour surveiller toutes les variables utilisées dans le bloc always. === Ecrire du code exécuté à l'initialisation Ici, nous voulons exécuter du code lorsque notre module s'initialise (au démarrage du programme). Ce sera particulièrement utile lorsque nous écrirons notre premier module de test. Pour cela, nous devons écrire `initial` suivi de notre code. 
==== Exemple #stick-together(```verilog module structural_example; initial begin $display("Hello, world."); end endmodule ```) #box( stroke: 1pt + rgb(34, 102, 153), width: 100%, fill: rgb(34, 102, 153, 30), inset: 7pt, text(fill : black, [Comme vous pouvez le voir, nous utilisons une nouvelle fonction, *\$display*, qui permet d'afficher un message textuel dans le terminal.]) ) == Tester ses circuits <testbench> Il nous faut maintenant écrire ce qu'on appelle un "test bench", TB, un environnement de simulation pour notre circuit. Tout d'abord, il nous faut créer une instance du module à tester (qu'on appelle le DUT, le "Design Under Test"). La syntaxe pour initialiser le DUT est `nom_du_module nom_de_l_instance`, suivi d'un mapping des inputs et des outputs aux variables du TB. Il y a deux autres parties importantes du TB : le stimulus generator (qui s'occupe de générer toutes les combinaisons d'entrée possibles) et le checker (qui vérifie que le résultat est bien celui attendu). === Exemple Essayons de tester le code de la section @codetotest. #box( stroke: 1pt + rgb(34, 102, 153), width: 100%, fill: rgb(34, 102, 153, 30), inset: 7pt, text(fill : black, [On écrit le test dans un bloc `initial` car on veut exécuter notre bloc de test une fois, quand il est lancé.]) ) ```verilog module tb_structural_example; // on définit les variables qui vont faire varier le DUT reg a = 0; reg b = 0; reg c = 0; // on définit le câble qui représente le signal de sortie // ce n'est pas une reg car on ne veut pas changer // la variable de F autrement qu'en // la connectant physiquement au module my_test wire F; // on initialise le D.U.T. 
structural_example my_super_structural_under_testing ( // .variable_dans_dut(variable_dans_tb) .a(a), .b(b), .c(c), // .output_dans_dut(output_dans_tb) .f(f) ); reg expected; initial begin for (integer i = 0; i < 8; i = i + 1) begin {a, b, c} = i; #1; // check if the output is correct expected = !(!a && b && !b && c); if (f != expected) begin $display("Error! Expected: [%b], Result: [%b]", expected, f); end else begin $display("Correct! Expected: [%b], Result: [%b]", expected, f); end end end endmodule ``` Le résultat : ``` Correct input! Expected: [1], Result: [1] Error, wrong input! Expected: [1], Result: [0] Error, wrong input! Expected: [1], Result: [0] Error, wrong input! Expected: [1], Result: [0] Correct input! Expected: [1], Result: [1] Error, wrong input! Expected: [1], Result: [0] Correct input! Expected: [1], Result: [1] Correct input! Expected: [1], Result: [1] ``` === GTKWave GTKWave est un logiciel permettant de visualiser graphiquement la sortie d'un module Verilog. ==== Installation Pour installer GTKWave : - sur MacOS, utilisez Homebrew - sur Windows, non. - sur Linux, build from source fonctionne comme d'habitude ==== Utilisation Pour cela, commencez par écrire un test bench classique comme vu précédemment. Maintenant, il nous faut *exporter les résultats* du test bench pour pouvoir les lire avec GTKWave. Pour cela, nous utiliserons deux instructions : - *`$dumpfile("nom_de_fichier.vcd");`* pour exporter les résultats dans un fichier VCD (Value Change Dump). 
- *`$dumpvars`* pour exporter toutes les variables du programme dans le fichier (une nouvelle ligne sera écrite à chaque fois qu'une variable change) - *`$monitor`* est très pratique, il permet d'afficher les changements des variables passées en argument dans la console (sans écrire de logs manuellement à chaque fois) - *`$finish`* permet de finir proprement la simulation (on peut placer `$finish` à d'autres endroits dans le code pour interrompre la simulation) #stick-together(```verilog module test_tb; reg a = 0; reg b = 0; reg c = 0; wire F; structural_example my_test ( .a(a), .b(b), .c(c), .f(f) ); reg expected; initial begin $dumpfile ("structural_example.vcd"); $dumpvars; $monitor ("Time %2t, a=%b, b=%b, c=%b, f=%b", $time, a, b, c, f); for (integer i = 0; i < 8; i = i + 1) begin {a, b, c} = i; #1; expected = a & b | ~a & c; if (f != expected) begin $display("Error! Expected %b, got %b", expected, f); end else begin $display("Correct! Expected %b, got %b", expected, f); end end $finish; end endmodule ```) Exécutez ensuite la commande suivante : ```sh gtkwave structural_example.vcd ``` #pagebreak() == Compléments === Que se passe-t-il si on déclare un input avec une taille "inversée" ? `[0:2] au lieu de [2:0]` #stick-together(```verilog module test ( input [0:2] in, // si on envoie 011 au module... output [2:0] F ); always @* begin $display(in[0]); // ...on aura 0 ici au lieu de 1 ! $display(in); // par contre ici on aura toujours 3 (et pas 6 !) end endmodule ```) === Différence entre double et triple égalité #stick-together(```verilog module test; reg a = 1'bz; reg b = 1'bz; initial begin if (a == b) $display("=="); // ne s'affiche jamais : avec des bits inconnus, == renvoie x (inconnu) if (a === b) $display("==="); // s'affiche : === compare aussi les bits x/z littéralement, utile en simulation end endmodule ```) Pour les scalaires, `if (s)` vérifie si `s` est `1`. Pour les vecteurs, `if (s)` vérifie si au moins un des bits de `s` est `1`. Pour les scalaires, `!` renvoie la valeur logique opposée. 
Pour les vecteurs, `!` vérifie si le vecteur entier est zéro et renvoie 1 si ça l'est ou 0 sinon.
https://github.com/takotori/PhAI-Spick
https://raw.githubusercontent.com/takotori/PhAI-Spick/main/sections/messfehler.typ
typst
#import "../utils.typ": * = Messen und Messfehler *Systematische Fehler:* z.B. Messen mit falsch kalibriertem Messgerät. \ Berechnet sich der Wert einer Grösse $z$ aus Messwerten der Grössen $x$ und $y$. #align(center, $z=f(x,y)$) und wurden die Messgrössen x und y mit einem Fehler von $Delta x$ bzw. $Delta y$ bestimmt, so ist der Wert von z nur ungenau bestimmt. Für den prognostizierten Wert und den prognostizierten Messfehler gilt $ z = z_0 plus.minus Delta z \ z_0 = f(x_0,y_0) \ Delta z= abs(partial/(partial x) f(x_0,y_0)) dot Delta x+ abs( partial/(partial y) f(x_0,y_0)) dot Delta y $ sofern die Grössen $x$ und $y$, z.B. auf Grund von fehlerhaften Messinstrumenten, systematisch falsch bestimmt wurden. Die Fehlerabschätzung durch systematische Fehler ist eine «worst-case»-Abschätzung \ *Statistische Fehler:* Bei mehrfach messen unterschiedliche Ergebnisse \ $=>$ Mehrmals messen und Mittelwert nehmen verkleinert den Fehler Fehlerfortpflanzung für normalverteilte Fehler. Berechnet sich der Wert einer Grösse $z$ aus Messwerten der Grössen $x$ und $y$ gemäss #align(center, $z=f(x,y)$) und wurden die Messgrössen x und y durch Mehrfachmessung ($x$ n-fach gemessen, $y$ m-fach gemessen) und ohne systematischen Fehler bestimmt, so darf von statistisch normalverteilten Fehlern ausgegangen werden. 
In diesem Fall errechnet sich die Standardunsicherheit der Messwerte von x und y gemäss $Delta x &=sqrt(1/(n(n-1)) sum_(i=1)^n (x_i-macron(x))^2 )=sigma_x/sqrt(n) \ Delta y &=sqrt(1/(m(m-1)) sum_(i=1)^m (y_i-macron(y) )^2 )=sigma_y/sqrt(m) \ sigma &= "Standardabweichung" \ macron(x) &= 1/n sum_(i=1)^n x_i ="Mittelwert"$ Es gilt also $ x &= macron(x) plus.minus Delta x \ y &= macron(y) plus.minus Delta y $ Ausserdem sind der prognostizierte Wert und der statistische Fehler von z durch folgende Formeln berechenbar $ z =macron(z) plus.minus Delta z \ macron(z) =f(macron(x),macron(y)) \ Delta z=sqrt((partial/(partial x) f(macron(x),macron(y)) dot Delta x)^2+(partial/(partial y) f(macron(x),macron(y)) dot Delta y)^2 ) $ *Beispiel Systematischer Fehler:* Ein Gewicht unbekannter Masse wird auf einer schiefen Ebene mit dem Neigungswinkel $alpha$ platziert, auf der es reibungsfrei gleiten kann. Die Hangabtriebskraft und der Neigungswinkel $alpha$ werden experimentell bestimmt. Die Werte sind $alpha=(30^circle.small plus.minus colorange(2^circle.small)) ,F_H=(10 plus.minus colgreen(0.3))N$. Aus Tabelle $g=(9.81 plus.minus colmagenta(0.03)) m\/s^2$ $ F_H=m g dot sin(alpha)=>m &=F_H/(g dot sin(alpha) ) \ m &=(10N)/(9.81 m\/s^2 dot sin(30^circle.small) )=2.0387 "kg" $ Partielle Ableitungen: $ frac(partial m, partial g) (F_H/(g dot sin(alpha))) &=-F_H/(g^2 dot sin(alpha) ) \ frac(partial m, partial alpha) (F_H/(g dot sin(alpha))) &=-(F_H dot cos(alpha))/(g dot sin^2 (alpha)) \ frac(partial m, partial F_H) (F_H/(g dot sin(alpha))) &=1/(g dot sin (alpha)) $ $ Delta m &= abs(-F_H/(g^2 dot sin(alpha)) dot colmagenta(Delta g)) + abs(-(F_H dot cos(alpha))/(g dot sin^2 (alpha) ) dot colorange(Delta alpha)) \ &+ abs(1/(g dot sin(alpha)) dot colgreen(Delta F_H)) =0.191"kg" $ $ m=(2.04±0.19)"kg" $ #colred[*Achtung*] $Delta alpha$ muss in Bogenmass sein! #grid( columns: (auto, auto), inset: 5pt, [*Gradmass in Bogenmass*], [$x=alpha/180 dot π$], [*Bogenmass in Gradmass*], [$alpha =x/π dot 180$] )
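Die obige Worst-Case-Fehlerrechnung lässt sich numerisch nachprüfen; die folgende Python-Skizze (nicht Teil des ursprünglichen Spicks) berechnet $m$ und $Delta m$ mit den Werten aus dem Beispiel:

```python
import math

# Numeric check of the worst-case (systematic) error propagation example:
# m = F_H / (g * sin(alpha)) with the measured values from the text.
F_H, dF_H = 10.0, 0.3            # N
g, dg = 9.81, 0.03               # m/s^2
alpha = math.radians(30.0)       # Achtung: Delta alpha muss in Bogenmass sein!
dalpha = math.radians(2.0)

m = F_H / (g * math.sin(alpha))

# partial derivatives of m with respect to g, alpha and F_H
dm_dg = -F_H / (g**2 * math.sin(alpha))
dm_dalpha = -F_H * math.cos(alpha) / (g * math.sin(alpha)**2)
dm_dF = 1.0 / (g * math.sin(alpha))

# worst-case bound: sum of the absolute contributions
dm = abs(dm_dg * dg) + abs(dm_dalpha * dalpha) + abs(dm_dF * dF_H)
print(round(m, 4), round(dm, 3))   # 2.0387 0.191
```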
https://github.com/VZkxr/Typst
https://raw.githubusercontent.com/VZkxr/Typst/master/Portada/portada.typ
typst
#set page(paper: "us-letter", margin: (x: 37pt) ,background:[ #set align(left) #rect(width:26%, height: 100%, fill: rgb("003D64"))]) #set align(center) #grid(columns: (3cm, 1fr))[ #image("UNAM-color.png", width: 100%) #v(1fr) #grid(columns: (0.1cm, 0.1cm, 0.1cm), column-gutter: 0.025cm)[ #line(angle:90deg, length: 16cm, stroke:luma(255))][ #line(angle:90deg, length: 16cm, stroke:luma(255))][ #line(angle:90deg, length: 16cm, stroke:luma(255))] #v(1fr) #image("FC-color.png", width: 100%) ][ #set align(center) #v(1cm) UNIVERSIDAD NACIONAL AUTÓNOMA DE MÉXICO #line(length: 10cm, stroke: luma(0)) FACULTAD DE CIENCIAS #v(1fr) *Tarea* #v(1fr) #grid()[ #set align(left) Integrantes: #v(12pt) <NAME> <NAME> <NAME> <NAME> <NAME> Battlepass ] #v(.3cm) ]
https://github.com/Myriad-Dreamin/tinymist
https://raw.githubusercontent.com/Myriad-Dreamin/tinymist/main/syntaxes/textmate/tests/unit/expr/nary.typ
typst
Apache License 2.0
#let a = none; #red = red #red == red #(a = red) #(red == red) #a = red #a += red #(a += red) #not a #(not a) #(a and red) #(a or red) #a and red
https://github.com/xkevio/parcio-typst
https://raw.githubusercontent.com/xkevio/parcio-typst/main/parcio-thesis/chapters/eval/eval.typ
typst
MIT License
#import "../../template/template.typ": section = Evaluation<eval> _In this chapter, ..._ \ \ == Listings #figure(caption: "Caption")[ ```c printf("Hello World!\n"); // Comment for (int i = 0; i < m; i++) { for (int j = 0; j < n; j++) { sum += 'a'; } } ``` ]<lst:hello-world> You can also refer to listings (@lst:hello-world). \ \ #section[Summary] #lorem(80)
https://github.com/gabrielluizep/klaro-ifsc-sj
https://raw.githubusercontent.com/gabrielluizep/klaro-ifsc-sj/main/lib.typ
typst
MIT License
#let report( title: "Typst IFSC", subtitle: none, authors: ("<NAME>",), date: none, doc, ) = { // Define metadados do documento set document(title: title, author: authors) set page( numbering: "1", paper: "a4", margin: (top: 3cm, bottom: 2cm, left: 3cm, right: 2cm), ) set text(size: 12pt) // TODO: verificar se há necessidade de colocar espaçamento de 1.5 set par( first-line-indent: 1.5cm, justify: true, leading: 0.65em, linebreaks: "optimized", ) set heading(numbering: "1.") set math.equation(numbering: "(1)") align(center)[ #image("assets/ifsc-v.png", width: 10em) ] align(horizon + center)[ #text(20pt, title, weight: "bold") #v(1em) #text(subtitle, weight: "regular") ] align(bottom + left)[ #text(list(..authors, marker: "", body-indent: 0pt), weight: "semibold") #text(date) ] pagebreak() show outline.entry.where(level: 1): it => { strong(it) } // TODO: Verificar maneira melhor de alterar espaçamento entre titulo e corpo outline(title: [Sumário #v(1em)], indent: 2em) pagebreak() doc }
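Um esboço mínimo de uso do template (hipotético — título, autores e data são apenas ilustrativos; assume-se que o arquivo acima está salvo como `lib.typ` ao lado do documento):

```typst
#import "lib.typ": report

#show: report.with(
  title: "Relatório de Exemplo",
  subtitle: "Disciplina XYZ",
  authors: ("Fulana de Tal", "Beltrano Silva"),
  date: "1 de janeiro de 2024",
)

= Introdução
#lorem(30)
```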
https://github.com/TheRiceCold/resume
https://raw.githubusercontent.com/TheRiceCold/resume/main/modules/skills.typ
typst
#import "../src/template.typ": * #cvSection("Skills") #cvSkill( type: [Spoken languages], info: [English, Filipino], ) #cvSkill( type: [Prog. languages], info: [JavaScript/TypeScript, Ruby, C\#, Java, Nix, Python, Sass, Bash], ) #cvSkill( type: [Frameworks], info: [React Native, NextJS, AstroJS, Ruby on Rails], ) #cvSkill( type: [Databases], info: [PostgreSQL, MongoDB, DynamoDB, Firestore, Redis], ) #cvSkill( type: [Libraries], info: [React.js, Vue, Express.js, Redux, Zustand, Tailwind], ) #cvSkill( type: [DevOps], info: [Linux, Nix, Docker/Docker Compose, Jenkins, Git Actions], )
https://github.com/ClazyChen/Table-Tennis-Rankings
https://raw.githubusercontent.com/ClazyChen/Table-Tennis-Rankings/main/history/2013/WS-05.typ
typst
#set text(font: ("Courier New", "NSimSun")) #figure( caption: "Women's Singles (1 - 32)", table( columns: 4, [Ranking], [Player], [Country/Region], [Rating], [1], [LIU Shiwen], [CHN], [3367], [2], [DING Ning], [CHN], [3337], [3], [LI Xiaoxia], [CHN], [3285], [4], [GUO Yan], [CHN], [3238], [5], [GUO Yue], [CHN], [3146], [6], [WU Yang], [CHN], [3145], [7], [FENG Yalan], [CHN], [2947], [8], [#text(gray, "FAN Ying")], [CHN], [2942], [9], [ISHIKAWA Kasumi], [JPN], [2929], [10], [ZHU Yuling], [MAC], [2904], [11], [CHEN Meng], [CHN], [2881], [12], [JIANG Huajun], [HKG], [2872], [13], [FENG Tianwei], [SGP], [2852], [14], [WEN Jia], [CHN], [2850], [15], [SHEN Yanfei], [ESP], [2840], [16], [SUH Hyo Won], [KOR], [2823], [17], [PAVLOVICH Viktoria], [BLR], [2822], [18], [#text(gray, "DANG Yeseo")], [KOR], [2819], [19], [KIM Kyungah], [KOR], [2812], [20], [YANG Ha Eun], [KOR], [2809], [21], [SEOK Hajung], [KOR], [2803], [22], [#text(gray, "WANG Yuegu")], [SGP], [2788], [23], [#text(gray, "LI Jiawei")], [SGP], [2768], [24], [LI Jiao], [NED], [2755], [25], [FUKUHARA Ai], [JPN], [2751], [26], [LANG Kristin], [GER], [2748], [27], [#text(gray, "CHANG Chenchen")], [CHN], [2746], [28], [LIU Jia], [AUT], [2745], [29], [NI Xia Lian], [LUX], [2737], [30], [LI Xiaodan], [CHN], [2736], [31], [FUJII Hiroko], [JPN], [2722], [32], [SAMARA Elizabeta], [ROU], [2709], ) )#pagebreak() #set text(font: ("Courier New", "NSimSun")) #figure( caption: "Women's Singles (33 - 64)", table( columns: 4, [Ranking], [Player], [Country/Region], [Rating], [33], [MON<NAME> Daniela], [ROU], [2706], [34], [LI Jie], [NED], [2705], [35], [LI Qian], [POL], [2704], [36], [MOON Hyunjung], [KOR], [2700], [37], [VACENOVSKA Iveta], [CZE], [2693], [38], [XIAN Yifang], [FRA], [2687], [39], [ZHAO Yan], [CHN], [2678], [40], [CHENG I-Ching], [TPE], [2677], [41], [LI Xue], [FRA], [2670], [42], [WANG Xuan], [CHN], [2666], [43], [TIKHOMIROVA Anna], [RUS], [2665], [44], [PESOTSKA Margaryta], [UKR], [2662], [45], [JEON Jihee], 
[KOR], [2653], [46], [#text(gray, "PARK Miyoung")], [KOR], [2644], [47], [<NAME>], [JPN], [2641], [48], [TIE Yana], [HKG], [2639], [49], [<NAME>], [GER], [2638], [50], [WU Jiaduo], [GER], [2616], [51], [<NAME>ayaka], [JPN], [2615], [52], [<NAME>], [HUN], [2606], [53], [<NAME>], [SWE], [2605], [54], [YOON Sunae], [KOR], [2602], [55], [POTA Georgina], [HUN], [2598], [56], [YU Mengyu], [SGP], [2591], [57], [LEE Ho Ching], [HKG], [2588], [58], [WAKAMIYA Misako], [JPN], [2586], [59], [CHOI Moonyoung], [KOR], [2580], [60], [SHAN Xiaona], [GER], [2574], [61], [#text(gray, "SUN Beibei")], [SGP], [2572], [62], [LEE Eunhee], [KOR], [2569], [63], [LOVAS Petra], [HUN], [2554], [64], [RI Mi Gyong], [PRK], [2554], ) )#pagebreak() #set text(font: ("Courier New", "NSimSun")) #figure( caption: "Women's Singles (65 - 96)", table( columns: 4, [Ranking], [Player], [Country/Region], [Rating], [65], [<NAME>], [BRA], [2549], [66], [KIM Jong], [PRK], [2542], [67], [<NAME>], [TPE], [2541], [68], [<NAME>], [AUT], [2534], [69], [<NAME>], [KOR], [2531], [70], [PASKAUSKIENE Ruta], [LTU], [2530], [71], [<NAME>], [ESP], [2530], [72], [LIN Ye], [SGP], [2527], [73], [PARTYKA Natalia], [POL], [2524], [74], [#text(gray, "WU Xue")], [DOM], [2522], [75], [KOMWONG Nanthana], [THA], [2519], [76], [NG Wing Nam], [HKG], [2515], [77], [ZHENG Jiaqi], [USA], [2509], [78], [WINTER Sabine], [GER], [2504], [79], [NONAKA Yuki], [JPN], [2504], [80], [STRBIKOVA Renata], [CZE], [2503], [81], [TAN Wenling], [ITA], [2503], [82], [<NAME>], [GER], [2499], [83], [RI Myong Sun], [PRK], [2499], [84], [FUKUOKA Haruna], [JPN], [2498], [85], [SOLJA Petrissa], [GER], [2497], [86], [#text(gray, "MOLNAR Cornelia")], [CRO], [2496], [87], [LEE I-Chen], [TPE], [2492], [88], [NOSKOVA Yana], [RUS], [2491], [89], [HUANG Yi-Hua], [TPE], [2490], [90], [BILENKO Tetyana], [UKR], [2490], [91], [BALAZOVA Barbora], [SVK], [2488], [92], [STEFANOVA Nikoleta], [ITA], [2487], [93], [HAPONOVA Hanna], [UKR], [2485], [94], [PARK Youngsook], [KOR], 
[2483], [95], [TOTH Krisztina], [HUN], [2471], [96], [STEFANSKA Kinga], [POL], [2470], ) )#pagebreak() #set text(font: ("Courier New", "NSimSun")) #figure( caption: "Women's Singles (97 - 128)", table( columns: 4, [Ranking], [Player], [Country/Region], [Rating], [97], [MAEDA Miyu], [JPN], [2469], [98], [LI Chunli], [NZL], [2467], [99], [LIN Chia-Hui], [TPE], [2467], [100], [#text(gray, "BOROS Tamara")], [CRO], [2461], [101], [GU Yuting], [CHN], [2457], [102], [HU Melek], [TUR], [2454], [103], [WANG Chen], [CHN], [2453], [104], [KANG Misoon], [KOR], [2452], [105], [#text(gray, "RAO Jingwen")], [CHN], [2450], [106], [ISHIGAKI Yuka], [JPN], [2448], [107], [ERDELJI Anamaria], [SRB], [2446], [108], [<NAME>], [JPN], [2443], [109], [MATSUZAWA Marina], [JPN], [2441], [110], [PAVLOVICH Veronika], [BLR], [2439], [111], [#text(gray, "TANIOKA Ayuka")], [JPN], [2438], [112], [FADEEVA Oxana], [RUS], [2438], [113], [YAMANASHI Yuri], [JPN], [2436], [114], [MIKHAILOVA Polina], [RUS], [2432], [115], [SKOV Mie], [DEN], [2431], [116], [CHOI Jeongmin], [KOR], [2430], [117], [<NAME> Fang], [AUS], [2427], [118], [CECHOVA Dana], [CZE], [2427], [119], [DVORAK Galia], [ESP], [2423], [120], [KIM Hye Song], [PRK], [2422], [121], [TIAN Yuan], [CRO], [2420], [122], [#text(gray, "KIM Junghyun")], [KOR], [2420], [123], [<NAME>], [FRA], [2419], [124], [<NAME>], [JPN], [2418], [125], [<NAME>], [SVK], [2417], [126], [<NAME>], [HKG], [2417], [127], [<NAME>], [SRB], [2415], [128], [<NAME>], [MEX], [2414], ) )
https://github.com/typst-jp/typst-jp.github.io
https://raw.githubusercontent.com/typst-jp/typst-jp.github.io/main/tools/test-helper/README.md
markdown
Apache License 2.0
# Test helper This is a small VS Code extension that helps with managing Typst's test suite. When installed, for all `.typ` files in the `tests` directory, the following Code Lens buttons will appear above every test's name: - View: Opens the output and reference image of a test to the side. - Run: Runs the test and shows the results to the side. - Save: Runs the test with `--update` to save the reference image. - Terminal: Runs the test in the integrated terminal. In the side panel opened by the Code Lens buttons, there are a few menu buttons at the top right: - Refresh: Reloads the panel to reflect changes to the images. - Run: Runs the test and shows the results. - Save: Runs the test with `--update` to save the reference image. ## Installation In order for VS Code to run the extension with its built-in [Node](https://nodejs.org) engine, you first need to build it from source. Navigate to the `test-helper` directory and build the extension: ```bash npm install # Install the dependencies. npm run build # Build the extension from source. ``` Then, you can easily install it (and keep it up-to-date) via VS Code's UI: - Go to View > Command Palette or press Cmd/Ctrl+P, - In the drop-down list, pick the command "Developer: Install Extension from Location", - Select this `test-helper` directory in the file explorer dialogue box. VS Code will add the extension's path to `~/.vscode/extensions/extensions.json` (or `%USERPROFILE%\.vscode\extensions\extensions.json` on Windows).
https://github.com/Quaternijkon/notebook
https://raw.githubusercontent.com/Quaternijkon/notebook/main/content/数据结构与算法/.chapter-数据结构/字符串/字符串转换整数.typ
typst
#import "../../../../lib.typ":* === #Title( title: [字符串转换整数], reflink: "https://leetcode.cn/problems/string-to-integer-atoi/description/", level: 2, )<字符串转换整数> #note( title: [ 字符串转换整数 ], description: [ 请你来实现一个 myAtoi(string s) 函数,使其能将字符串转换成一个 32 位有符号整数。 函数 myAtoi(string s) 的算法如下: 空格:读入字符串并丢弃无用的前导空格(" ") 符号:检查下一个字符(假设还未到字符末尾)为 '-' 还是 '+'。如果两者都不存在,则假定结果为正。 转换:通过跳过前置零来读取该整数,直到遇到非数字字符或到达字符串的结尾。如果没有读取数字,则结果为0。 舍入:如果整数超过 32 位有符号整数范围 $[-2^(31), 2^(31)-1]$ ,需要截断这个整数,使其保持在这个范围内。具体来说,小于 $-2^(31)$ 的整数应该被舍入为 $-2^(31)$ ,大于 $2^(31)-1$ 的整数应该被舍入为 $2^(31)-1$ 。 返回整数作为最终结果。 ], examples: ([ 输入:s = "42" 输出:42 解释: ```typc 带下划线的字符是所读的内容,插入符号是当前读入位置。 第 1 步:"42"(当前没有读入字符,因为没有前导空格) ^ 第 2 步:"42"(当前没有读入字符,因为这里不存在 '-' 或者 '+') ^ 第 3 步:"42"(读入 "42") ^ ``` ],[ 输入:s = " -042" 输出:-42 解释: ```typc 第 1 步:" -042"(读入前导空格,但忽视掉) ^ 第 2 步:" -042"(读入 '-' 字符,所以结果应该是负数) ^ 第 3 步:" -042"(读入 "042",在结果中忽略前导零) ^ ``` ],[ 输入:s = "1337c0d3" 输出:1337 解释: ```typc 第 1 步:"1337c0d3"(当前没有读入字符,因为没有前导空格) ^ 第 2 步:"1337c0d3"(当前没有读入字符,因为这里不存在 '-' 或者 '+') ^ 第 3 步:"1337c0d3"(读入 "1337";由于下一个字符不是一个数字,所以读入停止) ^ ``` ],[ 输入:s = "0-1" 输出:0 解释: ```typc 第 1 步:"0-1" (当前没有读入字符,因为没有前导空格) ^ 第 2 步:"0-1" (当前没有读入字符,因为这里不存在 '-' 或者 '+') ^ 第 3 步:"0-1" (读入 "0";由于下一个字符不是一个数字,所以读入停止) ^ ``` ],[ 输入:s = "words and 987" 输出:0 解释: ```typc 读取在第一个非数字字符“w”处停止。 ``` ] ), tips: [ - $0 <= s."length" <= 200$ - s 由英文字母(大写和小写)、数字(0-9)、' '、'+'、'-' 和 '.' 
组成 ], solutions: ( ( name:[字符串解析], text:[ 根据题意,有以下四种字符需要考虑: + 首部空格: 删除之即可。 + 符号位: 三种情况,即 “+”、“−”、“无符号”;新建一个变量保存符号位,返回前判断正负即可。 + 非数字字符: 遇到首个非数字的字符时,应立即返回。 + 数字字符: + 字符转数字: “此数字的 ASCII 码” 与 “0 的 ASCII 码” 相减即可。 + 数字拼接: 若从左向右遍历数字,设当前位字符为 c ,当前位数字为 x ,数字结果为 res ,则数字拼接公式为:$"res" = 10 times "res" + x$,其中 $x = "ascii"(c) - "ascii"("'0'")$ *数字越界处理:* 题目要求返回的数值范围应在 $[-2^(31), 2^(31)-1]$ ,因此需要考虑数字越界问题。而由于题目指出 环境只能存储 32 位大小的有符号整数 ,因此判断数字越界时,要始终保持 res 在 int 类型的取值范围内。 ],code:[ ```cpp class Solution { public: int myAtoi(string s) { int i = 0; int n = s.size(); // 去除前导空格 while (i < n && s[i] == ' ') { i++; } // 判断是否为空字符串 if (i == n) { return 0; } int flag = 1; if (s[i] == '-') { flag = -1; i++; } else if (s[i] == '+') { i++; } int ans = 0; // 读取数字 while (i < n) { if (s[i] < '0' || s[i] > '9') {// 遇到非数字字符则退出 break; } int digit = s[i] - '0'; if (ans > (INT_MAX - digit) / 10) {// 判断是否溢出 return flag == 1 ? INT_MAX : INT_MIN; } ans = ans * 10 + digit; i++; } return flag * ans; } }; ``` ]), ), gain:none, )
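下面是一个按同样思路写成的 Python 版本(示意性质,非题解原文;溢出判断与上面 C++ 代码中 `ans > (INT_MAX - digit) / 10` 的写法一致):

```python
INT_MAX, INT_MIN = 2**31 - 1, -2**31

def my_atoi(s: str) -> int:
    i, n = 0, len(s)
    while i < n and s[i] == ' ':          # 1. 丢弃前导空格
        i += 1
    sign = 1
    if i < n and s[i] in '+-':            # 2. 读取可选符号位
        sign = -1 if s[i] == '-' else 1
        i += 1
    ans = 0
    while i < n and s[i].isdigit():       # 3. 逐位拼接数字
        digit = ord(s[i]) - ord('0')
        if ans > (INT_MAX - digit) // 10: # 4. 先判断是否溢出,再拼接
            return INT_MAX if sign == 1 else INT_MIN
        ans = ans * 10 + digit
        i += 1
    return sign * ans
```

注意第 4 步在拼接之前比较 `ans` 与 `(INT_MAX - digit) // 10`,因此累加过程始终不会越界。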
https://github.com/42CrMo4/InvLabel
https://raw.githubusercontent.com/42CrMo4/InvLabel/main/small.typ
typst
MIT License
// Import necessary libraries #import "@preview/tiaoma:0.1.0": qrcode // Read CSV file #let results = csv("part.csv", delimiter: ";") // Set the page size and margins #set page( width: 25.91mm, height: 13mm, margin: ( y: 0mm, x: 0.5mm, ) ) // Set the number of columns on the page #set page(columns: 2) // Set the default text size #set text(5pt) // Loop through the CSV data #for c in results [ // Center-align text horizontally and set the position #align(center + horizon)[ // Create a QR code with part information #let x = "{\"" + c.at(4) + "\":" + c.at(0) +"}" #qrcode(x, height: 10.8mm) #let y = c.at(4) + ": " + c.at(0) #pad(top:-1.5mm, text(y)) // Display the part name and description = #c.at(1) \ #c.at(2) ] ]
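The QR payload built above is the string `{"<field>":<id>}`. A small Python sketch can sanity-check such payloads outside Typst (the column layout `id;name;description;qty;field` is an assumption about `part.csv`, not taken from the repository):

```python
import csv
import io
import json

# Hypothetical rows mimicking part.csv (assumed layout: id;name;description;qty;field)
sample = "42;M3 Screw;DIN 912 A2;10;part\n7;Bearing;608ZZ;4;part\n"

payloads = []
for row in csv.reader(io.StringIO(sample), delimiter=";"):
    # Same concatenation as the Typst template: {"<field>":<id>}
    payloads.append('{"' + row[4] + '":' + row[0] + "}")
```

Note that the template inserts `c.at(0)` unquoted, so the payload is valid JSON only when the id column is numeric; non-numeric ids would still encode fine as a plain QR string, just not as JSON.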
https://github.com/PhotonQuantum/UofTNotes
https://raw.githubusercontent.com/PhotonQuantum/UofTNotes/master/src/CSC2126H/LEC0101_Overview.typ
typst
#import "/sty.typ": * #show: template.with( title: [Types & Effects: Overview], short_title: [CSC2126H LEC0101], description: [ Notes based on lectures for CSC 2126H\ (Topics in PL: Types and Effects)\ at the University of Toronto by Professor <NAME>, Fall 2024 ], date: datetime(year: 2024, month: 09, day: 09), ) #quote(block: true, quotes: true)[The goal of types and effects is *to reason about program properties*.] = Introduction This lecture is intended as a general overview of all topics covered in the course. We'll cover six topics in this course, and generally, they can be classified as follows: / Statics: Substructural Types, DT / Modularity: Module Systems (OCaml, SML) / Efficiency: Modal Types (so there exist specific ways to ask the compiler to generate efficient code) / Effects: Effect Types (tyck, also includes monads), Effect Handlers (algebraic properties, cool!) = Questions There are three basic questions to ask when talking about types and effects: / What is a _program_?: syntax, typing, computation / What is its _property_?: soundness (progress & preservation), normalization, equivalence, decidability (type checking) / How _decidable_?: type inference (e.g. System F is undecidable), type equivalence (e.g. generally undecidable for DT) In addition, we will also discuss additional questions for each topic. = Substructural Types *Motivation*: Resource management (e.g. files, memory) / Program: where a resource is used in a restricted way (ordering/use count) / Property: eliminate invalid states == Categories *E* Exchange *W* Weakening *C* Contraction / Ordered: ? / Linear (E): must be used exactly once / Affine (E W): must be used at most once / Relevant (E C): must be used at least once / Normal (E W C): can be used arbitrarily And, we will try to answer the following questions: + How to enforce restrictions with typing? + How to design dynamics? + Any other useful scenario? (ref. ATAPL) = Dependent Types *Motivation*: Expressive types (e.g. 
length-indexed Vec) / Program: terms in types (`Vec 3 Int`) / Decidability: type equivalence is generally undecidable And, we will try to answer the following questions: + How to write programs in DT? (consider C.H., e.g. Rocq, Agda, ...) + How to compile DT programs? (erasure vs. proof-preserving transformations?) (_this is what we mainly focus on in this course, and papers on this topic could be hard to read. Use caution!_) = Module Systems *Motivation*: Modularity (e.g. assemble well-specified components, recall ML modules) / Program: a modular system assembled from separate components / Property: modularity, data abstraction (not allowing users to access the exact types of a module, only its methods) And, we will try to answer the following questions: + How to design its _core language_? + Higher-order modules? (e.g. Module functors in OCaml) First-class modules? (_This is where abstract types could go wrong_) + Problems of complex module systems? (_LightQuantum: like what?_) = Modal Types (& Code Gen) *Motivation*: Tell compilers how to specialize programs (e.g. _staging_) (_So, we focus more on code generation in this course_) / Program: annotated with modal types that indicate _stages_ / Property: codegen properties (e.g. well-typedness: $forall c: "Program", "well-typed" c -> "well-typed" #raw("translate")\(c)$) And, we will try to answer the following questions: + Logic foundation? (_temporal logic_/contextual modal logic, _we focus on temporal logic in this course_) + Properties? (equivalence between original and generated code) + Practical applications? (e.g. efficient compilers by Futamura projections) = Effect Types *Motivation*: Reason about effects (e.g. IO, exceptions) (hsk: monads) / Program: track computational effects (e.g. monads) / Property: safety (capture effects and ordering) And, we will try to answer the following questions: + What are the different effect systems? + How to reason about effects? + Applications? (e.g. 
optimization by leveraging pureness / no entanglement property?) = Effect Handlers *Motivation*: effectful programs with algebraic properties (e.g. Koka/OCaml) / Program: ditto / Property: ditto And, we will try to answer the following questions: + What is the _algebraic_ part of algeff handlers? + monads vs algeffs? + How to design a practical language with effect handlers? (e.g. Koka)
https://github.com/VisualFP/docs
https://raw.githubusercontent.com/VisualFP/docs/main/SA/project_documentation/content/time_tracking.typ
typst
#import "@preview/cetz:0.1.2": canvas, chart = Time Tracking The amount of time available for the project was calculated as follows: $ 30 "hours per ECTS point" times 8 "ECTS points per member" &= 240 "hours per member" \ 240 "hours per member" times 2 "members" &= 480 "hours" $ Both members tracked their time on the project each week, totaling approximately the available hours. A detailed graphic is shown in @time-tracking-table. #figure( image("../static/time_spent.png"), caption: "Time spent per week for each member." ) <time-tracking-table> It is to be noted that some work went into the project before the semester began, i.e., the definition of the project's task description. @time-tracking-table depicts this as Week 0.
https://github.com/typst/packages
https://raw.githubusercontent.com/typst/packages/main/packages/preview/lovelace/0.1.0/examples/custom-keywords.typ
typst
Apache License 2.0
#set page(width: auto, height: auto, margin: 1em) #import "../lib.typ": * #show: setup-lovelace #pseudocode( $x <- a$, [*repeat until convergence*], ind, $x <- (x + a/x) / 2$, ded, [*return* $x$] )
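The pseudocode typeset above is Heron's method (Newton's iteration) for computing $sqrt(a)$. A minimal executable sketch of the same loop, assuming $a > 0$:

```python
def heron_sqrt(a: float, tol: float = 1e-12) -> float:
    """Newton/Heron iteration for the square root of a > 0."""
    x = a                       # x <- a
    while True:                 # repeat until convergence
        nxt = (x + a / x) / 2   # x <- (x + a/x) / 2
        if abs(nxt - x) < tol:
            return nxt          # return x
        x = nxt
```

Each iteration roughly doubles the number of correct digits, so the loop converges after only a handful of steps for moderate `a`.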
https://github.com/Isaac-Fate/booxtyp
https://raw.githubusercontent.com/Isaac-Fate/booxtyp/master/src/counters.typ
typst
Apache License 2.0
#let chapter-counter = counter("chapter") #let section-counter = counter("section") #let theorem-counter = counter("theorem") #let definition-counter = counter("definition") #let example-counter = counter("example") #let exercise-counter = counter("exercise") #let equation-counter = counter("equation") #let figure-counter = counter("figure")
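A minimal sketch of how one of these counters might be consumed elsewhere in the book (illustrative only; on recent Typst versions `counter.display` must run inside a `context` block, while older versions omit the keyword):

```typst
#import "counters.typ": theorem-counter

// Illustrative theorem environment built on the shared counter
#let theorem(body) = block[
  #theorem-counter.step()
  *Theorem #context theorem-counter.display().* #body
]

#theorem[Every natural number is either even or odd.]
```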
https://github.com/Aadamandersson/typst-analyzer
https://raw.githubusercontent.com/Aadamandersson/typst-analyzer/main/components/syntax/test_data/parser/ok/while_expr.typ
typst
Apache License 2.0
#while true {} #while true []
https://github.com/gbrivady/typst-templates
https://raw.githubusercontent.com/gbrivady/typst-templates/main/examples/report-long-example.typ
typst
#import "../report-long.typ": project, appendix #show: project.with( title: "Long Report Title", subtitle: "Subtitle", author: "<NAME>", language: "en", ) = First section #lorem(10) == First subsection #lorem(20) == Second subsection #lorem(40) === Sub-subsection #lorem(100) ==== Sub-sub-sub section... This section part is not in the table of content #pagebreak() = Second section #lorem(150) #pagebreak() #show: appendix.with() = First appendix section #lorem(150) == First appendix subsection Appendix subsections are not in the table of content #pagebreak() = Second appendix section #lorem(150)
https://github.com/cadojo/correspondence
https://raw.githubusercontent.com/cadojo/correspondence/main/README.md
markdown
MIT License
# ✍️ `correspondence` _Typst templates for resumes, cover letters, application statements, and articles!_ ## Usage Add this repository as a submodule, then add a template under `src`. Note that each subdirectory under `src` is a self-contained template. ```typst // // Example // #import "correspondence/src/dear.typ": statement, affiliation, contact #let sender = affiliation( given: "First", middle: "M", family: "Last", preferred: "Nickname" ) #let contact = contact( email: link( "mailto:<EMAIL>", `<EMAIL>` ), ) #let hl(content) = { set text(rgb("#ddcc00"), weight: "bold") content } #let theme = rgb("#ddcc00") #show heading: text => smallcaps(text) #show: statement.with( sender: sender, contact: contact, theme: theme, title: "Personal Statement", ) This is a personal statement! ``` ## License All content in this repository is covered by the top-level [MIT License](/LICENSE), **except** for the icons under `src/icons`. The icons are **not** my creation, and are covered by their own [MIT License](/src/icons/ICONOIR).
https://github.com/SidneyLYZhang/learnTypst
https://raw.githubusercontent.com/SidneyLYZhang/learnTypst/main/Documentation/Sources/002_formatting.typ
typst
#set text(font:("Consolas", "Source Han Sans SC")) #set text(lang: "zh") #show emph: text.with(font: ("Linux Libertine","STKaiti")) #show link: text.with(fill: color.blue) = 把文档格式化处理 到目前为止,你已经写了一份包含一些文字、几个方程式和图片的报告。然而,它看起来仍然非常朴素。你的助教还不知道你正在使用一个新的排版系统,而你希望你的报告能够与其他学生的提交作业相匹配。在本章中,我们将学习如何使用Typst的样式系统来格式化你的报告。 == 设定规则 正如我们在上一章中所看到的,Typst有一些函数用于_插入_内容(例如 #link("https://typst.app/docs/reference/visualize/image/")[`image`] 函数),还有一些函数用于_操作_接收到的内容参数(例如 #link("https://typst.app/docs/reference/layout/align/")[`align`] 函数)。当你想要,比如说,使报告两端对齐时,你的第一反应可能是寻找一个能够实现这一功能的函数,并用它包裹整个文档。 #box(height: 150pt, columns(2, gutter: 11pt)[ ```typst #par(justify: true)[ = Background In the case of glaciers, fluid dynamics principles can be used to understand how the movement and behaviour of the ice is influenced by factors such as temperature, pressure, and the presence of other fluids (such as water). ]``` #align(center, image("images/2-right-yulan-1.png")) ]) 等等,函数的所有参数不是应该都在圆括号内指定吗?为什么在圆括号之后还有一组方括号包含内容?答案是,由于在Typst中向函数传递内容是如此常见的操作,因此有一种特殊的语法:不必将内容放在参数列表内,你可以直接在常规参数后的方括号中写入内容,这样可以节省标点符号。 如上所示,这种方法确实有效。#link("https://typst.app/docs/reference/model/par/")[`par`] 函数使其中的所有段落两端对齐。然而,用无数函数包裹文档并选择性地在原地应用样式很快就会变得繁琐。 幸运的是,Typst有一个更优雅的解决方案。通过 _设置规则_ ,你可以将样式属性应用于所有出现的内容上。要写一个设置规则,你需要输入 `set` 关键字,后跟你想要设置属性的函数名,然后在圆括号中列出参数。 #box(height: 150pt, columns(2, gutter: 11pt)[ ```typst #set par(justify: true) = Background In the case of glaciers, fluid dynamics principles can be used to understand how the movement and behaviour of the ice is influenced by factors such as temperature, pressure, and the presence of other fluids (such as water).``` #align(center, image("images/2-right-yulan-1.png")) ]) #align( center, align( left, block( width: 90%, fill: aqua, stroke : color.blue, radius: 8pt, inset: 18pt )[ *信息 INFO* 想要用更专业的术语了解这里发生了什么吗?\ \ 设置规则可以被理解为对某个函数的部分参数设定默认值, 这些默认值将应用于该函数的所有后续使用中。 ] ) ) == 自动完成面板 如果您按照说明操作并尝试了应用中的一些功能,可能您已经注意到,每当输入一个 `#` 字符时,都会弹出一个面板,显示可用的函数,以及在参数列表中可用的参数。这就是自动完成面板。在撰写文档时,它可以非常有用:您可以通过按 
`Return` 键应用其建议,或者使用箭头键导航到所需的完成项。可以通过按 `Escape` 键关闭面板,通过输入 `#` 或按下 #box(inset: 2pt, radius: 3pt, stroke: black)[`Ctrl`] + #box(inset: 2pt, radius: 3pt, stroke: black)[`Space`] 再次打开。请使用自动完成面板发现函数的正确参数。大多数建议都附有一小段描述,说明它们的作用。 #align(center, image("images/2-formatting-autocomplete.png", width: 95%)) == 页面设定 返回_设定规则_的部分:编写规则时,您可以根据要设置样式的元素类型选择函数。以下是一些常用的在设置规则中使用的函数的列表: - #link("https://typst.app/docs/reference/text/text/")[`text`] 用于设置文本的字体族、大小、颜色和其他属性 - #link("https://typst.app/docs/reference/layout/page/")[`page`] 用于设置页面大小、边距、标题、启用列和页脚 - #link("https://typst.app/docs/reference/model/par/")[`par`] 用于对齐段落、设置行距等 - #link("https://typst.app/docs/reference/model/heading/")[`heading`] 用于设置标题的外观并启用编号 - #link("https://typst.app/docs/reference/model/document/")[`document`] 用于设置 PDF 输出中包含的元数据,例如标题和作者 并非所有函数参数都可以设置。通常,只有告诉函数如何执行操作的参数才可以设置,而不是告诉它用什么来执行的参数。函数参考页面会指示哪些参数是可以设置的。 让我们为我们的文档添加一些更多的样式。我们想要更大的边距和衬线字体。为了演示目的,我们还将设置另一个页面大小。 #box(height: 500pt, columns(2, gutter: 4pt)[ ```typst #set text( font: "New Computer Modern", size: 10pt ) #set page( paper: "a6", margin: (x: 1.8cm, y: 1.5cm), ) #set par( justify: true, leading: 0.52em, ) = Introduction In this report, we will explore the various factors that influence fluid dynamics in glaciers and how they contribute to the formation and behaviour of these natural structures. ... 
#align(center + bottom)[ #image("glacier.jpg", width: 70%) *Glaciers form an important part of the earth's climate system.* ] ``` #align(center, image("images/2-right-yulan-2.png")) ]) 这里有一些需要注意的事项。 首先是#link("https://typst.app/docs/reference/layout/page/")[设置页面]的规则。它接收两个参数:页面大小和页面边距。页面大小是一个字符串。Typst 接受许多#link("https://typst.app/docs/reference/layout/page/#parameters-paper")[标准页面大小],但您也可以指定自定义页面大小。边距以#link("https://typst.app/docs/reference/foundations/dictionary/")[字典]的形式指定。字典是键值对的集合。在这种情况下,键是 `x` 和 `y`,值分别是水平和垂直边距。我们也可以通过传递一个具有键 `left`、`right`、`top` 和 `bottom` 的字典来指定每边的单独边距。 接下来是#link("https://typst.app/docs/reference/text/text/")[设置文本]的规则。在这里,我们将字体大小设置为 `10pt`,字体族设置为 #text(fill:orange)[`"New Computer Modern"`]。Typst 应用程序附带了许多可供您的文档使用的字体。当您处于文本函数的参数列表中时,可以在自动完成面板中发现可用的字体。 我们还设置了行间距(也称为前导):它以#link("https://typst.app/docs/reference/layout/length/")[长度值(`length`)]的形式指定,我们使用 `em` 单位来相对于字体大小指定前导:`1em` 等于当前字体大小(默认为 `11pt`)。 最后,我们通过向我们的中心对齐添加垂直对齐来底部对齐我们的图像。垂直和水平对齐可以使用 `+` 运算符组合以获得 2D 对齐。 == 一点便捷 为了更清晰地组织我们的文档,我们现在想对标题进行编号。我们可以通过设置#link("https://typst.app/docs/reference/model/heading/")[`heading`]函数的`numbering`参数来实现这一点。 #box(height: 150pt, columns(2, gutter: 4pt)[ ```typst #set heading(numbering: "1.") = Introduction #lorem(10) == Background #lorem(12) == Methods #lorem(15) ``` #align(center, image("images/2-right-yulan-3.png")) ]) 我们将字符串 #text(fill:orange)[`"1."`] 指定为 `numbering` 参数。这告诉 Typst 用阿拉伯数字对标题进行编号,并在每个级别的数字之间放置一个点。我们还可以#link("https://typst.app/docs/reference/model/numbering/")[使用字母、罗马数字和符号]来对我们的标题进行编号。 #box(height: 150pt, columns(2, gutter: 4pt)[ ```typst #set heading(numbering: "1.a") = Introduction #lorem(10) == Background #lorem(12) == Methods #lorem(15) ``` #align(center, image("images/2-right-yulan-4.png")) ]) 这个例子还使用了 `lorem` 函数来生成一些占位符文本。这个函数接受一个数字作为参数,并生成相同数量的 _Lorem Ipsum_ 文本。 #align( center, align( left, block( width: 90%, fill: aqua, stroke : color.blue, radius: 8pt, inset: 18pt )[ *信息 INFO* 您是否想知道为什么标题和文本设置规则适用于所有文本和标题, 即使它们没有使用相应的函数生成?\ Typst 在您编写 `= 
Conclusion` 时内部调用标题函数。 实际上,函数调用 `#heading[Conclusion]` 等同于上面的标题标记。 其他标记元素也是如此,它们只是对应函数调用的语法糖。 ] ) ) == 显示规则(Show rules) 您对这个成果已经相当满意了。但有最后一件事需要解决:您所撰写的报告是为一个更大的项目服务的,该项目的名称在文章中出现时,都应该伴随相应的Logo。 您可以有以下选择。可以使用搜索和替换在每个logo实例之前添加 ```typst #image("logo.svg")``` 调用来实现。不过这样处理很繁琐。相反,您可以定义一个#link("https://typst.app/docs/reference/foundations/function/#defining-functions")[自定义函数],该函数始终生成带有其logo的内容。但是,有一个更简单的方法: 使用_显示规则_,您可以重新定义 Typst 显示某些元素的方式。您指定 Typst 应该如何显示哪些元素,以及它们应该看起来如何。显示规则可以应用于文本实例、函数,以及整个文档。 #box(height: 140pt, columns(2, gutter: 4pt)[ ```typst #show "ArtosFlow": name => box[ #box(image( "logo.svg", height: 0.7em, )) #name ] This report is embedded in the ArtosFlow project. ArtosFlow is a project of the Artos Institute.``` #align(center, image("images/2-right-yulan-5.png")) ]) 这个例子中包含大量新语法:我们首先键入`show`关键字,随后跟上希望以不同方式显示的文本字符串及冒号。接着,我们定义一个函数,该函数将要展示的内容作为参数。在此例中,我们将该参数命名为`name`。现在,我们可以在函数体中使用`name`变量来输出`ArtosFlow`名称。通过我们的_显示规则_,将在名称前添加logo图片,并将结果置于一个框(box)中,以防止logo与名称之间出现换行。图片同样被置于框内,确保它不会单独占据一个段落。 对首个`box`函数及`image`函数的调用并未要求前置`#`号,原因在于它们并非直接嵌入到标记语言中。当Typst期望接收代码而非标记时,访问函数、关键字及变量并不需要使用领先的`#`号。这一规则在参数列表、函数定义以及#link("https://typst.app/docs/reference/scripting/")[代码块]中均可见。 == 回顾 你现在已掌握在Typst文档中应用基础排版的方法。你学会了如何设定字体、对齐段落、调整页面尺寸,以及为标题添加编号等规范操作。此外,你还掌握了使用基础显示规则,以改变文档中文字呈现方式的技巧。 你已经提交了报告,你的上司对它非常满意,以至于他们希望将其改编成一篇会议论文!在接下来的部分里,我们将学习如何运用更高级的显示规则和功能,将你的文档精心打磨成一篇专业的学术论文。
https://github.com/nath-roset/suiviProjetHekzamGUI
https://raw.githubusercontent.com/nath-roset/suiviProjetHekzamGUI/master/typ%20sources/spec_fonctionnelle.typ
typst
Apache License 2.0
#import "template.typ": base #show: doc => base( // left_header:[], right_header : [Equipe scan-GUI-Automne-2024], title: [Projet Hekzam-GUI], subtitle: [Formalisation des spécifications fonctionnelles], version: [0.2], doc ) //TODO : make it so requirement list numbering keeps incrementing properly past other headings //can't query numbered lists (enums), currently a Typst limitation ? // https://github.com/typst/typst/issues/1356 #set enum( numbering: ("EX_1.a :"), tight: false, full: true, spacing: 2em, ) = Portée du projet: 12/03/2024 - Il nous a été demandé de plus se porter sur la partie `import` et manipulations des scans et moins sur la partie création des sujet, le document reflètera donc cela. = Definitions : (*HYPOTHÈSES DE DÉBUT DE PROJET*) - *cadrage* : en pourcentage, métrique qui mesure la qualité de la numérisation - *fichier source* : fichier texte suivant la #link("https://typst.app/docs/reference")[spécification] du langage Typst , ideally with a *.typ* extension - *projet* : Un projet dans ce contexte est un *dossier* (?) contenant AU MOINS un fichier source et AU MOINS un fichier de configuration pour le projet (?) au format ??? (TOML ? JSON ?) 
- *sujet* : fichier pdf obtenu en compilant un fichier source avec Typst, peut comprendre plusieurs pages - *programme* : Logiciel Hekzam complet#link("https://github.com/hekzam")[(GUI + parser + generator)], supposé fonctionnel - *feuille*: image extraite d'un fichier pdf; scan d'une feuille associé à une page d'un sujet - *copie*: ensemble de feuilles regroupées, l'ensemble formant une copie d'examen associée à un étudiant - *champs*: ce qu'on appelait question auparavant (un QCM = un champ, un True-False = un champ) = Description des exigences == Création du sujet + L'utilisateur doit pouvoir accéder aux autres fonctionnalités du programme après avoir satisfait l'une de ces pré-conditions + L'utilisateur doit pouvoir *importer* un "projet" localisé dans un dossier + L'utilisateur doit pouvoir *créer* un projet en sélectionnant un fichier source + L'utilisateur devrait pouvoir *modifier* le fichier source à partir du GUI du programme + L'utilisateur doit pouvoir *compiler* un fichier source après l'avoir importé + Il doit être possible de préciser combien de sujets uniques devront être générés par le programme + #strike[Il doit être possible de *randomiser* ou non *l'ordre des questions* du sujet] + #strike[Il doit être possible de *randomiser* ou non *l'ordre des réponses* à chaque question] + *Réunion du 12/03* chaque sujet a les mêmes questions, dans le même ordre et chaque exemplaire (chaque feuille de chaque exemplaire aussi) possède un QR code unique + Le bouton "*Générer*" (ou équivalent) sera grisé si aucun fichier source n'est donné + Les erreurs de compilation (= erreur de syntaxe dans le fichier source) devraient être remontées à l'utilisateur pour correction == Import des copies scannées #set enum(start: 6) //why do I have to do this + L'utilisateur devrait pouvoir importer : - une *liste de scans de copies* au format à déterminer (png ?) - un *dossier* contenant les scans des copies correspondant à un projet. 
_In that case, can we assume that every item in the folder is a scan associated with the project?_ + If the user imports the wrong folder of scans, the program should (?) tell the user that the scanned copies do not match the currently open project + If exactly *the same scan* is imported twice, the program must detect the duplicates automatically == Matching sheets to copies #set enum(start: 9) + The user should be able to import an "*attendance sheet*" (the list of expected copies) that the program should be able to parse + The program should detect whether copies are missing, based on the number of entries on the attendance sheet - The program should (?) be able to match the attendance-sheet entries with the scanned copies - we can assume we have a CSV file with (perhaps) - either a list of the people present - or a list of enrolled students with a boolean saying who was enrolled + The program must assign each copy (= a set of scanned sheets) to an identifier == Copy analysis #set enum(start: 12) + The user should be able to assess scan quality through numeric indicators (RMSE, confidence level...) + The program must bring to the user's attention any scanned copies that are *unreadable* by the parser or fall below an *acceptability threshold* + The program must be able to display only the syntactic information: - is this box filled or not? checked/grayed out/filled or not?
- this particular box is supposed to be a zero - the parser produces a list of recognized characters; the most likely ones must be shown first - when analyzing the copies, the boxes we are least sure about are shown first - it should be possible to walk through all the copies and display them to the user one after another + The program must be able to display only the important semantic information: (TO BE COMPLETED) - if several incompatible boxes (e.g. true/false) are checked, the user must be able to view the copy - what should be done with annotations? == Grading the copies #set enum(start: 16) + The program must return information on EVERY question (for example, to assess the quality of the distractors): what % of students ticked this box for that question - we do not necessarily want to assign a grade to the copy + The program should return *as output*: files (CSV? *JSON*? HTML?) to *visualize the data* and *indicate whether a human changed a value* + The program may be able to grade and assign a score to each copy, but this is not mandatory + the program should let the user *modify the grading* of each copy if needed + The program must display *the confidence level* of the grading of each copy + The program must *bring to the user's attention* any *annotations* present on certain copies + *If they exist*, the program must display the annotations visually to the user == Program flow #set enum(start: 23) + it must be possible to *start over from the beginning* at any time + it must be possible to return to the previous step at any time + it is preferable to avoid creating multiple windows + The program must have a keyboard-navigable interface, among many other requirements
https://github.com/typst-community/mantodea
https://raw.githubusercontent.com/typst-community/mantodea/main/tests/example/test.typ
typst
MIT License
#import "/src/example.typ" #show heading.where(level: 1): it => pagebreak(weak: true) + it #set page(height: auto, header: counter(footnote).update(0)) = Code #example.frame( ```typst #block(inset: 1em, stroke: red)[Hello World] ```, ) = Eval #example.code-result( ```typst #block(inset: 1em, stroke: red)[Hello World] ```, )
https://github.com/kazuyanagimoto/quarto-slides-typst
https://raw.githubusercontent.com/kazuyanagimoto/quarto-slides-typst/main/_extensions/kazuyanagimoto/clean/typst-template.typ
typst
MIT License
#import "@preview/touying:0.5.2": * #import "@preview/fontawesome:0.3.0": * #let new-section-slide(level: 1, title) = touying-slide-wrapper(self => { let body = { set align(left + horizon) set text(size: 2.5em, fill: self.colors.primary, weight: "bold") title } self = utils.merge-dicts( self, config-page(margin: (left: 2em, top: -0.25em)), ) touying-slide(self: self, body) }) #let slide( config: (:), repeat: auto, setting: body => body, composer: auto, ..bodies, ) = touying-slide-wrapper(self => { // set page let header(self) = { set align(top) show: components.cell.with(inset: (x: 2em, top: 1.5em)) set text( size: 1.4em, fill: self.colors.neutral-darkest, weight: self.store.font-weight-heading, font: self.store.font-heading, ) utils.call-or-display(self, self.store.header) } let footer(self) = { set align(bottom) show: pad.with(.4em) set text(fill: self.colors.neutral-darkest, size: .8em) utils.call-or-display(self, self.store.footer) h(1fr) context utils.slide-counter.display() + " / " + utils.last-slide-number } // Set the slide let self = utils.merge-dicts( self, config-page( header: header, footer: footer, ), ) touying-slide(self: self, config: config, repeat: repeat, setting: setting, composer: composer, ..bodies) }) #let clean-theme( aspect-ratio: "16-9", handout: false, header: utils.display-current-heading(level: 2), footer: [], font-size: 20pt, font-heading: ("Roboto"), font-body: ("Roboto"), font-weight-heading: "light", font-weight-body: "light", font-weight-title: "light", font-size-title: 1.4em, font-size-subtitle: 1em, color-jet: "131516", color-accent: "107895", color-accent2: "9a2515", ..args, body, ) = { set text(size: font-size, font: font-body, fill: rgb(color-jet), weight: font-weight-body) show: touying-slides.with( config-page( paper: "presentation-" + aspect-ratio, margin: (top: 4em, bottom: 1.5em, x: 2em), ), config-common( slide-fn: slide, new-section-slide-fn: new-section-slide, handout: handout, enable-frozen-states-and-counters: false 
// https://github.com/touying-typ/touying/issues/72 ), config-methods( init: (self: none, body) => { show link: set text(fill: self.colors.primary) // Unordered List set list( indent: 1em, marker: (text(fill: self.colors.primary)[ #sym.triangle.filled ], text(fill: self.colors.primary)[ #sym.arrow]), ) // Ordered List set enum( indent: 1em, full: true, // necessary to receive all numbers at once, so we can know which level we are at numbering: (..nums) => { let nums = nums.pos() let num = nums.last() let level = nums.len() // format for current level let format = ("1.", "i.", "a.").at(calc.min(2, level - 1)) let result = numbering(format, num) text(fill: self.colors.primary, result) } ) // Slide Subtitle show heading.where(level: 3): title => { set text( size: 1.1em, fill: self.colors.primary, font: font-body, weight: "light", style: "italic", ) block(inset: (top: -0.5em, bottom: 0.25em))[#title] } set bibliography(title: none) body }, alert: (self: none, it) => text(fill: self.colors.secondary, it), cover: (self: none, body) => box(scale(x: 0%, body)), // Hack for enum and list ), config-colors( primary: rgb(color-accent), secondary: rgb(color-accent2), neutral-lightest: rgb("#ffffff"), neutral-darkest: rgb(color-jet), ), // save the variables for later use config-store( header: header, footer: footer, font-heading: font-heading, font-size-title: font-size-title, font-size-subtitle: font-size-subtitle, font-weight-title: font-weight-title, font-weight-heading: font-weight-heading, ..args, ), ) body } #let title-slide( ..args, ) = touying-slide-wrapper(self => { let info = self.info + args.named() let body = { set align(left + horizon) block( inset: (y: 1em), [#text(size: self.store.font-size-title, fill: self.colors.neutral-darkest, weight: self.store.font-weight-title, info.title) #if info.subtitle != none { linebreak() v(-0.3em) text(size: self.store.font-size-subtitle, style: "italic", fill: self.colors.primary, info.subtitle) }] ) set text(fill: 
self.colors.neutral-darkest) if info.authors != none { let count = info.authors.len() let ncols = calc.min(count, 3) grid( columns: (1fr,) * ncols, row-gutter: 1.5em, ..info.authors.map(author => align(left)[ #text(size: 1em, weight: "regular")[#author.name] #if author.orcid != [] { show link: set text(size: 0.7em, fill: rgb("a6ce39")) link("https://orcid.org/" + author.orcid.text)[#fa-orcid()] } \ #text(size: 0.7em, style: "italic")[ #show link: set text(size: 0.9em, fill: self.colors.neutral-darkest) #link("mailto:" + author.email.children.map(email => email.text).join())[#author.email] ] \ #text(size: 0.8em, style: "italic")[#author.affiliation] ] ) ) } if info.date != none { block(if type(info.date) == datetime { info.date.display(self.datetime-format) } else { info.date }) } } self = utils.merge-dicts( self, config-common(freeze-slide-counter: true) ) touying-slide(self: self, body) }) // Custom Functions #let fg = (fill: rgb("e64173"), it) => text(fill: fill, it) #let bg = (fill: rgb("e64173"), it) => highlight( fill: fill, radius: 2pt, extent: 0.2em, it ) #let _button(self: none, it) = { box(inset: 5pt, radius: 3pt, fill: self.colors.primary)[ #set text(size: 0.5em, fill: white) #sym.triangle.filled.r #it ] } #let button(it) = touying-fn-wrapper(_button.with(it))
https://github.com/jgm/typst-hs
https://raw.githubusercontent.com/jgm/typst-hs/main/test/typ/compiler/import-01.typ
typst
Other
// An item import. #import "module.typ": item #test(item(1, 2), 3) // Code mode { import "module.typ": b test(b, 1) } // A wildcard import. #import "module.typ": * // It exists now! #test(d, 3)
https://github.com/Myriad-Dreamin/typst.ts
https://raw.githubusercontent.com/Myriad-Dreamin/typst.ts/main/.github/ISSUE_TEMPLATE/bug_report.md
markdown
Apache License 2.0
--- name: Bug report about: Create a report to help us improve title: '' labels: '' assignees: '' --- **Describe the bug** A clear and concise description of what the bug is. **To Reproduce** Steps to reproduce the behavior (Library test): 1. Declare a rust test 'fn test_xxx() { ... }' or typescript test 'it_should(function () { ... })' 2. Execute test function 3. See error Or (Shell code): 1. Attach necessary resources to execute shell code 2. Put down some shell code 'cargo run --bin typst-ts-cli -- ...' 3. Execute shell code 4. See error **Expected behavior** A clear and concise description of what you expected to happen. **Desktop (please complete the following information):** - OS: [e.g. iOS] - Browser [e.g. chrome, safari] - Version [e.g. 22] **Smartphone (please complete the following information):** - Device: [e.g. iPhone6] - OS: [e.g. iOS8.1] - Browser [e.g. stock browser, safari] - Version [e.g. 22] **Package/Software version (using command `typst-ts-cli --VV full` and paste down the output):** ```plain typst-ts-cli version 0.1.x features: pdf raster serde_json serde_rmp tir web_socket cli-ver: 0.1.x cli-rev: 99814b4c... cli-build: x86_64... with ... mode at ... rustc-ver: 1.69.x rustc-rev: 84c898d6... rustc-build: x86_64... with LLVM 15.x ``` **Additional context** Add any other context about the problem here.
https://github.com/ziyuanding/typst_fishy_memory
https://raw.githubusercontent.com/ziyuanding/typst_fishy_memory/main/fishy_memory.typ
typst
#set text( font: "New Computer Modern", size: 12pt, lang: "en" ) #set page( paper: "a4", numbering: "1", margin: ( top: 2cm, bottom: 2cm, x: 3cm ) ) ///////////////////////////////// Thanks @wrzian https://github.com/typst/typst/discussions/2585#discussioncomment-10318563 ///////////////////////////////////////////////////////// /////////////// I modified a bit to make it also work with cite and other refs. /////// For equation, need to set a numbering first #set math.equation(numbering: "(1)") #let separate-supplement-style(supp, num) = { text(supp) [ ] box(num, stroke: 1pt + red, outset: (bottom:1.5pt, x:.5pt, y:.5pt)) } #show cite: it => { if it.supplement == none { box(it, stroke: green + 1pt) } } #show ref: it => { let (element, target, supplement: supp) = it.fields() // cite doesn't have element if element == none { return it } let non_cite_ref = element.fields() let supp = if supp == auto { non_cite_ref.supplement } else { supp } let num = context { // apply the heading's numbering style let head-count = counter(heading).at(target) numbering(non_cite_ref.numbering, ..head-count) } link(target, separate-supplement-style(supp, num)) } ////////////////////////////////////////////////////////////////////////////////////////// #show raw: it => block( width: 100%, fill: luma(240), inset: 4pt, radius: 4pt, text(size: 7pt, it) ) #set heading(numbering: "1.1") #set par(justify: true) #set enum(numbering: "(a)", indent: 5pt) #set math.equation(numbering: "(1)") #v(135pt) #let today = datetime.today() #align(center, text(19pt, weight: "bold", font: "Latin Modern Roman 17",)[ this is a center aligned title \ #v(2.5em) #today.display("[month repr:long] [day], [year]") ]) $ "loss" = - sum_(n=1)^n (y_i log hat(y)_(theta,i) + (1-y_i)log(1-hat(y)_(theta,i)) ) $ <NLLLoss> #figure( image("figure/schedule.png", width: 80%), caption: [ this is a caption ], ) <this_is_cite_ref> #figure( table( columns: (5), inset: 3pt, align: horizon, table.header( [], [*negative*], [*neutral*], 
[*positive*],[*total*] ), [train],[2084],[3077],[1929],[7090], [validation],[259],[388],[241],[888], [test],[263],[393],[245],[901] ), caption: [hahaha] ) <MAMS_stat> #figure( grid( columns: (1), rows: (auto, auto), gutter: 0em, [ #image("diagram/A_attn_service.png", width: 100%) ], [ #image("diagram/A_attn_food.png", width: 100%) ], ), caption: [up to down: service_negative, predicted as neutral; food_positive, predicted as neutral;] ) <A_visual_attn> @zhangAlgorithmOptimizedMRNA2023 #cite(label("10.1093/nar/gkad1168")) #bibliography("my_refs.bib", style: "ieee")
https://github.com/alperari/cyber-physical-systems
https://raw.githubusercontent.com/alperari/cyber-physical-systems/main/week4/solution.typ
typst
#import "@preview/diagraph:0.1.2": * #set text( size: 15pt, ) #set page( paper: "a4", margin: (x: 1.8cm, y: 1.5cm), ) #align(center, text(21pt)[ *Cyber Physical Systems - Discrete Models \ Exercise Sheet 4 Solution* ]) #grid( columns: (1fr, 1fr), align(center)[ <NAME> \ <EMAIL> ], align(center)[ <NAME> \ <EMAIL> ] ) #align(center)[ November 12, 2023 ] = Exercise 1: Railroad Crossing == A Because the train's _enter_ action and the gate's _lower_ action are not synchronized, the model allows the train to enter before the gate is lowered. == B #align(center)[ #grid( columns: (1fr, 1fr, 1fr), raw-render(height: 3in, width: 2in)[```dot digraph { rankdir=TD; label="Train'"; node [fixedsize=true, width=0.75]; start_train [style=invis; width=0, height=0, fixedsize=true]; far [shape=circle]; near [shape=circle]; in [shape=circle]; {rank=same; start_train; far; near} start_train -> far; far -> near [label="approach"]; near -> in [label="enter"]; in -> far [label="exit"]; } ```], raw-render(height: 3in, width: 2in)[```dot digraph { rankdir=TD; label="Controller'"; node [fixedsize=true, width=0.75]; start_controller [style=invis; width=0, height=0, fixedsize=true]; 0 [shape=circle]; 1 [shape=circle]; 2 [shape=circle]; 3 [shape=circle]; dummy [style=invis]; 4 [shape=circle]; {rank=same; 0} {rank=same; 4; 1} {rank=same; 2; 3} start_controller -> 0; 0 -> 1 [label="approach"]; 1 -> 2 [label="lower"]; 3 -> 2 [label="enter", dir=back]; 3 -> 4 [label="exit"]; 4 -> 0 [xlabel="raise"]; # for layout 0 -> dummy [style=invis]; } ```], raw-render(height: 3in, width: 2in)[```dot digraph { rankdir=TD; label="Gate'"; node [fixedsize=true, width=0.75]; start_gate [style=invis; width=0, height=0, fixedsize=true]; up [shape=circle]; down [shape=circle]; {rank=min; start_gate} start_gate -> up up -> down [xlabel="lower"]; up -> down [style=invis]; up -> down [label="raise", dir=back]; } ```] ) ] == C #raw-render[```dot digraph { rankdir=TD; label="Train' || Controller' || Gate'" node [fixedsize=true, 
width=1]; start [style=invis, width=0, height=0, fixedsize=true]; node [shape=Mrecord]; far0up [label="<f0> far | <f1> 0 | <f2> up"]; near1up [label="<f0> near | <f1> 1 | <f2> up"]; near2down [label="<f0> near | <f1> 2 | <f2> down"]; in3down [label="<f0> in | <f1> 3 | <f2> down"]; far4down [label="<f0> far | <f1> 4 | <f2> down"]; {rank=min; start} start -> far0up; far0up -> near1up [label="approach"]; near1up -> near2down [label="lower"]; near2down -> in3down [label="enter"]; in3down -> far4down [label="exit"]; far4down -> far0up [label="raise"]; } ```] = Exercise 2: Hardware Circuit and Transition System #raw-render()[```dot digraph { rankdir=LR; node [fixedsize=true, width=1]; start1 [style=invis, width=0, height=0, fixedsize=true]; start2 [style=invis, width=0, height=0, fixedsize=true]; x0r0 [shape=circle, label="x = 0 \n r = 0", xlabel="{y}"]; x0r1 [shape=circle, label="x = 0 \n r = 1", xlabel="{r}"]; x1r0 [shape=circle, label="x = 1 \n r = 0", xlabel="{x, y}"]; x1r1 [shape=circle, label="x = 1 \n r = 1", xlabel="{x, r, y}"]; {rank=min; start1; start2} {rank=same; x0r1; x1r1} {rank=same; x0r0; x1r0} start1 -> x0r1; start2 -> x1r1; x1r1 -> x0r1; x0r1 -> x0r0; x1r0 -> x0r0; x0r0 -> x1r0; x0r1 -> x1r0; x0r0:s -> x0r0:s []; x1r0 -> x1r0; x1r1 -> x1r1; } ```] = Exercise 3: Parallelism - Interleaving == Part A #align(center)[ #grid( columns: (1fr, 1fr), raw-render()[```dot digraph { rankdir=TD; label = "P1"; node [fixedsize=true, width=1]; start [style=invis, width=0, height=0, fixedsize=true]; L11 [shape=circle]; L12 [shape=circle]; L13 [shape=circle]; start -> L11; L11 -> L12 [label="r1 := x + 1"]; L12 -> L13 [label="x := r1"]; } ```], raw-render()[```dot digraph { rankdir=TD; label = "P2"; node [fixedsize=true, width=1]; start [style=invis, width=0, height=0, fixedsize=true]; L21 [shape=circle]; L22 [shape=circle]; L23 [shape=circle]; start -> L21; L21 -> L22 [label="r2 := 3 * x"]; L22 -> L23 [label="x := r2"]; } ```], ) ] == Part B #raw-render()[```dot digraph { 
rankdir=TD; label = "P1 ||| P2"; node [fixedsize=true, width=1.2]; start [style=invis, width=0, height=0, fixedsize=true]; L11L21 [label="L11, L21", shape=circle]; L12L21 [label="L12, L21", shape=circle]; L11L22 [label="L11, L22", shape=circle]; L13L21 [label="L13, L21", shape=circle]; L12L22 [label="L12, L22", shape=circle]; L11L23 [label="L11, L23", shape=circle]; L13L22 [label="L13, L22", shape=circle]; L12L23 [label="L12, L23", shape=circle]; L13L23 [label="L13, L23", shape=circle]; {rank=min; start} {rank=same; L11L21} {rank=same; L12L21; L11L22} {rank=same; L13L21; L12L22; L11L23} {rank=same; L13L22; L12L23} start -> L11L21; L11L21 -> L12L21 [label="r1 := x + 1;"]; L11L21 -> L11L22 [label="r2 := 3 * x;"]; L12L21 -> L13L21 [label="x := r1;"]; L12L21 -> L12L22 [label="r2 := 3 * x;"]; L11L22 -> L12L22 [label="r1 := x + 1;"]; L11L22 -> L11L23 [label="x := r2;"]; L13L21 -> L13L22 [label="r2 := 3 * x;"]; L12L22 -> L13L22 [label="x := r1;"]; L12L22 -> L12L23 [label="x := r2;"]; L11L23 -> L12L23 [label="r1 := x + 1;"]; L13L22 -> L13L23 [label="x := r2;"]; L12L23 -> L13L23 [label="x := r1;"]; } ```] == Part C // Labels if necessary #raw-render()[```dot digraph { rankdir=TD; label = "T(P1 ||| P2)"; node [fixedsize=true, width=1.2]; start [style=invis, width=0, height=0, fixedsize=true]; L11L21x1r0r0 [label="L11, L21 \n x = 1 \n r1 = 0, r2 = 0", shape=circle]; L12L21x1r2r0 [label="L12, L21 \n x = 1 \n r1 = 2, r2 = 0", shape=circle]; L11L22x1r0r3 [label="L11, L22 \n x = 1 \n r1 = 0, r2 = 3", shape=circle]; L13L21x2r2r0 [label="L13, L21 \n x = 2 \n r1 = 2, r2 = 0", shape=circle]; L12L22x1r2r3 [label="L12, L22 \n x = 1 \n r1 = 2, r2 = 3", shape=circle]; L11L23x3r0r3 [label="L11, L23 \n x = 3 \n r1 = 0, r2 = 3", shape=circle]; L13L22x2r2r6 [label="L13, L22 \n x = 2 \n r1 = 2, r2 = 6", shape=circle]; L13L22x2r2r3 [label="L13, L22 \n x = 2 \n r1 = 2, r2 = 3", shape=circle]; L12L23x3r2r3 [label="L12, L23 \n x = 3 \n r1 = 2, r2 = 3", shape=circle]; L12L23x3r4r3 [label="L12, L23 
\n x = 3 \n r1 = 4, r2 = 3", shape=circle]; L13L23x6r2r6 [label="L13, L23 \n x = 6 \n r1 = 2, r2 = 6", shape=circle]; L13L23x3r2r3 [label="L13, L23 \n x = 3 \n r1 = 2, r2 = 3", shape=circle]; L13L23r2r2r3 [label="L13, L23 \n x = 2 \n r1 = 2, r2 = 3", shape=circle]; L13L23x4r4r3 [label="L13, L23 \n x = 4 \n r1 = 4, r2 = 3", shape=circle]; {rank=min; start} {rank=same; L12L21x1r2r0; L11L22x1r0r3} {rank=same; L13L21x2r2r0;L12L22x1r2r3; L11L23x3r0r3} {rank=same; L13L22x2r2r6; L13L22x2r2r3; L12L23x3r2r3; L12L23x3r4r3} {rank=same; L13L23x6r2r6; L13L23x3r2r3; L13L23r2r2r3; L13L23x4r4r3} start -> L11L21x1r0r0; L11L21x1r0r0 -> L12L21x1r2r0; L11L21x1r0r0 -> L11L22x1r0r3; L12L21x1r2r0 -> L13L21x2r2r0; L12L21x1r2r0 -> L12L22x1r2r3; L11L22x1r0r3 -> L12L22x1r2r3; L11L22x1r0r3 -> L11L23x3r0r3; L13L21x2r2r0 -> L13L22x2r2r6; L12L22x1r2r3 -> L13L22x2r2r3; L12L22x1r2r3 -> L12L23x3r2r3; L11L23x3r0r3 -> L12L23x3r4r3; L13L22x2r2r6 -> L13L23x6r2r6; L13L22x2r2r3 -> L13L23x3r2r3; L12L23x3r2r3 -> L13L23r2r2r3; L12L23x3r4r3 -> L13L23x4r4r3; } ```] // Edges with labels: /* L11L21x1r0r0 -> L12L21x1r2r0 [label="r1 := x + 1;"]; L11L21x1r0r0 -> L11L22x1r0r3 [label="r2 = 3 * x;"]; L12L21x1r2r0 -> L13L21x2r2r0 [label="x := r1;"]; L12L21x1r2r0 -> L12L22x1r2r3 [label="r2 := 3 * x;"]; L11L22x1r0r3 -> L12L22x1r2r3 [label="r1 := x + 1;"]; L11L22x1r0r3 -> L11L23x3r0r3 [label="x := r2;"]; L13L21x2r2r0 -> L13L22x2r2r6 [label="r2 := 3 * x;"]; L12L22x1r2r3 -> L13L22x2r2r3 [label="x := r1;"]; L12L22x1r2r3 -> L12L23x3r2r3 [label="x := r2;"]; L11L23x3r0r3 -> L12L23x3r4r3 [label="r1 := x + 1;"]; L13L22x2r2r6 -> L13L23x6r2r6 [label="x := r2;"]; L13L22x2r2r3 -> L13L23x3r2r3 [label="x := r2;"]; L12L23x3r2r3 -> L13L23r2r2r3 [label="x := r1;"]; L12L23x3r4r3 -> L13L23x4r4r3 [label="x := r1;"]; */
https://github.com/typst/packages
https://raw.githubusercontent.com/typst/packages/main/packages/preview/rivet/0.1.0/src/util.typ
typst
Apache License 2.0
#let z-fill(string, length) = { let filled = "0" * length + string return filled.slice(-length) }
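A brief usage sketch (hypothetical, not part of the package) may help: `z-fill` left-pads a string with zeros up to `length`, and because of `slice(-length)` an input longer than `length` keeps only its last `length` characters.

```typst
// Hypothetical usage sketch, assuming z-fill is imported from util.typ:
#import "util.typ": z-fill

#z-fill("7", 3)     // "007" — left-padded with zeros
#z-fill("42", 4)    // "0042"
#z-fill("12345", 3) // "345" — longer inputs are truncated to the last `length` characters
```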
https://github.com/Pablo-Gonzalez-Calderon/chic-header-package
https://raw.githubusercontent.com/Pablo-Gonzalez-Calderon/chic-header-package/main/README.md
markdown
MIT License
# Chic-header (v0.4.0) **Chic-header** (chic-hdr) is a Typst package for creating elegant headers and footers ## Usage To use this library through the Typst package manager (for Typst 0.6.0 or greater), write `#import "@preview/chic-hdr:0.4.0": *` at the beginning of your Typst file. Once imported, you can start using the package by writing the instruction `#show: chic.with()` and giving any of the chic functions inside the parentheses `()`. _**Important: If you are using a custom template that also needs the `#show` instruction to be applied, prefer to use `#show: chic()` after the template's `#show`.**_ For example, the code below... ```typst #import "@preview/chic-hdr:0.4.0": * #set page(paper: "a7") #show: chic.with( chic-footer( left-side: strong( link("mailto:<EMAIL>", "<EMAIL>") ), right-side: chic-page-number() ), chic-header( left-side: emph(chic-heading-name(fill: true)), right-side: smallcaps("Example") ), chic-separator(1pt), chic-offset(7pt), chic-height(1.5cm) ) = Introduction #lorem(30) == Details #lorem(70) ``` ...will look like this: <h3 align="center"> <img alt="Usage example" src="assets/usage.png" style="max-width: 100%; padding: 10px 10px; background-color: #E4E5EA; box-shadow: 1pt 1pt 10pt 0pt #AAAAAA; border-radius: 4pt"> </h3> ## Reference _Note: For a detailed explanation of the functions and parameters, see Chic-header's Manual.pdf._ While using `#show: chic.with()`, you can give the following parameters inside the parentheses: - `width`: Indicates the width of headers and footers throughout the document (default is `100%`). - `skip`: Which pages must be skipped when setting their header and footer. Other properties changed with `chic-height()` or `chic-offset()` are preserved. Giving a negative index skips the last pages, using the last page as index -1 (default is `()`). - `even`: Header and footer for even pages. Here, only `chic-header()`, `chic-footer()` and `chic-separator()` functions will take effect. 
Other functions must be given as an argument of `chic()`. - `odd`: Sets the header and footer for odd pages. Here, only `chic-header()`, `chic-footer()` and `chic-separator()` functions will take effect. Other functions must be given as an argument of `chic()`. - `..functions()`: These are a variable number of arguments that correspond to Chic-header’s style functions. ### Functions 1. `chic-header()` - Sets the header content. - `v-center`: Whether to vertically align the header content, or not (default is `false`). - `side-width`: Custom width for the sides. It can be a 3-element array, length or relative length (default is `none` and widths are set to ``1fr`` if a side is present). - `left-side`: Content displayed in the left side of the header (default is `none`). - `center-side`: Content displayed in the center of the header (default is `none`). - `right-side`: Content displayed in the right side of the header (default is `none`). 2. `chic-footer()` - Sets the footer content. - `v-center`: Whether to vertically align the footer content, or not (default is `false`). - `side-width`: Custom width for the sides. It can be a 3-element array, length or relative length (default is `none` and widths are set to ``1fr`` if a side is present). - `left-side`: Content displayed in the left side of the footer (default is `none`). - `center-side`: Content displayed in the center of the footer (default is `none`). - `right-side`: Content displayed in the right side of the footer (default is `none`). 3. `chic-separator()` - Sets the separator for either the header, the footer or both. - `on`: Where to apply the separator. It can be `"header"`, `"footer"` or `"both"` (default is `"both"`). - `outset`: Space around the separator beyond the page margins (default is `0pt`). - `gutter`: How much spacing to insert around the separator (default is `0.65em`). - (unnamed): A length for a `line()`, a stroke for a `line()`, or a custom content element. 4. 
`chic-styled-separator()` - Returns a pre-made custom separator for use in `chic-separator()` - `color`: Separator's color (default is `black`). - (unnamed): A string indicating the separator's style. It can be `"double-line"`, `"center-dot"`, `"bold-center"`, or `"flower-end"`. 5. `chic-height()` - Sets the height of either the header, the footer or both. - `on`: Where to change the height. It can be `"header"`, `"footer"` or `"both"` (default is `"both"`). - (unnamed): A relative length (the new height value). 6. `chic-offset()` - Sets the offset of either the header, the footer or both (relative to the page content). - `on`: Where to change the offset. It can be `"header"`, `"footer"` or `"both"` (default is `"both"`). - (unnamed): A relative length (the new offset value). 7. `chic-page-number()` - Returns the current page number. Useful for header and footer `sides`. It doesn’t take any parameters. 8. `chic-heading-name()` - Returns the next heading name in the `dir` direction. The heading must have a level lower than or equal to `level`. If there are no more headings in that direction, and `fill` is ``true``, then headings are sought in the other direction. - `dir`: Direction for searching the next heading: ``"next"`` (from the current page, get the next heading) or ``"prev"`` (from the current page, get the previous heading). Default is `"next"`. - `fill`: If there are no more headings in the `dir` direction, indicates whether to try to get a heading in the opposite direction (default is ``false``). - `level`: Up to what level of headings should this function search (default is ``2``). ## Gallery <h3 align="center"> <img alt="Example 1" src="assets/example-1.png" style="max-width: 100%; padding: 10px 10px; background-color: #E4E5EA; box-shadow: 1pt 1pt 10pt 0pt #AAAAAA; border-radius: 4pt"> </h3> _Header with `chic-heading-name()` at left, and `chic-page-number()` at right. 
There's a `chic-separator()` of `1pt` only for the header._ <h3 align="center"> <img alt="Example 2" src="assets/example-2.png" style="max-width: 100%; padding: 10px 10px; background-color: #E4E5EA; box-shadow: 1pt 1pt 10pt 0pt #AAAAAA; border-radius: 4pt"> </h3> _Footer with `chic-page-number()` at right, and a custom `chic-separator()` showing "end of page (No. page)" between 9 `~` symbols at each side._ ## Changelog ### Version 0.1.0 - Initial release - Implemented `chic-header()`, `chic-footer()`, `chic-separator()`, `chic-height()`, `chic-offset()`, `chic-page-number()`, and `chic-heading-name()` functions ### Version 0.2.0 _Thanks to Slashformotion (<https://github.com/slashformotion>) for noticing this version's bugs, and suggesting a vertical alignment for headers._ - Fix alignment error in `chic-header()` and `chic-footer()` - Add `v-center` option for `chic-header()` and `chic-footer()` - Add `outset` option for `chic-separator()` - Add `chic-styled-separator()` function ### Version 0.3.0 - Add `side-width` option for `chic-header()` and `chic-footer()` ### Version 0.4.0 _Thanks to David (<https://github.com/davidleejy>) for being interested in the package and giving feedback and ideas for new parameters_ - Update ``type()`` conditionals to meet Typst 0.8.0 standards - Add `dir`, `fill`, and `level` parameters to ``chic-heading-name()`` - Allow negative indexes for skipping final pages while using `skip` - Include some panic alerts for type mismatches - Upload manual code in the package repository
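As a small sketch (assuming the same `@preview/chic-hdr:0.4.0` import shown in the Usage section), a styled separator returned by `chic-styled-separator()` can be passed straight to `chic-separator()` as its unnamed argument:

```typst
#import "@preview/chic-hdr:0.4.0": *

#show: chic.with(
  chic-header(
    left-side: chic-heading-name(),
    right-side: chic-page-number(),
  ),
  // A pre-made "double-line" separator, applied to the header only.
  chic-separator(on: "header", chic-styled-separator("double-line")),
)
```

The `on: "header"` argument keeps the footer separator-free, matching the gallery example above.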
https://github.com/eternal-flame-AD/typstpp
https://raw.githubusercontent.com/eternal-flame-AD/typstpp/main/example.typ
typst
Apache License 2.0
#align(center)[ *Typstpp Demo* ] Load some libraries: ```r #| message: false library(tidyverse) ``` Then make a plot: ```r plot(iris) ``` Then try some Haskell: ```hs :{ fib :: Int -> Int fib 0 = 0 fib 1 = 1 fib n = fib (n-1) + fib (n-2) :} map fib [0..10] ``` Then make a table: ```r knitr::kable(head(iris)) ``` Mix some code, plots and tables in the same chunk: ```r factorial <- function(n) { if (n == 0) { return(1) } else { return(n * factorial(n - 1)) } } x <- 1:10 y <- sapply(x, factorial) plot(x, y, type = "l") print("↑ base R plot ↓ ggplot2 plot") ggplot(data.frame(x = x, y = y), aes(x, y)) + geom_line() + labs(title = "Factorial function", x = "x", y = "y") knitr::kable(data.frame(x = x, y = y)) ```
https://github.com/cnaak/blindex.typ
https://raw.githubusercontent.com/cnaak/blindex.typ/main/books.typ
typst
MIT License
//============================================================================================// // Biblical / Indexing Constants // //============================================================================================// //--------------------------------------------------------------------------------------------// // Biblical Literature Unique ID's // //--------------------------------------------------------------------------------------------// // Book's unique ID dictionary: // "full-english-name": unique integer ID #let bUID = ( // 10.00 - Pentateuch "Genesis": 1001, "Exodus": 1002, "Leviticus": 1003, "Numbers": 1004, "Deuteronomy": 1005, // 11.00 - OT Historical "Joshua": 1101, "Judges": 1102, "Ruth": 1103, "<NAME>": 1104, "<NAME>": 1105, "1 Kings": 1106, "2 Kings": 1107, "1 Chronicles": 1108, "2 Chronicles": 1109, "Ezra": 1110, "Nehemiah": 1111, "Esther": 1112, // 12.00 - Sapiential "Job": 1201, "Psalms": 1202, "Proverbs": 1203, "Ecclesiastes": 1204, "Song of Solomon": 1205, // 13.00 - OT Prophetic "Isaiah": 1301, "Jeremiah": 1302, "Lamentations": 1303, "Ezekiel": 1304, "Daniel": 1305, "Hosea": 1306, "Joel": 1307, "Amos": 1308, "Obadiah": 1309, "Jonah": 1310, "Micah": 1311, "Nahum": 1312, "Habakkuk": 1313, "Zephaniah": 1314, "Haggai": 1315, "Zechariah": 1316, "Malachi": 1317, // 14.00 - NT Gospels "Matthew": 1401, "Mark": 1402, "Luke": 1403, "John": 1404, // 15.00 - NT Historical "Acts": 1501, // 16.00 - NT Paul's letters "Romans": 1601, "1 Corinthians": 1602, "2 Corinthians": 1603, "Galatians": 1604, "Ephesians": 1605, "Philippians": 1606, "Colossians": 1607, "1 Thessalonians": 1608, "2 Thessalonians": 1609, "1 Timothy": 1610, "2 Timothy": 1611, "Titus": 1612, "Philemon": 1613, // 17.00 - NT Universal letters "Hebrews": 1700, "James": 1701, "1 Peter": 1702, "2 Peter": 1703, "1 John": 1704, "2 John": 1705, "3 John": 1706, "Jude": 1707, // 18.00 - NT Prophetic "Revelation": 1801, // 31.00 - LXX (vol.1) Deutero "1 Esdras": 3101, "Judith": 3102, 
"Tobit": 3103, "1 Maccabees": 3104, "2 Maccabees": 3105, "3 Maccabees": 3106, "4 Maccabees": 3107, "Additions to Esther": 3108, // 32.00 - LXX (vol.2) Deutero "Additional Psalm": 3201, "Ode": 3202, "Wisdom of Solomon": 3203, "Sirach": 3204, "Psalms of Solomon": 3205, "Baruch": 3206, "Letter of Jeremiah": 3207, "Susanna": 3208, "Bel and the Dragon": 3209, "Song of Three Youths": 3210, // 51.00 - Other OT Apocrypha (not in the LXX) "3 Esdras": 5101, "4 Esdras": 5102, "Prayer of Manasseh": 5103, ) // Create the reverse dictionary #let iBoo = (:) #for KV in bUID.pairs() { iBoo.insert(str(KV.at(1)), KV.at(0)) } //--------------------------------------------------------------------------------------------// // Book Sorting Resources // //--------------------------------------------------------------------------------------------// // Following the available information in the TOB - Traduction OEcuménique - Five (5) OT canons // are supported, i.e., the Hebrew, the Protestant, the Catholic, the Orthodox, and the TOB. The // NT canon is the same in all of these traditions.
// Book partial orderings #let pOrd = ( // The ordering of the Law (Torah/Pentateuch) books is the same in all 5 canons "Law": (1001, 1002, 1003, 1004, 1005,), // PROTESTANT OT CANON "OT-Protestant-Historical": (1101, 1102, 1103, 1104, 1105, 1106, 1107, 1108, 1109, 1110, 1111, 1112,), "OT-Protestant-Sapiential": (1201, 1202, 1203, 1204, 1205,), "OT-Protestant-Major-Prophets": (1301, 1302, 1303, 1304, 1305,), "OT-Protestant-Minor-Prophets": (1306, 1307, 1308, 1309, 1310, 1311, 1312, 1313, 1314, 1315, 1316, 1317,), // CATHOLIC OT CANON "OT-Catholic-Historical": (1101, 1102, 1103, 1104, 1105, 1106, 1107, 1108, 1109, 1110, 1111, 3103, 3102, 1112, 3108, 3104, 3105,), "OT-Catholic-Poetic": (1201, 1202, 1203, 1204, 1205, 3203, 3204,), "OT-Catholic-Major-Prophets": (1301, 1302, 1303, 3206, 1304, 1305, 3208, 3209,), // ORTHODOX OT CANON "OT-Orthodox-Historical": (1101, 1102, 1103, 1104, 1105, 1106, 1107, 1108, 1109, 5101, 1110, 1111, 3103, 3102, 1112, 3108, 3104, 3105, 3106,), "OT-Orthodox-Poetic": (1202, 1201, 1203, 1204, 1205, 3203, 3204,), "OT-Orthodox-Major-Prophets": (1301, 1302, 3206, 1303, 3207, 1304, 1305, 3208, 3209,), "OT-Orthodox-Minor-Prophets": (1306, 1308, 1311, 1307, 1309, 1310, 1312, 1313, 1314, 1315, 1316, 1317, 5102, 3107, 5103,), // TOB OT CANON "OT-TOB-Deuterocanonical": (3210, 3208, 3209, 3108, 3102, 3103, 3104, 3105, 3203, 3204, 3206, 3207, 5101, 5102, 3106, 3107, 5103, 3201,), // THE NEW TESTAMENT - same for all 5 considered traditions "Gospels": (1401, 1402, 1403, 1404,), "Acts": (1501,), "Paul-Letters": (1601, 1602, 1603, 1604, 1605, 1606, 1607, 1608, 1609, 1610, 1611, 1612, 1613,), "Universal-Letters": (1701, 1702, 1703, 1704, 1705, 1706, 1707,), "Revelation": (1801,), ) // HEBREW OT CANON #pOrd.insert("Neviim", (1101, 1102, 1104, 1105, 1106, 1107, 1301, 1302, 1304) + pOrd.at("OT-Protestant-Minor-Prophets")) #pOrd.insert("Ketuvim", (1202, 1201, 1203, 1103, 1205, 1204, 1303, 1112, 1305, 1110, 1111, 1108, 1109,)) // NEW TESTAMENT
#pOrd.insert("New-Testament", pOrd.at("Gospels") + pOrd.at("Acts") + pOrd.at("Paul-Letters") + pOrd.at("Universal-Letters") + pOrd.at("Revelation")) //--------------------------------------------------------------------------------------------// // Book Sorting Schemes // //--------------------------------------------------------------------------------------------// // Book sorting schemes #let bSort = (:) //····························································································// // code // //····························································································// // "code" scheme #bSort.insert("code", bUID.values().sorted()) //····························································································// // LXX // //····························································································// // "LXX" scheme #let tmpLXX = () #for val in bSort.at("code") { if (val < 1400) or ((val > 3000) and (val < 5000)) { tmpLXX.push(val) } } #bSort.insert("LXX", tmpLXX) //····························································································// // Greek-Bible // //····························································································// // "Greek-Bible" #bSort.insert("Greek-Bible", ( bSort.at("LXX") + pOrd.at("New-Testament") ).flatten()) //····························································································// // Hebrew-Tanakh, Hebrew-Bible // //····························································································// // "Hebrew-Tanakh" #bSort.insert("Hebrew-Tanakh", ( pOrd.at("Law") + pOrd.at("Neviim") + pOrd.at("Ketuvim") ).flatten()) // "Hebrew-Bible" #bSort.insert("Hebrew-Bible", ( bSort.at("Hebrew-Tanakh") + pOrd.at("New-Testament") ).flatten()) //····························································································// // Protestant-Bible // 
//····························································································// // "Protestant-Bible" #bSort.insert("Protestant-Bible", ( pOrd.at("Law") + pOrd.at("OT-Protestant-Historical") + pOrd.at("OT-Protestant-Sapiential") + pOrd.at("OT-Protestant-Major-Prophets") + pOrd.at("OT-Protestant-Minor-Prophets") + pOrd.at("New-Testament") ).flatten()) //····························································································// // Catholic-Bible // //····························································································// // "Catholic-Bible" #bSort.insert("Catholic-Bible", ( pOrd.at("Law") + pOrd.at("OT-Catholic-Historical") + pOrd.at("OT-Catholic-Poetic") + pOrd.at("OT-Catholic-Major-Prophets") + pOrd.at("OT-Protestant-Minor-Prophets") + pOrd.at("New-Testament") ).flatten()) //····························································································// // Orthodox-Bible // //····························································································// // "Orthodox-Bible" #bSort.insert("Orthodox-Bible", ( pOrd.at("Law") + pOrd.at("OT-Orthodox-Historical") + pOrd.at("OT-Orthodox-Poetic") + pOrd.at("OT-Orthodox-Major-Prophets") + pOrd.at("OT-Orthodox-Minor-Prophets") + pOrd.at("New-Testament") ).flatten()) //····························································································// // Oecumenic-Bible // //····························································································// // "Oecumenic-Bible" #bSort.insert("Oecumenic-Bible", ( bSort.at("Hebrew-Tanakh") + pOrd.at("OT-TOB-Deuterocanonical") + pOrd.at("New-Testament") ).flatten())
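The Typst code above follows one pattern throughout: a forward name-to-ID dictionary, a reverse lookup table derived from it, and canon orderings assembled by concatenating partial orderings. A minimal Python sketch of that same pattern, for clarity; the IDs below are a small excerpt, not the full dictionary:

```python
# Forward map, mirroring `bUID` in the Typst source (excerpt only).
b_uid = {"Genesis": 1001, "Exodus": 1002, "Matthew": 1401, "Mark": 1402}

# Reverse dictionary, mirroring the `iBoo` construction.
i_boo = {uid: name for name, uid in b_uid.items()}

# Partial orderings, mirroring `pOrd` (excerpt only).
p_ord = {
    "Law": [1001, 1002],
    "Gospels": [1401, 1402],
}

# A full canon ordering is just the concatenation of partial orderings,
# like the `bSort.insert(...)` calls above.
canon = p_ord["Law"] + p_ord["Gospels"]

assert i_boo[1001] == "Genesis"
assert [i_boo[uid] for uid in canon] == ["Genesis", "Exodus", "Matthew", "Mark"]
```

The reverse table makes ID-to-name resolution a single dictionary lookup, which is the same reason the Typst code keys `iBoo` by the stringified ID.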
https://github.com/lxl66566/my-college-files
https://raw.githubusercontent.com/lxl66566/my-college-files/main/信息科学与工程学院/互联网原理与技术/作业/理论5.typ
typst
The Unlicense
#import "template.typ": * #import "@preview/tablem:0.1.0": tablem #show: project.with( title: "Theory 5", authors: ( "absolutex", ) ) = P3 Consider the network below. For the indicated link costs, use Dijkstra's shortest-path algorithm to compute the shortest paths from x to all network nodes. Show how the algorithm works by computing a table similar to Table 5-1. #tablem[ |Step|N'|D(u),p(u)|D(v),p(v)|D(w),p(w)|D(y),p(y)|D(z),p(z)|D(t),p(t)| |---|---|---|---|---|---|---|---| |1 |x |$infinity$ |3,x |6,x |6,x |8,x |$infinity$ | |2|xv|6,v|3,x|6,x|6,x|8,x|7,v| |3|xvu|6,v|3,x|6,x|6,x|8,x|7,v| |4|xvuw|6,v|3,x|6,x|6,x|8,x|7,v| |5|xvuwy|6,v|3,x|6,x|6,x|8,x|7,v| |6|xvuwyt|6,v|3,x|6,x|6,x|8,x|7,v| |7|xvuwytz|6,v|3,x|6,x|6,x|8,x|7,v| ] = P5 Consider the network shown in the figure, and suppose each node initially knows the cost to each of its neighbors. Considering the distance-vector algorithm, give the distance table entries at node z. #tablem[ |Node|u|v|x|y|z| |---|---|---|---|---|---| | u | 0 | 1 | 4 | 2 | 6 | | v | 1 | 0 | 3 | 3 | 5 | | x | 4 | 3 | 0 | 3 | 2 | | y | 2 | 3 | 3 | 0 | 5 | | z | 6 | 5 | 2 | 5 | 0 | ]
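The P3 table tracks exactly the bookkeeping Dijkstra's algorithm maintains: the set N' of settled nodes and, per node, the current distance D and predecessor p. A sketch of that bookkeeping in Python; since the exercise's figure is not reproduced here, the edge costs below are hypothetical, chosen only to exercise the algorithm:

```python
import heapq

def dijkstra(graph, source):
    """Return (dist, pred): the D(.) and p(.) entries the table tracks."""
    dist = {source: 0}
    pred = {}
    settled = set()            # the set N' in the table's notation
    queue = [(0, source)]
    while queue:
        d, u = heapq.heappop(queue)
        if u in settled:
            continue
        settled.add(u)         # u joins N'; its distance is now final
        for v, w in graph[u].items():
            if v not in settled and d + w < dist.get(v, float("inf")):
                dist[v] = d + w        # relax edge (u, v)
                pred[v] = u
                heapq.heappush(queue, (d + w, v))
    return dist, pred

# Hypothetical undirected edge costs (NOT the figure from the exercise).
graph = {
    "x": {"v": 3, "w": 6},
    "v": {"x": 3, "u": 3, "t": 4},
    "w": {"x": 6},
    "u": {"v": 3},
    "t": {"v": 4},
}
dist, pred = dijkstra(graph, "x")
```

Each iteration of the `while` loop corresponds to one row of the table: the node with the smallest tentative distance is settled, and its neighbors' D and p entries are relaxed.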
https://github.com/coalg/notebook
https://raw.githubusercontent.com/coalg/notebook/main/exercises/dehnadi-test.typ
typst
#import "@preview/ttt-exam:0.1.0": * #import "@preview/wrap-it:0.1.0": * #import components: frame, field, point-tag #import "@preview/codelst:2.0.1": sourcecode #set text(lang:"ja", font: "<NAME>", weight: 300, size: 12pt) #set list(indent: 2em) #set enum(indent: 1em, numbering: "a)") #let question(body, points: none, number: auto) = { grid( inset: (0.5em, 0em), columns: (1fr, auto), column-gutter: 0.5em, _question(points: points)[ #context q-nr(style: if-auto-then(number, { if is-assignment() { "1)" } else { "1." } })) #body ], if points != none { place(end, dx: 1cm,point-tag(points)) } ) } #show link: underline #show raw.where(block: false): box.with( fill: luma(240), inset: (x: 3pt, y: 0pt), outset: (y: 3pt), radius: 2pt, ) // #show raw.where(block: true): block.with( // fill: luma(240), // inset: (x: 25pt, y: 5pt), // radius: 4pt, // ) = Dehnadiテスト このテストの意図については #link("https://www.eis.mdx.ac.uk/research/PhDArea/saeed/")[Saeed Dehnadiのページ] を参照せよ。#link("http://172.16.58.3/lectures/PRE/2008-05-Dol1.pdf")[大雑把な日本語の要約はこのあたり。] ただしこの論文については「プログラミングの才能を見分けるもの」という過剰主張のため撤回されている。しかしながら、講義においてプログラミングの読解・記述において最初から一貫したモデルを頭の中に持っているかどうかで、その後の授業学習がうまくいくかすでに決定されているという内容は未だ示唆的である。元の内容はCとJavaだがPythonに翻訳している。 #assignment[ テスト(1回目) #question(points: 1)[ 以下のプログラムを読み、a, bの結果として正しい答えを選択せよ。 #sourcecode[ ```py a = 10 b = 20 a = b ``` ] #table( columns: (auto, auto, auto), align: horizon, table.header( [選択肢], [a], [b] ), [ ① ] , [ a = 10 ], [ b = 10 ], [ ② ] , [ a = 30 ], [ b = 20 ], [ ③ ] , [ a = 0 ], [ b = 10 ], [ ④ ] , [ a = 20 ], [ b = 20 ], [ ⑤ ] , [ a = 0 ], [ b = 30 ], [ ⑥ ] , [ a = 10 ], [ b = 20 ], [ ⑦ ] , [ a = 20 ], [ b = 10 ], [ ⑧ ] , [ a = 20 ], [ b = 0 ], [ ⑨ ] , [ a = 10 ], [ b = 30 ], [ ⑩ ] , [ a = 30 ], [ b = 0 ], [ 上記以外 ] , [ a = #h(1cm) ], [ b = #h(1cm)], ) ] #question(points: 1)[ 以下のプログラムを読み、a, bの結果として正しい答えを選択せよ。 #sourcecode[ ```py a = 10 b = 20 b = a ``` ] #table( columns: (auto, auto, auto), align: horizon, table.header( [選択肢], [a], [b] ), [ ① ] , [ a = 0 ], [ b = 
30 ], [ ② ] , [ a = 30 ], [ b = 10 ], [ ③ ] , [ a = 0 ], [ b = 10 ], [ ④ ] , [ a = 20 ], [ b = 0 ], [ ⑤ ] , [ a = 20 ], [ b = 20 ], [ ⑥ ] , [ a = 20 ], [ b = 10 ], [ ⑦ ] , [ a = 30 ], [ b = 0 ], [ ⑧ ] , [ a = 10 ], [ b = 20 ], [ ⑨ ] , [ a = 10 ], [ b = 10 ], [ ⑩ ] , [ a = 10 ], [ b = 30 ], [ 上記以外 ] , [ a = #h(1cm) ], [ b = #h(1cm)], ) ] #question(points: 1)[ 以下のプログラムを読み、big, smallの結果として正しい答えを選択せよ。 #sourcecode[ ```py big = 10 small = 20 big = small ``` ] #table( columns: (auto, auto, auto), align: horizon, table.header( [選択肢], [big], [small] ), [ ① ], [ big = 30 ], [ small = 0 ], [ ② ], [ big = 20 ], [ small = 0 ], [ ③ ], [ big = 0 ], [ small = 30 ], [ ④ ], [ big = 20 ], [ small = 10 ], [ ⑤ ], [ big = 10 ], [ small = 10 ], [ ⑥ ], [ big = 30 ], [ small = 20 ], [ ⑦ ], [ big = 20 ], [ small = 20 ], [ ⑧ ], [ big = 0 ], [ small = 10 ], [ ⑨ ], [ big = 10 ], [ small = 20 ], [ ⑩ ], [ big = 10 ], [ small = 30 ], [ 上記以外 ] , [ a = #h(1cm) ], [ b = #h(1cm)], ) ] #question(points: 1)[ 以下のプログラムを読み、a, bの結果として正しい答えを選択せよ。 #sourcecode[ ```py a = 10 b = 20 a = b b = a ``` ] #table( columns: (auto, auto, auto), align: horizon, table.header( [選択肢], [a], [b] ), [ ① ] , [ a = 10 ], [ b = 0 ], [ ② ] , [ a = 10 ], [ b = 10 ], [ ③ ] , [ a = 30 ], [ b = 50 ], [ ④ ] , [ a = 0 ], [ b = 20 ], [ ⑤ ] , [ a = 40 ], [ b = 30 ], [ ⑥ ] , [ a = 30 ], [ b = 0 ], [ ⑦ ] , [ a = 20 ], [ b = 20 ], [ ⑧ ] , [ a = 0 ], [ b = 30 ], [ ⑨ ] , [ a = 30 ], [ b = 30 ], [ ⑩ ] , [ a = 10 ], [ b = 20 ], [ ⑪ ] , [ a = 20 ], [ b = 10 ], [ 上記以外 ] , [ a = #h(1cm) ], [ b = #h(1cm)], ) ] #question(points: 1)[ 以下のプログラムを読み、a, bの結果として正しい答えを選択せよ。 #sourcecode[ ```py a = 10 b = 20 b = a a = b ``` ] #table( columns: (auto, auto, auto), align: horizon, table.header( [選択肢], [a], [b] ), [ ① ] , [ a = 30 ], [ b = 50 ], [ ② ] , [ a = 10 ], [ b = 10 ], [ ③ ] , [ a = 20 ], [ b = 20 ], [ ④ ] , [ a = 10 ], [ b = 0 ], [ ⑤ ] , [ a = 0 ], [ b = 20 ], [ ⑥ ] , [ a = 30 ], [ b = 0 ], [ ⑦ ] , [ a = 40 ], [ b = 30 ], [ ⑧ ] , [ a = 0 ], [ b = 30 ], [ 
⑨ ] , [ a = 20 ], [ b = 10 ], [ ⑩ ] , [ a = 30 ], [ b = 30 ], [ ⑪ ] , [ a = 10 ], [ b = 20 ], [ 上記以外 ] , [ a = #h(1cm) ], [ b = #h(1cm)], ) ] #question(points: 1)[ 以下のプログラムを読み、a, b, cの結果として正しい答えを選択せよ。 #sourcecode[ ```py a = 10 b = 20 c = 30 a = b b = c ``` ] #table( columns: (auto, auto, auto, auto), align: horizon, table.header( [選択肢], [a], [b], [c] ), [ ① ] , [ a = 30 ], [ b = 50 ], [ c = 30 ], [ ② ] , [ a = 60 ], [ b = 0 ], [ c = 0 ], [ ③ ] , [ a = 10 ], [ b = 30 ], [ c = 40 ], [ ④ ] , [ a = 0 ], [ b = 10 ], [ c = 0 ], [ ⑤ ] , [ a = 10 ], [ b = 10 ], [ c = 10] , [ ⑥ ] , [ a = 60 ], [ b = 20 ], [ c = 30 ] , [ ⑦ ] , [ a = 30 ], [ b = 50 ], [ c = 0 ] , [ ⑧ ] , [ a = 20 ], [ b = 30 ], [ c = 0 ] , [ ⑨ ] , [ a = 10 ], [ b = 20 ], [ c = 30 ] , [ ⑩ ] , [ a = 20 ], [ b = 20 ], [ c = 20 ] , [ ⑪ ] , [ a = 0 ], [ b = 10 ], [ c = 20 ] , [ ⑫ ] , [ a = 20 ], [ b = 30 ], [ c = 30 ] , [ ⑬ ] , [ a = 10 ], [ b = 10 ], [ c = 20 ] , [ ⑭ ] , [ a = 30 ], [ b = 30 ], [ c = 50 ] , [ ⑮ ] , [ a = 0 ], [ b = 30 ], [ c = 50 ] , [ ⑯ ] , [ a = 30 ], [ b = 30 ], [ c = 30 ] , [ ⑰ ] , [ a = 0 ], [ b = 0 ], [ c = 60 ] , [ ⑱ ] , [ a = 20 ], [ b = 30 ], [ c = 20 ] , [ 上記以外 ] , [ a = #h(1cm) ], [ b = #h(1cm)], [ c = #h(1cm)] ) ] #question(points: 1)[ 以下のプログラムを読み、a, b, cの結果として正しい答えを選択せよ。 #sourcecode[ ```py a = 5 b = 3 c = 7 a = c b = a c = b ``` ] #table( columns: (auto, auto, auto, auto), align: horizon, table.header( [選択肢], [a], [b], [c] ), [ ① ] , [ a = 3 ], [ b = 5 ], [ c = 5 ], [ ② ] , [ a = 3 ], [ b = 3 ], [ c = 3 ], [ ③ ] , [ a = 12 ],[ b = 14 ],[ c = 22 ], [ ④ ] , [ a = 8 ], [ b = 15 ],[ c = 12 ], [ ⑤ ] , [ a = 7 ], [ b = 7 ], [ c = 7 ] , [ ⑥ ] , [ a = 5 ], [ b = 3 ], [ c = 7 ] , [ ⑦ ] , [ a = 5 ], [ b = 5 ], [ c = 5 ] , [ ⑧ ] , [ a = 7 ], [ b = 5 ], [ c = 3 ] , [ ⑨ ] , [ a = 3 ], [ b = 7 ], [ c = 5 ] , [ ⑩ ] , [ a = 12 ],[ b = 8 ], [ c = 10 ] , [ ⑪ ] , [ a = 10 ],[ b = 8 ], [ c = 12 ] , [ ⑫ ] , [ a = 0 ], [ b = 0 ], [ c = 7 ] , [ ⑬ ] , [ a = 0 ], [ b = 0 ], [ c = 15 ] , [ ⑭ ] , [ a = 3 ], [ 
b = 12 ],[ c = 0 ] , [ ⑮ ] , [ a = 3 ], [ b = 5 ], [ c = 7 ] , [ 上記以外 ] , [ a = #h(1cm) ], [ b = #h(1cm)], [ c = #h(1cm)] ) ] #question(points: 1)[ 以下のプログラムを読み、a, b, cの結果として正しい答えを選択せよ。 #sourcecode[ ```py a = 5 b = 3 c = 7 c = b b = a a = c ``` ] #table( columns: (auto, auto, auto, auto), align: horizon, table.header( [選択肢], [a], [b], [c] ), [ ① ] , [ a = 3 ], [ b = 5 ], [ c = 7 ], [ ② ] , [ a = 15 ], [ b = 10], [ c = 20 ], [ ③ ] , [ a = 12 ], [ b = 8 ], [ c = 10 ], [ ④ ] , [ a = 7 ], [ b = 7 ], [ c = 7 ], [ ⑤ ] , [ a = 3 ], [ b = 5 ], [ c = 3 ] , [ ⑥ ] , [ a = 0 ], [ b = 0 ], [ c = 7 ] , [ ⑦ ] , [ a = 5 ], [ b = 3 ], [ c = 7 ] , [ ⑧ ] , [ a = 3 ], [ b = 3 ], [ c = 3 ] , [ ⑨ ] , [ a = 7 ], [ b = 5 ], [ c = 3 ] , [ ⑩ ] , [ a = 3 ], [ b = 5 ], [ c = 0 ] , [ ⑪ ] , [ a = 3 ], [ b = 7 ], [ c = 5 ] , [ ⑫ ] , [ a = 8 ], [ b = 10 ], [ c = 12] , [ ⑬ ] , [ a = 5 ], [ b = 5 ], [ c = 5 ] , [ ⑭ ] , [ a = 15 ], [ b = 8 ], [ c = 10 ] , [ ⑮ ] , [ a = 10 ], [ b = 5], [ c = 0 ] , [ ⑯ ] , [ a = 0 ], [ b = 0 ], [ c = 15 ] , [ 上記以外 ] , [ a = #h(1cm) ], [ b = #h(1cm)], [ c = #h(1cm)] ) ] #question(points: 1)[ 以下のプログラムを読み、a, b, cの結果として正しい答えを選択せよ。 #sourcecode[ ```py a = 5 b = 3 c = 7 c = b a = c b = a ``` ] #table( columns: (auto, auto, auto, auto), align: horizon, table.header( [選択肢], [a], [b], [c] ), [ ① ] , [ a = 15 ], [ b = 18 ], [ c = 10 ], [ ② ] , [ a = 7 ], [ b = 5 ], [ c = 3 ], [ ③ ] , [ a = 7 ], [ b = 0 ], [ c = 5 ], [ ④ ] , [ a = 0 ], [ b = 3 ], [ c = 0 ], [ ⑤ ] , [ a = 10], [ b = 0 ], [ c = 5] , [ ⑥ ] , [ a = 5 ], [ b = 3 ], [ c = 7 ] , [ ⑦ ] , [ a = 3 ], [ b = 3 ], [ c = 3 ] , [ ⑧ ] , [ a = 12 ], [ b = 8 ], [ c = 10] , [ ⑨ ] , [ a = 7 ], [ b = 7 ], [ c = 7 ] , [ ⑩ ] , [ a = 15 ], [ b = 10 ], [ c = 12] , [ ⑪ ] , [ a = 7 ], [ b = 7 ], [ c = 5 ] , [ ⑫ ] , [ a = 8 ], [ b = 10 ], [ c = 12] , [ ⑬ ] , [ a = 0 ], [ b = 15 ], [ c = 0 ] , [ ⑭ ] , [ a = 7 ], [ b = 3 ], [ c = 5 ] , [ ⑮ ] , [ a = 5 ], [ b = 5 ], [ c = 5 ] , [ ⑯ ] , [ a = 3 ], [ b = 7 ], [ c = 5 ] , [ 上記以外 ] , [ a = #h(1cm) 
], [ b = #h(1cm)], [ c = #h(1cm)] ) ] #question(points: 1)[ 以下のプログラムを読み、a, b, cの結果として正しい答えを選択せよ。 #sourcecode[ ```py a = 5 b = 3 c = 7 b = a c = b a = c ``` ] #table( columns: (auto, auto, auto, auto), align: horizon, table.header( [選択肢], [a], [b], [c] ), [ ① ] , [ a = 0 ], [ b = 7 ], [ c = 3 ], [ ② ] , [ a = 12], [ b = 8 ], [ c = 10], [ ③ ] , [ a = 15], [ b = 0 ], [ c = 0 ], [ ④ ] , [ a = 0 ], [ b = 7 ], [ c = 8 ], [ ⑤ ] , [ a = 3 ], [ b = 7 ], [ c = 3 ], [ ⑥ ] , [ a = 5 ], [ b = 3 ], [ c = 7 ], [ ⑦ ] , [ a = 3 ], [ b = 3 ], [ c = 3 ], [ ⑧ ] , [ a = 7 ], [ b = 5 ], [ c = 3 ], [ ⑨ ] , [ a = 20], [ b = 8 ], [ c = 15], [ ⑩ ] , [ a = 3 ], [ b = 7 ], [ c = 5 ], [ ⑪ ] , [ a = 5 ], [ b = 0 ], [ c = 0 ], [ ⑫ ] , [ a = 8 ], [ b = 10], [ c = 15], [ ⑬ ] , [ a = 5 ], [ b = 5 ], [ c = 5 ], [ ⑭ ] , [ a = 8 ], [ b = 10], [ c = 12], [ ⑮ ] , [ a = 5 ], [ b = 7 ], [ c = 3 ], [ ⑯ ] , [ a = 7 ], [ b = 7 ], [ c = 7 ], [ 上記以外 ] , [ a = #h(1cm) ], [ b = #h(1cm)], [ c = #h(1cm)] ) ] #question(points: 1)[ 以下のプログラムを読み、a, b, cの結果として正しい答えを選択せよ。 #sourcecode[ ```py a = 5 b = 3 c = 7 b = a a = c c = b ``` ] #table( columns: (auto, auto, auto, auto), align: horizon, table.header( [選択肢], [a], [b], [c] ), [ ① ] , [ a = 8 ], [ b = 18], [ c = 15], [ ② ] , [ a = 7 ], [ b = 0 ], [ c = 8 ], [ ③ ] , [ a = 5 ], [ b = 5 ], [ c = 5 ], [ ④ ] , [ a = 12], [ b = 8 ], [ c = 15], [ ⑤ ] , [ a = 7 ], [ b = 0 ], [ c = 5 ], [ ⑥ ] , [ a = 3 ], [ b = 7 ], [ c = 5 ], [ ⑦ ] , [ a = 7 ], [ b = 5 ], [ c = 3 ], [ ⑧ ] , [ a = 0 ], [ b = 15], [ c = 0 ], [ ⑨ ] , [ a = 0 ], [ b = 3 ], [ c = 0 ], [ ⑩ ] , [ a = 3 ], [ b = 3 ], [ c = 3 ], [ ⑪ ] , [ a = 7 ], [ b = 7 ], [ c = 7 ], [ ⑫ ] , [ a = 12], [ b = 8 ], [ c = 10], [ ⑬ ] , [ a = 8 ], [ b = 10], [ c = 12], [ ⑭ ] , [ a = 7 ], [ b = 5 ], [ c = 5], [ ⑮ ] , [ a = 5 ], [ b = 3 ], [ c = 7], [ ⑯ ] , [ a = 7 ], [ b = 3 ], [ c = 5], [ 上記以外 ] , [ a = #h(1cm) ], [ b = #h(1cm)], [ c = #h(1cm)] ) ] #question(points: 1)[ 以下のプログラムを読み、a, b, cの結果として正しい答えを選択せよ。 #sourcecode[ ```py a = 5 b = 3 c 
= 7 a = c c = b b = a ``` ] #table( columns: (auto, auto, auto, auto), align: horizon, table.header( [選択肢], [a], [b], [c] ), [ ① ] , [ a = 0 ], [ b = 12], [ c = 3 ], [ ② ] , [ a = 5 ], [ b = 5 ], [ c = 5 ], [ ③ ] , [ a = 0 ], [ b = 7 ], [ c = 3 ], [ ④ ] , [ a = 8 ], [ b = 10], [ c = 12], [ ⑤ ] , [ a = 15], [ b = 0 ], [ c = 0 ], [ ⑥ ] , [ a = 3 ], [ b = 7 ], [ c = 5 ], [ ⑦ ] , [ a = 12], [ b = 15], [ c = 10], [ ⑧ ] , [ a = 5 ], [ b = 7 ], [ c = 3 ], [ ⑨ ] , [ a = 3 ], [ b = 3 ], [ c = 3 ], [ ⑩ ] , [ a = 7 ], [ b = 7 ], [ c = 7 ], [ ⑪ ] , [ a = 12], [ b = 8 ], [ c = 10], [ ⑫ ] , [ a = 5 ], [ b = 0 ], [ c = 0 ], [ ⑬ ] , [ a = 5 ], [ b = 3 ], [ c = 7 ], [ ⑭ ] , [ a = 7 ], [ b = 7 ], [ c = 3 ], [ ⑮ ] , [ a = 20], [ b = 15], [ c = 12], [ ⑯ ] , [ a = 7 ], [ b = 5 ], [ c = 3 ], [ 上記以外 ] , [ a = #h(1cm) ], [ b = #h(1cm)], [ c = #h(1cm)] ) ] ] #assignment[ *テスト(2回目抜粋)* #question(points: 1)[ 実用的なプログラムを作成する際に、アルゴリズムを精緻化するための初期工程で使う言語にふさわしいものはどれか。 1. 高レベルプログラミング言語 2. 自然言語 3. バイトコード 4. プログラムが実行されるマシンコード 5. 自然言語による疑似コード ] #question(points: 1)[ 以下のプログラムの断片を考えよ。 #sourcecode[ ```py if mark > 80: grade = 'A' elif mark > 60: grade = 'B' elif mark > 40: grade = 'C' else: grade = 'F' ``` ] `mark`の値が `-12` のとき、プログラムはどのような動作をするか。 + プログラムはクラッシュ(故障)する + プログラムはエラーを起こす + `grade` の値が未定義になる + プログラムは停止しない + `grade` に `F` が設定される ] ] #pagebreak() #assignment[ *試験* #question(points: 1)[ 走行距離に応じて中古車の価格を変動させるプログラムを書く。 #table( columns: (auto, auto), align: horizon, table.header( [距離], [価格調整], ), [ 10,000km 未満 ] , [ 200を加算 ], [ 10,000km 以上 ] , [ 300を減算 ], ) 価格は `price`、距離は `kilometer` に格納されていると仮定する。上の表を参考に、`if .. 
else` 文で `price` を調整するプログラムを書け。 ] #question(points: 1)[ 以下の文の空欄を埋めよ。 - \[ #h(1.5cm) \] 文は、決まった回数だけ繰り返し命令を実行するために使われる。 ] #question(points: 1)[ 以下のプログラムは1から4までの数を足し合わせることを意図したプログラムだが、内容が間違っている。間違っている部分を指摘し結果が10となるようにせよ。 #sourcecode[ ```py i = 1 total = 0 while i < 4: total = total + i i = i + 1 print(total) ``` ] ] #question(points: 1)[ 以下は数当てプログラムの一部である。このプログラムはユーザーに隠された数字を当てるよう繰り返し促す。ユーザーの推測が正しいかに応じて適切なメッセージを出力するコードを `while` ループの内部に追加せよ。 #sourcecode[ ```py HIDDEN_NUMBER = 20 num_str = input("数を入力してください(0で終了)> ") guess = int(num_str) while guess != 0: # 適切なコードを以下に加える # ここから # ここまで num_str = input("数を入力してください(0で終了)>") guess = int(num_str) ``` ] ] ]
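The assignment questions in the test above all probe a single mental model: assignment destructively copies the current value of the right-hand side into the left-hand side variable. A short Python sketch of that model, matching the semantics of the test's fragments:

```python
# Assignment copies the right-hand side's current value and discards
# the target's old value -- the model the Dehnadi questions probe.
a, b = 10, 20
a = b          # a takes b's value: a == 20, b == 20
b = a          # b takes a's NEW value, so nothing changes
assert (a, b) == (20, 20)

# A swap therefore needs a temporary variable (or tuple assignment):
a, b = 10, 20
tmp = a
a = b
b = tmp
assert (a, b) == (20, 10)
```

The first fragment is question 4 of the test: after `a = b; b = a` both variables hold b's original value, which is why the sequence does not swap them.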
https://github.com/MaharshiAJ/Resume-Template
https://raw.githubusercontent.com/MaharshiAJ/Resume-Template/main/README.md
markdown
# Typst Resume Template ## Usage 1. Install the Typst CLI from [typst](https://github.com/typst/typst) 2. Clone this repo - `git clone https://github.com/MaharshiAJ/Resume-Template` 3. Fill out the resume.json file with your current resume data 4. Run the following command - `typst compile main.typ` ## Making Changes If you'd like to make changes to the layout, you can edit the main.typ file and rearrange it as you see fit. For example, moving the experience section to the top. If you want to add additional sections, add them to your resume.json file, add the section to the template, and call the section in the main.typ file. ## Demo ![<NAME>'s Resume](Resume.png)
https://github.com/jamesrswift/pixel-pipeline
https://raw.githubusercontent.com/jamesrswift/pixel-pipeline/main/src/pipeline/sorting/lib.typ
typst
The Unlicense
#import "dependency.typ": sort as dependency #import "bvh.typ"
https://github.com/open-datakit/accs-finalreport-whitepaper
https://raw.githubusercontent.com/open-datakit/accs-finalreport-whitepaper/main/ACCSFinalReportWhitepaper.typ
typst
#import "fontawesome.typ" #show link: underline #set document( title: "opendata.fit: ACCS WP6 final report whitepaper", author: ("<NAME>", "<NAME>") ) #set text( // font: "Noto Sans" ) #set page( header: [ #set text(font:"Cerebri Sans", 8pt) ACCS WP6 Final Report Whitepaper #h(1fr) #set text(font: "Inconsolata", size: 8pt) #fontawesome.icon[#fontawesome.github.square] #link("https://github.com/opendatafit/accs-finalreport-whitepaper")[opendatafit/accs-finalreport-whitepaper] ], footer: [ #set text(8pt) #set align(center) Page #counter(page).display( "1 of 1", both: true, ) ] ) #set heading( // font: "Cerebri Sans" ) #include ("cover.typ") <no-header> #pagebreak() #include "attribution.typ" <no-header> #pagebreak() #outline( title: [Table of contents], indent: 1em, ) #pagebreak(weak: true) #include "1-intro/content.typ" #pagebreak(weak: true) #include "2-accs/content.typ" #pagebreak(weak: true) #include "3-future/content.typ"
https://github.com/maxgraw/bachelor
https://raw.githubusercontent.com/maxgraw/bachelor/main/apps/document/src/0-base/1-oath.typ
typst
#set align(start + horizon) #set heading(numbering: none, supplement: [Section]) = Statutory Declaration I hereby declare that I have written this thesis independently and without outside assistance. Passages that are based, verbatim or in substance, on publications or lectures by other authors are marked as such. This thesis has not previously been submitted to any other examination authority, nor has it been published. #block( inset: (top: 40pt), grid( inset: 10pt, columns: (1fr, 1fr), rows: (auto), gutter: 1fr, grid.cell( "Place, Date", stroke: (top: 1pt), ), grid.cell( "<NAME>", stroke: (top: 1pt), ) )) #pagebreak()
https://github.com/Myriad-Dreamin/typst.ts
https://raw.githubusercontent.com/Myriad-Dreamin/typst.ts/main/fuzzers/corpora/math/alignment_03.typ
typst
Apache License 2.0
#import "/contrib/templates/std-tests/preset.typ": * #show: test-page // Test #460 equations. $ a &=b & quad c&=d \ e &=f & g&=h $
https://github.com/r4ai/typst-code-info
https://raw.githubusercontent.com/r4ai/typst-code-info/main/.github/fixtures/caption-and-label.typ
typst
MIT License
#import "../../plugin.typ": init-code-info, code-info, parse-diff-code #show: init-code-info.with() #code-info( caption: [A program to display "Hello, world!"], label: "hello-world", ) ```rust pub fn main() { println!("Hello, world!"); } ``` According to @hello-world, the program displays "Hello, world!".
https://github.com/sa-concept-refactoring/doc
https://raw.githubusercontent.com/sa-concept-refactoring/doc/main/chapters/inlineConceptRequirement.typ
typst
#import "@preview/tablex:0.0.4": tablex, colspanx, rowspanx, cellx, hlinex #let refactoring_name = "Inline Concept Requirement" = Refactoring — #refactoring_name <inline_concept_requirement> For this refactoring, a subset of the initial idea (@idea_requirement_transformation) is implemented: specifically, the inlining of an explicit ```cpp requires``` clause into a constrained function template. @capabilities_of_first_refactoring shows some examples of what this refactoring is able to do. A detailed analysis can be found in @first_refactoring_analysis. Implementation details are discussed in @first_refactoring_implementation and limitations are explored in @limitations_of_first_refactoring. Finally, the usage of the refactoring is shown in @first_refactoring_usage. #figure( table( columns: (1fr, 1fr), align: horizon, [*Before*], [*After*], ```cpp template <typename T> void f(T) requires foo<T> ```, ```cpp template <foo T> void f(T) ``` , ```cpp template <typename T> requires foo<T> void f(T) ```, ```cpp template <foo T> void f(T) ```, ```cpp template <typename T> void f() requires std::integral<T> ```, ```cpp template <std::integral T> void f() ```, ), caption: [Capabilities of the "#refactoring_name" refactoring], ) <capabilities_of_first_refactoring> #pagebreak() == Analysis <first_refactoring_analysis> The analysis looks at which elements need to be captured (@first_refactoring_captured_elements) and how the refactoring transforms the AST (@first_refactoring_ast_analysis). === Preconditions The refactoring should be as defensive as possible and only apply when it is clear that it will apply correctly. @inline-concept-requirement-preconditions explains the checks made during the preparation phase to ensure the refactoring can be applied. #figure( table( columns: (1fr, 1.5fr), align: start, [*Check*], [*Reasoning*], [ The selected ```cpp requires``` clause only contains *a single requirement*, \ e.g.
```cpp requires CONDITION``` ], [ Combined concept requirements are complex to handle and would increase the complexity drastically. This is a temporary restriction that could be lifted in the future. ], [ The selected ```cpp requires``` clause only contains *a single type argument*, \ e.g. ```cpp requires std::integral<T>```. ], [ This case is complex to handle and would increase the complexity drastically. This is a temporary restriction that could be lifted in the future. ], [ The concept requirement has a parent of either a *function*, e.g. #[ #set text(size: 0.9em) #v(-4mm) ```cpp template<> void f() requires CONDITION {} ``` ] #v(-4mm) or a *function template*, e.g. #[ #set text(size: 0.9em) #v(-4mm) ```cpp template<> requires CONDITION void f() {} ``` ] ], [ To restrict the scope of the refactoring operation, only function templates are allowed. This is a temporary restriction that could be lifted in the future. ], ), caption: [ Checks made during the preparation phase of the \"#refactoring_name\" refactoring ], ) <inline-concept-requirement-preconditions> #pagebreak() === Captured Elements <first_refactoring_captured_elements> Capturing an element means finding it in the AST and keeping a reference to it for the application phase. @first_refactoring_captured_elements_figure shows the captured elements and their purpose. A reference to them is stored as a member of the tweak object during the preparation phase and used during the application phase. It is never mentioned explicitly that the AST references are guaranteed to be valid until the application phase, but the refactorings already present treat it as such, which is why the refactorings in this project also do so. // COR This information is really more implementation than analysis.
#figure( tablex( columns: 2, auto-vlines: false, ```cpp template <typename T> ^^^^^^^^^^ void f(T) requires foo<T> {} ```, [ *Template Type Parameter Declaration* \ Will be updated using the concept found in the concept specialization expression below. ], ```cpp template <typename T> void f(T) requires foo<T> {} ^^^^^^ ```, [ *Concept Specialization Expression* \ Will be removed. ], ```cpp template <typename T> void f(T) requires foo<T> {} ^^^^^^^^ ```, [ *Requires Token* \ Will be removed. ], ), caption: [Elements captured for the "#refactoring_name" refactoring], ) <first_refactoring_captured_elements_figure> #pagebreak() === Abstract Syntax Tree <first_refactoring_ast_analysis> The AST gives a good overview of the structure of the code before and after the refactoring. In @first_refactoring_ast the ASTs of a simple template method and its corresponding refactored version are shown. Looking at the original version (on the left) it is visible that the outermost `FunctionTemplate` contains the template type parameters, as well as the function definition. The `requires` clause is represented by a `ConceptSpecialization` with a corresponding `Concept reference`. During the refactoring operation most of the AST stays untouched, except for the concept reference (in yellow), which gets moved to the template type parameter, and the concept specialization, which gets removed (in red). After the examination, it was concluded that `ConceptSpecialization` nodes can be searched to see if the refactoring applies, and then additional analysis can be performed. This also guards against accidentally refactoring similar-looking expressions that are not concept specializations. For example, `foo<T>` could also be a variable template.
#figure( tablex( columns: (200pt, 50pt, 200pt), align: center, auto-vlines: false, auto-hlines: false, [ *Before* ], [], [ *After* ], hlinex(), ```cpp template<typename T> void bar(T a) requires Foo<T> { a.abc(); } ```, [], ```cpp template<Foo T> void bar(T a) { a.abc(); } ```, hlinex(), colspanx(3)[#image("../images/ast_first_refactoring.png")], ), caption: [Example AST transformation of the "#refactoring_name" refactoring], ) <first_refactoring_ast> #pagebreak() == Implementation <first_refactoring_implementation> The implementation process was relatively straightforward, particularly after determining how to traverse the AST. The traversal just requires the current selection, which provides a helper method to get the common ancestor in the AST, after which the tree can be traversed upwards using the `Parent` property of each node and checking whether the type of the node reached is the correct one. // COR Described very generally; the type hierarchies of the nodes might also be interesting here. However, there were challenges in discovering certain methods, as some were global and others necessitated casting. During this phase, referencing existing refactorings provided significant assistance. // COR A graphic could help here, showing the relevant tokens and the associated AST nodes. The biggest hurdle of this refactoring was the `requires` keyword itself, which was quite hard to track down as it is not part of the AST. To figure out where exactly it is located in the source code, it was necessary to resort to the token representation of the source range. #[ #set heading(numbering: none) === Testing A lot of manual tests were performed using a test project. Debug inspections were performed often to verify assumptions. Unit tests were also written as described in @testing, consisting of a total of 11 tests: 4 availability tests, 4 unavailability tests, and 3 application tests. This is a similar extent to which existing refactorings are tested.
=== Pull Request The implementation has been submitted upstream as a pull request @pull_request_of_first_refactoring and as of #datetime.today().display("[month repr:long] [year]") is awaiting review. ] #pagebreak() == Limitations <limitations_of_first_refactoring> To keep the scope of the implementation manageable, it was decided to leave some features out. These limitations, however, could be lifted in a future version. The implementation actively looks for these patterns and does not offer the refactoring operation if one is present. === Combined Concept Requirements Handling combined `requires` clauses would certainly be possible; however, it would increase the complexity of the refactoring code significantly. Since working on the LLVM project is new for all participants, it was decided that this feature will be left out. #figure( ```cpp template <typename T, typename U> requires foo<T> && foo<U> void f(T) ```, caption: "Combined concept requirement", ) === Class Templates Supporting class templates would have been feasible, but a decision was made to exclude this possibility due to time constraints and in favor of maintaining simplicity in the initial refactoring implementation. #figure( ```cpp template <typename T> requires foo<T> class Bar; ```, caption: "Class template", ) #pagebreak() === Multiple Type Arguments If a concept has multiple type arguments, such as ```cpp std::convertible_to<T, U>```, the refactoring will not be applicable. The complexity associated with managing this particular case is considerable, while the potential use case is minimal. As a result, a decision was made not to incorporate this capability. The refactoring would be available in most scenarios involving multiple type arguments, except when the template arguments are function template parameters and are defined after the final template argument. To illustrate this, two examples are provided in @multiple_type_arguments_example.
They both show a version after the transformation has been applied. Only the version on the left represents a valid refactoring. #let cell(fill, body) = box( inset: (x: 12pt, y: 8pt), fill: fill, radius: 6pt, text(white, weight: "bold", body) ) #let bigArrow = text(size: 1.5em, sym.arrow.b) #figure( kind: image, gap: 1.5em, grid( columns: (1fr, 1fr), row-gutter: 10pt, ```cpp template <typename T, typename U> requires std::convertible_to<T, U> void f() {} ```, ```cpp template <typename T, typename U> requires std::convertible_to<U, T> void f() {} ```, bigArrow, bigArrow, ```cpp template <typename T, std::convertible_to<T> U> void f() {} ```, ```cpp template <std::convertible_to<U> T, typename U> void f() {} ```, bigArrow, bigArrow, cell(green, "Compiles"), cell(red, "Does not compile"), ), caption: [Example for "#refactoring_name" refactoring with multiple type arguments], ) <multiple_type_arguments_example> #pagebreak() == Usage <first_refactoring_usage> The refactoring is offered to language server clients as a code action and is available on the whole `requires` clause. === VS Code To use the feature, the user needs to hover over the `requires` clause (e.g. `std::integral<T>`) and right-click to show the code options. Clicking the option "Refactor..." lists the possible refactorings, among which the newly implemented refactoring "Inline concept requirement" will appear. What this can look like is shown in @inline_concept_requirement_usage_in_vs_code. #figure( image("../images/screenshot_inline_concept.png", width: 50%), caption: [Screenshot showing the option to inline a concept requirement in VS Code], ) <inline_concept_requirement_usage_in_vs_code> === Neovim @first_refactoring_usage_in_vim shows how the refactoring looks in Neovim before it is accepted. The cursor needs to be placed on the `requires` clause before triggering the listing of code actions. 
#figure( image("../images/first_refactoring_usage_in_vim.png", width: 80%), caption: [Screenshot showing the option to inline a concept requirement in Neovim], ) <first_refactoring_usage_in_vim>
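As a purely illustrative, text-level sketch of the rewrite (the actual implementation operates on the AST and token ranges, not on raw text), the single-parameter case could be modeled like this:

```python
import re

def inline_concept(source):
    """Toy, text-level sketch of the 'Inline concept requirement' rewrite:
    turns `template<typename T> ... requires Foo<T>` into `template<Foo T> ...`.
    Handles only the single-parameter, single-requirement case; the helper
    name and the regex approach are illustrative, not how clangd does it."""
    m = re.search(
        r"template<typename (\w+)>\s*(.*?)\s*requires (\w+)<\1>",
        source, re.S)
    if not m:
        return source  # refactoring not applicable, leave source unchanged
    param, decl, concept = m.groups()
    return f"template<{concept} {param}> {decl}"

before = "template<typename T> void bar(T a) requires Foo<T>"
print(inline_concept(before))  # template<Foo T> void bar(T a)
```

Note how the toy version also refuses to rewrite when the pattern is absent, mirroring the availability checks described in the limitations section.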
https://github.com/Myriad-Dreamin/tinymist
https://raw.githubusercontent.com/Myriad-Dreamin/tinymist/main/syntaxes/textmate/tests/unit/basic/argsOrParams.typ
typst
Apache License 2.0
#let f(x: 1) = []; #let f(x: "[]") = []; #let f(x: [x]) = [];
https://github.com/lucannez64/Notes
https://raw.githubusercontent.com/lucannez64/Notes/master/Philosophie_Problematique_1.typ
typst
#import "template.typ": * // Take a look at the file `template.typ` in the file panel // to customize this template and discover how it works. #show: project.with( title: "Philosophie Problematique 1", authors: ( "<NAME>", ), date: "January 11, 2024", ) #set heading(numbering: "1.1.") == Question: #emph[Does technology stand in opposition to nature?] <question-la-technique-soppose-t-elle-à-la-nature> \ We are accustomed to thinking that technology, as an expression of human creativity, stands in opposition to nature, which represents the natural order existing independently of human intervention. However, to what extent is this opposition well founded, and can we envisage a harmonious coexistence between technology and nature, or even a collaboration beneficial to humanity and the environment?
https://github.com/dankelley/typst_templates
https://raw.githubusercontent.com/dankelley/typst_templates/main/ex/0.0.1/ex.typ
typst
MIT License
// https://typst.app/docs/tutorial/making-a-template/ // https://github.com/typst/packages/?tab=readme-ov-file#local-packages #let conf( class: none, number: none, title: none, date: none, doc, ) = { set text(font: "Times Roman", size: 12pt) set page("us-letter", header: [ *#class Exercise #number #h(1fr) #title #h(1fr) #date* ]) show heading.where(level: 1): set text(font: "Times Roman", size: 12pt) doc }
https://github.com/Myriad-Dreamin/tinymist
https://raw.githubusercontent.com/Myriad-Dreamin/tinymist/main/syntaxes/textmate/tests/unit/basic/may_import.typ
typst
Apache License 2.0
#let evil_import() = import "base.typ" #import "base.typ" #import ("base.typ") #import("base.typ") #import "base.typ": * #import "base.typ" as t: * #import "base.typ": x #import "base.typ": x as foo #import "base.typ": x as foo, y #import "base.typ" as foo #import "base.typ" as foo: z, x as foo, y as t #import cetz.draw: *
https://github.com/DannySeidel/typst-dhbw-template
https://raw.githubusercontent.com/DannySeidel/typst-dhbw-template/main/locale.typ
typst
MIT License
#let TITLEPAGE_SECTION_A = ( "de": "für den Erwerb des", "en": "for the", ) #let TITLEPAGE_SECTION_B = ( "de": "aus dem Studiengang ", "en": "from the course of studies ", ) #let TITLEPAGE_SECTION_C = ( "de": "an der ", "en": "at the ", ) #let TITLEPAGE_STUDENT_ID = ( "de": "Matrikelnummer, Kurs:", "en": "Student ID, Course:", ) #let TITLEPAGE_COMPANY = ( "de": "Unternehmen:", "en": "Company:", ) #let TITLEPAGE_COMPANY_SUPERVISOR = ( "de": "Betreuer im Unternehmen:", "en": "Supervisor in the Company:", ) #let TITLEPAGE_SUPERVISOR = ( "de": "Betreuer an der ", "en": "Supervisor at ", ) #let DECLARATION_OF_AUTHORSHIP_TITLE = ( "de": "Selbstständigkeitserklärung", "en": "Declaration of Authorship", ) #let DECLARATION_OF_AUTHORSHIP_SECTION_A_SINGLE = "Gemäß Ziffer 1.1.13 der Anlage 1 zu §§ 3, 4 und 5 der Studien- und Prüfungsordnung für die Bachelorstudiengänge im Studienbereich Technik der Dualen Hochschule Baden-Württemberg vom 29.09.2017. Ich versichere hiermit, dass ich meine Arbeit mit dem Thema:" #let DECLARATION_OF_AUTHORSHIP_SECTION_A_PLURAL = "Gemäß Ziffer 1.1.13 der Anlage 1 zu §§ 3, 4 und 5 der Studien- und Prüfungsordnung für die Bachelorstudiengänge im Studienbereich Technik der Dualen Hochschule Baden-Württemberg vom 29.09.2017. Wir versichern hiermit, dass wir unsere Arbeit mit dem Thema:" #let DECLARATION_OF_AUTHORSHIP_SECTION_B_SINGLE = "selbstständig verfasst und keine anderen als die angegebenen Quellen und Hilfsmittel benutzt habe. Ich versichere zudem, dass alle eingereichten Fassungen übereinstimmen." #let DECLARATION_OF_AUTHORSHIP_SECTION_B_PLURAL = "selbstständig verfasst und keine anderen als die angegebenen Quellen und Hilfsmittel benutzt haben. Wir versichern zudem, dass alle eingereichten Fassungen übereinstimmen." 
#let CONFIDENTIALITY_STATEMENT_TITLE = ( "de": "Sperrvermerk", "en": "Confidentiality Statement", ) #let CONFIDENTIALITY_STATEMENT_SECTION_A = ( "de": "Die vorliegende Arbeit mit dem Titel", "en": "The Thesis on hand", ) #let CONFIDENTIALITY_STATEMENT_SECTION_B = ( "de": "enthält unternehmensinterne bzw. vertrauliche Informationen der", "en": "contains internal and confidential data of", ) #let CONFIDENTIALITY_STATEMENT_SECTION_C = ( "de": ", ist deshalb mit einem Sperrvermerk versehen und wird ausschließlich zu Prüfungszwecken im Studiengang", "en": ". It is intended solely for inspection by the assigned examiner, the head of the", ) #let CONFIDENTIALITY_STATEMENT_SECTION_D = ( "de": " der ", "en": " department and, if necessary, the Audit Committee at the ", ) #let CONFIDENTIALITY_STATEMENT_SECTION_E = ( "de": " vorgelegt. Der Inhalt dieser Arbeit darf weder als Ganzes noch in Auszügen Personen außerhalb des Prüfungsprozesses und des Evaluationsverfahrens zugänglich gemacht werden, sofern keine anders lautende Genehmigung der ", "en": ". 
The content of this thesis may not be made available, either in its entirety or in excerpts, to persons outside of the examination process and the evaluation process, unless otherwise authorized by the training ", ) #let CONFIDENTIALITY_STATEMENT_SECTION_F = ( "de": " vorliegt.", "en": ".", ) #let INSTITUTION_SINGLE = ( "de": "Ausbildungsstätte", "en": "institution", ) #let INSTITUTION_PLURAL = ( "de": "Ausbildungsstätten", "en": "institutions", ) #let AND = ( "de": " und ", "en": " and ", ) #let BY = ( "de": "von", "en": "by", ) #let TABLE_OF_CONTENTS = ( "de": "Inhaltsverzeichnis", "en": "Table of Contents", ) #let LIST_OF_FIGURES = ( "de": "Abbildungsverzeichnis", "en": "List of Figures", ) #let LIST_OF_TABLES = ( "de": "Tabellenverzeichnis", "en": "List of Tables", ) #let CODE_SNIPPETS = ( "de": "Codeverzeichnis", "en": "Code Snippets", ) #let ACRONYMS = ( "de": "Abkürzungsverzeichnis", "en": "List of Acronyms", ) #let GLOSSARY = ( "de": "Glossar", "en": "Glossary", ) #let REFERENCES = ( "de": "Literatur", "en": "References", ) #let APPENDIX = ( "de": "Anhang", "en": "Appendix", )
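The dictionaries above follow a simple localization pattern: each UI string is a mapping keyed by language code. A minimal sketch of the lookup side of that pattern (the helper name and fallback policy are illustrative, not part of this template's API):

```python
# Locale-lookup sketch: every UI string is a dict keyed by language code,
# and a helper picks the right entry, falling back to English for unknown
# languages. The data mirrors one entry from the template above.
TITLEPAGE_COMPANY = {"de": "Unternehmen:", "en": "Company:"}

def tr(entry, lang, fallback="en"):
    """Return the translation for `lang`, or the fallback language's string."""
    return entry.get(lang, entry[fallback])

print(tr(TITLEPAGE_COMPANY, "de"))  # Unternehmen:
print(tr(TITLEPAGE_COMPANY, "fr"))  # Company: (falls back to English)
```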
https://github.com/rxt1077/it610
https://raw.githubusercontent.com/rxt1077/it610/master/markup/templates/syllabus.typ
typst
// template for a general NJIT syllabus #let syllabus( course: none, instructor: [<NAME>], email: [<EMAIL>], office: [3500 Guttenberg Information Technologies Center (GITC)], office-hours: none, objective: none, grading: none, course-materials: none, outcomes: none, outline: none, doc, ) = [ #set page( paper: "us-letter", margin: (x: 36pt, y: 36pt) ) #set text( font: "Roboto", size: 10pt ) #show link: it => [ #set text(blue) #underline(it) ] #place(top + right, image("njit_logo.svg")) = #course #grid( columns: 2, gutter: 8pt, [*Instructor*], instructor, [*Email*], email, [*Office*], office, [*Office Hours*], office-hours, ) == Academic Integrity Academic Integrity is the cornerstone of higher education and is central to the ideals of this course and the university. Cheating is strictly prohibited and devalues the degree that you are working on. As a member of the NJIT community, it is your responsibility to protect your educational investment by knowing and following the academic code of integrity policy that is found at: #link("https://www5.njit.edu/policies/sites/policies/files/NJIT-University-Policy-on-Academic-Integrity.pdf")[NJIT Academic Integrity Code]. Please note that it is my professional obligation and responsibility to report any academic misconduct to the Dean of Students Office. Any student found in violation of the code by cheating, plagiarizing or using any online software inappropriately will result in disciplinary action. This may include a failing grade of F, and/or suspension or dismissal from the university. If you have any questions about the code of Academic Integrity, please contact the Dean of Students Office at <EMAIL>. == Generative AI Student use of artificial intelligence (AI) is permitted in this course for non-exam assignments and activities. Additionally, if and when students use AI in this course, the AI must be cited as is shown within the #link("https://researchguides.njit.edu/AI/home")[NJIT Library AI citation page] for AI. 
If you have any questions or concerns about AI technology use in this class, please reach out to your instructor prior to submitting any assignments. == Objective #objective #columns(2)[ == Grading #for item in grading [ - #item ] #colbreak() == Course Materials #for item in course-materials [ - #item ] ] #doc // enumerate the learning outcomes and note which weeks the learning outcomes // are reinforced #let section_num = 1 #for (section, outcome_list) in outcomes { let outcome_num = 1 for (key, desc) in outcome_list { let weeks = () for (week_num, week) in outline.enumerate() { if (key in week.at("outcomes")) { weeks.push(week_num + 1) } } outcomes.at(section).at(key) = ( section_num: section_num, outcome_num: outcome_num, weeks: weeks, description: desc ) outcome_num += 1 } section_num += 1 } == Learning Outcomes #let section_num = 1 #for (section, section_outcomes) in outcomes { [=== #section_num. #section] grid( columns: (16pt, 1fr), gutter: 8pt, ..for (key, outcome) in section_outcomes { ( [#outcome.at("section_num").#outcome.at("outcome_num")], [#outcome.at("description"). Weeks #outcome.at("weeks").map(str).join(", ", last: " and ").] 
) } ) section_num += 1 } == Course Outline // takes a list of learning outcome keys and returns a sorted (by index), formatted // text representation of the learning outcomes #let print_outcomes(key_list) = { let week_outcomes = () for key in key_list { // find the learning outcome based on its key for (section, outcome_list) in outcomes { if (key in outcome_list) { week_outcomes.push(outcome_list.at(key)) } } } // print them out in order grid( columns: (16pt, 1fr), gutter: 8pt, ..for outcome in week_outcomes // seems hackish but as long as there aren't more than 100 outcomes // per section, this sorting key should work .sorted(key: x => (x.at("section_num")*100 + x.at("outcome_num"))) .dedup() { ( [#outcome.at("section_num").#outcome.at("outcome_num")], [#outcome.at("description")] ) } ) } #let week_num = 1 #table( columns: (1fr, 2fr, 4fr), table.header( [*Week*], [*Topics*], [*Learning Outcomes*], ), ..for week in outline { ( [#week_num], for topic in week.at("topics") [- #topic], print_outcomes(week.at("outcomes")) ) week_num += 1 } ) ]
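The outcome-to-week mapping the template computes above can be sketched as follows (data shapes mirror the Typst dictionaries; the sample content is invented for illustration):

```python
# For every learning-outcome key, collect the (1-based) weeks whose
# outline entry lists that key -- the same aggregation the Typst
# template performs before rendering the "Learning Outcomes" section.
outline = [
    {"topics": ["Intro"], "outcomes": ["containers"]},
    {"topics": ["Images"], "outcomes": ["containers", "compose"]},
    {"topics": ["Orchestration"], "outcomes": ["compose"]},
]

def weeks_for(key, outline):
    return [i + 1 for i, week in enumerate(outline) if key in week["outcomes"]]

print(weeks_for("containers", outline))  # [1, 2]
print(weeks_for("compose", outline))     # [2, 3]
```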
https://github.com/mem-courses/discrete-mathmatics
https://raw.githubusercontent.com/mem-courses/discrete-mathmatics/main/homework/week2.typ
typst
MIT License
#import "../template.typ": * #import "../functions.typ": * #show: project.with( course: "Discrete Mathematics", course_fullname: "Discrete Mathematics and Application", course_code: "211B0010", title: "Homework #2: Predicates and Quantifiers", authors: (( name: "<NAME>", email: "<EMAIL>", id: "A10" ),), semester: "Spring-Summer 2024", date: "March 2, 2024", ) = 1.4 Predicates and Quantifiers #hw("6")[ Let $N(x)$ be the statement "$x$ has visited North Dakota," where the domain consists of the students in your school. Express each of these quantifications in English. (c) $not exists x N(x)$ (d) $exists x not N(x)$ (e) $not forall x N(x)$ (f) $forall x not N(x)$ ][ (c) No student in our school has visited North Dakota. (d) There is a student in our school who has not visited North Dakota. (e) Not all students in our school have visited North Dakota. (f) All students in our school have not visited North Dakota. ] #hw("9")[ Let $P(x)$ be the statement "$x$ can speak Russian" and let $Q(x)$ be the statement "$x$ knows the computer language C++." Express each of these sentences in terms of $P(x)$, $Q(x)$, quantifiers, and logical connectives. The domain for quantifiers consists of all students at your school. (b) There is a student at your school who can speak Russian but who doesn’t know C++. (d) No student at your school can speak Russian or knows C++. ][ (b) $exists x (P(x) and not Q(x))$ (d) $forall x not (P(x) or Q(x))$ ] #hw("16")[ Determine the truth value of each of these statements if the domain of each variable consists of all real numbers. (a) $exists x (x^2 = 2)$ (b) $exists x (x^2 = -1)$ (c) $forall x (x^2 + 2 >= 1)$ (d) $forall x (x^2 != x)$ ][ (a) True, since when $x=sqrt(2)$, $x^2=2$ is true. (b) False, since $x^2>=0$ holds for all $x in RR$. (c) True, since $x^2+2>=2$ holds for all $x in RR$. (d) False, since when $x=1$, $x^2!=x$ is false. ] #hw("20(e)")[ Suppose that the domain of the propositional function $P(x)$ consists of −5, −3, −1, 1, 3, and 5. 
Express these statements without using quantifiers, instead using only negations, disjunctions, and conjunctions. (e) $∃x(¬P(x)) ∧ ∀x((x < 0) → P(x))$ ][ The part of the statement $exists x (not P(x))$ can be expressed as: $ not P(-5) or not P(-3) or not P(-1) or not P(1) or not P(3) or not P(5) $ And the other part $forall x ((x<0) -> P(x))$ can be expressed as: $ P(-5) and P(-3) and P(-1) $ So the original statement can be expressed as: $ (not P(-5) or not P(-3) or not P(-1) or not P(1) or not P(3) or not P(5)) and (P(-5) and P(-3) and P(-1)) $ ] #hw("24")[ Translate in two ways each of these statements into logical expressions using predicates, quantifiers, and logical connectives. First, let the domain consist of the students in your class and second, let it consist of all people. (b) Somebody in your class has seen a foreign movie. (d) All students in your class can solve quadratic equations. ][ Let $S(x)$ denote "$x$ is in your class", $M(x)$ denote "$x$ has seen a foreign movie" and $Q(x)$ denote "$x$ can solve quadratic equations". #table( columns: (5em, 1fr, 1fr), [(b)], $exists x M(x)$, $exists x(S(x) and M(x))$, [(d)], $forall x Q(x)$, $forall x(not S(x) or Q(x))$, ) ] #hw("34")[ Express the negation of these propositions using quantifiers, and then express the negation in English. (a) Some drivers do not obey the speed limit. (b) All Swedish movies are serious. (c) No one can keep a secret. (d) There is someone in this class who does not have a good attitude. ][ (a) Let the domain consist of all drivers, and let $P(x)$ denote "$x$ obeys the speed limit". The given statement can be expressed as $ exists x not P(x). $ And the negation of this proposition is $ not (exists x not P(x)), $ which is equivalent to $ forall x P(x) $ In English, we can say "All drivers obey the speed limit." (b) Let the domain consist of all Swedish movies, and let $P(x)$ denote "$x$ is serious". The given statement can be expressed as $ forall x P(x). 
$ And the negation of this proposition is $ not(forall x P(x)) equiv exists x not P(x) $ In English, it can be expressed as "There is a Swedish movie that is not serious." (c) Let the domain consist of all people, and let $P(x)$ denote "$x$ can keep a secret". The given statement can be expressed as $ forall x not P(x) $ And the negation of this proposition is $ not (forall x not P(x)) equiv exists x P(x) $ In English, it can be expressed as "There is someone who can keep a secret." (d) Let the domain consist of all students in this class, and let $P(x)$ denote "$x$ has a good attitude". The given statement can be expressed as $ exists x not P(x) $ And the negation of this proposition is $ not (exists x not P(x)) equiv forall x P(x) $ In English, it can be expressed as "All students in this class have a good attitude." ] #hw("53")[ Show that $∃x P(x) ∧ ∃x Q(x)$ and $∃x(P(x) ∧ Q(x))$ are not logically equivalent. ][ Let the domain consist of all positive integers, and let $P(x)$ denote "$x$ is even" and $Q(x)$ denote "$x$ is odd". Obviously, the propositions $exists x P(x)$ and $exists x Q(x)$ are both true, but there is no number that is both even and odd at the same time, so the proposition $exists x(P(x) and Q(x))$ is false. Therefore, the two propositions are not logically equivalent. ] #hw("64")[ Let $P(x)$, $Q(x)$, $R(x)$, and $S(x)$ be the statements "$x$ is a duck," "$x$ is one of my poultry," "$x$ is an officer," and "$x$ is willing to waltz," respectively. Express each of these statements using quantifiers; logical connectives; and $P(x)$, $Q(x)$, $R(x)$, and $S(x)$. (a) No ducks are willing to waltz. (b) No officers ever decline to waltz. (c) All my poultry are ducks. (d) My poultry are not officers. (e)\* Does (d) follow from (a), (b), and (c)? If not, is there a correct conclusion? ][ (a) $forall x (P(x) -> not S(x))$ (b) $forall x (R(x) -> S(x))$ (c) $forall x (Q(x) -> P(x))$ (d) $forall x (Q(x) -> not R(x))$ (e) Yes. By (c) and (a), my poultry are not willing to waltz; by the contrapositive of (b), whoever is not willing to waltz is not an officer; hence my poultry are not officers, which is exactly (d). ] = 1.5 Nested Quantifiers #hw("6")[ Let $C(x, y)$ mean that student $x$ is enrolled in class $y$, where the domain for $x$ consists of all students in your school and the domain for $y$ consists of all classes being given at your school. Express each of these statements by a simple English sentence. (e) $∃x∃y∀z((x!=y) and (C(x, z) -> C(y, z)))$ (f) $∃x∃y∀z((x!=y) and (C(x, z) <-> C(y, z)))$ ][ (e) There are two distinct students A and B in my school such that B is enrolled in every class that A is enrolled in. (f) There are two distinct students in my school who are enrolled in exactly the same classes. ] #hw("12")[ Let $I(x)$ be the statement "$x$ has an Internet connection" and $C(x, y)$ be the statement "$x$ and $y$ have chatted over the Internet," where the domain for the variables $x$ and $y$ consists of all students in your class. Use quantifiers to express each of these statements. (d) No one in the class has chatted with Bob. (h) Exactly one student in your class has an Internet connection. (k) Someone in your class has an Internet connection but has not chatted with anyone else in your class. (n) There are at least two students in your class who have not chatted with the same person in your class. ][ (d) $forall x not C(x, "Bob")$ (h) $exists x (I(x) and forall y (y!=x -> not I(y)))$ (k) $exists x (I(x) and forall y (y!=x -> not C(x,y)))$ (n) $exists x exists y (x!=y and not forall z (C(x,z) <-> C(y,z)))$ ] #hw("14")[ Use quantifiers and predicates with more than one variable to express these statements. (c) Some student in this class has visited Alaska but has not visited Hawaii. (d) All students in this class have learned at least one programming language. (e) There is a student in this class who has taken every course offered by one of the departments in this school. (f) Some student in this class grew up in the same town as exactly one other student in this class. 
][ (c) Let $P(x,y)$ denote "student $x$ has visited $y$," where the domain for the variable $x$ consists of the students in this class and the domain for $y$ consists of all the cities. Then, the given statement can be expressed as $ exists x (P(x, "Alaska") and not P(x, "Hawaii")) $#fake_par (d) Let $P(x,y)$ denote "student $x$ has learned programming language $y$", where the domain for $x$ consists of the students in this class and the domain for $y$ consists of all the programming languages. Then, the given statement can be expressed as $ forall x exists y P(x,y) $#fake_par (e) Let $P(x,y,z)$ denote "student $x$ has taken course $y$ offered by department $z$", where the domain for $x$ consists of the students in this class, the domain for $y$ consists of all the courses, and the domain for $z$ consists of all the departments. Then, the given statement can be expressed as $ exists x forall y exists z P(x,y,z) $#fake_par (f) Let $P(x,y)$ denote "student $x$ in this class grew up in the town $y$", where the domain for $x$ consists of all students, and the domain for $y$ consists of all towns. Then, the given statement can be expressed as $ exists x exists y forall z ((x!=y and y!=z and x != z)-> (exists t P(x,t) and P(y,t) and not P(z,t))) $ ] #hw("24")[ Translate each of these nested quantifications into an English statement that expresses a mathematical fact. The domain in each case consists of all real numbers. (a) $∃x∀y(x + y = y)$ (d) $∀x∀y((x!=0) ∧ (y!=0) ↔ (x y!=0))$ ][ (a) There is a real number $x$ such that for every real number $y$, the sum of $x$ and $y$ is $y$. (d) For any two real numbers $x$ and $y$, both are non-zero if and only if their product is non-zero. ] #hw("32(d)")[ Express the negations of this statement so that all negation symbols immediately precede predicates. $ ∀y∃x∃z(T(x, y, z) ∨ Q(x, y)) $ ][ The negation of the given statement is $ not forall y exists x exists z (T(x,y,z) or Q(x,y)). 
$ By applying De Morgan's laws, we can get $ exists y forall x forall z (not T(x,y,z) and not Q(x,y)) $ ] #hw("34")[ Find a common domain for the variables $x$, $y$, and $z$ for which the statement $∀x∀y((x!=y) → ∀z((z = x) ∨ (z = y)))$ is true and another domain for which it is false. ][ $ &forall x forall y ((x != y) -> forall z ((z = x) or (z = y)))\ equiv& forall x forall y ((x = y) or forall z ((z = x) or (z = y)))\ equiv& forall x forall y forall z ((x = y) or (z = x) or (z = y))\ $ Therefore, any common domain that consists of at most two elements will make this statement true, since at least two of the variables chosen from the domain would be equal. Otherwise, for a domain that consists of more than two elements, the statement will be false. ] #hw("38")[ Express the negations of these propositions using quantifiers, and in English. (b) There is a student in this class who has never seen a computer. (d) There is a student in this class who has been in at least one room of every building on campus. ][ (b) Let $Q(x)$ be "student $x$ has seen a computer". Then the given proposition can be expressed using quantifiers as $ exists x not Q(x) $ The negation of this proposition is $ not exists x not Q(x) equiv forall x Q(x) $ In English, it can be expressed as "All students in this class have seen a computer." (d) Let $P(x,y,z)$ be "student $x$ has been in room $y$ of building $z$", and let the domain for $x$ consist of all students on campus. Then the given proposition can be expressed using quantifiers as $ exists x forall z exists y P(x,y,z) $ The negation of this proposition is $ not (exists x forall z exists y P(x,y,z)) equiv forall x exists z forall y not P(x,y,z) $ In English, it can be expressed as "For every student in this class there is a building on campus such that the student has not been in any room of that building." ] #hw("42")[ Use quantifiers to express the distributive laws of multiplication over addition for real numbers. 
][ $ forall x forall y forall z ((x+y)z = x z + y z) $ Here the domain consists of all real numbers. ]
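The non-equivalence shown in Exercise 53 can be checked mechanically on a small finite domain, using the same even/odd choice of predicates (a sketch):

```python
# Finite-domain check of Exercise 53: with P(x) = "x is even" and
# Q(x) = "x is odd" over the positive integers 1..4, both EXISTS x P(x)
# and EXISTS x Q(x) hold, yet EXISTS x (P(x) AND Q(x)) fails -- so the
# two quantified forms are not logically equivalent.
domain = range(1, 5)
P = lambda x: x % 2 == 0  # even
Q = lambda x: x % 2 == 1  # odd

lhs = any(P(x) for x in domain) and any(Q(x) for x in domain)
rhs = any(P(x) and Q(x) for x in domain)
print(lhs, rhs)  # True False
```

A single domain where the two formulas disagree is enough to refute logical equivalence, which is exactly the counterexample structure of the written answer.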
https://github.com/ilsubyeega/circuits-dalaby
https://raw.githubusercontent.com/ilsubyeega/circuits-dalaby/master/Type%201/1/18.typ
typst
#set enum(numbering: "(a)") #import "@preview/cetz:0.2.2": * #import "../common.typ": answer 1.18 For each of the seven elements in the circuit below, determine whether the element supplies or absorbs power, and how much power it supplies or receives. #answer[ Element 1: $24 times (-5) = -120$W Element 2: $6 times 5 = 30$W Element 3: $8 times 1 = 8$W Element 4: $12 times 4 = 48$W Element 5: $4 times (-2) = -8$W Element 6: $10 times 3 = 30$W Element 7: $6 times 2 = 12$W Elements 1 and 5 supply power; the remaining elements absorb it. ]
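The element powers above follow the convention $p = v i$, with negative products indicating supplied power; a quick sketch that recomputes them and checks that total power balances to zero:

```python
# Power check for problem 1.18: p = v * i for each element (values taken
# from the worked answer above). Negative power means the element
# supplies power; conservation requires the seven powers to sum to zero.
powers = {
    1: 24 * -5,   # -120 W (supplies)
    2: 6 * 5,     #   30 W
    3: 8 * 1,     #    8 W
    4: 12 * 4,    #   48 W
    5: 4 * -2,    #   -8 W (supplies)
    6: 10 * 3,    #   30 W
    7: 6 * 2,     #   12 W
}
supplying = [k for k, p in powers.items() if p < 0]
print(supplying, sum(powers.values()))  # [1, 5] 0
```

The zero total is a useful sanity check on the sign choices: if it did not balance, at least one voltage or current sign would have been misread from the figure.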
https://github.com/LDemetrios/Typst4k
https://raw.githubusercontent.com/LDemetrios/Typst4k/master/src/test/resources/suite/text/copy-paste.typ
typst
// Test copy-paste and search in PDF with ligatures // and Arabic text. Must be tested manually! --- text-copy-paste-ligatures --- The after fira 🏳️‍🌈! #set text(lang: "ar", font: "Noto Sans Arabic") مرحبًا
https://github.com/typst/packages
https://raw.githubusercontent.com/typst/packages/main/packages/preview/crossregex/0.2.0/README.md
markdown
Apache License 2.0
# crossregex A crossword-like game written in Typst. You should fill in letters to satisfy regular expression constraints. Currently, _squares_ and _regular hexagons_ are supported. > [!note] > This is not a puzzle solver, but a puzzle layout builder. It takes inspiration from an image found on the web, from which our standard example derives. ![standard](./examples/standard.svg) ![sudoku](./examples/sudoku-main.svg) More examples and source code: <https://github.com/QuadnucYard/crossregex-typ> ## Basic Usage We use `crossregex-square` and `crossregex-hex` to build square and hex layouts respectively. They have the same argument formats. A `crossregex` dispatcher function can be used for a dynamic grid kind; it is compatible with version 0.1.0. ```typst #import "@preview/crossregex:0.2.0": crossregex // or import and use `crossregex-hex` #crossregex( 3, constraints: ( `A.*`, `B.*`, `C.*`, `D.*`, `E.*`, `F.*`, `G.*`, `H.*`, `I.*`, `J.*`, `K.*`, `L.*`, `M.*`, `N.*`, `O.*`, ), answer: ( "ABC", "DEFG", "HIJKL", "MNOP", "QRS", ), ) ``` ```typst #import "@preview/crossregex:0.2.0": crossregex-square #crossregex-square( 9, alphabet: regex("[0-9]"), constraints: ( `.*`, `.*`, `.*`, `.*`, `.*[12]{2}8`, `[1-9]9.*`, `.*`, `.*`, `.*`, `[1-9]7[29]{2}8.6.*`, `.*2[^3]{2}1.`, `.9.315[^6]+`, `.+4[15]{2}79.`, `[75]{2}18.63[1-9]+`, `8.*[^2][^3][^1]+56[^6]`, `[^5-6][0-9][56]{2}.*9`, `.*`, `[98]{2}5.*[27]{2}6`, ), answer: ( "934872651", "812456937", "576913482", "125784369", "467395128", "398261574", "241537896", "783649215", "659128743", ), cell: rect(width: 1.4em, height: 1.4em, radius: 0.1em, stroke: 1pt + orange, fill: orange.transparentize(80%)), cell-config: (size: 1.6em, text-style: (size: 1.4em)), ) ``` ## Document Details are shown in the doc comments above the `crossregex` function in `lib.typ`. You can choose to turn off some views. Feel free to open issues if you run into any problems. ## Changelog ### 0.2.0 - Feature: Supports square shapes. 
- Feature: Supports customizing the appearance of everything, even the cells. - Feature: Supports custom alphabets. - Fix: A mistake related to imports in the README example. ### 0.1.0 First release with basic hex features.
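Under the hood, checking a filled-in grid against its constraints boils down to full-string regex matching. A minimal sketch, using two rows and patterns from the 9×9 example above (assuming, for illustration only, that those constraints pair with the answer rows in order):

```python
import re

# A row of the grid satisfies a constraint iff the regex matches the
# ENTIRE row -- hence fullmatch rather than search.
def row_ok(pattern, row):
    return re.fullmatch(pattern, row) is not None

print(row_ok(r".*[12]{2}8", "467395128"))   # True  (ends in "128")
print(row_ok(r"[1-9]9.*", "398261574"))     # True  ("39..." fits)
print(row_ok(r"[1-9]9.*", "934872651"))     # False (second digit is not 9)
```

Using `fullmatch` instead of `search` matters: `search` would accept a row that merely *contains* a matching substring, which is not what the puzzle rules mean.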
https://github.com/imkochelorov/ITMO
https://raw.githubusercontent.com/imkochelorov/ITMO/main/src/template.typ
typst
#let project(title: "", authors: (), date: none, body, subtitle: none) = { set document(author: authors, title: title) set page(numbering: "1", number-align: center) set text(font: "New Computer Modern", lang: "en") //show raw: set text(font: "New Computer Modern Mono") // Title row. align(center)[ #block(text(weight: 700, 1.75em, title)) #v(1em, weak: true) #block(text(weight: 700, 1.2em, subtitle)) #date ] // Author information. pad( top: 0.5em, bottom: 0.5em, x: 2em, grid( columns: (1fr,) * calc.min(3, authors.len()), gutter: 1em, ..authors.map(author => align(center, strong(author))), ), ) body }
https://github.com/typst/packages
https://raw.githubusercontent.com/typst/packages/main/packages/preview/ansi-render/0.2.0/README.md
markdown
Apache License 2.0
# ANSI Escape Sequence Renderer <a href="https://github.com/8LWXpg/typst-ansi-render/tags"> <img alt="GitHub manifest version (path)" src="https://img.shields.io/github/v/tag/8LWXpg/typst-ansi-render"> </a> <a href="https://github.com/8LWXpg/typst-ansi-render"> <img src="https://img.shields.io/github/stars/8LWXpg/typst-ansi-render" alt="GitHub Repo stars"> </a> <a href="https://github.com/8LWXpg/typst-ansi-render/blob/master/LICENSE"> <img alt="GitHub" src="https://img.shields.io/github/license/8LWXpg/typst-ansi-render"> </a> This script provides a simple way to render text with ANSI escape sequences. It uses the `ansi-render` function from the `ansi-render.typ` module to render text with various formatting options, including font, font size and theme. Contributions are welcome! ## Usage ```typst #import "@preview/ansi-render:0.2.0": * #ansi-render(string, font: string, size: length, theme: terminal-themes.theme) ``` ## Demo See [demo.typ](https://github.com/8LWXpg/typst-ansi-render/blob/master/demo.typ) and [demo.pdf](https://github.com/8LWXpg/typst-ansi-render/blob/master/demo.pdf). ```typst #ansi-render( "\u{1b}[38;2;255;0;0mThis text is red.\u{1b}[0m \u{1b}[48;2;0;255;0mThis background is green.\u{1b}[0m \u{1b}[38;2;255;255;255m\u{1b}[48;2;0;0;255mThis text is white on a blue background.\u{1b}[0m \u{1b}[1mThis text is bold.\u{1b}[0m \u{1b}[4mThis text is underlined.\u{1b}[0m \u{1b}[38;2;255;165;0m\u{1b}[48;2;255;255;0mThis text is orange on a yellow background.\u{1b}[0m " ) ``` ![1.png](https://github.com/8LWXpg/typst-ansi-render/blob/master/img/1.png) ```typst #ansi-render( "\u{1b}[38;5;196mRed text\u{1b}[0m \u{1b}[48;5;27mBlue background\u{1b}[0m \u{1b}[38;5;226;48;5;18mYellow text on blue background\u{1b}[0m \u{1b}[7mInverted text\u{1b}[0m \u{1b}[38;5;208;48;5;237mOrange text on gray background\u{1b}[0m \u{1b}[38;5;39;48;5;208mBlue text on orange background\u{1b}[0m \u{1b}[38;5;255;48;5;0mWhite text on black background\u{1b}[0m" ) ``` 
![2.png](https://github.com/8LWXpg/typst-ansi-render/blob/master/img/2.png) ```typst #ansi-render( "\u{1b}[31;1mHello \u{1b}[7mWorld\u{1b}[0m \u{1b}[53;4;36mOver and \u{1b}[35m Under! \u{1b}[7;90mreverse\u{1b}[101m and \u{1b}[94;27mreverse" ) ``` ![3.png](https://github.com/8LWXpg/typst-ansi-render/blob/master/img/3.png) ```typst // uses the font that supports ligatures #ansi-render(read("test.txt"), font: "CaskaydiaCove Nerd Font Mono", theme: terminal-themes.putty) ``` ![4.png](https://github.com/8LWXpg/typst-ansi-render/blob/master/img/4.png)
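The truecolor sequences used in the demos follow the `ESC[38;2;R;G;Bm` SGR form. A minimal sketch of extracting the foreground color (this toy parser recognises only that one form; a real renderer like this package handles many more codes):

```python
import re

# Extract the (R, G, B) of a 24-bit foreground SGR sequence:
# ESC [ 38;2;R;G;B m. Returns None if no such sequence is present.
def parse_fg_rgb(s):
    m = re.search(r"\x1b\[38;2;(\d+);(\d+);(\d+)m", s)
    return tuple(map(int, m.groups())) if m else None

print(parse_fg_rgb("\x1b[38;2;255;0;0mThis text is red.\x1b[0m"))  # (255, 0, 0)
```

The 256-color variant in the second demo uses `38;5;N` instead of `38;2;R;G;B`, which is why a full renderer has to dispatch on the parameter after `38`.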
https://github.com/jangala-dev/product-docs
https://raw.githubusercontent.com/jangala-dev/product-docs/main/README.md
markdown
# product-docs

Welcome to the repository for Jangala's documentation creation system. This is very much a work in progress.

This repository will have a large number of our team working on it. Our Projects and Comms people will be core developers in this process! To assist this, we're developing a friendly and consistent process by which we can all contribute.

## Installation

This section guides you through installing three essential open-source tools: Git, Visual Studio Code (VSCode) and Docker Desktop. Git helps manage shared access to common codebases (like this one!), VSCode is a powerful and versatile code editor that supports a wide range of programming languages and tools, while Docker Desktop will allow us to create and manage isolated development environments called containers. This setup does involve a few one-time terminal commands, but is straightforward. The payoff is that we can get started with product-docs development quickly and efficiently, using shared graphical tools!

### For macOS Users

#### Prerequisites

Install Homebrew (an open-source software manager) on your Mac by pasting the following command into your Terminal:

```bash
/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
```

Note: this may take a few minutes to download the needed tools and complete.

#### Steps to Install VSCode and Docker Desktop

1. **Open the Terminal.**

2. **Install Git:**

   ```zsh
   brew install git
   ```

3. **Install Visual Studio Code:**

   ```zsh
   brew install --cask visual-studio-code
   ```

   Note: may take a few minutes to complete.

4. **Install Docker Desktop:**

   ```zsh
   brew install --cask docker
   ```

   Note: may take a few minutes to complete.

5. **Launch Docker Desktop:** Find Docker in your Applications folder and double-click to launch it. This step is necessary to complete the installation and agree to the terms. _There's no need to create an account._

6. **Install Extensions for VSCode:** Click [here](https://marketplace.visualstudio.com/items?itemName=ms-vscode-remote.remote-containers) and press Install - this will ask to open VS Code and will install the Dev Containers extension.

### For Windows 10/11 Users

#### Prerequisites

Ensure you are running Windows 11 or a fairly recently updated version of Windows 10 (version 1709, build 16299, or later).

#### Steps to Install Git, VSCode and Docker Desktop

1. **Open Command Prompt:** Press the Start button, type "Command Prompt", and press Enter to launch it.

2. **Install Git:**

   ```powershell
   winget install --id Git.Git -e --source winget
   ```

3. **Install Visual Studio Code:**

   ```powershell
   winget install -e --id Microsoft.VisualStudioCode
   ```

   Note: may take a few minutes to complete.

4. **Install WSL:**

   ```powershell
   wsl --install
   ```

   WSL will take a few minutes to install. The installer will ask you for a username and password (use your Windows ones if they're easy enough to remember). Once you get to a green prompt at which you can type, type `exit` and press Return to get back to the terminal.

5. **Install Docker Desktop:**

   ```powershell
   winget install -e --id Docker.DockerDesktop
   ```

   Docker will take a few minutes to install. You will be asked to agree to a licence, and then to wait a few minutes more.

6. **Launch Docker Desktop:** Run Docker Desktop from your Start Menu. This step is necessary to complete the installation and agree to the terms. _There's no need to create an account._

7. **Set up VSCode:** Run Visual Studio Code from the Start Menu. Click on the little blue two-arrows icon in the bottom left of the screen, and click `Connect to WSL` from the menu that appears. Click [here](https://marketplace.visualstudio.com/items?itemName=ms-vscode-remote.remote-wsl) and press Install - this will ask to open VS Code and will install the WSL extension. Click [here](https://marketplace.visualstudio.com/items?itemName=ms-vscode-remote.remote-containers) and press Install - this will ask to open VS Code and will install the Dev Containers extension.

### Final Notes

Ensure Docker Desktop is running before attempting to use VSCode dev containers.

## Usage

[TBD]

# Ongoing Plan

Currently our plan is as follows:

1. Host our core documentation here as Pandoc-flavoured Markdown
2. Create the online web version using VitePress, published as a GitHub Pages site at docs.janga.la
3. Process the separate Markdown files into PDFs:
   1. Use Pandoc to create a `.typ` output from the Markdown
   2. Use Typst and a Jangala template to create beautiful PDFs!

We should automate this whole process using GitHub Actions, so that each commit to the repo regenerates the GitHub Pages site and recreates any PDFs that need changing.

To test the website locally, run `npm run docs:dev` from the docs folder. To build PDFs, run `sh build-getbox-pdf.sh` from the pdf folder.

## Fonts

Add `.otf` or `.ttf` files to the `/fonts` folder and then run `sh update-fonts.sh` from that same folder.
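The two-step Markdown-to-PDF pipeline described above can be sketched as a small script. The file names here are illustrative, and this assumes both Pandoc (with Typst output support, version 3.1.2 or later) and the Typst CLI are installed:

```python
import subprocess

def build_pdf_commands(md_path, typ_path, pdf_path):
    """Build the two CLI invocations for the Markdown -> Typst -> PDF pipeline."""
    # Pandoc infers the Typst writer from the .typ output extension.
    pandoc_cmd = ["pandoc", md_path, "-o", typ_path]
    typst_cmd = ["typst", "compile", typ_path, pdf_path]
    return pandoc_cmd, typst_cmd

def render_pdf(md_path, typ_path, pdf_path):
    """Run both steps, raising if either tool fails."""
    for cmd in build_pdf_commands(md_path, typ_path, pdf_path):
        subprocess.run(cmd, check=True)
```

In a GitHub Actions workflow, `render_pdf` would be called once per changed Markdown file after installing the two tools.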
https://github.com/AU-Master-Thesis/thesis
https://raw.githubusercontent.com/AU-Master-Thesis/thesis/main/sections/3-methodology/study-3/tracking-factor.typ
typst
MIT License
#import "../../../lib/mod.typ": *

=== Tracking Factor $bold(f_t)$ <s.m.tracking-factor>

// #jonas[I believe this is all new to you. Does it makes sense?]
The design of the tracking factor mimics that of the interrobot factors, $f_i$. Like the interrobot factors, the tracking factor takes two points in space into account: the variable's position, $x$, and the position to pull it towards. However, where the interrobot factors push the two positions apart, the tracking factor wants them to move closer together. These properties are achieved through the measurement function, $h_t(x)$, and the Jacobian, $J_t(x)$, as described in sections #numref(<s.m.tracking-factor.measure>) and #numref(<s.m.tracking-factor.jacobian>), respectively.

==== Measurement Function <s.m.tracking-factor.measure>

The measurement function, $h_t (#m.x, #m.P, i)$, takes in the variable state, $#m.x = [x#h(0.5em)y#h(0.5em)equation.overdot(x)#h(0.5em)equation.overdot(y)]^top$, the path, $#m.P$, and an index, $i$. Here, $#m.P$ is the set of waypoints, $#m.P = {#m.p _1, #m.p _2, ..., #m.p _n}$, and $i$ is the index of the line segment $#m.l _i = #m.p _(i+1) - #m.p _(i)$ that the tracking factor should currently be following. Lastly, let $#m.x _"pos"$ and $#m.x _"vel"$ be the position and velocity components of the variable state, respectively, see @eq.variable-components.

$ #m.x _"pos" = mat(x, y)^top#h(2em)"and"#h(2em)#m.x _"vel" = mat(equation.overdot(x), equation.overdot(y))^top $<eq.variable-components>

The main idea behind the tracking factor is that it projects the variable's position onto the line segment that it should currently be following, and then uses the distance between the projected point and the variable's position as the measurement. This idea needs a few further modifications, which will be covered in the following explanation of how to arrive at the measurement function $h_t$.
First, let us define a projection function, $"proj"(#m.x, #m.p _"start", #m.p _"end")$, that projects a point, $#m.x$, onto a line segment defined by two points, $#m.p _"start"$ and $#m.p _"end"$, as shown in @eq.projection.

$ "proj"(#m.x, #m.p _"start", #m.p _"end") = #m.p _"start" + ((#m.x - #m.p _"start") dot (#m.p _"end" - #m.p _"start")) / norm(#m.p _"end" - #m.p _"start")^2 (#m.p _"end" - #m.p _"start") $<eq.projection>

There are two possible projections we want to consider, specifically when the variable approaches a waypoint and needs to transition to the next line segment. A radius, $r_"switch"$#footnote[Configurable as `switch-padding` under the `tracking` table in `config.toml`, see #gbp-rs(content: "AU-Master-Thesis/gbp-rs") at #source-link("https://github.com/AU-Master-Thesis/gbp-rs/blob/ef7b5891f47ca1d57e15fa2628d15e726d0b1901/config/simulations/Solo%20GP/config.toml#L48", "config/simulations/Solo GP/config.toml:48")], is defined as the distance to $#m.p _(i+1)$ within which the index $i$ should be incremented to consider the next line segment. As such, we have the two projections $#m.proj _i$ and $#m.proj _(i-1)$ to consider, see @eq.projections.

$ #m.proj _i &= "proj"(#m.x, #m.p _i, #m.p _(i+1)) \ #m.proj _(i-1) &= "proj"(#m.x, #m.p _(i-1), #m.p _i) $<eq.projections>

Whether to consider the projection to the next line segment becomes the logical conditional statement, $q$, defined in @eq.switch. Hereby, the point to measure the distance to, $#m.x _"meas"$, becomes @eq.measurement-point.
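As a concrete reference, the projection function can be sketched in Python. This is a stand-alone illustrative sketch, not the actual `gbp-rs` implementation:

```python
def proj(x, p_start, p_end):
    """Project the 2D point x onto the line through p_start and p_end,
    matching the projection equation above (note: the result may lie
    beyond the segment's endpoints, since the line is not clamped)."""
    dx, dy = p_end[0] - p_start[0], p_end[1] - p_start[1]
    sq_norm = dx * dx + dy * dy  # |p_end - p_start|^2
    t = ((x[0] - p_start[0]) * dx + (x[1] - p_start[1]) * dy) / sq_norm
    return (p_start[0] + t * dx, p_start[1] + t * dy)
```

For example, projecting the point $(1, 1)$ onto the segment from $(0, 0)$ to $(2, 0)$ yields $(1, 0)$.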
#v(-0.5em)
$ q = norm(#m.proj _i - #m.p _i) < r_"switch" and norm(#m.proj _(i-1) - #m.p _i) < r_"switch" $<eq.switch>
// #v(-1em)
$ #m.x _"meas" = cases( #m.x _"pos" + 1/2 ((#m.proj _i - #m.x _"pos") + (#m.proj _(i-1) - #m.x _"pos")) &"if" q \ #m.proj _i + #m.d dot norm(#m.x _"vel") / s_v &"otherwise" ) $<eq.measurement-point>

The first case of @eq.measurement-point happens when the variable is close to the waypoint, and the tracking factor will measure towards the actual waypoint itself by taking both projections into account. This behaviour is visualized in @f.m.tracking-factor, along with the conditional $q$ as a green area#swatch(theme.green.lighten(35%)). In the second case, $#m.d = #m.l _i / norm(#m.l _i)$ is the normalized direction vector of the line segment $#m.l _i$. The addition of $#m.d dot norm(#m.x _"vel") / s_v$ ensures that the tracking factor always tries to also move the variable along the line segment, and not only perpendicularly towards it, which in turn helps alleviate local minima where the variable might get stuck #sym.dash.en _this happens especially when some tracking factors are tracking towards the corner, without having others pulling it along_. This pulling along is also shown in @f.m.tracking-factor.

The choice of $s_v = 5$ in the denominator is somewhat arbitrary, but it does a good job of allowing the factor to pull slightly on the variable without pulling so much that the variable overtakes future variables; too small an $s_v$ makes previous variables shoot far ahead, overtaking future variables, thus resulting in the robot far exceeding the target speed.

The last piece of the puzzle is to define the measurement function, $h_t$, as the distance between the variable's position and the measurement point, $#m.x _"meas"$, as shown in @eq.measurement.
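As a concrete sketch (Python, with hypothetical names; not the actual `gbp-rs` implementation), the switching condition $q$ and the selection of the measurement point can be written as:

```python
import math

def measurement_point(x_pos, x_vel, waypoints, i, r_switch=1.0, s_v=5.0):
    """Select x_meas: blend the pull towards both projections near a waypoint
    (condition q), otherwise project onto segment i and pull along it.
    Assumes 1 <= i < len(waypoints) - 1."""
    def proj(x, a, b):
        dx, dy = b[0] - a[0], b[1] - a[1]
        t = ((x[0] - a[0]) * dx + (x[1] - a[1]) * dy) / (dx * dx + dy * dy)
        return (a[0] + t * dx, a[1] + t * dy)

    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    p_prev, p_i, p_next = waypoints[i - 1], waypoints[i], waypoints[i + 1]
    proj_i = proj(x_pos, p_i, p_next)
    proj_prev = proj(x_pos, p_prev, p_i)

    # q: both projections lie within r_switch of the waypoint p_i
    q = dist(proj_i, p_i) < r_switch and dist(proj_prev, p_i) < r_switch

    if q:  # near the waypoint: average the pull towards both projections
        return (x_pos[0] + 0.5 * ((proj_i[0] - x_pos[0]) + (proj_prev[0] - x_pos[0])),
                x_pos[1] + 0.5 * ((proj_i[1] - x_pos[1]) + (proj_prev[1] - x_pos[1])))

    # otherwise: projection plus a small pull along the segment direction d
    seg_len = dist(p_next, p_i)
    d = ((p_next[0] - p_i[0]) / seg_len, (p_next[1] - p_i[1]) / seg_len)
    speed = math.hypot(x_vel[0], x_vel[1])
    return (proj_i[0] + d[0] * speed / s_v, proj_i[1] + d[1] * speed / s_v)
```

Note how the pull term scales with the variable's speed and shrinks as $s_v$ grows, so the factor never drags a slow variable far along the segment.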
$ h_t (#m.x, #m.P, i) = "min"(1, norm(#m.x _"pos" - #m.x _"meas") / d_a) $<eq.measurement>

Two modifications to the raw distance take place:

#[
  #set par(first-line-indent: 0em)
  #set enum(numbering: box-enum.with(prefix: "M-"))
  + *Modification:* The distance is normalized by $d_a$, which is a configurable parameter called `attraction_distance`#footnote[Example in the #source-link("https://github.com/AU-Master-Thesis/gbp-rs", "gbp-rs") repository at #source-link("https://github.com/AU-Master-Thesis/gbp-rs/blob/ac49ddb764078e34290708e0f60846e45055e9c1/config/simulations/Intersection/config.toml#L53", "/config/simulations/Intersection/config.toml:53")] under the `gbp.tracking` table in the `config.toml` file. #v(0.5em) *Reasoning:* This acts as a knob the user can turn to control how quickly the attraction force reaches its maximum.
  + *Modification:* The distance is clamped to a maximum of $1$ after normalization. #v(0.5em) *Reasoning:* This is a safety measure that ensures the factor measurement is bounded, which helps keep the factor graph inference stable.
]

#pagebreak(weak: true)

==== Jacobian <s.m.tracking-factor.jacobian>

The responsibility of the Jacobian, $jacobian_t$, is to encode the measurement strength of the tracking factor into a direction for the variable to move towards. First, let us find the difference, $#m.x _"diff"$, see @eq.diff.

$ #m.x _"diff" = #m.x _"meas" - #m.x _"pos" $<eq.diff>

The Jacobian, $J_t$, is then defined as the normalized difference, $#m.x _"diff"$, where the normalization factor is the current measurement value $h$, see @eq.jacobian.

$ jacobian_(t,"pos") = #m.x _"diff" / h $<eq.jacobian>

With a connection to a single variable and $"DOFS" = 4$, the Jacobian, $J_t$, should be a $1 times 4$ matrix, which can be obtained by concatenating the $1 times 2$ matrix $jacobian_(t,"pos")$ with a zero matrix of the same size, see @eq.jacobian-padding.
$ jacobian_t = [jacobian_(t,"pos"), 0, 0] = mat(1/h (x_"meas" - x_"pos"), 1/h (y_"meas" - y_"pos"), 0, 0) $<eq.jacobian-padding>

Here, $#m.x _"meas" = [x_"meas"#h(0.5em)y_"meas"]^top$, and $#m.x _"pos" = [x#h(0.5em)y]^top$ is the positional component of the linearization point as defined in @eq.variable-components. Looking at the interrobot factor Jacobian, $jacobian_i$, in @eq.jacobian-i, it is clear that both utilize the normalized difference between two points; the only difference is that, for the tracking factor, only one of the two points comes from a variable, which can be moved, while the other comes from the instantaneously static projection onto the path.
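Putting the pieces together, the measurement $h_t$ and its $1 times 4$ Jacobian can be sketched in Python. Here `d_a` mirrors the `attraction_distance` parameter and `x_meas` is assumed to come from the measurement-point selection described above; all names are illustrative, not the actual `gbp-rs` implementation:

```python
import math

def tracking_measurement(x_pos, x_meas, d_a):
    """h_t: distance to the measurement point, normalized by d_a, clamped to 1."""
    dist = math.hypot(x_meas[0] - x_pos[0], x_meas[1] - x_pos[1])
    return min(1.0, dist / d_a)

def tracking_jacobian(x_pos, x_meas, d_a):
    """1x4 Jacobian: difference normalized by h for the position block,
    zeros for the velocity block."""
    h = tracking_measurement(x_pos, x_meas, d_a)
    if h == 0.0:  # variable already at the measurement point: no preferred direction
        return [0.0, 0.0, 0.0, 0.0]
    return [(x_meas[0] - x_pos[0]) / h, (x_meas[1] - x_pos[1]) / h, 0.0, 0.0]
```

The zero-measurement guard is an added safety check for the sketch; the thesis equations assume $h > 0$ whenever the Jacobian is evaluated.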
https://github.com/unb3rechenbar/TypstPackages
https://raw.githubusercontent.com/unb3rechenbar/TypstPackages/main/styles/LongSymbols/0.1.0/src/lib.typ
typst
#let long-symbol(sym, factor) = {
  assert(type(sym) == "symbol", message: "Input needs to be a symbol")
  assert(
    type(factor) == "integer" or type(factor) == "float",
    message: "Scale factor must be a number",
  )
  assert(factor >= 1, message: "Scale factor must be >= 1")
  factor = 5 * factor - 4
  let body = [#sym]
  style(styles => {
    let (body-w, body-h) = measure(body, styles).values()
    align(left)[
      #box(width: body-w * 2 / 5, height: body-h, clip: true)[
        #align(left)[#body]
      ]
      #h(0cm)
      #box(height: body-h, width: body-w * 1 / 5 * factor)[
        #scale(x: factor * 100%, origin: left)[
          #box(height: body-h, width: body-w * 1 / 5, clip: true)[
            #align(center)[#body]
          ]
        ]
      ]
      #h(0cm)
      #box(width: body-w * 2 / 5, clip: true)[
        #align(right)[#body]
      ]
    ]
  })
}
https://github.com/LDemetrios/Typst4k
https://raw.githubusercontent.com/LDemetrios/Typst4k/master/src/test/resources/suite/visualize/color.typ
typst
// Test color modification methods. --- color-mix --- // Compare both ways. #test-repr(rgb(0%, 30.2%, 70.2%), rgb("004db3")) // Alpha channel. #test(rgb(255, 0, 0, 50%), rgb("ff000080")) // Test color modification methods. #test(rgb(25, 35, 45).lighten(10%), rgb(48, 57, 66)) #test(rgb(40, 30, 20).darken(10%), rgb(36, 27, 18)) #test(rgb("#133337").negate(space: rgb), rgb(236, 204, 200)) #test(white.lighten(100%), white) // Color mixing, in Oklab space by default. #test(rgb(color.mix(rgb("#ff0000"), rgb("#00ff00"))), rgb("#d0a800")) #test(rgb(color.mix(rgb("#ff0000"), rgb("#00ff00"), space: oklab)), rgb("#d0a800")) #test(rgb(color.mix(rgb("#ff0000"), rgb("#00ff00"), space: rgb)), rgb("#808000")) #test(rgb(color.mix(red, green, blue)), rgb("#909282")) #test(rgb(color.mix(red, blue, green)), rgb("#909282")) #test(rgb(color.mix(blue, red, green)), rgb("#909282")) // Mix with weights. #test(rgb(color.mix((red, 50%), (green, 50%))), rgb("#c0983b")) #test(rgb(color.mix((red, 0.5), (green, 0.5))), rgb("#c0983b")) #test(rgb(color.mix((red, 5), (green, 5))), rgb("#c0983b")) #test(rgb(color.mix((green, 5), (white, 0), (red, 5))), rgb("#c0983b")) #test(color.mix((rgb("#aaff00"), 25%), (rgb("#aa00ff"), 75%), space: rgb), rgb("#aa40bf")) #test(color.mix((rgb("#aaff00"), 50%), (rgb("#aa00ff"), 50%), space: rgb), rgb("#aa8080")) #test(color.mix((rgb("#aaff00"), 75%), (rgb("#aa00ff"), 25%), space: rgb), rgb("#aabf40")) // Mix in hue-based space. 
#test(rgb(color.mix(red, blue, space: color.hsl)), rgb("#c408ff")) #test(rgb(color.mix((red, 50%), (blue, 100%), space: color.hsl)), rgb("#5100f8")) // Error: 6-51 cannot mix more than two colors in a hue-based space #rgb(color.mix(red, blue, white, space: color.hsl)) --- color-conversion --- // Test color conversion method kinds #test(rgb(rgb(10, 20, 30)).space(), rgb) #test(color.linear-rgb(rgb(10, 20, 30)).space(), color.linear-rgb) #test(oklab(rgb(10, 20, 30)).space(), oklab) #test(oklch(rgb(10, 20, 30)).space(), oklch) #test(color.hsl(rgb(10, 20, 30)).space(), color.hsl) #test(color.hsv(rgb(10, 20, 30)).space(), color.hsv) #test(cmyk(rgb(10, 20, 30)).space(), cmyk) #test(luma(rgb(10, 20, 30)).space(), luma) #test(rgb(color.linear-rgb(10, 20, 30)).space(), rgb) #test(color.linear-rgb(color.linear-rgb(10, 20, 30)).space(), color.linear-rgb) #test(oklab(color.linear-rgb(10, 20, 30)).space(), oklab) #test(oklch(color.linear-rgb(10, 20, 30)).space(), oklch) #test(color.hsl(color.linear-rgb(10, 20, 30)).space(), color.hsl) #test(color.hsv(color.linear-rgb(10, 20, 30)).space(), color.hsv) #test(cmyk(color.linear-rgb(10, 20, 30)).space(), cmyk) #test(luma(color.linear-rgb(10, 20, 30)).space(), luma) #test(rgb(oklab(10%, 20%, 30%)).space(), rgb) #test(color.linear-rgb(oklab(10%, 20%, 30%)).space(), color.linear-rgb) #test(oklab(oklab(10%, 20%, 30%)).space(), oklab) #test(oklch(oklab(10%, 20%, 30%)).space(), oklch) #test(color.hsl(oklab(10%, 20%, 30%)).space(), color.hsl) #test(color.hsv(oklab(10%, 20%, 30%)).space(), color.hsv) #test(cmyk(oklab(10%, 20%, 30%)).space(), cmyk) #test(luma(oklab(10%, 20%, 30%)).space(), luma) #test(rgb(oklch(60%, 40%, 0deg)).space(), rgb) #test(color.linear-rgb(oklch(60%, 40%, 0deg)).space(), color.linear-rgb) #test(oklab(oklch(60%, 40%, 0deg)).space(), oklab) #test(oklch(oklch(60%, 40%, 0deg)).space(), oklch) #test(color.hsl(oklch(60%, 40%, 0deg)).space(), color.hsl) #test(color.hsv(oklch(60%, 40%, 0deg)).space(), color.hsv) 
#test(cmyk(oklch(60%, 40%, 0deg)).space(), cmyk) #test(luma(oklch(60%, 40%, 0deg)).space(), luma) #test(rgb(color.hsl(10deg, 20%, 30%)).space(), rgb) #test(color.linear-rgb(color.hsl(10deg, 20%, 30%)).space(), color.linear-rgb) #test(oklab(color.hsl(10deg, 20%, 30%)).space(), oklab) #test(oklch(color.hsl(10deg, 20%, 30%)).space(), oklch) #test(color.hsl(color.hsl(10deg, 20%, 30%)).space(), color.hsl) #test(color.hsv(color.hsl(10deg, 20%, 30%)).space(), color.hsv) #test(cmyk(color.hsl(10deg, 20%, 30%)).space(), cmyk) #test(luma(color.hsl(10deg, 20%, 30%)).space(), luma) #test(rgb(color.hsv(10deg, 20%, 30%)).space(), rgb) #test(color.linear-rgb(color.hsv(10deg, 20%, 30%)).space(), color.linear-rgb) #test(oklab(color.hsv(10deg, 20%, 30%)).space(), oklab) #test(oklch(color.hsv(10deg, 20%, 30%)).space(), oklch) #test(color.hsl(color.hsv(10deg, 20%, 30%)).space(), color.hsl) #test(color.hsv(color.hsv(10deg, 20%, 30%)).space(), color.hsv) #test(cmyk(color.hsv(10deg, 20%, 30%)).space(), cmyk) #test(luma(color.hsv(10deg, 20%, 30%)).space(), luma) #test(rgb(cmyk(10%, 20%, 30%, 40%)).space(), rgb) #test(color.linear-rgb(cmyk(10%, 20%, 30%, 40%)).space(), color.linear-rgb) #test(oklab(cmyk(10%, 20%, 30%, 40%)).space(), oklab) #test(oklch(cmyk(10%, 20%, 30%, 40%)).space(), oklch) #test(color.hsl(cmyk(10%, 20%, 30%, 40%)).space(), color.hsl) #test(color.hsv(cmyk(10%, 20%, 30%, 40%)).space(), color.hsv) #test(cmyk(cmyk(10%, 20%, 30%, 40%)).space(), cmyk) #test(luma(cmyk(10%, 20%, 30%, 40%)).space(), luma) #test(rgb(luma(10%)).space(), rgb) #test(color.linear-rgb(luma(10%)).space(), color.linear-rgb) #test(oklab(luma(10%)).space(), oklab) #test(oklch(luma(10%)).space(), oklch) #test(color.hsl(luma(10%)).space(), color.hsl) #test(color.hsv(luma(10%)).space(), color.hsv) #test(cmyk(luma(10%)).space(), cmyk) #test(luma(luma(10%)).space(), luma) #test(rgb(1, 2, 3).to-hex(), "#010203") #test(rgb(1, 2, 3, 4).to-hex(), "#01020304") #test(luma(40).to-hex(), "#282828") #test-repr(cmyk(4%, 
5%, 6%, 7%).to-hex(), "#e0dcda") #test-repr(rgb(cmyk(4%, 5%, 6%, 7%)), rgb(87.84%, 86.27%, 85.49%, 100%)) #test-repr(rgb(luma(40%)), rgb(40%, 40%, 40%)) #test-repr(cmyk(luma(40)), cmyk(63.24%, 57.33%, 56.49%, 75.88%)) #test-repr(cmyk(rgb(1, 2, 3)), cmyk(66.67%, 33.33%, 0%, 98.82%)) #test-repr(luma(rgb(1, 2, 3)), luma(0.73%)) #test-repr(color.hsl(luma(40)), color.hsl(0deg, 0%, 15.69%)) #test-repr(color.hsv(luma(40)), color.hsv(0deg, 0%, 15.69%)) #test-repr(color.linear-rgb(luma(40)), color.linear-rgb(2.12%, 2.12%, 2.12%)) #test-repr(color.linear-rgb(rgb(1, 2, 3)), color.linear-rgb(0.03%, 0.06%, 0.09%)) #test-repr(color.hsl(rgb(1, 2, 3)), color.hsl(-150deg, 50%, 0.78%)) #test-repr(color.hsv(rgb(1, 2, 3)), color.hsv(-150deg, 66.67%, 1.18%)) #test-repr(oklab(luma(40)), oklab(27.68%, 0.0, 0.0, 100%)) #test-repr(oklab(rgb(1, 2, 3)), oklab(8.23%, -0.004, -0.007, 100%)) #test-repr(oklch(oklab(40%, 0.2, 0.2)), oklch(40%, 0.283, 45deg, 100%)) #test-repr(oklch(luma(40)), oklch(27.68%, 0.0, 72.49deg, 100%)) #test-repr(oklch(rgb(1, 2, 3)), oklch(8.23%, 0.008, 240.75deg, 100%)) --- color-spaces --- // The different color spaces #let col = rgb(50%, 64%, 16%) #box(square(size: 9pt, fill: col)) #box(square(size: 9pt, fill: rgb(col))) #box(square(size: 9pt, fill: oklab(col))) #box(square(size: 9pt, fill: oklch(col))) #box(square(size: 9pt, fill: luma(col))) #box(square(size: 9pt, fill: cmyk(col))) #box(square(size: 9pt, fill: color.linear-rgb(col))) #box(square(size: 9pt, fill: color.hsl(col))) #box(square(size: 9pt, fill: color.hsv(col))) --- color-space --- // Test color kind method. #test(rgb(1, 2, 3, 4).space(), rgb) #test(cmyk(4%, 5%, 6%, 7%).space(), cmyk) #test(luma(40).space(), luma) #test(rgb(1, 2, 3, 4).space() != luma, true) --- color-components --- // Test color '.components()' without conversions #let test-components(col, ref, has-alpha: true) = { // Perform an approximate scalar comparison. 
let are-equal((a, b)) = { let to-float(x) = if type(x) == angle { x.rad() } else { float(x) } let epsilon = 1e-4 // The maximum error between both numbers test(type(a), type(b)) calc.abs(to-float(a) - to-float(b)) < epsilon } let ref-without-alpha = if has-alpha { ref.slice(0, -1) } else { ref } test(col.components().len(), ref.len()) assert(col.components().zip(ref).all(are-equal)) assert(col.components(alpha: false).zip(ref-without-alpha).all(are-equal)) } #test-components(rgb(1, 2, 3, 4), (0.39%, 0.78%, 1.18%, 1.57%)) #test-components(luma(40), (15.69%, 100%)) #test-components(luma(40, 50%), (15.69%, 50%)) #test-components(cmyk(4%, 5%, 6%, 7%), (4%, 5%, 6%, 7%), has-alpha: false) #test-components(oklab(10%, 0.2, 0.4), (10%, 0.2, 0.4, 100%)) #test-components(oklch(10%, 0.2, 90deg), (10%, 0.2, 90deg, 100%)) #test-components(oklab(10%, 50%, 200%), (10%, 0.2, 0.8, 100%)) #test-components(oklch(10%, 50%, 90deg), (10%, 0.2, 90deg, 100%)) #test-components(color.linear-rgb(10%, 20%, 30%), (10%, 20%, 30%, 100%)) #test-components(color.hsv(10deg, 20%, 30%), (10deg, 20%, 30%, 100%)) #test-components(color.hsl(10deg, 20%, 30%), (10deg, 20%, 30%, 100%)) --- color-luma --- // Test gray color conversion. #stack(dir: ltr, rect(fill: luma(0)), rect(fill: luma(80%))) --- color-rgb-out-of-range --- // Error for values that are out of range. 
// Error: 11-14 number must be between 0 and 255 #test(rgb(-30, 15, 50)) --- color-rgb-bad-string --- // Error: 6-11 color string contains non-hexadecimal letters #rgb("lol") --- color-rgb-missing-argument-red --- // Error: 2-7 missing argument: red component #rgb() --- color-rgb-missing-argument-blue --- // Error: 2-11 missing argument: blue component #rgb(0, 1) --- color-rgb-bad-type --- // Error: 21-26 expected integer or ratio, found boolean #rgb(10%, 20%, 30%, false) --- color-luma-unexpected-argument --- // Error: 10-20 unexpected argument: key #luma(1, key: "val") --- color-mix-bad-amount-type --- // Error: 12-24 expected float or ratio, found string // Error: 26-39 expected float or ratio, found string #color.mix((red, "yes"), (green, "no"), (green, 10%)) --- color-mix-bad-value --- // Error: 12-23 expected a color or color-weight pair #color.mix((red, 1, 2)) --- color-mix-bad-space-type --- // Error: 31-38 expected `rgb`, `luma`, `cmyk`, `oklab`, `oklch`, `color.linear-rgb`, `color.hsl`, or `color.hsv`, found string #color.mix(red, green, space: "cyber") --- color-mix-bad-space-value-1 --- // Error: 31-36 expected `rgb`, `luma`, `cmyk`, `oklab`, `oklch`, `color.linear-rgb`, `color.hsl`, or `color.hsv` #color.mix(red, green, space: image) --- color-mix-bad-space-value-2 --- // Error: 31-41 expected `rgb`, `luma`, `cmyk`, `oklab`, `oklch`, `color.linear-rgb`, `color.hsl`, or `color.hsv` #color.mix(red, green, space: calc.round) --- color-cmyk-ops --- // Test CMYK color conversion. #let c = cmyk(50%, 64%, 16%, 17%) #stack( dir: ltr, spacing: 1fr, rect(width: 1cm, fill: cmyk(69%, 11%, 69%, 41%)), rect(width: 1cm, fill: c), rect(width: 1cm, fill: c.negate(space: cmyk)), ) #for x in range(0, 11) { box(square(size: 9pt, fill: c.lighten(x * 10%))) } #for x in range(0, 11) { box(square(size: 9pt, fill: c.darken(x * 10%))) } --- color-outside-srgb-gamut --- // Colors outside the sRGB gamut. 
#box(square(size: 9pt, fill: oklab(90%, -0.2, -0.1))) #box(square(size: 9pt, fill: oklch(50%, 0.5, 0deg))) --- color-rotate-hue --- // Test hue rotation #let col = rgb(50%, 64%, 16%) // Oklch #for x in range(0, 11) { box(square(size: 9pt, fill: rgb(col).rotate(x * 36deg))) } // HSL #for x in range(0, 11) { box(square(size: 9pt, fill: rgb(col).rotate(x * 36deg, space: color.hsl))) } // HSV #for x in range(0, 11) { box(square(size: 9pt, fill: rgb(col).rotate(x * 36deg, space: color.hsv))) } --- color-saturation --- // Test saturation #let col = color.hsl(180deg, 0%, 50%) #for x in range(0, 11) { box(square(size: 9pt, fill: col.saturate(x * 10%))) } #let col = color.hsl(180deg, 100%, 50%) #for x in range(0, 11) { box(square(size: 9pt, fill: col.desaturate(x * 10%))) } #let col = color.hsv(180deg, 0%, 50%) #for x in range(0, 11) { box(square(size: 9pt, fill: col.saturate(x * 10%))) } #let col = color.hsv(180deg, 100%, 50%) #for x in range(0, 11) { box(square(size: 9pt, fill: col.desaturate(x * 10%))) } --- color-luma-ops --- // Test gray color modification. #test-repr(luma(20%).lighten(50%), luma(60%)) #test-repr(luma(80%).darken(20%), luma(64%)) #test-repr(luma(80%).negate(space: luma), luma(20%)) --- color-transparentize --- // Test alpha modification. 
#test-repr(luma(100%, 100%).transparentize(50%), luma(100%, 50%)) #test-repr(luma(100%, 100%).transparentize(75%), luma(100%, 25%)) #test-repr(luma(100%, 50%).transparentize(50%), luma(100%, 25%)) #test-repr(luma(100%, 10%).transparentize(250%), luma(100%, 0%)) #test-repr(luma(100%, 40%).transparentize(-50%), luma(100%, 70%)) #test-repr(luma(100%, 0%).transparentize(-100%), luma(100%, 100%)) --- color-opacify --- #test-repr(luma(100%, 50%).opacify(50%), luma(100%, 75%)) #test-repr(luma(100%, 20%).opacify(100%), luma(100%, 100%)) #test-repr(luma(100%, 100%).opacify(250%), luma(100%, 100%)) #test-repr(luma(100%, 50%).opacify(-50%), luma(100%, 25%)) #test-repr(luma(100%, 0%).opacify(0%), luma(100%, 0%)) --- issue-color-mix-luma --- // When mixing luma colors, we accidentally used the wrong component. #rect(fill: gradient.linear(black, silver, space: luma)) --- issue-4361-transparency-leak --- // Ensure that transparency doesn't leak from shapes to images in PDF. The PNG // test doesn't validate it, but at least we can discover regressions on the PDF // output with a PDF comparison script. #rect(fill: red.transparentize(50%)) #image("/assets/images/tiger.jpg", width: 45pt)
https://github.com/lublak/typst-echarm-package
https://raw.githubusercontent.com/lublak/typst-echarm-package/main/typst-package/lib.typ
typst
MIT License
#import "@preview/ctxjs:0.1.0"

#let echarts-bytecode = read("echarts.kbc1", encoding: none)
#let echart-helper = read("echarts_helper.js", encoding: none)

#{
  _ = ctxjs.create-context("@preview/echarm")
  _ = ctxjs.eval("@preview/echarm", "function setTimeout(functionRef, delay, ...args) { if(!delay) {functionRef();} }")
  _ = ctxjs.load-module-bytecode("@preview/echarm", echarts-bytecode)
  _ = ctxjs.load-module-js("@preview/echarm", "echarts_helper", echart-helper)
}

#let render(width: auto, height: auto, zoom: 1, options: (:)) = {
  layout(size => {
    let calc_height = height
    let calc_width = width
    if type(calc_height) == ratio {
      calc_height = size.height * calc_height
    } else if type(calc_height) == relative {
      calc_height = size.height * calc_height.ratio + calc_height.length
    }
    if type(calc_width) == ratio {
      calc_width = size.width * calc_width
    } else if type(calc_width) == relative {
      calc_width = size.width * calc_width.ratio + calc_width.length
    }
    calc_height = calc_height.pt()
    calc_width = calc_width.pt()
    image.decode(
      ctxjs.call-module-function(
        "@preview/echarm",
        "echarts_helper",
        "render",
        (
          calc_width / zoom,
          calc_height / zoom,
          options,
        ),
      ),
      width: width,
      height: height,
      format: "svg",
      fit: "cover",
    )
  })
}
https://github.com/wenjia03/JSU-Typst-Template
https://raw.githubusercontent.com/wenjia03/JSU-Typst-Template/main/README.md
markdown
MIT License
# Jishou University (JSU) `Typst` Template Collection

At the moment this is more or less a personal collection of templates....

After checking with the teachers: the course-design and lab-report templates at our university differ a great deal between courses, so they are hard to unify for now, and we are adapting them one by one.... As for graduation thesis templates, the university is currently revising them, so there is no fixed standard yet.

## Why `Typst`?

- **Fast**: `Typst` is built on `Rust` and uses **incremental compilation**, so document length has essentially no impact on compilation speed.
- **Easy to use**: The learning curve is about the same as `markdown`, and the macro syntax is easy to understand.
- **Quick to set up**: All you need is VSCode with two extensions to start using `Typst`! Unlike LaTeX, which weighs in at a dozen-plus GB before you even start, on top of the hard-to-tell-apart distributions (honestly, I can't tell them apart either...).

For more, see the [unofficial Chinese Typst documentation](https://typst-doc-cn.github.io/docs/)~

## Template List

Here is a list of templates for reference, so you can see which courses each one applies to (since the templates differ a lot):

| Template name | College | Based on course | Example |
| ---------------- | -------------------- | ------------------------------------------------ | ------------------------------------------------------------ |
| Lab report (no frame) | College of Computer Science and Engineering | Java Web B (this template also appears to be derived from the Database Principles one) | [Example](https://github.com/wenjia03/JSU-Typst-Template/blob/main/Templates/实验报告(无框)/index.pdf) |

## Usage

### Online

Typst provides an Overleaf-like online editor at [https://typst.app/](https://typst.app/). Although it has no official desktop client, a PWA version is available.

### VSCode

Install the Typst Preview and Tinymist Typst extensions in VSCode, create a file with the `.typ` extension, and you can preview it right away.

There are other very useful extensions as well..

## Templates Completed So Far

- Lab report (no-frame version): the standard currently used by some courses in the College of Computer Science and Engineering

## Based On

- Nanjing University Typst thesis template, MIT Licence
- Shanghai University Typst thesis template, Apache 2.0

Thanks to the above projects for their support.

## Requests / Contributions

- If you need other templates, please open an Issue with the formatting requirements or a Word document.
- To contribute, please submit a Pull Request.

## On Consolidation

- For now there is no plan to merge everything into a single template (because the templates at our university really do differ a lot.....). They are kept in separate folders; once the collection grows, we may consolidate...
https://github.com/polarkac/MTG-Stories
https://raw.githubusercontent.com/polarkac/MTG-Stories/master/stories/020%20-%20Prologue%20to%20Battle%20for%20Zendikar/006_Offers%20to%20the%20Fire.typ
typst
#import "@local/mtgstory:0.2.0": conf #show: doc => conf( "Offers to the Fire", set_name: "Prologue to Battle for Zendikar", story_date: datetime(day: 05, month: 08, year: 2015), author: "<NAME>", doc ) "Chandra, please chant with us," said the abbot Serenok, his heavy sleeve swaying as he gestured to her. Chandra and the monastery's fire monks balanced on floes of solid rock in the lava field, miles from the Keral Keep monastery on Regatha. The air cooked and the landscape wavered with heat. A massive volcano loomed in the distance, belching smoke. Chandra wasn't sure if she was supposed to be here. This was the place where she came into her own as a pyromancer, but it also involved chanting exercises. Chandra put her hand behind her head. "I am not a great chanter," she said. "Then this will be the perfect challenge for you," said Serenok, and his slight smile wrinkled the skin around his eyes. As the abbot, the head teacher of Keral Keep, Serenok counseled the acolytes in the ways of fire magic. Like Mother Luti, the matriarch of the monastery, Serenok had pushed Chandra's talents in fire magic, prodding her to go further in her studies and her understanding of herself. #figure(image("006_Offers to the Fire/01.jpg", width: 100%), caption: [Mountain | Art by Sam Burley], supplement: none, numbering: none) "You have talents that remind me of the one who inspired the founding of this Keep," Serenok said. "I'm not her, Serenok," Chandra said. "You can aspire to be," he answered. "One day." He meant Jaya. J<NAME>—a famous pyromancer. But to Chandra she was a kind of myth, a presence that hung in the air of the Keep. Jaya's pyromantic sayings were on every monk's lips and carved into the halls, and her pair of heat-shield goggles rested on a pedestal inside the monastery. Some of Jaya's spells had become ritualized exercises that the fire acolytes practiced—like today, here, in the middle of a lava field. Chandra knew that the lessons had been useful. 
She could have done without the comparisons. "We can all aspire to be more like Jaya," said Serenok to the others. "Let us begin the exercise." He led the chant. The abbot's voice was clear, but his lungs sounded papery and thin. Chandra noticed his age in the effort of his movements—he had to fight the hunch of his back to lift his head and sing. The other pyromancer initiates joined their voices together, and soon the air filled with sound. Teeth-clashing consonants punctuated low, drawn-out vowels. They moved in concert as well, their feet gliding over the surface of the rock floes, their arms flowing like the heat-distorted air. As they danced, tongues of fire rose from the lava, growing into a fiery circle around them. The pyromantic dance was beautiful, and Chandra danced, too. She embraced herself, tipped her head back, and spun, whipping her arms out and spraying fire from her fingertips. As she moved, she looked up to watch the smoke from the lava field rise into the Regathan sky along with the chant of the monks. Chandra wondered if this was how Jaya would have felt. Did she bring this particular chant to Regatha? <NAME> knew of Planeswalkers, and had told Chandra that Jaya was one. What words from far-off planes would she have sung to summon the fire? Chandra took in a breath and let out a sound that matched how the dance made her feel—a high, warbling song that rose and rose as she spun. "Chandra," said Serenok. "The spell will fail if you don't participate." Chandra stopped and looked around. The other monks had stopped dancing and were looking at her. The chant had subsided, and the circle of fire with it. "I thought I was," said Chandra. Her hair sparked with flames, but she quickly extinguished them with her hand. "You have to learn to #emph[channel] that passion into the lessons," said Serenok. "Only together will the spell work and generate the strongest fire. You must dedicate yourself to it." 
#figure(image("006_Offers to the Fire/02.jpg", width: 100%), caption: [Abbot of Keral Keep | Art by Deruchenko Alexander], supplement: none, numbering: none) "I'm trying," said Chandra. "Try harder," said Serenok, his voice ragged. "These are my final days. Show an old monk what he wants to see." "Don't say that." Serenok clapped his hands. "Let us start again. Chandra, remember what you've learned. The lessons are not meant to constrain you. They are meant to help you grow." The abbot put his hands out at his sides and tipped his head back. Chandra saw a line of sweat roll down his cheek—was he casting a spell? A cracking, crumbling sound from under the volcanic field. The landscape jolted, and Chandra and the monks stumbled on their slabs of rock. "What is it?" asked one of the acolytes, whipping his head around. "An earthquake?" asked another monk. A mound of molten rock rose from the lava plain, cutting them off from the direction of the monastery. Something alive was rising out of the lava—something alive, and big. "Quickly," said Serenok. "Let us chant again." "Is this the best time to be practicing our singing?" Chandra asked. "Raise the defenses, as I've taught you. Quickly!" Serenok sang out, and the monks joined him, resuming their dance. The ring of fire encircled them again and began to rise. Something gigantic burst forth from the lava bed. A toothy, scaly head ringed with stony tentacles rose high, followed by a long pillar of a neck with crawling spikes along its edges. The thing was the size of a wurm, but was clearly adapted to swimming through volcanic rock. #figure(image("006_Offers to the Fire/03.jpg", width: 100%), caption: [<NAME> | Art by <NAME>], supplement: none, numbering: none) "Hellion!" shouted one of the monks. "Maintain the chant!" shouted Serenok. The hellion roared into the sky, belching fire and sulfurous gas. It rotated its body to regard the monks, its tendrils wavering. 
Its mouth was big enough to engulf any of them in a single bite, but Chandra thought it looked like it would probably enjoy chewing. The monks chanted and danced, and the wall of fire rose around them, blocking out the hellion. As the flames rose, Chandra looked around to the fire monks and tried to imitate the sound and cadence. #emph[Let's go, Nalaar] , she thought. #emph[This same chant is carved into the walls all over the Keep] . #emph[You can do this.] She looked at Serenok and tried to mimic his movements. The wall of flame grew, but it was sputtering, and not reaching high enough. Chandra knew it was her fault—she was rushing the words, mixing up the sounds, stumbling on the dance. The hellion would be able to bend its head over and snap at any of the monks— She saw the beast's head rear back over the rising wall of flames. Its mouth-tendrils flexed, and it roared. Chandra gave up trying to chant. She turned to face the hellion, and her hair and hands erupted in a blaze. She whipped two missiles of fire at its belly, but its heat-resistant scales were barely singed. The hellion attacked, its head slashing down through the flame wall, aiming directly at an acolyte. The acolyte dodged, leaping onto another floe of rock as the hellion crashed through the one that had supported him. The fire barrier's flames singed the hellion's scales, but again, the meager heat didn't even slow it down. Chandra gritted her teeth and balanced herself on the balls of her feet. As the hellion dipped its head down, she sprung onto its side, grabbed hold of its body spines, and clambered onto its back. The hellion immediately bucked and shrieked, and its tendrils lashed out to try to grab at Chandra. She grabbed two of the tendrils, planted her feet on its ridges, and steadied herself on its back. "I've got him!" Chandra shouted. The hellion swiveled and twisted its body around, and suddenly its back was its front. Now Chandra was dangling from its tendrils, her legs flailing. 
"Aim for its head!" said an acolyte, conjuring a fire spell. "Don't aim for its head!" yelled Chandra, who was dangling from the beast's head. Chandra held on, swinging. She angled her body to get purchase with her feet and kicked, flinging herself up and over to the top of the hellion's head. The hellion snapped and bucked, but Chandra heated her hands white-hot, and her fingers pierced into its rugged hide. She clamped herself on. Chandra wondered what she had gotten herself into—not an unfamiliar feeling. The monster's tendrils were thicker here at the head, and they flailed at her, some of them striking pieces of armor, but others lashing skin. She grimaced at her white-hot hands, half buried into the monster's armor, and knew she couldn't sustain such heat for long. She needed more heat, more than she could create alone. "I have a bad idea," she shouted to the others. "I changed my mind. When I say so, #emph[aim for me] !" Chandra timed her move with the next twist of the hellion's body. She released her grip and half-ran, half-slid down its belly, fighting off its lashing tentacles with a sweeping arc of fire. Once she reached its belly, near where it emerged from the lava field, she grabbed on with both hands and turned back to look it in the eye. With a grunt of effort, she punched her white-hot hand into its armor. The hellion lunged its head down at her as a reflex, and Chandra leaped. As its toothy maw came down, Chandra leapt out of the way, and the hellion bit hard into the ridged plates of its own belly. For the moment, its jaws were stuck. Chandra landed on one of the rock floes. She turned to face the monks, beckoning to them. "Now!" she yelled. "Fire! Right at me!" The other monks stood rigid, looking to Serenok. This wasn't any part of their teachings. The abbot looked at Chandra in the eyes for an aching second, deciding. The hellion shrieked and jerked, its teeth stuck in its own rock-hard armor. "Now!" pleaded Chandra. "Do it!" 
Serenok looked to the acolytes, and nodded. The fire monks shouted, thrust their hands out, and unleashed a dozen separate fire spells at Chandra. #figure(image("006_Offers to the Fire/04.jpg", width: 100%), caption: [Ravaging Blaze | Art by <NAME>], supplement: none, numbering: none) Chandra had only a moment as the comets and spheres of flame raced toward her. She timed her move, then twirled around on her feet, guiding the fire spells with her hands as she spun. In one motion, she wove the strands of fire together into a needle-sharp streak of blindingly hot fire. She curved it around her body, feeling its skin-crackling heat as it passed, and guided it right into the hellion's head. The spike of fire pierced into the armored hide of the monster's forehead and drilled deep, hitting tender tissue. With a whip of its body and a shriek, the hellion thrashed itself free, finally dislodging its clamping maw from its plates. It lifted its head, flared its tendrils with a roar, and then dived back down into the lava plain. Waves rolled through the lava field, and subsided. No one said anything for a moment, until they were sure it wasn't coming back immediately to devour them. Chandra leaned her hands on her knees as she caught her breath. "Sorry I ruined the chant," she said. Her mane of flame slowly abated, turning into her usual hair. One lock of hair stuck out crazily. "That was it," said Serenok with a soot-smudged smile. He coughed into his fist, but it didn't stifle his grin. "You've done what I've only seen one other person ever do. You're ready. You're #emph[her] ." #v(0.35em) #line(length: 100%, stroke: rgb(90%, 90%, 90%)) #v(0.35em) "Chandra, get up." Chandra was back in her bed at the monastery. She had a sinking feeling that it might, in fact, be that time people call morning. To make matters significantly not-better, she could tell it was <NAME>'s voice at her door. "Chandra," Luti repeated. "Up. It's midday." "How can you tell?" 
Chandra murmured, without budging. "From the inside of my eyelids, it looks exactly like sleeping time." "It's Serenok." Chandra sat up finally. "Listen," she said, sighing the sleep out of her brain. "If he wants to talk about chanting exercises, tell him tomorrow might be better—" "Chandra. Serenok is dead." #v(0.35em) #line(length: 100%, stroke: rgb(90%, 90%, 90%)) #v(0.35em) The abbot's memorial was short, held on the rocky clearing just outside the steps of Keral Keep. They were the same broad, stone steps Chandra had ascended when she first became a Planeswalker. Many of the fire monks gathered there were among those who welcomed her here as a perplexed young pyromancer. #figure(image("006_Offers to the Fire/05.jpg", width: 100%), caption: [Mount Keralia | Art by <NAME>], supplement: none, numbering: none) "We were all Serenok's acolytes," Mother Luti was saying. "All of us who knew him learned lessons from his life of flame and passion, and from his dedication as abbot of this Keep." Chandra was crying—half in bewilderment, half in a kind of anticipatory grief. She could tell she wasn't feeling the brunt of the pain yet. She could feel it coming, like a presence moving toward her in the dark. "Serenok's body gave out as he slept last night," Luti continued. "And in his passing—because he was a teacher to the end—he bestowed on us a final lesson. He showed us that in the time we have, we must choose a path and devote ourselves to it. We must find the fire within us, and let it grow, and offer our lives to it. And we must see that fire stoked in the hearts of others." She clasped her hands together. "Goodbye, Serenok." The monks lowered their heads. Their hooded robes fell down over their faces. When the ceremony concluded, Chandra did not return to the monastery with the others. She walked away from the Keep, deeper into the mountains. She heard Luti calling after her, but she did not turn back. 
#v(0.35em) #line(length: 100%, stroke: rgb(90%, 90%, 90%)) #v(0.35em) A relentless Regathan day became a roasting Regathan night, with smokestorms swirling in the skies like the snarl of emotions inside Chandra. It was dark enough now that she couldn't see the outline of the great volcano against the sky. But she could make out the thin lines of lava meandering down its side. At this distance, they didn't look like they were flowing at all—she could imagine the glowing threads trickling down, or if she changed her perspective, she could see them climbing their way up the slope and into its heart. Chandra settled down under an overhang of rock, below a nest of cindermoths. She watched a stream of the moths spiraling up into the night with tiny wings of flame. She had always chafed at Serenok's expectations of her. But would it have killed her to learn the chants, and practice the exercises along with the others? Would it have been so bad to live up to what he saw in her? She cried and thought not of Serenok's lessons, but of his kindness, his encouragement. She felt a hollow place inside her, a deep well edged in pain. She had expected a wave of emotion at the teacher's death, something more tangible that she could lean against, something she could defy. There was no defying this hollowness. It was not something she could fight. She could only live in the empty space of it. After a time, she wanted her bed more than solitude. She made her way back to the monastery through high-walled mountain passes, blasting fire spells into the darkness ahead of her. Cindermoths swirled in her wake. #v(0.35em) #line(length: 100%, stroke: rgb(90%, 90%, 90%)) #v(0.35em) It was morning when her footsteps finally brought her back to the monastery. Mother Luti sat on Keral Keep's stone steps, a folded garment on her lap. It was Serenok's robe, the abbot's mantle, threaded with igneous filaments. "Why would you greet me with this?" asked Chandra. 
Her muscles were weary and her heart was a hurricane, a storm swirling around an empty space. She had never seen the robe when it was not draped over Serenok's shoulders. Her eyes flashed. "Are you intentionally trying to hurt me?" "Chandra, listen," Luti began. "No, I understand," Chandra said, stepping in close to Luti's face. "Serenok is dead, but the lessons must go on! We all need to gather in the big hall before his robe has even gone cold, right? Because we need to fill that hole. That's what you're here to tell me, isn't it? That it's been hours, and we've moved on, and we need to choose a new abbot?" "No, Chandra," said Luti, looking down at Serenok's mantle. "I'm here to tell you we've already chosen one." #v(0.35em) #line(length: 100%, stroke: rgb(90%, 90%, 90%)) #v(0.35em) "I can't," said Chandra, for what felt like the hundredth time. "I am no monk. I am certainly no head abbot." She sat at a long granite table in the heart of the monastery, surrounded by elder monks with robes in colors of flame. Serenok's mantle lay folded on the table before her. "As Serenok always said, you are one of the most talented pyromancers ever to grace Keral Keep," said Luti, her hands clasped together, her face kind. "He saw you as resourceful, creative, forthright. Your words and your magic always come from the heart, just like—" Chandra winced. "—just like Jaya. We could all learn from your example." That was kind to say, but they weren't actually listening to her. She could feel a haze rising over her vision. "I could never take Serenok's place! I'm no teacher. I'm barely even a student. I'm sorry, but I have to refuse." Some of the monks looked at each other. #figure(image("006_Offers to the Fire/06.jpg", width: 100%), caption: [Acolyte of the Inferno | Art by Joseph Meehan], supplement: none, numbering: none) "Chandra, it is a great honor to be asked to be abbot," said another monk, his long beard nearly touching the stone table. 
"If the mantle is offered, it is your responsibility. You #emph[must] accept." "Hey," Chandra said, slamming her fists on the table, on either side of Serenok's folded robe. Her hair momentarily crackled with flames. "Let me advise you now. Talking about what I #emph[must] do is not a good way to persuade me." Mother Luti's mouth was a thin line. "Serenok knew he was at the end of his life, Chandra. He was testing you. He saw something in you." "Serenok believed I was someone I'm not," said Chandra. "Please believe me. You do not want me heading this place. I don't know the chants. I foul up the footwork. I'm not the best at anything you do here." "Then, as Serenok was fond of saying, this will be the perfect challenge for you," said Luti. The words jabbed her in the chest. She sat back, and her shoulders dropped. She squeezed her fists into her eye sockets—whether to stifle tears or to blot out what she saw around her, she did not know. She opened her eyes and looked at the faces of the fire monks around her. This place, these people who had taught her so much, wanted her to teach them. If she stayed, she could show them how much it meant to her, when they took her in that day all those years ago—when she came to them as a scared orphan from another world. "Do you think I actually could?" Chandra asked. The monks all nodded. <NAME> stood, spreading out her hands. "<NAME>, will you take up Serenok's mantle, and be our Jaya? Will you guide us in the principles of Keralian pyromancy? Will you teach us the ways of fire?" Chandra stood, surrounded by her peers. Something about this hall felt safe, like the disheveled blankets of her bed. Maybe she could commit to this path. Jaya only passed through Regatha briefly—maybe she could be the Jaya who didn't just pass through, but #emph[stayed] . Maybe becoming a fire-blasting abbot could be fun—and a way to start to fill that painful pit in her heart. 
As she stood before the table, trying to find words, two men—dressed in distinctly non-Regathan garb—rushed into the chamber. #v(0.35em) #line(length: 100%, stroke: rgb(90%, 90%, 90%)) #v(0.35em) One of them was broad-chested and chin-bearded, in sturdy armor, and the other was slighter, smooth-cheeked, in a rune-covered blue hooded cloak. #figure(image("006_Offers to the Fire/07.jpg", width: 100%), caption: [Jace, Telepath Unbound | Art by <NAME>], supplement: none, numbering: none) All heads turned to the two men. They looked right at Chandra, and she recognized them both. Chandra sputtered. "What is—what is this?" "It's good to see you, Chandra," said <NAME>. "We need your help." "#emph[It's about Zendikar] ," came <NAME>'s voice in her head. Chandra brought the other two Planeswalkers outside, down the steps of the Keep toward the slopes of Mount Keralia. Two Planeswalkers from separate moments in her past—reminders of other planes, other times in her life—showing up here, just as she felt connected to the people of Keral Keep. She tried to find room in her brain for the entirety of the juxtaposition. "So," she said. "Gideon. Coming by to enforce some laws? Who are you after now?" "The Eldrazi," said Gideon. "And you," Jace added. "We've got a mission. We could use a pyromancer." "Well, your timing is #emph[appallingly] bad." "I'm sorry if this is a difficult time," said Gideon. "And if you need to be here, you need to be here. But we need you, Chandra. Zendikar needs you." Hot air swirled in her chest. "Zendikar said that? Is that a direct quote?" Chandra paced back and forth, not sure what to do with the animated temper she suddenly felt. "How sweet of you both to think of me. How in hell do you two even. . . ?" Gideon tipped his head at Jace. "Met recently, on Ravnica." "So you're bouncing around from world to world, looking for people to uproot? Is that it?" Gideon opened his mouth and closed it. 
In that silent moment, Chandra seemed to hear whispers of a world of tortures he had endured. #figure(image("006_Offers to the Fire/08.jpg", width: 100%), caption: [Gideon, Champion of Justice | Art by <NAME>], supplement: none, numbering: none) Chandra felt a streak of empathy for him fighting with her stubbornness. "Gideon, you #emph[know] my history here. You of all people should know that I've made #emph[sacrifices] for this world." The pyre of the distant volcano glinted in Gideon's armor. "It's not the only world that needs a sacrifice." Chandra rubbed her temples under her goggles. The only thought that came to her mind was: #emph[Jaya would go with them] . Jaya wouldn't have hesitated to rocket off to a new adventure, to dive into some crisis where she could unleash her fire magic and blast everything in sight. The temptation made Chandra's heart quicken, despite herself. And when she thought of people suffering, that she could help— "Remember," Jace added. "You have a hand in Zendikar's current state. We have a debt to pay, you and I. Like it or not, we have a #emph[responsibility] ." Chandra's eyes literally blazed. She spoke slowly, with as much calm as she could muster through gritted teeth. "Can everyone. Please. Stop talking to me. #emph[About] #emph[responsibility] ." Gideon clenched his hands together. "Chandra," he said, and his hands touched his chest. For him, that small gesture was like open pleading, an expression of surprising need. #emph[Jaya would go with them. Jaya would go with them.] "Go," she said. Gideon looked at Jace, and back to her. He tried to step toward her, to reach out and touch her arm. But Chandra glared at him and a ring of fire grew up around her, encircling her with a personal wall of flame. #figure(image("006_Offers to the Fire/09.jpg", width: 100%), caption: [Chandra, Roaring Flame | Art by <NAME>], supplement: none, numbering: none) "#emph[This] is where I'm needed most," said Chandra. She folded her arms. 
"I belong here. I've made a promise." And in her heart, it was true. "Gideon," said Jace. "I think we're done here." Gideon looked in Chandra's eyes for a long time. Then he nodded and said, "If you change your mind, find us at Sea Gate." He barely glanced at Jace's boots. "Let's go." When they planeswalked away, the air smudged, obscuring her view for a moment. When they had disappeared, she looked beyond at the stone steps leading up into Keral Keep—and she saw <NAME> standing there, looking on from the doorway, with Serenok's mantle in her hands. Chandra nodded to <NAME>, and walked up the steps toward her.
https://github.com/storopoli/Bayesian-Statistics
https://raw.githubusercontent.com/storopoli/Bayesian-Statistics/main/slides/03-priors.typ
typst
Creative Commons Attribution Share Alike 4.0 International
#import "@preview/polylux:0.3.1": * #import themes.clean: * #import "utils.typ": * #new-section-slide("Priors") #slide(title: "Recommended References")[ - #cite(<gelman2013bayesian>, form: "prose"): - Chapter 2: Single-parameter models - Chapter 3: Introduction to multiparameter models - #cite(<mcelreath2020statistical>, form: "prose") - Chapter 4: Geocentric Models - #cite(<gelman2020regression>, form: "prose"): - Chapter 9, Section 9.3: Prior information and Bayesian synthesis - Chapter 9, Section 9.5: Uniform, weakly informative, and informative priors in regression - #cite(<vandeschootBayesianStatisticsModelling2021>, form: "prose") ] #focus-slide(background: julia-purple)[ #align(center)[#image("images/memes/priors.jpg")] ] #slide(title: "Prior Probability")[ Bayesian statistics is characterized by the use of prior information as the prior probability $P(θ)$, often just called the prior: $ underbrace(P(θ | y), "Posterior") = (overbrace(P(y | θ), "Likelihood") dot overbrace(P(θ), "Prior")) / underbrace(P(y), "Normalizing Constant") $ ] #slide(title: "The Subjectivity of the Prior")[ - Many criticisms of Bayesian statistics are due to the subjectivity involved in eliciting prior probabilities for certain hypotheses or model parameters' values. - Subjectivity is something unwanted in the ideal picture of the scientist and the scientific method. - Anything that involves human action will never be free from subjectivity. We have subjectivity in everything, and science is #text(fill: julia-red)[no] exception. - The creative and deductive process of theory and hypothesis formulation is *not* objective. - Frequentist statistics, which bans the use of prior probabilities, is also subjective, since there is *A LOT* of subjectivity in choosing the model and likelihood function @jaynesProbabilityTheoryLogic2003 @vandeschootBayesianStatisticsModelling2021. ] #slide(title: "How to Incorporate Subjectivity")[ - Bayesian statistics *embraces* subjectivity while frequentist statistics *bans* it. 
- For Bayesian statistics, *subjectivity guides our inferences* and leads to more robust and reliable models that can assist in decision making. - For frequentist statistics, on the other hand, *subjectivity is taboo* and all inferences should be objective, even if this resorts to *hiding and omitting model assumptions*. - Bayesian statistics also has assumptions and subjectivity, but these are *declared and formalized*. ] #slide(title: "Types of Priors")[ In general, we can have 3 types of priors in a Bayesian approach @gelman2013bayesian @mcelreath2020statistical @vandeschootBayesianStatisticsModelling2021: - *uniform (flat)*: not recommended. - *weakly informative*: small amounts of real-world information added, along with common sense and low specific domain knowledge. - *informative*: introduction of medium to high domain knowledge. ] #slide(title: "Uniform Prior (Flat)")[ Starts from the premise that "everything is possible". There are no limits on the degrees of belief that the distribution of certain values may take, nor any sort of restrictions. Flat and super-vague priors are not usually recommended, and some thought should be included to have at least weakly informative priors. Formally, a uniform prior is a uniform distribution over the entire support of the possible values: - *model parameters*: ${θ ∈ RR : -oo < θ < oo}$ - *model error or residuals*: ${σ ∈ RR^+ : 0 < σ < oo}$ ] #slide(title: "Weakly Informative Prior")[ Here we start to have "educated" guesses about our parameter values. Hence, we don't start from the premise that "anything is possible". #v(1em) I recommend always transforming the variables of the problem at hand into something centered at $0$ with a standard deviation of $1$ #footnote[ this is called standardization, transforming all variables into $μ = 0$ and $σ = 1$. 
]: - $θ tilde "Normal"(0, 1)$ (<NAME>'s preferred choice #footnote[see more about prior choices in the #link( "https://github.com/stan-dev/stan/wiki/Prior-Choice-Recommendations", )[Stan's GitHub wiki].] <fn-priors-1>) - $θ tilde "Student"(ν=3, 0, 1)$ (Aki Vehtari's preferred choice #footnote(<fn-priors-1>)) ] #slide(title: "An Example of a Robust Prior")[ #text(size: 18pt)[ A nice example comes from a lecture by <NAME> #footnote[ https://youtu.be/p6cyRBWahRA; in case you want to see the full video, the section about priors relevant to this argument begins at minute 40. ] (Columbia professor and member of Stan's research group). He discusses one of the biggest effect sizes observed in the social sciences. In the exit polls for the 2008 USA presidential election (Obama vs McCain), there was, in general, around 40% support for Obama. If you changed the respondent's race from non-black to black, this was associated with an increase of 60 percentage points in the probability of the respondent voting for Obama. On the log-odds scale, this 2.5x increase (from 40% to almost 100%) would be equivalent, on a Bernoulli/logistic/binomial model, to a coefficient value of $approx 0.92$ #footnote[ $log("odds ratio") = log(2.5) = 0.9163$. ]. This effect size would easily be accommodated by a $"Normal"(0, 1)$ prior. ] ] #slide(title: "Informative Prior")[ In some contexts, it is interesting to use an informative prior. Good candidates are when data is scarce or expensive and prior knowledge about the phenomenon is available. #v(1em) Some examples: - $"Normal"(5, 20)$ - $"Log-Normal"(0, 5)$ - $"Beta"(100, 9803)$ #footnote[ this is used in COVID-19 models from the #link("https://codatmo.github.io")[CoDatMo Stan] research group. ] ]
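#slide(title: "Checking the Effect-Size Arithmetic")[ The robust-prior example can be verified with a few lines of plain Python; this is an illustrative sketch (the data in `xs` is made up for the demonstration) showing that the quoted coefficient is just $log(2.5)$, and that standardization rescales any variable to $μ = 0$, $σ = 1$, which is what makes a $"Normal"(0, 1)$ prior a sensible default:

```python
import math

# Coefficient implied by the 2.5 odds ratio quoted on the slide:
# log(2.5) ≈ 0.9163, comfortably within a Normal(0, 1) prior.
odds_ratio = 2.5
coef = math.log(odds_ratio)
print(round(coef, 4))  # 0.9163

# Standardization: rescale a variable to mean 0 and standard deviation 1.
xs = [2.0, 4.0, 6.0, 8.0]  # illustrative data
mu = sum(xs) / len(xs)
sigma = math.sqrt(sum((x - mu) ** 2 for x in xs) / len(xs))
zs = [(x - mu) / sigma for x in xs]
```

After standardization, the transformed values `zs` have mean $0$ and standard deviation $1$, so the same $"Normal"(0, 1)$ prior scale applies regardless of the variable's original units. ]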
https://github.com/alberto-lazari/unipd-typst-doc
https://raw.githubusercontent.com/alberto-lazari/unipd-typst-doc/main/examples/notes.typ
typst
MIT License
#import "@local/unipd-doc:0.0.1": * #show: notes() #show: unipd-doc( title: [Course], subtitle: [Notes], author: [The author], date: [DD-MM-YY], ) #lecture[1 -- 02/10] = Heading #lorem(50) #lecture[2 -- 05/10] == Heading 2 #lorem(30) / Term: definition #lorem(20) === A formula $ E = m c^2 $
https://github.com/jgm/typst-hs
https://raw.githubusercontent.com/jgm/typst-hs/main/test/typ/math/content-01.typ
typst
Other
// Test tables. $ x := #table(columns: 2)[x][y]/mat(1, 2, 3) = #table[A][B][C] $
https://github.com/qujihan/typst-book-template
https://raw.githubusercontent.com/qujihan/typst-book-template/main/template/parts/figure/lib.typ
typst
#import "code.typ": code #import "pic.typ": pic #import "tbl.typ": tbl #let figure-root-path = "../../../../"
https://github.com/MobtgZhang/sues-thesis-typst
https://raw.githubusercontent.com/MobtgZhang/sues-thesis-typst/main/paper/thesis.typ
typst
MIT License
#import "info.typ":* //------------------------------------------- // 定义一些常见的变量 //AutoFakeBold 常量 #let autoFakeBold_pt = 0.35pt // 定义行距 #let linespacing = 1.5em // 定义字体的大小 #let fontsizedict = ( 初号: 42pt, 小初: 36pt, 一号: 26pt, 小一: 24pt, 二号: 22pt, 小二: 18pt, 三号: 16pt, 小三: 15pt, 四号: 14pt, 中四: 13pt, 小四: 12pt, 五号: 10.5pt, 小五: 9pt, 六号: 7.5pt, 小六: 6.5pt, 七号: 5.5pt, 小七: 5pt, ) // 定义文章中使用到的字体信息 #let fontstypedict = ( 仿宋: ("Times New Roman", "FangSong"), 宋体: ("Times New Roman", "SimSun"), 黑体: ("Times New Roman", "SimHei"), 楷体: ("Times New Roman", "KaiTi"), 代码: ("New Computer Modern Mono", "Times New Roman", "SimSun"), ) #let lengthceil(len, unit: fontsizedict.小四) = calc.ceil(len / unit) * unit // 定义一个计数器 // 章节计数器 #let chaptercounter = counter("chapter") // 脚注计数器 #let footnotecounter = counter(footnote) // 图片计数器 #let imagecounter = counter(figure.where(kind:"figure")) // 代码计数器 #let rawcounter = counter(figure.where(kind:"code")) // 表格计数器 #let tablecounter = counter(figure.where(kind:image)) // 方程计数器 #let equationcounter = counter(math.equation) // 定义三个全局变量,用于标记frontmatter,mainmatter,backmatter #let matter_state = state("matter","none") #let frontmatter() = [ #matter_state.update(matter => "frontmatter") #counter(page).update(1) ] #let mainmatter() = { matter_state.update(matter => "mainmatter") //#pagebreak() // 初始化 chaptercounter.update(1) counter(page).update(0) } #let backmatter() = [ #matter_state.update(matter => "backmatter") #chaptercounter.update(1) #counter(heading).update(0) ] // 定义一个中文的计数函数 #let chinesenumber(num, standalone: false) = if num < 11 { ("零", "一", "二", "三", "四", "五", "六", "七", "八", "九", "十").at(num) } else if num < 100 { if calc.mod(num, 10) == 0 { chinesenumber(calc.floor(num / 10)) + "十" } else if num < 20 and standalone { "十" + chinesenumber(calc.mod(num, 10)) } else { chinesenumber(calc.floor(num / 10)) + "十" + chinesenumber(calc.mod(num, 10)) } } else if num < 1000 { let left = chinesenumber(calc.floor(num / 100)) + "百" if calc.mod(num, 100) == 0 { 
left } else if calc.mod(num, 100) < 10 { left + "零" + chinesenumber(calc.mod(num, 100)) } else { left + chinesenumber(calc.mod(num, 100)) } } else { let left = chinesenumber(calc.floor(num / 1000)) + "千" if calc.mod(num, 1000) == 0 { left } else if calc.mod(num, 1000) < 10 { left + "零" + chinesenumber(calc.mod(num, 1000)) } else if calc.mod(num, 1000) < 100 { left + "零" + chinesenumber(calc.mod(num, 1000)) } else { left + chinesenumber(calc.mod(num, 1000)) } } #let chinesenumbering(..nums, location: none, brackets: false) = locate(loc => { let actual_loc = if location == none { loc } else { location } if matter_state.at(loc) == "mainmatter" { numbering(if brackets { "(1.1)" } else { "1.1" }, ..nums) } else if matter_state.at(loc) == "backmatter" { if nums.pos().len() == 1 { "附录 " + numbering("A.1", ..nums) } else { numbering(if brackets { "(A.1)" } else { "A.1" }, ..nums) } } }) // 目录 #let chinese_outline(title:"目录",depth:none, indent:false) = { set align(center) grid([ #text("目录",fontsizedict.三号,stroke:autoFakeBold_pt,font:fontstypedict.黑体) #v(1em) ]) locate(it => { let elements = query(heading.where(outlined:true).after(it),it) for el in elements { // 跳过没有目录或者目录深度太深的部分 if depth != none and el.level > depth {continue} let maybe_num = if el.numbering != none { if el.numbering == chinesenumbering { chinesenumbering(..counter(heading).at(el.location()),location:el.location()) } else { numbering(el.numbering, ..counter(heading).at(el.location())) } h(0.5em) } let line = { if indent { h(1em * (el.level - 1)) } if el.level == 1 { v(0.5em,weak: true) } if maybe_num != none { style(styles => { let width = measure(maybe_num, styles).width box( width: lengthceil(width), link(el.location(), if el.level == 1 { strong(maybe_num) } else { maybe_num } )) }) } if el.level == 1 { strong(el.body) } else { el.body } // 目录进行加点.处理 // Filler dots box(width: 1fr, h(10pt) + box(width: 1fr, repeat[.]) + h(10pt)) // Page number let footer = 
query(selector(<__footer__>).after(el.location()), el.location()) let page_number = if footer == () { 0 } else { counter(page).at(footer.first().location()).first() } link(el.location(), if el.level == 1 { strong(str(page_number)) } else { str(page_number) }) linebreak() v(-0.2em) } line } }) pagebreak() } // 定义一个全局表示盲审模式和正常模式的变量 #let blind_state = state("blind",false) // 定义一个用于显示中文和英文摘要的函数 #let display_abstract(load_filename,language:"中文") = { if language=="中文" { set text(font:fontstypedict.宋体) par(justify: true, first-line-indent: 2em, leading: linespacing)[ #align(center+top,text(master_chinese_title,size:fontsizedict.三号,font:fontstypedict.黑体,stroke:autoFakeBold_pt,)) #align(center+top,text("摘要",size:fontsizedict.三号,font:fontstypedict.黑体,stroke:autoFakeBold_pt)) #include load_filename #import load_filename: 中文关键词 #linebreak() #set par(first-line-indent: 0em) #text("关键词:", stroke:autoFakeBold_pt,font:fontstypedict.黑体) #中文关键词.join(",") #v(2em) #pagebreak() ] }else if language=="英文" { par(justify: true, first-line-indent: 2em, leading: linespacing)[ #align(center+top,text(master_english_title,size:fontsizedict.三号,font:fontstypedict.黑体,weight: "semibold")) #align(center+top,text("ABSTRACT",size:fontsizedict.三号,font:fontstypedict.黑体,weight: "semibold")) #include load_filename #import load_filename: 英文关键词 #linebreak() #set par(first-line-indent: 0em) #linebreak() #text("KEYWORDS:", weight: "bold") #h(0.5em, weak: true) #英文关键词.join(", ") #v(2em) #pagebreak() ] }else{ } } //------------------------------------------- // 定义硕士学位论文模板 #let sues_thesis_master( outlinedepth:3, blind: false, doc ) = { // 设置盲审状态 blind_state.update(x=>blind) // 设置页面大小,以及页眉页脚模式 set page( "a4", header: locate(loc =>{[ #set text(font:fontstypedict.宋体,size:fontsizedict.五号) #set align(center) #if matter_state.at(loc) == "frontmatter" { } else if matter_state.at(loc) == "mainmatter" { locate(it =>{ text("上海工程技术大学硕士学位论文"+h(1fr)+"第" + chinesenumber(int(chaptercounter.display())) +"章" + 
h(1em)+master_chinese_title) }) v(-0.8em) line(length: 100%,stroke: 2pt + black,) v(-10pt) line(length: 100%,stroke: 1pt + black,) } else if matter_state.at(loc) == "backmatter"{ } else if matter_state.at(loc) == "none"{ } else { } ]}), footer: locate(loc => {[ #set text(font:fontstypedict.宋体,size:fontsizedict.五号) #set align(center) #if matter_state.at(loc) == "mainmatter" or matter_state.at(loc) == "backmatter" { if counter(page).at(loc).first()>0 { v(-0.8em) line(length: 100%,stroke: 1pt + black,) v(-9pt) line(length: 100%,stroke: 2pt + black,) align(center)[#text("第"+str(counter(page).at(loc).first())+"页")] } } else if matter_state.at(loc) == "frontmatter" { align(center)[#numbering("I", counter(page).at(loc).first())] } // 标记一个标签,用于目录计数 #if counter(page).at(loc).first()>0{ label("__footer__") } ]}), ) // 设置文章的行间距和字体的大小 // 定义插入列表的格式 set list(indent: 2em) set enum(indent: 2em) // 定义字体格式 show strong: it => text(font: fontstypedict.黑体,weight: "semibold" ,it.body) show emph: it => text(font: fontstypedict.楷体,style: "italic" ,it.body) show par: set block(spacing: linespacing) show raw: set text(font: fontstypedict.代码) // 设置标题格式 show heading: it => [ #if it.level == 1 { //每章标题前空一行,以三号黑体居中打印。 pagebreak() set align(center) v(1em) set text(fontsizedict.三号,stroke:autoFakeBold_pt,font:fontstypedict.黑体) // “章”下空一行为“节”, "第" + chinesenumber(int(chaptercounter.display())) +"章" + h(1em)+it.body v(1em) if it.numbering != none { chaptercounter.step() } } else { set par(first-line-indent: 0em) // 以四号黑体左起打印。 //“小节”以小四号黑体左起打印。换行后打印论文正文。“节”和“小节”标题的段前、段后各空0.5行。 if it.level == 2 { // 二级标题 v(0.5em) set text(fontsizedict.四号,stroke:autoFakeBold_pt,font:fontstypedict.黑体) strong(counter(heading).display()) h(0.5em) it.body v(0.5em) } else if it.level == 3 { // 三级标题 v(0.5em) set 
text(fontsizedict.小四,stroke:autoFakeBold_pt,font:fontstypedict.黑体) strong(counter(heading).display()) h(0.5em) it.body v(0.5em) } else { // 三级标题以下 v(0.5em) set text(fontsizedict.小四,stroke:autoFakeBold_pt,font:fontstypedict.黑体) strong(counter(heading).display()) h(0.5em) it.body v(0.5em) } } ] // 定义插入的方程格式 set math.equation( numbering: (..nums) => locate(loc => { set text(font:fontstypedict.宋体) if matter_state.at(loc) == "mainmatter"{ numbering("(1-1)",chaptercounter.at(loc).first(),..nums) } else if matter_state.at(loc) == "backmatter" { numbering("(A-1)",chaptercounter.at(loc).first(),..nums) } else { //其他的情况并不做讨论 } }) ) // 论文正文 set align(left + top) // 论文字体大小 set text(fontsizedict.小四, font: fontstypedict.宋体, lang: "zh") set heading(numbering: chinesenumbering) // 正文中文为宋体,非中文为Times New Roman字体,小四,1.5倍行间距,首行缩进2字符。 set text(size:fontsizedict.小四,font:fontstypedict.宋体) par(justify: true, first-line-indent: 2em, leading: linespacing)[ #doc ] }
https://github.com/qujihan/toydb-book
https://raw.githubusercontent.com/qujihan/toydb-book/main/src/chapter2/summary.typ
typst
#import "../../typst-book-template/book.typ": * #let path-prefix = figure-root-path + "src/pics/" == 总结
https://github.com/MALossov/YunMo_Doc
https://raw.githubusercontent.com/MALossov/YunMo_Doc/main/template/template.typ
typst
Apache License 2.0
#import "font.typ": * #let Thesis( // 参考文献bib文件路径 ) = { set page(paper: "a4", margin: ( top: 2.54cm, bottom: 2.54cm, left: 2.5cm, right: 2cm), footer: [ #set align(center) #set text(size: 10pt, baseline: -3pt) #counter(page).display( "1") ], header: align(left)[ // 页眉左侧需要放入 images 文件夹中的图片 Header.png #image("images/Header.png", width: 50mm) ], ) // 标题 include "report_title.typ" // 正文 include "body.typ" // 参考文献 // include "reference.typ" //附录 include "appendix.typ" }
https://github.com/Mouwrice/thesis-typst
https://raw.githubusercontent.com/Mouwrice/thesis-typst/main/future_work.typ
typst
= Future work <future-work> This section lists some ideas that could be interesting to explore in the future. == Increasing signal stability The signal stability has been increased by the method introduced previously, reducing jitter and noise. However, the signal is still not perfect. It would be interesting to explore other methods to increase the signal stability even further. As it stands, the method is based on a simple prediction model combined with an interpolation method between the predicted and the measured value. This can be generalized to a statistical problem where the goal is to predict a state given a previous state, together with the uncertainty of the prediction. One could use a Kalman filter to estimate the next state and its uncertainty. The Kalman filter works as a two-phase process: the first phase is the prediction given the previous state, and the second phase is the update given the measured value @kalman-filter. This would allow making a more informed decision on how to interpolate between the predicted and the measured value. The prediction of every marker is currently independent of the other markers. However, the markers are not independent of each other. It would be interesting to explore methods that consider the dependencies between the markers. Constructing a skeleton model of the human body that takes into account the dependencies and constraints between the markers could be a way to increase the signal stability. For example, the distance between two markers connected by a bone should be constant. This could be used to correct the predicted value of a marker if the distance between the connected markers is not constant. Another example is that the angle between connected markers is limited to a certain range of values. == Depth estimation From the measurements, it was clear that the depth estimation has low accuracy and suffers from major instability. 
Future work could focus on improving the depth estimation. This could be achieved in two ways. The first is to use a different model, or to train the computer vision model to predict the depth more accurately. The second is to use additional sensors to estimate the depth. For example, a depth sensor could be used to measure the depth of the markers. This could be combined with the computer vision model to increase the accuracy of the depth estimation. Alternatively, with a multi-camera setup the depth could be estimated by triangulating the position of the markers across the different camera views. == Real-time application There are various ways to build upon the application. One simple way to improve the application is to replace the currently used MediaPipe model with a better model if one is found. An increase in depth accuracy is certainly welcome, but an increase in performance would also be beneficial. Currently, the application runs at around 30 frames per second, which limits the ability to track fast and complex movements. A faster model would allow for a higher frame rate and thus a more responsive application. Another way to improve the application is by integrating a better prediction model such as the ones mentioned in the previous section. An entirely different application can be built using the same principles. Not only is body pose estimation improving, but hand pose estimation is also becoming more accurate. Similar research can be done to discover the feasibility of an application that requires accurate hand and finger tracking, for example a virtual piano application. Other future research can focus on the usage of body pose estimation in a different context, such as a medical rehabilitation setting. Such a system could help medical professionals analyse the movements of patients. Or it could be used as a tool for patients who need to do exercises at home. 
The system could provide feedback on the correctness of the exercises and give tips on how to improve them.
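Returning to the Kalman filter idea from the signal-stability section above: a minimal, illustrative 1-D sketch of the two-phase predict/update cycle. This is not code from the thesis; the constant-position model and the noise values `q` and `r` are assumptions chosen for the example.

```python
# Minimal 1-D Kalman filter for a single marker coordinate.
# Phase 1 (predict): propagate the previous estimate and grow its
# uncertainty by the process noise q.
# Phase 2 (update): blend in the measurement z, weighted by the
# Kalman gain. Illustrative sketch only.

def kalman_step(x, p, z, q=1e-3, r=1e-1):
    # Predict: constant-position model, so the state carries over.
    x_pred = x
    p_pred = p + q
    # Update: gain k in [0, 1] decides how far to move the estimate
    # toward the measurement z (r is the measurement noise).
    k = p_pred / (p_pred + r)
    x_new = x_pred + k * (z - x_pred)
    p_new = (1 - k) * p_pred
    return x_new, p_new

# Smooth a jittery series of measurements around 1.0.
x, p = 0.0, 1.0
for z in [0.9, 1.1, 1.0, 0.95, 1.05]:
    x, p = kalman_step(x, p, z)
```

The gain `k` plays the same role as the interpolation factor in the method described earlier: the noisier the measurement (larger `r`), the less the estimate is pulled toward it.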
https://github.com/N4tus/uf_algo
https://raw.githubusercontent.com/N4tus/uf_algo/main/types.typ
typst
MIT License
#import "types-for-typst/types_for_typst.typ": * #let TMakeup = TDict(style: TAny, format: TAny) #let AElem(t_value) = TDict(value: t_value, styles: TArray(TDict(fn: TFunction, precedence: TFunction))) #let TElem = AElem(TAny) #let AGroup(t_group) = TDict(group: t_group, start_style: optional(TFunction), end_style: optional(TFunction), use_start: optional(TBoolean), use_end: optional(TBoolean)) #let TGroup = AGroup(t_or(TFunction, TNone)) #let TGroupElem = AElem(TGroup) #let TGroupStart = AGroup(TFunction) #let TGroupEnd = AGroup(TNone) #let TBody = TDict(body: TAny, indent: TFunction, makeup: TFunction)
https://github.com/SkiFire13/master-thesis-presentation
https://raw.githubusercontent.com/SkiFire13/master-thesis-presentation/master/presentation.typ
typst
#import "@preview/fletcher:0.5.1" as fletcher: node, edge #import "@preview/pinit:0.2.0": * #import "@preview/touying:0.3.1": * #import "typst-touying-unipd/unipd.typ": * #import "common.typ": * #import "touying-diagrams.typ": diagram #show: unipd-theme #title-slide( title: [Solving Systems of Fixpoint Equations via Strategy Iteration], subtitle: [Master's degree in Computer Science], authors: [Candidate: <NAME> \ Supervisor: Prof. <NAME>], date: [September 20, 2024], ) #new-section[The problem] #slide[ Formal verification of software is increasingly more important. // Example: sent message is eventually received - *Model checking* for behavioral logics like $mu$-calculus, to prove system properties; - *Abstract interpretation* to compute an approximation of the set of possible states in a program; - Checking *behavioral equivalences*, like bisimilarity; - ... Fixpoints are an essential ingredient in all of them. ] #new-section[Systems of fixpoint equations] #slide[ $ syseq( x_1 &feq_eta_1 &f_1 &(x_1, ..., x_n) \ x_2 &feq_eta_2 &f_2 &(x_1, ..., x_n) \ &#h(0.3em)dots.v \ x_n &feq_eta_n &f_n &(x_1, ..., x_n) \ ) $ #pause - Defined over a complete lattice $L$ - Monotone functions $f_i$ #pause - $eta_i in {mu, nu}$ and can be mixed ] #new-section[$mu$-calculus] #slide[ - Labelled transition system $(bb(S), ->)$ #align(center, diagram( node-stroke: 1pt, spacing: 4em, label-sep: 3pt, node((0, 0), "1", radius: 1em), node((1, 0), "2", radius: 1em), node((2, 0), "3", radius: 1em), edge((0, 0), (1, 0), "-|>", label: "y", bend: 30deg), edge((2, 0), (1, 0), "-|>", label: "y", bend: -30deg), edge((1, 0), (0, 0), "-|>", label: "a", bend: 30deg), edge((1, 0), (2, 0), "-|>", label: "b", bend: -30deg), edge((0, 0), (0, 0), "-|>", label: "a", bend: 130deg), edge((2, 0), (2, 0), "-|>", label: "b", bend: 130deg), edge((1, 0), (1, 0), "-|>", label: "x", bend: 130deg), )) - Modal logic equipped with fixpoint operators $ phi, psi := p | x | phi or psi | phi and psi | underbrace(boxx(A) 
phi | diam(A) phi, "modal operators") | underbrace(eta x. phi, "fixpoint") $ #meanwhile #place(bottom + right, dx: 1em, text(font: "New Computer Modern", size: 19pt)[ \[<NAME>, Kozen\] ]) ] #slide[ - System of fixpoint equations over $2^bb(S)$ #pause - $phi = nu x.#h(5pt) (mu y.#h(5pt) P or diam(A) y) and boxx(A) x #pin("phi")$ #pause #context pinit-point-from( pin-dy: 0pt, offset-dx: 90pt, offset-dy: -20pt, body-dy: -20pt, fill: text.fill, "phi", rect(inset: 0.5em, stroke: text.fill)[$Inv(Even(P))$] ) #pause $ syseq( y &feq_mu P or diam(A) y \ x &feq_nu y and boxx(A) x ) $ #pause - *Solution*: all states satisfying the formulas - If the solution is $(S_y, S_x)$ then $s tack.r.double phi <=> s in S_x$ ] #new-section[Powerset game] #slide[ #v(2em) - Given basis $B_L$, determine whether $b sub s_i$ for $b in B_L$ #pause #let n0 = (n, p, c) => node(name: n, p, c, radius: 1.5em) #let n1 = (n, p, c) => node(name: n, p, c, inset: 1em, shape: fletcher.shapes.rect) #let e = (f, t) => edge(f, t, "-|>") #let priority = (pos, pr) => node((rel: (0, 0.18), to: pos), text(fill: blue, size: 19pt, pr), stroke: none) #let note = (pos, c, to) => (node(pos, text(fill: blue, c), inset: 11pt, stroke: blue, shape: fletcher.shapes.pill), edge(to, "..>", stroke: blue)) #align(center, diagram( node-stroke: 1pt, label-sep: 3pt, n0(<bi>, (0, 0), $[b, i]$), ..note((0, 0.8), $b sub s_i$, <bi>), pause, n1(<X>, (1.2, -0.4), $tup(X)$), n1(<Y>, (1.2, 0.4), $tup(Y)$), e(<bi>, <X>), e(<bi>, <Y>), ..note((0.3, -0.7), [s.t. $b sub f_i (join X)$], (0.5, -0.2)), ..note((1.6, -1.1), $tup(X) = (X_1, .., X_n)$, <X>), pause, n0(<cj>, (2.4, -0.4), $[c, j]$), n0(<di>, (2.4, 0.4), $[d, i]$), e(<X>, <cj>), e(<X>, <di>), e(<Y>, <di>), ..note((2.8, -1.1), [s.t. 
$c in X_j$], (1.8, -0.42)), pause, ..note((3.2, -0.4), text(size: 19pt)[No successor: \ opponent wins], <cj>), pause, n1(<Z>, (3.4, 0.4), $tup(Z)$), e(<di>, <Z>), edge(<Z>, (3.4, 0.8), (0.5, 0.8), <bi>, "-|>", corner-radius: 15pt), pause, node(name: <b0>, (1, 0.05), stroke: none), node(name: <b1>, (3.72, 0.05), stroke: none), node(name: <b2>, (3.72, 0.9), stroke: none), node(name: <b3>, (0.5, 0.9), stroke: none), node(name: <b4>, (-0.3, 0.2), stroke: none), node(name: <b5>, (-0.3, -0.325), stroke: none), node(name: <b6>, (0.13, -0.325), stroke: none), node(name: <b7>, (0.55, 0.05), stroke: none), edge(<b0>, <b1>, <b2>, <b3>, <b4>, <b5>, <b6>, <b7>, <b0>, "--", corner-radius: 20pt, stroke: blue), pause, priority(<bi>, $2$), priority(<cj>, $5$), priority(<di>, $2$), priority(<X>, $0$), priority(<Y>, $0$), priority(<Z>, $0$), ..note((22em, -2.5em), text(size: 19pt)[Highest recurring priority wins: \ even $->$ player 0, odd $->$ player 1], (0.7, 0.91)), )) #meanwhile #place(bottom + right, dx: 1em, text(font: "New Computer Modern", size: 19pt)[ \[Baldan, König, Mika-Michalski, Padoan\] ]) ] #slide[ $ X feq_mu f(X) #h(2em) #block($ f &: &&2^bb(N) -> 2^bb(N) \ f(X) &= &&{0} union X union {x + 2 | x in X} $) $ #pause #let n0 = (n, p, c, ..args) => node(name: n, p, move(dy: -0.4em, c), radius: 1.3em, ..args) #let n1 = (n, p, c, ..args) => node(name: n, p, move(dy: -0.4em, c), width: 3em, height: 2.6em, shape: fletcher.shapes.rect, ..args) #let e = (f, t, ..args) => edge(f, t, "-|>", ..args) #let priority = (pos, pr) => node((rel: (0em, -1.5em), to: pos), text(fill: blue, size: 19pt, [#pr]), stroke: none) #align(center, diagram( node-stroke: 1pt, label-sep: 3pt, n0(<3>, (1, 0), $3$), priority(<3>, 1), pause, n1(<s1>, (2, 0), ${1}$), priority(<s1>, 0), e(<3>, <s1>, bend: 15deg), n1(<s3>, (2, 0.75), ${3}$), priority(<s3>, 0), e(<3>, <s3>, bend: 15deg), e(<s3>, <3>, bend: 15deg), node(name: <d3>, (1, 0.75), $...$, stroke: none), e(<3>, <d3>), pause, n0(<1>, (3, 0), $1$), 
priority(<1>, 1), e(<s1>, <1>, bend: 15deg), e(<1>, <s1>, bend: 15deg), pause, node(name: <d1>, (3, 0.75), $...$, stroke: none), e(<1>, <d1>), pause, n1(<s12>, (4, 0), ${1, 2}$), priority(<s12>, 0), e(<1>, <s12>, bend: 15deg), e(<s12>, <1>, bend: 15deg), pause, n0(<2>, (4, 1.5), $2$), priority(<2>, 1), e(<s12>, <2>, bend: 15deg), e(<2>, <s12>, bend: 15deg), pause, n1(<s0>, (3, 1.5), ${0}$), priority(<s0>, 0), e(<2>, <s0>, bend: -15deg), n1(<s2>, (3, 2.25), ${2}$), priority(<s2>, 0), e(<2>, <s2>, bend: 15deg), e(<s2>, <2>, bend: 15deg), node(name: <d2>, (4, 2.25), $...$, stroke: none), e(<2>, <d2>), pause, n0(<0>, (2, 1.5), $0$), priority(<0>, 1), e(<s0>, <0>, bend: -15deg), e(<0>, <s0>, bend: -15deg), node(name: <d0>, (2, 2.25), $...$, stroke: none), e(<0>, <d0>), pause, n1(<e>, (1, 1.5), $varempty$), priority(<e>, 0), e(<0>, <e>), )) ] #new-section[Selections and symbolic moves] #slide[ - Problem: player 0 has a lot of moves #pause - *Selections*: subset of moves equivalent to the full set #pause - *Symbolic moves*: #pause - compact representation using logic formulas #text(size: 19pt, h(-20pt) + box($ ({a, b}, {c}), ({a, b}, varempty), ({a}, {c}), ({b}, {c}), ({a}, varempty), ({b}, varempty), (varempty, {c}) \ arrow.b.double \ [a, 1] or [b, 1] or [c, 2] $)) #pause - generate a small selection #text(size: 19pt, box(width: 100%, inset: (right: 40pt), $ ({a}, varempty), ({b}, varempty), (varempty, {c}) $)) #pause - allows for simplifications ] #new-section[Parity game algorithms] #slide[ - Two approaches: #pause - *Global* algorithms: solve for every position #pause - *Local* algorithms: solve for some positions #pause #v(1.3em) - Predecessor: LCSFE, based on a local algorithm by Stevens and Stirling ] #new-section[Strategy iteration] #slide[ - *Strategy*: function from vertex to the move to perform - assumes all vertices have at least one successor #pause - A vertex is winning iff the player has a *winning strategy* #pause #v(1em) - Idea: - *Fix* a strategy $phi$ 
for player 0 #pause - Compute an optimal strategy for player 1 *against* $phi$ #pause - Improve $phi$ by looking at the induced plays #pause - Repeat #meanwhile #place(bottom + right, dx: 1em, text(font: "New Computer Modern", size: 19pt)[ \[Vöge, Jurdziński\] ]) ] #slide[ - Criteria: *play profiles* $(w, P, e)$ - $w$, the most relevant vertex of the cycle - $P$, the visited vertices more relevant than $w$ - $e$, the number of vertices visited before $w$ #pause - Order based on how much they are favorable to player 0 #pause - Optimal strategy picks the succ. with the best play profile #pause #v(1.5em) - Issue: global algorithm #meanwhile #place(bottom + right, dx: 1em, text(font: "New Computer Modern", size: 19pt)[ \[Vöge, Jurdziński\] ]) ] #new-section[Local strategy iteration] #slide[ #v(2em) - *Local* algorithm #pause - Find optimal strategy on a *subgame* - game on a subset of vertices #pause - Check if one player can force winning plays in the subgame #pause - Otherwise *expand* the subgame - according to an expansion strategy #meanwhile #place(top + right, dx: 1em, dy: 2em, diagram( node-stroke: 1pt, label-sep: 3pt, pause, node((0, 0), width: 10em, height: 8em, shape: fletcher.shapes.ellipse), node((-0.2, 0.2), radius: 2em, fill: none), node((-0.25, 0.3), $s$, stroke: none), node((-0.31, 0.34), name: <p1>, radius: 2.5pt, fill: black, stroke: none), node((-0.25, 0.05), name: <p2>, radius: 2.5pt, fill: black, stroke: none), edge(<p1>, <p2>, "->", stroke: 1pt, bend: 30deg), pause, node((-0.2, -0.15), name: <p3>, radius: 2.5pt, fill: black, stroke: none), edge(<p2>, <p3>, "..>", stroke: (dash: "dashed"), bend: 30deg), pause, node((-0.17, 0.125), radius: 2.7em, stroke: (dash: "dashed"), fill: none), )) #meanwhile #place(bottom + right, dx: 1em, text(font: "New Computer Modern", size: 19pt)[ \[<NAME>\] ]) ] #new-section[Adapting the algorithm] #slide[ - *Goal*: solve the powerset game using *local strategy iteration* - Challenges caused by the different assumptions 
- Some improvements ] #slide(title: [#h(1em)Challenges])[ - Prevent finite plays (easy) #v(1em) #align(center, diagram( node-stroke: 1pt, spacing: 4em, label-sep: 3pt, node((0, 0), name: <a>, "", radius: 0.7em, shape: fletcher.shapes.rect), node((1, 0), name: <b>, "", radius: 0.7em), node((1, 0.9), name: <c>, "", radius: 0.7em, shape: fletcher.shapes.rect), node((0, 0.9), name: <d>, "", radius: 0.7em), node((2, 0.45), name: <e>, "", radius: 0.7em, shape: fletcher.shapes.rect), edge(<a>, <b>, "-|>"), edge(<b>, <c>, "-|>"), edge(<c>, <d>, "-|>"), edge(<d>, <a>, "-|>"), edge(<b>, <e>, "-|>"), pause, node((3, 0), name: <f>, "", radius: 0.7em, stroke: blue), node((3, 0.9), name: <g>, "", radius: 0.7em, stroke: blue, shape: fletcher.shapes.rect), edge(<e>, <f>, "-|>", stroke: blue), edge(<f>, <g>, "-|>", bend: 30deg, stroke: blue), edge(<g>, <f>, "-|>", bend: 30deg, stroke: blue), )) ] #slide(title: [#h(1em)Challenges])[ - Generalizing subgames to subsets of *edges* #v(1em) #align(center, diagram( node-stroke: 1pt, spacing: 4em, label-sep: 3pt, node((0, 0.5), name: <a>, "", radius: 0.7em), node((0.9, 0), name: <b>, "", radius: 0.7em, shape: fletcher.shapes.rect), node((0.9, 1), name: <c>, "", radius: 0.7em, shape: fletcher.shapes.rect), node((1.8, 0.5), name: <d>, "", radius: 0.7em), edge(<a>, <b>, "-|>", bend: 20deg), edge(<b>, <a>, "-|>", bend: 20deg), edge(<a>, <c>, "-|>", bend: -20deg), edge(<c>, <a>, "-->", bend: -20deg, stroke: blue + 2.5pt, mark-scale: 50%), edge(<d>, <b>, "-->", bend: -20deg, stroke: blue + 2.5pt, mark-scale: 50%), edge(<c>, <d>, "-|>", bend: -20deg), edge(<d>, <c>, "-|>", bend: -20deg), edge(<a>, (-0.7, 0.7), "-->"), edge(<a>, (-0.7, 0.3), "-->"), edge(<b>, (0.2, -0.2), "-->"), edge(<b>, (1.6, -0.2), "-->"), edge(<c>, (0.2, 1.2), "-->"), edge(<c>, (1.6, 1.2), "-->"), edge(<d>, (2.5, 0.7), "-->"), edge(<d>, (2.5, 0.3), "-->"), )) ] #slide(title: [#h(1em)Challenges])[ - Making the symbolic moves generator *lazy* - *Simplification* while iterating 
moves #align(center, diagram( node-stroke: 1pt, spacing: 1.5em, label-sep: 3pt, pause, node((0, 0), name: <or>, $or$, stroke: none), node((-3, 1), $phi_1$, stroke: none), edge(<or>), node((-3, 2), width: 1.7em, height: 1.7em, shape: fletcher.shapes.triangle), node((-0.9, 1), $phi_2$, stroke: none), edge(<or>), node((-0.9, 2), width: 1.7em, height: 1.7em, shape: fletcher.shapes.triangle), node((0.9, 1), $phi_3$, stroke: none), edge(<or>), node((0.9, 2), width: 1.7em, height: 1.7em, shape: fletcher.shapes.triangle), node((3, 1), $phi_4$, stroke: none), edge(<or>), node((3, 2), width: 1.7em, height: 1.7em, shape: fletcher.shapes.triangle), pause, node((-1.45, 1.5), name: <tl>, stroke: none), node((-0.45, 1.5), name: <tr>, stroke: none), node((-1.45, 2.5), name: <bl>, stroke: none), node((-0.45, 2.5), name: <br>, stroke: none), edge(<tl>, <br>, stroke: red + 2pt), edge(<tr>, <bl>, stroke: red + 2pt), )) ] #slide(title: [#h(1em)Improvements])[ Compute *play profiles* when expanding vertices #pause - recomputing them for all vertices is relatively slow #pause - idea: - play profiles for existing vertices are known #pause - the strategy is fixed for all newly expanded vertices #pause - only need to consider the new plays ] #slide(title: [#h(1em)Improvements])[ // TODO: Reword better *Expansion scheme* with upper bound on number of expansions #pause - we want to avoid expanding too many vertices too soon #pause - we want to avoid performing too many expansions #pause - heuristic for how many new vertices in each expansion - increase the number of new vertices exponentially - logarithmic bound on the number of expansions ] #slide(title: [#h(1em)Improvements])[ *Graph simplification* to remove vertices with determined winner #v(1em) #pause #let n0 = (n, p, c) => node(name: n, p, text(fill: blue, [#c]), radius: 1em) #let n1 = (n, p, c) => node(name: n, p, text(fill: blue, [#c]), radius: 1em, shape: fletcher.shapes.rect) #let e = (a, b, ..args) => edge(a, b, "-|>", ..args) 
#align(center, diagram( node-stroke: 1pt, spacing: 2em, label-sep: 3pt, n1(<0>, (0, 0), 0), n0(<1>, (1, 0), 1), n1(<2>, (2, 0), 3), n0(<3>, (3, 0), 2), n0(<4>, (0, 1), 0), n1(<5>, (1, 1), 1), n0(<6>, (2, 1), 1), n1(<7>, (3, 1), 4), e(<4>, <0>), e(<4>, <5>), e(<0>, <1>, shift: 3pt, bend: 10deg), e(<1>, <0>, shift: 3pt, bend: 10deg), e(<5>, <1>), e(<5>, <6>), e(<6>, <2>), e(<2>, <3>, shift: 3pt, bend: 10deg), e(<3>, <2>, shift: 3pt, bend: 10deg), e(<3>, <7>, shift: 3pt, bend: 10deg), e(<7>, <3>, shift: 3pt, bend: 10deg), e(<7>, <6>), node(name: <o0>, (0, -1), $...$, stroke: none), node(name: <o1>, (1, -1), $...$, stroke: none), node(name: <o3>, (3, -1), $...$, stroke: none), edge(<0>, <o0>, "..>"), edge(<1>, <o1>, "..>"), edge(<3>, <o3>, "..>"), pause, node("", enclose: (<2>, <3>, <6>, <7>), inset: 0.6em), pause, node(name: <tl>, (1.45, -0.6), ""), node(name: <tr>, (3.7, -0.6), ""), node(name: <bl>, (1.45, 1.7), ""), node(name: <br>, (3.7, 1.7), ""), edge(<tl>, <br>, stroke: red + 2pt), edge(<bl>, <tr>, stroke: red + 2pt), )) ] #new-section[Implementation] #slide[ - Final product: an implementation in *Rust* - Solver for the powerset game - Translation for $mu$-calculus, bisimilarity and parity games - Improves over the predecessor by an order of magnitude in some test cases and is comparable to the state of the art #align(center, text(size: 19pt, table( columns: (auto,) * 5, align: horizon, inset: (x: 1em), stroke: none, table.header([*\# trans.*], [*mCRL2*], [*AUT generation*], [*Our solver*], [*LCSFE*]), table.hline(), [4], [67.8 ms], [54.7 ms], [132 #us], [65.5 #us], [66], [68.5 ms], [59.2 ms], [212 #us], [195 #us], [2268], [72.0 ms], [117 ms], [2.30 ms], [4.38 ms], [183041], [1.47 s], [2.05 s], [202 ms], [5.90 s], ))) ] #new-section[Future work] #slide[ - Other parity game algorithms (e.g. Parys's quasi-polynomial algorithm) - Translating other problems (e.g. 
quantitative $mu$-calculus) - Better expansion strategy - Alternative data structures for symbolic moves (BDDs) - Integrating abstraction techniques - up-to techniques - approximating infinite domains ] #filled-slide[ Thank you for your attention ]
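As a concrete companion to the running least-fixpoint example from the earlier slide (the equation with f(X) = {0} ∪ X ∪ {x + 2 | x ∈ X}), the least solution can be computed by plain Kleene iteration once the powerset lattice is truncated to a finite universe. A hypothetical Python sketch, not part of the presentation; the bound `N` is an assumption made to keep the lattice finite.

```python
# Kleene iteration for a least fixpoint over the finite powerset
# lattice 2^{0..N}. Since f is monotone, iterating f from the bottom
# element (the empty set) converges to the least fixpoint mu X. f(X).

N = 10  # truncation bound: restrict the naturals to {0, ..., N}
        # so that the iteration is guaranteed to terminate

def f(xs):
    # f(X) = {0} | X | {x + 2 for x in X}, clipped to the universe
    return {0} | xs | {x + 2 for x in xs if x + 2 <= N}

def lfp(func):
    # Iterate from bottom until a fixpoint is reached.
    xs = set()
    while True:
        nxt = func(xs)
        if nxt == xs:
            return xs
        xs = nxt

solution = lfp(f)  # the even numbers 0, 2, ..., N
```

A greatest fixpoint would instead start the iteration from the top element (the full universe), which for this particular `f` yields all of {0, ..., N}.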
https://github.com/Jollywatt/typst-fletcher
https://raw.githubusercontent.com/Jollywatt/typst-fletcher/master/tests/edge-corner/test.typ
typst
MIT License
#set page(width: auto, height: auto, margin: 1em) #import "/src/exports.typ" as fletcher: diagram, node, edge #let around = ( (-1,+1), (+1,+1), (-1,-1), (+1,-1), ) #for dir in (left, right) { pad(1mm, diagram( spacing: 1cm, node((0,0), [#dir]), { for c in around { node(c, $#c$) edge((0,0), c, $f$, marks: ( (inherit: "head", rev: false, pos: 0), (inherit: "head", rev: false, pos: 0.33), (inherit: "head", rev: false, pos: 0.66), (inherit: "head", rev: false, pos: 1), ), "double", corner: dir) } } )) } #for dir in (left, right) { pad(1mm, diagram( // debug: 4, spacing: 1cm, axes: (ltr, btt), node((0,0), [#dir]), { for c in around { node(c, $#c$) edge((0,0), c, $f$, marks: ( (inherit: "head", rev: false, pos: 0), (inherit: "head", rev: false, pos: 0.33), (inherit: "head", rev: false, pos: 0.66), (inherit: "head", rev: false, pos: 1), ), "double", corner: dir) } } )) }
https://github.com/RiccardoTonioloDev/Bachelor-Thesis
https://raw.githubusercontent.com/RiccardoTonioloDev/Bachelor-Thesis/main/config/thesis-config.typ
typst
Other
#import "../config/constants.typ": chapter #let config( myAuthor: "<NAME>", myTitle: "Titolo", myLang: "it", myNumbering: "1", body ) = { // Set the document's basic properties. set document(author: myAuthor, title: myTitle) show math.equation: set text(weight: 400) // LaTeX look (secondo la doc di Typst) set page(margin: 1.1811in, numbering: myNumbering, number-align: center) // set par(leading: 0.55em, first-line-indent: 1.8em, justify: true) set par(leading: 0.75em, justify: true) set text(font: "EB Garamond", size: 12pt, lang: myLang) show raw: set text(font: "JetBrains Mono", size: 9pt, lang: myLang) show heading: set block(above: 1.4em, below: 1em) show heading: set text(weight: "medium") set heading(numbering: (..nums) => nums.pos().map(str).join(".")) show par: set block(spacing: 1.25em) set list(indent: 9pt, body-indent: 9pt) set enum(indent: 9pt, body-indent: 9pt) show ref: set text(fill: blue.darken(70%), weight: "medium") show figure.caption: set text(size: 10pt, font: "Optima") show heading.where(level: 1): it => { stack( spacing: 2em, if it.numbering != none { align(center)[#text(size: 4.5em,fill: rgb(149, 0, 6),weight: "thin")[#counter(heading).display()]] }, align(center)[#text(size:2em,it.body, weight: "thin")], [] ) } body }
https://github.com/massix/cv
https://raw.githubusercontent.com/massix/cv/trunk/main.typ
typst
Other
// vi: tw=80 colorcolumn=80,120,200 ts=2 sw=2 #import "@preview/fontawesome:0.1.1": * #import "@preview/modern-cv:0.2.0": * #import "@preview/splash:0.3.0": xcolor #show: resume.with( author: ( firstname: "Massimo", lastname: "Gengarelli", email: "@GH_SECRET_EMAIL@", phone: "@GH_SECRET_PHONE@", github: "massix", linkedin: "@GH_SECRET_LINKEDIN@", address: "@GH_SECRET_ADDRESS@", positions: ( "Coding Architect", "DevOps Engineer", "Software Developer", "Opensource Passionate", ) ), date: datetime.today().display(), accent_color: xcolor.brick-red ) #let mkGithubLink(repo) = { link("https://github.com/massix/" + repo, fa-icon("github", fa-set: "Brands")) } // The main template uses "Source Sans Pro", for which the license is not // compatible with what I would like to achieve. Let's use Source Sans 3 // instead, which is more open and freely distributable. #set text(font: ("Source Sans 3")) // Start with the keywords = Skills #resume-skill-item("dev", ( "Java", "Typescript", "Javascript", "C/C++", [#fa-heart() Haskell], "Purescript", "Lua", "Nix" )) #resume-skill-item("cloud", ( [#fa-heart() Azure], "Kubernetes", "Openshift", "Docker (swarm)", "Portainer", "OpenFaaS", "Ansible", "Vagrant" )) #resume-skill-item("ops", ( "Terraform", "Bicep", "Pulumi", "GitLab CI", "Bitbucket Pipelines", "GitHub Actions", "Drone CI" )) #resume-skill-item("gnu/linux", ( [#fa-heart() NixOS], "Debian", "RHEL", "Gentoo", "Arch (btw)" )) #resume-skill-item("langs", ( "Italian", "French (fluent)", "English (advanced)", "Quenya (studying)", "Toki Pona" )) // Then a short presentation = About me #resume-item([I am committed to continuous learning and skill enhancement, both as a developer and as a human being. My passion lies in *development*, *open-source* and *cloud architectures*. My curiosity drives me towards new technologies, with a particular *fondness for functional programming* and its principles, keeping a constant eye on scalable and reliable architectures. 
In my spare time I try to be as active as possible in the opensource world, contributing to the projects I personally use. Natural team player, I am *ready to help* in any way I can, and I am not scared of asking for help myself.]) = Professional Experience // CloudNative #resume-entry( title: "Technical Head of CloudNative Tribe", location: "ALTEN SA", date: "Jan 2020 - ongoing", description: [ Coding Architect, Technical Leader ] ) #resume-item[ - #text([Successfully led the development and delivery of 10+ high-risk, high-commitment projects with *Cloud* technologies, in *event-driven* and *CQRS* architectures using mainly: *Java* and *Typescript* as development languages; *Terraform*, *Helm* and *GitlabCI* for the CI/CD and deployment; and finally the *Azure services* (*Kubernetes* and *Event Hub/Kafka* being the most used) for the runtime.]) - #text([*Coordination and animation* of the _CloudNative Tribe_, focusing on development of internal reusable software with the aim of having fun with new technologies in safe environments while learning how to use them in production. Currently developing 6 internal projects in *Haskell*, *Rust*, *Golang* and *Swift*.]) - #text([*1-1 mentoring* for colleagues willing to learn more about CloudNative development and deployment. Successfully tutored a total of 30 students and am currently following 4 consultants in their career paths. 
_They grow so fast_.]) - #text([Delivered 20+ *on-demand practical trainings* for consultants of all levels, with up to 8 participants per session, in *French* or *English* on CloudNative technologies (*Terraform*, *Kubernetes*, *CI/CD*, *GNU/Linux* and *Kafka* being the most requested ones).]) - #text([Achieved and maintained an average of *90% code coverage* for the backend and *70% code coverage* for the frontend for all the projects.]) ] // Orange Djingo Architect #resume-entry( title: "Orange Djingo / DT Magenta", location: "ALTEN SA / Orange France / Deutsche Telekom", date: "Jan 2016 - Jan 2020", description: [ Coding Architect, Senior Software Engineer ] ) #resume-item[ - #text([Part of an *international team* of six architects, leading the technical governance of Djingo, a cloud-based vocal assistant.]) - #text([*Technical Leader* for a scrum team of 7 *Java* and *Python 3.6* developers. Part of a 120-developer *SAFe* organization.]) - #text([*DevOps Engineer* for the scrum team, managing the deployment with *Ansible* and *Gitlab CI* of the 10 $mu$services developed by the team, part of a cloud architecture based on *Microsoft Azure* and *Openshift* and composed of more than 80 $mu$services.]) - #text([Responsible for the technical quality of the delivered products, managed to reach and maintain the goal of *90% code coverage* in our components.]) ] // Orange Misc #resume-entry( title: "R&D and Innovation Projects", location: "ALTEN SA / Orange France", date: "Jul 2014 - Jan 2016", description: [ Developer, DevOps Engineer ] ) #resume-item[ - #text([Member of the virtual Cloud team, a team made up of 20 passionate developers willing to spend some of their spare time working on *modern cloud technologies*, developing proofs of concept. 
Successful development of a *Docker* interceptor in *Python* to implement an RBAC system on top of *Docker Swarm*, hosting different applications on the same cluster with segregation of responsibilities.]) - #text([Single-handedly deployed a *primitive FaaS solution* for an internal project using *Docker*, spawning on-demand containers based on incoming REST APIs intercepted by a *Java/Spring* backend.]) - #text([Took part in the very first steps of the migration from a 200-node Docker Swarm scattered across 3 datacenters to *Kubernetes*.]) - #text([Introduced *Elasticsearch* as an alternative to an internal indexing solution for the #link("https://lemoteur.fr")[lemoteur search engine].]) - #text([Migration of part of KE (Knowledge Engine) from C98 to *C++11*.]) ] // Amadeus #resume-entry( title: "TPF Developer", location: "ALTEN SA / Amadeus", date: "Mar 2012 - Jul 2014", description: [ IBM HLASM Junior Developer ] ) #resume-item[ - #text([Maintenance of the legacy TPF system, based on *High-Level Assembly* (IBM Specification) on a z/OS Mainframe.]) - #text([Migration of 20+ modules from *ASM* to *C*.]) - #text([Worked with the rest of the team on migrating the legacy system to a modern solution based on *GNU/Linux* (SUSE) and *C++*.]) ] // Education = Education #resume-entry( title: "B.S. in Computer Science", location: "Alma Mater Studiorum - Bologna", date: "Nov 2011", description: [ Specialization in Operating Systems' Development and Virtualisation. ] ) #resume-item[ Successfully completed the studies with a final grade of 105/110. - #text([#strong([ShockVM]): #mkGithubLink("shockvm") #emph([ShockVM is a Heterogeneous Online Cluster of KVM Virtual Machines]). The project explored the idea of spawning a cluster of interconnected virtual machines through a Web-based interface, using KVM and VDE as system technologies. 
Fun fact: I love recursive acronyms.]) ] // Projects = Personal Projects #resume-entry( title: "Hwedis", location: github-link("massix/hwedis"), date: "Nov 2023", description: [ Redis cache multiplexer written in Haskell ] ) #resume-item[ Used as the basis for one of the internship programs at ALTEN; the main idea is to intercept calls to a Redis server and distribute information about the availability of objects in the cache, avoiding useless ping-pongs. ] #resume-entry( title: "Purescript Testcontainers", location: github-link("massix/purescript-testcontainers"), date: "Dec 2023", description: [ High-level wrapper for Testcontainers, in Purescript and JS FFI ] ) #resume-item[ The wrapper uses Monads to isolate the execution environment of the containers and to allow developers to easily interact with a running container. The library will soon be published on Pursuit and be available on the official channel. ] #resume-entry( title: "AndiRPG", location: github-link("massix/andirpg"), date: "Jan 2024", description: [ Nethack-inspired game, developed in low-level C. ] ) #resume-item[ A videogame for nostalgic people, developed entirely on my smartphone using Termux, Neovim, clangd and the Android NDK. More of a hobbyist/geek project than something serious. ] #resume-entry( title: "NixOS Contributor and Maintainer", location: github-link("nixos/nixpkgs"), date: "Aug 2023", description: [ Proud member of the NixOS Maintainers. ] ) // Personal Interests = Personal Interests #resume-item[ In no particular order: I love spending time with my beloved partner, our beautiful cat and our friends. I enjoy playing beach volleyball and going to the gym, trying to stay fit despite time moving forward faster than I'd like to admit. I am a big supporter of Rimini Football Club, practically since I was born and probably even some time before. I also sympathize with FC Internazionale and Bologna FC. 
I love boardgames, wargames, videogames and basically everything ending in "games". And, of course, Opensource. ]
https://github.com/Ri0ee/typst-rtu
https://raw.githubusercontent.com/Ri0ee/typst-rtu/master/README.md
markdown
# RTU Typst template To build this project, use the VS Code extension for Typst or download the official executable from <https://github.com/typst/typst>. Compile the project with: ```bash typst compile report.typ ```
https://github.com/typst/packages
https://raw.githubusercontent.com/typst/packages/main/packages/preview/unichar/0.1.0/ucd/block-1BCA0.typ
typst
Apache License 2.0
#let data = ( ("SHORTHAND FORMAT LETTER OVERLAP", "Cf", 0), ("SHORTHAND FORMAT CONTINUING OVERLAP", "Cf", 0), ("SHORTHAND FORMAT DOWN STEP", "Cf", 0), ("SHORTHAND FORMAT UP STEP", "Cf", 0), )
https://github.com/nvarner/typst-lsp
https://raw.githubusercontent.com/nvarner/typst-lsp/master/.github/ISSUE_TEMPLATE/feature_request.md
markdown
MIT License
--- name: "🎁 Feature Request" about: Do you have ideas for new features or improvements? title: '' assignees: '' --- <!-- Hi there! Thank you for submitting a feature request! --> <!-- All the below steps should be completed before submitting your issue. --> - I have searched the [issues](https://github.com/nvarner/typst-lsp/issues) of this repo and believe that this is not a duplicate. - I have searched the [discussions](https://github.com/nvarner/typst-lsp/discussions) and believe that my question is not already covered. ## Feature Request <!-- Now feel free to write your request, and please be as descriptive as possible! --> <!-- Thanks again 🙌 ❤ -->
https://github.com/chrischriscris/Tareas-CI5651-EM2024
https://raw.githubusercontent.com/chrischriscris/Tareas-CI5651-EM2024/main/tarea10/src/main.typ
typst
#import "template.typ": conf, question, pseudocode, GITFRONT_REPO #show: doc => conf("Homework 10: Quantum Algorithms", doc) #question[ You are asked to run a simulation of Shor's algorithm for $N = 21$. To compute the amplitudes that the QFT would yield, you may use the DFT (the classical Discrete Fourier Transform). Keep iterating until one of the following conditions is met: #enum(numbering: "(a)")[ A non-trivial factor of N was found. ][ 10 values of x have already been tried, without success. ] _#underline[Note 1]: At all times, simulate the operations on quantum registers classically (treating such superpositions as a list of values and using a random number generator whenever one of them needs to be collapsed)._ _#underline[Note 2]: You may use an online random number generator or one bundled with a language of your choice._ ][ - We pick a random value for $x$ between 1 and $N-1$: $ x = 8 $ - We define $n$ as a power of 2 greater than or equal to 21: $ n = 32 $ - The simulated registers are: - `r1 = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19]`. - `r2 = [1, 8, 1, 8, 1, 8, 1, 8, 1, 8, 1, 8, 1, 8, 1, 8, 1, 8, 1, 8]` ($x^("r1") mod N$). - We repeat $s = 2 lg(32) = 10$ times: - $"FFT(r1)" =$ `[90, 0, 0, 0, 0, 0, 0, 0, 0, 0, 70, 0, 0, 0, 0, 0, 0, 0, 0, 0]`, the quantum sampling collapses to *0*. - $"FFT(r1)" =$ `[160, 20, 160, 2000000000000014, 160, 20, 160, 2000000000000014, 160, 20, 160, 20, 160, 2000000000000014, 160, 20, 160, 2000000000000014, 160, 20]`, the quantum sampling collapses to *16*. - $"FFT(r1)" =$ `[1800, 2.71e-15, 21.49, 3.74e-15, 4.82e-14, 0, 4.82e-14, 3.74e-15, 21.49, 2.71e-15, 1400, 2.71e-15, 21.49, 3.74e-15, 4.82e-14, 0, 4.82e-14, 3.74e-15, 21.49, 2.71e-15]`, the quantum sampling collapses to *0*. - $"FFT(r1)" =$ `[3200, 400, 3200, 400, 3200, 400, 3200, 400, 3200, 400, 3200, 400, 3200, 400, 3200, 400, 3200, 400, 3200, 400]`, the quantum sampling collapses to *6*. 
- $"FFT(r1)" =$ `[36000, 0, 46, 2.88e-13, 0, 0, 0, 2.49e-13, 3.31e-13, 0, 28000, 0, 3.31e-13, 2.49e-13, 0, 0, 0, 2.88e-13, 46, 0]`, the quantum sampling collapses to *0*. - $"FFT(r1)" =$ `[64000, 7999.99, 64000, 8000, 64000, 8000, 64000, 8000, 64000, 7999.99, 64000, 7999.99, 64000, 8000, 64000, 8000, 64000, 8000, 64000, 7999.99]`, the quantum sampling collapses to *12*. - $"FFT(r1)" =$ `[720000, 2.41e-12, 1.19e-11, 8.67e-13, 5.76e-12, 0, 5.76e-12, 8.67e-13, 1.19e-11, 2.41e-12, 560000, 2.41e-12, 1.19e-11, 8.67e-13, 5.76e-12, 0, 5.76e-12, 8.67e-13, 1.19e-11, 2.41e-12]`, the quantum sampling collapses to *10*. - $"FFT(r1)" =$ `[1280000, 160000, 1280000, 160000, 1280000, 160000, 1280000, 160000, 1280000, 160000, 1280000, 160000, 1280000, 160000, 1280000, 160000, 1280000, 160000, 1280000, 160000]`, the quantum sampling collapses to *6*. - $"FFT(r1)" =$ `[14400000, 0, 0, 0, 0, 0, 0, 0, 0, 0, 11200000, 0, 0, 0, 0, 0, 0, 0, 0, 0]`, the quantum sampling collapses to *0*. - $"FFT(r1)" =$ `[25600000, 3200000, 25600000, 320000000000002, 25600000, 3200000, 25600000, 320000000000002, 25600000, 3200000, 25600000, 3200000, 25600000, 320000000000002, 25600000, 3200000, 25600000, 320000000000002, 25600000, 3200000]`, the quantum sampling collapses to *6*. - Thus, $g = "gcd"(0, 16, 0, 6, 0, 12, 10, 6, 0, 6) = 2$. - $21/2 = 10$ is *even*, so we return $"gcd"(8^5 - 1, 21) = "gcd"(32767, 21) =$ *7*. - A non-trivial factor of 21, *7*, was found in 1 attempt, and we can obtain the other one by computing $21/7 = 3$. We have thus found the prime factors of 21: *3* and *7*. ][ Create a good meme related to some part of the course. It can be, for example, about one of the topics we covered or about the overall experience of the course. Whatever inspires you and strikes you as funny. _Note: Say whether I may share it anonymously in the Telegram group and/or on social media._ ][ #align(center)[ #image("img/meme.png", width: 350pt) Yes, it may be shared. ] ]
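The worked simulation above replaces the quantum Fourier sampling with a classical DFT. Its classical post-processing (order finding, parity check, gcd extraction) can be sketched in Python; this is a minimal sketch in which brute-force order finding stands in for the sampled periods, and the function name is illustrative, not from the original assignment:

```python
import math
import random

def shor_classical(N, max_tries=10, seed=0):
    """Classical sketch of Shor's post-processing for small N:
    pick random bases x, find the multiplicative order r of x mod N
    (brute force here, standing in for quantum period finding),
    and extract a factor via gcd(x**(r//2) - 1, N)."""
    rng = random.Random(seed)
    for _ in range(max_tries):
        x = rng.randrange(2, N)
        g = math.gcd(x, N)
        if g > 1:
            return g  # x already shares a factor with N
        # Order finding: smallest r > 0 such that x**r == 1 (mod N).
        r, y = 1, x % N
        while y != 1:
            y = (y * x) % N
            r += 1
        if r % 2 == 1:
            continue  # odd order: retry with another x
        half = pow(x, r // 2, N)
        if half == N - 1:
            continue  # trivial square root of 1: retry
        # half**2 == 1 (mod N) with half != ±1, so this gcd is non-trivial.
        return math.gcd(half - 1, N)
    return None  # no factor found within max_tries

factor = shor_classical(21)
print(factor, 21 // factor)  # prints the factor pair of 21 (3 and 7, in some order)
```

With a fixed seed the run is deterministic; for $N = 21$ only the bases ${4, 5, 16, 17, 20}$ fail the parity or trivial-root checks, so a factor is found within a few tries.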
https://github.com/iyno-org/Holistic-Clinical-Neurosurgery
https://raw.githubusercontent.com/iyno-org/Holistic-Clinical-Neurosurgery/main/covers/back-cover.typ
typst
#import "@preview/bubble:0.2.1": * #show: bubble.with( date: "", //main-color: "4DA6FF", //set the main color )
https://github.com/MultisampledNight/flow
https://raw.githubusercontent.com/MultisampledNight/flow/main/src/cfg.typ
typst
MIT License
// Configuration via sys.inputs or `--input` on the command line. // Yeah, this doesn't exactly follow the "parse, don't validate" model, // but given Typst's lack of typing for the moment I wasn't exactly feeling like // forcing that paradigm on it. #let _define(name) = { let source = sys.inputs.at(name, default: none) let check(name, actual, matches, expected) = { if matches { return } panic( "input `" + name + "` received invalid value; " + "expected: " + expected + "; " + "actual: `" + actual + "`" ) } ( bool: () => { if source == none { return none } let value = source == "true" check( name, source, source in ("true", "false", none), "either `true`, `false` or left unspecified", ) value }, enum: (..variants) => { let variants = variants.pos() check( name, source, source in variants, "one of `" + variants .filter(var => var != none) .intersperse("`, `") .join() + "`" + if none in variants { " or left unspecified" }, ) source }, string: () => source, ) } #let _default(actual, default) = { if actual != none { actual } else { default } } // If true, assume the final document won't be printed // and some sacrifices for display on a screen can be made. // If false, assume the final document will be printed // and try to be as useful as possible on real paper. #let dev = (_define("dev").bool)() #let dev = _default(dev, false) // What colors to display the document in. // - bow: Black on white // - Default if `dev` is false // - Typical sciency-looking papers // - Very well printable // - wob: White on black // - Same as bow, just with foreground and background swapped // - Essentially the "night mode" option in most PDF viewers // - But without inverting other colors // - duality: Spacy theme that has never been published in full // that I use a lot, personally. 
// - Default if `dev` is true // - **Not quite colorblind-safe** #let theme = (_define("theme").enum)("bow", "wob", "duality", none) #let theme = _default( theme, if dev { "duality" } else { "bow" } ) // The name of the file the note is stored in. // It is trimmed and used as default for the document title // iff you didn't specify anything else in the template. #let filename = (_define("filename").string)() // If true (the default), render the document normally. // If false, skip rendering more costly stuff like the outline and cetz canvases, // which greatly impacts the visual result but is irrelevant for e.g. `typst query`. // Consider specifying `false` when programmatically looking through the documents. #let render = (_define("render").bool)() #let render = _default(render, true)
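The inputs defined above are supplied at compile time through Typst's `--input key=value` flag; a hypothetical invocation (the file name `note.typ` is an assumption) might look like:

```shell
# Dev mode with the white-on-black theme; anything not passed
# falls back to the defaults defined in cfg.typ.
typst compile --input dev=true --input theme=wob note.typ

# Skip costly rendering when only extracting metadata,
# as suggested by the comment on the `render` input:
typst query --input render=false note.typ "heading"
```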
https://github.com/dainbow/MatGos
https://raw.githubusercontent.com/dainbow/MatGos/master/main.typ
typst
#import "conf.typ": * #let title = [ State Exam in Mathematical Analysis ] #show: doc => conf(title, doc) #align(center, text(17pt)[ *#title* ]) #align(center)[ *Disclaimer*: whether or not to trust these notes is up to you ] #epigraph("An exam is a path", "<NAME>") // TODO: Properties of limits, passing to the limit // TODO: Neighbourhood, domain // TODO: Diffeomorphism // TODO: Bounded variation // TODO: Every monotone function is Riemann integrable; mean value theorems for the Riemann integral // TODO: notions of measurable functions, measure, the theorem on convergence of integrals of a monotone sequence // TODO: Include the explicit form of the derivative in the statement of the implicit function theorem // TODO: An example where the Taylor series does not converge to the function itself #include "themes/1.typ" #include "themes/2.typ" #include "themes/3.typ" #include "themes/4.typ" #include "themes/5.typ" #include "themes/6.typ" #include "themes/7.typ" #include "themes/8.typ" #include "themes/9.typ" #include "themes/10.typ" #include "themes/11.typ" #include "themes/12.typ" #include "themes/13.typ" #include "themes/14.typ" #include "themes/15.typ" #include "themes/16.typ" #include "themes/17.typ" #include "themes/18.typ" #include "themes/19.typ" #include "themes/20.typ" #include "themes/21.typ" #include "themes/22.typ" #include "themes/23.typ" #include "themes/24.typ" #include "themes/25.typ" #include "themes/26.typ" #include "themes/27.typ" #include "themes/28.typ" #include "themes/29.typ" #include "themes/30.typ" #include "themes/31.typ" #include "themes/32.typ" #include "themes/33.typ" #include "themes/34.typ" //TODO(mishaglik): The difference between Cauchy's integral theorem and Cauchy's integral formula. //TODO(mishaglik): ИОТОХ, residues at infinity #include "themes/35.typ" #include "themes/36.typ" #include "themes/37.typ" #include "themes/38.typ" #include "themes/definition.typ"
https://github.com/liuguangxi/erdos
https://raw.githubusercontent.com/liuguangxi/erdos/master/Problems/typstdoc/README.md
markdown
# Erdős Unofficial Offline Edition in Typst Collect all problems from [Erdős](https://erdos.sdslabs.co). The current version is **v2024.2**, which includes a total of 317 problems (from #1 to #368; the numbers are **NOT** sequential!). To compile the document, you need to install [`typst`](https://github.com/typst/typst) first, then run ```bash typst compile --font-path fonts erdos_offline.typ erdos_offline.pdf ``` to get a dynamically updated PDF file `erdos_offline.pdf`. The final [PDF file](./erdos_offline_attachment.pdf) was post-processed by a separate tool to add the attachments in the folder `attachments`, since Typst does not yet support attachments in PDF.
https://github.com/Isaac-Fate/booxtyp
https://raw.githubusercontent.com/Isaac-Fate/booxtyp/master/src/theorems/new-theorem-template.typ
typst
Apache License 2.0
#import "../colors.typ": color-schema #import "../counters.typ": theorem-counter /// Create a new theorem-like template. #let new-theorem-template( identifier, theorem-counter: theorem-counter, fill: color-schema.blue.light, stroke: color-schema.blue.primary, ) = (body, title: none) => { // * I wrap the theorem block in `figure` to allow for a label // * This is a hack!!! figure( block( fill: fill, stroke: stroke, radius: 12pt, inset: 12pt, width: 100%, above: 1.5em, )[ // Theorem title #let theorem-title = locate(loc => { // Get heading numbers at current location let heading-numbers = counter(heading).at(loc) // Increment theorem counter theorem-counter.step(level: 3) // Set the theorem title [ #identifier #numbering("1.1", ..theorem-counter.at(loc)) #if title != none [ #h(1pt) (#title) ] ] }) // Display theorem title #text(fill: stroke)[*#theorem-title*] // Set paragraph #set par(first-line-indent: 0pt) // Theorem content #body ], kind: identifier, outlined: false, supplement: identifier, ) }
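As a usage sketch, the factory above can be instantiated once per theorem kind and then called with an optional title; the import path and the theorem body here are assumptions for illustration:

```typst
#import "new-theorem-template.typ": new-theorem-template
#import "../colors.typ": color-schema

// Default blue style:
#let theorem = new-theorem-template("Theorem")

// A separately styled variant, overriding fill and stroke:
#let definition = new-theorem-template(
  "Definition",
  fill: color-schema.blue.light,
  stroke: color-schema.blue.primary,
)

// The trailing content block is the positional `body` argument:
#theorem(title: "Triangle inequality")[
  For all $x, y in RR$, $abs(x + y) <= abs(x) + abs(y)$.
]
```

Because the block is wrapped in a `figure` with `kind: identifier`, a `<label>` can be attached to the call for cross-referencing.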
https://github.com/herbhuang/utdallas-thesis-template-typst
https://raw.githubusercontent.com/herbhuang/utdallas-thesis-template-typst/main/metadata.typ
typst
MIT License
// Enter your thesis data here: #let titleEnglish = "(Title English)" #let titleGerman = "(Title German)" #let degree = "Bachelor" #let program = "Information Systems" #let supervisor = "Prof. Dr. <NAME>" #let advisors = ("<NAME>, M.Sc.",) #let author = "(Author)" #let startDate = datetime(day: 1, month: 1, year: 2024) #let submissionDate = datetime(day: 1, month: 1, year: 2024)
https://github.com/HEIGVD-Experience/docs
https://raw.githubusercontent.com/HEIGVD-Experience/docs/main/_settings/typst/template-te.typ
typst
#let resume(title, name, cols: 2, doc) = { set text(size:7.5pt, font: "Times New Roman") set par(leading: 0.45em) set block(above: 0.5em) show heading.where(level: 2): it => block[ #set text(size: 1.1em) #it ] show heading.where(level: 3): it => [ #set text(size: 1.1em) #set par(leading: 0pt) #set block(above: 0.5em) #it ] show heading.where(level: 4): it => [ #set text(size: 1em) #set par(leading: 0pt) #set block(above: 0.5em) #it ] show par: set pad(top: 0pt) set table(stroke: 0.25pt + black) set page("a4", header: [ #columns(3)[ #set text(size: 12pt) #name #colbreak() #set align(center) #title #colbreak() #set align(right) #datetime.today().display("[day]-[month]-[year]") ] #line(length: 100%) #v(5pt) ], header-ascent: 0%, footer-descent: 20%, columns: cols, margin: (x: 8mm, top: 14mm, bottom: 14mm), numbering: "1/1" ) doc } #let plus = (content) => { set list(marker: block(width: 0.5em)[ #set align(center) #text(green.darken(50%))[*+*] ]) content } #let bad = (content) => { set list(marker: block(width: 0.5em)[ #set align(center) #text(red.darken(50%))[*-*] ]) content }
https://github.com/Dherse/typst-brrr
https://raw.githubusercontent.com/Dherse/typst-brrr/master/samples/conformal_prediction/conformal_prediction.typ
typst
#import "slides.typ": * #show: slide_deck #set list(indent: 1em) #set enum(indent: 1em) #set page(footer: [ #locate(loc => { let k = counter(page).at(loc).first() - 1; let n = counter(page).final(loc).first() - 2; [ #set text(size: 12pt, fill: luma(80)) #set align(right + bottom) #if k > 0 [ #k / #n ] ] }) #v(0.5em) ]) #let env(head, content) = [ #head \ #box(width: 100%, inset: (left: 0.5em))[ #content ] ] #let thm(kind) = (name, proposition) => [ #env[ #smallcaps[#kind] (_#{name}_): ][#proposition] ] #let proof(content) = [ #env[ _Proof sketch:_ ][ #content #v(-1.5em) #align(right)[$qed$] ] ] #let definition = thm[Definition] #let lemma = thm[Lemma] #let theorem = thm[Theorem] #let comment(comment) = [ #set align(center) #set text(size: 28pt) *#comment* ] #let proscons(pros, tail_pros, cons, tail_cons) = [ #columns(2)[ ==== Pros #steplist(start: 0)[#pros] #tail_pros #colbreak() ==== Cons #steplist(start: 0)[#cons] #tail_cons ] ] #let TODO(message) = [ #box(fill: red.lighten(80%), inset: 0.5em, radius: 3pt)[ #env[*#smallcaps[To Do:]*][#message] ] ] #let Var = $op("Var")$ #let ind(x) = $bb(1){#x}$ #let quantile(p, x) = $hat(q)_(#p)(#x)$ #let tquantile(p, x) = $Q_(#p)(#x)$ #let implies = $arrow.r.double.long$ #let impliedby = $arrow.l.double.long$ #let iff = $arrow.l.r.double.long$ #let dist = $upright(d)$ #slide[ #set align(horizon) #set align(center) #set text(size: 54pt) *A Brief Overview of* \ *Conformal Prediction* #set text(size: 30pt) <NAME> \ #set text(size: 20pt) EMAp FGV \ (ex-IMPA) ] #slide[ = The situation #steplist(start: 0)[ - We have models which are fairly good at giving point predictions - ... but they are too complex, so we can't reason about them - They are _so_ complex that we need to treat them mostly as black-boxes - But we _do_ need to have some guarantees on our models - E.g., Are our models accurate? Are our models fair? Are we using our models correctly? // - What then? 
] ] #slide[ = Conformal Prediction A promising approach for getting *probabilistic guarantees* about *arbitrary models*, with fairly *minimal assumptions*. //#step[ // Not very mainstream yet, but not fringe //] #step[ Overview of this presentation: 1. Some preliminaries #v(1fr) 2. Basic Conformal Prediction procedures // Split Conformal Prediction & Jacknife #v(1fr) 3. Choices of non-conformity scores #v(1fr) 4. Understanding our guarantees #v(1fr) // 5. Generalizations of Conformal Prediction // 5. Some practical examples 5. _Bonus:_ Dealing with non-exchangeability #v(1fr) ] ] #slide[ = Before we continue #set align(center) #set align(horizon) #v(-2em) === Disclaimer I'm a CS guy, not a math guy! ] // #slide[ // = A quick refresher // // #definition[Empirical quantile][ // $ quantile(phi.alt, Z_(1:n)) := inf {t in RR : 1/n sum_(i=1)^n ind(Z_i <= t) >= phi.alt}. $ // ] // // #TODO[Image illustrating quantile] // ] #slide[ = Exchangeability #definition[Exchangeability][ Random variables $Z_1, ..., Z_n$ are *exchangeable* if, for any permutation $pi$, $ (Z_1, ..., Z_n) limits(=)^d (Z_(pi(1)), ..., Z_(pi(n))). $ #step[ I.e., that the cumulative joint distribution of $(Z_1, ..., Z_n)$ is symmetric. ] ] #show: step Note that: $ step("iid" &implies "exchangeable") \ step("exchangeable" &implies "identically distributed") \ // step("exchangeable" &implies "identically distributed") $ //#steplist[ // - $ "iid" implies "exchangeable" $ // - $ "exchangeable" implies "identically distributed" $ // // - $ "exchangeable" implies "uncorrelation" $ //] ] #slide[ = Exchangeability #lemma[Exchangeability preserving transformations][ Let $Z_1, ..., Z_n in cal(Z)$ be exchangeable random variables, and $f : cal(Z) -> cal(W)$. $f(Z_1), ..., f(Z_n)$ are also exchangeable random variables. ] #v(1fr) #step[ #lemma[Exchangeability under data splitting][ Let $Z_1, ..., Z_n, ..., Z_(n+m) in cal(Z)$ be a vector of exchangeable random variables. 
For any function $hat(M)$ that depends arbitrarily on $Z_1, ..., Z_n$, $hat(M)(Z_(n+1)), ..., hat(M)(Z_(n+m))$ are also exchangeable random variables. ] ] #v(1fr) #step[ #set align(center) (Think train&validation sets.) ] #v(1fr) ] #slide[ = Exchangeability #lemma[Exchangeability preserving transformations][ Let $Z = (Z_1, ..., Z_n) in cal(Z)^n$ be a vector of exchangeable random variables and $G : cal(Z)^n -> cal(W)^m$ be an arbitrary transformation. Moreover, suppose that, for each permutation $pi_1 : [m] -> [m]$, there exists a permutation $pi_2 : [n] -> [n]$ such that $ pi_1 G(Z) limits(=)^d G(pi_2 Z). $ Then, the transformation $G$ preserves exchangeability. ] ] #slide[ = A very useful lemma #lemma[Quantile lemma, exchangeable case][ Let $Z_1, ..., Z_(n+1)$ be exchangeable random variables. Then, given $phi.alt in (0, 1)$, $ PP[Z_(n+1) <= quantile(phi.alt, Z_(1:n) union {infinity})] >= phi.alt. $ ] #show: step #proof[ $ Z_(n+1) &<= quantile(phi.alt, Z_(1:n) union {infinity}) iff Z_(n+1) <= quantile(phi.alt, Z_(1:n+1)) $ #show: step $ therefore PP[Z_(n+1) <= quantile(phi.alt, Z_(1:n) union {infinity})] &= PP[Z_(n+1) <= quantile(phi.alt, Z_(1:n+1))] step(&>= ceil(phi.alt (n+1)) / (n+1) >= phi.alt. ) $ ] ] #slide[ = How is this useful? // We have $n_"train"$ samples $(X^"train"_i, y^"train"_i)$ and $n_"cal"$ samples $(X^"cal"_i, y^"cal"_i)$. #steplist(start: 0)[ 1. Train a model $hat(M) : cal(X) -> RR$ on your training data 2. Evaluate the _nonconformity scores_ $s_i := |hat(M)(X^"cal"_i) - y^"cal"_i|$ 3. $hat(t)_phi.alt := quantile(phi.alt, s_(1:n_"cal") union {infinity})$ 4. $hat(C)_phi.alt(X) := {y in RR : |hat(M)(X) - y| <= hat(t)_phi.alt}$ ] #step[ *Result:* #v(-2em) $ PP[y^"test" in hat(C)_phi.alt(X^"test")] >= phi.alt. $ ] #v(-0.5em) #show: step $ PP[y^"test" in hat(C)_phi.alt(X^"test")] &= PP[s(X^"test", y^"test") <= hat(t)_phi.alt] \ &step(= PP[s(X^"test", y^"test") <= quantile(phi.alt, s_(1:n_"cal") union {infinity})]) step(>= phi.alt. 
) $ ] #slide[ = Split Conformal Prediction #steplist(start: 0)[ 1. Train a model $hat(M) : cal(X) -> cal(Y)$ on your training data 2. Evaluate the _nonconformity scores_ $s_i := s(X^"cal"_i, y^"cal"_i)$ \ (We used $s(X, y) := |hat(M)(X) - y|$.) 3. $hat(t)_phi.alt := quantile(phi.alt, s_(1:n_"cal") union {infinity})$ 4. $hat(C)_phi.alt(X) := {y in cal(Y) | s(X, y) <= hat(t)_phi.alt}$ ] #step[ #v(0.5fr) *Result:* #v(-2em) $ PP[y^"test" in hat(C)_phi.alt(X^"test")] >= phi.alt. $ ] #v(2fr) ] #slide[ = Remarks on SCP #proscons( [ - No assumptions on the underlying model - No assumptions on the distribution of the data - Easy to implement - Very cheap ], [], [ - Interval sizes depend directly on the choice of nonconformity score function - We need to split the data - We need exchangeability ], v(2fr), ) ] #slide[ = Full conformal prediction Whenever we are making a prediction for a new $X^"test"$, #steplist(start: 0)[ 1. For every $y in cal(Y)$, train models $hat(M)_y$ on $(X^"train"_(1:n), y^"train"_(1:n)) union {(X^"test", y)}$ #box[Let $s_i(y) := |hat(M)_y(X^"train"_i) - y^"train"_i|$ and $t_(1-alpha)(y) := quantile(1-alpha, s_(1:n)(y) union {infinity})$;] 2. $hat(C)_(1-alpha)(X) := {y in cal(Y) : s(X^"test", y^"test") <= t_(1-alpha)}$ ] #step[ #v(0.5fr) *Result:* #v(-2em) $ PP[y^"test" in hat(C)_(1-alpha)(X^"test")] >= 1 - alpha. $ ] #v(2fr) ] #slide[ = Remarks on FCP #proscons( [ - No assumptions on the distribution of the data - No need to split the data // - #strike[Statistical efficiency] ], v(4fr), [ - The underlying model needs to be symmetric - Absurdly expensive - Interval sizes depend not only on the nonconformity score function, _but also on the model_ - We need exchangeability ], [] ) ] #slide[ = The Jackknife+ #steplist(start: 0)[ 1. Train leave-one-out models $hat(M)_(-i)$ for each training data point $i$ 2. Evaluate $s_i := |hat(M)_(-i)(X^"train"_i) - y^"train"_i|$ #box[Let $R^-_i(X) := hat(M)_(-i)(X) - s_i$, $R^+_i(X) := hat(M)_(-i)(X) + s_i$;] 3. 
$hat(C)_(1-alpha)(X) := [quantile(1-alpha, R^-_(1:n_"cal")(X) union {infinity}), quantile(1-alpha, R^+_(1:n_"cal")(X) union {infinity})]$ ] #step[ #v(0.5fr) *Result:* #v(-2em) $ PP[y^"test" in hat(C)_(1-alpha)(X^"test")] >= 1 - 2alpha. $ ] #step[ #set align(center) But we typically get coverage of about $1 - alpha$. ] #v(2fr) ] #slide[ = Remarks on Jackknife+ #proscons( [ - No assumptions on the underlying model - No assumptions on the distribution of the data - No need to split the data ], v(2em), [ - Lower worst-case coverage - Mildly expensive - Interval sizes depend directly on the choice of nonconformity score function - We need exchangeability ], [] // v(2fr), ) ] #slide[ = Cross-conformal prediction #steplist(start: 0)[ 1. Split the data into $K$ disjoint subsets $S_k$ 2. Train models $hat(M)_(-k)$ on each split $k$ 3. Evaluate $s_i := |hat(M)_(-k)(X^"train"_i) - y^"train"_i|$ for $i in S_k$ #box[Let $R^-_i(X) := hat(M)_(-k)(X) - s_i$, $R^+_i(X) := hat(M)_(-k)(X) + s_i$ for $i in S_k$;] 4. $hat(C)_(1-alpha)(X) := [quantile(1-alpha, R^-_(1:n_"cal")(X) union {infinity}), quantile(1-alpha, R^+_(1:n_"cal")(X) union {infinity})]$ ] #step[ #v(0.5fr) *Result:* #v(-2em) $ PP[y^"test" in hat(C)_(1-alpha)(X^"test")] >= 1 - 2alpha. $ ] #step[ #set align(center) But, again, we typically get coverage of about $1 - alpha$. ] #v(1fr) ] #slide[ = Remarks on CV+ #proscons( [ - No assumptions on the underlying model - No assumptions on the distribution of the data - No need to split the data ], v(2em), [ - Lower worst-case coverage - Slightly expensive - Interval sizes depend directly on the choice of nonconformity score function - We need exchangeability ], [] // v(2fr), ) ] #slide[ = Scores for regression We've already considered $s(X, y) = |hat(M)(X) - y|$. #step[However, note:] #steplist(start: 0)[ - $s$ gives predictive intervals that are symmetric around $hat(M)(X)$; - ... and they are of constant size! 
] #v(2fr) ] #slide[ = Overcoming symmetry #step[ *Idea:* use separate quantiles for upper and lower bounds. ] //#step[ // #columns(2)[ // $ s^-(X, y) := min {hat(M)(X) - y, 0} $ // #colbreak() // $ s^+(X, y) := max {hat(M)(X) - y, 0} $ // ] //] #columns(2)[ #step[ $ hat(t^+) := quantile(1 - alpha\/2, s_(1:n_"cal")) $ ] #step[ $ PP[s(X^"test", y^"test") <= hat(t^+)] >= 1-alpha/2 $ ] #colbreak() #step[ $ -hat(t^-) := quantile(1-alpha\/2, -s_(1:n_"cal")) $ ] #step[ $ PP[-s(X^"test", y^"test") <= -hat(t^-)] >= 1 - alpha/2 $ ] #step[ $ PP[hat(t^-) <= s(X^"test", y^"test")] >= 1 - alpha/2 $ ] ] #v(-1em) #step[ $ hat(C)_(1-alpha)(X) := {y in cal(Y) : hat(t^-) <= s(X, y) <= hat(t^+)} $ ] #step[ $ PP[y^"test" in hat(C)_(1-alpha)(X^"test")] = PP[hat(t^-) <= s(X^"test", y^"test") <= hat(t^+)] >= 1 - alpha. $ ] ] #slide[ = Adaptive intervals #step[ Suppose we have some intuitive measure of uncertainty, $u : cal(X) -> RR^+$. ] #step[ We can use //#v(-2em) $ s(X, y) := |hat(M)(X) - y|/u(X). $ ] //#step[ // #TODO[image demonstrating adaptive intervals] //] ] #slide[ = Quantile regression We can use quantile regression to attempt to learn: - $hat(M)^-$ modelling $tquantile(alpha\/2, Y | X)$; and - $hat(M)^+$ modelling $tquantile(1 - alpha\/2, Y | X)$. #step[ However, $PP[hat(M)^-(X^"test") <= y^"test" <= hat(M)^+(X^"test")] >= 1 - alpha$ typically won't hold out of the box... So let's calibrate it with conformal prediction! ] #step[ $ s(X, y) := max {hat(M)^-(X) - y, y - hat(M)^+(X)}. $ ] ] #slide[ = Scores for classification #step[ How about $ s(X, y) := ind(hat(M)(X) = y)? $ ] #step[ Just like $s(X, y) := |hat(M)(X) - y|$, this is not great: ] #steplist(start: 0)[ - Predictive sets can either contain a single $y$ or all $y in cal(Y)$; - ... and even then, they are all of constant size! ] #step[ But, just like with regression, we can do _much_ better! ] ] #slide[ = Logits as scores Suppose we have a classification model outputting logits, $hat(M) : cal(X) -> [0, 1]^K$. 
#step[ We can use $s(X, y) := [hat(M)(X)]_y$. ] #step[ #h(0.25em) But we can do better! ] #step[ If $hat(M)(X)$ were a perfect model of $Y | X$, then we'd take $y in cal(Y)$ greedily just until $sum_y [hat(M)(X)]_y >= 1-alpha$. ] #steplist(start: 0)[ 1. Sort the logits in decreasing order 2. $β <- 0$ 3. For each [sorted] logit $p$ for class $i$: \ 4. #h(1em) $β <- β + p$, add class $i$ to set 5. #h(1em) If $β >= 1 - alpha$, then break ] //#step[ // #v(-0.9em) // #set align(center) // #image("./cumulative-logits-trimmed.png", height: 8em, fit: "contain") //] ] #slide[ = Conformalized Bayes #step[ Suppose we have a Bayesian model $hat(M)(y | X)$. ] #step[ The Bayesian approach would be to use predictive sets $ S(X) = {y in cal(Y) : hat(M)(y | X) > t}, $ with $t$ such that $integral_(y in S(X)) hat(M)(y | X) dif y = 1 - alpha$. ] #step[ For Conformal Prediction, we can just use $s(X, y) := -hat(M)(y | X)$. ] #step[ With some additional (not weak) assumptions, this set has the smallest average size for $1-alpha$ coverage, over data _and_ parameters. ] ] #slide[ = Batch setting Our guarantee: #v(-2em) $ PP[y^"test" in hat(C)_(1-alpha)(X^"test")] >= 1 - alpha $ #step[What about more than one prediction?] #step[ Given exchangeable $X^"test"_1, y^"test"_1, dots$: $ 1/T sum_(t=1)^T ind(y^"test"_t in hat(C)_(1-alpha)(X^"test"_t)) limits(-->)^(T -> infinity) 1 - alpha. $ ] //#step[ // #set align(right) // $implies$ a good way of checking empirically that coverage holds //] #step[ We can also get a concentration measure version if we assume iid data: $ PP[1/T sum_(t=1)^T ind(y^"test"_t in hat(C)_(1-alpha)(X^"test"_t)) >= 1-alpha] >= 1-delta. 
$ ] // show how the standard theorem already solves this with exchangeability // show concentration measure version ] #slide[ = Batch setting Suppose, for $R$ different test sets with $T$ points each, we evaluate $ C_j := 1/T sum_(t=1)^T ind(y^("test",j) in hat(C)_(1-alpha)(X^("test",j))), quad quad quad overline(C) = 1/R sum_j C_j; $ #step[ We will have that $ // EE[overline(C)] &approx 1 - alpha // &= 1 - l/(n+1) // \ sqrt(Var[overline(C)]) &= cal(O)(1/sqrt(R min {n_"cal", T})) step(limits(arrows.rr)^(R -> infinity)_(n_"cal",T -> infinity) 0) $ ] ] #slide[ = Marginal coverage #columns(2)[ $ PP[y in hat(C)_(1-alpha)(X)] >= 1 - alpha $ #colbreak() #step[ #set align(center) *This is marginal coverage!* ] ] #step[ #columns(2)[ $ PP[y in hat(C)_(1-alpha)(X) | X] >= 1 - alpha $ #colbreak() #set align(center) This is conditional coverage. ] ] #step[ I.e., #v(-2em) $ PP[y in hat(C)_(1-alpha)(X) | X = x] >= 1 - alpha quad quad forall x in cal(X) $ ] // #TODO[an image illustrating how marginal coverage fails (probably just a plot of a normal distribution)] // single-sample setting // batch setting // the issues with marginal coverage ] #slide[ = Conditional coverage #step[ Impossible in general :/ // #emoji.face.sad // @article{Barber2019TheLO, // title={The limits of distribution-free conditional predictive inference}, // author={<NAME> and <NAME>{\`e}s and <NAME> and <NAME>}, // journal={Information and Inference: A Journal of the IMA}, // year={2019} //} ] #step[ We can, however, get *class-conditional coverage*: $ &"Given" G_1, ..., G_m "such that" G_1 union.sq dots.c union.sq G_m = cal(X), \ &PP[y in hat(C)_(1-alpha)(X) | X in G_i] >= 1 - alpha, quad quad forall i = 1, ..., m $ ] #steplist(start: 0)[ 1. For each group $G_i$, use standard conformal prediction to get $hat(C)^((i))_(1-alpha)$ 2. Define #v(-1em) $ hat(C)_(1-alpha)(X) := hat(C)^((i))_(1-alpha)(X), quad X in G_i. 
$ ] // TODO result // impossible in general // finite VC dimension case // group-conditional CP & multi-valid CP ] #slide[ = Non-exchangeability Exchangeability is a very strong assumption, actually. What if it is unreasonable? #step[For example:] #steplist(start: 0)[ - Time series - Change points - Autoregressive models - Interactive systems ] ] #slide[ = No clear answer yet This is still an open question, with plenty of literature proposing solutions. #step[ However, it's clear that to be able to deal with arbitrary data we need to operate in an on-line manner. ] #step[ #set align(center) #v(-0.9em) #image("./enbpi.png", height: 10em, fit: "contain") ] ] #slide[ = Theoretical bounds ==== Conformal prediction beyond exchangeability #lemma[Quantile lemma, nonexchangeable&symmetric case][ Let $Z = (Z_1, ..., Z_(n+1))$ be a vector of random variables. Given $phi.alt in (0, 1)$, $ PP[Z_(n+1) <= quantile(phi.alt, Z_(1:n) union {infinity})] >= phi.alt - (sum_(i=1)^n dist_"TV"(Z, Z^((i))))/(n+1), $ // Where $Z^((i)) := (underbrace(#[$Z_1, ..., Z_(i-1)$], Z_(1:i-1)), Z_(n+1), underbrace(#[$Z_(i+1), ..., Z_n$], Z_(i+1:n)), Z_i)$. Where $Z^((i)) := (Z_1, ..., Z_(i-1), Z_(n+1), Z_(i+1), ..., Z_n, Z_i)$. ] #step[ *Note:* if $Z_1, ..., Z_(n+1)$ are exchangeable, then $(sum_(i=1)^n dist_"TV"(Z, Z^(i)))/(n+1) = 0$. ] ] #slide[ = Theoretical bounds ==== Conformal prediction beyond exchangeability #lemma[Quantile lemma, nonexchangeable&weighted case][ Let $Z = (Z_1, ..., Z_(n+1))$ be a vector of random variables, and weights $w_1, ..., w_n in RR_(>0)$. Given $phi.alt in (0, 1)$, $ PP[Z_(n+1) <= quantile(phi.alt, Z_(1:n) union {infinity}\; w_(1:n) union {1})] >= phi.alt - (sum_(i=1)^n w_i dist_"TV"(Z, Z^((i))))/(sum_(i=1)^n w_i +1). $ // Where $Z^((i)) := (Z_1, ..., Z_(i-1), Z_(n+1), Z_(i+1), ..., Z_n, Z_i)$. ] #step[ *Idea:* use higher weights for more recent points. 
] ] //#slide[ // = Special cases // // - beta-mixing [CITE] // #v(1fr) // - strongly-mixing [CITE] // #v(1fr) // - weakly-mixing [CITE] // #v(1fr) //] //#slide[ // = Adaptive Conformal Intervals // // ==== Gibbs-Candès // // *Idea:* if we can't evaluate the coverage gap, let's continuously estimate it // // #step[ // Let's start with $alpha_0 <- alpha$. // ] // // #step[ // Every time we receive a new $(X^"test", y^"test")$, we update // // $ alpha_(t+1) <- alpha_t + gamma (alpha - ind(y^"test" in.not hat(C)_(alpha_t)(X^"test"))). $ // ] // // #step[ // #v(0.5fr) // *Result:* // #v(-3em) // // $ 1/T sum_(t=1)^T ind(y^"test"_t in hat(C)_(1-alpha)(X^"test"_t)) limits(-->)^(T -> infinity)_(a.s.) 1 - alpha $ // ] //] // //#slide[ // = Adaptive Conformal Intervals // // Note that // // $ // alpha_T = alpha_0 + gamma sum_(t=1)^T (alpha - ind(y^"test"_t in.not hat(C)_(1-alpha)(X^"test"_t))) // step(<= 1 + gamma) // $ // // #step[ // $ // 1/T sum_(t=1)^T (alpha - ind(y^"test"_t in.not hat(C)_(1-alpha)(X^"test"_t))) // <= (1 + gamma - alpha_0)/(T gamma) // $ // ] // // #show: step // $ // alpha - 1/T sum_(t=1)^T ind(y^"test"_t in.not hat(C)_(1-alpha)(X^"test"_t)) // <= (1 + gamma - alpha_0)/(T gamma) // step(limits(-->)^(T -> infinity) 0) // $ //] // //#slide[ // = Adaptive Conformal Intervals // // #TODO[ // Tricky choice of $gamma$ // Usually produces many infinite intervals // ] //] //#slide[ // = AggACI // // #TODO[ // - Josse on Gibbs-Candes // - Frustratingly rare in benchmarks due to complexity :\( // ] //] // //#slide[ // = EnbPI // // #TODO[explain] //] // //#slide[ // = MVP // // #TODO[explain] //] #slide[ = Conclusion #steplist(start: 0)[ - Conformal Prediction is a promising approach for distribution-free uncertainty quantification on arbitrary models - Coverage is guaranteed, as long as assumptions hold - Efficiency depends on the nonconformity scores and actual models - Active research on the area, especially regarding non-exchangeability ] #step[Topics I 
didn't cover:] #steplist(start: 0)[ - How to compare CP methods - Deep dive on non-exchangeability and distribution drift - Guarantees of the form $EE[ell(C(X), y)] <= alpha$ - Other uses of CP techniques ] ] // Some more slides explaining other guarantees with exchangeability // //#slide[ // = Can we go further? // // TODO: cite Bates2021 // // Given a loss function $ell : 2^(cal(Y)) times cal(Y) -> cal(Y)$ such that $ell(C, y)$ shrinks as $C$ grows, // // #v(0.5fr) // *Result:* // #v(-2em) // // $ EE[ell(C_phi.alt(x^"test"), y^"test")] <= alpha. $ //] // //#slide[ // = Can we go further? // // There are a couple of other guarantees explored in the literature. // // #TODO[ // - Out-of-distribution detection // - My unpublished work // ] //]
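The greedy set-construction algorithm from the classification slides, together with the standard split-conformal calibration of the threshold, can be summarized in a short NumPy sketch (a minimal illustration; function names are my own, and the `(n + 1)` quantile correction follows the usual split-conformal recipe):

```python
import numpy as np

def aps_score(probs, y):
    # Nonconformity score from the slides: cumulative probability mass
    # accumulated (in decreasing order) up to and including class y.
    order = np.argsort(-probs)
    cum = np.cumsum(probs[order])
    rank = int(np.where(order == y)[0][0])
    return cum[rank]

def calibrate_threshold(cal_probs, cal_labels, alpha):
    # Conformal quantile of the calibration scores, with the (n + 1)
    # finite-sample correction for 1 - alpha coverage.
    n = len(cal_labels)
    scores = np.array([aps_score(p, y) for p, y in zip(cal_probs, cal_labels)])
    level = min(np.ceil((n + 1) * (1 - alpha)) / n, 1.0)
    return np.quantile(scores, level, method="higher")

def prediction_set(probs, threshold):
    # Steps 1-5 of the slide: add classes in decreasing probability
    # until the cumulative mass reaches the calibrated threshold.
    order = np.argsort(-probs)
    cum = np.cumsum(probs[order])
    k = int(np.searchsorted(cum, threshold)) + 1
    return set(order[:min(k, len(probs))].tolist())
```

Coverage then follows from exchangeability of the calibration and test scores, exactly as in the quantile lemma above.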
https://github.com/Enter-tainer/typst-preview
https://raw.githubusercontent.com/Enter-tainer/typst-preview/main/docs/templates/gh-page.typ
typst
MIT License
// This is important for typst-book to produce a responsive layout // and multiple targets. #import "@preview/book:0.2.3": get-page-width, target, is-web-target, is-pdf-target, plain-text #let page-width = get-page-width() #let is-pdf-target = is-pdf-target() #let is-web-target = is-web-target() // todo: move theme style parser to another lib file #let theme-target = if target.contains("-") { target.split("-").at(1) } else { "light" } #let theme-style = toml("theme-style.toml").at(theme-target) #let is-dark-theme = theme-style.at("color-scheme") == "dark" #let is-light-theme = not is-dark-theme #let main-color = rgb(theme-style.at("main-color")) #let dash-color = rgb(theme-style.at("dash-color")) #let main-font = ( "Charter", "Source Han Serif SC", "Source Han Serif TC", // typst-book's embedded font "Linux Libertine", ) #let code-font = ( "BlexMono Nerd Font Mono", // typst-book's embedded font "DejaVu Sans Mono", ) // todo: move code theme parser to another lib file #let code-theme-file = theme-style.at("code-theme") #let code-extra-colors = if code-theme-file.len() > 0 { let data = xml(theme-style.at("code-theme")).first() let find-child(elem, tag) = { elem.children.find(e => "tag" in e and e.tag == tag) } let find-kv(elem, key, tag) = { let idx = elem.children.position(e => "tag" in e and e.tag == "key" and e.children.first() == key) elem.children.slice(idx).find(e => "tag" in e and e.tag == tag) } let plist-dict = find-child(data, "dict") let plist-array = find-child(plist-dict, "array") let theme-setting = find-child(plist-array, "dict") let theme-setting-items = find-kv(theme-setting, "settings", "dict") let background-setting = find-kv(theme-setting-items, "background", "string") let foreground-setting = find-kv(theme-setting-items, "foreground", "string") ( bg: rgb(background-setting.children.first()), fg: rgb(foreground-setting.children.first()), ) } else { ( bg: rgb(239, 241, 243), fg: none, ) } #let make-unique-label(it, disambiguator: 1) = label({ let 
k = plain-text(it).trim() if disambiguator > 1 { k + "_d" + str(disambiguator) } else { k } }) #let heading-reference(it, d: 1) = make-unique-label(it.body, disambiguator: d) // The project function defines how your document looks. // It takes your content and some metadata and formats it. // Go ahead and customize it to your liking! #let project(title: "Typst Book", authors: (), body) = { // set basic document metadata set document(author: authors, title: title) if not is-pdf-target // set web/pdf page properties set page( numbering: none, number-align: center, width: page-width, ) // remove margins for web target set page( margin: ( // reserved beautiful top margin top: 20pt, // reserved for our heading style. // If you apply a different heading style, you may remove it. left: 20pt, // Typst is setting the page's bottom to the baseline of the last line of text. So bad :(. bottom: 0.5em, // remove rest margins. rest: 0pt, ), // for a website, we don't need pagination. height: auto, ) if is-web-target; // set text style set text(font: main-font, size: 16pt, fill: main-color, lang: "en") let ld = state("label-disambiguator", (:)) let update-ld(k) = ld.update(it => { it.insert(k, it.at(k, default: 0) + 1); it }) let get-ld(loc, k) = make-unique-label(k, disambiguator: ld.at(loc).at(k)) // render a dash to hint headings instead of bolding it. 
show heading : set text(weight: "regular") if is-web-target show heading : it => { it if is-web-target { let title = plain-text(it.body).trim(); update-ld(title) locate(loc => { let dest = get-ld(loc, title); style(styles => { let h = measure(it.body, styles).height; place(left, dx: -20pt, dy: -h - 12pt, [ #set text(fill: dash-color) #link(loc)[\#] #dest ]) }) }); } } // link setting show link : set text(fill: dash-color) // math setting show math.equation: set text(weight: 400) // code block setting show raw: it => { set text(font: code-font) if "block" in it.fields() and it.block { rect( width: 100%, inset: (x: 4pt, y: 5pt), radius: 4pt, fill: code-extra-colors.at("bg"), [ #set text(fill: code-extra-colors.at("fg")) if code-extra-colors.at("fg") != none #set par(justify: false) #place(right, text(luma(110), it.lang)) #it ], ) } else { it } } // Main body. set par(justify: true) body } #let part-style = heading
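For reference, the tmTheme color extraction implemented above with Typst's `xml` reader can be sketched in Python using the standard `plistlib` module, since tmTheme files are XML property lists (a hypothetical helper mirroring the `find-kv` walk; the dictionary layout assumed here is the conventional tmTheme structure):

```python
import plistlib

def theme_colors(path):
    # tmTheme files are XML plists; the first entry of the top-level
    # "settings" array holds the global settings dict, whose
    # "background"/"foreground" keys are what the Typst code extracts.
    with open(path, "rb") as f:
        data = plistlib.load(f)
    global_settings = data["settings"][0]["settings"]
    return global_settings.get("background"), global_settings.get("foreground")
```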
https://github.com/adam-zhang-lcps/papers
https://raw.githubusercontent.com/adam-zhang-lcps/papers/main/civil-war-q4-pba.typ
typst
Creative Commons Attribution Share Alike 4.0 International
#set document( title: "Civil War Sectionalism PBA", author: "<NAME>", date: datetime.today(), ) #set page(paper: "us-letter") #set text(font: "Liberation Serif", size: 11pt) #set par(first-line-indent: 0.25in) #show par: set block(spacing: 0.65em) One economic cause of sectionalism during the antebellum period was the differing economies of the North and the South. The North focused primarily on manufacturing and industry, while the South instead emphasized traditional agriculture. An example of sectionalism fueled by this divide was the publication of Hinton Helper's book _The Impending Crisis of the South_. Helper argued against the institution of slavery in the South as a cause of economic stagnation, limiting the region's ability to grow @Helper1857ImpendingCrisis. This book was highly controversial after it was published, as it was a Southerner, Helper, arguing for abolition, rather than against it, while attempting to appeal to the South's rational self-interest. One political cause of sectionalism during the antebellum period was the 1857 _<NAME> v. Sandford_ Supreme Court decision. This decision was in response to a lawsuit from a slave arguing he should be freed because he had lived in a free state. The court instead asserted that slaves were not citizens, and thus unable to sue; additionally, since slaves were considered property, they were protected under the Constitution, and thus the notion of "free states" versus "slave states" was unconstitutional, as the government cannot take away citizens' property (without just compensation) @DredScott1857. One societal cause of sectionalism during the antebellum period was the issue of slavery and abolition. This is exemplified by <NAME> Stowe's _Uncle Tom's Cabin_, in which Stowe portrays the horrors of slave life @Stowe1852UncleTomsCabin. This heavily fueled the abolitionist movement and further divided the nation over slavery. A modern cause of sectionalism is the current harsh political divide in the United States.
In a survey from 2017, the Pew Research Center showed that the division between political parties manifests itself in large-scale disagreements about current issues, such as immigration, African American rights, LGBTQ+ rights, poverty, and military strength @PewResearchCenter2017PartisanDivide. This is concerning, as it mirrors the antebellum pattern of harsh partisan divides over the pressing issues of the day (then, slavery); however, in contrast, the antebellum divides were primarily a result of economic differences and geography, whereas modern divides are rooted more in national party identity and less in geography. // APA or MLA are both allowed, but the assignment also wants an MLA-style // bibliography title, but single-spaced citations… using APA citations because // the MLA formatting right now is broken and I don't feel like fixing it. #show bibliography: it => { show heading: it => { set align(center) set text(weight: "regular", size: 11pt) it } set par(first-line-indent: 0in) pagebreak() it } #bibliography("refs.bib", title: [Works Cited], style: "apa")
https://github.com/ClazyChen/Table-Tennis-Rankings
https://raw.githubusercontent.com/ClazyChen/Table-Tennis-Rankings/main/introduction.typ
typst
#set text(font: ("Times New Roman", "NSimSun")) = Table Tennis Ranking Calculation #v(1em) #h(2em) As is known to all, the ITTF revised the international ranking rules for table tennis on January 1, 2018. The rules have been controversial since their launch, as the calculation method is overly complicated and does not conform to common sense. Under the rules, players who participate in more competitions earn more points, rather than the better players. Some players have to enter a large number of competitions just to maintain their world ranking, which means they lose the opportunity to rest and face an increased risk of injury. In addition, players' world rankings do not reflect their true level: under the revised ITTF rankings, the probability of the higher-ranked player defeating the lower-ranked player is only about 66%. #h(2em) This project analyzes the data publicly available on the ITTF website, uses the ELO algorithm to calculate ratings, and makes the following modifications based on the specific characteristics of table tennis: + The coefficient is adjusted according to the level of the competition (see Table 1). These coefficients are my own estimates, with no formal basis. #figure( caption: "The coefficient according to the level of the competition", table( columns: 3, [Level], [Competition Types], [Coefficient], [1], [Olympic], [2.5], [2], [World Championship], [2], [3], [World Cup], [1.6], [4], [WTT Finals], [1.4], [5], [WTT Champions\\WTT Grand Smash\\T2 Diamond], [1.2], [6], [WTT Star Contender\\World Tour (Platinum)], [1.1], [7], [WTT Contender\\Challenge\\World Tour], [1.0], [8], [WTT Feeder\\Continental events], [0.8], [9], [Olympic Qualification\\Regional events], [0.5] ) ) Except for the Olympic Games, qualifying matches in other competitions receive an additional 0.8 factor. #v(1em) + A factor is applied according to the game score (see Table 2), whether the match is best of seven, best of five, etc.
It is based on the results of a binomial-distribution hypothesis test. #figure( caption: "The score change factor", table( columns: 5, [Win\\Lose], [0], [1], [2], [3], [4], [31/32], [57/64], [99/128], [163/256], [3], [15/16], [13/16], [21/32], [], [2], [7/8], [11/16], [], [], [1], [3/4], [], [], [] ) ) #v(1em) + Center-return correction. Because table tennis players compete frequently, the ELO points of top players would rise without bound if left uncorrected, and even if they suffered consecutive losses before retiring, their ELO points could not fall back to their actual level. Therefore, an additional correction has been added: the higher a player's points, the lower the rate of increase when they win (following a logistic distribution), and the higher the rate of decrease when they lose. This correction is independent of the opponent's ELO points. #v(1em) + New-face correction. Players who participate in international competitions for the first time always start from an initial score of 1500 and work their way up the ELO system. High-level newcomers (e.g. <NAME>) would otherwise remain scored below their actual level for an extended period. Therefore, when a newcomer beats an opponent with a high ELO score, they approach the opponent's score proportionally, rather than receiving only the points provided by the ELO algorithm. As the number of matches played increases, this correction decays exponentially. #v(1em) + Dominance correction. Top players (e.g. <NAME>) often hold very high scores, so losing the occasional match has relatively little impact on them. In order to better measure the dominance of top players, when a high-scoring player loses to a low-scoring player, the deducted score is higher. // This correction will further increase when they do not participate in competitions for a long time. #v(1em) #h(2em) The above corrections have proven effective in experiments.
My experiment included matches between January 1, 2018 and December 10, 2023. Only players who have played at least five matches in ITTF adult events are included in the statistics, which cover 22,253 women's singles matches and 27,334 men's singles matches. With the corrections, the winning rate of higher-ranked players against lower-ranked players is about 75% (76.28% for women's singles and 73.91% for men's singles), significantly higher than the 66% of the ITTF Rankings. #v(1em) A player's singles ranking is hidden after one year without participation in ITTF events. This setting may cause some retired players (e.g. ZHANG Yining) to still appear in the rankings. Because it is uncertain whether they have retired, players who meet both of the following conditions are marked in gray: + No participation in ITTF events during the year before the ranking date. + No participation in ITTF events within one year after the ranking date. #v(1em) This model lacks a correction for doubles, so the doubles rankings have no practical significance and a low predictive success rate (68.56% for women's doubles, 66.75% for mixed doubles, and 65.81% for men's doubles, although this is still probably more meaningful than the ITTF ranking). Therefore, this repository does not provide doubles rankings. #v(3em) The world ranking as of February 25th (the end of the Busan WTTC Finals) is shown in the attached table. You can see the complete top-128 rankings in _MS-latest.typ_ and _WS-latest.typ_. #v(3em) Past world rankings (once a month, starting from January 2004, with statistics taken on the 1st of each month) can be found in this repository, with the top 128 players in both women's singles and men's singles in each ranking. In early rankings, due to insufficient convergence of ELO scores, there may be significant fluctuations. Due to my limited knowledge, most athlete names have not been translated into Chinese.
The Chinese translation table can be found in this repository. For players whose country/region has changed, it is difficult for me to trace the time of their change. All tables show the country/region they represented at the time of their last ITTF competition. #v(3em) Thanks to ITTF/WTT, Python, Typst. Special thanks to Coach EmRatThich. The model proposed by this user provided inspiration for my model. #pagebreak() #set text(font: ("Courier New", "NSimSun")) #figure( caption: "Women's Singles (1 - 32)", table( columns: 4, [Ranking], [Player], [Country/Region], [Rating], [1], [SUN Yingsha], [CHN], [3841], [2], [CHEN Meng], [CHN], [3581], [3], [WANG Manyu], [CHN], [3567], [4], [HAYATA Hina], [JPN], [3482], [5], [CHEN Xingtong], [CHN], [3466], [6], [WANG Yidi], [CHN], [3441], [7], [HIRANO Miu], [JPN], [3404], [8], [CHENG I-Ching], [TPE], [3403], [9], [HE Zhuojia], [CHN], [3372], [10], [ITO Mima], [JPN], [3356], [11], [QIAN Tianyi], [CHN], [3356], [12], [ZHANG Rui], [CHN], [3339], [13], [KIHARA Miyuu], [JPN], [3333], [14], [JEON Jihee], [KOR], [3321], [15], [KUAI Man], [CHN], [3318], [16], [FAN Siqi], [CHN], [3314], [17], [SZOCS Bernadette], [ROU], [3312], [18], [<NAME>], [JPN], [3309], [19], [ISHIKAWA Kasumi], [JPN], [3266], [20], [MITTELHAM Nina], [GER], [3264], [21], [<NAME>], [CHN], [3264], [22], [<NAME>], [GER], [3250], [23], [LIU Weishan], [CHN], [3239], [24], [CHEN Yi], [CHN], [3232], [25], [YANG Xiaoxin], [MON], [3222], [26], [<NAME>], [JPN], [3219], [27], [NAGASAKI Miyu], [JPN], [3193], [28], [DIAZ Adriana], [PUR], [3187], [29], [JOO Cheonhui], [KOR], [3161], [30], [<NAME>], [JPN], [3160], [31], [POLCANOVA Sofia], [AUT], [3152], [32], [ANDO Minami], [JPN], [3149], ) ) #pagebreak() #figure( caption: "Men's Singles (1 - 32)", table( columns: 4, [Ranking], [Player], [Country/Region], [Rating], [1], [<NAME>], [CHN], [3754], [2], [<NAME>], [CHN], [3735], [3], [<NAME>], [CHN], [3546], [4], [<NAME>], [CHN], [3520], [5], [<NAME>], [FRA], [3516], [6], [<NAME>], 
[CHN], [3512], [7], [<NAME>], [TPE], [3450], [8], [<NAME>], [JPN], [3405], [9], [<NAME>], [KOR], [3395], [10], [<NAME>], [CHN], [3393], [11], [<NAME>], [CHN], [3370], [12], [<NAME>], [BRA], [3366], [13], [<NAME>], [GER], [3339], [14], [TANAKA Yuta], [JPN], [3333], [15], [TOGAMI Shunsuke], [JPN], [3328], [16], [<NAME> Su], [KOR], [3314], [17], [QIU Dang], [GER], [3313], [18], [<NAME>onghoon], [KOR], [3311], [19], [<NAME>], [SLO], [3306], [20], [<NAME>], [POR], [3305], [21], [<NAME>], [SWE], [3304], [22], [GERASSIMENKO Kirill], [KAZ], [3281], [23], [W<NAME>], [HKG], [3258], [24], [XIANG Peng], [CHN], [3251], [25], [<NAME>], [DEN], [3249], [26], [OVTCHAROV Dimitrij], [GER], [3248], [27], [MATSUSHIMA Sora], [JPN], [3247], [28], [SUN Wen], [CHN], [3247], [29], [<NAME>], [GER], [3225], [30], [<NAME>], [FRA], [3219], [31], [OH Junsung], [KOR], [3214], [32], [<NAME>], [CHN], [3212], ) )
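As a rough illustration of the base update described above — standard ELO scaled by the competition-level coefficient (Table 1) and the game-score factor (Table 2) — here is a minimal Python sketch. The base step `k = 32` is my own illustrative constant, and the center-return, new-face, and dominance corrections are deliberately omitted:

```python
def expected_score(r_a, r_b):
    # Standard ELO win expectancy of player A against player B.
    return 1.0 / (1.0 + 10.0 ** ((r_b - r_a) / 400.0))

def update(r_winner, r_loser, level_coeff=1.0, score_factor=1.0, k=32.0):
    # One match update: the base ELO step, scaled by the
    # competition-level coefficient (Table 1) and the game-score
    # factor (Table 2). k = 32 is an assumed constant, not taken
    # from the document.
    delta = k * level_coeff * score_factor * (1.0 - expected_score(r_winner, r_loser))
    return r_winner + delta, r_loser - delta
```

For example, two equally rated players at 1500 exchange 16 points in an ordinary match, and 40 points in an Olympic match (coefficient 2.5).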