https://github.com/bradmartin333/TypstNotes
https://raw.githubusercontent.com/bradmartin333/TypstNotes/main/sections/test2.typ
typst
= Test2 A second success!
https://github.com/morel-olivier/template-typst
https://raw.githubusercontent.com/morel-olivier/template-typst/master/cheatsheet/main.typ
typst
#import "conf.typ": conf #show: doc => conf( fontSize: 12pt, title: "Mon Document", numColumnns: 4, color: true, doc ) = Introduction Toujours mettre des unités avec les formules !!! #grid( columns: (1fr, auto), rows: (auto), [$a = F / m$], [[m/s^2]] ) #lorem(80) == Sous-section 1 #lorem(100) === Sous-sous-section 1 #lorem(100) ==== Sous-sous-sous-section 1 #lorem(100) = Conclusion #lorem(100)
https://github.com/typst/packages
https://raw.githubusercontent.com/typst/packages/main/packages/preview/unichar/0.1.0/ucd/block-FE30.typ
typst
Apache License 2.0
#let data = ( ("PRESENTATION FORM FOR VERTICAL TWO DOT LEADER", "Po", 0), ("PRESENTATION FORM FOR VERTICAL EM DASH", "Pd", 0), ("PRESENTATION FORM FOR VERTICAL EN DASH", "Pd", 0), ("PRESENTATION FORM FOR VERTICAL LOW LINE", "Pc", 0), ("PRESENTATION FORM FOR VERTICAL WAVY LOW LINE", "Pc", 0), ("PRESENTATION FORM FOR VERTICAL LEFT PARENTHESIS", "Ps", 0), ("PRESENTATION FORM FOR VERTICAL RIGHT PARENTHESIS", "Pe", 0), ("PRESENTATION FORM FOR VERTICAL LEFT CURLY BRACKET", "Ps", 0), ("PRESENTATION FORM FOR VERTICAL RIGHT CURLY BRACKET", "Pe", 0), ("PRESENTATION FORM FOR VERTICAL LEFT TORTOISE SHELL BRACKET", "Ps", 0), ("PRESENTATION FORM FOR VERTICAL RIGHT TORTOISE SHELL BRACKET", "Pe", 0), ("PRESENTATION FORM FOR VERTICAL LEFT BLACK LENTICULAR BRACKET", "Ps", 0), ("PRESENTATION FORM FOR VERTICAL RIGHT BLACK LENTICULAR BRACKET", "Pe", 0), ("PRESENTATION FORM FOR VERTICAL LEFT DOUBLE ANGLE BRACKET", "Ps", 0), ("PRESENTATION FORM FOR VERTICAL RIGHT DOUBLE ANGLE BRACKET", "Pe", 0), ("PRESENTATION FORM FOR VERTICAL LEFT ANGLE BRACKET", "Ps", 0), ("PRESENTATION FORM FOR VERTICAL RIGHT ANGLE BRACKET", "Pe", 0), ("PRESENTATION FORM FOR VERTICAL LEFT CORNER BRACKET", "Ps", 0), ("PRESENTATION FORM FOR VERTICAL RIGHT CORNER BRACKET", "Pe", 0), ("PRESENTATION FORM FOR VERTICAL LEFT WHITE CORNER BRACKET", "Ps", 0), ("PRESENTATION FORM FOR VERTICAL RIGHT WHITE CORNER BRACKET", "Pe", 0), ("SESAME DOT", "Po", 0), ("WHITE SESAME DOT", "Po", 0), ("PRESENTATION FORM FOR VERTICAL LEFT SQUARE BRACKET", "Ps", 0), ("PRESENTATION FORM FOR VERTICAL RIGHT SQUARE BRACKET", "Pe", 0), ("DASHED OVERLINE", "Po", 0), ("CENTRELINE OVERLINE", "Po", 0), ("WAVY OVERLINE", "Po", 0), ("DOUBLE WAVY OVERLINE", "Po", 0), ("DASHED LOW LINE", "Pc", 0), ("CENTRELINE LOW LINE", "Pc", 0), ("WAVY LOW LINE", "Pc", 0), )
https://github.com/piepert/philodidaktik-hro-phf-ifp
https://raw.githubusercontent.com/piepert/philodidaktik-hro-phf-ifp/main/src/parts/spue/planung/main.typ
typst
Other
#import "/src/template.typ": * = Planung #author[<NAME>] #include "sequenzplanung.typ" #include "kurzentwurf.typ" #include "langentwurf.typ" #include "tipps.typ" #include "panik.typ"
https://github.com/typst/packages
https://raw.githubusercontent.com/typst/packages/main/packages/preview/bookletic/0.1.0/example/example.typ
typst
Apache License 2.0
#import "@preview/bookletic:0.1.0": sig #set document(author: "<NAME>", title: "Bookletic Example") //Barebones example #let my-eight-pages = ( [ = Cover Page ], [ = Page One #lorem(80) ], [ = Page Two #lorem(80) ], [ = Page Three #lorem(80) ], [ = Page Four #lorem(80) ], [ = Page Five #lorem(80) ], [ = Page Six #lorem(80) ], [ = Back Cover ], ) // provide your content pages in order and they // are placed into the booklet positions. // the content is wrapped before movement so that // padding and alignment are respected. #sig( contents: my-eight-pages ) // A Fancier example #show heading: set align(center) #set text(size: 16pt, font: "PT Serif", lang: "en") #set par(justify: true) #let more-eight-pages = ( [ #v(45%) = Cover Page ], [ = Page One \ #lorem(80) ], [ = Page Two \ #lorem(100) ], [ = Page Three \ #lorem(80) ], [ = Page Four \ #lorem(40) #lorem(40) #lorem(40) ], [ = Page Five \ #lorem(80) ], [ = Page Six \ #lorem(120) ], [ #v(45%) = Back Cover ], ) #sig( signature-paper: "us-legal", page-margin-top: 0.5in, page-margin-bottom: 0.5in, page-margin-binding: 1in, page-margin-edge: 0.5in, page-border: none, draft: false, p-num-pattern: (..nums) => box(inset: 3pt, text(size: 10pt, sym.lt.curly.double )) + " " + nums .pos() .map(str) .join(".") + " " + box(inset: 3pt, text(size: 10pt, sym.gt.curly.double)), p-num-placment: bottom, p-num-align-horizontal: center, p-num-align-vertical: horizon, p-num-size: 16pt, p-num-pad-horizontal: 0pt, p-num-border: none, pad-content: 0pt, contents: more-eight-pages )
https://github.com/sitandr/typst-examples-book
https://raw.githubusercontent.com/sitandr/typst-examples-book/main/src/packages/index.md
markdown
MIT License
# Packages Since the [Typst Universe](https://typst.app/universe) launched, this chapter has become almost redundant. The Universe is actually a very cool place to look for packages. However, there are still some cool examples of interesting package usage. ## General Typst has packages, but, unlike LaTeX, you need to remember: - You need them only for some specialized tasks; basic formatting _can be done entirely without them_. - Packages are much lighter and much easier to "install" than LaTeX ones. - Packages are just plain Typst files (and sometimes plugins), so you can easily write your own! To use a package, just import it like this: ```typ #import "@preview/cetz:0.1.2": canvas, plot #canvas(length: 1cm, { plot.plot(size: (8, 6), x-tick-step: none, x-ticks: ((-calc.pi, $-pi$), (0, $0$), (calc.pi, $pi$)), y-tick-step: 1, { plot.add( style: plot.palette.blue, domain: (-calc.pi, calc.pi), x => calc.sin(x * 1rad)) plot.add( hypograph: true, style: plot.palette.blue, domain: (-calc.pi, calc.pi), x => calc.cos(x * 1rad)) plot.add( hypograph: true, style: plot.palette.blue, domain: (-calc.pi, calc.pi), x => calc.cos((x + calc.pi) * 1rad)) }) }) ``` ## Contributing If you are the author of a package, or just want to contribute a fair overview, feel free to open issues/PRs!
https://github.com/typst/packages
https://raw.githubusercontent.com/typst/packages/main/packages/preview/cetz-plot/0.1.0/src/plot/sample.typ
typst
Apache License 2.0
/// Sample the given single-parameter function `samples` times, with values /// evenly spaced within the range given by `domain`, and return each /// sampled value in an array as an `(x, y)` tuple. /// /// If the function's first return value is a tuple `(x, y)`, then all return values /// must be tuples. /// /// - fn (function): Function to sample of the form `(x) => y` or `(t) => (x, y)`, where /// `x` or `t` are `float` values within the domain specified by `domain`. /// - domain (domain): Domain of `fn` used as bounding interval for the sampling points. /// - samples (int): Number of samples in domain. /// - sample-at (array): List of x values the function gets sampled at in addition /// to the `samples` number of samples. Values outside the /// specified domain are legal. /// -> array: Array of (x, y) tuples #let sample-fn(fn, domain, samples, sample-at: ()) = { assert(samples + sample-at.len() >= 2, message: "You must at least sample 2 values") assert(type(domain) == array and domain.len() == 2, message: "Domain must be a tuple") let (lo, hi) = domain let y0 = (fn)(lo) let is-vector = type(y0) == array if not is-vector { y0 = ((lo, y0), ) } else { y0 = (y0, ) } let pts = sample-at + range(0, samples).map(t => lo + t / (samples - 1) * (hi - lo)) pts = pts.sorted() return pts.map(x => { if is-vector { (fn)(x) } else { (x, (fn)(x)) } }) } /// Samples the given two-parameter function with `x-samples` and /// `y-samples` values evenly spaced within the range given by /// `x-domain` and `y-domain` and returns each sampled output in /// an array. /// /// - fn (function): Function of the form `(x, y) => z` with all values being numbers. /// - x-domain (domain): Domain used as bounding interval for sampling point's x /// values. /// - y-domain (domain): Domain used as bounding interval for sampling point's y /// values. /// - x-samples (int): Number of samples in the x-domain. /// - y-samples (int): Number of samples in the y-domain. 
/// -> array: Array of z scalars #let sample-fn2(fn, x-domain, y-domain, x-samples, y-samples) = { assert(x-samples >= 2, message: "You must at least sample 2 x-values") assert(y-samples >= 2, message: "You must at least sample 2 y-values") assert(type(x-domain) == array and x-domain.len() == 2, message: "X-Domain must be a tuple") assert(type(y-domain) == array and y-domain.len() == 2, message: "Y-Domain must be a tuple") let (x-min, x-max) = x-domain let (y-min, y-max) = y-domain let y-pts = range(0, y-samples) let x-pts = range(0, x-samples) return y-pts.map(y => { let y = y / (y-samples - 1) * (y-max - y-min) + y-min return x-pts.map(x => { let x = x / (x-samples - 1) * (x-max - x-min) + x-min return float((fn)(x, y)) }) }) }
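For readers more at home in Python, the even-spacing scheme of `sample-fn` can be sketched as follows. This is a rough illustration with invented names, not part of cetz-plot: `samples` points spread uniformly over `domain`, merged with any extra `sample_at` x values, sorted, then mapped through the function as `(x, y)` tuples.

```python
# A minimal Python sketch (hypothetical names) of the sampling scheme:
# uniform spacing lo + t/(samples-1) * (hi-lo), plus extra sample points.
def sample_fn(fn, domain, samples, sample_at=()):
    assert samples + len(sample_at) >= 2, "You must at least sample 2 values"
    lo, hi = domain
    pts = list(sample_at) + [lo + t / (samples - 1) * (hi - lo) for t in range(samples)]
    pts.sort()
    return [(x, fn(x)) for x in pts]
```

Sampling `x => x * x` over `(0, 1)` with three samples yields points at 0, 0.5, and 1, mirroring what the Typst implementation produces for scalar-valued functions.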
https://github.com/ysthakur/PHYS121-Notes
https://raw.githubusercontent.com/ysthakur/PHYS121-Notes/main/Notes/Ch04.typ
typst
MIT License
= Chapter 4 == Motion and Forces / Newton's first law: An object will remain at rest or continue moving in a straight line at the same speed unless a net force acts on it / Contact forces: Forces that act on an object by touching it / Long-range forces: Forces that act on an object without physical contact Examples of forces: - Weight ($arrow(w)$) - Tension ($arrow(T)$) == Newton's Second Law $arrow(F)_"net" = m arrow(a)$ == Newton's Third Law / Interaction: Mutual influence of two objects on each other An action/reaction pair of forces exists as a pair or not at all / Newton's Third Law: Every force occurs as one member of an action/reaction pair of forces - The two members of an action/reaction pair of forces act on different objects - The two members of an action/reaction pair point in *opposite directions* but have the *same magnitude*
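Newton's second law above is plain arithmetic once the quantities are known; a minimal worked sketch (values made up for illustration, not from the notes):

```python
# Newton's second law: F_net = m * a.
def net_force(mass_kg, accel_ms2):
    """Net force in newtons on a mass (kg) with acceleration (m/s^2)."""
    return mass_kg * accel_ms2

# A 2.0 kg object accelerating at 3.0 m/s^2 experiences a 6.0 N net force.
```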
https://github.com/sa-concept-refactoring/doc
https://raw.githubusercontent.com/sa-concept-refactoring/doc/main/chapters/refactoringIdeas.typ
typst
= Refactoring Ideas <refactoring_ideas> // COR Per se, it would have been interesting to describe all the ideas, even those that are not refactorings or are not feasible. // Jeremy Did we have other ideas? In this section, ideas for potential refactoring operations are explored. This serves as the foundation for deciding which features to implement. A total of three ideas are described. The concepts outlined in this section are intentionally presented in a basic state. The objective is to subject subsets of these concepts to further analysis and refinement for each refactoring operation during the implementation phase. The first idea, described in @idea_requirement_transformation, is inspired by sample code from the constraints and concepts reference @constraints_and_concepts. The second idea, described in @idea_extraction, came up during experimentation with concepts. Finally, the third idea (@concept-generation) describes automatic concept declaration generation. == Requirement Transformation <idea_requirement_transformation> A refactoring could be provided to transform a function template using constraints between alternate forms. @transformation_idea_listing shows different variations of a function template. They all result in an identical function signature after instantiation and thus cannot co-exist. This was verified using GCC and clangd. In many cases the benefit is a more readable function declaration. For more readable code, some developers prefer to remove unnecessary syntax such as the `requires` keyword. The versions on the left in @transformation_idea_listing show how the code looks without the keyword while expressing the same logic. The potential refactoring would therefore focus on the removal of the `requires` clause. This idea was inspired by the constraints and concepts reference @constraints_and_concepts, since it lists all these forms in its first code snippet. 
#figure( kind: table, grid( columns: (auto, auto), gutter: 1em, align(start + horizon)[ ```cpp template<Hashable T> void f(T) {} ``` ], align(start + horizon)[ ```cpp template<typename T> void f(T) requires Hashable<T> {} ``` ], align(start + horizon)[ ```cpp void f(Hashable auto x) {} ``` ], align(start + horizon)[ ```cpp template<typename T> requires Hashable<T> void f(T) {} ``` ], ), caption: [Different ways to constrain a function template using concepts], ) <transformation_idea_listing> #pagebreak() == Extraction of Conjunctions and Disjunctions <idea_extraction> Sometimes more than one constraint is used in a ```cpp requires``` clause. This is expressed by `||` and `&&` operators. The proposed refactoring would offer to extract these logical combinations into a new named concept. One possible hurdle for this refactoring could be that there is no way to trigger a rename of the newly created concept. This seems to be a limitation of the language server protocol @lsp_issue_724 @lsp_issue_764. The refactoring itself would still be possible, but use a generated name for the new concept, requiring the user to rename it. To illustrate the idea, @conjunction_idea_listing shows a method `bar` whose type parameter `T` is constrained by two concepts. These requirements are extracted into a new named concept in @refactored_conjunction_idea_listing. The name _NAME_ would be the name generated by the refactoring. // COR How about (more complex) requires requires clauses? #figure( ```cpp template <typename T> void bar(T a) requires std::integral<T> && Hashable<T> { ... } ```, caption: [An existing conjunction], ) <conjunction_idea_listing> #figure( ```cpp template<typename T> concept NAME = std::integral<T> && Hashable<T>; template <typename T> void bar(T a) requires NAME<T> { ... 
} ```, caption: [The proposed refactoring to the conjunction in @conjunction_idea_listing], ) <refactored_conjunction_idea_listing> #pagebreak() == Concept Generation <concept-generation> A refactoring that generates a new concept declaration based on an existing function template declaration would be really useful. It would allow developers to write comprehensive concepts more easily without having to analyze their code in detail. For example the function template in @concept_generation_function_example would result in a concept and function template declaration like the one in @concept_generation_concept when applied to the template parameter `T`. #figure( ```cpp template <typename T> void foo(T a) { double x = a.abc(); } ```, caption: "Simple function template declaration", ) <concept_generation_function_example> #figure( ```cpp template<typename T> concept HasAbcMethod = requires(T a) { { a.abc() } -> std::convertible_to<double>; }; template <HasAbcMethod T> void foo(T a) { double x = a.abc(); } ```, caption: [@concept_generation_function_example with the concept extracted], ) <concept_generation_concept>
https://github.com/typst/packages
https://raw.githubusercontent.com/typst/packages/main/packages/preview/chordx/0.1.0/README.md
markdown
Apache License 2.0
# chordx A library to write song lyrics with chord diagrams in Typst. This library uses [CeTZ](https://github.com/johannes-wolf/typst-canvas) (aka typst-canvas) to generate the diagrams. **Table of Contents** - [Usage](#usage) - [Typst Packages](#typst-packages) - [Local Packages](#local-packages) - [Documentation](#documentation) - [Function: new-graph-chords](#function-new-graph-chords) - [Function: new-single-chords](#function-new-single-chords) - [License](#license) ## Usage `chordx` has two implementations, one using [CeTZ](https://github.com/johannes-wolf/typst-canvas) and another using native functions. The native functions work the same way as the main implementation; in the future they will replace the main implementation, but for now you can use both. ```js // Using main implementation #let graph-chord = new-graph-chords() #let single-chord = new-single-chords(style: "italic", weight: "semibold") // Using native implementation #let graph-chord = new-graph-chords-native() #let single-chord = new-single-chords-native(style: "italic", weight: "semibold") ``` ### Typst Packages Typst added an experimental package repository and you can import `chordx` as follows: ```js #import "@preview/chordx:0.1.0": * ``` ### Local Packages If the package hasn't been released yet, or if you just want to use it from this repository, you can use `local-packages`. You can read the documentation about Typst [local-packages](https://github.com/typst/packages#local-packages) and learn about the path folders used in different operating systems (Linux / MacOS / Windows). In Linux you can do: ```sh $ git clone https://github.com/ljgago/typst-chords ~/.local/share/typst/packages/local/chordx-0.1.0 ``` And import the lib in your file: ```js #import "@local/chordx:0.1.0": * ``` ## Documentation With `chordx` you can use two functions, `new-graph-chords` and `new-single-chords`. These functions return other functions (closures) with preset settings. 
### Function: `new-graph-chords` ```js // Chord with diagram // Return a function with these settings #let guitar-chord = new-graph-chords( strings: number, font: string, ) ``` - `strings`: number of strings of the instrument (columns of the grid), default: 6, `Optional` - `font`: text font name, default: "Linux Libertine", `Optional` The returned function from `new-graph-chords` has the following parameters: ```js // Generates a chord diagram #guitar-chord( frets: number, fret-number: number or none, capos: array(array) or array(dictionary), fingers: array, notes, chord-name ) ``` - `frets`: number of frets (rows of the grid), default: 5, `Optional` - `fret-number`: shows the fret position, default: none, `Optional` - `capos`: adds one or many capos to the graph, default: (), `Optional` ```js // array(array) or array(dictionary) (( fret: number, start: number, // the first string end: number // the last string ),) ``` - `fret`: number of the fret position relative to the grid, `Required` - `start`: number of the first string, `Required` - `end`: number of the last string, `Required` - `fingers`: shows the finger numbers, default: (), (0: without finger, number: one finger), `Optional` - `notes`: shows the notes on the graph ("x": mute note, "n": without note, 0: open note, number: one note), `Required` - `chord-name`: shows the chord name, `Required` Examples: ```js #import "@preview/chordx:0.1.0": * #let guitar-chord = new-graph-chords() #let ukulele-chord = new-graph-chords(strings: 4) // Guitar B chord #guitar-chord( capos: ((fret: 2, start: 1, end: 5),), // capos: ((2, 1, 5),) fingers: (0, 1, 2, 3, 4, 1), ("x", "n", 4, 4, 4, "n") )[B] // Ukulele B chord #ukulele-chord( capos: ((2, 1, 2),), // capos: ((fret: 2, start: 1, end: 2),) fingers: (3, 2, 1, 1), (4, 3, "n", "n") )[B] ``` <h3 align="center"> <img alt="Graph Chord" src="https://raw.githubusercontent.com/ljgago/typst-chords/main/examples/graph-chords.svg" style="max-width: 100%; width: 200pt; padding: 10px 20px; 
box-shadow: 1pt 1pt 10pt 0pt #AAAAAA; border-radius: 8pt; box-sizing: border-box; background: white" > </h3> ### Function: `new-single-chords` ```js // A single chord (without diagram) // Return a function with these settings #let chord = new-single-chords( style: "italic", weight: "semibold", ... ) ``` A chord without a diagram is used to write the chord name over a word. All parameters of `new-single-chords` are the same as those of Typst's `text` function. The returned function from `new-single-chords` has the following parameters: ```js #chord( body, chord-name, body-char-pos ) ``` - `body`: the word or words where the chord goes, `Required` - `chord-name`: displays the chord name over the selected words in the body, `Required` - `body-char-pos`: positions the chord over a specific character in the body, `[]` or `[0]`: chord centered above the body, `[number]`: chord above a character in the body, `Required` Examples: ```js #import "@preview/chordx:0.1.0": * #let chord = new-single-chords(style: "italic", weight: "semibold") #chord[Jingle][G][2] bells, jingle bells, jingle #chord[all][C][2] the #chord[way!][G][2] \ #chord[Oh][C][] what fun it #chord[is][G][] to ride \ In a #chord[one-horse][A7][2] open #chord[sleigh,][D7][3] hey! ``` <h2 align="center"> <img alt="Single Chord" src="https://raw.githubusercontent.com/ljgago/typst-chords/main/examples/single-chords.svg" style="max-width: 100%; width: 400pt; padding: 10px 20px; box-shadow: 1pt 1pt 10pt 0pt #AAAAAA; border-radius: 8pt; box-sizing: border-box; background: white" > </h2> ## License [MIT License](./LICENSE)
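The pattern the README describes, a constructor that returns a preconfigured function (closure), can be sketched in Python as an analogy. The names and string output below are invented for illustration and are not the chordx API:

```python
# Hypothetical miniature of the chordx pattern: the outer function captures
# preset settings, and the returned closure applies them to each call.
def new_single_chords(style="italic", weight="semibold"):
    def chord(body, chord_name):
        # Render a tagged string instead of real Typst content.
        return f"[{chord_name}:{style}/{weight}] {body}"
    return chord

chord = new_single_chords()
print(chord("Jingle", "G"))
```

The design choice mirrors the Typst library: settings chosen once at construction time apply uniformly to every chord, so a song only needs one `new-single-chords` call.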
https://github.com/ckunte/m-one
https://raw.githubusercontent.com/ckunte/m-one/master/inc/reac.typ
typst
= Mapping dynamics While sifting through some of my files yesterday, I found a spreadsheet using the Gaussian elimination technique@spottiswoode_gaussian, refreshing old memories. In 2004, we were engineering a SPAR + topside --- the first of its kind for the region. For transport, we designed the topside to straddle atop two lean barges --- like a catamaran. This enabled the topside to be floated over, and mate with the SPAR. We quickly realised that the catamaran arrangement was a stiffness hog for the topside during tow (racking moments resulting from quartering seas) --- driving the need for more steel in the topside, far more than in its in-service conditions.#footnote[US 8312828 B2 patent, _Pre-loading to reduce loads and save steel on topsides and grillage of catamaran systems_,@luo_us8312828b2 describes the measures we took to save steel.] The marine team furnished motion-induced dynamic loads of the catamaran barge ensemble (from motion responses) for the topside stress check during its fourteen-day tow. To transfer these on to the topside as inertia loads, we did the following: + First, we converted the dynamic loads into the topside coordinate system along with sign conventions. + Then, we generated inertia loads on the topside corresponding to 1.0g of linear (surge, sway, heave) and angular accelerations (roll, pitch, yaw), which resulted in a load case each --- six in total, with each containing loads (Fx, Fy, Fz) and moments (Mx, My, Mz). + From the two steps above, the idea was to find suitable factors to apply to the loads generated in step 2 to match the dynamic loads received from the marine team --- using the Gaussian elimination technique. Upon solving the following system, we'd end up with six load factors to apply to our six inertia load sets respectively. 
#set math.mat(delim: "[") $ mat( F_"x1", F_"x2", F_"x3", F_"x4", F_"x5", F_"x6", dots.v, F_x; F_"y1", F_"y2", F_"y3", F_"y4", F_"y5", F_"y6", dots.v, F_y; F_"z1", F_"z2", F_"z3", F_"z4", F_"z5", F_"z6", dots.v, F_z; M_"x1", M_"x2", M_"x3", M_"x4", M_"x5", M_"x6", dots.v, M_x; M_"y1", M_"y2", M_"y3", M_"y4", M_"y5", M_"y6", dots.v, M_y; M_"z1", M_"z2", M_"z3", M_"z4", M_"z5", M_"z6", dots.v, M_z; ) $ Back to the spreadsheet, I noticed that we had actually run multiple pivot-and-eliminate routines iteratively, until all coefficients off the principal diagonal were eliminated and each coefficient on the principal diagonal was reduced to 1 --- as is done in the technique. Matrices are now available in most modern computing software. Gaussian elimination, on the other hand, was perhaps from an era of logarithms and radians --- designed to simplify computational complexity when done by hand. So, I am not sure why we used this technique, in lieu of the matrix functions in Excel or MathCAD at our disposal. Following the classic recipe of solving linear equations ($A dot x = B$) for $x$, a column matrix of load factors, where $A$ is a square matrix of inertia loads --- corresponding to 1.0g, and $B$ is a column matrix of dynamic loads from the catamaran's motion responses, I punched in the two arrays to see if I could get the same set of $x$. #pagebreak(weak: true) Here's how simple it is with numpy. #v(1em) ```python #!/usr/bin/env python # -*- coding: UTF-8 -*- """ mat.py -- 2015 ckunte. """ import numpy def main(): # Inertia matrix, A, corresponds to 1.0g in surge, sway, # heave, roll, pitch, and yaw. 
A = numpy.mat("[-11364.0, 0.0, 0.0, 0.0, -412.3, -9.1; \ 0.0, -11364.0, 0.0, 412.3, 0.0, -9.9; \ 0.0, 0.0, -11364.0, 9.1, 9.9, 0.0; \ 0.0, 231661.7, 5129.7, -11569.7, 322.5, 266.6; \ -231661.7, 0.0, 5574.3, 322.5, -15050.3, -239.8; \ -5129.7, -5574.3, 0.0, 266.6, -239.8, -8929.5]") # Motion-induced dynamic loads (one of numerous cases) B = numpy.mat("[-2961.0; -1358.0; -40613.0; 119921.5; \ -68588.5; 210347.9]") # getI() is the matrix inverse function from numpy. x = A.getI() * B print(x) if __name__ == '__main__': main() ``` The output looks like below --- matching the result we'd obtained from the Gaussian elimination method: ```bash $ python mat.py [[ 0.16090823] [ -0.71351288] [ 3.55783674] [-23.53602482] [ 3.27622169] [-23.99421225]] ``` $ - * - $
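For reference, the same system solves just as cleanly with the plain ndarray API instead of the now-discouraged `numpy.matrix` interface. A sketch reusing the A and B values from `mat.py` above:

```python
import numpy as np

# Inertia matrix A (loads per 1.0g of surge, sway, heave, roll, pitch, yaw)
# and dynamic-load vector B, copied from mat.py above, as plain ndarrays.
A = np.array([
    [-11364.0, 0.0, 0.0, 0.0, -412.3, -9.1],
    [0.0, -11364.0, 0.0, 412.3, 0.0, -9.9],
    [0.0, 0.0, -11364.0, 9.1, 9.9, 0.0],
    [0.0, 231661.7, 5129.7, -11569.7, 322.5, 266.6],
    [-231661.7, 0.0, 5574.3, 322.5, -15050.3, -239.8],
    [-5129.7, -5574.3, 0.0, 266.6, -239.8, -8929.5],
])
B = np.array([-2961.0, -1358.0, -40613.0, 119921.5, -68588.5, 210347.9])

# Solve A x = B directly; x holds the six load factors.
x = np.linalg.solve(A, B)
print(x)
```

`np.linalg.solve` also avoids forming an explicit inverse, which is both faster and numerically safer than `A.getI() * B`.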
https://github.com/yhtq/Notes
https://raw.githubusercontent.com/yhtq/Notes/main/数学模型/作业/hw4.typ
typst
#import "../../template.typ": proof, note, corollary, lemma, theorem, definition, example, remark, proposition,der, partialDer, Spec #import "../../template.typ": * // Take a look at the file `template.typ` in the file panel // to customize this template and discover how it works. #show: note.with( title: "作业4", author: "YHTQ", date: none, logo: none, withOutlined : false, withTitle :false, ) #let absSigmaI(i) = $abs(x_sigma(#i))$ #let fL = $f^L$ (应交时间为4月15日) = 4.1 == === 对 $x(t)$ 做傅里叶变换,得到: $ sum_(n=-infinity)^(infinity) 1/(2 pi) integral_(-pi)^(pi) x(t) e^(-i n t) dif t space e^(i n t)\ = sum_(n=-infinity)^(infinity) 1/(2 pi) integral_(-pi)^(pi) sin(sqrt(2) t) e^(-i n t) dif t space e^(i n t)\ = sum_(n=-infinity)^(infinity) 1/(2 pi) integral_(-pi)^(pi) (e^(i sqrt(2) t) - e^(-i sqrt(2) t))/(2 i) e^(-i n t) dif t space e^(i n t)\ = sum_(n=-infinity)^(infinity) 1/(2 pi) (integral_(-pi)^(pi) e^(i (sqrt(2) - n) t) dif t - integral_(-pi)^(pi) e^(-i (sqrt(2) + n) t) dif t)/(2 i) e^(i n t)\ = - sum_(n=-infinity)^(infinity) 1/(4 pi) (1/(sqrt(2) - n) (e^(i (sqrt(2) - n) pi) - e^(-i (sqrt(2) - n) pi)) - 1/(sqrt(2) + n) (e^(i (sqrt(2) + n) pi) - e^(-i (sqrt(2) + n) pi)) ) e^(i n t)\ $ === 对 $x(t)$ 做傅里叶变换,得: $ c_n = 1/(2 l) integral_(-l)^(l) x(t) e^(-i n pi t/l) dif t\ = integral_(-1/2)^(1/2) x(t) e^(-2 i n pi t) dif t\ = integral_(-1/2)^(0) -t e^(-2 i n pi t) dif t + integral_(0)^(1/2) t e^(-2 i n pi t) dif t\ = (1/4 n^(-2) (pi i n + ((-pi i n + 1) e^(pi i n) - 1) e^(pi i n) - e^(pi i n) + 1) e^(-pi i n) / pi^2) $ == 在附件中有编程计算的结果
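The second computation above finds the Fourier coefficients of the triangle wave x(t) = |t| on [-1/2, 1/2]; the final expression simplifies to c_0 = 1/4 and c_n = ((-1)^n - 1)/(2 pi^2 n^2) for n != 0, i.e. -1/(pi^2 n^2) for odd n and 0 for even nonzero n. A quick numerical sanity check (my own addition, not part of the assignment):

```python
import numpy as np

# Trapezoid-rule approximation of c_n = integral over [-1/2, 1/2] of
# |t| * exp(-2*pi*i*n*t) dt, to cross-check the closed form
# c_0 = 1/4 and c_n = ((-1)^n - 1) / (2 * pi^2 * n^2) for n != 0.
def c(n, N=200000):
    t = np.linspace(-0.5, 0.5, N + 1)
    f = np.abs(t) * np.exp(-2j * np.pi * n * t)
    # Manual trapezoid rule (portable across NumPy versions).
    return (f[0] / 2 + f[1:-1].sum() + f[-1] / 2) / N

print(c(0).real)   # close to 1/4
print(c(1).real)   # close to -1/pi^2
print(abs(c(2)))   # close to 0, even coefficients vanish
```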
https://github.com/han0126/MCM-test
https://raw.githubusercontent.com/han0126/MCM-test/main/2024亚太杯typst/chapter/chapter2.typ
typst
= Basic Assumptions 1. The probability of flood occurrence can be clearly divided into high-, medium-, and low-risk categories. 2. The characteristics and probabilities of past flood events will remain unchanged in the future. 3. The data given in the attachments 'train.csv' and 'test.csv' are accurate and cover many different flood events, representing both typical and anomalous flooding situations. 4. The chosen statistical and machine-learning models are suitable for probabilistic prediction of flood hazards. 5. The models can handle potential collinearity by selecting appropriate indicators and dimensionality-reduction techniques, so that the indicators ultimately used are relatively independent within the model. 6. When computing indicator weights and model parameters, these values are stable across different samples and time points, without drastic fluctuations. \
https://github.com/VZkxr/Typst
https://raw.githubusercontent.com/VZkxr/Typst/master/Seminario/Proyecto%20Final/document.typ
typst
#import "codly-0.2.1/codly.typ": * #show: codly-init.with() #let icon(codepoint) = { box( height: 0.8em, baseline: 0.05em, image(codepoint) ) h(0.1em) } #codly(languages: ( rust: (name: "Rust", icon: icon("brand-rust.svg"), color: rgb("#CE412B")), python: (name: "Python", icon: icon("brand-python.svg"), color: rgb("#3572A5")), )) #set page( paper: "us-letter", numbering: "1", ) #set par(justify: true, leading: 1.5em) #set text(lang: "es", size: 12pt) #pagebreak() //--- INTRODUCCIÓN ---// #align(center)[ #text( font: "Times New Roman", weight: "bold", size: 15pt)[Introducción] ] Este documento busca analizar las problemáticas subyacentes en la enseñanza de las matemáticas y proponer estrategias innovadoras para abordarlas, con un énfasis en la integración de tecnologías modernas y metodologías didácticas efectivas. A lo largo de la lectura, se pretende proporcionar una visión integral que permita mejorar la enseñanza y en consecuencia, mejores resultados académicos. En el primer capítulo, se examinarán las causas del desinterés y el miedo hacia las matemáticas, enfocándose en la falta de integración significativa de esta disciplina con otras áreas del conocimiento y la insuficiente visualización geométrica en la enseñanza. Se discutirán estudios y teorías que respaldan estas observaciones, haciendo mención en cómo la educación matemática actual difícilmente se ajusta a la velocidad con la que el mundo avanza gracias a la tecnología. \ \ El segundo capítulo explorará la metodología de resolución de problemas como una herramienta pedagógica poderosa, enfatizando el uso de tecnologías de la información y la comunicación (TICs). La resolución de problemas, apoyada por las TICs no solo facilita una comprensión más profunda de los conceptos matemáticos, sino que también prepara a los estudiantes para enfrentar desafíos en un mundo laboral cada vez más digitalizado. 
Se propondrán algunas de las herramientas más comunes y demandadas por el mercado, que serán presentadas como recursos valiosos que pueden transformar la forma en que se enseña y se aprende matemáticas. \ \ Finalmente, en el tercer capítulo, se presentará una propuesta concreta de trabajo basada en la resolución de problemas y el uso de TICs. Se incluirán ejemplos prácticos y secuencias didácticas que demuestran cómo estas metodologías pueden ser implementadas en el aula. El objetivo es mostrar que, mediante una correcta integración de estas herramientas, se puede fomentar un aprendizaje más activo, significativo y alineado con las demandas del siglo XXI. #pagebreak() //--- CAPÍTULO I ---// #align(center)[ #text( font: "Times New Roman", weight: "bold", size: 15pt)[Capítulo I: La enseñanza de las matemáticas] ] Entre los intereses de los alumnos de educación básica, no es un secreto que el menos importante es la disciplina matemática; con justa razón, pues lo que genera el sistema educativo a través de los docentes no es persuasión, sino miedo por parte del alumnado hacia las matemáticas. Y existen distintas problemáticas dentro de este sistema que, con el debido tiempo y dedicación a la materia, traen como consecuencia los resultados en las pruebas PISA, en donde en 2018, solamente un 44% del alumnado obtuvo un nivel mínimo de competencias en matemáticas @web1. \ Englobar cada problemática que causa este índice tan lamentable en las pruebas PISA nos llevaría más de unas cuantas cuartillas, por ello, sólo mencionaré un par de esas problemáticas que van bien ligadas a la enseñanza de una matemática que no se sostiene ni se ajusta a la velocidad con la que el mundo actual va avanzando gracias a la tecnología. \ La primera de ellas es que las matemáticas hoy en día, más que una necesidad, se están convirtiendo en una unificación entre múltiples áreas. 
El problema aquí es que esa unificación no se ve reflejada sino hasta que se tiene la oportunidad de llegar a un ambiente laboral en donde se requieran esos conocimientos, y la pregunta que puede surgir ahora es ¿cuántos logran tener esa oportunidad y qué los condiciona a no llegar a ella? Pues fijando nuestra atención a "La nueva familia de Libros de Texto Gratuitos" proporcionados por la SEP @web2, podemos darnos cuenta de que el intento de "unificación" no es más que el simple revoltijo de temas ya planteados en anteriores sexenios, ¿por qué enseñar primero los números naturales, pero en seguida un tema de evolución y tecnologías verdes?, si lo que se quiere evitar, como decía <NAME> en su libro "El fracaso de la matemática moderna" @frac, es una saturación de información y conceptos, considero que además de que esto contribuye a la antítesis propuesta por Kline, es aún peor mezclar dos cosas que son completamente distintas. #pagebreak() #set page( header: text( font: "Times New Roman", style: "italic", weight: "light", size: 10pt)[ Análisis visual e interactivo: Una apuesta a la enseñanza #h(1fr) Capítulo I], ) Otra de las dificultades que considero de alta prioridad atender es la falta de visualización geométrica en la matemática; un estudio realizado por la Universidad Técnica de Manabí @art1 señala que: #block( fill: luma(230), inset: 1.27cm, radius: 4pt )[ #set par( leading: 1.0em, ) #text(style: "italic")["La problemática en la enseñanza de la geometría plantea un desafío a todos los involucrados en el proceso de enseñanza y aprendizaje de las Matemáticas para hallar alternativas de solución, pues la enseñanza de la geometría se puede desvirtuar y se han dejado de lado procesos de razonamiento, argumentación y #text(weight: "semibold")[visualización], los cuales son trascendentales para el aprendizaje." 
\ #align(right)[(p. 22)]] ]

Lo cual puede traer como repercusión una falta de entendimiento en materias como cálculo integral y diferencial y geometría analítica, pues es en estas materias en donde la visualización de las funciones y figuras forma parte indispensable tanto de los cálculos como del análisis dentro de la resolución de problemas. \ \ En un primer ensayo argumentativo, destaqué la importancia de no remover la teoría de conjuntos y la lógica para los alumnos de educación básica. Se planteó la necesidad de considerar estas ramas de estudio en ese nivel para una preparación encaminada al uso de Excel, que se ha convertido en un programa demandado por los empleadores y cuyo funcionamiento está basado en funciones, conjuntos y lógica. Ahora, a manera de conectar la visualización geométrica con el uso de software, haré énfasis en los recursos tecnológicos con los que contamos hoy en día, los cuales nos pueden ayudar a no dejar de lado las problemáticas antes mencionadas.

#pagebreak()

//--- CAPÍTULO II ---//
#set page(
  header: text(
    font: "Times New Roman",
    style: "italic",
    weight: "light",
    size: 10pt)[],
)

#align(center)[
  #text(
    font: "Times New Roman",
    weight: "bold",
    size: 15pt)[Capítulo II: La Resolución de Problemas como metodología de enseñanza y el uso de tecnologías]
]

Cuando se publicaron los problemarios a resolver por el alumnado durante el seminario, llegamos a concluir que la manera de abordar uno de los ejercicios es distinta para cada persona, y que lo que más influye al momento de tomar un camino para la resolución es el entendimiento del ejercicio en sí. Sin embargo, gracias a las heurísticas propuestas por Polya en su libro "Cómo plantear y resolver problemas" @poly, se logró establecer una serie de consideraciones antes, durante y después de la solución.
En particular, los estudiantes se centraron en los cuatro pilares que, según Polya, se necesitan para resolver un problema: comprender el problema, concebir un plan, ejecutar el plan y examinar la solución obtenida. Posteriormente, durante la exposición de algunas soluciones, se destacó la importancia de la visualización y el uso de las heurísticas para su conclusión. A continuación, durante este capítulo y el siguiente, veremos cómo puede esto relacionarse con el uso de las TICs en la resolución de problemas. \ \ Hasta antes de la accesibilidad económica para tener un computador en casa, era necesario hacer todo en papel, con cuentas tediosas y visualizaciones de gráficas poco precisas. Hoy en día, además de que esta tecnología es más accesible, se cuenta con plataformas de uso remoto como:

#list(
  indent: 5pt,
  [Google Colab: Es un servicio alojado de Jupyter Notebook que no requiere configuración y que ofrece acceso gratuito a recursos de computación, como tarjetas gráficas y unidades de procesamiento tensorial. Es una solución especialmente adecuada para el aprendizaje automático, la ciencia de datos y la educación.],
  [Programiz: Es un sitio web que cuenta con ejercicios y tutoriales para aprender a programar en Python, R, Java y otros lenguajes.],
)

#pagebreak()

#set page(
  header: text(
    font: "Times New Roman",
    style: "italic",
    weight: "light",
    size: 10pt)[
      Análisis visual e interactivo: Una apuesta a la enseñanza #h(1fr) Capítulo II],
)

#list(
  [Mathematica: software orientado a la computación de problemas en las áreas científicas, de ingeniería, matemáticas y computacionales. No solo es un sistema de álgebra computacional, sino también un potente lenguaje de programación de propósito general.]
)

Se hace mención de estas herramientas para dar a conocer la capacidad de cómputo que posee cada una, pero también para familiarizar al lector con ellas, pues pueden ser mencionadas de ahora en adelante.
\ \ Consideremos que en el ensayo anterior, titulado "Metodología inmarcesible", se hizo hincapié en el uso de Manim para visualizaciones de construcciones matemáticas; pero, a la fecha de publicación de este ensayo, no existen estudios acerca del apoyo que ha dado Manim a estudiantes de matemáticas y, consecuentemente, no es posible formar una argumentación sostenible sobre el uso de esta herramienta. Sin embargo, podemos plantear un análisis casi análogo. \ Notemos que las TICs en educación son las Tecnologías de la Información y la Comunicación que se emplean como recursos y herramientas para el aprendizaje y que almacenan, procesan y transmiten información digital. Por poner un ejemplo, @art2 documenta un caso particular en el que el uso del software Mathematica 10 permitió a los estudiantes universitarios de precálculo un aprendizaje más sencillo gracias a una serie de herramientas desarrolladas en el software. Además, se declara que el punto de partida de esa investigación fue el hecho de que el modelo de aprendizaje tradicional no ofrece una experiencia que genere una comprensión sobre los temas, pues no se permite una interacción con el objeto de conocimiento en cuestión. \ \ La idea de que la tecnología como la conocemos hoy sea un puente para conectar objetos matemáticos con propiedades, haciendo uso de conceptos manipulables dentro de los programas en lugar de objetos abstractos, está bien sustentada en el artículo de @art3. De ahí parte un argumento más general al proponer que los límites y las funciones, junto con sus operaciones, ayuden no solamente a visualizar, sino también a manipular los valores de estos y, consecuentemente, a intuir un cambio gráfico y analítico, como se menciona en "Diseño de herramientas que fomentan el aprendizaje de matemáticas con ayuda de Mathematica 10" @art4.
\ \ De este modo se puede ver que Manim, como lo define el autor, "es un motor de animación para videos #text(weight: "semibold")[explicativos] de matemáticas. Se utiliza para crear animaciones precisas mediante programación en Python", es una TIC per se, con el apoyo de grandes patrocinadores y una comunidad en línea @manim_community. Y pese a no contar con estudios sobre sus ventajas, el crecimiento intelectual que propicia o el ahorro en el tiempo de aprendizaje, su uso sí se ve reflejado en importantes universidades como la Universidad de Alcalá: Escuela Politécnica Superior @th1, y también en grandes divulgadores de las matemáticas, por mencionar algunos: Mates Mike, BlueDot, 3Blue1Brown. Al observar el crecimiento obtenido video a video, junto con los excelentes comentarios de los usuarios, es claro que nuestro argumento general del párrafo anterior conserva su esencia al hacer uso de esta paquetería de Python. \ \ Una vez comprendido nuestro panorama y visión del futuro de la enseñanza de las matemáticas, estamos en condiciones de proponer una metodología que encamine a los estudiantes a usar estas herramientas no sólo para la resolución y exposición de problemas, también para un sencillo entendimiento ante las soluciones propuestas por otros compañeros.

#pagebreak()

//--- CAPÍTULO III ---//
#set page(
  header: text(
    font: "Times New Roman",
    style: "italic",
    weight: "light",
    size: 10pt)[],
)

#align(center)[
  #text(
    font: "Times New Roman",
    weight: "bold",
    size: 15pt)[Capítulo III: Propuesta de trabajo de la Resolución de Problemas]
]

Un pequeño ejemplo que aborda las dificultades mencionadas en el primer capítulo está contenido en la materia de cálculo integral a nivel bachillerato.
Primero se nos presenta una motivación llevada por la necesidad de calcular áreas debajo de curvas; para esto se pide considerar alguna función conocida $f(x)$, posteriormente se nos invita a pensar en la 'mejor' figura para aproximar el área debajo de $f$ y, después de algunas propuestas, llegamos a que el rectángulo es la mejor de ellas, haciendo que el área bajo $f$ sea representada como
$ A approx sum_(i=1)^n f(x_i) Delta x = sum_(i=1)^n A_i $
donde $A_i$ es el área de cada rectángulo y $n$ es el número total de rectángulos. Seguido de esto y con ayuda del cálculo diferencial, se nos invita a pensar en
$ lim_(n -> infinity) sum_(i=1)^n A_i $
que, usando la notación de Leibniz, es igual a
$ integral_(a)^(b) f(x) d x $
Siendo esta la noción de comparar sumas finitas con sumas infinitas en términos de áreas bajo curvas. \ \ De aquí podemos identificar una de las problemáticas: la forma de graficar esta comparación, pues para simular una aproximación en clase comúnmente se hace uso del pizarrón, dibujando rectángulo a rectángulo la cantidad que convenga; pero lo que sucede es que se pierde tiempo al momento de querer dibujar hasta 20 rectángulos pequeños (recordemos que la idea es llevarlos hasta infinitos rectángulos), y más aún, es complicada la práctica de manejar y/o comunicar iteraciones en funciones sin ayuda de software. Además de que se puede limitar a un solo ejemplo de este tema tan importante.

#pagebreak()

#set page(
  header: text(
    font: "Times New Roman",
    style: "italic",
    weight: "light",
    size: 10pt)[
      Análisis visual e interactivo: Una apuesta a la enseñanza #h(1fr) Capítulo III],
)

Y es aquí donde presento como propuesta didáctica el uso de Manim para visualizar las iteraciones hasta (aparentemente) $n$ rectángulos.
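La convergencia de las sumas de Riemann también puede comprobarse numéricamente antes de animarla. El siguiente fragmento es un esbozo mínimo (la función $f(x) = sin(x)$ en $[0, pi]$, cuya integral exacta vale $2$, es una elección ilustrativa propia, no parte de la secuencia original):

```python
import numpy as np

def suma_riemann(f, a, b, n):
    """Aproxima el área bajo f en [a, b] con n rectángulos (extremo izquierdo)."""
    x = np.linspace(a, b, n, endpoint=False)  # extremos izquierdos de cada subintervalo
    dx = (b - a) / n
    return float(np.sum(f(x) * dx))

if __name__ == "__main__":
    # La integral de sin(x) en [0, pi] vale exactamente 2
    for n in [4, 8, 16, 32, 1000]:
        print(f"n = {n:5d}  ->  A ≈ {suma_riemann(np.sin, 0, np.pi, n):.6f}")
```

Al aumentar $n$, la suma se acerca al valor exacto; esa es justamente la idea que la animación comunica de forma visual.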
Veamos que el código puede ser proporcionado por ChatGPT ejecutando un prompt como:

#block(
  fill: rgb(211, 225, 245),
  inset: 1.27cm,
  radius: 4pt
)[
  #set par(
    leading: 1.0em,
  )
  #text(style: "normal")[Genera una animación en manim que pueda ser usada como ejemplo de visualización para el entendimiento de sumas finitas y sumas infinitas respecto a áreas bajo la curva, es decir, hablamos de una función que en un principio, su área bajo la curva se pueda aproximar mediante rectángulos y después iterar la cantidad hasta que sean, aparentemente $n$ rectángulos.
  ]
]

Y obtenemos un código bastante depurado que bien puede ser usado para exponer:

```python
from manim import *

class SumasFinitasEInfinitas(Scene):
    def construct(self):
        # Ejes y función
        axes = Axes(x_range=[0, PI], y_range=[0, 1, 0.2], axis_config={"color": BLUE})
        func = lambda x: np.sin(x)
        graph = axes.plot(func, color=WHITE)

        # Título
        title = Text("Sumas Finitas e Infinitas", font_size=36).to_edge(UP)
        self.play(Write(title))

        # Mostrar los ejes y la gráfica
        self.play(Create(axes), Create(graph))
        self.wait(1)

        # Aproximación con rectángulos
        rects_group = VGroup()
        for n in [4, 8, 16, 32]:
            rects = axes.get_riemann_rectangles(graph, x_range=[0, PI], dx=PI/n,
                                                stroke_width=0.5, stroke_color=WHITE,
                                                fill_opacity=0.5)
            rects_group.add(rects)

        # Mostrar las diferentes aproximaciones
        for i, rects in enumerate(rects_group):
            if i == 0:
                self.play(Create(rects))
            else:
                self.play(Transform(rects_group[i-1], rects))
            self.wait(1)

        # Transformación final a una "suma infinita"
        final_rects = axes.get_riemann_rectangles(graph, x_range=[0, PI], dx=PI/1000,
                                                  stroke_width=0.5, stroke_color=WHITE,
                                                  fill_opacity=0.5)
        self.play(Transform(rects_group[-1], final_rects))
        self.wait(2)

        # Fin de la animación
        self.play(FadeOut(rects_group), FadeOut(graph), FadeOut(axes), FadeOut(title))
        self.wait(1)
```

Y obtenemos como output un video del resultado óptimo para la visualización que se necesitaba, el cual puede ser consultado en
#link("https://youtu.be/26dDxx983QU")[youtube.com/output]. O bien si no se desea depender tanto de la IA podemos construir nuestro propio código: ```python from manim import * class Area(Scene): def construct(self): # Configuración de los ejes ax = Axes( x_range=[0, 5, 1], y_range=[0, 6, 1], axis_config={"include_numbers": True}, ) # Función a graficar def func(x): return 1 + x ** 2 * np.exp(-0.15 * x ** 2) graph = ax.plot(func, color=BLUE) # Texto de introducción int_area_sym = MathTex(r"\int_{a}^b f(x)dx").shift(2 * UP) area_mean_text = MathTex(""" \\text{Aproximaremos el área bajo la curva de} \\hspace{.3cm} f(x) \\hspace{.3cm} \\text{en la región} \\hspace{.3cm} 1 \leq x \leq 4""").scale(0.6).next_to(int_area_sym, DOWN) opening_text = VGroup(int_area_sym, area_mean_text) self.play(Write(opening_text), run_time=4) self.wait(2) self.play(FadeOut(opening_text)) # Añadir ejes y graficar la función self.play(Create(ax), Create(graph)) func_text = MathTex(r"y = f(x)").next_to(graph, UP) self.play(Write(func_text)) self.wait(2) # Límites verticales y etiquetas min_lim = ax.get_vertical_line(ax.input_to_graph_point(1, graph), color=YELLOW) max_lim = ax.get_vertical_line(ax.input_to_graph_point(4, graph), color=YELLOW) self.play(Create(min_lim), Create(max_lim), run_time=0.5) self.wait(2) # Aproximación del área con rectángulos de Riemann approx_text = Text("""El área puede ser aproximada con \n sumas de rectángulos pequeños""").scale(0.5).next_to(graph, 4 * UP) self.play(Write(approx_text)) self.wait(2) rects_initial = ax.get_riemann_rectangles(graph, x_range=[1, 4], dx=0.5, stroke_width=0.1) self.play(Create(rects_initial), run_time=2) self.wait(2) self.play(FadeOut(rects_initial), run_time=1) rects_initial = ax.get_riemann_rectangles(graph, x_range=[1, 4], dx=0.1, stroke_width=0.1) self.play(Create(rects_initial), run_time=2) self.wait(2) self.play(FadeOut(rects_initial), run_time=1) rects_initial = ax.get_riemann_rectangles(graph, x_range=[1, 4], dx=0.01, 
stroke_width=0.01) self.play(Create(rects_initial), run_time=2) self.wait(2) conclude_text = Text("""Haciendo esto infinitamente, tenemos""").scale(0.5).next_to(graph, 4 * UP) self.play(Transform(approx_text, conclude_text)) self.wait(3) int_area_sym.next_to(graph, IN) self.play(Create(int_area_sym), run_time=1) self.wait(2) # Haciendo los rectángulos más finos rects_finer = ax.get_riemann_rectangles(graph, x_range=[1, 4], dx=0.001, stroke_width=0.001) self.play(Transform(rects_initial, rects_finer), run_time=2) self.wait(3) ``` En donde solamente hay una diferencia de 30 líneas, líneas que bien fueron agregadas para pulir detalles, pero el resultado es todavía mejor para comunicarlo (#link("https://youtu.be/OrobIKN_4yk")[youtube.com/output2]). \ \ Previamente en el seminario, se presentó una secuencia que tenía por título 'La distribución binomial, su esperanza y varianza'. Y si bien no se introdujo una animación en Manim para la visualización de la solución, sí se mostró la gráfica que representaba la función de densidad vinculada con el problema, construida en Python. A continuación se expone la planeación de la secuencia junto con el problema propuesto y el desarrollo de este. #pagebreak() //--- PLANEACIÓN DE CLASE ---// #set page( header: text( font: "Times New Roman", style: "italic", weight: "light", size: 10pt)[], ) #align(center)[ #text( font: "Times New Roman", weight: "bold", size: 15pt)[La distribución binomial, su esperanza y varianza] ] #text(weight: "semibold", font: "Times New Roman")[Expositores:] - <NAME> - <NAME> - <NAME> #text(weight: "semibold", font: "Times New Roman")[Nivel y grado educativo de destino:] Nivel medio superior para la materia Estadística y Probabilidad II del sexto semestre del Colegio de Ciencias y Humanidades. \ #text(weight: "semibold", font: "Times New Roman")[Tiempo total de secuencia:] Una sesión de 50 minutos. 
\ #text(weight: "semibold", font: "Times New Roman")[Justificación de propuesta didáctica:] La misión de los CCH es que los alumnos se desarrollen como personas con sensibilidades e intereses científicos y humanísticos, y el estudio de la probabilidad es el cruce de ambos. En particular, el estudio de la distribución binomial es útil como primer acercamiento a análisis de fenómenos aleatorios en poblaciones. \ #text(weight: "semibold", font: "Times New Roman")[Descripción de las actividades:] - Material necesario: Tablet o tableta digitalizadora, laptop para transmisión. - Objetivos generales: - Que el alumno comprenda la importancia de la probabilidad para su aplicación en problemas de la vida real. - Que el alumno tome en cuenta las alternativas que se pueden tener entre el uso de GeoGebra y de Python para construcciones gráficas. - Objetivos de aprendizaje: - Que el alumno conecte ideas y conceptos de la probabilidad de una manera constructiva. - Que el alumno capte la noción gráfica de una función de densidad en un problema de aplicación y no rutinario, interactuando con una simulación desarrollada en GeoGebra. 
#pagebreak()

//--- CAPÍTULO III ---//
#set page(
  header: text(
    font: "Times New Roman",
    style: "italic",
    weight: "light",
    size: 10pt)[
      Análisis visual e interactivo: Una apuesta a la enseñanza #h(1fr) Capítulo III: Planeación de clase],
)

- Heurísticas a trabajar:
  - Notación
  - Analogía
  - Generalización
  - Elementos auxiliares

#table(
  columns: 3,
  table.header(
    [Fase], [Desarrollo de actividades], [Tiempo],
  ),
  table.cell(
    rowspan: 2,
    align: horizon,
    rotate(-90deg, reflow: true)[
      Apertura
    ],
  ),
  [El profesor inicia la clase con la construcción de la variable aleatoria de la distribución Binomial mediante un ejemplo estructurado para la participación de los alumnos, considerando el archivo de preliminares compartido previamente.],
  [10 minutos],
  [El profesor da la noción gráfica de la esperanza, varianza y desviación estándar de la variable aleatoria binomial.],
  [5 minutos],
  table.cell(
    rowspan: 3,
    align: horizon,
    rotate(-90deg, reflow: true)[
      Desarrollo
    ],
  ),
  [El profesor presenta el siguiente problema en pantalla: \ “Durante la pandemia del 2019, se construyó un software para los hospitales que asignaba una habitación al azar a los pacientes para mantenerlos en cuarentena. En uno de esos hospitales con seis habitaciones, se presentó la situación de que a dos pacientes se les asignó la misma habitación. Se tuvo que reiniciar el sorteo cuatro veces más para por fin asignarles una habitación distinta. ¿Qué tan común es que suceda esta cantidad de iguales asignaciones? ¿Cómo podría visualizarse qué tan común es?”],
  [3 minutos],
  [El profesor presenta un simulador de asignación del software realizado en GeoGebra y da espacio a que los alumnos interactúen con el applet.],
  [2 minutos],
  [El profesor invita al alumnado a pensar en una posible respuesta a las preguntas planteadas en el problema presentado en pantalla, con la ayuda del applet, los preliminares y las construcciones desarrolladas en clase.
Intentando contestar a las siguientes preguntas: - ¿Notas una secuencia con los datos obtenidos durante la visualización del applet? - ¿Encuentras una relación entre los datos obtenidos durante tu interacción con el applet y las construcciones de esperanza y varianza desarrolladas en clase? - ¿De qué manera podrías inferir una probabilidad con tus respuestas anteriores en relación a las preguntas iniciales del problema?], [10 minutos], table.cell( rowspan: 3, align: horizon, rotate(-90deg, reflow: true)[ Cierre ], ), [El profesor interactúa con las respuestas de los alumnos, haciendo observaciones y proponiendo algunas alternativas para llegar a la solución propuesta.], [5 minutos], [El profesor desarrolla la solución en el pizarrón o tableta, tomando en cuenta alguna propuesta del alumnado si es que la hubo.], [10 minutos], [Finalmente, el profesor presenta una gráfica de la visualización del problema desarrollada en Python, junto con el código y la explicación de este, el cual contiene todo lo desarrollado durante la clase.], [5 minutos], table.cell( colspan: 3, align: right, [Tiempo total: 50 minutos], ), ) #text(weight: "semibold", font: "Times New Roman")[Propuesta de solución:] \ Notemos que sólo se puede empatar o desempatar, entonces lo que estamos considerando son cuatro ensayos de Bernoulli independientes, así que se trata de un problema con distribución binomial, donde el espacio muestral denotado por $S$ son cuatro sorteos consecutivos de dos personas y la variable aleatoria $X$ es el número de empates en cada $s in S$. \ Hay 36 formas posibles de asignar las dos habitaciones, de las cuales sólo hay 6 formas de asignar la misma, entonces hay una probabilidad $p = 6/36 = 1/6$ de quedar en una misma habitación y $1-p = 5/6$ de probabilidad de no quedar en la misma habitación. 
\ Así, la probabilidad de que los 4 reinicios del software asignen la misma habitación es
$ P(X=4)= binom(4, 4) p^4 (1-p)^0 = (1/6)^4 = 1/1296 $
Para saber qué tan común es este suceso, usaremos su esperanza y veremos qué tan alejado está con la desviación estándar. Se tiene que
$ EE[X]=n p = 4(1/6) = 2/3 \
sigma = sqrt(n p (1-p)) = sqrt(4(1/6)(5/6))=sqrt(5)/3 $
Ahora, veamos a cuántas "sigmas" se encuentra el evento:
$ EE[X] + k sigma &= 4 \
arrow.l.r.double.long k &= (4-EE[X])/sigma \
arrow.l.r.double.long k &= frac(4-2/3, sqrt(5)/3) \
arrow.l.r.double.long k &= 10/sqrt(5) \
& approx 4.47 $
El evento se encuentra aproximadamente a $4.5 sigma$, es decir, es bastante raro. Introduciendo los datos en Python, podemos mostrar la siguiente gráfica:

#figure(
  image("Recursos/gr_1.png", width: 80%),
  caption: [
    Visualización del problema.
  ],
  supplement: [Figura]
)

Código:

```python
# Importar paqueterías
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import binom

# Parámetros
n = 4                          # ensayos
p = 1/6                        # probabilidad de éxito
mu = n * p                     # media
sigma = np.sqrt(mu * (1 - p))  # desviación estándar

# Valor objetivo
target = 4

# Calcular k
k = (target - mu) / sigma

# x (números de éxitos)
x = np.arange(0, 5)

# Probabilidad para cada valor x
probabilidades = binom.pmf(x, n, p)

# Graficar
plt.bar(x, probabilidades, align='center', alpha=0.5)
plt.title('Distribución Binomial')
plt.xlabel('Número de Éxitos')
plt.ylabel('Probabilidad')

# Graficar línea vertical en k*sigma desde la media
k_sigma_value = mu + k * sigma
plt.axvline(x=k_sigma_value, color='red', linestyle='--', label=f'μ + k*σ, (k={k:.2f})')
plt.legend()
plt.show()

# Imprimir valores
print(f"El valor de σ es: {sigma}")
print(f"El valor de k es: {k}")
print(f"El valor de kσ es: {k*sigma}")
print(f"El valor de μ + kσ es: {mu + k*sigma}")
```
\
Como se puede notar, existe una pequeña barrera entre la comunicación visual desarrollada en Manim y una construcción gráfica a través de Python; por un lado se puede obtener un video y
por el otro una simple imagen. Sin embargo el objetivo explicativo se mantiene en cada caso. \ Y aunque esto no se observó en la clase muestra debido a falta de tiempo y organización, la idea es que con la corrección se logre mantener la relación visual interactiva de GeoGebra con la relación visual explicativa de Python; los alumnos podrían deducir una probabilidad mediante los conceptos y el registro de sus datos obtenidos en el applet, más aún, llevar estos datos a un programa escrito en algún lenguaje de programación. //--- PLANEACIÓN DE CLASE ---// #set page( header: text( font: "Times New Roman", style: "italic", weight: "light", size: 10pt)[], ) #align(left)[ #text( font: "Times New Roman", weight: "bold", size: 13pt)[Carta a la enseñanza] ] Hasta antes de entrar a la carrera de matemáticas, no tuve noción acerca de la creatividad que es requerida para hacer matemáticas, lo que ocasionó un deterioro en mi desempeño durante el primer semestre. Cuando comencé a dar clases, en un principio creía tener cierta responsabilidad para que mis alumnos no tuvieran el mismo problema que yo al ingresar a la licenciatura y comencé a presentar problemas que requerían cierto nivel de paciencia y creatividad. \ Después de este seminario me doy cuenta de que la responsabilidad es parte de todos, pues seas profesor o no, siempre habrá el curioso que indague acerca de lo que haces o de lo que estudias. Y hoy más que nunca, se requiere de personas que posean habilidades matemáticas, lógicas y críticas para la creación, desarrollo y sobre todo mantenimiento de las nuevas tecnologías que se presentan día con día. Son aptitudes que necesitan ser entrenadas durante años, con muchos ejercicios y evitando la mecanicidad. \ Este planteamiento nos lleva a dar sustento a que la resolución de problemas es una gran metodología para preparar a nuevas generaciones a interactuar con un mundo lleno de recursos tecnológicos. 
En particular, los problemas vistos en el seminario, pero sobre todo el tipo de análisis que se requirió para su solución (heurísticas), deberían formar parte de una estructura estricta en la educación. Esto, sumado al uso de las TICs (cuyo valor de aprendizaje ya se mencionó en el capítulo II), nos llevaría a una ventaja increíble frente a países desarrollados. \ También se debe considerar el costo que tienen las TICs, pues aunque algunas son de uso libre, la infraestructura no es del todo accesible, por lo que un posible obstáculo puede ser el gasto económico que implica el uso de estas tecnologías desde casa para cada estudiante, lo cual impulsa la inversión del gobierno en la educación y el surgimiento de nuevas becas. \ Se debe dar importancia a esta metodología y a lo que engloba; opino que, bajo una correcta dirección de ella, el crecimiento intelectual puede dar mejores resultados para futuras competencias en PISA.

#set page(
  header: text(
    font: "Times New Roman",
    style: "italic",
    weight: "light",
    size: 10pt)[],
)

#align(left)[
  #text(
    font: "Times New Roman",
    weight: "bold",
    size: 13pt)[Autoevaluación]
]

Creo que mi desempeño en el seminario fue 'aceptable'; el tiempo que le dediqué fue suficiente para poder entregar la mayoría de los trabajos y participar durante clase, pero me habría gustado dar un poco más de esfuerzo. Sin lugar a dudas, mi percepción como aprendiz y también como expositor ha cambiado; en gran parte fue gracias a los errores que cometí y a la retroalimentación recibida por los profesores.

#pagebreak()

//--- REFERENCIAS ---//
#set page(
  header: text(
    font: "Times New Roman",
    style: "italic",
    weight: "light",
    size: 10pt)[],
)

#bibliography(title: "Referencias", full: true, "bibli.bib", style:"apa")
https://github.com/lxl66566/my-college-files
https://raw.githubusercontent.com/lxl66566/my-college-files/main/信息科学与工程学院/算法导论/图.typ
typst
The Unlicense
#import "../template.typ": * #show: project.with( title: "5", authors: ( "absolutex", ) ) = 图 == 完成图的邻接矩阵表示方法,以及图的深度优先搜索与广度优先搜索算法。要求报告给出简单的实验思路(程序的运行方法、简单的分析)、代码与运行结果图 #include_code("算法导论/code/graph.py") 邻接矩阵: #table( columns: 7, rows: 7, [\\],[*0*],[*1*],[*2*],[*3*],[*4*],[*5*], [*0*],[0],[1],[1],[0],[0],[1], [*1*],[1],[0],[0],[1],[0],[0], [*2*],[1],[0],[0],[0],[1],[0], [*3*],[0],[1],[0],[0],[1],[1], [*4*],[0],[0],[1],[1],[0],[0], [*5*],[1],[0],[0],[1],[0],[0], ) #figure( image("static/graph-1.png", width:80%), caption: "dfs" ) #figure( image("static/graph-2.png", width:80%), caption: "bfs" )
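上面的实验通过 `#include_code` 引入了 `graph.py`,该文件内容未在此处展示。下面给出一个与报告中邻接矩阵一致的最小示意实现(仅为假设的草稿,并非原文件内容):

```python
from collections import deque

# 邻接矩阵(与报告中的表格一致)
ADJ = [
    [0, 1, 1, 0, 0, 1],
    [1, 0, 0, 1, 0, 0],
    [1, 0, 0, 0, 1, 0],
    [0, 1, 0, 0, 1, 1],
    [0, 0, 1, 1, 0, 0],
    [1, 0, 0, 1, 0, 0],
]

def dfs(adj, start):
    """深度优先搜索,返回访问顺序。"""
    visited, order = set(), []
    def visit(u):
        visited.add(u)
        order.append(u)
        for v, connected in enumerate(adj[u]):
            if connected and v not in visited:
                visit(v)
    visit(start)
    return order

def bfs(adj, start):
    """广度优先搜索,返回访问顺序。"""
    visited, order, queue = {start}, [], deque([start])
    while queue:
        u = queue.popleft()
        order.append(u)
        for v, connected in enumerate(adj[u]):
            if connected and v not in visited:
                visited.add(v)
                queue.append(v)
    return order

if __name__ == "__main__":
    print("DFS:", dfs(ADJ, 0))
    print("BFS:", bfs(ADJ, 0))
```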
https://github.com/MrToWy/Bachelorarbeit
https://raw.githubusercontent.com/MrToWy/Bachelorarbeit/master/Template/template.typ
typst
#import "customFunctions.typ": * #show par: it => [#it <meta:content>] #let project( title: "", subtitle: "", author: "", author_email: "", matrikelnummer: 0, prof: none, second_prof: none, date: none, glossaryColumns: 1, enableGlossary: false, enableAbbildungsverzeichnis: false, body, ) = { // Set the document's basic properties. set document(author: author, title: title) set page("a4") let topMargin = 3.75cm set page(margin: (inside: 3.5cm, outside: 2cm, y: topMargin)) //set page(margin: (inside: 2.75cm, outside: 2.75cm, y: 1.75cm)) set par(justify: true) show table : set table.cell(align: left) set text(font: "Arial", lang: "de", size: 12pt, hyphenate: false) // replaced this font: New Computer Modern show math.equation: set text(weight: 400) // heading size show heading.where( level: 1 ): it => pad(bottom: 1em)[ #set text(2em) #it ] // heading size show heading.where( level: 2 ): it => pad(bottom: 0.4em, top: 0.4em)[ #set text(1.3em) #it ] // heading size show heading.where( level: 3 ): it => pad(bottom: 0.4em, top: 0.4em)[ #set text(1.25em) #it ] // heading size show heading.where( level: 9 ): it => pad(rest: 0em, bottom: -1.45em)[ #it ] show heading.where(level: 1): set heading(supplement: [Kapitel]) show heading.where(level: 2): set heading(supplement: [Abschnitt]) show heading.where(level: 3): set heading(supplement: [Unterabschnitt]) show heading.where(level: 9): set heading(supplement: []) show figure.where(kind: "code"): it => { if "label" in it.fields() { state("codly-label").update((_) => it.label) it state("codly-label").update((_) => none) } else { it } } show: codly-init.with() show figure: set block(breakable: true); codly( zebra-fill: white, breakable: true, reference-sep: ", Zeile ", default-color: rgb("#7d7d7d") ) // Title page. 
v(0.6fr) align(left, image("Wortmarke.svg", width: 26%)) v(1.6fr) text(2em, weight: 700, title) v(1.2em, weak: true) text(author) v(1.2em, weak: true) text(subtitle) v(1.2em, weak: true) text(1.1em, date) align(right, image("Logo.svg", width: 26%)) pagebreak() pagebreak() // Author grid( columns: (1fr, 4fr), rows: (auto), row-gutter: 3em, gutter: 13pt, text("Autor:", weight: "bold"), [#author\ #link("mailto:" + author_email)\ Matrikelnummer: #matrikelnummer ], text("Erstprüfer:", weight: "bold"), prof, text("Zweitprüfer:", weight: "bold"), second_prof, ) align(bottom)[ #align(center, text("Selbständigkeitserklärung", weight: "bold")) Hiermit erkläre ich, dass ich die eingereichte Bachelorarbeit selbständig und ohne fremde Hilfe verfasst, andere als die von mir angegebenen Quellen und Hilfsmittel nicht benutzt und die den benutzten Werken wörtlich oder inhaltlich entnommenen Stellen als solche kenntlich gemacht habe. #v(5.2em, weak: true) #grid( columns: (auto, 4fr), gutter: 13pt, [Hannover, den #date], align(right)[Unterschrift], ) ] pagebreak() // Table of contents. 
show outline.entry.where( level: 1 ): it => { if(it.element.has("level")){ v(2em, weak: true) strong(it) } else{ v(1.2em, weak: true) it } } outline(depth: 3, indent: true) pagebreak() if(enableAbbildungsverzeichnis){ // table of figures set page(numbering: "I") counter(page).update(1) { show heading: none heading[Abbildungsverzeichnis] } outline( title: [Abbildungsverzeichnis], target: figure, indent: true ) pagebreak() } // glossary if(enableGlossary){ show figure.where(kind: "jkrb_glossary"): it => {emph(it.body)} [ = Glossar <Glossary> #columns(glossaryColumns)[ #make-glossary(glossary-pool) ] ] } // header import "@preview/hydra:0.3.0": hydra set page(header: locate(loc => { // dont print anything when the first element on the page is a level 1 heading let chapter = hydra(1, loc: loc, top-margin: topMargin) //chapter = getCurrentHeadingHydra(loc, topMargin) if(chapter == none){ return } if calc.even(loc.page()) { align(left, smallcaps(getCurrentHeadingHydra(loc, topLevel: true, topMargin))) } else{ align(right, emph(getCurrentHeadingHydra(loc, topMargin))) } smallLine })) // footer set page(footer: locate( loc => if calc.even(loc.page()) { smallLine align(left, counter(page).display("1")); } else { smallLine align(right, counter(page).display("1")); } )) // ensure, that a level 1 heading always starts on an empty page show heading.where(level:1) : it => { pagebreak(weak:true, to: "even"); it} // Main body. set page(numbering: "1", number-align: center) counter(page).update(1) set heading( numbering: "1.1." ) body set page(header: none) // bibliography bibliography(("../sources.bib", "../sources.yaml"), style: "institute-of-electrical-and-electronics-engineers", title: "Literaturverzeichnis") pagebreak() hide("white page") //todo_outline }
https://github.com/takotori/PhAI-Spick
https://raw.githubusercontent.com/takotori/PhAI-Spick/main/sections/aufgaben.typ
typst
#import "../utils.typ" : *

= Beispiel Aufgaben

== Allgemein

=== Zahlen wissenschaftlich korrekt darstellen
#colgreen("Mit expliziter Angabe des Messfehlers") und #colorange("ohne"):
$ 2.521162 plus.minus 0.531 &= colgreen(2.52 plus.minus 0.53) &= colorange(2.5) \
161261 plus.minus 10000 &= colgreen(1.61 dot 10^5 plus.minus 0.10 dot 10^5) &= colorange(1.61 dot 10^5) \
613.627 plus.minus 1.4 &= colgreen(6.136 dot 10^2 plus.minus 0.014 dot 10^2) &= colorange(6.136 dot 10^2) \
1610.12 plus.minus 17 &= colgreen(1.610 dot 10^3 plus.minus 0.017 dot 10^3) &= colorange(1.610 dot 10^3) \
16.1612 plus.minus 8.7 &= colgreen(1.62 dot 10^1 plus.minus 0.87 dot 10^1) &= colorange(1.6 dot 10^1) \
870261 plus.minus 10125 &= colgreen(8.70 dot 10^5 plus.minus 0.10 dot 10^5) &= colorange(8.70 dot 10^5) \
870261 plus.minus 40125 &= colgreen(8.70 dot 10^5 plus.minus 0.40 dot 10^5) &= colorange(8.7 dot 10^5) $

== Kinematik

=== Zeit zwischen zwei Punkten
Berechnen Sie die Zeit, die ein Trolleybus für die Strecke von $600 m$ zwischen zwei Haltestellen benötigt, wenn die Anfahrbeschleunigung $1 m/s^2$, die Bremsverzögerung $0.75 m/s^2$ und die Geschwindigkeit während der gleichförmigen Bewegung $54 (k m)/h$ beträgt.

*Lösungsweg:*
1. Skizze erstellen und in Zonen aufteilen
#align(center, image("../figures/aufgaben4.png"))
2. Gegebene Werte notieren
#table(
  columns: (1fr, 1fr, 1fr, 1fr),
  inset: 4pt,
  fill: (_, row) => if row == 0 { gray },
  [Zone 0], [Zone 1], [Zone 2], [Zone 3],
  $ a_01=1 m/s^2 $, $ a_12=0 m/s^2 $, $ a_23=-0.75 m/s^2 $, $ a_32=0 m/s^2 $,
  $ r_0=0m $, $ r_1=? $, $ r_2=? $, $ r_3=600m $,
  $ v_0=0 m/s $, $ v_1 &=54 "km"/h \ &=15 m/s $, $ v_2 &=54 "km"/h \ &=15 m/s $, $ v_3=0 m/s $,
  $ t_0=0 s $, $ t_1=? $, $ t_2=? $, $ t_3=? $
)
#colbreak()
3. Fehlende Werte für Zone 1 berechnen, indem man Formeln umstellt
$ v_1&=v_0+a_01 dot t_1 \
t_1&=(v_1-v_0)/a_01 =(15 m/s-0 m/s)/(1 m/s^2 )=15s \
r_1&=r_0+v_0 dot t_1+1/2 dot a_01 dot t_1^2 \
&=0m+0 m/s dot 15 s+1/2 dot 1 m/s^2 dot (15 s)^2=112.5m $
4.
Nun müssen wir $t_3$ berechnen, um damit dann $r_2$ berechnen zu können $ t_3&=(0 m/s-15 m/s)/(-0.75 m/s^2 )=20s \ r_3&=r_2+v_2 dot t_3+1/2 dot a_23 dot t_3^2 \ r_2&=r_3-(v_2 dot t_3 )-(1/2 dot a_23 dot t_3^2 ) \ r_2&=600m-(15 m/s dot 20 s)-(1/2 dot -0.75 m/s^2 dot (20 s)^2 ) \ &=450m $ 5. $t_2$ berechnen $ r_2&=r_1+v_1 dot t_2 colred(cancel(+ 1/2 dot a_12 dot t_2^2) "(fällt weg da" a_12=0")") \ t_2& =(r_2-r_1)/v_1 =(450m-112.5m)/(15 m/s)=22.5s $ 6. $t_"Total"$ berechnen $ t_"Total" &=t_0+t_1+t_2+t_3 \ &=0s+15s+22.5s+20s=57.5 s $ === Fussgänger und Radfahrer Ein Radfahrer und ein Fussgänger bewegen sich gleichzeitig von A nach B, wobei der eine stündlich $5 "km"$ und der andere $15 "km"$ zurücklegt. Der Radfahrer hält sich eine Stunde in B auf und trifft auf dem Rückweg den Fussgänger $30 "km"$ von B entfernt. Wie gross ist die Distanz zwischen A und B? *Lösungsweg:* 1. s-t-Diagramm erstellen #align(center, image("../figures/aufgaben5.png", width: 75%)) 2. Formel für Zeit des Fussgängers, $t_F$ aufstellen. $ t_F = frac(s_"AB" - Delta s, v_F) $ 3. Formel für Zeit des Radfahrers, $t_R$ aufstellen. \ ($Delta t$ = Pause 1h) $ t_R = frac(s_"AB" + Delta s, v_R) + Delta t $ #colbreak() 4. Da gemäss s-t-Diagramm beide Zeiten gleich sind, können wir die beiden Gleichungen gleichsetzen und nach $s_"AB"$ umstellen. $ t_F &= t_R \ frac(s_"AB" - Delta s, v_F) &= frac(s_"AB" + Delta s, v_R) + Delta t \ s_"AB" / v_F - s_"AB" / v_R &= Delta s / v_F + Delta s / v_R + Delta t \ s_"AB" (1/v_F - 1/v_R) &= Delta s (1/v_F + 1/v_R) + Delta t \ s_"AB" &= frac(Delta s (1/v_F + 1/v_R) + Delta t, 1/v_F - 1/v_R) \ s_"AB" &= frac(30 "km" (frac(1, 5 "km" h^"-1") + frac(1, 15 "km" h^"-1")) + 1h, frac(1, 5 "km" h^"-1") - frac(1, 15 "km" h^"-1")) \ &= 67.5 "km" $ === <NAME> #grid( columns: (auto, 30%), gutter: 5pt, [Ein Ball wird unter einem Winkel von $20^circle.small$ (notwendig) schräg nach unten geworfen (12m nach rechts und 7.5m nach unten).
Mit welcher Anfangsgeschwindigkeit wurde der Ball geworfen? *Lösungsweg:* 1. Richtungsgleichung aufstellen ], image("../figures/aufgaben6.png") ) $ arrow(r) = arrow(r_0) + arrow(v_0) dot t + 1/2 dot arrow(a) dot t^2 $ 2. Anfangsbedingungen festlegen beim Werfer $ arrow(v_0) &= v_0 dot vec(cos(-20^circle.small), sin(-20^circle.small)) \ arrow(r_0) &= vec(0, 0), arrow(r_1) = vec(12, -7.5) $ 3. Bedingungen in die Richtungsgleichung einsetzen $ vec(12, -7.5) = vec(0, 0) + v_0 dot vec(cos(-20^circle.small), sin(-20^circle.small)) dot t \ + 1/2 dot vec(0, -9.81) dot t^2 $ 4. X-Gleichung nach $v_0 dot t = s_1$ auflösen $ 12 &= v_0 dot cos(-20^circle.small) dot t <=> v_0 dot t = 12 / cos(-20^circle.small) \ &= 12.77 $ 5. Y-Gleichung nach $t$ auflösen $ -7.5 &= v_0 dot sin(-20^circle.small) dot t + 1/2 dot -9.81 dot t^2 \ <=> t &= sqrt(frac(2 dot (7.5 + v_0 dot t dot sin(-20^circle.small)), 9.81)) = 0.8 $ 6. Geschwindigkeit berechnen $ s_1 / t = v_0 = 12.77 / 0.8 = 16 m/s $ === Radialbeschleunigung Ein Riesenrad hat eine Umlaufdauer von $12 s$. Wie gross sind Geschwindigkeit und Radialbeschleunigung einer Person im Abstand von $5.6 m$ von der Drehachse? *Lösungsweg:* 1. Tangentialgeschwindigkeit $v_T$ berechnen $ v_T &= (Delta s) / (Delta t) &= frac(2 pi dot r, T) &= frac(2 pi dot 5.6 m, 12 s) &= 2.9 m s^"-1" $ 2. Radialbeschleunigung $a_r$ berechnen aus $v_T$ $ a_r &= (v_T^2) / r &= frac(4 pi^2 dot r, T^2) &= frac(4 pi^2 dot 5.6m, (12 s)^2) &= 1.5 m s^"-2" $ === Aufprallgeschwindigkeit Aus welcher Höhe muss ein Mann herunterspringen, um den gleichen Aufprall zu erleben wie ein landender Fallschirmspringer, dessen Sinkgeschwindigkeit $6 m/s$ beträgt? $ h = frac(v^2, 2 g) = frac(6^2, 2 dot 9.81) = 1.84 m $ == Kräfte === Komplexe Kräfteaufgabe An der Spitze eines $h = 8 m$ hohen Mastes üben die befestigten Leitungen die Zugkräfte $F_1 = 4800 N, F_2 = 1200 N "und" F_3 = 2700 N$ aus. Der Winkel $alpha = 40^circle.small$ und $beta = 30^circle.small$.
In welcher Richtung $gamma$ muss ein $l = 9.6 m$ langes schräges Drahtseil verankert werden, damit an der Mastspitze keine horizontale Kraft wirksam wird? Wie gross ist die Zugkraft $F$ im Seil? #align(center, image("../figures/aufgaben1.png")) *Lösungsweg:* Aus dem Seitenriss geht hervor, dass die Kraft $F$ in eine horizontale (xy-Ebene) Komponente $F_"xy" = F cos delta$ und in eine vertikale (z-Richtung) Komponente $F_z = F sin delta$ zerlegt werden kann. Somit: #align(center, image("../figures/aufgaben2.png")) 1. Gleichgewicht in x-Richtung: $ - F_1 + F_2 cos(alpha) + F_3 cos(beta) + F cos(delta) cos(gamma) = 0 $ 2. Gleichgewicht in y-Richtung: $ F_2 sin(alpha) - F_3 sin(beta) + F cos(delta) sin (gamma) = 0 $ 3. Gleichgewicht in z-Richtung: $ F_M - F sin(delta) = 0 $ 4. Daraus folgt aus X-Gleichung: $ F cos(delta) cos(gamma) = colmagenta([F_1 - F_2 cos(alpha) - F_3 cos(beta)]) $ 5. Daraus folgt aus Y-Gleichung: $ F cos(delta) sin(gamma) = colblue({- F_2 sin(alpha) + F_3 sin(beta)}) $ 6. Da $F^2 cos^2(delta) (sin^2(gamma) + cos^2(gamma)) = F^2 cos^2(delta)$ können wir die 2 Gleichungen quadrieren und zusammenzählen: $ F^2 cos^2(delta) = colmagenta([ ])^2 + colblue({ })^2 $ 7. Da $sin(delta) = h/l$ erhalten wir für den Cosinus: $ cos^2(delta) = 1 - h^2/l^2 $ 8. Die gesuchte Seilkraft $F$ ist somit: $ F = sqrt((colmagenta([ ])^2 + colblue({ })^2) / (1 - h^2/l^2)) = 3056 N $ 9. Winkel $gamma$ erhalten wir als Quotient von Y- und X-Gleichung: $ gamma = arctan(colblue({ }) / colmagenta([ ])) = 17.3^circle.small $ #colbreak() === 2. Newtonsches Gesetz (Kräfte in Bewegung) #grid( columns: (auto, 45%), gutter: 5pt, [Ein Körper A der Masse $1 "kg"$ wird mit Hilfe eines masselosen Seils und einer masselosen, reibungsfreien Umlenkrolle durch einen Körper B der Masse $1.5 "kg"$ auf einer horizontalen Ebene gezogen. Der Gleitreibungskoeffizient zwischen dem Körper A und der Ebene beträgt $0.5$.
], image("../figures/aufgaben3.png") ) Mit welcher Beschleunigung bewegen sich die beiden Körper und wie gross ist die Kraft im Seil? *Lösungsweg:* 1. Seilkraft für A und B bestimmen \ (#colred([Umlenkrolle lenkt $hat(r)$ um])) $ &F_A=m_a dot a=F_S dot (vec(1,0))+F_R dot (vec(-1,0)) \ <=> &F_S=m_a dot a+μ dot F_N=m_a dot a+μ dot m_a dot g $ #v(10pt) $ &F_B=m_b dot a=F_G dot (vec(1,0))+F_S dot (vec(-1,0)) \ <=> &F_S=-m_b dot a+F_G=-m_b dot a+m_b dot g $ 2. Formeln gleichsetzen und nach $a$ umstellen: $ & m_a dot a+μ dot m_a dot g=-m_b dot a+m_b dot g \ <=> &a=frac(m_b dot g-μ dot m_a dot g, m_a+m_b) \ <=> &a = frac(1.5 dot 9.81-0.5 dot 1 dot 9.81, 1+1.5) = 3.92 m/s^2 $ 3. Beschleunigung in eine der Formeln einsetzen: $ F_S &= m_a dot a+μ dot m_a dot g \ &= 1 dot 3.92+0.5 dot 1 dot 9.81=8.83 N $ === Bewegung zweier Körper Wir betrachten zwei Wagen mit den Massen $m_1 = 150 g$ und $m_2 = 100 g$, die sich reibungslos bewegen können. Zwischen den Wagen befindet sich eine Feder mit einer ungespannten Länge von $l_0 = 10 "cm"$ und einer Federkonstante von $k = 100 N m^(-1)$, die zunächst auf eine Länge von $l = 5.0 "cm"$ zusammengestaucht wird. Bei $t = 0$ werden die Wagen losgelassen und fangen an zu beschleunigen. Die Feder ist an den Wagen befestigt, so dass sich die Wagen nicht beliebig voneinander entfernen können. $=>$ Wagen pendeln hin und her *Bewegungsgleichung:* $ m_1 dot.double(x_1) = k dot (x_2 -x_1 - l_0) \ m_2 dot.double(x_2) = -k dot (x_2 - x_1 - l_0) $ *Funktion der Zeit:* $ Delta x &= x_2 - x_1 = x_R + l_0 \ &= A cos(sqrt(k/m_1 +k/m_2) t) + B sin(sqrt(k/m_1 + k/m_2) t) + l_0 $ 1. Mittels Ableitung der Funktion der Zeit Startwerte bestimmen: $ Delta x (0) = A + l_0 => A = Delta x(0) - l_0 = 5 "cm" - 10 "cm" = -5 "cm" \ dot(Delta x) (0) = B sqrt(k/m_1 + k/m_2) = 0 => B = 0 $ 2.
A und B in Funktion der Zeit einsetzen: $ Delta x &= -5 "cm" cos(sqrt(100/0.15 + 100/0.1) dot t) + 10 "cm" \ &= -5 "cm" cos((40.8 dot t )/ 1s) + 10 "cm" \ T &= (2 pi) / 40.8 = 0.154 s $ #colbreak() == Energie === Ballwurf mit Energieerhaltung Ein Kind will einen Ball über eine $2m$ von ihm entfernte Mauer werfen. Die dazu minimal erforderliche Wurfhöhe ist $10m$. Welches ist der minimal erforderliche Betrag der Geschwindigkeit, mit der der Junge den Ball abwerfen muss? *Lösungsweg:* 1. In y-Richtung ($y = 10m$) gilt dank Energieerhaltung: $ 1/2 m v_y^2 = m g y \ v_y = sqrt(2 g y) $ 2. Die Flugzeit, bis die Geschwindigkeit in y-Richtung 0 ist, ist: $ v_y &= g t "(Da freier Fall)"\ t &= v_y/g = sqrt(2 y/g) $ 3. In dieser Zeit muss der Ball die Distanz x ($x = 2m$) zurücklegen: $ x = v_x t \ v_x = x/t = sqrt((g x^2) / (2 y)) $ 4. Die gesuchte Geschwindigkeit ist: $ v &= sqrt(v_x^2 + v_y^2) = sqrt((g x^2) / (2 y) + 2 g y) \ &= sqrt((9.81 m \/ s^2 dot 2^2 m^2) / (2 dot 10m) + 2 dot 9.81 m \/ s^2 dot 10m) $ $ = sqrt(198 m^2 \/ s^2)= 14 m \/ s $ === Fall auf eine Feder Eine Masse von $12 "kg"$ fällt aus $70 "cm"$ Höhe auf eine gefederte Unterlage, deren Federkonstante $4000 N/m$ beträgt. Wie viel wird die Feder beim Aufprall zusammengedrückt? *Lösungsweg:* 1. Energieerhaltungssatz anwenden und Gleichungssystem aufstellen $ E_"pot" = E_"Feder" ==> m g (h +s) = (c s^2) / 2 $ 2. Dies ergibt die quadratische Gleichung: $ c/2 s^2 - m g s - m g h = 0 $ 3. Die Lösung der quadratischen Gleichung ist: $ s &= (m g) / c + sqrt(((m g) / c)^2 + 2 (m g h) / c) \ &= (12 dot 9.81) / 4000 + sqrt(((12 dot 9.81) / 4000)^2 + 2 (12 dot 9.81 dot 0.7) / 4000) \ &= 0.2345 m = 23.5 "cm" $ == Arbeit / Leistung === Leistung einer Lokomotive Welche Arbeit (in kWh) leistet eine Lokomotive, die einen Zug von Flüelen nach Göschenen zieht? Die totale Masse des Zuges beträgt $400 t = 400^'000 "kg"$, die Strecke $37 "km" = 37^' 000m$, die Höhendifferenz $670 m$ und der Rollreibungskoeffizient $0.002$.
Der Luftwiderstand werde vernachlässigt. *Lösungsweg:* 1. Winkel $alpha$ bestimmen mittels Trigonometrie $ sin(alpha)=(670m)/(37^' 000m) <=> alpha=1.04^circle.small $ 2. Energieerhaltungssatz aufstellen $ E=1/2 dot m dot v^2+m dot g dot h+(m dot g dot s dot cos(alpha) ) dot μ $ 3. EES Einsetzen am Anfang ($colgreen(v=0), colmagenta(h=0), colorange(s=0)$) $ E_0&=1/2 dot m dot colgreen(0)^2+m dot g dot colmagenta(0)+(m dot g dot colorange(0) dot cos(alpha) ) dot μ \ &= 0 $ 4. EES Einsetzen am Ende ($colgreen(v=0), colmagenta(h=670m), colorange(s=37^'000m)$) $ E_1 &= 1/2 dot m dot colgreen(0)^2+m dot g dot colmagenta(670)+(m dot g dot colorange(37^'000) dot cos(alpha) ) dot μ \ &=2'919'408'166J $ 5. Arbeit $W$ berechnen und in kWh umrechnen $ W=E_1-E_0=(2'919'408'166J - 0J) / (3600s dot 1000) =810.97 "kWh" $ === Leistung eines Autos Ein Auto braucht bei der Geschwindigkeit $80 "km"/h$ auf $100 "km"$ 8 Liter Benzin. Wie gross ist der Fahrwiderstand (Rollreibung + Luftwiderstand), wenn der Wirkungsgrad des Motors 20 % beträgt? Das Benzin hat eine Dichte von $700 "kg"/m^3$ und einen Heizwert von $42 "MJ"/"kg"$. *Lösungsweg:* $ P_"ab" = F dot v, P_"zu" = rho.alt dot (d V) / (d t) "(Benzinvolumen)" dot H "(Heizwert)" \ P_"zu" = rho.alt dot (d V) / (d s) "(Literverbrauch)" dot (d s) / (d t) dot H = rho.alt dot (d V) / (d s) dot v dot H $ $ F &= P_"ab" / v = frac(eta dot P_"zu", v) = eta dot rho.alt dot (d V)/(d s) dot H \ F &= 0.2 dot 700 dot (8 dot 10^(-3) / 10^5 dot 42 dot 10^6) = 470 N $ === Pumpleistung Welche Wassermenge pro Zeiteinheit fördert eine 4-kW-Pumpe in ein $45 m$ höher liegendes Reservoir? *Lösungsweg:* Die an einem infinitesimalen Massen-Element $d m$ geleistete Arbeit $d W$ ist gleich seiner Zunahme an potentieller Energie, also $d W = d m g h$. Somit ist die Leistung $P$ der Pumpe, also die Arbeit pro Zeiteinheit: $ P = (d W)/(d t) = (d m dot g dot h)/(d t) = rho.alt frac(d V, d t) dot g dot h $ 1.
Formel nach $(d V )/ (d t) = dot(V)$ auflösen $ (d V )/ (d t) &= P / (rho.alt g h) = (4'000 W) / (1'000 "kg"/m^3 dot 9.81 m/s^2 dot 45 m) = 9.061 dot 10^(-3) m^3/s \ dot(V) &= 9.06 "l/s" $ == Kosmologie === Satellit ==== Höhe eines geostationären Satelliten In welcher Höhe muss ein Satellit auf einer Kreisbahn laufen, wenn er geostationär sein soll? #grid( columns: (75%, auto), grid( columns: (auto, auto), inset: 2pt, [Erdradius:], $r_E=6.378 dot 10^6 m$, [Erdmasse:], $M_E=5.98 dot 10^24 "kg"$, [Gravitation:], $G = 6.67 dot 10^(-11) m^3 "kg"^(-1) s^(-2)$ ), image("../figures/aufgaben7.png") ) *Lösungsweg:* 1. Ist die Geschwindigkeit des Satelliten konstant? _Ja, da die Erde mit der Zeit nicht schneller oder langsamer wird $=> v="const" => dot(v) = 0$ \ Hinterer Teil der Kreisbewegung kann ignoriert werden_ 2. Abhängig von Unbekannten Zentripetalkraft-Formel anwenden $ F_"äussere"=m dot omega^2 dot r dot hat(r) + m dot 0 dot hat(v) = m dot omega^2 dot r dot hat(r) $ 3. Umlaufzeit der Erde berechnen $ T=24 dot 3600s=86^'400s $ 4. Formel der Kräfte bestimmen: Satellit muss auf der Bahn bleiben, heisst $ arrow(F_G)-arrow(F_"äussere")&=arrow(0) \ G dot (M_E dot m)/r^2 dot hat(r) &= m dot omega^2 dot r dot hat(r) $ 5. Formel kürzen, einsetzen und nach r auflösen $ G dot M_E/r^3 =omega^2 => G dot M_E/r^3 =(2^2 dot pi^2)/T^2 \ r=root(3, (G dot M_E dot T^2)/(4 pi^2 )) $ 6. Erdradius vom Radius abziehen, um die Höhe zu bekommen $ h=r-r_E => root(3,((G dot M_E dot T^2)/(4pi^2 )))-r_E approx 35800 "km" $ ==== Kinetische Energie des Satelliten Welche kinetische Energie hat der Satellit? *Lösungsweg:* 1. $E_"kin"$ aufstellen $ E_"kin"=m/2 dot v^2 $ 2. v bestimmen $ v=(2 dot pi dot r)/T $ 3. Zahlen in $E_"kin"$-Formel einsetzen ==== Flughöhe des Satelliten bei 2 Umläufen pro Tag Welche Flughöhe muss der Satellit haben, wenn er die Erde zweimal pro Tag umrundet? *Lösungsweg:* 1. Formel nach r auflösen $G dot M/r^3 = omega^2 <=> r=root(3, (G dot M)/omega^2)$ 2.
Winkelgeschwindigkeit berechnen (Ein Tag = 86’400s) $omega=(2 dot pi)/T=(2 dot pi)/(43'200s)$ 3. r berechnen und Radius der Erde abziehen $h=r-r_E$ == Fadenpendel === Winkel eines Fadenpendels Unter welchem Winkel muss ein Fadenpendel losgelassen werden, wenn die maximale Beanspruchung des Fadens gerade doppelt so gross werden soll wie die beim ruhenden Pendel? (Bild siehe Abschnitt Fadenpendel) *Lösungsweg:* 1. Resultierende Kraft ermitteln (Zentripetalkraft) \ $=>$ Bewegungsgleichung $ F_"res" = F - F_G = (m v^2) / l $ 2. Energieerhaltungssatz anwenden \ (potentielle Energie = kinetische Energie) $ (m v^2) / 2 &= m g h = m g l (1 - cos(phi)) \ "(Mal 2, durch l)" ==> (m v^2) / l &= 2 m g (1 - cos(phi)) $ 3. Dies in Bewegungsgleichung einsetzen $ F - F_G = 2 m g (1 - cos(phi)) $ 4. Auflösen nach $cos(phi)$ $ cos(phi) &= 1 - (F - F_G) / (2 F_G) = frac(2 F_G - F + F_G, 2 F_G) = frac(3 F_G - F, 2 F_G) \ &= frac(3 - F / F_G, 2) $ 5. Verhältnis von $F$ zu $F_G$ ist 2, also $ F : F_G = F / F_G = 2$ $ cos(phi) = frac(3 - 2, 2) = 0.5 => phi = 60^circle.small $
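A small Python sketch (illustrative only, not part of the original notes) that re-evaluates three of the final results derived above: the total trolleybus travel time, the distance between A and B, and the pendulum release angle. All numbers are taken directly from the worked solutions.

```python
import math

# Trolleybus: accelerate at 1 m/s^2 to 15 m/s, cruise, brake at 0.75 m/s^2, 600 m total
v, a_acc, a_brk, s_tot = 15.0, 1.0, 0.75, 600.0
t1 = v / a_acc                       # 15 s acceleration phase
s1 = 0.5 * a_acc * t1 ** 2           # 112.5 m covered while accelerating
t3 = v / a_brk                       # 20 s braking phase
s3 = v * t3 - 0.5 * a_brk * t3 ** 2  # 150 m covered while braking
t2 = (s_tot - s1 - s3) / v           # 22.5 s of uniform motion
t_total = t1 + t2 + t3               # 57.5 s in total

# Pedestrian and cyclist: s_AB from the derived closed-form expression
v_f, v_r, ds, dt = 5.0, 15.0, 30.0, 1.0
s_ab = (ds * (1 / v_f + 1 / v_r) + dt) / (1 / v_f - 1 / v_r)  # 67.5 km

# Pendulum: release angle for doubled string tension, cos(phi) = (3 - F/F_G) / 2
phi = math.degrees(math.acos((3 - 2) / 2))  # 60 degrees
```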
https://github.com/lkndl/typst-bioinfo-thesis
https://raw.githubusercontent.com/lkndl/typst-bioinfo-thesis/main/modules/footers.typ
typst
#let get-pagination(pagination-align) = { assert(pagination-align in ("outside", "inside", left, right, center)) if pagination-align in ("outside", "inside") { let (a, b) = if pagination-align == "outside" { (left, right) } else { (right, left) } locate(loc => align( if calc.even(loc.page()) { a } else { b }, [#counter(page).display(loc.page-numbering())])) } else { locate(loc => align(pagination-align, [#counter(page).display(loc.page-numbering())])) } }
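The even/odd branching in `get-pagination` can be mirrored outside Typst; here is a hedged Python sketch of the same decision logic (purely illustrative, not part of the package): "outside" places the page number on the left of even (verso) pages and on the right of odd (recto) pages, "inside" is the mirror image, and the plain alignments pass through unchanged.

```python
def pagination_align(setting, page):
    """Return the alignment used for the page number on a given page."""
    if setting not in ("outside", "inside", "left", "right", "center"):
        raise ValueError(f"unsupported alignment: {setting}")
    if setting in ("outside", "inside"):
        # same (a, b) swap as in the Typst code above
        a, b = ("left", "right") if setting == "outside" else ("right", "left")
        return a if page % 2 == 0 else b
    return setting
```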
https://github.com/kdog3682/2024-typst
https://raw.githubusercontent.com/kdog3682/2024-typst/main/src/my.typ
typst
#import "base-utils.typ": * #import "attrs.typ" #import "cetz-setup.typ": * #let blue = colored.with(fill: _blue) #let red = colored.with(fill: _red) #let green = colored.with(fill: _green) #let frac(a, b) = $display(_frac(#resolve-content(a), #resolve-content(b)))$ #let equal = wrap([=], h(4pt)) #let dot = wrap(sym.circle.filled.tiny, h(3pt)) #let plus = wrap(sym.plus, h(3pt)) #let times = wrap(sym.times, h(3pt)) #let arrow = { h(6pt) canvas(inline: true, { line((0, 0), (15, 0), mark: attrs.get("arrow-mark")) }) h(7pt) }
https://github.com/typst/packages
https://raw.githubusercontent.com/typst/packages/main/packages/preview/rubber-article/0.1.0/src/lib.typ
typst
Apache License 2.0
// Template based on LaTeX Article Class #let article( lang:"de", eq-numbering:none, text-size:10pt, page-numbering: "1", page-numbering-align: center, heading-numbering: "1.1 ", body) = { // Set the document's basic properties. set page( margin: (left: 25mm, right: 25mm, top: 30mm, bottom: 30mm), numbering: page-numbering, number-align: page-numbering-align, ) set text(font: "New Computer Modern", lang:lang, size: text-size) show math.equation: set text(weight: 400) set math.equation(numbering: eq-numbering) set heading(numbering: heading-numbering) set outline(indent: auto) show outline.entry.where( level: 1, ): it => { v(15pt, weak: true) text(size:11pt ,[ #strong(it.body) #box(width: 1fr, repeat[]) #strong(it.page) ])} // Referencing Figures show figure.where(kind: table): set figure(supplement:[Tab.], numbering: "1") if lang == "de" show figure.where(kind: image): set figure(supplement:[Abb.], numbering: "1",) if lang == "de" // Set Table style set table( stroke: none, gutter: auto, fill: none, inset: (right: 1.5em), ) // Configure figures (tables) show figure.where(kind: table): it => { show: pad.with(x: 23pt) set align(center) v(12.5pt, weak: true) // Display the figure's caption. if it.has("caption") { v(if it.has("gap") { it.gap } else { 17pt }, weak: true) strong(it.supplement) if it.numbering != none { [ ] strong(it.counter.display(it.numbering)) } [*: *] it.caption.body // Display the figure's body. it.body } v(15pt, weak: true) } // Configure figures (images) show figure.where(kind: image): it => { show: pad.with(x: 23pt) set align(center) v(12.5pt, weak: true) // Display the figure's body. it.body // Display the figure's caption. if it.has("caption") { v(if it.has("gap") { it.gap } else { 17pt }, weak: true) strong(it.supplement) if it.numbering != none { [ ] strong(it.counter.display(it.numbering)) } [*: *] it.caption.body } v(15pt, weak: true) } // Main body. 
set par(justify: true) body } #let maketitle( title: "", authors: (), date: none, ) = { set document(author: authors, title: title) // Author information. let authors-text = { set text(size: 1.1em) pad( top: 0.5em, bottom: 0.5em, x: 2em, grid( columns: (1fr,) * calc.min(3, authors.len()), gutter: 1em, ..authors.map(author => align(center, author)), ), )} // Title row. align(center)[ #v(60pt) #block(text(weight: 400, 18pt, title))//1.75em, title)) #v(1em, weak: true) #authors-text #v(1em, weak: true) #block(text(weight: 400, 1.1em, date)) #v(20pt) ] }
https://github.com/binhtran432k/ungrammar-docs
https://raw.githubusercontent.com/binhtran432k/ungrammar-docs/main/contents/literature-review/bun.typ
typst
#import "/components/glossary.typ": gls == Bun <sec-bun> Bun, a modern JavaScript runtime, offers a compelling platform for developing language servers due to its speed, versatility, and rich feature set. As a unified tool encompassing package management, bundling, transpilation, testing, and task running, Bun streamlines the development workflow and enhances developer productivity @bib-bun. By leveraging Bun's capabilities, developers can build high-performance and feature-rich language servers with improved development efficiency. In the following subsections, we will delve into specific aspects of Bun that make it a suitable choice for #gls("lsp") development. === Bun as a Package Manager Bun offers a high-performance and feature-rich package manager, designed to streamline dependency management for JavaScript and TypeScript projects. While compatible with npm and yarn, Bun introduces several enhancements: - *Speed*: Leveraging its core runtime optimizations, Bun's package installation and resolution are significantly faster than traditional package managers. - *Workspace support*: Bun seamlessly manages monorepo structures, allowing for efficient dependency management across multiple packages. - *Zero-config approach*: Bun often eliminates the need for complex configuration files, simplifying project setup. - *Compatibility*: Bun supports npm-style `package.json` files, ensuring compatibility with existing projects. By adopting Bun as a package manager, we can experience faster build times and improved project organization. === TypeScript Support in Bun Bun provides excellent built-in support for TypeScript, eliminating the need for additional transpilation steps. This integration streamlines the development workflow and improves performance. Key features of Bun's TypeScript support: - *Fast compilation*: Bun's optimized TypeScript compiler ensures rapid build times. - *Type checking*: Rigorous type checking helps catch errors early in the development process. 
- *JSX support*: Enables seamless development of React and other #gls("jsx")-based components. - *Interoperability*: Works seamlessly with existing TypeScript projects and tools. By utilizing Bun's TypeScript capabilities, we can enhance code reliability, maintainability, and developer experience. === Bun as a Bundler Bun's bundling capabilities are designed to optimize JavaScript applications for production deployment. By combining modules into a single file, bundling improves load times and reduces the number of #gls("http") requests. Key features of Bun's bundler: - *Fast bundling*: Leveraging Bun's core performance optimizations for efficient bundling. - *Tree-shaking*: Eliminates unused code to create smaller and faster bundles. - *Code splitting*: Divides the application into smaller chunks for lazy loading. - *Minification*: Reduces code size through minification and obfuscation. Bun's bundling features make it a suitable choice for building production-ready JavaScript applications. === Bun as a Test Runner Bun includes a built-in test runner, providing a convenient way to execute and manage tests within the same environment as the application code. This integration simplifies the testing workflow and promotes efficient test execution. Key features of Bun's test runner: - *Fast test execution*: Leveraging Bun's performance optimizations for rapid test feedback. - *Test discovery*: Automatically finds and executes test files. - *Assertion library*: Provides a comprehensive set of assertion functions for writing tests. - *Code coverage*: Generates code coverage reports to assess test effectiveness. By utilizing Bun's test runner, we can streamline the testing process and improve code quality.
https://github.com/TechnoElf/mqt-qcec-diff-thesis
https://raw.githubusercontent.com/TechnoElf/mqt-qcec-diff-thesis/main/content/state.typ
typst
#import "@preview/fletcher:0.5.1": diagram, node, edge #import "@preview/gentle-clues:0.9.0": example #import "@preview/tablex:0.0.8": tablex #import "@preview/quill:0.3.0": quantum-circuit, lstick, rstick, ctrl, targ, mqgate, meter = State of the Art There are various existing approaches to equivalence checking of quantum circuits. Two broad categories are considered here: those that use the same @dd\-based equivalence checking methodology used in this thesis and those that use entirely different methods. While @dd\-based equivalence checking is currently the most performant, it is worth noting that there are other approaches as well that may benefit from the ideas developed in this work @burgholzer2021ec. == Quantum Decision Diagram Equivalence Verification The state-of-the-art quantum circuit equivalence checking methodology verifies that two given circuits implement the same functionality by exploiting the identity relation $f^(-1)(f(x))=x$. The equivalence of two circuits $G$ and $G'$ with system matrices $U$ and $U'$ can be similarly stated as follows @burgholzer2021ec: $ G = G' &<=> U = U' \ &<=> U dot U'^(-1) = I \ &<=> U dot U'^dagger = I \ &<=> g_1 dot g_2 dot ... dot g_n dot g'^dagger_m dot ... dot g'^dagger_2 dot g'^dagger_1 = I $ The @dd\-based alternating equivalence checker uses @dd[s] to represent the intermediate state of the system matrix as the gates are applied. Once all gates have been applied to the @dd, it has the identity form if and only if the circuits $G$ and $G'$ have the same system matrix, thus implementing the same functionality. Due to the associativity of the system matrices of quantum gates, the order in which gates are applied to the @dd may be freely chosen, as long as the order within their respective circuits remains unaltered.
Efficiently checking two quantum circuits for equivalence by reducing them to the identity circuit using a @dd thus necessitates the use of an application scheme, which determines the order in which gates are to be applied to the @dd. The aim of these schemes is to determine a sequence that keeps the @dd as close to the identity as possible @burgholzer2021ec. There are a variety of existing approaches to providing a suitable oracle for quantum circuit equivalence checking based on @dd[s]. @qcec currently implements gate-cost, lookahead, one-to-one, proportional and sequential application schemes @burgholzer2021ec. The following example illustrates the functionality of each of these schemes. #pagebreak() #example(breakable: true)[ Consider the two circuits in @circuit_g that implement the same system matrix $U$. #figure( [ #grid( columns: (4fr, 1fr, 2fr), align(horizon, quantum-circuit( lstick($|q_0〉$), $H$, 1, 1, [\ ], lstick($|q_1〉$), 1, $X$, 1 )), align(horizon)[$<=>$], align(horizon)[$U = 1/sqrt(2) mat( 0, 1, 0, 1; 1, 0, 1, 0; 0, 1, 0, -1; 1, 0, -1, 0 )$] ) #grid( columns: (4fr, 1fr, 2fr), align(horizon, quantum-circuit( lstick($|q_0〉$), $"S"$, $sqrt(X)$, $"S"$, 1, [\ ], lstick($|q_1〉$), 1, 1, 1, $X$, 1 )), align(horizon)[$<=>$], align(horizon)[$U' = 1/sqrt(2) mat( 0, 1, 0, 1; 1, 0, 1, 0; 0, 1, 0, -1; 1, 0, -1, 0 )$] ) ], caption: [Quantum circuits $G$ and $G'$.] ) <circuit_g> The matrices can be trivially verified for equivalence, showing that $G$ and $G'$ do indeed implement the same functionality. This approach, however, is impractical in practice as the size of the system matrix grows exponentially with the number of qubits in the circuit. To verify the equivalence of $G$ and $G'$ using the @dd\-based alternating equivalence checking methodology, the latter must first be inverted to produce $G'^dagger$. @circuit_g_dagger visualises this circuit.
In this case $G'^dagger$ happens to have the same system matrix as $G'$, because $G'$ is symmetric, but this is not necessarily always the case. #figure( grid( columns: (4fr, 1fr, 3fr), align(horizon, quantum-circuit( lstick($|q_0〉$), 1, $"S"^dagger$, $sqrt(X)^dagger$, $"S"^dagger$, 1, [\ ], lstick($|q_1〉$), $X^dagger$, 1, 1, 1 )), align(horizon)[$<=>$], align(horizon)[$(U')^(-1) = U'^dagger = 1/sqrt(2) mat( 0, 1, 0, 1; 1, 0, 1, 0; 0, 1, 0, -1; 1, 0, -1, 0 )$] ), caption: [$G'^dagger$, the inverse of $G'$.] ) <circuit_g_dagger> These two circuits can now be concatenated to produce the identity, as demonstrated in @circuit_identity. #figure( grid( columns: (4fr, 1fr, 3fr), align(horizon, quantum-circuit( lstick($|q_0〉$), $H$, 1, 1, $"S"^dagger$, $sqrt(X)^dagger$, $"S"^dagger$, 1, [\ ], lstick($|q_1〉$), 1, $X$, $X^dagger$, 1, 1, 1 )), align(horizon)[$<=>$], align(horizon)[$U dot U'^dagger = mat( 1, 0, 0, 0; 0, 1, 0, 0; 0, 0, 1, 0; 0, 0, 0, 1 ) = I$] ), caption: [The combination of $G$ and $G'^dagger$ produces the identity, thus proving their equivalence.] ) <circuit_identity> As the resulting circuit implements the identity function, this approach thereby proves the equivalence of the two input circuits. The system matrix is instead represented by a @dd in the state-of-the-art equivalence checking method, which usually results in a much more compact representation than the initial approach of comparing the system matrices of the two circuits. The approach presented thus far will, however, also always result in the full construction of circuit $G$ in the intermediate @dd. This is an improvement over fully constructing the system matrices for both circuits, but may still result in an exponentially sized @dd. To avoid this issue, the gates can instead be applied using one of the application schemes shown in @application_schemes. By applying parts of the two circuits in sequence, it is possible to further reduce the size of the @dd. 
The tuples $(a, b)$ used in the table signify a single application step where $a$ gates are applied from the first circuit ($G$), followed by $b$ gates from the second inverted circuit ($G'^dagger$). The sequential application scheme is therefore identical to the naive scheme presented above. #figure( tablex( columns: (1fr, 1fr, 1fr), [*One-to-one*], [*Sequential*], [*Proportional*], align(center, [$\ (1, 1) \ (1, 1) \ (0, 2)$]), align(center, [$\ (2, 4)$]), align(center, [$\ (1, 2) \ (1, 2)$]) ), caption: [Different state-of-the-art application schemes.] ) <application_schemes> Using the proportional application scheme, for instance, produces the circuit in @application_scheme_prop, which also implements the identity function. Applying the gates to the @dd in this order helps to keep its average size low. This is also the scheme that is used in @mqt @qcec by default. #figure( quantum-circuit( lstick($|q_0〉$), $H$, 1, $"S"^dagger$, 1, $sqrt(X)^dagger$, $"S"^dagger$, 1, [\ ], lstick($|q_1〉$), 1, $X^dagger$, 1, $X$, 1, 1 ), caption: [The circuit produced by the proportional application scheme.] ) <application_scheme_prop> ] == Other Verification Methods Besides reducing a @dd to the identity, alternative methods for verifying equivalent functionality of quantum circuits exist. One approach involves the use of ZX-calculus, an alternative means of representing quantum circuits @kissinger2020pyzx. As with the @dd\-based approach, this representation can be used for equivalence checking by constructing the two ZX diagrams and checking if it is possible to transform one into the other @peham2022zx. Alternatively, the same more efficient approach of reducing the concatenation of the first circuit and the inverse of the second circuit to the identity may also be used with this method @duncan2020zx. Another method rewrites the quantum circuit as operations on a limited set of boolean states @berent2022sat.
Using this encoding, it is possible to construct a so-called miter circuit from two input circuits, which has a single output that is 0 when the outputs of both circuits are the same and 1 otherwise. By applying known boolean satisfiability solving techniques to this circuit, an efficient equivalence checker can thus be constructed by checking whether or not it is possible to produce a 1 at the output. Additionally, there are more efficient solutions for specific cases of the verification problem. For instance, in the case of proving non-equivalence, it is sufficient to find inputs that produce different outputs. In this case, it is usually more efficient to simulate such a case on both circuits and compare the outputs. @qcec makes use of this fact by initially running three simulation instances using random inputs @burgholzer2021ec. Only if all simulation runs produce identical outputs is another verification method attempted. While these alternative verification methods present interesting perspectives on quantum circuit equivalence checking, they are difficult to compare directly to the @dd\-based approach. They are therefore not used in the comparisons of equivalence checking methodologies portrayed in this work. It may, however, be worth considering the impact that the developed methods may have on these as part of future work.
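The worked example above can be sanity-checked numerically. The following pure-Python sketch (illustrative only, not QCEC code) builds the system matrices of $G$ (H on $q_0$, X on $q_1$) and $G'$ (S, $sqrt(X)$, S on $q_0$; X on $q_1$) and verifies that $U dot U'^dagger$ is the identity up to a global phase, which equivalence checkers treat as irrelevant.

```python
import math

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

def dagger(A):  # conjugate transpose
    n = len(A)
    return [[A[j][i].conjugate() for j in range(n)] for i in range(n)]

def kron(A, B):  # Kronecker product, q0 as the most significant qubit
    return [[A[i][j] * B[k][l] for j in range(len(A)) for l in range(len(B))]
            for i in range(len(A)) for k in range(len(B))]

s = 1 / math.sqrt(2)
H = [[s, s], [s, -s]]
X = [[0, 1], [1, 0]]
S = [[1, 0], [0, 1j]]
SX = [[(1 + 1j) / 2, (1 - 1j) / 2], [(1 - 1j) / 2, (1 + 1j) / 2]]  # sqrt(X)

U = kron(H, X)                           # system matrix of G
U2 = kron(matmul(S, matmul(SX, S)), X)   # system matrix of G'
M = matmul(U, dagger(U2))                # expected: global phase times identity
phase = M[0][0]
```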
https://github.com/justinvulz/typst_packages
https://raw.githubusercontent.com/justinvulz/typst_packages/main/drawimg.typ
typst
#import "@preview/cetz:0.2.2" #import "@preview/fletcher:0.4.5" as fletcher: diagram,node,edge #import fletcher.shapes:circle #let margin_size = 2cm #let body_font_size = 30pt #let conf( doc ) = { context { let size = measure(doc) set page(width: size.width+2em, height: size.height+2em) doc } }
https://github.com/WarriorHanamy/typst_tutorial
https://raw.githubusercontent.com/WarriorHanamy/typst_tutorial/main/README.md
markdown
# INSTALLATION ## install rust ```shell curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh ``` ## install typst from cargo ```shell cargo install --git https://github.com/typst/typst --locked typst-cli ``` # Some backup information ## website of source code [typst_source](https://github.com/typst/typst) ## babbles You can also watch source files and automatically recompile on changes. This is faster than compiling from scratch each time because Typst has incremental compilation. ```shell # Watches source files and recompiles on changes. typst watch file.typ ``` Typst further allows you to add custom font paths for your project and list all of the fonts it discovered: ```shell # Adds additional directories to search for fonts. typst compile --font-path path/to/fonts file.typ # Lists all of the discovered fonts in the system and the given directory. typst fonts --font-path path/to/fonts # Or via environment variable (Linux syntax). TYPST_FONT_PATHS=path/to/fonts typst fonts ``` # imagebed [weibo_image_bed](https://github.com/Semibold/Weibo-Picture-Store/blob/master/README.md)
https://github.com/dismint/docmint
https://raw.githubusercontent.com/dismint/docmint/main/linear/pset3.typ
typst
#import "template.typ": * #show: template.with( title: "PSET 3", subtitle: "18.06", pset: true, toc: false, ) #set math.mat(delim: "[") #set math.vec(delim: "[") Collaborators: <NAME> = 3.1.5 == (a) Consider the subspace of $bold(M)$ that is described by all $2 times 2$ matrices such that: $ mat(R, 0; 0, 0) $ This subspace would contain $bold(A)$ but not $bold(B)$. == (b) Yes. If the subspace contains $bold(B)$, then it must also be the case that it contains: $ -1 dot mat(0, 0; 0, -1) = mat(0, 0; 0, 1) $ And thus by addition of two matrices in the subspace, it must contain the identity matrix: $ mat(1, 0; 0, 0) + mat(0, 0; 0, 1) = mat(1, 0; 0, 1) $ == (c) We make a similar subspace as in *(a)* but with the following $2 times 2$ matrices: $ mat(0, R; 0, 0) $ This is a valid subspace but with no non-zero diagonal entries in it. = 3.1.20 The column space grows unless $b$ is a linear combination of the columns already in $A$. == Increasing Column Space $ mat(1, 1; 0, 0) mat(0; 1) $ == No Change Column Space $ mat(1, 0; 0, 1) mat(1; 1) $ In order for $A x = b$ to be solvable, it must be the case that $b$ can be represented as a linear combination of the columns of $A$ (this is what $x$ represents). Of course, this means that adding $b$ would *not* increase the column space of $A$. = 3.2.4 == (a) $ mat(1, 2, 2, 4, 6; 1, 2, 3, 6, 9; 0, 0, 1, 2, 3) &= mat(1, 2, 2, 4, 6; 0, 0, 1, 2, 3; 0, 0, 1, 2, 3)\ &= boxed(mat(1, 2, 2, 4, 6; 0, 0, 1, 2, 3; 0, 0, 0, 0, 0)) = R\ C &= boxed(mat(1, 0, 0; 1, 1, 0; 0, 1, 0)) $ == (b) $ mat(2, 4, 2; 0, 4, 4; 0, 8, 8) &= boxed(mat(2, 4, 2; 0, 4, 4; 0, 0, 0)) = R\ C &= boxed(mat(1, 0, 0; 0, 1, 0; 0, 2, 0)) $ = 3.2.5 There are three free columns when $1$-indexing, $(2, 4, 5)$.
Let us solve each of the systems by setting the free variable we are looking at to $1$ and all other free variables to $0$: == Free Column 2 We need to solve the system: $ mat(1, 2, 2, 4, 6; 0, 0, 1, 2, 3; 0, 0, 0, 0, 0) times mat(a, 1, c, 0, 0)^T = mat(0, 0, 0)^T $ This leaves us with the equations: $ a + 2c &= -2\ c &= 0\ $ Thus we have our solutions: $ boxed(a = -2)\ boxed(c = 0) $ == Free Column 4 We need to solve the system: $ mat(1, 2, 2, 4, 6; 0, 0, 1, 2, 3; 0, 0, 0, 0, 0) times mat(a, 0, c, 1, 0)^T = mat(0, 0, 0)^T $ This leaves us with the equations: $ a + 2c &= -4\ c &= -2\ $ Thus we have our solutions: $ boxed(a = 0)\ boxed(c = -2) $ == Free Column 5 We need to solve the system: $ mat(1, 2, 2, 4, 6; 0, 0, 1, 2, 3; 0, 0, 0, 0, 0) times mat(a, 0, c, 0, 1)^T = mat(0, 0, 0)^T $ This leaves us with the equations: $ a + 2c &= -6\ c &= -3\ $ Thus we have our solutions: $ boxed(a = 0)\ boxed(c = -3) $ Thus the nullspace of $A$ is spanned by: $ boxed(mat(-2, 1, 0, 0, 0)^T)\ boxed(mat(0, 0, -2, 1, 0)^T)\ boxed(mat(0, 0, -3, 0, 1)^T) $ = 3.2.13 We can rearrange the equation for the plane such that: $ x = 3 y + z + 12 $ Thus all points on the plane take the form of: $ boxed(vec(x, y, z) = vec(12, 0, 0) + y vec(3, 1, 0) + z vec(1, 0, 1)) $ = 3.2.27 For $bold(C)$, any $bold(x)$ in the nullspace must be sent to zero by both the $bold(A)$ block (the top rows of $bold(C)$) and the $bold(B)$ block (the bottom rows), so it must lie in the nullspace of $bold(A)$ and in the nullspace of $bold(B)$ at the same time.
Thus the nullspace of $bold(C)$ can be seen as the intersection between the two nullspaces, or: $ boxed(bold(N)(C) = bold(N)(A) sect bold(N)(B)) $ = 3.2.35 $ A^T = mat(-1, 0, 1, -1, 0, 0; 1, -1, 0, 0, -1, 0; 0, 1, -1, 0, 0, -1; 0, 0, 0, 1, 1, 1) &=\ &= mat(1, 0, -1, 1, 0, 0; 1, -1, 0, 0, -1, 0; 0, 1, -1, 0, 0, -1; 0, 0, 0, 1, 1, 1)\ &= mat(1, 0, -1, 1, 0, 0; 0, -1, 1, -1, -1, 0; 0, 1, -1, 0, 0, -1; 0, 0, 0, 1, 1, 1)\ &= mat(1, 0, -1, 1, 0, 0; 0, 1, -1, 1, 1, 0; 0, 1, -1, 0, 0, -1; 0, 0, 0, 1, 1, 1)\ &= mat(1, 0, -1, 1, 0, 0; 0, 1, -1, 1, 1, 0; 0, 0, 0, -1, -1, -1; 0, 0, 0, 1, 1, 1)\ &= mat(1, 0, -1, 1, 0, 0; 0, 1, -1, 1, 1, 0; 0, 0, 0, 1, 1, 1; 0, 0, 0, 1, 1, 1)\ &= mat(1, 0, -1, 1, 0, 0; 0, 1, -1, 1, 1, 0; 0, 0, 0, 1, 1, 1; 0, 0, 0, 0, 0, 0)\ &= mat(1, 0, -1, 1, 0, 0; 0, 1, -1, 1, 1, 0; 0, 0, 0, 1, 1, 1) $ We have free 1-indexed columns in $(3, 5, 6)$ == Free Column 3 We need to solve the system: $ mat(1, 0, -1, 1, 0, 0; 0, 1, -1, 1, 1, 0; 0, 0, 0, 1, 1, 1) times mat(a, b, 1, c, 0, 0)^T = mat(0, 0, 0)^T $ This leaves us with the equations: $ a - 1 + c &= 0\ b - 1 + c &= 0\ c &= 0\ $ Thus we have our solutions: $ boxed(a = 1)\ boxed(b = 1)\ boxed(c = 0) $ == Free Column 5 We need to solve the system: $ mat(1, 0, -1, 1, 0, 0; 0, 1, -1, 1, 1, 0; 0, 0, 0, 1, 1, 1) times mat(a, b, 0, c, 1, 0)^T = mat(0, 0, 0)^T $ This leaves us with the equations: $ a + c &= 0\ b + c + 1 &= 0\ c + 1 &= 0\ $ Thus we have our solutions: $ boxed(a = 1)\ boxed(b = 0)\ boxed(c = -1) $ == Free Column 6 We need to solve the system: $ mat(1, 0, -1, 1, 0, 0; 0, 1, -1, 1, 1, 0; 0, 0, 0, 1, 1, 1) times mat(a, b, 0, c, 0, 1)^T = mat(0, 0, 0)^T $ This leaves us with the equations: $ a + c &= 0\ b + c &= 0\ c + 1 &= 0\ $ Thus we have our solutions: $ boxed(a = 1)\ boxed(b = 1)\ boxed(c = -1) $ Thus the nullspaces of $A^T$ are: $ boxed(mat(1, 1, 1, 0, 0, 0)^T)\ boxed(mat(1, 0, 0, -1, 1, 0)^T)\ boxed(mat(1, 1, 0, -1, 0, 1)^T) $ = 3.3.1 == Step 1 $ A = mat(2, 4, 6, 4; 2, 5, 7, 6; 2, 3, 5, 2) &= mat(2, 
4, 6, 4; 0, 1, 1, 2; 2, 3, 5, 2)\ &= mat(2, 4, 6, 4; 0, 1, 1, 2; 0, -1, -1, -2)\ &= mat(2, 4, 6, 4; 0, 1, 1, 2; 0, 0, 0, 0) $ Thus taking the same steps our $c$ becomes: $ mat(4, -1, 0)^T $ == Step 2 $ b_3 - 2 b_1 + b_2 $ == Step 3 The column space of $A$ is the plane consisting of all the linear combinations of the columns in the pivots of $U$. That is, $(2, 2, 2)$ and $(4, 5, 3)$ == Step 4 The special solutions have the free variables in the third and fourth column of the resulting matrix from step 1. Substituting in, we get that our two solutions are: $ s_1 = mat(-1, -1, 1, 0)^T\ s_2 = mat(2, -2, 0, 1)^T\ $ Thus the nullspace is the span of these two vectors in $RR^4$. == Step 5 To get the reduced form, we can simplify to the following: $ mat(1, 0, 1, -2; 0, 1, 1, 2; 0, 0, 0, 0) $ This gives us $d$ as: $ mat(4, -1, 0)^T $ == Step 6 Thus this gives us the particular solution as: $ x = mat(4, -1, 0, 0)^T $ Meaning that our complete solution is: $ x = boxed(mat(4, -1, 0, 0)^T + c_1 mat(-1, -1, 1, 0)^T + c_2 mat(2, -2, 0, 1)^T) $ = 3.3.3 == (a) $ mat(1, 3, 7; 2, 6, 14) &= mat(1, 3, 7; 0, 0, 0)\ $ Our nullspace solutions are therefore: $ mat(-3, 1)^T $ The particular solution at the end will be: $ mat(7, 0) $ Meaning that our complete solution is: $ x = boxed(mat(7, 0)^T + c mat(-3, 1)^T) $ == (b) $ mat(1, 3, 3, 1; 2, 6, 9, 5; -1, -3, 3, 5) &= mat(1, 3, 3, 1; 0, 0, 3, 3; -1, -3, 3, 5)\ &= mat(1, 3, 3, 1; 0, 0, 3, 3; 0, 0, 6, 6)\ &= mat(1, 3, 3, 1; 0, 0, 3, 3; 0, 0, 0, 0)\ &= mat(1, 3, 0, -2; 0, 0, 1, 1; 0, 0, 0, 0)\ $ Our nullspace solutions are therefore: $ mat(-3, 1, 0)^T $ The particular solution at the end will be: $ mat(-2, 0, 1) $ Meaning that our complete solution is: $ x = boxed(mat(-2, 0, 1)^T + c mat(-3, 1, 0)^T) $ = 3.3.10 Start off with a matrix whose nullspace is spanned by one of each column, $(1, 1, 1)$ (or any multiple of it): $ A = boxed(mat(1, 0, -1; 1, -1, 0)) $ Then to construct $b$ such that the particular solution is $(2, 4, 0)$, we can make $A$ into
RREF: $ mat(1, 0, -1; 1, -1, 0) &= mat(1, 0, -1; 0, -1, 1)\ &= mat(1, 0, -1; 0, 1, -1)\ $ Thus a particular solution leaves us with a $d$ of: $ mat(2, 4)^T $ Then to get the original $b$, we need to undo our steps, leading to: $ b = boxed(mat(2, -2)^T) $ = 3.3.28 == (a) Such a matrix exists when: $ A = boxed(mat(1, 1; 1, 2; 1, 3)) $ == (b) No, such a matrix $bold(B)$ does not exist: if it did, its dimensions would have to be $2 times 3$. Since the column space of a $2 times 3$ matrix lies in $RR^2$, its three columns cannot be linearly independent, so a solution, if one exists, is never unique: there are infinitely many solutions. We use the column space here because the $x$ in $A x = b$ describes a linear combination of the three column vectors. = 3.4.1 Putting the first three vectors together, we get: $ mat(1, 1, 1; 0, 1, 1; 0, 0, 1) $ This is an upper triangular matrix, which is invertible, so $A x = 0$ has only the trivial solution. Thus it *must* be the case that the columns (and rows!) are linearly independent. It is impossible that adding a fourth vector in $RR^3$ would still leave the columns linearly independent, because at most three vectors in $RR^3$ can be linearly independent: $v_4$ must be a linear combination of the previous column vectors. Let us solve for when this is equal to $0$: $ mat(1, 1, 1, 2; 0, 1, 1, 3; 0, 0, 1, 4) times mat(a, b, c, d)^T = mat(0, 0, 0)^T $ We can start by setting the last free variable equal to $1$, leaving us with $c = -4$, $b = 1$, and $a = 1$.
Thus our final solution is: $ x = boxed(mat(1, 1, -4, 1)^T) $ = 3.4.16 == (a) $ boxed(mat(1; 1; 1; 1)) $ == (b) $ boxed(mat(1, 1, 1; -1, 0, 0; 0, -1, 0; 0, 0, -1)) $ == (c) $ boxed(mat(1, 1; -1, -1; -1, 0; 0, -1)) $ == (d) $ boxed(mat(1, 0, 0, 0; 0, 1, 0, 0; 0, 0, 1, 0; 0, 0, 0, 1)) $ = 3.4.29 == Column Basis $ mat(1, 0, 0; -1, 0, 0), mat(0, 1, 0; 0, -1, 0), mat(0, 0, 1; 0, 0, -1) $ == Row Basis Too $ mat(1, -1, 0; -1, 1, 0), mat(1, 0, -1; -1, 0, 1) $
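Special solutions like those above can be sanity-checked numerically: every special solution must be mapped to the zero vector by the row-reduced matrix. A small sketch (not part of the pset itself), using the row-reduced $A^T$ and the nullspace vectors from 3.2.35:

```python
# row-reduced form of A^T from 3.2.35
R = [
    [1, 0, -1, 1, 0, 0],
    [0, 1, -1, 1, 1, 0],
    [0, 0,  0, 1, 1, 1],
]

# the three special solutions, one per free column (3, 5, 6)
special = [
    [1, 1, 1,  0, 0, 0],
    [1, 0, 0, -1, 1, 0],
    [1, 1, 0, -1, 0, 1],
]

def apply(matrix, vector):
    # matrix-vector product, enough to check R s = 0
    return [sum(a * b for a, b in zip(row, vector)) for row in matrix]

# each special solution must be sent to the zero vector
assert all(apply(R, s) == [0, 0, 0] for s in special)
```

The same check works for every other nullspace computation in this pset by swapping in the corresponding matrix and vectors.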
https://github.com/MatheSchool/typst-g-exam
https://raw.githubusercontent.com/MatheSchool/typst-g-exam/develop/docs-shiroa/g-exam-doc/latexmit/latexmit-with-points.typ
typst
MIT License
#import "mod.typ": * #show: book-page.with(title: "LaTeX Mit with points") = LaTeX Mit with points
https://github.com/Menahra/CVTemplate
https://raw.githubusercontent.com/Menahra/CVTemplate/main/content/RightPane.typ
typst
#import "utils.typ": spacing, accentTextColor; #let RightPane( applicantData, translationData ) = { // profile align( center, text(16pt, accentTextColor, weight: "bold", translationData.profileTitle) ); v(spacing(amount: -0.5)); text(12pt, applicantData.profile); v(spacing(amount: 1)); // work history align( center, text(16pt, accentTextColor, weight: "bold", translationData.workHistoryTitle) ); for workPlace in applicantData.workHistory { text(11pt, weight: "bold", workPlace.name); text(11pt, " (" + workPlace.from + " - " + workPlace.to + ")"); linebreak(); let workPlaceContent = (); for task in workPlace.tasks { workPlaceContent.push( text(size: 12pt, task) ); } list( indent: 1em, ..workPlaceContent ) v(spacing(amount: 0.33)); }; // education align( center, text(16pt, accentTextColor, weight: "bold", translationData.educationTitle) ); for education in applicantData.education { text(11pt, weight: "bold", education.title); text( 11pt, " | " + education.institute + ": " + education.from + " - " + education.to ); v(spacing(amount: 0.33)); }; // additional stuff align( center, text(16pt, accentTextColor, weight: "bold", translationData.additionalTitle) ); for additionalItem in applicantData.additional { text(11pt, weight: "bold", additionalItem.title); linebreak(); text(11pt, additionalItem.items.join(", ")); v(spacing(amount: 0)); }; };
https://github.com/Myriad-Dreamin/tinymist
https://raw.githubusercontent.com/Myriad-Dreamin/tinymist/main/crates/tinymist-query/src/fixtures/semantic_tokens/fn2.typ
typst
Apache License 2.0
#let fn() = { raw("1 + 2", lang: "typ") ```typ 1 2 ``` [1] }
https://github.com/frectonz/the-pg-book
https://raw.githubusercontent.com/frectonz/the-pg-book/main/README.md
markdown
# The PG Book <NAME>'s essays typeset via Typst. Typeset articles can be found in the [book](./book) directory. The [scrapper](./scrapper) directory contains the scripts used to scrape the articles from the [website](https://www.paulgraham.com/articles.html). Currently, the first **14 articles** have already been typeset.
https://github.com/akrantz01/resume
https://raw.githubusercontent.com/akrantz01/resume/main/template/education.typ
typst
MIT License
#import "common.typ": date-range, section #let entry( school, date, degree: none, area: none, location: none, gpa: none, courses: (), extra: none, settings: (:), ) = { if area != none and degree == none { panic("An area of study must be accompanied by a degree") } let about = if degree != none [ #strong(school), #emph(if area != none { degree + " in " + area } else { degree }) ] else { strong(school) } set block(above: 1em) grid( columns: (80%, 20%), align(left, about), align(right, date-range(date)), ) pad(left: 1.25em, top: -0.45em)[ #if location != none and settings.locations [ #emph(location) \ ] #if extra != none [ #eval(extra, mode: "markup") \ ] #if gpa != none [ #text(weight: "medium")[GPA]: #gpa \ ] #if courses.len() > 0 [ #text(weight: "medium")[Courses]: #courses.join(", ") \ ] ] } #let education(title: "Education", settings: (:), ..entries) = { section(title) entries.pos().map(((id, school, date, ..rest)) => entry( school, date, settings: settings, ..rest, )).join() }
https://github.com/jgm/typst-hs
https://raw.githubusercontent.com/jgm/typst-hs/main/test/typ/compute/data-04.typ
typst
Other
// Error: 6-16 file not found (searched at typ/compute/nope.csv) #csv("nope.csv")
https://github.com/piepert/typst-seminar
https://raw.githubusercontent.com/piepert/typst-seminar/main/Beispiele/LogikKlausur/typst/template.typ
typst
#let fontsize = 12pt #let project(date: none, title: none, body) = { set text(lang: "de", size: fontsize, font: "Times New Roman") set par(justify: true) set page(margin: (left: 2.5cm, right: 2.5cm, top: 2.5cm, bottom: 2.5cm), header: { block(stroke: (bottom: black), inset: (bottom: 5pt), grid(columns: (1fr, 1fr, 1fr), align(left, date), align(center, counter(page).display() + [ \/ ] + locate(l => counter(page).final(l).first())), align(right, title))) locate(loc => { let pagenum = str(counter(page).at(loc).first()) let dict = state("line_points").final(loc) if dict == none { return } if pagenum in dict { dict = dict.at(pagenum) for key in dict.keys() { style(s => { let stars = stack(dir: ltr, .. (text(size: 16pt)[★],) * dict.at(key)) let width = measure(stars, s).width place(top + right, dy: eval(key) - fontsize, dx: width - 2.25em, stars) }) } } }) }) align(center, text(size: 17pt, weight: "thin", title)) body } #let tasks_points_state = state("tasks_points") #let additional_tasks_points_state = state("additional_tasks_points") #let init_task_points() = { tasks_points_state.update(k => { if k == none { return (points: (0,), tasks: 0) } k }) additional_tasks_points_state.update(k => { if k == none { return (points: (0,), tasks: 0) } k }) } #let increase_task_points(additional, p) = { init_task_points() if additional { additional_tasks_points_state.update(k => { k.points.at(k.tasks) += p k }) } else { tasks_points_state.update(k => { k.points.at(k.tasks) += p k }) } } #let add_task_task_points() = { init_task_points() tasks_points_state.update(k => { let _ = k.points.push(0) k.tasks += 1 k }) } #let get_all_points(location) = { (tasks_points_state.final(location) .points .sum(), additional_tasks_points_state.final(location) .points .sum()) } #let get_task_points(loc, additional: false, payload: none) = { init_task_points() let point_state = if additional { additional_tasks_points_state } else { tasks_points_state } if payload != none { payload(point_state .at(loc) 
.points .at(point_state .at(loc) .tasks)) } else { str(point_state .final(loc) .points .at(point_state .at(loc) .tasks)) } } #let point(additional: false) = { locate(loc => { let key = repr(loc.position().y) state("line_points").update(k => { let pagenum = str(counter(page).at(loc).first()) let pager = if k == none { (:) } else { k } let value = (:) if pagenum not in pager { pager.insert(pagenum, (:)) } value = pager.at(pagenum) value.insert(key, if key in value { value.at(key) + 1 } else { 1 }) pager.insert(pagenum, value) pager }) }) increase_task_points(additional, 1) } #let task(additional: false, description, body, eh) = { add_task_task_points() counter("tasks").step() let taskname = ([Aufgabe ] + counter("tasks").display()); grid(columns: (1fr, auto), gutter: 3em, strong(smallcaps(taskname)) + [. ] + description, strong(locate(loc => get_task_points(loc)) + [P])) v(0.5em) body set text(size: 10pt) block(stroke: black, { grid(block(inset: 5pt, fill: black, width: 100%, text(fill: white, [Erwartungshorizont #taskname])), block(inset: 1em, width: 100% - 3.5em, [ #block(eh) #v(0.5em) #strong[Erreichbare Punkte: #locate(loc => get_task_points(loc))] ])) }) v(1em) } #let conclude(premise, conclusion) = { style(s => { let bottom = none let top = none let premise_width = measure(premise, s).width let conclusion_width = measure(conclusion, s).width let max_width = premise_width if premise_width < conclusion_width { max_width = conclusion_width } grid(block(inset: 5pt, width: max_width, stroke: (bottom: black, rest: none), premise), block(inset: 5pt, width: max_width, conclusion)) }) }
https://github.com/Hyiker/report-typst
https://raw.githubusercontent.com/Hyiker/report-typst/master/README.md
markdown
Creative Commons Attribution 4.0 International
# Yet another Typst report template ![Example](./assets/example.png) Designed for project reports according to my personal taste, modified from [Project-Report-Typst](https://github.com/aurghya-0/Project-Report-Typst); supports Chinese and English.
https://github.com/Hithereland/whu-typst-template
https://raw.githubusercontent.com/Hithereland/whu-typst-template/main/template/components/algorithm-table.typ
typst
#import "@preview/algorithmic:0.1.0" #import algorithmic: algorithm #let algorithm-table = figure.with(kind: "algorithm-table", supplement: "Algorithm") #let setup-algorithm-table( body-inset: (bottom: 5pt), body ) = { show figure.where(kind: "algorithm-table"): it => { let booktabbed = block( stroke: (y: 1.3pt), inset: 0pt, breakable: false, width: 100%, { set align(left) block( inset: (y: 5pt), width: 100%, stroke: (bottom: .8pt), { strong({ it.supplement sym.space.nobreak counter(figure.where(kind: "algorithm-table")).display(it.numbering) if it.caption != none { [: ] } else { [.] } }) if it.caption != none {it.caption.body} } ) set par(first-line-indent: 0em, leading: 0.2em, hanging-indent: 2em) show par: set block(spacing: 0.2em) block( inset: body-inset, breakable: false, width: 95%, it.body ) } ) let centered = pad(x: 5%, booktabbed) if it.placement in (auto, top, bottom) { place(it.placement, float: true, centered) } else { centered } } body }
https://github.com/Big-Ouden/ensiie_rapport
https://raw.githubusercontent.com/Big-Ouden/ensiie_rapport/main/0.1.5/template/acronyms.typ
typst
#import "@preview/acrostiche:0.3.0": * #let acronyms = ( "DVI": (link("https://fr.wikipedia.org/wiki/Digital_visual_interface")[Digital Visual Interface],), "FPGA": (link("https://en.wikipedia.org/wiki/Field-programmable_gate_array")[Field Programmable Gate Array],), "HDMI": (link("https://en.wikipedia.org/wiki/HDMI")[High Definition Multimedia Interface],), "LED": (link("https://en.wikipedia.org/wiki/Light-emitting_diode")[Light Emitting Diode],), "LCD": (link("https://en.wikipedia.org/wiki/Liquid-crystal_display")[Liquid Crystal Display],), "PMod": (link("https://en.wikipedia.org/wiki/Pmod_Interface")[Peripheral Module],), "USB": (link("https://en.wikipedia.org/wiki/USB")[Universal Serial Bus],), "VGA": (link("https://fr.wikipedia.org/wiki/Video_Graphics_Array")[Video Graphics Array],), ) #init-acronyms(acronyms)
https://github.com/darkMatter781x/OverUnderNotebook
https://raw.githubusercontent.com/darkMatter781x/OverUnderNotebook/main/entries/auton/far-awp/far-awp.typ
typst
#import "../auto-util.typ": * #auton( "Far Autonomous Win Point", datetime(year: 2024, month: 2, day: 25), "sixBall.cpp", [ AWP for the far side requires that we score the alliance triball into our goal and touch the horizontal elevation bar. This is the code that accomplishes that. ], )[ ```cpp void run6Ball() { // front left corner of the drivetrain aligned with the inside of the puzzling // with alliance triball in intake Robot::chassis->setPose( {MAX_X - TILE_LENGTH - Robot::Dimensions::drivetrainLength / 2, MIN_Y + TILE_LENGTH - Robot::Dimensions::drivetrainWidth / 2, RIGHT}, false); // keep alliance triball in intake Robot::Actions::intake(); // go into goal Robot::chassis->moveToPose( MAX_X - TILE_RADIUS, 0 - TILE_LENGTH - Robot::Dimensions::drivetrainLength, UP, 3000); // once near the goal stop intaking Robot::chassis->waitUntil(20); Robot::Actions::stopIntake(); Robot::chassis->waitUntilDone(); // make sure triball goes into goal Robot::Actions::outtake(); tank(127, 127, 500, 0); // back out of goal tank(-127, -127, 400, 0); Robot::Actions::stopIntake(); // touch horizontal elevation bar Robot::chassis->moveToPose(0 + Robot::Dimensions::drivetrainLength / 2 + 1.5, MIN_Y + TILE_RADIUS - 0.5, LEFT, 4000, {.maxSpeed = 64}); // intake ball under elevation bar Robot::Actions::intake(); } ``` ]
https://github.com/jneug/schule-typst
https://raw.githubusercontent.com/jneug/schule-typst/main/src/_imports.typ
typst
MIT License
// TODO: Import only necessary items to global scope #import "core/base.typ": * // base-template, appendix #import "util/marks.typ" #import "util/args.typ" #import "util/util.typ" #import "util/types.typ" as t #import "theme.typ" #import "api/typo.typ": * #import "api/helper.typ": * #import "api/content.typ": * #import "api/aufgaben.typ": * // Fachspezifische Importe #import "subjects/computer-science.typ" as info #import "subjects/math.typ" as mathe // Some community packages to improve things #import "@preview/wrap-it:0.1.0": wrap-content
https://github.com/storopoli/graphs-complexity
https://raw.githubusercontent.com/storopoli/graphs-complexity/main/slides/slides.typ
typst
Creative Commons Zero v1.0 Universal
#import "@preview/slydst:0.1.0": * #import "@preview/diagraph:0.2.5": * #import "@preview/lovelace:0.3.0": * #set text(lang: "en") #show: slides.with( title: "Graph Theory and Computational Complexity", subtitle: none, date: none, authors: ("<NAME>, PhD",), layout: "medium", ratio: 4 / 3, title-color: orange, ) #set text(size: 16pt) #show link: set text(blue) /* Level-one headings correspond to new sections. Level-two headings correspond to new slides. Blank space can be filled with vertical spaces like #v(1fr). */ == License #align(horizon + center)[#image("images/cc-zero.svg", width: 80%)] == Links #align(horizon + center)[ All links are in #text(blue)[blue]. Feel free to click on them. ] == Content #outline() = Why study Graph Theory and Computational Complexity? #align(horizon + center)[#image( "images/algorithm_analysis_meme.jpg", width: 50%, )] == Computational Theory #align(horizon)[ *Computational theory* is a subfield of computer science and mathematics that seeks to determine which problems can be computed within a given computational model. *Computation* can be defined as the calculation of a function through an algorithm. ] == #link("https://en.wikipedia.org/wiki/Alan_Turing")[Turing] vs. #link("https://en.wikipedia.org/wiki/Alonzo_Church")[Church] #align(horizon + center)[ #figure( grid( columns: 2, gutter: 2mm, image("images/turing.jpg", width: 60%), image("images/church.jpg", width: 60%), ), caption: "Alan Turing and Alonzo Church", )<turing-church> ] #pagebreak() #align(horizon)[ - Turing proposed the *Turing machine* as a computational model. - Alonzo Church proposed *lambda calculus* as a computational model. - Both models are mathematically *equivalent*. ] == Algorithm #align(horizon)[ An *algorithm* is a finite sequence of executable actions aimed at obtaining a solution to a particular type of problem. ] == Graph Theory #align(horizon)[ Why study Graphs?
#v(1em) #align(center)[ _Almost_ everything you do in computing can be modeled as a *graph problem*. ] ] == Computational Complexity #align(horizon)[ *Computational complexity* is a field in computer science that studies the amount of resources required to solve a computational problem#footnote[ a decidable problem. ]. ] #pagebreak() #align(horizon)[ We use the $O$ notation to describe the complexity of an algorithm. - $O(1)$ (*constant* complexity): - Accessing an array - Inserting a node in a linked list - Insertion and removal in a queue - $O(log n)$ (*logarithmic* complexity): - Binary search - Insertion and removal in a binary search tree #pagebreak() - $O(n)$ (*linear* complexity): - Traversing an array - Traversing a linked list - Comparing two strings - $O(n log n)$ (*log-linear* complexity): - _Quick Sort_ algorithm - _Merge Sort_ algorithm #pagebreak() - $O(n^2)$ (*quadratic* complexity): - Traversing a matrix - _Bubble Sort_ algorithm - _Insertion Sort_ algorithm - $O(n^3)$ (*cubic* complexity): - Matrix multiplication (naive approach) - $O(n!)$ (*factorial* complexity): - Traveling salesman problem solution - Generating all permutations of a list ] = Graphs #align(horizon + center)[#image( "images/graph_isomorphism_meme.jpg", width: 50%, )] == What are Graphs? Graphs are mathematical structures that model *relationships between objects*. 
#align(horizon + center)[ #figure( raw-render(```dot graph G { rankdir=LR; layout=dot; a -- {b, c}; b -- {c, d}; c -- e; d -- e; e -- f; {rank=same; a;}; {rank=same; b; c;}; {rank=same; d; e;}; {rank=same; f;}; } ```), caption: "Graph", ) <graph> ] == Formally Graphs are *ordered pairs* $G = (V, E)$ where: - $V$ is a finite set of *vertices* (also called nodes) - $E$ is a finite set of *edges* (also called arcs) represented by a pair of vertices $(u, v)$ The @graph, for example: #text(size: 14pt)[ $ V = \{a, b, c, d, e, f\} $ $ E = \{(a, b), (a, c), (b, c), (b, d), (c, e), (d, e), (e, f)\} $ ] == Directed Graphs Graphs can be *directed* or *_non_-directed*. #align(horizon + center)[ #figure( raw-render(```dot digraph G { rankdir=LR; layout=dot; a -> {b, c}; b -> c; c -> e; d -> {b, e}; e -> f; {rank=same; a;}; {rank=same; b; c;}; {rank=same; d; e;}; {rank=same; f;}; } ```), caption: "Directed Graph", ) <directed-graph> ] == Weighted Graphs Most graphs are *weighted*, meaning they have values associated with the edges. #align(horizon + center)[ #figure( raw-render(```dot graph G { rankdir=LR; layout=dot; a -- b [label=2]; a -- c [label=3]; b -- c [label=1]; b -- d [label=4]; c -- e [label=1]; d -- e [label=2]; e -- f [label=1]; {rank=same; a;}; {rank=same; b; c;}; {rank=same; d; e;}; {rank=same; f;}; } ```), caption: "Weighted Graph", ) <weighted-graph> ] == Graph Examples #align(horizon)[ - Computer networks - Social networks - City maps - Molecular structures ] == #link("https://en.wikipedia.org/wiki/Seven_Bridges_of_K%C3%B6nigsberg")[The 7 Bridges of Königsberg] The first practical application of graph theory, solved by Euler in 1736. 
#align(center)[ *Is it possible to cross all bridges without repeating any?* ] #align(horizon + center)[ #figure( image("images/konigsberg_briges.png", width: 35%), caption: "The 7 Bridges of Königsberg", ) <konigsberg-brigdes> ] == #link("https://en.wikipedia.org/wiki/Seven_Bridges_of_K%C3%B6nigsberg")[The 7 Bridges of Königsberg] #align(horizon + center)[ #figure( raw-render(```dot graph G { rankdir=LR; splines=curved; layout=neato; a[pos="-1,0!"]; b[pos="0,1!"]; c[pos="0,-1!"]; d[pos="1,0!"]; a -- {b, c, d}; b:w -- a; c:w -- a; b -- d; c -- d; } ```), caption: "Graph of the 7 Bridges of Königsberg", ) <graph-konigsberg-brigdes> ] == Solution to the 7 Bridges #align(horizon)[ The solution to the Königsberg problem was provided by Euler. The graph requires *two conditions* to be solved: - The graph must be *fully connected* - The graph must have exactly *0 or 2 vertices of odd degree* ] == #link("https://en.wikipedia.org/wiki/Four_color_theorem")[The Four Color Theorem] #align(horizon)[ *No more than four colors are needed to color the regions of any map, so that two adjacent regions do not share the same color.* ] #pagebreak() #align(horizon + center)[ #figure( image("images/four_color_graph.svg", width: 50%), caption: "Abstracting a map with four colors using graphs", ) <four-color-map> ] #text(size: 14pt)[ The theorem was proven in 1976 by <NAME> and <NAME>#footnote[ one of the first theorems proved with the help of computers. ]. ] == Graph Applications - Airline itineraries: Calculate the maximum flow in a directed graph. - Routing software (GPS): Calculate the shortest path between two points. - Solving a sudoku: Solve a graph coloring problem. - Online search algorithms: Determine vertex centralities based on themes. - Social networks: find the largest friend community. == Subgraphs A *subgraph* of a graph $G$ is another graph formed from a *subset of the vertices and edges of $G$*. 
The vertex subset must include all the edges' vertices, but may include additional vertices. #align(horizon + center)[ #figure( image("images/subgraph.svg", width: 40%), caption: "Subgraph", ) <subgraph> ] == Induced Subgraph An *induced subgraph* is a subgraph that *includes all vertices and edges* whose endpoints belong to the vertex subset. #align(horizon + center)[ #figure( image("images/induced_subgraph.svg", width: 50%), caption: "Induced Subgraph", ) <induced-subgraph> ] == Isomorphism An isomorphism of the graphs $G$ and $H$ is a bijection#footnote[ a function that establishes a one-to-one correspondence between the elements of two sets. ] between the sets of vertices of $G$ and $H$: $ f: V(G) -> V(H) $ #align(horizon + center)[ #figure( image("images/graph_isomorphism.png", width: 66%), caption: "Isomorphic Graphs", ) <isomorphic-graphs> ] == Graph Representation #align(horizon)[ There are several ways to represent graphs, the most common are: - *Adjacency matrix* - *Adjacency list* ] == Adjacency Matrix #align(horizon)[ An *adjacency matrix* is a square matrix $bold(A)$ of size $n times n$: $ bold(A)^(n times n) = a_(i j) $ where $a_(i j)$ is the number of edges between vertices $i$ and $j$. ] #pagebreak() #align(horizon + center)[ #figure( grid( columns: 2, gutter: 2mm, text[$ bold(A) = mat( 1, 1, 0, 0, 1, 0;1, 0, 1, 0, 1, 0;0, 1, 0, 1, 0, 0;0, 0, 1, 0, 1, 1;1, 1, 0, 1, 0, 0;0, 0, 0, 1, 0, 0; ) $], raw-render( ```dot graph G { layout=neato; rankdir=LR; 1 -- {1, 2, 5}; 2 -- {3, 5}; 3 -- {4}; 4 -- {5, 6}; } ```, width: 80%, ), ), caption: "Adjacency matrix and Graph", ) <adjacency-matrix> ] #pagebreak() #align(horizon)[ Properties of an adjacency matrix#footnote[ $n$ is the number of vertices in the graph. 
]: - Symmetric for undirected graphs - Not necessarily symmetric for directed graphs - Space cost $O(n^2)$ - Construction cost $O(n^2)$ - Edge search cost $O(1)$ ] == Adjacency List #align(horizon)[ An *adjacency list* is a list of lists, where each list $L_i$ contains the vertices adjacent to vertex $i$. ] #pagebreak() #align(horizon + center)[ #figure( grid( columns: 2, gutter: 2mm, table( columns: 2, table.header([*Vertex*], [*Neighbors*]), [1], [1, 2, 5], [2], [1, 3, 5], [3], [2, 4], [4], [3, 5, 6], [5], [1, 2, 4], [6], [4], ), raw-render( ```dot graph G { layout=neato; rankdir=LR; 1 -- {1, 2, 5}; 2 -- {3, 5}; 3 -- {4}; 4 -- {5, 6}; } ```, width: 80%, ), ), caption: "Adjacency list and Graph", ) <adjacency-list> ] #pagebreak() #align(horizon)[ Properties of an adjacency list#footnote[ $n$ is the number of vertices in the graph and $m$ is the number of edges. ]: - Space cost $O(n + m)$#footnote[for undirected graphs.]<adjacency-list-cost> - Construction cost $O(m)$#footnote(<adjacency-list-cost>) - Edge search cost $O(n)$ ] == Practical Section (C or pseudocode) #align(horizon)[ - Represent a directed and undirected graph - Parse a graph from an adjacency matrix - Parse a graph from an adjacency list ] == Paths and Cycles #align(horizon)[ #text(size: 14pt)[ A *path* is a sequence of vertices such that from each vertex there is an edge to the next vertex. A path is called *simple* if none of the vertices in the path are repeated. The *length* of the path is the number of edges the path uses, counting multiple edges more than once. The *cost* of a path in a weighted graph is the sum of the costs of the edges traversed. Two paths are *independent* if they share no vertices, except the first and last.
] ] #pagebreak() #align(horizon + center)[ #figure( raw-render(```dot graph G { rankdir=LR; layout=dot; a -- b[color=red]; a -- c; b -- c[color=red]; a -- d; c -- e[color=red]; d -- e; e -- f[color=red]; {rank=same; a;}; {rank=same; b; c;}; {rank=same; d; e;}; {rank=same; f;}; } ```), caption: "Path of Length 4", ) <path> ] #pagebreak() #align(horizon)[ A *cycle* is a path in which the *first and last vertices coincide*, but no other vertices are *repeated*. ] #pagebreak() #align(horizon + center)[ #figure( raw-render(```dot graph G { rankdir=LR; layout=dot; a -- b; a -- c[color=red]; b -- c; a -- d[color=red]; c -- e[color=red]; d -- e[color=red]; e -- f; {rank=same; a;}; {rank=same; b; c;}; {rank=same; d; e;}; {rank=same; f;}; } ```), caption: "Cycle of Length 4", ) <cycle> ] == Eulerian Path #align(horizon)[ An *Eulerian path* is a path that uses each edge exactly once. If such a path exists, the graph is called traversable. An *Eulerian cycle* is a cycle that uses each edge exactly once. ] == Hamiltonian Path #align(horizon)[ A *Hamiltonian path* is a path that visits each vertex exactly once. A *Hamiltonian cycle*#footnote[ fun fact: one of the first zero-knowledge proof schemes was based on finding a Hamiltonian cycle in a giant graph. For more details, see #link( "https://en.wikipedia.org/wiki/Zero-knowledge_proof#Hamiltonian_cycle_for_a_large_graph", )[Wikipedia] and the #link( "https://web.archive.org/web/20230103032937/http://euler.nmt.edu/~brian/students/pope.pdf", )[original paper]. ] is a cycle that visits each vertex exactly once. ] == #link("https://en.wikipedia.org/wiki/Travelling_salesman_problem")[Traveling Salesman Problem] The *traveling salesman problem* (TSP) is a problem that tries to determine the shortest route to visit a series of cities (visiting each city only once), returning to the starting city. 
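#pagebreak()

#align(horizon)[
  #text(size: 11pt)[
    For small instances, the TSP can be solved by brute force, trying every order in which the cities can be visited — a hedged sketch (the `N` cities and the `cost` matrix are illustrative assumptions):

    ```c
    #include <limits.h>

    #define N 4 // illustrative number of cities

    // Illustrative symmetric cost matrix
    static const int cost[N][N] = {
        {0, 10, 15, 20},
        {10, 0, 35, 25},
        {15, 35, 0, 30},
        {20, 25, 30, 0},
    };

    // Recursively tries every permutation of the remaining cities and
    // returns the cheapest cycle that starts and ends at city 0.
    static int tsp(int current, int visited_mask, int cities_left) {
        if (cities_left == 0)
            return cost[current][0]; // close the cycle
        int best = INT_MAX;
        for (int next = 1; next < N; next++) {
            if (visited_mask & (1 << next)) continue;
            int rest = tsp(next, visited_mask | (1 << next), cities_left - 1);
            if (rest != INT_MAX && cost[current][next] + rest < best)
                best = cost[current][next] + rest;
        }
        return best;
    }

    int tsp_brute_force(void) {
        return tsp(0, 1, N - 1); // O(n!) permutations in the worst case
    }
    ```
  ]
]

#pagebreak()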
#align(horizon + center)[ #figure( raw-render( ```dot graph G { layout=neato; 1[pos="0,0!"] 2[pos="2,1!"] 3[pos="4,0.5!"] 4[pos="1,-1!"] 5[pos="3,-1!"] 1 -- 2[label=2]; 1 -- 4[label=3]; 1 -- 5[label=6]; 2 -- 3[label=4]; 2 -- 4[label=3]; 3 -- 4[label=7]; 3 -- 5[label=3]; 4 -- 5[label=3]; } ```, width: 50%, ), caption: "Traveling Salesman Problem", ) <travelling-salesman-problem> ] #pagebreak() #align(horizon)[ Formulating it in graph terms, the TSP is a problem of finding a Hamiltonian cycle such that the cost of the cycle is the smallest possible. $ C = min_("cycle") sum_(i=1)^n c_(i, i+1) $ ] == Practical Section (C or pseudocode) #align(horizon)[ - Find an Eulerian path in C - Find a Hamiltonian cycle in C ] = Trees #align(horizon + center)[#image("images/trees_meme.jpg", width: 50%)] == What are Trees? Trees are *acyclic* and *connected* graphs. #align(horizon + center)[ #figure( raw-render( ```dot digraph G { a -> {b, c}; b -> d; c -> {e, f}; } ```, width: 50%, ), caption: "Tree", ) <tree> ] #pagebreak() #align(horizon)[ - *Root*: the vertex with no incoming edges. All trees have (only) one root vertex. - *Leaf*: vertex with no outgoing edges. - *Level*: distance from the root. - *Height*: maximum level. - *Parent*: vertex/vertices with a lower level (closer to the root). - *Child*: vertex/vertices with a higher level (further from the root). - *Ancestor*: vertex/vertices with a lower level. - *Descendant*: vertex/vertices with a higher level. ] == Subtrees Subtrees are trees that are subsets of a tree. 
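#pagebreak()

#align(horizon)[
  #text(size: 11pt)[
    Since a connected graph with $n$ vertices is acyclic exactly when it has $n - 1$ edges, a tree check reduces to "connected and $n - 1$ edges" — a minimal sketch, assuming an undirected adjacency-matrix representation (`MAX` and `is_tree` are illustrative):

    ```c
    #define MAX 100

    // DFS that marks reachable vertices; returns how many were visited.
    static int dfs_count(int adj[MAX][MAX], int n, int v, int visited[MAX]) {
        visited[v] = 1;
        int count = 1;
        for (int w = 0; w < n; w++)
            if (adj[v][w] && !visited[w])
                count += dfs_count(adj, n, w, visited);
        return count;
    }

    // A graph is a tree iff it is connected and has exactly n - 1 edges.
    int is_tree(int adj[MAX][MAX], int n) {
        int edges = 0;
        for (int i = 0; i < n; i++)
            for (int j = i + 1; j < n; j++)
                if (adj[i][j]) edges++;
        if (edges != n - 1) return 0;
        int visited[MAX] = {0};
        return dfs_count(adj, n, 0, visited) == n;
    }
    ```
  ]
]

#pagebreak()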
#align(horizon + center)[ #figure( raw-render( ```dot digraph G { a b c[color=red,fontcolor=red] e[color=red,fontcolor=red]; f[color=red,fontcolor=red]; a -> {b, c}; b -> d; c -> {e, f}[color=red]; } ```, width: 45%, ), caption: "Subtree", ) <subtree> ] == Tree Types #align(horizon + center)[ #figure( raw-render(```dot digraph G { a -> b b -> c; } ```), caption: "Path Tree", ) <tree-path> ] #pagebreak() #align(horizon + center)[ #figure( raw-render( ```dot graph G { layout=circo; a -- {b, c, d, e , f, g}; } ```, width: 66%, ), caption: "Star Tree", ) <tree-path> ] #pagebreak() #align(horizon + center)[ #figure( raw-render( ```dot // taken from https://stackoverflow.com/a/23430742 digraph G { nodesep=0.2; ranksep=0.5; {node[style=invis,label=""]; cx_d;} {node[style=invis, label="", width=.1]; ocx_f; ocx_b;} {rank=same; b; f; cx_d} {rank=same; a; c; ocx_b} {rank=same; e; g; ocx_f} d -> b; d -> f; b -> a; b -> c; f -> e; f -> g; { edge[style=invis]; // Distantiate nodes d -> cx_d; b -> cx_d -> f; // Force ordering between children f -> ocx_f; e -> ocx_f -> g; b -> ocx_b; a -> ocx_b -> c; } } ```, width: 50%, ), caption: "Binary Tree", ) <tree-binary> ] == Balanced Trees A tree is *balanced* if the height difference between the left and right subtrees is at most 1. 
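#pagebreak()

#align(horizon)[
  #text(size: 11pt)[
    The definition translates directly into a recursive check — a minimal sketch for binary trees (the `Node` struct and function names are illustrative assumptions):

    ```c
    #include <stddef.h>

    typedef struct Node {
        struct Node *left, *right;
    } Node;

    static int max_int(int a, int b) { return a > b ? a : b; }

    // Height of a binary tree; an empty tree has height -1.
    int height(const Node *root) {
        if (root == NULL) return -1;
        return 1 + max_int(height(root->left), height(root->right));
    }

    // Balanced: at every node the subtree heights differ by at most 1.
    int is_balanced(const Node *root) {
        if (root == NULL) return 1;
        int diff = height(root->left) - height(root->right);
        if (diff < -1 || diff > 1) return 0;
        return is_balanced(root->left) && is_balanced(root->right);
    }
    ```
  ]
]

#pagebreak()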
#align(horizon + center)[ #figure( raw-render( ```dot digraph G { a -> {b, c}; b -> d; c -> {e, f}; } ```, width: 45%, ), caption: "Balanced Tree", ) <balanced-tree> ] #pagebreak() #align(horizon + center)[ #figure( raw-render( ```dot digraph G { a -> {b, c}; c -> {d, e}; d -> f; } ```, width: 40%, ), caption: "Unbalanced Tree", ) <unbalanced-tree> ] == Practical Section (C or pseudocode) #align(horizon)[ - Detect if a graph is a tree (i.e., if it is acyclic and connected) - Detect which vertex is the root of a tree ] = Interlude: Polynomial and Exponential Functions #align(horizon + center)[#image("images/polynomials_meme.jpg", width: 50%)] == Polynomial Functions #align(horizon)[ A function is polynomial if it can be expressed in the form $ O(a_n dot n^k + a_(n-1) dot n^(k-1) + ... + a_1 dot n + a_0) $ where: - $n$ is the input size - $k$ is a constant - $a_n, a_{n-1}, ..., a_1, a_0$ are constant coefficients ] == Examples #align(horizon)[ - $n$ - $n^2 + 3n + 2$ - $4n^4 + n^3 + 2n^2 + 7$ - $n^(100) + 500n^(50) + 3000$ ] == Big-O Notation #text(size: 15pt)[ #align(horizon)[ Big-O notation is used to describe the complexity of an algorithm. For example, in the function $n^3 + n^2 + 5n + 100$, the largest constant $k = 3$ will asymptotically#footnote[ as something approaches infinity, i.e., $lim -> oo$. ] dominate the computation time, so the complexity is $O(n^3)$. Also, in Big-O notation, we disregard constant coefficients. For example, $O(3n^2)$ simplifies to $O(n^2)$ and $50 O(1)$ simplifies to $O(1)$. ] ] == Types of Polynomial Complexity #align(horizon)[ - Constant: $O(1)$ - Logarithmic: $O(log n)$ - Linear: $O(n)$ - Log-linear#footnote[also called linearithmic.]: $O(n log n)$ - Quadratic: $O(n^2)$ - Cubic: $O(n^3)$ - Polynomial: $O(n^k)$ ] == Exponential Functions #align(horizon)[ A function is exponential if it can be reduced using Big-O notation to $ O(n^m) $ where $m$ *_is not_* a positive constant. 
For example, $O(2^n)$ is an exponential complexity#footnote[ note that $n$ is not constant. ]. ] == Types of Exponential Complexity #align(horizon)[ - Exponential: $2^n$ - Factorial: $n!$ - Superexponential: $n^n$ - Doubly Exponential: $2^(2^n)$ ] = Computational Complexity #align(horizon + center)[#image("images/big_o_meme.jpg", width: 45%)] == Definition #align(horizon)[ The computational complexity of an algorithm is the *number of computational operations (such as arithmetic operations, comparisons, and memory accesses) required for its execution*. #v(1em) This number clearly depends on the size and nature of the inputs. ] == Bounded Complexity #align(horizon)[ If the complexity of an algorithm is bounded by a function $f(n)$, where $f$ is a polynomial function of $n$ (input size), then the algorithm is said to have *polynomial* complexity. #v(1em) Algorithms with polynomial complexity belong to the class $cal(P)$ ] == Class $cal(P)$ #align(horizon)[ A *decision problem* is a problem that has a *yes* or *no* answer. Such a problem belongs to the class $cal(P)$ if there exists an algorithm that solves any instance of the problem in *polynomial complexity*. ] == Class $cal(N P)$ #align(horizon)[ A decision problem belongs to the class $cal(N P)$ if there exists a *polynomial-time algorithm that _verifies_ the solution to a problem*. #v(1em) It is trivial to establish that $cal(P) subset.eq cal(N P)$. 
]

== Examples of $cal(P)$ and $cal(N P)$ Problems

#align(horizon)[
  - Class $cal(P)$:
    - Sorting algorithms
    - Search algorithms
  - Class $cal(N P)$:
    - Problem of prime factorization of integers
    - Traveling salesman problem
]

== $cal(P)$ vs $cal(N P)$

#text(size: 9.8pt)[
  #table(
    columns: 3,
    align: left + horizon,
    table.header([], [*$cal(P)$*], [*$cal(N P)$*]),
    [*Solvability*],
    [Solvable efficiently in polynomial time.],
    [Efficient verification, but the solution may not be found efficiently.],
    [*Time Complexity*],
    [Polynomial-time algorithms are known.],
    [Efficient verification algorithms are known, but efficient solution algorithms are not guaranteed.],
    [*Nature of Solutions*],
    [Solutions can be found efficiently.],
    [Solutions, once proposed, can be verified efficiently.],
    [*Known Relationship*],
    [$cal(P)$ is a subset of $cal(N P)$.],
    [It is unknown whether $cal(P)$ is a proper subset of $cal(N P)$ or if they are equal.],
  )
]

== $cal(P)$ vs $cal(N P)$

#align(horizon + center)[#image("images/p_vs_np.png")]

== $cal(N P)$-complete

#align(horizon)[
  A $cal(N P)$-complete problem is an $cal(N P)$ problem that is *as hard as any other problem in $cal(N P)$*.

  If a $cal(N P)$-complete problem can be solved in polynomial time, then all problems in $cal(N P)$ can also be solved in polynomial time.
]

== Boolean Satisfiability (SAT)

#align(horizon)[
  The Boolean satisfiability problem (SAT) seeks to determine whether a *propositional formula can be made true* by means of an appropriate assignment ("solution") of truth values to its variables.

  $ (a and b and c) or (d and e and f) or (g and h and i) or (j and k and l) $

  where $a, b, c, d, e, f, g, h, i, j, k, l$ are boolean variables, and $and$ (`AND`) and $or$ (`OR`) are boolean operators.

  #pagebreak()

  #v(1em)

  Although it is easy to verify whether a given assignment makes the formula true, there is no known faster method for finding a satisfying assignment other than testing all possible assignments.
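  #pagebreak()

  #v(1em)

  The "easy to verify" half can be sketched directly — checking the example formula above for one assignment (the function name `satisfies` is an illustrative assumption):

  ```c
  #include <stdbool.h>

  // Evaluates (a AND b AND c) OR (d AND e AND f) OR
  // (g AND h AND i) OR (j AND k AND l): O(1) per assignment checked.
  bool satisfies(bool a, bool b, bool c, bool d, bool e, bool f,
                 bool g, bool h, bool i, bool j, bool k, bool l) {
      return (a && b && c) || (d && e && f) ||
             (g && h && i) || (j && k && l);
  }
  ```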
  #v(1em)

  #link("https://en.wikipedia.org/wiki/Cook%E2%80%93Levin_theorem")[Cook and Levin proved] that every problem that can be easily verified can be solved as quickly as SAT, which is why it is $cal(N P)$-complete.
]

== $cal(N P)$-hard

#align(horizon)[
  An $cal(N P)$-hard problem is one for which *no efficient algorithm is known to solve it*.

  However, if an efficient algorithm for an $cal(N P)$-hard problem is found, then all problems in $cal(N P)$ can be solved efficiently.
]

== $cal(P)$ vs $cal(N P)$-complete and $cal(N P)$-hard

#align(horizon + center)[#image("images/P_np_np-complete_np-hard.svg")]

== $cal(P)$ vs $cal(N P)$-complete and $cal(N P)$-hard

#align(horizon)[
  - $cal(N P)$-complete:
    - Traveling Salesman Problem in decision form: "Is there a cycle with a cost less than or equal to X?"
  - $cal(N P)$-hard:
    - Traveling Salesman Problem in optimization form: "What is the minimum-cost cycle?"
]

== Practical Section (C)

#text(size: 14pt)[
  #align(horizon)[
    #link("https://en.wikipedia.org/wiki/Knapsack_problem")[*Knapsack Problem*]

    You are an adventurer and have found a cave full of treasures. However, your backpack has limited capacity and you need to decide which items to take to maximize the total value, without exceeding the capacity of the backpack.

    You have a list of `n` items, where each item `i` has:

    - *Value*: $v[i]$ (in gold)
    - *Weight*: $w[i]$ (in kilograms)

    Your backpack's capacity is $W$ (in kilograms).

    #pagebreak()

    - Write an algorithm that determines the subset of items that maximizes the total value in the backpack without exceeding the total weight $W$.
    - Write an algorithm that, given a certain input of items and capacity, determines whether it is possible to fit all items in the backpack.
  ]
]

= Identifying Algorithm Complexity

#align(horizon + center)[#image(
  "images/recursion_joker_debugging_meme.jpg",
  width: 80%,
)]

== Introduction

#align(horizon)[
  Complexity analysis is essential for evaluating the *efficiency of algorithms*.
It helps us predict the behavior of an algorithm as the input size increases, which is crucial for *optimization* and choosing the *right algorithm* for a specific application. ] == Big-O Notation #align(horizon)[ Big-O notation ($O$) is used to describe the *worst-case runtime of an algorithm* in terms of the *input size $n$*. ] == Steps to Determine Complexity #align(horizon)[ 1. *Identify the dominant operations*: Focus on operations that are executed repeatedly, such as loops, recursions, and function calls. #pagebreak() 2. *Estimate how many times these operations are executed*: Analyze the depth and number of iterations of loops and recursions. #pagebreak() 3. *Ignore non-dominant terms and constants*: In Big-O notation, we ignore constant multiplicative factors and lower-order terms. #pagebreak() 4. *Choose the appropriate Big-O notation*: Use the result from the previous steps to identify the correct Big-O complexity. ] #pagebreak() #align(horizon + center)[#image( "images/recursion_world_burn_meme.jpeg", width: 80%, )] == Control Structures - *Sequential Structures*: constant complexity $O(1)$ - *Conditional Structures*: constant complexity $O(1)$ - *Loops*: linear complexity $O(n)$ - *Nested Loop*: quadratic complexity $O(n^2)$ - *Recursion*: - *Linear*: linear complexity $O(n)$ - *Divide-and-Conquer*: logarithmic complexity $O(log n)$ - *Binary*: complexity $O(n log n)$ - *Exponential*: exponential complexity $O(2^n)$ == Sequential Structures #align(horizon)[ Control structures that do not involve loops or recursion have constant complexity $O(1)$. ```c int x = 5; int y = 10; int z = x + y; // O(1) ``` ] == Conditional Structures #align(horizon)[ Simple conditional structures, such as `if`-`else`, do not affect complexity, but the execution of internal blocks should be considered. 
```c if (x > y) { z = x - y; // O(1) } else { z = y - x; // O(1) } // Total complexity: O(1) ``` ] == Loops #align(horizon)[ The complexity of a loop depends on the number of iterations: - *Simple Loop*: ```c for (int i = 0; i < n; i++) { // O(1) operation } // Total complexity: O(n) ``` #pagebreak() - *Nested Loop*: ```c for (int i = 0; i < n; i++) { for (int j = 0; j < n; j++) { // O(1) operation } } // Total complexity: O(n^2) ``` #pagebreak() - *Loop with multiplicative increment*: ```c for (int i = 1; i < n; i *= 2) { // O(1) operation } // Total complexity: O(log n) ``` ] == Recursion #align(horizon + center)[#image( "images/recursion_joker_stackoverflow_meme.jpeg", width: 80%, )] #pagebreak() #align(horizon)[ The complexity of recursive algorithms depends on the number of recursive calls and the input size in each call. - *Linear Recursion*: ```c void linear_recursion(int n) { if (n == 0) return; // O(1) operation linear_recursion(n-1); } // Total complexity: O(n) ``` #pagebreak() - *Divide-and-Conquer Recursion*: ```c void divide_and_conquer_recursion(int n) { if (n == 0) return; // O(1) operation divide_and_conquer_recursion(n/2); } // Total complexity: O(log n) ``` #pagebreak() - *Binary Recursion (like Merge Sort):*: ```c void merge_sort(int arr[], int n) { if (n < 2) return; int mid = n / 2; merge_sort(arr, mid); merge_sort(arr + mid, n - mid); merge(arr, mid, n - mid); // O(n) } // Total complexity: O(n log n) ``` #pagebreak() - *Exponential Recursion*: ```c int fibonacci(int n) { if (n <= 1) return n; return fibonacci(n-1) + fibonacci(n-2); } // Total complexity: O(2^n) ``` ] == Practical Example - Linear Search #align(horizon)[ ```c int linear_search(int arr[], int n, int x) { for (int i = 0; i < n; i++) { if (arr[i] == x) return i; } return -1; } // Total complexity: O(n) ``` ] == Practical Example - Binary Search #align(horizon)[ ```c int binary_search(int arr[], int n, int x) { int start = 0, end = n - 1; while (start <= end) { int mid = (start + 
end) / 2; if (arr[mid] == x) return mid; if (arr[mid] < x) start = mid + 1; else end = mid - 1; } return -1; } // Total complexity: O(log n) ``` ] == Practical Example - Bubble Sort #text(size: 14pt)[ #align(horizon)[ ```c void bubble_sort(int arr[], int n) { for (int i = 0; i < n-1; i++) { for (int j = 0; j < n-i-1; j++) { if (arr[j] > arr[j+1]) { // swap elements int temp = arr[j]; arr[j] = arr[j+1]; arr[j+1] = temp; } } } } // Total complexity: O(n^2) ``` ] ] == Practical Section (C or pseudocode) #align(horizon)[ - Implement and determine the complexity of an algorithm that counts the number of occurrences of an element in a matrix. - Find a way to reduce the complexity of calculating Fibonacci. ] = Interlude: Analyzing Algorithm Complexity using C Code #align(horizon + center)[#image("images/programming_meme.jpg", width: 50%)] #pagebreak() #align(horizon)[ #text(size: 12pt)[ Reversing an array: // Total complexity: O(n) ```c void reverse_array(int arr[], int n) { int start = 0, end = n - 1; while (start < end) { int temp = arr[start]; arr[start] = arr[end]; arr[end] = temp; start++; end--; } } ``` #pagebreak() Checking if a string is a #link("https://en.wikipedia.org/wiki/Palindrome")[palindrome]: // Total complexity: O(n) ```c bool is_palindrome(char str[]) { int start = 0; // strings in C are null-terminated int end = strlen(str) - 1; while (start < end) { if (str[start] != str[end]) { return false; } start++; end--; } return true; } ``` #pagebreak() Find the maximum difference between two elements in an array where the larger element comes after the smaller one: // Total complexity: O(n) #text(size: 11pt)[ ```c int max_difference(int arr[], int n) { int min_element = arr[0]; int max_diff = arr[1] - arr[0]; for (int i = 1; i < n; i++) { if (arr[i] - min_element > max_diff) { max_diff = arr[i] - min_element; } if (arr[i] < min_element) { min_element = arr[i]; } } return max_diff; } ``` ] #pagebreak() Sorting an array using the 
#link("https://en.wikipedia.org/wiki/Insertion_sort")[insertion sort] algorithm: // Total complexity: O(n^2) ```c void insertion_sort(int arr[], int n) { for (int i = 1; i < n; i++) { int key = arr[i]; int j = i - 1; while (j >= 0 && arr[j] > key) { arr[j + 1] = arr[j]; j = j - 1; } arr[j + 1] = key; } } ``` #pagebreak() Find the duplicates in an array: // Total complexity: O(n^2) ```c void find_duplicates(int arr[], int n) { for (int i = 0; i < n - 1; i++) { for (int j = i + 1; j < n; j++) { if (arr[i] == arr[j]) { printf("Duplicate found: %d\n", arr[i]); } } } } ``` #pagebreak() Compute the power of a number: // Total complexity: O(log n) ```c int power(int x, int n) { if (n == 0) { return 1; } int half = power(x, n / 2); if (n % 2 == 0) { return half * half; } else { return x * half * half; } } ``` #pagebreak() Find the #link("https://en.wikipedia.org/wiki/Greatest_common_divisor")[greatest common divisor] of two numbers: // Total complexity: O(log(min(a, b))) ```c int gcd(int a, int b) { if (b == 0) { return a; } return gcd(b, a % b); } ``` #pagebreak() #link("https://en.wikipedia.org/wiki/Primality_test")[Prime number check] (naive method): // Total complexity: O(sqrt(n)) ```c bool is_prime(int n) { if (n <= 1) { return false; } for (int i = 2; i * i <= n; i++) { if (n % i == 0) { return false; } } return true; } ``` #pagebreak() Find the majority element (an element that appears more than $n/2$ times) using #link( "https://en.wikipedia.org/wiki/Boyer%E2%80%93Moore_majority_vote_algorithm", )[Boyer-Moore's Voting Algorithm]. 
// Total complexity: O(n)

#text(size: 6pt)[
  ```c
  int find_majority_element(int arr[], int n) {
      int count = 0, candidate = -1;

      // Find potential candidate
      for (int i = 0; i < n; i++) {
          if (count == 0) {
              candidate = arr[i];
              count = 1;
          } else if (arr[i] == candidate) {
              count++;
          } else {
              count--;
          }
      }

      // Verify if the candidate is the majority element
      count = 0;
      for (int i = 0; i < n; i++) {
          if (arr[i] == candidate) {
              count++;
          }
      }

      if (count > n / 2) {
          return candidate;
      } else {
          return -1; // No majority element
      }
  }
  ```
]

#pagebreak()

Generate #link("https://en.wikipedia.org/wiki/Pascal%27s_triangle")[Pascal's Triangle]:

// Total complexity: O(n^2)

#text(size: 11pt)[
  ```c
  void generate_pascals_triangle(int n) {
      int arr[n][n];
      for (int line = 0; line < n; line++) {
          for (int i = 0; i <= line; i++) {
              if (i == 0 || i == line) {
                  arr[line][i] = 1;
              } else {
                  arr[line][i] = arr[line - 1][i - 1] + arr[line - 1][i];
              }
              printf("%d ", arr[line][i]);
          }
          printf("\n");
      }
  }
  ```
]

#pagebreak()

#link("https://en.wikipedia.org/wiki/Maximum_subarray_problem#Kadane's_algorithm")[Kadane's algorithm] to find the maximum subarray sum:

// Total complexity: O(n)

#text(size: 11pt)[
  ```c
  int max_subarray_sum(int arr[], int n) {
      int max_ending_here = 0;
      int max_so_far = INT_MIN;
      for (int i = 0; i < n; i++) {
          max_ending_here = max_ending_here + arr[i];
          if (max_so_far < max_ending_here) {
              max_so_far = max_ending_here;
          }
          if (max_ending_here < 0) {
              max_ending_here = 0;
          }
      }
      return max_so_far;
  }
  ```
]

#pagebreak()

#link(
  "https://en.wikipedia.org/wiki/Longest_common_subsequence",
)[Longest Common Subsequence (LCS)]:

// Total complexity: O(n * m)

#text(size: 10pt)[
  ```c
  // max is not in the C standard library; define it as a macro
  #define max(a, b) ((a) > (b) ? (a) : (b))

  int lcs(char *X, char *Y, int m, int n) {
      int dp[m + 1][n + 1];
      for (int i = 0; i <= m; i++) {
          for (int j = 0; j <= n; j++) {
              if (i == 0 || j == 0) {
                  dp[i][j] = 0;
              } else if (X[i - 1] == Y[j - 1]) {
                  dp[i][j] = dp[i - 1][j - 1] + 1;
              } else {
                  dp[i][j] = max(dp[i - 1][j], dp[i][j - 1]);
              }
          }
      }
      return dp[m][n];
  }
  ```
]

#pagebreak()

Merge two sorted
arrays:

// Total complexity: O(m + n)

#text(size: 8pt)[
  ```c
  void merge_sorted_arrays(int A[], int B[], int m, int n, int C[]) {
      int i = 0, j = 0, k = 0;
      int N = m + n; // N is the combined length of A and B

      while (i < m && j < n) {
          if (A[i] <= B[j]) {
              C[k++] = A[i++];
          } else {
              C[k++] = B[j++];
          }
      }

      // Copy the remaining elements of A, if any
      while (i < m) {
          C[k++] = A[i++];
      }

      // Copy the remaining elements of B, if any
      while (j < n) {
          C[k++] = B[j++];
      }
  }
  ```
]

#pagebreak()

This algorithm is a #link(
  "https://en.wikipedia.org/wiki/Fast_inverse_square_root",
)[fast way to compute the inverse square root], $1 / sqrt(x)$, made famous by its use in the game Quake III Arena by #link("https://en.wikipedia.org/wiki/John_Carmack")[John Carmack]. The method uses a clever approximation and a single iteration of #link("https://en.wikipedia.org/wiki/Newton%27s_method")[Newton's method] to refine it.

// Total complexity: O(1)

#text(size: 8pt)[
  ```c
  float Q_rsqrt(float number) {
      long i;
      float x2, y;
      const float threehalfs = 1.5F;

      x2 = number * 0.5F;
      y = number;
      i = *(long*)&y;                      // Evil bit-level hacking
      i = 0x5f3759df - (i >> 1);           // What the fuck?
      y = *(float*)&i;
      y = y * (threehalfs - (x2 * y * y)); // 1st iteration of Newton's method
      // y = y * ( threehalfs - ( x2 * y * y ) ); // 2nd iteration, this can be removed

      return y;
  }
  ```
]
]
]

= Search Algorithms

#align(horizon + center)[#image(
  "images/search_algorithms_meme.png",
  width: 100%,
)]

== What is a Search Algorithm?

#align(horizon)[
  A *search algorithm* is a sequence of instructions that allows finding a specific element within a data structure.

  It is fundamental in computer science as it optimizes data access and manipulation.
]

== Why Study Search Algorithms?

#align(horizon)[
  - *Efficiency*: Efficient search algorithms save time and resources.
  - *Foundations*: They are the basis for more complex algorithms and data structures.
- *Practical Applications*: Used in databases, operating systems, artificial intelligence, among others. ] == Types of Search Algorithms #align(horizon)[ - *Linear Search* - *Binary Search* - *Graph Search Algorithms*: - *Breadth-First Search (BFS)* - *Depth-First Search (DFS)* ] == Linear Search === Concept #align(horizon)[ *Linear search* is the simplest search algorithm. It sequentially checks each element of the data structure until it finds the desired element or reaches the end of the structure. ] #pagebreak() === Characteristics of Linear Search #align(horizon)[ - *Simple to Implement* - *Does Not Require Ordered Structure* - *Time Complexity*: $O(n)$, where $n$ is the number of elements. ] #pagebreak() === Pseudoalgorithm #align(horizon)[ #figure( kind: "algorithm", supplement: [Algorithm], caption: [Linear Search], text(size: 12pt)[ #pseudocode-list( title: smallcaps[given a list $A$ of $n$ elements with values $A_0, dots A_(n-1)$, and target value $T$:], )[ + *for* $i$ *in* $A$: + *if* $A_i = T$: + *return* $i$ + *return* _null_ ] ], ) <linear-search> ] #pagebreak() === Example in C #align(horizon)[ #text(size: 14pt)[ ```c int linear_search(int arr[], int n, int x) { for (int i = 0; i < n; i++) { if (arr[i] == x) // Element found at position i return i; } return -1; // Element not found } ``` ] ] #pagebreak() === Complexity Analysis #align(horizon)[ - *Best Case*: The element is at the first position; $O(1)$. - *Worst Case*: The element is at the last position or not present; $O(n)$. - *Average Case*: On average, it checks half of the elements; $1/2 O(n) = O(n)$. ] == Binary Search #align(horizon)[ *Binary search* is an efficient algorithm to find an element in an ordered list, reducing the search space by half with each iteration. 
]

#pagebreak()

=== Characteristics of Binary Search

#align(horizon)[
  - *Requires Ordered Structure*
  - *Time Complexity*: $O(log n)$
  - *More Efficient than Linear Search in Large Data Sets*
]

#pagebreak()

=== Pseudoalgorithm

#align(horizon)[
  #figure(
    kind: "algorithm",
    supplement: [Algorithm],
    caption: [Binary Search],
    text(size: 9pt)[
      #pseudocode-list(
        title: smallcaps[given an _ordered_ list $A$ of $n$ elements with values $A_0, dots A_(n-1)$, and target value $T$:],
      )[
        + $L := 0$; $R := n-1$.
        + *while* $L <= R$:
          + $m := floor((L+R) / 2)$
          + *if* $A_m < T$:
            + $L := m+1$
          + *else if* $A_m > T$:
            + $R := m-1$
          + *else*:
            + *return* $m$
        + *return* _null_
      ]
    ],
  ) <binary-search>
]

#pagebreak()

=== Example in C

#align(horizon)[
  #text(size: 14pt)[
    ```c
    int binary_search(int arr[], int n, int x) {
        int start = 0, end = n - 1;
        while (start <= end) {
            int mid = start + (end - start) / 2;
            if (arr[mid] == x)
                return mid; // Element found
            if (arr[mid] < x)
                start = mid + 1;
            else
                end = mid - 1;
        }
        return -1; // Element not found
    }
    ```
  ]
]

#pagebreak()

=== Complexity Analysis

#align(horizon)[
  - With each iteration, the algorithm halves the search space.
  - *Time Complexity*: $O(log n)$
  - *Efficiency*: Much faster than linear search in large data sets.
]

#pagebreak()

=== What if we use $k > 1$ pivots?

#align(horizon)[
  #text(size: 14pt)[
    At each step:

    - Divides the array into $k+1$ partitions
    - Search space reduces to $n / (k+1)$
    - Comparisons increase from $1$ to $k$

    Total comparisons:

    - Total steps: $log_(k+1) n$
    - Total comparisons: $k dot.c log_(k+1) n$

    Complexity:

    - $O (k dot.c (log n) / (log(k+1))) = O(log n)$
  ]
]

== Graph Search Algorithms

=== Types of Graph Search Algorithms

#align(horizon)[
  - *Breadth-First Search (BFS)*
  - *Depth-First Search (DFS)*
]

#pagebreak()

=== Applications

#align(horizon)[
  - *Finding Paths*: Between two vertices in a graph.
  - *Checking Connectivity*: Whether all vertices are reachable.
  - *Cycle Detection*: In directed and undirected graphs.
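  #pagebreak()

  The cycle-detection application can be sketched with a DFS that flags any visited neighbor other than the vertex it came from — a minimal sketch for undirected graphs, assuming an adjacency-matrix representation (`MAX` and `has_cycle` are illustrative):

  ```c
  #define MAX 100

  // DFS from v; returns 1 if a back edge (i.e., a cycle) is found.
  static int dfs_cycle(int adj[MAX][MAX], int n, int v, int parent,
                       int visited[MAX]) {
      visited[v] = 1;
      for (int w = 0; w < n; w++) {
          if (!adj[v][w]) continue;
          if (!visited[w]) {
              if (dfs_cycle(adj, n, w, v, visited)) return 1;
          } else if (w != parent) {
              return 1; // visited neighbor that is not the parent: cycle
          }
      }
      return 0;
  }

  int has_cycle(int adj[MAX][MAX], int n) {
      int visited[MAX] = {0};
      for (int v = 0; v < n; v++) // handles disconnected graphs too
          if (!visited[v] && dfs_cycle(adj, n, v, -1, visited))
              return 1;
      return 0;
  }
  ```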
]

== Breadth-First Search (BFS)

=== Concept

#align(horizon)[
  *Breadth-first search* explores a graph by visiting all vertices at the same distance layer from the origin before moving to the next layer.
]

#pagebreak()

=== Characteristics of BFS

#align(horizon)[
  - *Uses a Queue*
  - *Guarantees the Shortest Path in Unweighted Graphs*
  - *Time Complexity*: $O(V + E)$, where $V$ is the number of vertices and $E$ is the number of edges.
]

#pagebreak()

=== Pseudoalgorithm

#align(horizon)[
  #figure(
    kind: "algorithm",
    supplement: [Algorithm],
    caption: [Breadth-first Search],
    text(size: 8pt)[
      #pseudocode-list(
        title: smallcaps[given a graph $G$, a _root_ vertex, and target value $T$:],
      )[
        + $Q := "queue"$
        + _root_.explored $:= "true"$
        + $Q$.enqueue(_root_)
        + *while* $!Q$.empty():
          + $v := Q$.dequeue()
          + *if* $v = T$
            + *return* $v$
          + *for* all edges from $v$ to $w$ *in* $G$.adjacentEdges(v):
            + *if* $!w$.explored:
              + $w$.explored $:= "true"$
              + $w$.parent $:= v$
              + $Q$.enqueue($w$)
        + *return* _null_
      ]
    ],
  ) <breadth-first-search>
]

#pagebreak()

=== Example in C

#align(horizon)[
  #text(size: 7pt)[
    ```c
    int bfs(int graph[][MAX], int start, int n, int target) {
        if (start == target) return start;
        int visited[MAX] = {0};
        int queue[MAX], front = 0, rear = 0;

        visited[start] = 1;
        queue[rear++] = start;

        while (front < rear) {
            int current = queue[front++];
            for (int i = 0; i < n; i++) {
                if (graph[current][i] && !visited[i]) {
                    if (i == target) return i; // target vertex found
                    visited[i] = 1;
                    queue[rear++] = i;
                }
            }
        }
        return -1; // target not reachable from start
    }
    ```
  ]
]

#pagebreak()

=== Illustration of BFS

#align(horizon + center)[
  #figure(
    raw-render(
      ```dot
      digraph BFS {
        rankdir=TB;
        node [shape=circle, style=filled, color=lightgrey];

        // Nodes with numbered labels
        A [label="A\n(1)"];
        B [label="B\n(2)"];
        C [label="C\n(3)"];
        D [label="D\n(4)"];
        E [label="E\n(5)"];
        F [label="F\n(6)"];
        G [label="G\n(7)"];
        H [label="H\n(8)"];

        // Edges
        A -> B;
        A -> C;
        B -> D;
        B -> E;
        C -> F;
        C -> G;
        E -> H;
      }
      ```,
      width: 37%,
    ),
    caption:
"Illustration of BFS with vertices numbered by visitation order",
  )
]

== Depth-First Search (DFS)

=== Concept

#align(horizon)[
  *Depth-first search* explores as far as possible along each branch before backtracking.
]

#pagebreak()

=== Characteristics of DFS

#align(horizon)[
  - *Uses a Stack* (can be implemented recursively)
  - *Does Not Guarantee the Shortest Path*
  - *Time Complexity*: $O(V + E)$
]

#pagebreak()

=== Pseudoalgorithm

#align(horizon)[
  #figure(
    kind: "algorithm",
    supplement: [Algorithm],
    caption: [Depth-first Search],
    text(size: 12pt)[
      #pseudocode-list(
        title: smallcaps[given a graph $G$, a vertex $v$, and target value $T$:],
      )[
        + $v$.discovered $:= "true"$
        + *if* $v = T$
          + *return* $v$
        + *for* all edges from $v$ to $w$ *in* $G$.adjacentEdges(v):
          + *if* $!w$.discovered:
            + DFS($G$, $w$)
        + *return* _null_
      ]
    ],
  ) <depth-first-search>
]

#pagebreak()

=== Example in C (Recursive)

#align(horizon)[
  #text(size: 14pt)[
    ```c
    int dfs(int graph[][MAX], int current, int target,
            int visited[], int n) {
        if (current == target) return current;
        visited[current] = 1;
        for (int i = 0; i < n; i++) {
            if (graph[current][i] && !visited[i]) {
                int found = dfs(graph, i, target, visited, n);
                if (found != -1) return found; // propagate the hit
            }
        }
        return -1; // target not reachable from current
    }
    ```
  ]
]

#pagebreak()

=== Illustration of DFS

#align(horizon + center)[
  #figure(
    raw-render(
      ```dot
      digraph DFS {
        rankdir=TB;
        node [shape=circle, style=filled, color=lightgrey];

        // Nodes with numbered labels
        A [label="A\n(1)"];
        B [label="B\n(2)"];
        D [label="D\n(3)"];
        E [label="E\n(4)"];
        H [label="H\n(5)"];
        C [label="C\n(6)"];
        F [label="F\n(7)"];
        G [label="G\n(8)"];

        // Edges
        A -> B;
        A -> C;
        B -> D;
        B -> E;
        E -> H;
        C -> F;
        C -> G;
      }
      ```,
      width: 37%,
    ),
    caption: "Illustration of DFS with vertices numbered by visitation order",
  )
]

== Comparison between BFS and DFS

#align(horizon)[
  #text(size: 12pt)[
    #table(
      columns: 3,
      align: left + horizon,
      table.header([*Characteristic*], [*BFS*], [*DFS*]),
      [*Data Structure*], [Queue], [Stack],
      [*Memory Usage*],
      [Higher (stores all neighbors)],
      [Lower (stores only current
path)], [*Shortest Path*], [Yes (in unweighted graphs)], [Not necessarily], [*Completeness*], [Yes], [Yes], [*Applications*], [Shortest path, node levels], [Cycle detection, topological sorting], ) ] ] = Sorting Algorithms #align(horizon + center)[ #image( "images/sorting_algorithms_meme.jpg", width: 50%, ) ] == Introduction #align(horizon)[ *Sorting algorithms* are algorithms that put elements of a list in a certain order. The most frequently used orders are numerical order and lexicographical order. Sorting is important because it: - Organizes data to make it more usable. - Optimizes the efficiency of other algorithms that require sorted data. - Facilitates searching and data representation. ] == Types of Sorting Algorithms #align(horizon)[ #text(size: 14pt)[ Sorting algorithms can be classified based on several factors: - *Comparison vs Non-comparison based*: Whether elements are compared to determine their order. - *Stable vs Unstable*: Whether equivalent elements maintain their original relative order. - *Time Complexity*: How the runtime increases with the number of elements. - *Space Complexity*: The amount of memory required beyond the input data. ] ] == Stable vs Non-Stable #align(horizon + center)[ #image( "images/sort_stable_vs_unstable.png", width: 100%, ) ] == Common Sorting Algorithms #align(horizon)[ - *Bubble Sort* - *Selection Sort* - *Insertion Sort* - *Merge Sort* - *Quick Sort* - *Heap Sort* - *Counting Sort* - *Radix Sort* - *Bucket Sort* ] == Bubble Sort #align(horizon)[ *Bubble Sort* is a simple comparison-based algorithm where each pair of adjacent elements is compared, and the elements are swapped if they are in the wrong order. This process is repeated until no swaps are needed. It is called "bubble" sort because smaller elements "bubble" to the top of the list. ] #pagebreak() === Algorithm Steps #align(horizon)[ 1. Compare each pair of adjacent items. 2. Swap them if they are in the wrong order. 3. Repeat steps 1 and 2 for all elements. 
4. Continue the process until a pass completes with no swaps. ] #pagebreak() === Pseudocode #align(horizon)[ #figure( kind: "algorithm", supplement: [Algorithm], caption: [Bubble Sort], text(size: 12pt)[ #pseudocode-list( title: smallcaps[Given an array $A$ of $n$ elements:], )[ + *for* $i$ from $0$ to $n - 1$: + *for* $j$ from $0$ to $n - i - 1$: + *if* $A[j] > A[j + 1]$: + swap $A[j]$ and $A[j + 1]$ ] ], ) <bubble-sort> ] #pagebreak() === Example in C #align(horizon)[ #text(size: 14pt)[ ```c void bubble_sort(int arr[], int n) { for (int i = 0; i < n - 1; i++) { // Last i elements are already in place for (int j = 0; j < n - i - 1; j++) { if (arr[j] > arr[j + 1]) { // Swap arr[j] and arr[j + 1] int temp = arr[j]; arr[j] = arr[j + 1]; arr[j + 1] = temp; } } } } ``` ] ] #pagebreak() === Complexity Analysis #align(horizon)[ - *Time Complexity*: - Best Case: $O(n)$ (when the array is already sorted) - Average Case: $O(n^2)$ - Worst Case: $O(n^2)$ - *Space Complexity*: $O(1)$ (in-place sorting) - *Stability*: Stable (equal elements maintain their relative order) ] == Selection Sort #align(horizon)[ *Selection Sort* divides the input list into two parts: a sorted sublist of items built up from left to right, and a sublist of the remaining unsorted items. It repeatedly selects the smallest (or largest) element from the unsorted sublist, swapping it with the leftmost unsorted element. The process continues moving the sublist boundaries one element to the right. ] #pagebreak() === Algorithm Steps #align(horizon)[ 1. Set the first unsorted element as the minimum. 2. Compare this minimum with the next element. 3. If the next element is smaller, set it as the new minimum. 4. Continue until the end of the array. 5. Swap the minimum with the first unsorted position. 6. Move the boundary one element to the right. 7. Repeat until the array is sorted. 
]

=== Pseudocode

#align(horizon)[
  #figure(
    kind: "algorithm",
    supplement: [Algorithm],
    caption: [Selection Sort],
    text(size: 12pt)[
      #pseudocode-list(
        title: smallcaps[Given an array $A$ of $n$ elements:],
      )[
        + *for* $i$ from $0$ to $n - 1$:
          + $"minIdx" := i$
          + *for* $j$ from $i + 1$ to $n - 1$:
            + *if* $A[j] < A["minIdx"]$:
              + $"minIdx" := j$
          + swap $A[i]$ and $A["minIdx"]$
      ]
    ],
  ) <selection-sort>
]

#pagebreak()

=== Example in C

#align(horizon)[
  #text(size: 13pt)[
    ```c
    void selection_sort(int arr[], int n) {
      for (int i = 0; i < n - 1; i++) {
        int min_idx = i;
        for (int j = i + 1; j < n; j++) {
          if (arr[j] < arr[min_idx])
            min_idx = j;
        }

        // Swap the found minimum element with
        // the first element
        int temp = arr[min_idx];
        arr[min_idx] = arr[i];
        arr[i] = temp;
      }
    }
    ```
  ]
]

#pagebreak()

=== Complexity Analysis

#align(horizon)[
  - *Time Complexity*:
    - Best Case: $O(n^2)$
    - Average Case: $O(n^2)$
    - Worst Case: $O(n^2)$
  - *Space Complexity*: $O(1)$
  - *Stability*: Unstable (equal elements may not maintain their relative order)
]

== Insertion Sort

#align(horizon)[
  *Insertion Sort* builds the final sorted array one item at a time. It assumes that the first element is already sorted, then inserts each subsequent element into the correct position relative to the sorted portion.

  It's similar to how people arrange a hand of playing cards.
]

#pagebreak()

=== Algorithm Steps

#align(horizon)[
  1. Start from the second element (index $1$).
  2. Compare the current element with elements in the sorted portion.
  3. Shift all larger elements in the sorted portion one position to the right.
  4. Insert the current element into its correct position.
  5. Repeat until the array is sorted.
] #pagebreak() === Pseudocode #align(horizon)[ #figure( kind: "algorithm", supplement: [Algorithm], caption: [Insertion Sort], text(size: 12pt)[ #pseudocode-list( title: smallcaps[Given an array $A$ of $n$ elements:], )[ + *for* $i$ from $1$ to $n - 1$: + $"key" := A[i]$ + $j := i - 1$ + *while* $j >= 0$ *and* $A[j] > "key"$: + $A[j + 1] := A[j]$ + $j := j - 1$ + $A[j + 1] := "key"$ ] ], ) <insertion-sort> ] #pagebreak() === Example in C #align(horizon)[ #text(size: 11pt)[ ```c void insertion_sort(int arr[], int n) { for (int i = 1; i < n; i++) { int key = arr[i]; int j = i - 1; // Move elements of arr[0..i-1], // that are greater than key, // to one position ahead of // their current position while (j >= 0 && arr[j] > key) { arr[j + 1] = arr[j]; j = j - 1; } arr[j + 1] = key; } } ``` ] ] === Complexity Analysis #align(horizon)[ - *Time Complexity*: - Best Case: $O(n)$ (when the array is already sorted) - Average Case: $O(n^2)$ - Worst Case: $O(n^2)$ - *Space Complexity*: $O(1)$ - *Stability*: Stable ] == Merge Sort #align(horizon)[ *Merge Sort* is a divide and conquer algorithm that divides the array into halves, sorts each half, and then merges them back together. It is efficient and has a guaranteed runtime of $O(n log n)$. ] #pagebreak() === Algorithm Steps #align(horizon)[ 1. If the array is of length $0$ or $1$, it is already sorted. 2. Divide the array into two halves. 3. Recursively sort each half. 4. Merge the two sorted halves into one sorted array. 
] #pagebreak() === Pseudocode #align(horizon)[ #figure( kind: "algorithm", supplement: [Algorithm], caption: [Merge Sort], text(size: 12pt)[ #pseudocode-list( title: smallcaps[Function merge_sort(array A, left, right):], )[ + *if* $"left" < "right"$: + $"mid" := ("left" + "right") / 2$ + $"merge_sort"(A, "left", "mid")$ + $"merge_sort"(A, "mid" + 1, "right")$ + $"merge"(A, "left", "mid", "right")$ ] ], ) <merge-sort> ] #pagebreak() === `merge` Function #align(horizon)[ The `merge` function combines two sorted subarrays into one sorted array. - Left subarray: $A["left".."mid"]$ - Right subarray: $A["mid"+1.."right"]$ ] #pagebreak() === Example in C #align(horizon)[ #text(size: 6pt)[ ```c void merge(int arr[], int l, int m, int r) { int n1 = m - l + 1; int n2 = r - m; // Create temp arrays int L[n1], R[n2]; // Copy data to temp arrays L[] and R[] for (int i = 0; i < n1; i++) L[i] = arr[l + i]; for (int j = 0; j < n2; j++) R[j] = arr[m + 1 + j]; // Merge the temp arrays back into arr[l..r] int i = 0, j = 0, k = l; while (i < n1 && j < n2) { if (L[i] <= R[j]) { arr[k++] = L[i++]; } else { arr[k++] = R[j++]; } } // Copy the remaining elements of L[], if any while (i < n1) arr[k++] = L[i++]; // Copy the remaining elements of R[], if any while (j < n2) arr[k++] = R[j++]; } ``` ] ] #pagebreak() #align(horizon)[ #text(size: 12pt)[ ```c void merge_sort(int arr[], int l, int r) { if (l < r) { int m = l + (r - l) / 2; merge_sort(arr, l, m); merge_sort(arr, m + 1, r); merge(arr, l, m, r); } } ``` ] ] #pagebreak() === Complexity Analysis #align(horizon)[ - *Time Complexity*: - Best Case: $O(n log n)$ - Average Case: $O(n log n)$ - Worst Case: $O(n log n)$ - *Space Complexity*: $O(n)$ (due to auxiliary arrays) - *Stability*: Stable ] == Quick Sort #align(horizon)[ *Quick Sort* is a divide and conquer algorithm that selects a *pivot* element and partitions the array around the pivot, such that elements less than the pivot are on the left, and elements greater than the pivot are on 
the right. It then recursively sorts the subarrays on either side of the pivot.
]

#pagebreak()

=== Algorithm Steps

#align(horizon)[
  1. Choose a pivot element.
  2. Partition the array into two subarrays:
    - Elements less than the pivot.
    - Elements greater than the pivot.
  3. Recursively apply the above steps to the subarrays.
]

#pagebreak()

=== Pseudocode

#align(horizon)[
  #figure(
    kind: "algorithm",
    supplement: [Algorithm],
    caption: [Quick Sort],
    text(size: 12pt)[
      #pseudocode-list(
        title: smallcaps[Function quick_sort(array A, low, high):],
      )[
        + *if* $"low" < "high"$:
          + $pi := "partition"(A, "low", "high")$
          + $"quick_sort"(A, "low", pi - 1)$
          + $"quick_sort"(A, pi + 1, "high")$
      ]
    ],
  ) <quick-sort>
]

#pagebreak()

=== `partition` Function

#align(horizon)[
  The `partition` function rearranges the array such that:

  - All elements less than the pivot come before it.
  - All elements greater than the pivot come after it.
  - The pivot is in its final sorted position.

  It returns the index of the pivot.
]

#pagebreak()

=== Example in C

#align(horizon)[
  #text(size: 9pt)[
    ```c
    int partition(int arr[], int low, int high) {
      int pivot = arr[high];  // pivot
      int i = low - 1;        // index of the last element <= pivot

      for (int j = low; j <= high - 1; j++) {
        // If current element is smaller than or equal to pivot
        if (arr[j] <= pivot) {
          i++;  // grow the region of elements <= pivot
          int temp = arr[i];
          arr[i] = arr[j];
          arr[j] = temp;
        }
      }
      // Place the pivot just after the region of smaller elements
      int temp = arr[i + 1];
      arr[i + 1] = arr[high];
      arr[high] = temp;
      return i + 1;
    }
    ```
  ]
]

#pagebreak()

#align(horizon)[
  #text(size: 14pt)[
    ```c
    void quick_sort(int arr[], int low, int high) {
      if (low < high) {
        int pi = partition(arr, low, high);

        // Separately sort elements before
        // partition and after partition
        quick_sort(arr, low, pi - 1);
        quick_sort(arr, pi + 1, high);
      }
    }
    ```
  ]
]

#pagebreak()

=== Complexity Analysis

#align(horizon)[
  - *Time Complexity*:
    - Best Case: $O(n log n)$
    - Average Case: $O(n log n)$
    - Worst Case: $O(n^2)$ (when the smallest or largest
element is always chosen as the pivot)
  - *Space Complexity*: $O(log n)$ (due to recursive calls)
  - *Stability*: Unstable
]

== Heap Sort

#align(horizon)[
  *Heap Sort* involves building a `max heap` from the input data, and then repeatedly extracting the maximum element from the heap and rebuilding the heap.

  It uses the properties of a heap data structure to sort elements.
]

#pagebreak()

=== Algorithm Steps

#align(horizon)[
  1. Build a `max heap` from the input data.
  2. Swap the root (maximum value) of the heap with the last element.
  3. Reduce the heap size by one.
  4. `heapify` the root element to get the highest element at root again.
  5. Repeat steps 2 to 4 while the heap size is greater than 1.
]

#pagebreak()

=== Pseudocode

#align(horizon)[
  #figure(
    kind: "algorithm",
    supplement: [Algorithm],
    caption: [Heap Sort],
    text(size: 12pt)[
      #pseudocode-list(
        title: smallcaps[Function heap_sort(array A, n):],
      )[
        + *for* $i$ from $n / 2 - 1$ down to $0$:
          + heapify(A, n, i)
        + *for* $i$ from $n - 1$ down to $0$:
          + swap $A[0]$ and $A[i]$
          + heapify(A, i, 0)
      ]
    ],
  ) <heap-sort>
]

#pagebreak()

=== `heapify` Function

#align(horizon)[
  The `heapify` function ensures the subtree rooted at index $i$ satisfies the max heap property.

  - It assumes the subtrees rooted at the children of node $i$ are already max heaps, but node $i$ itself may be smaller than its children.
  - If so, swap node $i$ with its largest child.
  - Recursively `heapify` the affected subtree.
] #pagebreak() === Example in C #align(horizon)[ #text(size: 9pt)[ ```c void heapify(int arr[], int n, int i) { int largest = i; // Initialize largest as root int l = 2 * i + 1; // left = 2*i + 1 int r = 2 * i + 2; // right = 2*i + 2 // If left child is larger than root if (l < n && arr[l] > arr[largest]) largest = l; // If right child is larger than largest so far if (r < n && arr[r] > arr[largest]) largest = r; // If largest is not root if (largest != i) { int swap = arr[i]; arr[i] = arr[largest]; arr[largest] = swap; // Recursively heapify the affected sub-tree heapify(arr, n, largest); } } ``` ] ] #pagebreak() #align(horizon)[ #text(size: 12pt)[ ```c void heap_sort(int arr[], int n) { // Build heap (rearrange array) for (int i = n / 2 - 1; i >= 0; i--) heapify(arr, n, i); // One by one extract an element from heap for (int i = n - 1; i >= 0; i--) { // Move current root to end int temp = arr[0]; arr[0] = arr[i]; arr[i] = temp; // call max heapify on the reduced heap heapify(arr, i, 0); } } ``` ] ] #pagebreak() === Complexity Analysis #align(horizon)[ - *Time Complexity*: - Best Case: $O(n log n)$ - Average Case: $O(n log n)$ - Worst Case: $O(n log n)$ - *Space Complexity*: $O(1)$ - *Stability*: Unstable ] == Counting Sort #align(horizon)[ *Counting Sort* is an integer sorting algorithm that operates by counting the number of objects that possess distinct key values (kind of hashing). It is not a comparison sort and has a running time of $O(n + k)$ where $k$ is the range of the input data. It's efficient when the range of input data is not significantly greater than the number of objects to be sorted. ] #pagebreak() === Algorithm Steps #align(horizon)[ 1. Find the maximum element in the array. 2. Initialize a count array of size ($max + 1$) with zeros. 3. Store the count of each element at their respective index. 4. Modify the count array by adding the previous counts. 5. Build the output array by placing elements at their correct positions. 
] #pagebreak() === Pseudocode #align(horizon)[ #figure( kind: "algorithm", supplement: [Algorithm], caption: [Counting Sort], text(size: 10pt)[ #pseudocode-list( title: smallcaps[Given an array $A$ of $n$ elements:], )[ + $max := "find_max"(A)$ + initialize count array $C[0..max]$ + *for* each element in $A$: + $C["element"] := C["element"] + 1$ + *for* $i$ from $1$ to $max$: + $C[i] := C[i] + C[i - 1]$ + *for* $i$ from $n - 1$ down to $0$: + $"output"[C[A[i]] - 1] := A[i]$ + $C[A[i]] := C[A[i]] - 1$ + copy output array to $A$ ] ], ) <counting-sort> ] #pagebreak() === Example in C #align(horizon)[ #text(size: 5.8pt)[ ```c void counting_sort(int arr[], int n) { int output[n]; int max = arr[0]; // Find the largest element in the array for (int i = 1; i < n; i++) { if (arr[i] > max) max = arr[i]; } int count[max + 1]; // Initialize count array with all zeros for (int i = 0; i <= max; ++i) count[i] = 0; // Store the count of each element for (int i = 0; i < n; i++) count[arr[i]]++; // Store the cumulative count for (int i = 1; i <= max; i++) count[i] += count[i - 1]; // Find the index of each element of the original array in count array, and // place the elements in output array for (int i = n - 1; i >= 0; i--) { output[count[arr[i]] - 1] = arr[i]; count[arr[i]]--; } // Copy the sorted elements into original array for (int i = 0; i < n; i++) arr[i] = output[i]; } ``` ] ] #pagebreak() === Complexity Analysis #align(horizon)[ - *Time Complexity*: $O(n + k)$ where $k$ is the range of the input. - *Space Complexity*: $O(n + k)$ - *Stability*: Stable - *Limitation*: Only works with integers and when $k$ is not significantly greater than $n$. ] == Radix Sort #align(horizon)[ *Radix Sort* is a non-comparative sorting algorithm that sorts data with integer keys by grouping keys by individual digits that share the same significant position and value. It uses Counting Sort as a subroutine to sort elements. ] #pagebreak() === Algorithm Steps #align(horizon)[ 1. 
Find the maximum number to know the number of digits. 2. Do `counting_sort` for every digit, starting from least significant digit to most significant digit. ] #pagebreak() === Pseudocode #align(horizon)[ #figure( kind: "algorithm", supplement: [Algorithm], caption: [Radix Sort], text(size: 12pt)[ #pseudocode-list( title: smallcaps[Function radix_sort(array A, n):], )[ + $max := "find_max"(A)$ + *for* $"exp" := 1$; $"max" / "exp" > 0$; $"exp" *= 10$: + $"counting_sort_by_digit"(A, n, "exp")$ ] ], ) <radix-sort> ] #pagebreak() === `counting_sort_by_digit` Function #align(horizon)[ The `counting_sort_by_digit` function sorts the array according to the digit represented by `exp` (exponent). - For $"exp" = 1$, sort according to the least significant digit. - For $"exp" = 10$, sort according to the second least significant digit. ] #pagebreak() === Example in C #align(horizon)[ #text(size: 9pt)[ ```c void counting_sort_by_digit(int arr[], int n, int exp) { int output[n]; int count[10] = {0}; // Store count of occurrences in count[] for (int i = 0; i < n; i++) count[(arr[i] / exp) % 10]++; // Change count[i] so that count[i] contains actual // position of this digit in output[] for (int i = 1; i < 10; i++) count[i] += count[i - 1]; // Build the output array for (int i = n - 1; i >= 0; i--) { output[count[(arr[i] / exp) % 10] - 1] = arr[i]; count[(arr[i] / exp) % 10]--; } // Copy the output array to arr[], so that arr[] now // contains sorted numbers according to current digit for (int i = 0; i < n; i++) arr[i] = output[i]; } ``` ] ] #pagebreak() #align(horizon)[ #text(size: 14pt)[ ```c void radix_sort(int arr[], int n) { // Find the maximum number to know the number of digits int max = arr[0]; for (int i = 1; i < n; i++) if (arr[i] > max) max = arr[i]; // Do counting sort for every digit for (int exp = 1; max / exp > 0; exp *= 10) counting_sort_by_digit(arr, n, exp); } ``` ] ] #pagebreak() === Complexity Analysis #align(horizon)[ - *Time Complexity*: $O(d dot.c (n + b))$ 
- $d$: Number of digits - $b$: Base of numbering system (10 for decimal) - *Space Complexity*: $O(n + b)$ - *Stability*: Stable - *Limitation*: Works best when $d$ is not significantly large. ] == Comparison of Sorting Algos #align(horizon)[ #text(size: 9pt)[ #table( columns: 7, align: left + horizon, table.header( [*Algorithm*], [*Best Case*], [*Average Case*], [*Worst Case*], [*Space*], [*Stable*], [*Method*], ), [*Bubble Sort*], [$O(n)$], [$O(n^2)$], [$O(n^2)$], [$O(1)$], [Yes], [Exchange], [*Selection Sort*], [$O(n^2)$], [$O(n^2)$], [$O(n^2)$], [$O(1)$], [No], [Selection], [*Insertion Sort*], [$O(n)$], [$O(n^2)$], [$O(n^2)$], [$O(1)$], [Yes], [Insertion], [*Merge Sort*], [$O(n log n)$], [$O(n log n)$], [$O(n log n)$], [$O(n)$], [Yes], [Merge], [*Quick Sort*], [$O(n log n)$], [$O(n log n)$], [$O(n^2)$], [$O(log n)$], [No], [Partition], [*Heap Sort*], [$O(n log n)$], [$O(n log n)$], [$O(n log n)$], [$O(1)$], [No], [Selection], [*Counting Sort*], [$O(n + k)$], [$O(n + k)$], [$O(n + k)$], [$O(n + k)$], [Yes], [Counting], [*Radix Sort*], [$O(n k)$], [$O(n k)$], [$O(n k)$], [$O(n + k)$], [Yes], [Digit], ) ] ] == Practical Section (C or pseudocode) #align(horizon)[ - *Task*: Implement a sorting algorithm of your choice and analyze its time and space complexity. - *Task*: Modify the Quick Sort algorithm to use a random pivot to improve performance on already sorted arrays. - *Task*: Implement a stable version of Selection Sort. ] = Recursion #align(horizon + center)[ #image( "images/recursion_meme.jpg", width: 50%, ) ] == What is Recursion? #align(horizon)[ *Recursion* is a programming technique where a function calls itself to solve a smaller problem of the same type _until_ it reaches a base case. It is a way to *divide a complex problem into simpler and more manageable subproblems*. ] == How Does It Work? #align(horizon)[ 1. *Base Case*: Defines when the recursive function should stop calling itself. It is the stopping condition. 2. 
*Recursive Call*: The function calls itself with a modified input that brings it closer to the base case.
  3. *Resolution*: The recursive calls return values that are combined to solve the original problem.
]

== Classic Example: Factorial

#align(horizon)[
  - *Mathematical Definition*:

  $ n! = cases(
    1 "if" n = 0,
    n times (n - 1)! "if" n > 0
  ) $

  - *Recursive Implementation in C*:

  ```c
  int factorial(int n) {
    if (n == 0)
      return 1;
    else
      return n * factorial(n - 1);
  }
  ```
]

== Visualization of Factorial Recursion

#align(horizon + center)[
  #figure(
    raw-render(
      ```dot
      digraph Factorial {
        n4 [label="f(4)"];
        n3 [label="f(3)"];
        n2 [label="f(2)"];
        n1 [label="f(1)"];
        n0 [label="f(0)"];

        n4 -> n3;
        n3 -> n2;
        n2 -> n1;
        n1 -> n0;
      }
      ```,
      width: 70%,
    ),
    caption: "Recursion chain for factorial(4): each call makes exactly one recursive call",
  )
]

== When to Use Recursion?

#align(horizon)[
  - *Problems that can be divided into similar subproblems*: Such as trees, graphs, and hierarchical structures.
  - *Algorithms that require backtracking*: Such as depth-first search (DFS), permutation and combination algorithms.
  - *Simplify Implementation*: Some algorithms are easier to implement recursively than iteratively.
]

== Advantages and Disadvantages

#align(horizon)[
  *Advantages*:

  - Cleaner and more readable code for certain problems.
  - Naturally suited for recursive data structures (trees, graphs).
  - Facilitates solving complex problems by dividing them into smaller parts.

  #pagebreak()

  *Disadvantages*:

  - Consumes more memory due to the call stack.
  - May be less efficient in terms of time compared to iterative solutions.
  - Risk of stack overflow if the recursion is too deep.
]

== Recursion vs Iteration

#align(horizon)[
  - *Recursion*:
    - Uses function calls to repeat code.
    - May be less efficient due to function call overhead.
    - More intuitive for problems that are naturally recursive.
- *Iteration*:
    - Uses repetition structures like loops (`for`, `while`).
    - Generally more efficient in terms of memory and time usage.
    - May be less intuitive for certain problems.

  #pagebreak()

  *Example*: Calculating the factorial of `n`.

  - *Recursive*:

  ```c
  int factorial(int n) {
    if (n == 0)
      return 1;
    else
      return n * factorial(n - 1);
  }
  ```

  #pagebreak()

  - *Iterative*:

  ```c
  int factorial(int n) {
    int result = 1;
    for (int i = 2; i <= n; i++) {
      result *= i;
    }
    return result;
  }
  ```
]

== Beware of Excessive Recursion

#align(horizon)[
  - *Stack Overflows*: Each recursive call adds a frame to the call stack. Very deep recursion can lead to overflows.
  - *Redundant Calculations*: In some recursions, like the naive Fibonacci calculation, many calculations are repeated.
  - *Optimization*: Techniques like *_memoization_* or transforming recursion into iteration can improve efficiency.
]

== Memoization

#align(horizon)[
  Stores the results of subproblems already solved to avoid repeated calculations.

  *Example with Fibonacci*:

  #text(size: 11pt)[
    ```c
    // memo[] must be initialized to -1 before the first call
    int fibonacci(int n, int memo[]) {
      if (memo[n] != -1)
        return memo[n];
      if (n == 0)
        memo[n] = 0;
      else if (n == 1)
        memo[n] = 1;
      else
        memo[n] = fibonacci(n - 1, memo) + fibonacci(n - 2, memo);
      return memo[n];
    }
    ```
  ]
]

== Tail Recursion

#align(horizon)[
  - *Definition*: A recursion where the recursive call is the last operation of the function.
  - *Benefits*: Some compilers can optimize tail recursions to avoid stack growth (tail call elimination).
  - *Example*:

  #text(size: 11pt)[
    ```c
    // Call as factorial_tail(n, 1)
    int factorial_tail(int n, int accumulator) {
      if (n == 0)
        return accumulator;
      else
        return factorial_tail(n - 1, n * accumulator);
    }
    ```
  ]
]

== Practical Part (C or pseudocode)

#align(horizon)[
  - *Problem*: Implement a recursive function that determines if a word is a palindrome.
  - *Hint*: Compare the first and last characters of the string and call the function recursively on the inner substring.
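  The hint can be turned into a minimal sketch along these lines (the names `is_palindrome` and `is_palindrome_range` and the index-based signature are our own choices, not part of the exercise statement):

  ```c
  #include <stdbool.h>
  #include <string.h>

  // Recursively compare the outermost characters of s[left..right],
  // then shrink the range toward the middle.
  static bool is_palindrome_range(const char *s, int left, int right) {
    if (left >= right)
      return true;   // base case: empty or single-character range
    if (s[left] != s[right])
      return false;  // mismatch: not a palindrome
    return is_palindrome_range(s, left + 1, right - 1);
  }

  bool is_palindrome(const char *s) {
    return is_palindrome_range(s, 0, (int)strlen(s) - 1);
  }
  ```

  Each call removes one character from each end, so the recursion depth is at most half the string length.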
] = Divide and Conquer #align(horizon + center)[ #image( "images/divide_and_conquer_meme.png", width: 50%, ) ] == What is Divide and Conquer? #align(horizon)[ *Divide and Conquer* is an algorithm design paradigm that consists of dividing a problem into smaller subproblems, solving these subproblems recursively, and then combining the solutions to obtain the final solution. ] == How Does It Work? #align(horizon)[ 1. *Divide*: The problem is divided into smaller subproblems that are instances of the same type as the original problem. 2. *Conquer*: The subproblems are solved recursively. If they are small enough, they are solved directly. 3. *Combine*: The solutions of the subproblems are combined to solve the original problem. ] == Classic Examples #align(horizon)[ - *_Merge Sort_*: A sorting algorithm that divides the _array_ in half, sorts each half, and then combines the two sorted halves. - *_Quick Sort_*: An algorithm that selects a pivot, divides the _array_ into _subarrays_ smaller and larger than the pivot, and then recursively sorts the _subarrays_. - *Binary Search*: A search method that divides the search space in half at each iteration. ] == Master Theorem #align(horizon)[ In algorithm analysis, the #link("https://en.wikipedia.org/wiki/Master_theorem_(analysis_of_algorithms)")[*Master Theorem*] for divide and conquer recurrences provides an asymptotic analysis (using Big-O notation) for *recurrence relations* that occur in the analysis of many divide and conquer algorithms. 
#pagebreak()

  Consider a problem that can be solved using a recursive algorithm like the following:

  #figure(
    kind: "algorithm",
    supplement: [Algorithm],
    caption: [Example of Recursion],
    text(size: 12pt)[
      #pseudocode-list(
        title: smallcaps[Procedure $p$(input $x$ of size $n$)],
      )[
        + *if* $n < "some constant" k$:
          + solve $x$ directly, without recursion
        + *else*:
          + create $a$ subproblems from $x$, each of size $n/b$
          + call procedure $p$ recursively on each subproblem
          + combine the results of the subproblems
      ]
    ],
  ) <master-theorem>

  #pagebreak()

  - The call tree has a node for each recursive call.
  - The leaf nodes are the base cases of the recursion: subproblems of size less than $k$ that are not solved recursively.
  - Each node performs an amount of work corresponding to the size of the subproblem $m$ given by $p(m)$.
  - The total amount of work performed by the complete algorithm is the sum of the work performed by all nodes in the tree.

  #pagebreak()

  #align(horizon + center)[
    #image(
      "images/master_theorem_intuition.png",
      width: 100%,
    )
  ]
]

== Complexity Analysis

#align(horizon)[
  The complexity of divide and conquer algorithms can be expressed by the *Master Recurrence*:

  $ T(n) = a T(n / b) + f(n) $

  #pagebreak()

  Where:

  - $T(n)$: Execution time of the algorithm on an input of size $n$.
  - $a$: Number of subproblems.
  - $b$: Factor by which the problem size is divided.
  - $f(n)$: Cost of dividing and combining the subproblems.

  #pagebreak()

  The solution to this recurrence depends on the relationship between $f(n)$ and $n^(log_b a)$.

  - *Case 1*: if $f(n) = O(n^(log_b a - epsilon))$ for some $epsilon > 0$, then $T(n) = O(n^(log_b a))$.
  - *Case 2*: if $f(n) = O(n^(log_b a) log^k n)$ for some $k >= 0$, then $T(n) = O(n^(log_b a) log^(k+1) n)$.
  - *Case 3*: if $f(n) = O(n^(log_b a + epsilon))$ for some $epsilon > 0$ and if $a f(n/b) <= c f(n)$ for some $c < 1$, then $T(n) = O(f(n))$.
] == Example: _Merge Sort_ #align(horizon)[ #text(size: 14pt)[ - *_Merge Sort_* divides the problem into 2 subproblems of size $n/2$: $ T(n) = 2 T(n / 2) + O(n) $ - Here, $a = 2$, $b = 2$, $f(n) = O(n)$. - We calculate $n^(log_b a) = n^(log_2 2) = n^1$. - Since $f(n) = O(n^(log_b a))$, we are in *Case 2* of the Master Theorem. - Therefore, $T(n) = O(n log n)$. ] ] == Example: _Quick Sort_ (Worst Case) #align(horizon)[ - In the worst case, *_Quick Sort_* divides the problem into one subproblem of size $n - 1$ and another of size 0: $ T(n) = T(n - 1) + O(n) $ - This recurrence resolves to $T(n) = O(n^2)$. - In the best case (balanced partitions), the complexity is $O(n log n)$. ] == Applications of Divide and Conquer #align(horizon)[ - *Multiplication of Large Integers* (#link("https://en.wikipedia.org/wiki/Karatsuba_algorithm")[Karatsuba Algorithm]) - *Fast Fourier Transform* (#link("https://en.wikipedia.org/wiki/Fast_Fourier_transform")[FFT]) - *Matrix Multiplication* (#link("https://en.wikipedia.org/wiki/Strassen_algorithm")[Strassen's Algorithm]) - *Computational Geometry Problems* (#link("https://en.wikipedia.org/wiki/Convex_hull")[Convex Hull], etc.) ] == Advantages and Disadvantages #align(horizon)[ #text(size: 14pt)[ *Advantages*: - Can reduce the complexity of complex problems. - Utilizes recursion, facilitating the implementation of complex algorithms. *Disadvantages*: - May have time and space overhead due to recursive calls. - Not all problems are naturally divisible into smaller subproblems. ] ] == Practical Part (C or pseudocode) #align(horizon)[ - *Problem*: Implement an algorithm that raises a number $x$ to a power $n$ using the divide and conquer paradigm, optimizing for $O(log n)$. - *Hint*: Use the property that $x^n = (x^(n/2))^2$ for even $n$. ]
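The hinted identity leads to a sketch like the following (the function name `power` and the choice of `long long` are our own; negative exponents are not handled here):

```c
// Divide and conquer exponentiation:
//   x^n = (x^(n/2))^2        for even n
//   x^n = x * (x^(n/2))^2    for odd n
// Each call halves n, so only O(log n) multiplications are needed.
long long power(long long x, unsigned int n) {
  if (n == 0)
    return 1;  // base case: x^0 = 1
  long long half = power(x, n / 2);
  if (n % 2 == 0)
    return half * half;      // even exponent
  else
    return x * half * half;  // odd exponent
}
```

Compare with the naive loop, which performs $n - 1$ multiplications: here the recursion depth, and therefore the work, is logarithmic in $n$.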
https://github.com/bkorecic/curriculum
https://raw.githubusercontent.com/bkorecic/curriculum/main/README.md
markdown
# Typst CV My personal template for my curriculum, using [Typst](https://typst.app/). ## Dependencies The dependencies are `typst`, [Font Awesome](https://fontawesome.com/) (included in `fonts/`) and a few of [Twitter emojis](https://github.com/twitter/twemoji) (included in `emojis/`).
https://github.com/dasayan05/typst-ai-conf-templates
https://raw.githubusercontent.com/dasayan05/typst-ai-conf-templates/master/cvpr/cvpr.typ
typst
MIT License
#let pad_int(i, N: 3) = { let s = str(i) let n_pads = N - s.len() for _ in range(n_pads) { s = "0" + s } [#s] } #let manuscript(title: [Paper Title], authors: (), anonymous: true, id: 1234, abstract: [], content) = [ #let RULERWIDTH = 0.6in #set page( paper: "a4", margin: ( left: 0.8in - RULERWIDTH, right: 0.9in - RULERWIDTH, top: 1.0625in, bottom: 1.0625in ), header: [ #set align(center) #set text( rgb(50%, 50%, 100%), font: "Times", size: 8pt, weight: "bold" ) CVPR 2022 Submission \##id. CONFIDENTIAL REVIEW COPY. DO NOT DISTRIBUTE. ] ) #let A4WIDTH = 8.5in #set par( justify: true, first-line-indent: 0.166666in, leading: 0.55em ) #set text( font: "Times", size: 10pt, spacing: 100% ) #show raw: set text(font: "CMU Typewriter Text") #set heading(numbering: "1.1.") #show heading: h => [ #set text(size: 12pt) #h #v(5pt) ] #if anonymous { authors = ([ Anonymous CVPR submission \ \ Paper ID #id ], ) } #grid(columns: (RULERWIDTH, auto, RULERWIDTH), gutter: 0pt, [ #if anonymous { set text(rgb(50%, 50%, 100%), weight: "bold") set par(leading: 0.6em) set align(left) for i in range(100) { locate(loc => [ #pad_int(loc.page() * i) #linebreak() ]) } } ], [ #align(center)[ #v(0.5in) #text(size: 14pt)[*#title*] ]\ #align(center)[ #set text(size: 11pt) #let c = () #for value in range(authors.len()) { c.push(1fr) } #grid( columns: c, ..authors ) ]\ \ #columns(2, gutter: 0.3125in, [ #align(center)[ #text(size: 12pt)[*Abstract*] ] #emph[#abstract] \ \ #content ]) ], [ #if anonymous { set text(rgb(50%, 50%, 100%), weight: "bold") set align(right) for i in range(100) { locate(loc => [ #pad_int(loc.page() * i + 54) #linebreak() ]) } } ] ) ]
https://github.com/azduha/typst-templates
https://raw.githubusercontent.com/azduha/typst-templates/main/dospelak.typ
typst
MIT License
#import "template.typ": * #let data = json.decode("{}") #show: duha.with( title:"Závazná přihláška dospěláka", author: "<NAME>" ) #title("Závazná přihláška dospěláka", subtitle: "na letní dětský tábor Duha AZ") #credentials(fields: ( "Organizátor:": [ Duha Alternativní Zóna (dále jen Duha AZ) \ \1. máje 325/1, 591 01 Žďár nad Sázavou \ IČO: 670 24 629, \ registrace: VSP/1 – 3086/90 R ze dne 28. 2. 1990 ], "Místo:": [ táborová základna Údol<NAME> poblíž Železných Horek ], "Termín:": [ 30. 6. 2024 - 13. 7. 2024 ] )) #v(1em) #form([ #field("Jméno a příjmení:", to: 65%, content: optional(data, "jmeno") + " " + optional(data, "prijmeni")) #field("Přezdívka:", content: optional(data, "prezdivka")) #field("Rodné číslo:", to: 35%, content: optional(data, "datum_narozeni")) #field("Číslo OP:", to: 65%, content: optional(data, "op")) #field("Telefon:", content: format-phone(optional(data, "telefon"))) #field("Bydliště:", content: optional(data, "bydliste")) #field("Zdravotní pojišťovna:", content: optional(data, "zp")) #options("Vlastním platnou průkazku zdravotní pojišťovny?", options: ( "ANO": 90%, "NE": 100% ), selected: if optional(data, "jmeno").len() > 0 {"ANO"}) #field("Zaměstnání:", to: 40%, content: optional(data, "zamestani")) #field("Zaměstnavatel:", content: optional(data, "zamestnavatel")) #field("Funkce na táboře:", content: optional(data, "funkce")) #options("Vlastním platný potravinářský průkaz?", options: ( "ANO": 90%, "NE": 100% ), selected: ("NE", "ANO", "").at(data.at("potravinak", default: 2))) #options("Mám platný kurz zdravotníka podle vyhlášky 106/2001 Sb.", options: ( "ANO": 90%, "NE": 100% ), selected: ("NE", "ANO", "").at(data.at("kurz_zdravotnik", default: 2))) #options("Mám platný kurz hlavního vedoucího akreditovaný MŠMT?", options: ( "ANO": 90%, "NE": 100% ), selected: ("NE", "ANO", "").at(data.at("kurz_hvt", default: 2))) ]) #v(1em) #field("Datum:", to: 40%, content: format-datetime(optional(data, "datum"))) #field("Podpis:") #v(4em) 
#separator() #v(3em) #title("Prohlášení o bezinfekčnosti", subtitle: "písemné prohlášení ne starší jednoho dne před začátkem tábora") #field("Jméno a příjmení:", to: 60%, content: optional(data, "jmeno") + " " + optional(data, "prijmeni")) #field("Rodné číslo:", content: optional(data, "datum_narozeni")) Prohlašuji, že jsem úplně zdráv/zdráva a nepociťuji známky akutního onemocnění (např. horečky nebo průjmu). Dále prohlašuji, že v předešlých 14 kalendářních dnech jsem nepřišel/nepřišla do styku s fyzickou osobou nemocnou infekčním onemocněním nebo podezřelou z nákazy. Také prohlašuji, že okresní hygienik ani ošetřující lékař mi nenařídil karanténní opatření, zvýšený zdravotní dozor nebo lékařský dohled. Jsem si vědom/vědoma právních i finančních důsledků, které by pro mne vyplynuly, kdyby z nepravdivých údajů tohoto prohlášení vzniklo zdravotní ohrožení táborového kolektivu. #field("Datum:", to: 40%) #field("Podpis:") #pagebreak() // #page([], header: []) #v(1em) #heading("I. Smluvní strany") Duha AZ, 1. máje 325/1, 591 01 Žďár nad Sázavou, IČO 670 24 629, zastoupená <NAME>, hlavním vedoucím tábora, rodné číslo 951210/5240, bytem Bohdalov 318, 592 13, Bohdalov (dále jen Duha AZ) a #field("", to: 50%, content: optional(data, "jmeno") + " " + optional(data, "prijmeni")) #field(", rodné číslo", content: optional(data, "datum_narozeni")) #field("Bytem", to: 75%, content: optional(data, "bydliste")) (dále jen dobrovolník), Uzavírají podle § 2430 a násl. Zákona č. 89/2012 Sb., občanský zákoník, v platném znění, #v(0.8em) #align(center, title("PŘÍKAZNÍ SMLOUVU")) #heading("II. 
Druh práce") #field("Funkční zařazení na táboře:", content: optional(data, "funkce")) #field("Místo výkonu práce:", content: if optional(data, "tabor_od").len() > 0 { "tábořiště Železné Horky v termínu od " + format-datetime(optional(data, "tabor_od")) + " do " + format-datetime(optional(data, "tabor_do"))}) #field("Nástup dne:", content: format-datetime(optional(data, "tabor_od")) + if optional(data, "tabor_od").len() > 0 [ v Železných Horkách]) #par([ Další ujednání: V rámci plnění svých povinností se bude dobrovolník pohybovat i mimo vyhrazené tábořiště, bude používat různé prostředky hromadné dopravy a vykonávat další činnosti běžně spojené s pobytem na táboře. Součástí je i předtáborová příprava, stavění a bourání tábora, balení věcí a potáborový úklid. ], hanging-indent: 6.4em) #v(0.8em) #heading("III. Podmínky dohody") #heading("A. Povinnosti dobrovolníka", level: 2) 1. Dobrovolník se zavazuje, že sjednané činnosti bude vykonávat osobně. 2. Dobrovolník se zavazuje konat svěřenou práci řádně, svědomitě podle svých schopností a znalostí a bez přímé finanční odměny. Dále se zavazuje dodržovat obecně závazné právní a hygienické normy, dbát na bezpečnostní a morální pravidla u sebe i u ostatních. 3. Dobrovolník se bude řídit pokyny nadřízených, zejména dle pokynů hlavního vedoucího a jím pověřených osob, jimž zodpovídá za výkon funkce a plnění úkolů, a vnitřními předpisy tábora. Pro svou práci se předem připraví podle svého funkčního zařazení. 4. Dobrovolník se zavazuje nesnížit svoji schopnost výkonu svěřené práce požitím alkoholu nebo jiných omamných látek či dlouhým bděním. 5. Dobrovolník je povinen řádně pečovat o svěřený majetek a po skončení činnosti jej řádně vrátit. Dobrovolník odpovídá za škodu, kterou způsobil svým zaviněním nebo nedbalostí. Byla-li škoda způsobena také zaviněním Duhy AZ, nese Duha AZ škodu poměrně. 6. 
Dobrovolník bere na vědomí, že organizace může uplatnit nárok na náhradu škody způsobené svévolným porušením povinností dobrovolníka nebo nedodržením podmínek této smlouvy. 7. Dobrovolník se zavazuje svým jednáním, které souvisí s činností Duhy AZ, nepoškozovat dobré jméno Duhy AZ. 8. Dobrovolník se zavazuje pracovat pro rozvoj Duhy AZ a naplňovat poslání Duhy AZ. #pagebreak() #heading("B. Povinnosti organizace", level: 2) 1. Duha AZ odpovídá za zajištění podmínek pro práci dobrovolníka \ v táboře v míře, která je požadována zejména vyhláškou č. 106/2001 Sb. O hygienických požadavcích na zotavovací akce pro děti. 2. Duha AZ poskytne dle svých možností dobrovolníkovi prostředky nezbytné k vykonávání sjednané činnosti a adekvátní prostor pro odpočinek a osobní volno. 3. Duha AZ uhradí za dobrovolníka stravu a ubytování. Případné další nároky na úhradu nákladů spojených s prací dobrovolníka je nutné předem konzultovat s Duhou AZ. 4. Duha AZ seznámí před uzavřením dohody dobrovolníka s jeho právy a povinnostmi z ní vyplývajících, s vnitřními předpisy tábora, zdravotními a bezpečnostními předpisy a zásadami poskytování první pomoci. 5. Duha AZ odpovídá dobrovolníkovi za škodu, která mu byla způsobena v souvislosti s výkonem sjednaných činností. Byla-li škoda způsobena také zaviněním dobrovolníka, nese dobrovolník škodu poměrně. #heading("IV. Prohlášení") 1. Dobrovolník prohlašuje, že prošel stanoveným lékařským vyšetřením způsobilosti pro práci s dětmi a mládeží, jehož potvrzení předal Duze AZ společně s potvrzením o bezinfekčnosti. 2. Dobrovolník prohlašuje, že byl proškolen ze zásad bezpečnosti práce a požární ochrany a ze zásad poskytování první pomoci. #heading("V. Obecná a závěrečná ujednání") 1. Nestanoví-li tato dohoda jinak, řídí se právní vztahy z ní vyplývající Občanským zákoníkem. 2. Tato dohoda může být měněna a doplňována pouze se souhlasem obou smluvních stran písemnými dodatky. 3. 
Smlouva je sepsána ve dvou vyhotoveních, z nichž každá strana obdrží jednu kopii. 4. Tato dohoda se uzavírá na dobu určitou do 13. července 2024 včetně. Z důvodů zachování kontinuity programů Duhy AZ je tato smlouva pro obě smluvní strany vázána minimálně 14-ti denní výpovědní lhůtou. Toto ustanovení neruší právo hlavního vedoucího tábora odvolat v odůvodněných případech dobrovolníka z tábora (např. při svévolném porušení povinností dobrovolníka nebo nedodržení podmínek této smlouvy nebo pokud další pobyt není možný např. z výchovných důvodů). 5. Dobrovolník může tuto dohodu zrušit, pokud nemůže svěřenou funkci a činnosti spojené s ní vykonávat např. ze zdravotních důvodů nebo jiných závažných příčin, a to písemně nebo po vzájemné dohodě. 6. Tato smlouva nabývá účinnosti dnem podpisu. #v(3em) #field("Podpis dobrovolníka", to: 55%) #field(", za Duhu AZ ") #field("Dne", to: 40%) #field("v(e)")
https://github.com/jneug/typst-finite
https://raw.githubusercontent.com/jneug/typst-finite/main/draw.typ
typst
MIT License
// imports cetz and t4t
#import "./util.typ": *
#import "layout.typ"

/// Draw a state at the given #arg[position].
///
/// #example[```
/// #cetz.canvas({
///   import finite.draw: state
///   state((0,0), "q1", label:"S1", initial:true)
///   state("q1.right", "q2", label:"S2", final:true, anchor:"left")
/// })
/// ```]
///
/// - position (coordinate): Position of the state's center.
/// - name (string): Name for the state.
/// - label (string,content,auto,none): Label for the state. If set to #value(auto), the #arg[name] is used.
/// - initial (boolean,alignment,dictionary): Whether this is an initial state. This can be either
///   - #value(true),
///   - an #dtype("alignment") to specify an anchor for the initial marking,
///   - a #dtype("string") to specify text for the initial marking,
///   - an #dtype("dictionary") with the keys `anchor` and `label` to specify both an anchor and a text label for the marking. Additionally, the keys `stroke` and `scale` can be used to style the marking.
/// - final (boolean): Whether this is a final state.
/// - anchor (string): Anchor to use for drawing.
/// - ..style (any): Styling options.
#let state(position, name, label:auto, initial:false, final:false, anchor:"center", ..style) = {
  assert.no-pos(style)
  let style = style.named()

  if not is.dict(label) {
    style.insert("label", (text:label))
  } else {
    style.insert("label", label)
  }
  if "text" not in style.label or is.a(style.label.text) {
    style.label.insert("text", name)
  }

  if initial == true {
    initial = (:)
  } else if is.align(initial) {
    initial = (anchor: initial)
  } else if is.str(initial) {
    initial = (label: initial)
  }

  let t = coordinate.resolve-system(position)
  ((
    name: name,
    coordinates: (position, ),
    radius: (ctx) => styles.resolve(ctx.style, style, root: "state").at("radius", default:default-style.state.radius),
    before: prepare-ctx,
    anchor: anchor,
    add-default-anchors: false,
    custom-anchors-ctx: (ctx, pos) => {
      let style = styles.resolve(ctx.style, style, root: "state")
      return (
        center: pos,
        left: vector.add(pos, (-style.radius, 0, 0)),
        right: vector.add(pos, (style.radius, 0, 0)),
        top: vector.add(pos, (0, style.radius, 0)),
        bottom: vector.add(pos, (0, -style.radius, 0)),
        top-left: vector.add(pos, vector-set-len((-1, 1), style.radius)),
        top-right: vector.add(pos, vector-set-len((1, 1), style.radius)),
        bottom-left: vector.add(pos, vector-set-len((-1, -1), style.radius)),
        bottom-right: vector.add(pos, vector-set-len((1, -1), style.radius)),
      )
    },
    render: (ctx, center) => {
      let style = styles.resolve(ctx.style, style, root: "state")
      let (rx, ry) = util.resolve-radius(style.radius).map(util.resolve-number.with(ctx))

      cmd.ellipse(..center, rx, ry, fill: style.fill, stroke: style.stroke)
      if not is.empty(label) {
        let cnt = draw.content(
          // name + ".center",
          center,
          fit-content(
            ctx,
            rx*.9 * ctx.length,
            ry*.9 * ctx.length,
            style.label.text,
            size: style.label.size
          ),
          frame: none
        ).first()
        (cnt.render)(ctx, ..(cnt.transform-coordinates)(ctx, ..cnt.coordinates))
      }

      // Mark state as final
      if final {
        let thickness = util.resolve-number(ctx, get.stroke-thickness(style.stroke))
        cmd.ellipse(..center, (rx -
thickness)*.9, (ry - thickness)*.9, fill: none, stroke: style.stroke)
      }

      // Draw arrow to mark initial state
      if initial != false {
        style.insert("initial", (
          anchor: left,
          label: "Start",
          "stroke": style.stroke,
          scale: 1
        ) + initial)

        let thickness = util.resolve-number(ctx, get.stroke-thickness(style.initial.stroke))
        let color = get.stroke-paint(style.initial.stroke)
        let align-vec = align-to-vec(style.initial.anchor)

        let s-end = vector.add(
          center,
          vector.scale(align-vec, (rx + thickness))
        )
        let s-start = vector.scale(vector.add(
          s-end,
          vector.scale(align-vec, rx)
        ), style.initial.scale)

        cmd.path(
          ("line", s-start, s-end),
          stroke: style.initial.stroke
        )
        cmd.mark(
          vector.add(s-end, vector.scale(align-vec, ctx.style.mark.size)),
          s-end,
          ">",
          fill: color,
          stroke: style.initial.stroke
        )

        if "label" in style.initial {
          let s-label = vector.add(
            s-start,
            vector-rotate(
              vector.scale(
                align-vec,
                .2),
              -90deg
            )
          )
          let cnt = draw.content(
            s-label,
            angle: {
              if style.initial.anchor in (top, top+right, right, bottom+right) {
                vector.angle2((0,0), align-vec)
              } else {
                vector.angle2(align-vec, (0,0))
              }
            },
            text(.88em, color, style.initial.label)
          ).first()
          (cnt.render)(ctx, ..(cnt.transform-coordinates)(ctx, ..cnt.coordinates))
        }
      }
    },
    after: (ctx, ..) => {
      ctx.nodes.at(name).insert(
        "radius",
        styles.resolve(ctx.style, style, root: "state").at("radius", default:default-style.state.radius)
      )
      return ctx
    }
  ),)
}

/// Draw a transition between two states.
///
/// The two states #arg[from] and #arg[to] have to be existing names of states.
/// #example[```
/// #cetz.canvas({
///   import finite.draw: state, transition
///   state((0,0), "q1")
///   state((2,0), "q2")
///   transition("q1", "q2", label:"a")
///   transition("q2", "q1", label:"b")
/// })
/// ```]
///
/// - from (string): Name of the starting state.
/// - to (string): Name of the ending state.
/// - inputs (string,array,none): A list of input symbols for the transition.
///   If provided as a #dtype("string"), it is split on commas to get the list of
///   input symbols.
/// - label (string,content,auto,dictionary): A label for the transition. For #value(auto)
///   the #arg[input] symbols are joined with commas. Can be a #dtype("dictionary") with
///   a `text` and additional styling keys.
/// - anchor (alignment): Anchor for loops. Has no effect on normal transitions.
/// - ..style (any): Styling options.
#let transition(
  from,
  to,
  inputs: none,
  label: auto,
  anchor: top,
  ..style
) = {
  assert.no-pos(style)
  let style = style.named()

  assert.all-of-type("string", from, to)
  let name = from + "-" + to

  if is.not-empty(inputs) {
    if is.str(inputs) {
      inputs = inputs.split(",")
    } else if not is.arr(inputs) {
      inputs = (inputs,)
    }
  } else {
    inputs = none
  }

  if is.a(label) {
    if is.not-none(inputs) {
      label = inputs.map(str).join(",")
    } else {
      label = ""
    }
  }
  if not is.dict(label) {
    label = (text:label)
  } else if not "text" in label and is.not-none(inputs) {
    label.insert("text", inputs.map(str).join(","))
  }
  style.insert("label", label)

  let coords = (
    from + ".center",
    from + ".right",
    to + ".left",
    to + ".center"
  )
  let t = coords.map(coordinate.resolve-system)

  return ((
    name: name,
    coordinates: coords,
    add-default-anchors: false,
    custom-anchors-ctx: (ctx, sc, sr, el, ec) => {
      let style = styles.resolve(ctx.style, style, root: "transition")
      if style.curve == left {
        style.curve = default-style.transition.curve * -1
      }
      let (start, end, c1, c2) = transition-pts(
        sc, ec,
        sr.at(0) - sc.at(0),
        ec.at(0) - el.at(0),
        curve: style.curve,
        anchor: anchor
      )
      return (
        center: mid-point(start, end, c1, c2),
        start: start,
        end: end,
        ctrl1: c1,
        ctrl2: c2,
        label: label-pt(start, end, c1, c2, style, loop:sc==ec)
      )
    },
    render: (ctx, sc, sr, el, ec) => {
      let style = styles.resolve(ctx.style, style, root: "transition")
      let l = style.label

      if style.curve == left {
        style.curve = default-style.transition.curve * -1
      }
      let (start, end, c1, c2) = transition-pts(
        sc, ec,
        sr.at(0) -
sc.at(0),
        ec.at(0) - el.at(0),
        curve: style.curve,
        anchor: anchor
      )

      // let c1 = vector.add(start, vector.scale(vector.sub(ctrl, start), 2/3))
      // let c2 = vector.add(end, vector.scale(vector.sub(ctrl, end), 2/3))
      cmd.path(
        ("cubic", start, end, c1, c2),
        stroke: style.stroke
      )

      let dir = mark-dir(start, end, c1, c2, scale:ctx.style.mark.size)
      cmd.mark(
        vector.sub(end, dir),
        end,
        ">",
        fill: get.stroke-paint(style.stroke),
        stroke: get.stroke-thickness(style.stroke) + get.stroke-paint(style.stroke)
      )

      if is.not-empty(l.text) {
        if is.a(l.size) {
          l.size = default-style.transition.label.size
        }
        l.size = util.resolve-number(ctx, l.size) * ctx.length
        if is.a(l.color) {
          l.color = get.stroke-paint(style.stroke)
        }

        let label-pt = label-pt(start, end, c1, c2, style, loop:sc==ec)
        let cnt = draw.content(
          label-pt,
          angle: if type(l.angle) == "angle" {
            l.angle
          } else if sc == ec {
            0deg
          } else {
            let d = cubic-derivative(start, end, c1, c2, l.pos)
            let a = vector.angle2((0,0), d)
            if a > 90deg and a < 270deg {
              a = vector.angle2((0,0), vector.scale(d, -1))
            }
            a
          },
          {set text(l.size, l.color); l.text;}
        ).first()
        (cnt.render)(ctx, ..(cnt.transform-coordinates)(ctx, ..cnt.coordinates))
      }
    }
  ),)
}

/// Create a transition loop on a state.
///
/// This is a shortcut for @@transition that takes only one
/// state name instead of two.
#let loop(
  state,
  inputs: none,
  label: auto,
  anchor: top,
  ..style
) = transition(state, state,
  inputs: inputs,
  label: label,
  anchor: anchor,
  ..style
)

/// Draws all transitions from a transition table with a common style.
///
/// - states (dictionary): A transition table given as a dictionary of dictionaries.
/// - ..style (any): Styling options.
#let transitions(
  states,
  ..style
) = {
  assert.no-pos(style)
  style = style.named()
  for (from, transitions) in states {
    for (to, label) in transitions {
      let name = from + "-" + to
      transition(
        from, to,
        inputs: label,
        label: label,
        ..style.at("transition", default:(:)),
        ..style.at(name, default:(:))
      )
    }
  }
}
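
// Usage sketch (not part of the original file): `transitions` expects a
// dictionary of dictionaries, mapping each source state to (target: inputs)
// pairs. The state names and inputs below are illustrative only.
//
// #cetz.canvas({
//   import finite.draw: state, transitions
//   state((0, 0), "q0")
//   state((3, 0), "q1")
//   transitions((
//     q0: (q1: "a"),
//     q1: (q0: "b", q1: "c"),
//   ))
// })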
https://github.com/frectonz/the-pg-book
https://raw.githubusercontent.com/frectonz/the-pg-book/main/book/050.%20vcsqueeze.html.typ
typst
The Venture Capital Squeeze

November 2005

In the next few years, venture capital funds will find themselves squeezed from four directions. They're already stuck with a seller's market, because of the huge amounts they raised at the end of the Bubble and still haven't invested. This by itself is not the end of the world. In fact, it's just a more extreme version of the norm in the VC business: too much money chasing too few deals.

Unfortunately, those few deals now want less and less money, because it's getting so cheap to start a startup. The four causes: open source, which makes software free; Moore's law, which makes hardware geometrically closer to free; the Web, which makes promotion free if you're good; and better languages, which make development a lot cheaper.

When we started our startup in 1995, the first three were our biggest expenses. We had to pay $5000 for the Netscape Commerce Server, the only software that then supported secure http connections. We paid $3000 for a server with a 90 MHz processor and 32 meg of memory. And we paid a PR firm about $30,000 to promote our launch.

Now you could get all three for nothing. You can get the software for free; people throw away computers more powerful than our first server; and if you make something good you can generate ten times as much traffic by word of mouth online than our first PR firm got through the print media.

And of course another big change for the average startup is that programming languages have improved-- or rather, the median language has. At most startups ten years ago, software development meant ten programmers writing code in C++. Now the same work might be done by one or two using Python or Ruby.

During the Bubble, a lot of people predicted that startups would outsource their development to India. I think a better model for the future is <NAME>, who outsourced his development to a more powerful language instead.
A lot of well-known applications are now, like BaseCamp, written by just one programmer. And one guy is more than 10x cheaper than ten, because (a) he won't waste any time in meetings, and (b) since he's probably a founder, he can pay himself nothing.

Because starting a startup is so cheap, venture capitalists now often want to give startups more money than the startups want to take. VCs like to invest several million at a time. But as one VC told me after a startup he funded would only take about half a million, "I don't know what we're going to do. Maybe we'll just have to give some of it back." Meaning give some of the fund back to the institutional investors who supplied it, because it wasn't going to be possible to invest it all.

Into this already bad situation comes the third problem: Sarbanes-Oxley. Sarbanes-Oxley is a law, passed after the Bubble, that drastically increases the regulatory burden on public companies. And in addition to the cost of compliance, which is at least two million dollars a year, the law introduces frightening legal exposure for corporate officers. An experienced CFO I know said flatly: "I would not want to be CFO of a public company now."

You might think that responsible corporate governance is an area where you can't go too far. But you can go too far in any law, and this remark convinced me that Sarbanes-Oxley must have. This CFO is both the smartest and the most upstanding money guy I know. If Sarbanes-Oxley deters people like him from being CFOs of public companies, that's proof enough that it's broken.

Largely because of Sarbanes-Oxley, few startups go public now. For all practical purposes, succeeding now equals getting bought. Which means VCs are now in the business of finding promising little 2-3 man startups and pumping them up into companies that cost $100 million to acquire.
They didn't mean to be in this business; it's just what their business has evolved into.

Hence the fourth problem: the acquirers have begun to realize they can buy wholesale. Why should they wait for VCs to make the startups they want more expensive? Most of what the VCs add, acquirers don't want anyway. The acquirers already have brand recognition and HR departments. What they really want is the software and the developers, and that's what the startup is in the early phase: concentrated software and developers.

Google, typically, seems to have been the first to figure this out. "Bring us your startups early," said Google's speaker at the Startup School. They're quite explicit about it: they like to acquire startups at just the point where they would do a Series A round. (The Series A round is the first round of real VC funding; it usually happens in the first year.) It is a brilliant strategy, and one that other big technology companies will no doubt try to duplicate. Unless they want to have still more of their lunch eaten by Google.

Of course, Google has an advantage in buying startups: a lot of the people there are rich, or expect to be when their options vest. Ordinary employees find it very hard to recommend an acquisition; it's just too annoying to see a bunch of twenty year olds get rich when you're still working for salary. Even if it's the right thing for your company to do.

The Solution(s)

Bad as things look now, there is a way for VCs to save themselves. They need to do two things, one of which won't surprise them, and another that will seem an anathema.

Let's start with the obvious one: lobby to get Sarbanes-Oxley loosened. This law was created to prevent future Enrons, not to destroy the IPO market. Since the IPO market was practically dead when it passed, few saw what bad effects it would have.
But now that technology has recovered from the last bust, we can see clearly what a bottleneck Sarbanes-Oxley has become.

Startups are fragile plants—seedlings, in fact. These seedlings are worth protecting, because they grow into the trees of the economy. Much of the economy's growth is their growth. I think most politicians realize that. But they don't realize just how fragile startups are, and how easily they can become collateral damage of laws meant to fix some other problem.

Still more dangerously, when you destroy startups, they make very little noise. If you step on the toes of the coal industry, you'll hear about it. But if you inadvertently squash the startup industry, all that happens is that the founders of the next Google stay in grad school instead of starting a company.

My second suggestion will seem shocking to VCs: let founders cash out partially in the Series A round. At the moment, when VCs invest in a startup, all the stock they get is newly issued and all the money goes to the company. They could buy some stock directly from the founders as well.

Most VCs have an almost religious rule against doing this. They don't want founders to get a penny till the company is sold or goes public. VCs are obsessed with control, and they worry that they'll have less leverage over the founders if the founders have any money.

This is a dumb plan. In fact, letting the founders sell a little stock early would generally be better for the company, because it would cause the founders' attitudes toward risk to be aligned with the VCs'. As things currently work, their attitudes toward risk tend to be diametrically opposed: the founders, who have nothing, would prefer a 100% chance of $1 million to a 20% chance of $10 million, while the VCs can afford to be "rational" and prefer the latter.

Whatever they say, the reason founders are selling their companies early instead of doing Series A rounds is that they get paid up front.
That first million is just worth so much more than the subsequent ones. If founders could sell a little stock early, they'd be happy to take VC money and bet the rest on a bigger outcome.

So why not let the founders have that first million, or at least half million? The VCs would get the same number of shares for the money. So what if some of the money would go to the founders instead of the company?

Some VCs will say this is unthinkable—that they want all their money to be put to work growing the company. But the fact is, the huge size of current VC investments is dictated by the structure of VC funds, not the needs of startups. Often as not these large investments go to work destroying the company rather than growing it.

The angel investors who funded our startup let the founders sell some stock directly to them, and it was a good deal for everyone. The angels made a huge return on that investment, so they're happy. And for us founders it blunted the terrifying all-or-nothingness of a startup, which in its raw form is more a distraction than a motivator.

If VCs are frightened at the idea of letting founders partially cash out, let me tell them something still more frightening: you are now competing directly with Google.

Thanks to <NAME>, <NAME>, <NAME>, and <NAME> for reading drafts of this.
https://github.com/jneug/typst-nassi
https://raw.githubusercontent.com/jneug/typst-nassi/main/assets/example-2.typ
typst
MIT License
#import "../src/nassi.typ"

#set page(width: auto, height: auto, margin: 5mm)

#nassi.diagram(
  width: 12cm,
  theme: nassi.themes.greyscale,
  ```
  function inorder(tree t)
    if t has left child
      inorder(left child of t)
    end if
    process(root of t)
    if t has right child
      inorder(right child of t)
    end if
  endfunction
  ```,
)
https://github.com/Leadlegend/Curriculum-Vitae
https://raw.githubusercontent.com/Leadlegend/Curriculum-Vitae/main/cv.typ
typst
Apache License 2.0
#import "template.typ": *

#show: layout
#show link: underline

#cvHeader(hasPhoto: false, align: left)
#autoImport("education")
#autoImport("professional")
#autoImport("projects")
//#autoImport("certificates")
//#autoImport("publications")
#autoImport("skills")
#cvFooter()
https://github.com/lublak/typst-echarm-package
https://raw.githubusercontent.com/lublak/typst-echarm-package/main/examples/radar.typ
typst
MIT License
#set page(width: 200mm, height: 150mm, margin: 0mm)
#import "../typst-package/lib.typ" as echarm
#echarm.render(width: 100%, height: 100%, options: (
  legend: (
    top: "10%",
    data: ("Allocated Budget", "Actual Spending")
  ),
  radar: (
    radius: "45%",
    indicator: (
      (name: "Sales", max: 6500),
      (name: "Administration", max: 16000),
      (name: "Information Technology", max: 30000),
      (name: "Customer Support", max: 38000),
      (name: "Development", max: 52000),
      (name: "Marketing", max: 25000),
    )
  ),
  series: (
    (
      name: "Budget vs spending",
      type: "radar",
      data: (
        (
          name: "Allocated Budget",
          value: (4200, 3000, 20000, 35000, 50000, 18000)
        ),
        (
          name: "Actual Spending",
          value: (5000, 14000, 28000, 26000, 42000, 21000)
        )
      )
    )
  )
))
https://github.com/mem-courses/calculus
https://raw.githubusercontent.com/mem-courses/calculus/main/note-1/2.极限.typ
typst
#import "../template.typ": * #show: project.with( course: "Calculus I", course_fullname: "Calculus (A) I", course_code: "821T0150", semester: "Autumn-Winter 2023", title: "Note #2: 极限", authors: ( ( name: "memset0", email: "<EMAIL>", id: "3230104585", ), ), date: "October 22, 2023", ) #let def(x) = text("【" + x + "】", weight: "bold") #let deft(x) = text("【" + x + "】", weight: "bold", fill: rgb("#FFFFFF")) = 数列的极限 #definition[ 设 ${a_n}$ 是一个数列,如果存在 $A in RR$,使得对于任意 $eps>0$ 都有 $N in N_+$ 使得对于任意 $n>N$ 都有 $|a_n-A|<eps$。那么称 ${a_n}$ 收敛于 $A$,并把 $A$ 称作#bb[数列的极限]。 $ lim_(n->oo) a_n = A quad <==> quad forall eps>0,sp exists N in NN_+,sp st forall n>N,sp |a_n-A|<eps. $ ] #definition[ 如果存在 $A in RR$ 使得数列 ${a_n}$ 收敛于 $A$,则称数列 ${a_n}$ #bb[收敛],否则称其#bb[发散]。 #note[ 【证明数列收敛的方法】对于任意 $eps$ 先要求 $|a_n-A|<eps$ $=>$ 解不等式或利用其他性质得出 $N(eps)$ $=>$ 说明对于 $n>N(eps)$ 都成立 $=>$ 得证。 ] ] == 数列极限的性质 #property(name: "唯一性")[ 如果数列 ${a_n}$ 收敛,那么它的极限是唯一的。 #proof[ (反证法)设 $display(lim_(n->oo) x_n = a)$ 且 $display(lim_(n->oo) x_n=b)quad (a<b)$。取 $eps=(b-a)/2$。 - 因 $display(lim_(n->oo) x_n = a)$,$exists N_1 in NN_+,sp st |x_n-a|<eps => x_n<(a+b)/2$; - 因 $display(lim_(n->oo) x_n = b)$,$exists N_2 in NN_+,sp st |x_n-b|<eps => x_n>(a+b)/2$; 取 $N=max{N_1,N_2}$,则当 $n>N$ 时,$x_n$ 满足的不等式矛盾。故原命题为假。 ] ] #property(name: "保号性")[ 设 $display(lim_(n->oo) a_n = A)$,$display(lim_(n->oo) b_n = B)$。 1. 如果 $A>B$,那么存在 $N in NN_+$ 使得 $forall n>N,sp a_n>b_n$。(取不到等号) 2. 如果存在 $N in NN_+$ 使得 $forall n>N,sp a_n>=b_n$,那么 $A>=B$。 #proof[ 1. 取 $eps=(A-B)/2$,于是存在 $N_1,N_2 in NN_+$,使得 $ cases( forall n>N_1\,sp |a_n-A|<eps => a_n>A-eps = (A+B)/2, forall n>N_2\,sp |b_n-B|<eps => b_n<B+eps = (A+B)/2, ) $ 取 $N=max{N_1,N_2}$,当 $n>N$ 时,有 $a_n>(A+B)/2>b_n$。 2. 证明:(反证法)假设 $A<B$,根据前面的结论 $exists N' in NN_+,sp st forall n>N',sp b_n>a_n$,矛盾。故假设不成立,应有 $A>=B$。 ] ] #property(name: "列与子列的性质")[ 1. 如果极限存在,那么任何一个子列都有相同的极限。 2. 
要证明一个数列极限不存在,只需要找到两个子列不相等或者至少一个子列极限不存在。
]

#note[常用于证明极限不存在的问题。]

#property(name: "四则运算法则")[
  已知 $display(lim_(n->oo) a_n = A\,sp lim_(n->oo) b_n=B)$,那么

  1. (加减法)$display(lim_(n->oo) (a_n pm b_n) = A pm B)$;
  2. (乘法)$display(lim_(n->oo) a_n b_n = A B)$;
  3. (除法)若 $b_n != 0$ 且 $B != 0$,则 $display(lim_(n->oo) a_n/b_n = A/B)$。

  #caution[
    1. 这里要求各部分的极限都存在,故不能直接从 $display(lim_(n->oo) (a_n-b_n) = 0)$ 推出 $display(lim_(n->oo) a_n = lim_(n->oo) b_n)$,有可能两个极限都不存在。
    2. 四则运算法则只允许被*有限次*嵌套使用,反例如 $display(lim_(n->oo) sum_(i=1)^n 1/n = 1 != lim_(n->oo) sum_(i=1)^n 0)$。
  ]
]

#theorem(name: "夹逼定理")[
  设数列 ${a_n}, {x_n}, {y_n}$,如果存在 $N in NN_+$ 满足:
  $ forall n>N,sp x_n<=a_n<=y_n quad and quad lim_(n->oo)x_n = lim_(n->oo)y_n = c $
  那么 ${a_n}$ 收敛于 $c$。

  #note[这是常用的计算数列极限的手段,特别是能猜出这个极限的时候。]
]

#theorem(name: "单调有界定理")[
  如果数列 ${a_n}$ 单增有上界或者单减有下界,那么 ${a_n}$ 收敛。

  #note[这是常用的证明数列极限存在的依据,特别是不方便直接求出或者暂不知道这个极限的时候。]
]

= 函数的极限

#definition[
  函数在无穷处的极限 $display(lim_(x->+oo) f(x) = A)$ 的定义:设 $f(x)$ 的定义域为 $RR$。如果对于任意 $eps>0$,存在实数 $X$ 使得 $forall x>X$,$|f(x)-A|<eps$,那么称 $f(x)$ 当 $x$ 趋于正无穷时收敛于 $A$,这个极限也记作 $f(+oo)$。($display(lim_(x->-oo) f(x) = A)$ 的定义同理)
  $ lim_(x->oo) f(x) = A <==> lim_(x->+oo) f(x) = A and lim_(x->-oo) f(x) = A $
]

#definition[
  函数在单点处的极限 $display(lim_(x->x_0^+) f(x) = A)$ 的定义:设 $f(x)$ 在 $x_0$ 的右邻域内有定义。如果对于任意 $eps>0$,存在 $delta>0$ 使得 $forall x in (x_0,x_0 +delta)$,$|f(x)-A| < eps$,那么称 $f(x)$ 当 $x$ 从右侧趋于 $x_0$ 时收敛于 $A$,也记作 $f(x_0^+)$。
  $ lim_(x->x_0) f(x) = A <==> lim_(x->x_0^+) f(x) = A and lim_(x->x_0^-) f(x) = A $
]

== 两个重要极限

#lemma[$display(lim_(x->0) (sin x) / x = 1)$。]

#lemma[$display(lim_(x->oo) (1 + 1/x)^x = e)$。]

#caution[需要注意取到极限的位置,如实际上有 $display(lim_(x->oo) (sin x) / x = 0 != 1)$。]

== 函数极限的性质

函数极限的唯一性、局部有界性、局部保号性、单调有界定理、夹逼定理,可以类比数列极限,这里省略不记。

#property(name: "四则运算、复合运算法则")[
  如果 $display(lim_(x->x_0)f(x) = A),sp display(lim_(x->x_0) g(x) = B)$,那么:

  1. $display(lim_(x->x_0) (f(x) pm g(x)) = A pm B)$;
  2. $display(lim_(x->x_0) f(x) g(x) = A B)$;
  3.
若 $B!=0$ 且 $g$ 在 $x_0$ 的一个邻域内不为 $0$,则 $display(lim_(x->x_0) f(x)/g(x) = A/B)$;
  4. 如果 $display(lim_(y->A) F(y) = C)$,并且存在 $x_0$ 的一个邻域使得 $f(x) != A$,那么我们有 $display(lim_(x->x_0) F(f(x)) = C)$。
]

#property(name: "海涅定理,归结定理")[
  $display(lim_(x->a) f(x) = A)$ 的充分必要条件是:对于任意满足 $display(lim_(n->oo) x_n) = a$ 且 $x_n != a sp (n=1,2,dots.c)$ 的数列 ${x_n}$,相应的函数值数列 ${f(x_n)}$ 满足 $display(lim_(n->oo) f(x_n) = A)$。

  #note[
    海涅定理是连接数列极限和函数极限的桥梁:
    1. 求数列极限时,可以将 $n$ 换为 $x$,将 $n->oo$ 换为 $x->+oo$,如果能够算出 $display(lim_(x->+oo) f(x) = A)$,那么原数列极限 $display(lim_(n->oo) a_n)$ 一定存在且等于 $A$。
    2. 判断函数极限 $display(lim_(x->a) f(x))$ 不存在:找两个不等于 $a$ 的子列 ${x_n},{y_n}$,他们的极限均为 $a$,但 $display(lim_(n->oo) f(x_n))$ 和 $display(lim_(n->oo) f(y_n))$ 不相等或至少一个不存在,那么 $display(lim_(x->a) f(x))$ 一定不存在。
  ]
]

== 无穷小

#definition[
  设 $alpha,beta$ 是同一变化过程中的无穷小。
  - 若 $lim display(beta/alpha) = 0$,则称 $beta$ 是 $alpha$ 的#bb[高阶无穷小],记作 $beta = o(alpha)$;
  - 若 $lim display(beta/alpha) = oo$,则称 $beta$ 是 $alpha$ 的#bb[低阶无穷小];
  - 若 $lim display(beta/alpha) = C != 0$,则称 $beta$ 是 $alpha$ 的#bb[同阶无穷小];
  - 若 $lim display(beta/alpha^k) = C != 0$,则称 $beta$ 是 $alpha$ 的 #bb[$k$ 阶无穷小];
  - 若 $lim display(beta/alpha) = 1$,则称 $beta$ 是 $alpha$ 的#bb[等价无穷小],记作 $alpha sim beta$。
]

#theorem(name: "等价代换")[
  若当 $x->x_0$ 时,$f(x) sim g(x)$ 且 $display(lim_(x->x_0) f(x)/h(x) = A)$ 或 $oo$;那么 $display(lim_(x->x_0) g(x)/h(x) = A)$ 或 $oo$。
]

#note[
  【总结】常用的等价无穷小。当 $x->0$ 时:
  $ (1+x)^a - 1 sim a x quad quad sin x sim x sim tan x sim arcsin x sim arctan x sim ln(1+x) sim e^x-1\
  1 - cos x sim 1 / 2 x^2 quad quad 1-cos^a x sim (a x^2) / 2 quad quad x-sin x sim x^3 / 6 quad quad tan x - x sim x^3 / 3 $
]

= 连续与间断

== 函数的连续性

#definition[
  如果 $display(lim_(x->a) f(x) = f(a))$,则称 $f(x)$ 在 $x=a$ 处#bb[连续]。如果左极限、右极限和 $f(a)$ 中有一个不存在或不相等,则称 $f(x)$ 在 $x=a$ 处#bb[间断]。
]

#definition[
  如果 $f(x)$ 在区间 $(a,b)$ 内每一个点都连续,且 $display(lim_(x->a^+) f(x) = f(a) \, lim_(x->b^-) f(x) = f(b))$,则称 $f(x)$ 在区间 $[a,b]$ 上连续,记 $f(x) in C[a,b]$。
]

== 间断点

#definition[
  若 $f(x)$ 在 $x=a$ 处间断且 $f(a^+)$ 和 $f(a^-)$
都存在,满足:
  1. $f(a^+) = f(a^-) != f(a)$,称 $x=a$ 为 $f(x)$ 的#bb[可去间断点]。
  2. $f(a^+) != f(a^-)$,称 $x=a$ 为 $f(x)$ 的#bb[跳跃间断点]。
  #fake_par
  可去间断点和跳跃间断点都是#bb[第一类间断点]。
]

#definition[
  若 $f(x)$ 在 $x=a$ 处间断,且 $f(a^+)$ 与 $f(a^-)$ 中至少一个不存在,满足:
  1. $display(lim_(x->a) f(x) = oo)$,则称 $x=a$ 为 $f(x)$ 的#bb[无穷间断点]。
  2. 除1以外的其他情况,则称 $x=a$ 为 $f(x)$ 的#bb[振荡间断点]。
  #fake_par
  无穷间断点和振荡间断点都是#bb[第二类间断点]。
]

#note[
  判断间断点是第一类还是第二类,关键在于判断 $f(a^+)$ 和 $f(a^-)$ 是否都存在。
]

== 性质

#theorem(name: "最值定理")[
  若 $f(x) in C[a,b]$,则 $f(x)$ 在 $[a,b]$ 上一定存在最小值 $m$ 和最大值 $M$。
]

#theorem(name: "有界定理")[
  若 $f(x) in C[a,b]$,则 $f(x)$ 在 $[a,b]$ 上一定有界。
]

#theorem(name: "零点定理")[
  若 $f(x) in C[a,b]$ 且 $f(a) f(b) < 0$,则 $exists xi in (a,b) st f(xi) = 0$。
]

#theorem(name: "介值定理")[
  若 $f(x) in C[a,b]$,则 $forall eta in [m,M],sp exists xi in [a,b] st f(xi) = eta$。
]
https://github.com/jgm/typst-hs
https://raw.githubusercontent.com/jgm/typst-hs/main/test/typ/compiler/array-09.typ
typst
Other
// Test negative indices.
#{
  let array = (1, 2, 3, 4)
  test(array.at(0), 1)
  test(array.at(-1), 4)
  test(array.at(-2), 3)
  test(array.at(-3), 2)
  test(array.at(-4), 1)
}
https://github.com/typst/packages
https://raw.githubusercontent.com/typst/packages/main/packages/preview/salsa-dip/0.1.0/README.md
markdown
Apache License 2.0
# Salsa Dip

Salsa Dip is a library for making [DIP](https://en.wikipedia.org/wiki/Dual_in-line_package) chip labels in Typst.

# Examples

```typst
#import "@preview/salsa-dip:0.1.0": dip-chip-label

#set text(font: ("JetBrains Mono", "Fira Code", "DejaVu Sans Mono"), weight: "extrabold")
#set page(width: auto, height: auto, margin: .125cm)

#let z80-labels = ("A11", ..., "A9", "A10")

#dip-chip-label(40, 0.54in, z80-labels, "Z80", settings: (pin-number-margin: 1pt, pin-number-size: 2.5pt, chip-label-size: 5pt))
```

![Z80 Chip Label](examples/z80.png "Z80 Chip Label")

```typst
#import "@preview/salsa-dip:0.1.0": dip-chip-label

#set text(font: ("JetBrains Mono", "Fira Code", "DejaVu Sans Mono"), weight: "extrabold")
#set page(width: auto, height: auto, margin: .125cm)

#let labels = ("1A", "1B", "1Y", "2A", "2B", "2Y", "GND", "3Y", "3A", "3B", "4Y", "4A", "4B", "VCC")

#dip-chip-label(14, 0.24in, labels, "74LS00")
```

![74ls00 Chip Label](examples/74ls00.png "74ls00 Chip Label")

# Customization

The `dip-chip-label` function is called with four parameters: an integer number of pins for the chip, the width (usually `0.24in` or `0.54in`), the list of pin labels (if no labels are desired, an empty array can be passed), and the chip label. There is an additional `settings` parameter which can be used to fine-tune the appearance of the chip labels.

The `settings` parameter is a dictionary optionally containing any of the setting keys:

- `chip-label-size`: Font size for the chip label
- `pin-number-margin`: Margin to give next to pin numbers
- `pin-number-size`: Font size for pin numbers
- `pin-label-size`: Font size for pin labels
- `include-numbers`: Boolean enabling pin numbers
- `pin-spacing`: Spacing of pins
- `vertical-margin`: Total margin to put into spacing above and below pin labels
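
A minimal sketch of `settings` in use (the pin count, labels, and chip name here are made up for illustration; only the documented setting keys are used):

```typst
#import "@preview/salsa-dip:0.1.0": dip-chip-label

// Hypothetical 8-pin chip with pin numbering disabled and larger pin labels.
#let labels = ("D0", "D1", "D2", "GND", "D3", "D4", "D5", "VCC")
#dip-chip-label(8, 0.24in, labels, "DEMO-8", settings: (
  include-numbers: false,
  pin-label-size: 4pt,
))
```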
https://github.com/chilingg/kaiji
https://raw.githubusercontent.com/chilingg/kaiji/main/index_part1.typ
typst
Other
#let part_sym = "上"
#let part = "5"

#set document(author: "佐藤敬之辅", title: "汉字 " + part_sym)

#import "template/cover.typ": cover
#cover(part_sym, part)
#pagebreak()

#include "part1/front_page.typ"
#pagebreak()

#include "part1/chapter1.typ"
#pagebreak()

#include "part1/chapter2.typ"
#pagebreak()

#include "part1/chapter3.typ"
#pagebreak()
https://github.com/tingerrr/masters-thesis
https://raw.githubusercontent.com/tingerrr/masters-thesis/main/src/figures/util.typ
typst
#import "/src/util.typ": fletcher, cetz, algorithm, i18n, math-type, math-func
#import fletcher: edge, node

#let complexity-comparison(
  cases: ([worst], [average], [best]),
  columns,
  ..args,
) = {
  let headers = columns.keys()

  let rows = (:)
  for (column, entry) in columns {
    for key in entry.keys() {
      if key not in rows {
        rows.insert(key, (:))
      }

      if column not in entry.at(key) {
        rows.at(key).insert(column, ())
      }

      for idx in range(cases.len()) {
        let comp = entry.at(key).at(idx)
        rows.at(key).at(column).push(if comp == none [-] else { comp })
      }
    }
  }

  let column-gutter = (
    ..(0.5em,) + ((0pt,) * (cases.len() - 1)),
  ) * (columns.len() + 1)

  table(
    columns: 1 + columns.len() * cases.len(),
    stroke: none,
    align: (x, y) => if y > 1 { left } else { center },
    column-gutter: column-gutter,
    ..args.named(),
    table.header(
      [Operation],
      ..headers.map(table.cell.with(colspan: cases.len(), stroke: (bottom: 0.5pt))),
      none,
      ..(cases * columns.len()),
      table.hline(stroke: 0.5pt),
    ),
    ..rows.pairs().map(((op, entries)) => {
      (op, ..headers.map(header => {
        entries.at(header, default: ([-], ) * 3)
      }))
    }).flatten()
  )
}

#let instance = node.with(extrude: (-2, 0))

#let fdiag = fletcher.diagram.with(
  node-stroke: 0.075em,
  spacing: 4em,
)

#let dstroke(color, ..args) = (dash: "dashed", paint: color, ..args.named())

#let group(centers, name, color, ..args) = {
  let args = arguments(
    enclose: centers,
    inset: 10pt,
    ..args,
  )
  node(
    hide(name),
    layer: -1,
    stroke: dstroke(color),
    fill: color.lighten(75%),
    ..args,
  )
  node(
    context {
      let size = measure(name)

      // NOTE: align + move ensures the node thinks the text is inside, ensuring
      // the text is not hyphenated unnecessarily
      align(top + left, move(
        dy: -(size.height + 10pt + 0.5em),
        dx: -10pt,
        box(fill: white, outset: 1pt, name),
      ))
    },
    layer: 1,
    stroke: none,
    fill: none,
    ..args,
  )
}
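// A hypothetical usage sketch of `complexity-comparison` (data-structure and
// operation names invented): each column maps operation names to a
// (worst, average, best) triple, with `none` rendered as a dash.
//
//   #complexity-comparison((
//     "array": (
//       "push": ($O(1)$, $O(1)$, $O(1)$),
//       "insert": ($O(n)$, $O(n)$, $O(1)$),
//     ),
//     "list": (
//       "push": ($O(1)$, $O(1)$, $O(1)$),
//       "insert": (none, none, none),
//     ),
//   ))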
https://github.com/drupol/cv
https://raw.githubusercontent.com/drupol/cv/master/src/cv/main.typ
typst
#import "cv-theme.typ": *

#set document(
  title: "Pol Dellaiera",
  author: "<NAME>",
)

#show: resume.with(
  firstname: "Pol",
  lastname: "Dellaiera",
)
https://github.com/maucejo/elsearticle
https://raw.githubusercontent.com/maucejo/elsearticle/main/src/_utils.typ
typst
MIT License
#import "_globals.typ": *
#import "@preview/subpar:0.1.1"

// Subfigures
#let subfigure = {
  subpar.grid.with(
    numbering: n => if isappendix.get() { numbering("A.1", counter(heading).get().first(), n) } else { numbering("1", n) },
    numbering-sub-ref: (m, n) => if isappendix.get() { numbering("A.1a", counter(heading).get().first(), m, n) } else { numbering("1a", m, n) }
  )
}

// Equations
#let nonumeq(body) = {
  set math.equation(numbering: none)
  body
}

// Create a dictionary for authors
#let create_dict(default-dict, user-dict) = {
  let new-dict = default-dict
  for (key, value) in user-dict {
    if key in default-dict.keys() {
      new-dict.insert(key, value)
    }
  }
  return new-dict
}
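// A minimal usage sketch of `create_dict` (hypothetical values, not part of
// this template): user entries override matching default keys, while keys
// absent from the default dictionary are dropped.
//
//   #let opts = create_dict(
//     ("font": "New Computer Modern", "font-size": 10pt),
//     ("font-size": 12pt, "unknown-key": red),
//   )
//   // opts == ("font": "New Computer Modern", "font-size": 12pt)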
https://github.com/daskol/typst-templates
https://raw.githubusercontent.com/daskol/typst-templates/main/icml/appendix.typ
typst
MIT License
= You #emph[can] have an appendix here.

You can have as much text here as you want. The main body must be at most $8$ pages long. For the final version, one more page can be added. If you want, you can use an appendix like this one. The `\onecolumn` command above can be kept in place if you prefer a one-column appendix, or can be removed if you prefer a two-column appendix. Apart from this possible change, the style (font size, spacing, margins, page numbering, etc.) should be kept the same as the main body.
https://github.com/VisualFP/docs
https://raw.githubusercontent.com/VisualFP/docs/main/SA/design_concept/content/introduction/tool_research/agda.typ
typst
= Agda

Even though Agda #footnote("https://wiki.portal.chalmers.se/agda/pmwiki.php") isn't a visual programming environment, its tools have been designed in an interactive way and are of interest to this project.

The basis for these tools is a language server which, in combination with the powerful checker and editor extensions, allows not only verifying the correctness of the code, but also checking incomplete programs. Additional context, such as checker errors or incomplete expressions, is then provided in a separate window inside the editor.

Using this tooling, it is, for example, possible to insert a hole in the code, a so-called _metavariable_, which the checker will detect and display alongside the expected type in the context window, as shown in @agda_context_window.

#figure(
  image("../../../static/agda-context-window.png", width: 70%),
  caption: [Screenshot of Agda context window after inserting a hole in an expression using the `?` sign]
) <agda_context_window>

The Agda language server is also able to provide automatic case-splitting for pattern matching. During this process, the checker will determine the possible cases and present only those to the user. Thus, the user is freed from remembering and typing out all matching cases. Additionally, the language server offers to normalize or deduce any given expression inside the context of the currently loaded code.

The checker is also utilized for syntax highlighting, providing the user with semantic value in the form of coloring.
https://github.com/derekchai/k-mapper
https://raw.githubusercontent.com/derekchai/k-mapper/main/README.md
markdown
MIT License
# `k-mapper` for Typst (1.1.0)

📖 See the `k-mapper` Manual [here](https://github.com/derekchai/k-mapper/blob/1f334d9e0f02cc656c01835302474bf728db9f80/manual.pdf)! The Manual features much more documentation, and is typeset using Typst.

This is a package for adding Karnaugh maps into your Typst projects. See the changelog for the package [here](https://github.com/derekchai/k-mapper/blob/698e8554ce67e3a61dd30319ab8f712a6a6b8daa/changelog.md).

## Features

- 2-variable (2 by 2) Karnaugh maps
- 3-variable (2 by 4) Karnaugh maps
- 4-variable (4 by 4) Karnaugh maps

## Getting Started

Simply import `k-mapper` using the Typst package manager to begin using `k-mapper` within your Typst documents.

```typst
#import "@preview/k-mapper:1.1.0": *
```

## Example

```typst
#karnaugh(
  16,
  x-label: $C D$,
  y-label: $A B$,
  manual-terms: (
    0, 1, 2, 3, 4, 5, 6, 7,
    8, 9, 10, 11, 12, 13, 14, 15
  ),
  implicants: ((5, 7), (5, 13), (15, 15)),
  vertical-implicants: ((1, 11), ),
  horizontal-implicants: ((4, 14), ),
  corner-implicants: true,
)
```

![Code result](https://github.com/derekchai/k-mapper/blob/005cb0a839499d0dfa90ee48d2128d41e582d755/readme-example.png)

For more detailed documentation and examples, including function parameters, see the Manual [PDF](https://github.com/derekchai/k-mapper/blob/1f334d9e0f02cc656c01835302474bf728db9f80/manual.pdf) and [Typst file](https://github.com/derekchai/k-mapper/blob/1f334d9e0f02cc656c01835302474bf728db9f80/manual.typ) in the [Github repo](https://github.com/derekchai/typst-karnaugh-map).
https://github.com/RandomcodeDev/FalseKing-Design
https://raw.githubusercontent.com/RandomcodeDev/FalseKing-Design/main/engine/intro.typ
typst
= Introduction

This document outlines the design of the custom-made 3D game engine for False King and subsequent Randomcode Developers games.

== Language

The engine will be written in C++03, likely using a custom replacement for the STL and/or the C runtime.

== Design

The engine will be based on an entity component system, with fairly clean separation between independent components, certain common ones (like platform abstraction), and simple data formats.

== Systems

Each system will be a static or dynamically loaded shared library, and expose a general interface in addition to ECS systems. This will make it easy to add and integrate new features.
https://github.com/alxsimon/quarto-pci
https://raw.githubusercontent.com/alxsimon/quarto-pci/main/_extensions/pci/definitions.typ
typst
// Some quarto-specific definitions.

#show raw.where(block: true): block.with(
  fill: luma(230),
  width: 100%,
  inset: 8pt,
  radius: 2pt
)

#let block_with_new_content(old_block, new_content) = {
  let d = (:)
  let fields = old_block.fields()
  fields.remove("body")
  if fields.at("below", default: none) != none {
    // TODO: this is a hack because below is a "synthesized element"
    // according to the experts in the typst discord...
    fields.below = fields.below.amount
  }
  return block.with(..fields)(new_content)
}

#let empty(v) = {
  if type(v) == "string" {
    // two dollar signs here because we're technically inside
    // a Pandoc template :grimace:
    v.matches(regex("^\\s*$$")).at(0, default: none) != none
  } else if type(v) == "content" {
    if v.at("text", default: none) != none {
      return empty(v.text)
    }
    for child in v.at("children", default: ()) {
      if not empty(child) {
        return false
      }
    }
    return true
  }
}

// 2023-10-09: #fa-icon("fa-info") is not working, so we'll eval "#fa-info()" instead
#let callout(body: [], title: "Callout", background_color: rgb("#dddddd"), icon: none, icon_color: black) = {
  block(
    breakable: false,
    fill: background_color,
    stroke: (paint: icon_color, thickness: 0.5pt, cap: "round"),
    width: 100%,
    radius: 2pt,
    block(
      inset: 1pt,
      width: 100%,
      below: 0pt,
      block(
        fill: background_color,
        width: 100%,
        inset: 8pt)[#text(icon_color, weight: 900)[#icon] #title])
    + block(
        inset: 1pt,
        width: 100%,
        block(fill: white, width: 100%, inset: 8pt, body)))
}
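// A minimal usage sketch of `callout` (colors and wording are illustrative,
// not part of this extension):
//
//   #callout(
//     title: "Important",
//     body: [Remember to re-render the document.],
//     background_color: rgb("#fde9e9"),
//     icon_color: rgb("#a94442"),
//   )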
https://github.com/SillyFreak/tu-wien-software-engineering-notes
https://raw.githubusercontent.com/SillyFreak/tu-wien-software-engineering-notes/main/optimizing-compilers/oc/flow-graphs.typ
typst
#import "@preview/commute:0.2.0": node, arr, commutative-diagram

#let empty = $$
#let skip = $upright("skip")$
#let assign(v, t) = $#v := #t$

#let stmt-node(..args) = {
  rect(
    width: 5em,
    stroke: 0.5pt,
    // if no args are given, make sure that we include the empty statement
    // so that there's content setting the rect's height
    ..if args.pos().len() == 0 { (empty,) },
    ..args,
  )
}

#let edge(a, b, nodes: none, start-space: none, end-space: none, label-pos: 1em, curve: 0deg, stroke: 0.45pt, ..options) = {
  assert(nodes != none)

  arr(
    nodes.at(a).at(0),
    nodes.at(b).at(0),
    [],
    start-space: start-space,
    end-space: end-space,
    label-pos: label-pos,
    curve: curve,
    stroke: stroke,
    ..options
  )
}

#let edges(nodes: none, edge: edge, ..names) = {
  assert(nodes != none)

  names = names.pos()
  range(names.len() - 1).map((i) => {
    let a = names.at(i)
    let b = names.at(i+1)
    edge(a, b, nodes: nodes)
  })
}

#let node-labelled-graph(nodes: none, ..options) = {
  assert(nodes != none)

  commutative-diagram(
    node-padding: (-10pt, 20pt),
    arr-clearance: 0.2em,
    ..nodes.values().map(((pos, body)) => node(pos, body)),
    ..options,
  )
}

#figure(caption: [node-labelled flow graph])[
  #let node = stmt-node
  #let nodes = (
    // put something in column 3 so that spacing is correct
    "dummy": ((0, 3), node(stroke: none)),
    "1": ((0, 1), node()),
    "2": ((1, 0), node($assign(a, c)$)),
    "3": ((2, 0), node($assign(x, a+b)$)),
    "4": ((2, 2), node()),
    "5": ((2, 4), node($assign(y, a+b)$)),
    "6": ((3, 1), node()),
  )
  #let edge = edge.with(nodes: nodes)
  #let edges = edges.with(nodes: nodes)

  #node-labelled-graph(
    nodes: nodes,
    ..edges("1", "2", "3", "6"),
    ..edges("1", "4", "6"),
    edge("4", "5", curve: -40deg),
    edge("5", "4", curve: -40deg),
  )
]

#let circ-node = circle.with(
  radius: 1em,
  stroke: 0.5pt,
)

#let stmt-edge(a, b, stmt, nodes: none, ..options) = {
  assert(nodes != none)

  arr(
    nodes.at(a).at(0),
    nodes.at(b).at(0),
    stmt,
    ..options
  )
}

#let stmt-edges(nodes: none, edge: stmt-edge, ..namesAndStmts) = {
  assert(nodes != none)

  namesAndStmts = namesAndStmts.pos()
  assert(calc.rem(namesAndStmts.len(), 2) == 1)
  range(namesAndStmts.len() - 1, step: 2).map((i) => {
    let a = namesAndStmts.at(i)
    let stmt = namesAndStmts.at(i+1)
    let b = namesAndStmts.at(i+2)
    edge(a, b, stmt, nodes: nodes)
  })
}

#let edge-labelled-graph(nodes: none, ..options) = {
  assert(nodes != none)

  commutative-diagram(
    node-padding: (20pt, 20pt),
    arr-clearance: 0.2em,
    ..nodes.values().map(((pos, body)) => node(pos, body)),
    ..options,
  )
}

#figure(caption: [edge-labelled flow graph])[
  #let node = circ-node
  #let nodes = (
    // put something in column 3 so that spacing is correct
    "1": ((0, 1), node()),
    "3": ((1, 0), node()),
    "4": ((1, 2), node()),
    "5": ((1, 4), node()),
    "6": ((2, 1), node()),
  )
  #let edge = stmt-edge.with(nodes: nodes)
  #let edges = stmt-edges.with(nodes: nodes)

  #edge-labelled-graph(
    nodes: nodes,
    ..edges(
      "1", $assign(a, c)$,
      "3", $assign(x, a+b)$,
      "6",
    ),
    ..edges(
      "1", empty,
      "4", empty,
      "6",
    ),
    edge("4", "5", $assign(y, a+b)$, curve: -40deg),
    edge("5", "4", empty, curve: -40deg),
  )
]
https://github.com/wychwitch/typst-mla9-template
https://raw.githubusercontent.com/wychwitch/typst-mla9-template/main/example.typ
typst
MIT License
#import "template.typ": *

#show: mla.with(
  title: "Toward a Recovery of Nineteenth Century Farming Handbooks",
  author: (
    firstname: "E.L",
    lastname: "Angeli"
  ),
  professor: [Professor <NAME>],
  course: [English 624],
  date: [12 February 2012],
  bibliography-file: "example.bib",
  font-size: 12pt,
)

While researching texts written about nineteenth century farming, I found a few authors who published books about the literature of nineteenth century farming, particularly agricultural journals, newspapers, pamphlets, and brochures. These authors often placed the farming literature they were studying into an historical context by discussing the important events in agriculture of the year in which the literature was published (see Demaree, for example). However, while these authors discuss journals, newspapers, pamphlets, and brochures, I could not find much discussion about another important source of farming knowledge: farming handbooks. My goal in this paper is to bring this source into the agricultural literature discussion by connecting three agricultural handbooks from the nineteenth century with nineteenth century agricultural history.

To achieve this goal, I have organized my paper into four main sections, two of which have sub-sections. In the first section, I provide an account of three important events in nineteenth century agricultural history: population and technological changes, the distribution of scientific new knowledge, and farming’s influence on education. In the second section, I discuss three nineteenth century farming handbooks in connection with the important events described in the first section. Special attention is paid to the role that these handbooks played in the dissemination of agricultural knowledge (and the creation of genuinely new knowledge).
I end with a third section that offers research questions that could be answered in future versions of this paper and conclude with a fourth section that discusses the importance of expanding this particular project. I also include an appendix after the Works Cited that contains images of the three handbooks I examined. Before I can begin the examination of the three handbooks, however, I need to provide an historical context in which the books were written, and it is to this that I now turn.

= Historical Context

The nineteenth century saw many changes to daily American life with an increase in population, improved methods of transportation, developments in technology, and the rise in the importance of science. These events impacted all aspects of nineteenth century American life (most significantly, those involved in slavery and the Civil War). However, one part of American life was affected that is quite often taken for granted: the life of the American farmer.

== Population and Technological Changes

One of the biggest changes, as seen in nineteenth century America’s census reports, is the dramatic increase in population. The 1820 census reported that over 10 million people were living in America; of those 10 million, over 2 million were engaged in agriculture. Ten years prior to that, the 1810 census reported over 7 million people were living in the states; there was no category for people engaged in agriculture. In this ten-year time span, then, agriculture experienced significant improvements and changes that enhanced its importance in American life. One of these improvements was the development of canals and steamboats, which allowed farmers to “sell what has previously been unsalable [sic]” and resulted in a “substantial increase in [a farmer’s] ability to earn income” @danhof_change_1969[5]. This improvement allowed the relations between the rural and urban populations to strengthen, resulting in an increase in trade.
The urban population (defined as having over 2,500 inhabitants) in the northern states increased rapidly after 1820.#link(<note_1>)[#super[1]] This increase accompanied the decrease in rural populations, as farmers who “preferred trade, transportation, or ‘tinkering’” to the tasks of tending to crops and animals found great opportunities in the city @danhof_change_1969[7]. Trade and transportation thus began to influence farming life significantly. Before 1820, the rural community accounted for eighty percent of consumption of farmers’ goods @hurt_american_1994[127]. With the improvements in transportation, twenty-five percent of farmers’ products were sold for commercial gain, and by 1825, farming “became a business rather than a way of life” @hurt_american_1994[128]. This business required farmers to specialize their production and caused most farmers to give “less attention to the production of surplus commodities like wheat, tobacco, pork, or beef” @hurt_american_1994[128]. The increase in specialization encouraged some farmers to turn to technology to increase their production and capitalize on commercial markets @hurt_american_1994[172]. The technology farmers used around 1820 was developed from three main sources: Europe, coastal Native American tribes in America, and domestic modifications made from the first two sources’ technologies. Through time, technology improved, and while some farmers clung to their time-tested technologies, others were eager to find alternatives to these technologies. These farmers often turned to current developments in Great Britain and received word of their technological improvements through firsthand knowledge by talking with immigrants and travelers. 
Farmers also began planning and conducting experiments, and although they lacked a truly scientific approach, these farmers engaged in experiments to obtain results and learn from the results.#link(<note_2>)[#super[2]] Agricultural organizations were then formed to “encourage … experimentation, hear reports, observe results, and exchange critical comments” @danhof_change_1969[53]. Thus, new knowledge was transmitted orally from farmer to farmer, immigrant to farmer, and traveler to farmer, which could result in the miscommunication of this new scientific knowledge. Therefore, developments were made for knowledge to be transmitted and recorded in a more permanent, credible way: by print.

== The Distribution of New Knowledge

Before 1820 and prior to the new knowledge farmers were creating, farmers who wanted print information about agriculture had their choice of agricultural almanacs and even local newspapers to receive information @danhof_change_1969[54]. After 1820, however, agricultural writing took more forms than almanacs and newspapers. From 1820 to 1870, agricultural periodicals were responsible for spreading new knowledge among farmers. In his published dissertation The American Agricultural Press 1819-1860, <NAME> presents a “description of the general content of [agricultural journals]” @demaree_american_1941[xi]. These journals began in 1819 and were written for farmers, with topics devoted to “farming, stock raising, [and] horticulture” @demaree_american_1941[12]. The suggested “birthdate” of American agricultural journalism is April 2, 1819 when <NAME> published his periodical American Farmer in Baltimore. Demaree writes that Skinner’s periodical was the “first continuous, successful agricultural periodical in the United States” and “served as a model for hundreds of journals that succeeded it” @demaree_american_1941[19]. In the midst of the development of the journal, farmers began writing handbooks.
Not much has been written on the handbooks’ history, aside from the fact that C.M. Saxton & Co. in New York was the major handbook publisher. Despite the lack of information about handbooks, and as can be seen in my discussion below, these handbooks played a significant role in distributing knowledge among farmers and in educating young farmers, as I now discuss.

== Farming’s Influence on Education

One result of the newly circulating print information was the “need for acquiring scientific information upon which could be based a rational technology” that could “be substituted for the current diverse, empirical practices” @danhof_change_1969[69]. In his 1825 book Nature and Reason Harmonized in the Practice of Husbandry, <NAME> begins his first chapter by stating that “[v]ery erroneous theories have been propagated” resulting in faulty farming methods @lorain_nature_1825[1]. His words here create a framework for the rest of his book, as he offers his readers narratives of his own trials and errors and even dismisses foreign, time-tested techniques farmers had held on to: “The knowledge we have of that very ancient and numerous nation the Chinese, as well as the very located habits and costumes of this very singular people, is in itself insufficient to teach us…” @lorain_nature_1825[75]. His book captures the call and need for scientific experiments to develop new knowledge meant to be used in/on/with American soil, which reflects some farmers’ thinking of the day. By the 1860s, the need for this knowledge was strong enough to affect education.
<NAME> anticipated this effect in 1820 in the “Experiments” section of his book The Farmer’s Assistant; Being a Digest of All That Relates to Agriculture and the Conducting of Rural Affairs; Alphabetically Arranged and Adapted for the United States:

#quote[
  Perhaps it would be well, if some institution were devised, and supported at the expense of the State, which would be so organized as would tend most effectually to produce a due degree of emulation among Farmers, by rewards and honorary distinctions conferred by those who, by their successful experimental efforts and improvements, should render themselves duly entitled to them.#link(<note_3>)[#super[3]] @nicholson_farmers_1820[92]
]

Part of Nicholson’s hope was realized in 1837 when Michigan established their state university, specifying that “agriculture was to be an integral part of the curriculum” @danhof_change_1969[71]. Not much was accomplished, however, much to the dissatisfaction of farmers, and in 1855, the state authorized a new college to be “devoted to agriculture and to be independent of the university” @danhof_change_1969[71]. The government became more involved in the creation of agricultural universities in 1862 when President Lincoln passed the Morrill Land Grant College Act, which begins with this phrase: “AN ACT Donating Public Lands to the several States and Territories which may provide Colleges for the Benefit of Agriculture and Mechanic Arts [sic].” The first agricultural colleges formed under the act suffered from a lack of trained teachers and “an insufficient base of knowledge,” and critics claimed that the new colleges did not meet the needs of farmers @hurt_american_1994[193]. Congress addressed these problems with the then newly formed United States Department of Agriculture (USDA). The USDA and Morrill Act worked together to form “… State experiment stations and extension services … [that] added [to] … localized research and education…” @baker_century_1996[415].
The USDA added to the scientific and educational areas of the agricultural field in other ways by including research as one of the organization’s “foundation stone” @baker_century_1996[367] and by including these seven objectives:

#quote[
  (1) [C]ollecting, arranging, and publishing statistical and other useful agricultural information; (2) introducing valuable plants and animals; (3) answering inquiries of farmers regarding agriculture; (4) testing agricultural implements; (5) conducting chemical analyses of soils, grains, fruits, plants, vegetables, and manures; (6) establishing a professorship of botany and entomology; and (7) establishing an agricultural library and museum. @baker_century_1996[14]
]

These objectives were a response to farmers’ needs at the time, mainly to the need for experiments, printed distribution of new farming knowledge, and education. <NAME>, the first Commissioner of Agriculture, ensured these objectives would be realized by stressing research and education with the ultimate goal of helping farmers improve their operations @hurt_american_1994[190]. Before the USDA assisted in the circulation of knowledge, however, farmers wrote about their own farming methods. This brings me to my next section in which I examine three handbooks written by farmers and connect my observations of the texts with the discussion of agricultural history I have presented above.

= Level One Heading

#lorem(43)

== Level Two Heading

#lorem(40)

=== Level Three Heading

#lorem(40)

= CONCLUSION

From examining Drown’s, Allen’s, and Crozier and Henderson’s handbooks in light of nineteenth century agricultural history, I can say that science and education seem to have had a strong influence on how and why these handbooks were written. The authors’ ethos is created by how they align themselves as farmers with science and education either by supporting or by criticizing them.
Regardless of their stance, the authors needed to create an ethos to gain an audience, and they did this by including tables of information, illustrations of animals and buildings, reasons for educational reform, and pieces of advice to young farmers in their texts. It would be interesting to see if other farming handbooks of the same century also convey a similar ethos concerning science and education in agriculture. Recovering more handbooks in this way could lead to a better, more complete understanding of farming education, science’s role in farming and education, and perhaps even an understanding of the rhetoric of farming handbooks in the nineteenth century.

#pagebreak()

#figure(
  table(
    columns: 4,
    [Year], [Bachelor’s Degrees], [Master’s Degrees], [Doctor’s Degrees],
    [1996–97], [13,053], [2,470], [793],
    [1997–98], [13,618], [2,367], [793],
    [1998–99], [14,163], [2,267], [757],
    [1999-2000], [14,186], [2,228], [804],
    [2000–01], [14,292], [2,244], [818],
    [2001-02], [14,236], [2,284], [780],
    [2002-03], [14,854], [2,256], [749],
    [2003-04], [15,408], [2,307], [743],
    [2004-05], [16,008], [2,517], [762],
    [2005-06], [16,762], [2,637], [777],
  ),
  caption: [Degrees in Modern Foreign Languages and Literature Conferred by Degree-Granting Institutions of Higher Education in the United States],
  supplement: [Adapted from National Center for Education Statistics. _Digest of Education Statistics_, 2007, #link("nces.ed.gov/programs/digest/d07/tables/dt07_297.asp")],
) <table_1>

#pagebreak()

#figure(
  image("illustrations/photograph-workers-cutting-cane.jpg"),
  caption: [Photograph of workers cutting cane in the Cairns district, ca 1890. _State Library of Queensland_, https://hdl.handle.net/10462/deriv/40037]
) <figure_1>

#pagebreak()

#figure(
  image("illustrations/berthe-morisot-reading-1873.jpg"),
  caption: [<NAME>. Reading. 1873, Cleveland Museum of Art.]
)

#pagebreak()

#align(center, "Notes")

+ Danhof includes “Delaware, Maryland, all states north of the Potomac and Ohio rivers, Missouri, and states to its north” when referring to the northern states (11). <note_1>
+ For the purposes of this paper, “science” is defined as it was in nineteenth century agriculture: conducting experiments and engaging in research. <note_2>
+ Please note that any direct quotes from the nineteenth century texts are written in their original form, which may contain grammar mistakes according to twenty-first century grammar rules. <note_3>
https://github.com/7sDream/fonts-and-layout-zhCN
https://raw.githubusercontent.com/7sDream/fonts-and-layout-zhCN/master/chapters/06-features-2/checking.typ
typst
Other
#import "/template/template.typ": web-page-template
#import "/template/components.typ": note
#import "/template/consts.typ"
#import "/lib/glossary.typ": tr

#show: web-page-template

// Using hb-shape to check positioning rules
== 使用 `hb-shape` 检查#tr[positioning]规则

// In the previous chapter we introduced the Harfbuzz utility `hb-shape`, which is used for debugging the application of OpenType rules in the shaper. As well as looking at the glyphs in the output stream and seeing their advance widths, `hb-shape` also helps us to know how these glyphs have been repositioned in the Y dimension too.
在上一章中我们介绍了HarfBuzz的`hb-shape`工具,并用它调试了#tr[shaper]对OpenType规则的应用。我们看到它可以显示#tr[glyph]在原输入流中的位置,以及它们的#tr[advance width]信息。其实它也可以帮助我们了解#tr[glyph]在Y轴方向上进行了怎样的重#tr[positioning]。

// For example, suppose we are using a mark-to-base feature to position a virama on the Devanagari letter CHA:
比如这个在天城文字母`CHA`上使用锚点对`virama`符号进行#tr[positioning]的例子:

```fea
markClass @mGC_blwm.virama_n172_0 <anchor -172 0> @MC_blwm.virama;
pos base dvCHA <anchor 276 57> mark @MC_blwm.virama;
```

// What this says is "the attachment point for the virama is at coordinate (-172,0); on the letter CHA, we should arrange things so that attachment point falls at coordinate (276,57)." Where does the virama end up? `hb-shape` can tell us:
这段代码的含义是:`virama`的锚点在坐标`(-172, 0)`处;在`CHA`字母上如果要附加`virama`符号,就将其锚点放在坐标`(276, 57)`处。那么`virama`现在到底位于什么#tr[position]呢?使用`hb-shape`可以得知:

#[
#show raw: set text(font: consts.font.western-mono + ("Hind",))
```bash
$ hb-shape build/Hind-Regular.otf 'छ्'
[dvCHA=0+631|dvVirama=0@-183,57+0]
```
]

// So we have a CHA character which is 631 units wide. Next we have a virama which is zero units wide! But when it is drawn, its position is moved - that's what the "@-183,57" component means: we've finished drawing the CHA, and then we move the "pen" negative 183 units in the X direction (183 units to the left) and up 57 units before drawing the virama.
首先我们看到,`CHA`#tr[character]有631单位宽。然后下一个是0宽度的`virama`!不过它的绘制位置移动了,输出中的`@-183,57`的意思是:当画完`CHA`字母后,将画笔向X轴负方向(也就是向左)移动183单位,再向上移动57个单位,然后再绘制`virama`。

// Why is it 183 units? First, let's see what would happen *without* the mark-to-base positioning. We can do this by asking `hb-shape` to turn off the `blwm` feature when processing:
为什么是 183 个单位呢?我们先来看看在进行#tr[positioning]前是什么样的。这可通过关闭`blwm`特性来实现:

#[
#show raw: set text(font: consts.font.western-mono + ("Hind",))
```bash
$ hb-shape --features='-blwm' Hind-Regular.otf 'छ्'
[dvCHA=0+631|dvVirama=0+0]
```
]

// As you can see, no special positioning has been done. Another utility, `hb-view` can render the glyphs with the feature set we ask for. If we ask to turn off the `blwm` feature and see what the result is like, this is what we get:
正如你所见,这样的话就没有什么特殊的#tr[positioning]操作了。我们可以使用`hb-view`工具来渲染当前特性集下#tr[glyph]的实际样子。结果如下:

#[
#show raw: set text(font: consts.font.western-mono + ("Hind",))
```bash
$ hb-view --features='-blwm' Hind-Regular.otf 'छ्' -O png > test.png
```
]

#figure(
  placement: none,
)[#image("hind-bad-virama.png", width: 30%)]

#note[
  // > You can also make `hb-view` output PNG files, PDFs, and other file formats, which is useful for higher resolution testing. (Look at `hb-view --help-output` for more options.) But the old-school ANSI block graphics is quite cute, and shows what we need in this case.
  你可以让`hb-view`输出PNG、PDF等各种格式(`hb-view --help-output`可以查看相关选项),这对高分辨率测试很有用。但老式的ANSI块状#tr[character]组成的图形很有意思,而且在这个例子中也够用了。
]

// Obviously, this is badly positioned (that's why we have the `blwm` feature). What needs to happen to make it right?
很明显,这个符号的位置不对(所以我们才需要`blwm`特性)。现在如果想让它回到正确的位置,需要怎么做呢?

#figure(
  placement: none,
)[#include "virama-pos.typ"]

// As you can see, the glyph is 631 units wide (Harfbuzz told us that), so we need to go back 355 units to get to the CHA's anchor position. The virama's anchor is 172 units to the left of that, so in total we end up going back 183 units.
We also raise the glyph up 57 units from its default position. 我来解释一下这张图。`CHA`这个#tr[glyph]的宽度是631(HarfBuzz告诉我们的),所以我们要往回355个单位才能走到`CHA`中锚点的水平位置。`virama`的锚点在局部坐标-172单位处,所以整体上来看我们需要将它往左移183单位。为了锚点对齐,还需要将它向上移动57单位。 // This example was one which we could probably have tested and debugged from inside our font editor. But as our features become more complex, `hb-shape` and `hb-view` become more and more useful for understanding what the shaper is doing with our font files. 这个例子中的测试和调试工作可能在字体编辑软件中就能完成。但当特性变得越来越复杂的时候,`hb-shape`和`hb-view`就会变得越来越有用。它们可以帮助你理解#tr[shaper]到底是如何和你的字体文件协同工作的。
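// The anchor arithmetic above can be sketched in a few lines of Python. This is only an illustration of the offset computation; the helper function is made up for this example and is not part of HarfBuzz.
上面的锚点计算过程可以用几行Python代码来演示。注意这只是对偏移量计算的示意,其中的辅助函数是为本例虚构的,并非HarfBuzz的一部分:

```python
# Illustrative sketch of mark-to-base anchor arithmetic (not HarfBuzz code).
def mark_offset(base_advance, base_anchor, mark_anchor):
    """Offset applied after drawing the base glyph, so the two anchors coincide."""
    dx = base_anchor[0] - base_advance - mark_anchor[0]
    dy = base_anchor[1] - mark_anchor[1]
    return (dx, dy)

# Hind's dvCHA: advance 631, base anchor (276, 57); virama anchor (-172, 0).
print(mark_offset(631, (276, 57), (-172, 0)))  # -> (-183, 57)
```

// The result matches the `@-183,57` component in the `hb-shape` output above.
计算结果正好对应上面`hb-shape`输出中的`@-183,57`。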
https://github.com/typst/packages
https://raw.githubusercontent.com/typst/packages/main/packages/preview/splash/0.3.0/README.md
markdown
Apache License 2.0
# splash A library of color palettes for Typst. This library provides different color palettes with human-readable names in Typst dictionaries. Currently there are just a few different palettes to choose from. Any contributions or suggestions are welcome! ## Usage ```typst #import "@preview/splash:0.3.0": xcolor #box(width: 3em, height: 1em, fill: xcolor.dandelion) ``` ## Documentation For a list of available colors please see [this document](https://github.com/kaarmu/typst-palettes/blob/main/doc/main.pdf).
https://github.com/vEnhance/1802
https://raw.githubusercontent.com/vEnhance/1802/main/src/type-safety.typ
typst
MIT License
#import "@local/evan:1.0.0":* = Type safety Before we get started with the linear algebra and calculus, I want to talk quickly about _types of objects_. This is an important safeguard for the future in checking your work and auditing your understanding of a topic; a good instructor will point out, in your work, any time you make a type-error. == [TEXT] Type errors In mathematics, statements are usually either true or false. Examples of false statements#footnote[Indiana Pi bill and 1984, respectively.] include $ pi = 16 / 5 #h(2em) upright("or") #h(2em) 2 + 2 = 5 . $ However, it’s possible to write statements that are not merely false, but not even "grammatically correct", such as the nonsense equations $ pi = mat(1, 0; 0, 1) , #h(2em) mat(0, 1, 2; 3, 4, 5) = cos vec(6, 7), #h(2em) det vec(5, 11) != sqrt(2). $ To call these equations false is misleading. If your friend asked you whether $2 + 2 = 5$, you would say "no". But if your friend asked whether $pi$ equals the $2 times 2$ identity matrix, the answer is a different kind of "no"; really, it’s "your question makes no sense". These three examples are *type errors*. This term comes from programming: most programming languages have different data types like integer, boolean, string, array, etc., and will usually#footnote[JavaScript is a notable exception. In JavaScript, you may know that `[]` and `{}` are an empty array and an empty object, respectively. Then `[]+[]` is the empty string, `[]+{}` is the string `‘[object Object]’`, `{}+[]` is `0`, and `{}+{}` is `NaN` (not a number).] prevent you from doing anything idiotic like adding a string to an array. Objects in mathematics work in a really similar way. In the first weeks of 18.02, you will meet real numbers, vectors, and matrices; these are all different types of objects, and certain operations are defined (aka "allowed") or undefined (aka "not allowed") depending on the underlying types. 
Table @operations lists some common examples with vectors you've seen from precalculus. #figure( table( columns: 5, align: (left,center,left,left,left,), table.header([Operation], [Notation], [Input 1], [Input 2], [Output],), table.hline(), [Add/subtract], [$a pm b$], [Scalar], [Scalar], [Scalar], [Add/subtract], [$bf(v) pm bf(w)$], [Length $d$ vector], [Length $d$ vector], [Length $d$ vector], [Add/subtract], [$M pm N$], [$m times n$ matrix], [$m times n$ matrix], [$m times n$ matrix], [Multiply], [$c bf(v)$], [Scalar], [Length $d$ vector], [Length $d$ vector], [Multiply], [$a b$], [Scalar], [Scalar], [Scalar], [Multiply], [$M N$], [$m times n$ matrix], [$n times p$ matrix], [$m times p$ matrix], [Dot product], [$bf(v) dot.op bf(w)$], [Length $d$ vector], [Length $d$ vector], [Scalar], [Cross product], [$bf(v) times bf(w)$], [Length $3$ vector], [Length $3$ vector], [Length $3$ vector], [Length/mag.], [$lr(|bf(v)|)$], [Any vector], [_n/a_], [Scalar], [Determinant], [$det A$], [Any square matrix], [_n/a_], [Scalar], ), caption: [Common linear algebra operations. For 18.02, "scalar" and "real number" are synonyms.], kind: table ) <operations> #digression[ A common question at this point is how you are supposed to figure out whether a certain operation is allowed or not. For example, many students want to try and multiply two vectors together component-wise; why is $ vec(2, 3) vec(4, 5) =^? vec(8, 15) $ not a legal sentence? It seems like it would make sense. The answer is that you _don't_ have to figure it out — you are _told_; @operations isn't something that you derive. That is, @operations consists of the _definitions_ which you have been given. (Or more sarcastically, it's all just a social construct. Well, it's a bit more nuanced than that. Definitions aren't judged by "correctness"; that doesn't make sense; you are allowed to make up whatever definitions you want. Instead, definitions are judged by whether they are _useful_. 
Which is obviously subjective, but it’s less subjective than you might guess.) ] == [TEXT] Why you should care There are two action items to take away from this section. === When learning a new object, examine its types first What this means is that, every time you encounter a new kind of mathematical object or operation (e.g. partial derivative), *the first thing you should do is ask what types are at play*. This helps give you a sanity check on your understanding of the new concept. We'll use boxes like this throughout the book to do this: #typesig[ This is an example of a type signature box. When we want to make comments about the types of new objects, we'll put them in boxes like this. ] === Whenever writing an equation, make sure the types check out Practically, what's really useful is that if you have a good handle on types, then it *gives you a way to type-check your work*. This is the analog of dimensional analysis from physics, where you know you messed up if some equation has $"kg" dot "meters" dot "seconds"^(- 2)$ on the left but $"kg" dot "meters" dot "seconds"^(- 1)$ on the right. For example, if you are reading your work and you see something like #eqn[ $ lr(|arrow(v) times arrow(p)|) = 9 arrow(p) $ <wrong-type> ] then you can immediately tell that there’s a mistake, because the two sides are incompatible — the left-hand side is a real number (scalar), but the right-hand side is a vector. == [RECAP] Takeaways from type safety - Throughout this book, every time you meet a new operation, make sure you know what types of objects it takes as input and which it produces as output. - Whenever you write an equation, make sure it passes a type-check. You can catch a lot of errors like @wrong-type using type safety alone. == [EXER] Practice with type safety #todo[have a list of equations here, and ask to identify the type errors? or similar]
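The programming analogy can be made concrete with a small Python sketch. Everything below (the `dot` helper, the sample vectors) is made up for illustration; the point is that the interpreter enforces the same kind of type discipline a careful reader should:

```python
# A tiny "type-checker" in plain Python: vectors as tuples, scalars as numbers.
# Illustrative sketch only -- names here are invented for this example.

def dot(v, w):
    """Dot product: takes two same-length vectors, returns a scalar."""
    if len(v) != len(w):
        raise TypeError(f"dot: length {len(v)} vs {len(w)} vectors")
    return sum(a * b for a, b in zip(v, w))

print(dot((1, 2, 3), (4, 5, 6)))        # 32, a scalar

try:
    dot((1, 2, 3), (4, 5))              # mismatched lengths: a type error
except TypeError as e:
    print("rejected:", e)

try:
    (1, 2) + 3                          # adding a scalar to a vector
except TypeError as e:
    print("rejected:", e)               # Python refuses, just like the table above
```

Just as the interpreter rejects the last two expressions, a type-check on your written work rejects an equation whose two sides have different types.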
https://github.com/jgm/typst-hs
https://raw.githubusercontent.com/jgm/typst-hs/main/test/typ/compiler/array-11.typ
typst
Other
// Error: 2-12 array is empty #().first()
https://github.com/Area-53-Robotics/53E-Notebook-Over-Under-2023-2024
https://raw.githubusercontent.com/Area-53-Robotics/53E-Notebook-Over-Under-2023-2024/giga-notebook/entries/identify-launcher.typ
typst
Creative Commons Attribution Share Alike 4.0 International
#import "/packages.typ": notebookinator #import notebookinator: * #import themes.radial.components: * #show: create-body-entry.with( title: "Identify: Moving Triballs", type: "identify", date: datetime(year: 2023, month: 9, day: 5), author: "<NAME>", witness: "<NAME>", ) Moving the triballs is vital to our ability to score points. The faster we move triballs to the opposite side of the field, the more points we get. Not only do we get 2 points per triball in our offensive zone, but the faster we can supply triballs to our alliance partner, the faster they can score them into the goal. #align(center)[ #image("../assets/game-challenges/moving-triballs.svg", height: 25%) ] = Design Constraints - Needs to use 11Ws of motors or less - Needs to fit within an 18" by 18" by 18" cube - If it expands, it cannot make the robot go beyond a vertical size of 36" = Design Goals - Be able to move a triball across the field in under 5s
https://github.com/claudiomattera/typst-modern-cv
https://raw.githubusercontent.com/claudiomattera/typst-modern-cv/master/src/link.typ
typst
MIT License
// Copyright <NAME> 2023-2024. // // Distributed under the MIT License. // See accompanying file License.txt, or online at // https://opensource.org/licenses/MIT /// Draw a link with a label in current font /// /// - url (str): The link URL. /// - label (str): The link label. /// -> content: The formatted link. #let draw_link(url, label) = { link(url, label) } /// Draw a link to an URL (with an optional label) in monospace font /// /// - url (str): The link URL. /// - label (str): The link label (optional). /// -> content: The formatted link. #let draw_literal_link(url, label: "") = { let label = if label == "" { url } else { label } link(url, raw(label)) }
https://github.com/herbhuang/utdallas-thesis-template-typst
https://raw.githubusercontent.com/herbhuang/utdallas-thesis-template-typst/main/layout/thesis_template.typ
typst
MIT License
#import "/layout/cover.typ": * #import "/layout/titlepage.typ": * #import "/layout/disclaimer.typ": * #import "/layout/acknowledgement.typ": acknowledgement as acknowledgement_layout #import "/layout/transparency_ai_tools.typ": transparency_ai_tools as transparency_ai_tools_layout #import "/layout/abstract.typ": * #import "/utils/print_page_break.typ": * #let thesis( title: "", titleGerman: "", degree: "", program: "", supervisor: "", advisors: (), author: "", startDate: datetime, submissionDate: datetime, abstract_en: "", abstract_de: "", acknowledgement: "", transparency_ai_tools: "", is_print: false, body, ) = { cover( title: title, degree: degree, program: program, author: author, ) pagebreak() titlepage( title: title, titleGerman: titleGerman, degree: degree, program: program, supervisor: supervisor, advisors: advisors, author: author, startDate: startDate, submissionDate: submissionDate ) print_page_break(print: is_print, to: "even") disclaimer( title: title, degree: degree, author: author, submissionDate: submissionDate ) transparency_ai_tools_layout(transparency_ai_tools) print_page_break(print: is_print) acknowledgement_layout(acknowledgement) print_page_break(print: is_print) abstract(lang: "en")[#abstract_en] abstract(lang: "de")[#abstract_de] set page( margin: (left: 30mm, right: 30mm, top: 40mm, bottom: 40mm), numbering: "1", number-align: center, ) let body-font = "New Computer Modern" let sans-font = "New Computer Modern Sans" set text( font: body-font, size: 12pt, lang: "en" ) show math.equation: set text(weight: 400) // --- Headings --- show heading: set block(below: 0.85em, above: 1.75em) show heading: set text(font: body-font) set heading(numbering: "1.1") // Reference first-level headings as "chapters" show ref: it => { let el = it.element if el != none and el.func() == heading and el.level == 1 { [Chapter ] numbering( el.numbering, ..counter(heading).at(el.location()) ) } else { it } } // --- Paragraphs --- set par(leading: 1em) // --- 
Citations --- set cite(style: "alphanumeric") // --- Figures --- show figure: set text(size: 0.85em) // --- Table of Contents --- outline( title: { text(font: body-font, 1.5em, weight: 700, "Contents") v(15mm) }, indent: 2em ) v(2.4fr) pagebreak() // Main body. set par(justify: true, first-line-indent: 2em) body // List of figures. pagebreak() heading(numbering: none)[List of Figures] outline( title:"", target: figure.where(kind: image), ) // List of tables. pagebreak() heading(numbering: none)[List of Tables] outline( title: "", target: figure.where(kind: table) ) // Appendix. pagebreak() heading(numbering: none)[Appendix A: Supplementary Material] include("/layout/appendix.typ") pagebreak() bibliography("/thesis.bib") }
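A minimal invocation sketch for this template follows. All field values below are placeholders, and the import path assumes the file layout used above:

```typst
#import "/layout/thesis_template.typ": thesis

#show: thesis.with(
  title: "An Example Thesis Title",
  titleGerman: "Ein Beispieltitel",
  degree: "Master of Science",
  program: "Computer Science",
  supervisor: "Prof. Example",
  advisors: ("Advisor One",),
  author: "Jane Doe",
  startDate: datetime(year: 2024, month: 1, day: 15),
  submissionDate: datetime(year: 2024, month: 7, day: 15),
  abstract_en: [English abstract goes here.],
  abstract_de: [German abstract goes here.],
  acknowledgement: [Acknowledgement text.],
  transparency_ai_tools: [Statement on AI tool usage.],
  is_print: false,
)

= Introduction
#lorem(50)
```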
https://github.com/El-Naizin/cv
https://raw.githubusercontent.com/El-Naizin/cv/main/modules/skills.typ
typst
Apache License 2.0
#import "../brilliant-CV/template.typ": * #cvSection("Skills") #cvSkill( type: [Languages], info: [English #hBar() French #hBar() Chinese] ) #cvSkill( type: [Tech Stack], info: [Tableau #hBar() Python (Pandas/Numpy) #hBar() PostgreSQL] ) #cvSkill( type: [Personal Interests], info: [Swimming #hBar() Cooking #hBar() Reading] )
https://github.com/SkiFire13/master-thesis
https://raw.githubusercontent.com/SkiFire13/master-thesis/master/chapters/background/6-game-characterization.typ
typst
#import "../../config/common.typ": * #import "@preview/cetz:0.2.2": canvas, draw, vector == Game characterization of the solution of systems of equations === Game definition The solution of systems of fixpoint equations can be characterized using a parity game @baldan_games, also called a powerset game. This characterization in particular allows us to determine whether some element of a basis is under the solution for one of the variables of the system. This makes sense because in practice the actual solution of the system may include a lot of information we are not interested in, for example for the $mu$-calculus it would include all the states that satisfy the given formula, while we might only be interested in knowing whether one particular state is included, or for bisimilarity it would include all pairs of processes that are bisimilar, when again we are only interested in a single pair. #definition("powerset game")[ Let $(L, sub)$ be a complete lattice and $B_L$ a basis of $L$. Let $E = tup(x) feq_tup(eta) tup(f) (tup(x))$ be a system of $n$ fixpoint equations. The powerset game is a parity game associated with $E$ defined as follows: - the vertices for player 0 are $V_0 = B_L times range(n) = { (b, i) | b in B_L and i in range(n) }$ - the vertices for player 1 are $V_1 = (2^(B_L))^n = { (X_1, ..., X_n) | X_i in 2^(B_L) }$ - the moves from player 0 vertices are $E(b, i) = { tup(X) | tup(X) in (2^(B_L))^n and b sub f_i (join tup(X)) }$ - the moves from player 1 vertices are $A(tup(X)) = { (b, i) | i in range(n) and b in X_i }$ - the priority function is defined such that: - $p(tup(X)) = 0$; - $p((b, i))$ is even if $eta_i = nu$ and odd if $eta_i = mu$; - $p((b, i)) < p((b', j))$ if $i < j$. ] Intuitively, each vertex $(b, i)$, owned by player 0, represents the fact that the basis element $b$ is under the $i$-th component of the solution. Its moves then are all the possible assignments to the tuple of variables $tup(x)$. 
These are expressed as tuples of subsets $X_1, ..., X_n$ of the basis, with the requirement that $b$ is under the result of $f_i (join X_1, ..., join X_n)$. Player 1 can then challenge player 0 by claiming that one of those subsets contains an element of the basis that is not actually under the solution, and this continues either infinitely or until one of the two players has no possible move. The priority function is not fully specified, but it can be shown that there exists a mapping to $bb(N)$ that satisfies the given order and partition into even/odd. An intuitive way would be to just list the priorities in order and map each of them to the next available even or odd natural number. It has been proven in @baldan_games that such a characterization is both correct and complete, allowing us to solve generic systems of fixpoint equations with it. #theorem("correctness and completeness of the powerset game")[ Let $E$ be a system of $n$ fixpoint equations over a complete lattice $L$ with solution $s$. For all $b in B_L$ and $i in range(n)$, we have $b sub s_i$ if and only if player 0 has a winning strategy on the powerset game associated to $E$ starting from the vertex $(b, i)$. 
] #example("game characterization", label: <game-ch-example>)[ Consider for example the system of equations given in @system-example over the boolean lattice $bb(B)$: $ syseq( x_1 &feq_mu x_1 or x_2 \ x_2 &feq_nu x_1 and x_2 \ ) $ The corresponding game characterization would be the following: #let game_example(withstrategy) = canvas({ import draw: * set-style(content: (padding: .2), stroke: black) let node(pos, name, p, label, pr) = { let cname = name + "content" content(pos, label, name: cname, padding: 1em) if p == 0 { circle(pos, name: name, radius: (1, 0.65), stroke: black) content((v => vector.add(v, (0, .16)), cname + ".south"), text(str(pr))) } else { let (x, y) = pos rect(cname + ".north-west", cname + ".south-east", name: name, radius: 0.05) content((v => vector.add(v, (-.2, .2)), cname + ".south-east"), text(str(pr))) } } node((4.5, 0), "t1", 0, $[tt, 1]$, 1) node((4.5, -2.5), "t2", 0, $[tt, 2]$, 2) node((0, 0), "tt_e", 1, $({tt}, varempty)$, 0) node((0, -2.5), "e_tt", 1, $(varempty, {tt})$, 0) node((10, -1.25), "tt_tt", 1, $({tt}, {tt})$, 0) let edge(ni, ai, nf, af, a, w) = { let pi = (name: ni, anchor: ai) let pf = (name: nf, anchor: af) let c = if withstrategy and not w { (dash: "dotted") } else { black } bezier(pi, pf, (pi, 50%, a, pf), fill: none, stroke: c, mark: (end: ">")) } edge("t1", 160deg, "tt_e", 20deg, -20deg, false) edge("t1", 240deg, "e_tt", 20deg, 20deg, true) edge("t1", 20deg, "tt_tt", 130deg, 20deg, false) edge("t2", -20deg, "tt_tt", -130deg, -20deg, true) edge("tt_e", -20deg, "t1", 200deg, -20deg, false) edge("e_tt", -20deg, "t2", 200deg, -20deg, false) edge("tt_tt", 160deg, "t1", -20deg, 20deg, false) edge("tt_tt", 200deg, "t2", 20deg, -20deg, false) }) #figure( game_example(true), caption: [Example of a game characterization], ) <game-example> As before, elliptic vertices represent player 0 positions while rectangular vertices represent player 1 positions. 
The priorities are now represented with the numbers on the bottom center or right, while the non-dotted edges correspond to the winning strategies. The way this is obtained is by starting with the player 0 positions, which are the ones we care about, since if one wanted to prove whether $tt$ is under the solution for $x_1$ or $x_2$ they would have to check whether $[tt, 1]$ or $[tt, 2]$ are winning or not. From those vertices we then have the following moves: $ E(tt, 1) &= { #h(0.3em) ({tt}, varempty), #h(0.6em) (varempty, {tt}), #h(0.6em) ({tt}, {tt}) #h(0.3em) } \ E(tt, 2) &= { #h(0.3em) ({tt}, {tt}) #h(0.3em) } $ Note that the remaining position of player 1 $(varempty, varempty)$ is not reachable, and thus was omitted from the figure. The game is ultimately won by player 0 on every position, since it can force every play to go through the position $[tt, 2]$ over and over. This position has the highest priority, at 2, thus being the highest of every play, and since it is even it makes player 0 the winner. Hence we can infer that $tt sub x^*_1$ and $tt sub x^*_2$, which implies $x^*_1 = tt$ and $x^*_2 = tt$. One can also see that swapping the equations would result in the same parity graph, except for position $[tt, 1]$ which now would have a higher odd priority than $[tt, 2]$. This makes the game losing for player 0 on all positions, since player 1 can force every play to go through $[tt, 1]$ and win. We thus get $tt subn x^*_1$ and $tt subn x^*_2$, which imply $x^*_1 = ff$ and $x^*_2 = ff$, like we already saw in @order-equations. ] === Selections In practice it is not convenient to consider all the possible moves for player 0. For instance in @game-ch-example the move from $[tt, 1]$ to $({tt}, {tt})$ is never convenient for player 0, since the moves to $({tt}, varempty)$ and $(varempty, {tt})$ would give player 1 strictly less choices. 
In fact going from $[tt, 1]$ to $({tt}, {tt})$ would be a losing move for player 0, and the only way to win is to go to $(varempty, {tt})$. In general, for player 0, it will always be convenient to play moves consisting of sets of elements with limited cardinality and as small as possible in the order. We will now see a formalization of this idea. To start we will need to consider a new order, called the _Hoare preorder_: #definition("Hoare preorder")[ Let $(P, sub)$ be a poset. The Hoare preorder, written $hsub$, is a preorder on the set $2^P$ such that, $forall X, Y subset.eq P. X hsub Y <=> forall x in X. exists y in Y. x sub y$. ] We also consider the pointwise extension $phsub$ of the Hoare preorder on the set $(2^(B_L))^n$, that is $forall X, Y in (2^(B_L))^n, tup(X) phsub tup(Y) <=> forall i in range(n). X_i hsub Y_i$, and the upward-closure with respect to it, that is given $T subset.eq (2^(B_L))^n$ then $up_H T = { tup(X) | exists tup(Y) in T and tup(Y) phsub tup(X) }$. The idea will then be for player 0 to avoid playing a move $tup(Y)$ if there exists another move $tup(X)$ such that $tup(X) phsublt tup(Y)$. More formally, it can be proven that any set of moves whose upward-closure with respect to $phsub$ is equal to $E(b, i)$ is equivalent to it for the purpose of the game. That is, we can replace the moves for that player 0 position and it would not change the winners compared to the original game. We call such sets of moves _selections_, and a point of interest will be finding small selections in order to reduce the size of the game. #definition("selection")[ Let $(L, sub)$ be a lattice. A selection is a function $sigma : (B_L times range(n)) -> 2^((2^(B_L))^n)$ such that $forall b in B_L, i in range(n). up_H sigma(b, i) = E(b, i)$. ] === Logic for upward-closed sets <upward-logic> Ideally we would be interested in the least selection; this can be shown to always exist in finite lattices, but not in infinite ones. 
Moreover when it exists it might be exponential in size. #example("least selection may not exist")[ Consider for example the complete lattice $bb(N)_omega$ seen in @poset-example, and consider a system of fixpoint equations with only the equation $x feq_mu f(x)$ where $f(n) = n + 1$ if $n in bb(N)$ and $f(omega) = omega$. We will pick the lattice itself as its basis and we will want to prove $omega sub x^*$ with $x^*$ being the solution of this equation. This will generate a powerset game starting from position $omega$ with moves $E(omega)$, for which it can be shown that the move $bb(N)$ is winning for player 0. We are however interested in selections for $E(omega)$, and it can be shown that any ${ X }$ where $X subset.eq bb(N)$ and $X$ is infinite is a valid selection for $E(omega)$. In fact $omega sub f(join X)$ can only be satisfied if $f(join X) = omega$ and thus $join X = omega$, which is true for all and only the infinite $X$. There are however infinitely many such sets, and there is no smallest one, since it is always possible to get a smaller one by removing one element. Thus there cannot be a smallest selection. ] #example("least selection can be exponential")[ The least selection can be exponential with respect to the number of variables and basis size. Take for example the function $f(x_1, ..., x_(2n)) = (x_1 or x_2) and (x_3 or x_4) and ... and (x_(2n-1) or x_(2n))$ over the boolean lattice. The corresponding minimal selection would be ${ ({tt}, varempty, {tt}, varempty, ...), ..., (varempty, {tt}, varempty, {tt}, ...) }$, which lists all the ways to satisfy each $x_(2i-1) or x_(2i)$ without making them both $tt$, which is $2^n$ and thus exponential in the number of variables. A similar construction can be made for the basis size, by taking as domain the set of $n$-tuples over the boolean lattice. ] For these reasons a logic for upward-closed sets is used to represent the $E(b, i)$ set in a more compact way. 
Additionally this allows us to generate selections which are typically small, even if they are not the least ones. From now on we will refer to formulas in such logic as "logic formulas". #definition("logic for upward-closed sets")[ Let $(L, sub)$ be a complete lattice and $B_L$ a basis of $L$. Given $n in bb(N)$ we define the following logic, where $b in B_L$ and $i in range(n)$: $ phi := [b, i] | and.big_(k in K) phi_k | or.big_(k in K) phi_k $ ] The formulas $tt$ and $ff$ are then implicitly defined as $and_(k in varempty) phi_k$ and $or_(k in varempty) phi_k$. We now give the semantics of a logic formula, which consists of the set of moves that the formula represents. We will be interested in formulas whose semantics will be equal to the set $E(b, i)$. #definition("logic formulas semantics")[ Let $(L, sub)$ be a complete lattice, $B_L$ a basis of $L$, $n in bb(N)$, $i in range(n)$ and $phi$ a logic formula. The semantics of $phi$, that is, the set of player 1 vertices it represents, is an upward-closed set $sem(phi) subset.eq (2^(B_L))^n$ with respect to $phsub$, defined as follows: $ sem([b, i]) &= { tup(X) | b in tup(X)_i } \ sem(and.big_(k in K) phi_k) &= sect.big_(k in K) sem(phi_k) \ sem(or.big_(k in K) phi_k) &= union.big_(k in K) sem(phi_k) $ ] Given a logic formula we can however define a generator for symbolic moves, which is a selection for the set represented by the logic formula semantics. This will be the set of moves that we will use in practice when solving the parity game. #definition("generator for symbolic moves")[ Let $(L, sub)$ be a complete lattice, $B_L$ a basis of $L$, $n in bb(N)$, $i in range(n)$ and $phi$ a logic formula. The moves generated by $phi$, written $M(phi)$, are: $ M([b, i]) &= { tup(X) } "with" X_i = { b } "and" forall j != i. 
X_j = varempty \ M\(and.big_(k in K) phi_k\) &= { union.big X | X in product_(k in K) M(phi_k) } \ M\(or.big_(k in K) phi_k\) &= union.big_(k in K) M(phi_k) $ ] Another advantage of representing selections using such formulas is that they can be simplified when it becomes known that some position $[b, i]$ for player 0 is winning or losing. This corresponds to assigning either true or false to the atom $[b, i]$ in the formula and propagating that through the operators it is contained in. In the parity game this would translate to either removing some moves for player 0, due to them being winning for player 1, or replacing moves for player 0 with a smaller number of them that do not give player 1 the option to reach positions that are winning for player 0. This is already exploited in the existing implementation of the symbolic algorithm @flori to potentially remove a lot of edges at once, thus simplifying the game, while preserving the winners on all positions. An alternative intuition for this logic is to see it as splitting the variables $x_i$ and functions $f_i$ of the original system of equations into multiple $x_(i b_1), x_(i b_2), ...$ and $f_(i b_1), f_(i b_2), ...$, with one for each element of the basis. Logic formulas then represent these functions, while moves for an original position $(b, i)$ for player 0 can be seen as partial assignments to the new boolean variables such that $f_(i b) (x_(1 b_1), ...)$ is true. Finally, generating symbolic moves is equivalent to extracting approximately minimal assignments from the formulas representing the functions. We will leave further exploration of this topic to the future, but for our purposes it still provides a nice intuition for how to define logic formulas for our implementations. === Translating $mu$-calculus formulas <mucalculus-translation> As seen in @mucalculus-application, $mu$-calculus formulas can be translated into systems of fixpoint equations. 
The functions appearing in such systems can also be automatically translated into logic formulas for upward-closed sets. Consider a system of fixpoint equations generated by a $mu$-calculus formula: $ syseq( x_1 &feq_eta_1 phi_1 (x_1, ..., x_n) \ &#h(0.3em)dots.v \ x_n &feq_eta_n phi_n (x_1, ..., x_n) \ ) $ We need to define a logic formula representing the moves for player 0 for each vertex $(b, i)$ for a basis element $b$ and a variable index $i$. Recall that the system of equations is defined over $2^bb(S)$, the powerset lattice of its states, while the basis is $B_(2^bb(S))$, consisting of singletons, given in @powerset-basis. We thus need to define a formula for each state $s$ and index $i$ such that the formula is true when the state $s$ satisfies the formula $phi_i (tup(x^*))$, with $tup(x^*)$ representing the actual solution of the system. Moreover, we are allowed to refer to any vertex controlled by player 0 in this formula, which is equivalent to being able to require that any other state $s'$ satisfies one of the formulas $phi_j (tup(x^*))$. We can then define the logic formula for the vertex $(s, i)$ as $F(s, phi_i (x_1, ..., x_n))$, where $F$ is in turn defined by structural induction on its second argument: #eq-columns( $ F(s, tt) &= tt \ F(s, ff) &= ff \ F(s, p) &= cases( tt & "if" s in rho_0(p) \ ff & "if" s in.not rho_0(p) ) \ F(s, psi_1 or psi_2) &= F(s, psi_1) or F(s, psi_2) \ $, $ F(s, x_i) &= [b, i] \ F(s, diam(A) psi) &= or.big_(a in sem(A)) or.big_(s ->^a t) F(t, psi) \ F(s, boxx(A) psi) &= and.big_(a in sem(A)) and.big_(s ->^a t) F(t, psi) \ F(s, psi_1 and psi_2) &= F(s, psi_1) and F(s, psi_2) \ $ ) It is interesting to note that the cases for $diam(A) psi$ and $boxx(A) psi$ effectively take the respective semantics definitions, which use existential and universal quantifiers, and translate them to finite sequences of, respectively, disjunctions and conjunctions between the elements that satisfy such quantifiers. 
The definition also did not include fixpoint formulas since those had already been removed when translating to a system of fixpoint equations. === Translating bisimilarity <bisimilarity-translation> Likewise for bisimilarity we have seen in @bisimilarity-application that it can be translated to a fixpoint equation, which in turn can be seen as a system of a single fixpoint equation. As with $mu$-calculus formulas the domain is the powerset lattice $2^(bb(S) times bb(S))$, and thus its basis is $B_(2^(bb(S) times bb(S)))$, which can also be expressed as ${ {(s_1, s_2)} | s_1, s_2 in bb(S) }$. Since there is just one variable and equation we will only define $F(s_1, s_2)$, representing the formula for the player 0 vertex $((s_1, s_2), 1)$: $ F(s_1, s_2) = ( (and.big_(a in Act) and.big_(s_1 ->^a t_1) or.big_(s_2 ->^a t_2) [(t_1, t_2), 1]) and (and.big_(a in Act) and.big_(s_2 ->^a t_2) or.big_(s_1 ->^a t_1) [(t_1, t_2), 1]) ) $ #example("logic formulas for bisimilarity")[ For example the formulas for the pair of states in the labelled transition systems shown in @bisimilarity-example are the following: #import "@preview/equate:0.2.0": equate #equate($ F(v_0, u_0) &= ([(v_1, u_1), 1] and [(v_2, u_1), 1]) and ([(v_1, u_1), 1] or [(v_2, u_1), 1]) \ &= [(v_1, u_1), 1] and [(v_2, u_1), 1] \ F(v_0, u_1) &= ff and ff = ff \ F(v_0, u_2) &= ff and tt = ff \ #v(0em) \ F(v_1, u_0) &= ff and ff = ff \ F(v_1, u_1) &= [(v_3, u_2), 1] and [(v_3, u_2), 1] = [(v_3, u_2), 1] \ F(v_1, u_2) &= ff and tt = ff \ #v(0em) \ F(v_2, u_0) &= ff and ff = ff \ F(v_2, u_1) &= [(v_3, u_2), 1] and [(v_3, u_2), 1] = [(v_3, u_2), 1] \ F(v_2, u_2) &= ff and tt = ff \ #v(0em) \ F(v_3, u_0) &= tt and ff = ff \ F(v_3, u_1) &= tt and ff = ff \ F(v_3, u_2) &= tt and tt = tt $) ] === Translating parity games <parity-translation> It is known that parity games can also be translated to nested fixpoints @parity_to_fixpoint, which in turn are equivalent to systems of fixpoint equations. 
We will later use this fact to generate simple problems for testing our implementation. In particular, given a parity game $G = (V_0, V_1, E, p)$ we can define a system of fixpoint equations on the boolean lattice $bb(B)$, where $tt$ represents a vertex being winning for player 0 while $ff$ is winning for player 1. Then for each vertex $v in V_0 union V_1$ a variable $x_v$ will be defined along with the following equation: $ x_v feq_eta cases( union.sq.big_(u in v E) x_u & "if " v in V_0, sect.sq.big_(u in v E) x_u & "if " v in V_1 ) #h(4em) "with" eta = cases( nu & "if" p(v) "even", mu & "if" p(v) "odd" ) $ Intuitively, a vertex in $V_0$ is winning for player 0 if any of its successors is also winning for them, because they can choose to move to that successor and keep winning. Meanwhile, a vertex in $V_1$ is winning for player 0 if all its successors are winning for them, because otherwise player 1 would have the option to move to any successor that is not winning for player 0 and win. The priority of vertices must however also be taken into account in order to determine the winner of infinite plays, which we can reduce to plays ending with a cycle. If one happens, the last equation corresponding to a vertex of the cycle will have both $tt$ and $ff$ as fixpoints, and will thus decide the winner for the entire cycle, which is why the equations corresponding to vertices with higher priorities have to be sorted last. The winner is then chosen by whether the fixpoint equation is a greatest fixpoint or a least fixpoint: if it is a greatest fixpoint the solution will be $tt$ and player 0 will win, otherwise it will be $ff$ and player 1 will win. This is the reason why the fixpoint type was chosen according to the priority of the vertex: if it is even then player 0 wins the cycle in the parity game and hence the equation must be a greatest fixpoint, otherwise player 1 wins and the equation must be a least fixpoint. These functions can be trivially converted to logic formulas. 
Notice that the atom $(tt, i)$, where $i$ is the index of the equation with variable $x_u$, is true if and only if the solution for $x_u$ is $tt$; otherwise, if the atom is false, the solution is $ff$. As such the equations of the system can be converted to logic formulas by replacing each variable $x_u$ with the atom $(tt, i)$, where $i$ is the index of the variable $x_u$, each $join$ with $or$ and each $meet$ with $and$. #example("translation and logic formulas for a parity game")[ For example, the parity game in @parity-example would be translated to the following system of fixpoint equations: $ syseq( v_0 &feq_nu v_1 join v_2 \ v_1 &feq_nu v_0 \ v_2 &feq_mu v_1 meet v_3 \ v_4 &feq_nu v_2 join v_3 \ v_3 &feq_mu v_4 ) $ This system can then be translated to the following formulas: $ F(tt, 1) &= [tt, 2] or [tt, 3] \ F(tt, 2) &= [tt, 1] \ F(tt, 3) &= [tt, 2] and [tt, 5] \ F(tt, 4) &= [tt, 3] or [tt, 5] \ F(tt, 5) &= [tt, 4] $ ]
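The translation just described is mechanical enough to sketch in a few lines of code. The following Python fragment is an illustrative sketch only (it is not the implementation discussed here, and the `translate` function, its data layout, and the tiny example game are all made up for this example): player-0 vertices become joins, player-1 vertices become meets, and the fixpoint type follows the parity of the priority.

```python
# Illustrative sketch of the parity-game-to-fixpoint-system translation.
# Player-0 vertices become joins ("or"), player-1 vertices become meets ("and");
# even priority gives a greatest fixpoint (nu), odd gives a least fixpoint (mu).

def translate(vertices, owner, priority, succ):
    """vertices: ids sorted so that higher priorities come last;
    owner[v] in {0, 1}; succ[v]: list of successor vertex ids."""
    index = {v: i + 1 for i, v in enumerate(vertices)}  # 1-based equation index
    system = []
    for v in vertices:
        op = "or" if owner[v] == 0 else "and"          # join for V0, meet for V1
        fix = "nu" if priority[v] % 2 == 0 else "mu"   # even -> nu, odd -> mu
        system.append((index[v], fix, op, [index[u] for u in succ[v]]))
    return system

# A tiny made-up game: vertex "a" owned by player 0 with even priority,
# vertex "b" owned by player 1 with odd priority.
vertices = ["a", "b"]
owner = {"a": 0, "b": 1}
priority = {"a": 0, "b": 1}
succ = {"a": ["b"], "b": ["a", "b"]}
print(translate(vertices, owner, priority, succ))
# -> [(1, 'nu', 'or', [2]), (2, 'mu', 'and', [1, 2])]
```

Each tuple here plays the role of one equation of the system, with the successor indices standing in for the atoms $[tt, i]$.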
https://github.com/goshakowska/Typstdiff
https://raw.githubusercontent.com/goshakowska/Typstdiff/main/tests/test_complex/para/para_inserted_result.typ
typst
In this report, we will explore the various factors that influence #emph[fluid dynamics] in glaciers and how they contribute to the formation and behaviour of these natural structures. #underline[Something];#underline[ ];#underline[new.] All manuscripts are to be submitted electronically to the ScholarOne Abstracts site created for each conference. The manuscript upload will be enabled several weeks after acceptance notices have been sent. Presenting authors of accepted papers will receive an email with instructions when manuscript submission opens. It is important that presenting authors keep their email addresses up-to-date so they do not miss this notice. It is the responsibility of the author to obtain any required government or company reviews for their papers in advance of publication. Start early to determine if the reviews are required; this process can take several weeks. #underline[New];#underline[ ];#underline[paragraph.]
https://github.com/r8vnhill/apunte-bibliotecas-de-software
https://raw.githubusercontent.com/r8vnhill/apunte-bibliotecas-de-software/main/unit2.typ
typst
= Unit 2: Object-Oriented Programming #include "Unit2/OOP.typ" #include "Unit2/Objects.typ" #include "Unit2/Classes.typ" #include "Unit2/Lateinit.typ" #include "Unit2/Encapsulation.typ" #include "Unit2/Inheritance.typ" #include "Unit2/Constructors.typ" #include "Unit2/Interfaces.typ" #include "Unit2/abstract_classes.typ" #include "Unit2/open_closed.typ" #include "Unit2/operator_overloading.typ" #include "Unit2/properties.typ" #include "Unit2/Extension.typ" #include "Unit2/infix.typ" #include "Unit2/data_classes.typ" #include "Unit2/companion.typ" #include "Unit2/enums.typ" == Answers: *Exercise: Implementing an Event Manager* ```kotlin object EventManager { val events = mutableListOf<String>() fun addEvent(event: String) { events.add(event) } fun getEvents(): List<String> { return events.toList() } } ``` *Exercise: Interface and Enumeration of Game Actions* ```kotlin class Player(health: Int, mana: Int) { var health = health private set var mana = mana private set fun increaseHealth(amount: Int) { health += amount } fun decreaseHealth(amount: Int) { health -= amount } fun increaseMana(amount: Int) { mana += amount } fun decreaseMana(amount: Int) { mana -= amount } } ``` ```kotlin enum class GameEvent : GameAction { HEALTH_BOOST { override fun execute(player: Player) { player.increaseHealth(20) } }, MANA_DRAIN { override fun execute(player: Player) { player.decreaseMana(15) } }, POISON { override fun execute(player: Player) { player.decreaseHealth(10) } }, ENERGY_SURGE { override fun execute(player: Player) { player.increaseMana(20) } }; } ```
https://github.com/slashformotion/typst-http-api
https://raw.githubusercontent.com/slashformotion/typst-http-api/master/README.md
markdown
# Typst HTTP API <!-- This sentence is from the typst repo --> > [Typst](https://github.com/typst/typst) is a new markup-based typesetting system > that is designed to be as powerful as LaTeX while being much easier to learn and use. I recommend that you check it out if you don't know it yet. This project is a web server that allows users to compile typst markup remotely via a simple API call. This webserver is provided in the form of a docker container. *For now there is no official image on any registry.* I want to bring some elements to your attention: - Please be aware that while the container runs, I do not consider this project production-ready; more work is needed. - All contributions are of course welcome. - Currently, there is no way to compile a file that loads external resources (images or other `.typ` files for example). Current version: v0.1.0 ## Build and run Build the docker image ```shell docker build . -t typst_image # This command builds an image using the Dockerfile at the root of the project, # then tags it with "typst_image" ``` Create a container: ```shell docker run -p 8000:8000 typst_image # This command creates a docker container based on the image created in the previous step ``` Send `test.typ` to the api and output the file to `result.pdf`: ```shell curl -H "Content-Type:text/plain" --data-binary @test.typ http://localhost:8000 --output result.pdf ``` Or more simply use [httpie](https://httpie.io/cli): ```shell cat test.typ | http POST http://localhost:8000 > result.pdf ``` - If the compilation succeeds, you will get a response with an HTTP code `200`. The body of the response will contain the pdf document. - On invalid input, you will get a JSON object containing the error returned by the compiler with an HTTP code `422 Unprocessable Content`. ```json { "error": "compile error: 16:21 expected length, found string" } ```
https://github.com/Dav1com/resume
https://raw.githubusercontent.com/Dav1com/resume/main/modules/certificates.typ
typst
Apache License 2.0
#import "../brilliant-CV/template.typ": * #cvSection("Achievements") #cvHonor( date: [2024], title: [Best Application], location: [Santiago, Chile], issuer: [Verifiable Randomness Hackathon by Laboratory of Applied Cryptography and Cybersecurity (CLCERT)], ) #cvHonor( date: [2022], title: [Place 21/105], location: [Coquimbo, Chile], issuer: [International Collegiate Programming Contest South America/South Finals], ) #cvHonor( date: [2021], title: [Place 30/63], location: [Valparaíso, Chile], issuer: [International Collegiate Programming Contest South America/South Finals] ) #cvHonor( date: [2019], title: [Place 11/153], location: [Santiago, Chile], issuer: [Chilean Olympiads of Informatics (OCI)] )
https://github.com/freundTech/typst-notebook
https://raw.githubusercontent.com/freundTech/typst-notebook/main/example.typ
typst
#import "typst-notebook.typ": * //#notebook-add-mimetype-handler("application/vnd.vegalite.v3+json", repr) = Introduction to Typst Notebook == Code and result #notebook("python")[ #set image(width: 35%) #notebook-cell( ```python from matplotlib import pyplot as plt import numpy as np # Generate 100 random data points along 3 dimensions x, y, scale = np.random.randn(3, 100) fig, ax = plt.subplots() # Map each onto a scatterplot we'll create with Matplotlib ax.scatter(x=x, y=y, c=scale, s=np.abs(scale)*500) ax.set(title="Some random data, created with Typst!") plt.show() ``` ) ] == Split in multiple parts #notebook("python")[ Initialize a string #notebook-cell(```python string = "Hello " ```) Add a name #notebook-cell(```python string += "typst" ```) Output the string #notebook-cell(```python display(string) ```) ] == Using other kernels is also supported #notebook("xsqlite")[ #notebook-cell(```sql %CREATE example_db.db ```) #notebook-cell(```sql CREATE TABLE players (Name STRING, Class STRING, Level INTEGER, Hitpoints INTEGER) ```) #notebook-cell(```sql INSERT INTO players (Name, Class, Level, Hitpoints) VALUES ("<NAME>", "Warrior", 3, 40) ```) #notebook-cell(```sql INSERT INTO players (Name, Class, Level, Hitpoints) VALUES ("<NAME>", "Cleric", 2, 20); ```) #notebook-cell(```sql SELECT Name, Level, Hitpoints FROM players; ```) #notebook-cell(```sql INSERT INTO players (Name, Class, Level, Hitpoints) VALUES ("<NAME>", "Wizard", 1, 10); ```) #notebook-cell(```sql %XVEGA_PLOT X_FIELD Level Y_FIELD Hitpoints MARK circle WIDTH 100 HEIGHT 200 <> SELECT Level, Hitpoints FROM players ```) ]
https://github.com/piepert/philodidaktik-hro-phf-ifp
https://raw.githubusercontent.com/piepert/philodidaktik-hro-phf-ifp/main/src/parts/spue/planung/kurzentwurf.typ
typst
Other
#import "/src/template.typ": * == #ix("Kurzentwurf") For an example of the contents of a Kurzentwurf (short lesson draft), see p. #refpage(<ephid-plan-descartes>). It is important that you draft an #ix("Erwartungshorizont") (expected-answer horizon) for every task of the fully planned lesson.
https://github.com/0x1B05/nju_os
https://raw.githubusercontent.com/0x1B05/nju_os/main/lecture_notes/content/12_真实世界的并发Bug.typ
typst
#import "../template.typ": * #pagebreak() = Real-World Concurrency Bugs == Concurrency Bugs That Cost Lives === First, Recall a Concurrency Bug If you "take" an item at the exact instant another item is being picked up, the "taken" item turns into the "picked-up" item - Video - Apparently caused by shared state ```js v = "$1"; // Expected timing // InHand = v; Event(pickup) { v = "$99"; Inventory.append(v); } // Actual timing InHand = v; ``` === "Killed by a Machine" Therac-25 Incident (1985-1987) - Caused at least 6 deaths - A concurrency bug caused by event-driven code (no multithreading involved) === The Therac-25 Earlier models used a hardware interlock circuit to halt the machine when the assertion failed ``` assert mode in [XRay, Electron] assert mirror in [On, Off] assert not (mode == XRay and mirror == Off) ``` === The Killer Software Bug in History A disaster caused by assumptions about atomicity and asynchronous execution - Select X-ray mode - The machine starts moving the mirror - Takes ~8s to complete - Meanwhile, switch to Electron mode - The beam source runs in X-ray mode - The mirror is not yet in place - Malfunction 54 - Operators who handled all kinds of malfunctions every day instinctively pressed Continue - Disaster strikes === This Was Not Even the Last Killer Bug in the Therac-25 After the problem was fixed... - If the operator sent a command at the exact moment the counter overflowed, the machine would skip setting up some of the beam accessories The final solution - An independent hardware safety mechanism - Halts the machine directly whenever a radiation overdose occurs == Deadlocks === Deadlock #tip("Tip")[ A deadlock is a state in which each member of a group is waiting for another member, including itself, to take action. ] A situation in which threads "wait for each other" === AA-Deadlock A model checker can tell you why ```c lock(&lk); // lk = LOCKED; lock(&lk); // while (xchg(&lk, LOCKED) == LOCKED) ; ``` Looks silly; you think you would never make this mistake? - No, you will! - The complexity of real systems is waiting for you - Multiple layers of function calls - Hidden control flow For example, kernel code: ```c Tworker(){ func(); lock(&lk); interrupt(); unlock(&lk); } interrupt(){ ... lock(&lk); // deadlock interrupt(); unlock(&lk); ... } ``` For example, multiple layers of functions: ```c Tworker(){ func(); lock(&lk); func2(..); unlock(&lk); } func2(..){ ... func3(..); } func3(..){ ... lock(&lk); // deadlock ... } ``` === ABBA-Deadlock The dining philosophers problem #code(caption: [Dining philosophers deadlock])[ ```c void Tphilosopher() { P(&avail[lhs]); P(&avail[rhs]); // ... 
V(&avail[lhs]); V(&avail[rhs]); } ``` ] - T1 : P(1) - succeeds, P(2) - waits - T2 : P(2) - succeeds, P(3) - waits - T3 : P(3) - succeeds, P(4) - waits - T4 : P(4) - succeeds, P(5) - waits - T5 : P(5) - succeeds, P(1) - waits === Necessary Conditions for Deadlock [ System deadlocks (1971) ](https://dl.acm.org/doi/10.1145/356586.356588): the four necessary conditions for deadlock (the original paper is excellent and worth reading) - Described in terms of "resources" - From the state-machine perspective: simply "the locks (campus cards/balls) held in the current state" 1. Mutual-exclusion - a campus card can be owned by only one person 2. Wait-for - while waiting for other campus cards, a person does not release the ones already held 3. No-preemption - you cannot snatch someone else's campus card 4. Circular-chain - a circular waiting chain of campus cards forms All four conditions are "indispensable" - Breaking any one of them is enough to avoid deadlock (but in practice the campus card must keep its mutually exclusive role, so breaking 1, 2, or 3 is very hard) - Under the premise that program logic stays correct, "breaking" them is not that easy at all... For example, condition 2 (while waiting for other campus cards, a person does not release the ones already held): ```c lock(A) action1; lock(B) ... unlock(B) action2; unlock(A) ``` If this condition is broken, i.e. A may be released while waiting for B, then action1 and action2 can be separated, and they are no longer mutually exclusive. #tip("Tip")[ Eliminating circular waiting is the more cost-effective approach. ] == Data Races #tip("Tip")[ - So if we never lock, doesn't that mean no deadlock? - Data races: the root of concurrency bugs ] === Data Races #definition("Definition")[ Data race: *different threads* access *the same memory* at the same time, and *at least one access is a write*. ] - Two memory accesses are in a "race"; the operation that "wins the race" executes first - Example: Peterson's algorithm implemented on shared memory #tip("Tip")[ Locks eliminate data races. ] ==== "Winning the Race" Is Not as Simple as You Might Think A weak memory model allows different observers to see different results Since C11: #link("https://en.cppreference.com/w/c/language/memory_model")[ data race is undefined behavior ] #tip("Tip")[ Read a more rigorous definition of data races. ] === Data Races: All You Need to Remember Protect shared data with locks Eliminate every data race === Data Races: Examples The following code covers most of the data-race situations you will run into - Don't laugh: your bugs are almost all variants of these two cases ```c // Case #1: locking the wrong lock void thread1() { spin_lock(&lk1); sum++; spin_unlock(&lk1); } void thread2() { spin_lock(&lk2); sum++; spin_unlock(&lk2); } // race -> UB // Case #2: forgetting to lock void thread1() { spin_lock(&lk1); sum++; spin_unlock(&lk1); } void thread2() { sum++; } ``` === Why Shouldn't You Laugh? 
Different threads access the same memory at the same time, and at least one access is a write (different orders lead to different results) - The "memory" can be any memory in the address space - It can be a global variable - It can be a heap-allocated variable - It can be the stack - The "access" can be any code - It may happen in your own code - It may happen in framework code - It may be a line of assembly you have never read - It may even be a single ret instruction (which accesses the stack; you will run into this later in the multiprocessor OS kernel lab) #tip("Tip")[ Finding every place where every thread may access memory is hard. ] == Atomicity and Order Violations === The Essence of Concurrent Programming Humans are sequential creatures - We can only understand concurrency in a sequential way - The program is divided into "blocks", each of which appears to run uninterrupted (atomically) - Blocks with a logical before/after relation are correctly synchronized - Example: produce → (happens-before) → consume Concurrency-control mechanisms are entirely "at your own risk" - Mutexes (lock/unlock) provide atomicity - Forget to lock: atomicity violation (Atomicity Violation, AV) - Condition variables/semaphores (wait/signal) provide ordering synchronization - Forget to synchronize: order violation (Order Violation, OV) === So, Do Programmers Use Them Correctly? "Empirical study" - Collected 105 concurrency bugs from real systems (not caught in development; only discovered in production) - MySQL (14/9), Apache (13/4), Mozilla (41/16), OpenOffice (6/2) - Checked whether meaningful conclusions could be drawn 97% of non-deadlock concurrency bugs are atomicity or ordering errors - "Humans really are sequential creatures" - #link("https://dl.acm.org/doi/10.1145/1346281.1346323")[ Learning from mistakes - A comprehensive study on real world concurrency bug characteristics ] (ASPLOS'08, Most Influential Paper Award) === Atomicity Violations (AV) "ABA" - I thought this piece of code was harmless, but someone barged in between - Even with separate locks (eliminating the data race), it is still an AV - The item-duplication example in Diablo I - "Move mirror + set state" in the Therac-25 #image("images/2024-03-18-18-17-51.png") Operating systems have even more shared state - "TOCTTOU" - time of check to time of use - #image("images/2024-03-18-18-18-05.png") #tip("Tip")[ Shortcuts are dangerous: if a symbolic link to a system file is created, the user has no permission to access it, but the `Sendmail` program does. The attack exploits the window between `Check` and `Use`. ] - [ TOCTTOU vulnerabilities in UNIX-style file systems: An anatomical study ](https://www.usenix.org/legacy/events/fast05/tech/full_papers/wei/wei.pdf) (FAST'05); we can reproduce this problem with the process model! === Order Violations (OV) "BA" - Why didn't it run in the order (AB) I expected? - Example: concurrent use-after-free (a synchronization is needed, after the use and before the free) #image("images/2024-03-18-18-18-34.png")
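The tip in the deadlock section above notes that eliminating circular waiting is usually the cheapest of the four conditions to break. A minimal Python sketch of that fix (illustrative only, not from the lecture; the helper name `locked_in_order` is made up): both threads acquire the two locks in one fixed global order, so the ABBA pattern cannot arise.

```python
import threading

lock_a = threading.Lock()
lock_b = threading.Lock()
counter = 0

def locked_in_order(*locks):
    # Impose a fixed global order on lock acquisition (here: by object id)
    # to break the circular-chain condition.
    return sorted(locks, key=id)

def worker(first, second):
    global counter
    for _ in range(1000):
        ordered = locked_in_order(first, second)
        for lk in ordered:          # acquire in global order
            lk.acquire()
        counter += 1                # critical section, protected by both locks
        for lk in reversed(ordered):
            lk.release()            # release in reverse order

# One thread "wants" A then B, the other B then A; the helper reorders both
# requests into the same global order, so no ABBA deadlock can form.
t1 = threading.Thread(target=worker, args=(lock_a, lock_b))
t2 = threading.Thread(target=worker, args=(lock_b, lock_a))
t1.start(); t2.start()
t1.join(); t2.join()
print(counter)  # 2000: both threads finished, no deadlock, no lost update
```

Without the reordering helper, the same two threads could each grab their first lock and then wait forever for the other's.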
https://github.com/Isaac-Fate/booxtyp
https://raw.githubusercontent.com/Isaac-Fate/booxtyp/master/src/theorems/proof.typ
typst
Apache License 2.0
#import "../colors.typ": color-schema #let proof(content) = { block(width: 100%)[ // The word "Proof" in italic #text(style: "italic", fill: color-schema.orange.primary)[*Proof*] // A small space between the word "Proof" and the main content #h(0.3em) // Main content of the proof #content // Start a new line #parbreak() // Push the following square to the end of the lines #h(1fr) // A filled square at the end indicating the end of the proof, Q.E.D. $square.filled$ ] }
https://github.com/lyzynec/orr-go-brr
https://raw.githubusercontent.com/lyzynec/orr-go-brr/main/09/main.typ
typst
#import "../lib.typ": * #knowledge[ #question(name: [Explain the necessary changes to the LQR framework in the case when the system is exposed to some random disturbances and the initial conditions are random as well (this is called _stochastic_ LQR problem). The answer is that there are no changes needed, the same formulas for the optimal state--feedback gain can be used as in the deterministic case.])[ The answer is already given in the question: no changes are needed. ] #question(name: [Discuss the guarantees on the stability margins for the LQR state--feedback regulator.])[ #align(center)[#grid(columns: 2, row-gutter: 10pt, column-gutter: 10pt, align: left, inset: 5pt, [gain margin], $"GM"_+ &= oo\ "GM"_- &= 1/2$, [phase margin], $"PM"_+ &>= 60 deg\ "PM"_- &<= -60 deg$ )] ] #question(name: [Discuss the possible extensions of the LQR framework in the situation when not all the states are measured. In particular, explain the idea behind the LQG controller, that is, a combination of a LQR state--feedback controller and a Kalman filter.])[ The idea is that we use the optimal observer for Gaussian noise -- the Kalman filter -- as the source of state estimates for the LQR controller. ] #question(name: [Discuss the guarantees on the stability margins for LQG controller. Here, John Doyle's famous abstract gives a sufficient answer: there are none...])[ Loop by loop, the robustness can be good, but once the loops are put together there are no such guarantees. ] #question(name: [Explain the key idea behind the Loop Transfer Recovery (LTR) control strategy as a heuristic means of restoring the robustness of an LQG controller.])[ To force the Kalman filter not to trust the plant-input information so heavily, fictitious noise is introduced at the input. 
] #question(name: [Reformulate both the LQR and the LQG problems with the new setup featuring a _generalized system_ and a feedback controller in the feedback loop.])[ #align(center)[#image("includes/h2-generalized.png", width: 60%)] ] #question(name: [Give the definition of the $H^2$ system norm and explain how its minimization relates to LQR/LQG--optimal control.])[ The $H^2$ norm is defined for a stable and strictly proper system as $ norm(bold(G))_2 = sqrt( 1/(2 pi) integral_(-oo)^(oo) op("tr") [bold(G)^* (j omega) bold(G) (j omega)] upright(d) omega) $ and in the time domain $ norm(bold(G))_2 = sqrt( integral_0^oo op("tr") [bold(g)^T (t) bold(g) (t)] upright(d) t ) $ where $bold(g) (t)$ is the impulse response. The $H^2$ norm indicates the amplification of white noise $ E{bold(y)^T (oo) bold(y) (oo)} = norm(bold(G))_2^2 S_w $ #align(center)[#image("includes/h2-lq-generalized.png", width: 60%)] ] ] #skills[ #question(name: [Design an LQG/LTR regulator/controller.])[] ]
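The two definitions of the $H^2$ norm can be cross-checked numerically: for a stable state-space realization $(A, B, C)$ the norm equals $sqrt(op("tr")(C P C^T))$, where $P$ is the controllability Gramian solving $A P + P A^T + B B^T = 0$. A small NumPy sketch (illustrative only; the example matrices are made up for this check):

```python
import numpy as np

def h2_norm(A, B, C):
    """H2 norm via the controllability Gramian: solve A P + P A^T = -B B^T,
    then ||G||_2 = sqrt(trace(C P C^T))."""
    n = A.shape[0]
    I = np.eye(n)
    # Vectorized Lyapunov equation: (kron(A, I) + kron(I, A)) vec(P) = vec(-B B^T)
    M = np.kron(A, I) + np.kron(I, A)
    p = np.linalg.solve(M, (-B @ B.T).flatten())
    P = p.reshape(n, n)
    return float(np.sqrt(np.trace(C @ P @ C.T)))

# First-order example G(s) = 1/(s + 1): the norm is sqrt(1/2) ~ 0.7071,
# matching the frequency-domain integral (1/(2 pi)) * int 1/(1 + w^2) dw = 1/2.
A = np.array([[-1.0]]); B = np.array([[1.0]]); C = np.array([[1.0]])
print(h2_norm(A, B, C))
```

For large systems one would use a dedicated Lyapunov solver instead of the Kronecker construction, which scales as $n^2 times n^2$.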
https://github.com/Myriad-Dreamin/shiroa
https://raw.githubusercontent.com/Myriad-Dreamin/shiroa/main/github-pages/docs/book.typ
typst
Apache License 2.0
#import "@preview/shiroa:0.1.1": * #show: book #book-meta( title: "shiroa", description: "shiroa Documentation", repository: "https://github.com/Myriad-Dreamin/shiroa", repository-edit: "https://github.com/Myriad-Dreamin/shiroa/edit/main/github-pages/docs/{path}", authors: ("Myriad-Dreamin", "7mile"), language: "en", summary: [ // begin of summary #prefix-chapter("introduction.typ")[Introduction] = User Guide - #chapter("guide/installation.typ")[Installation] - #chapter("guide/get-started.typ")[Get Started] - #chapter("guide/faq.typ")[Frequently Asked Questions] - #chapter(none)[Further reading] = Reference Guide - #chapter("cli/main.typ")[Command Line Tool] - #chapter("cli/init.typ")[init] - #chapter("cli/build.typ")[build] - #chapter("cli/serve.typ")[serve] - #chapter("cli/clean.typ")[clean] - #chapter("cli/completions.typ")[completions] - #chapter("format/main.typ")[Format] - #chapter("format/book.typ")[book.typ] - #chapter("format/book-meta.typ")[Book Metadata] - #chapter(none)[Draft chapter] // - #chapter(none)[chapter with - markers] // - #chapter(none, "= Introduction") // - #chapter(none)[#text("= Introduction")] - #chapter("format/build-meta.typ")[Build Metadata] - #chapter("format/theme.typ")[Theme] - #chapter("format/supports.typ")[Typst Support] - #chapter("format/supports/cross-ref.typ")[Cross Reference] - #chapter("format/supports/cross-ref-sample.typ")[Cross Reference Sample] - #chapter("format/supports/embed-html.typ")[Embed Sanitized HTML Elements] - #chapter("format/supports/multimedia.typ")[Multimedia components] - #chapter("format/supports/sema-desc.typ")[Semantic Page Description] - #chapter(none)[For developers] - #chapter(none)[Typst-side APIs] - #chapter(none)[shiroa CLI Internals] - #chapter(none)[Alternative Backends] // end of summary ], ) #build-meta(dest-dir: "../dist") #get-book-meta() // re-export page template #import "/contrib/typst/gh-pages.typ": project, heading-reference #let book-page = project #let cross-link = cross-link #let 
heading-reference = heading-reference
https://github.com/typst/packages
https://raw.githubusercontent.com/typst/packages/main/packages/preview/unichar/0.1.0/ucd/block-10800.typ
typst
Apache License 2.0
#let data = ( ("CYPRIOT SYLLABLE A", "Lo", 0), ("CYPRIOT SYLLABLE E", "Lo", 0), ("CYPRIOT SYLLABLE I", "Lo", 0), ("CYPRIOT SYLLABLE O", "Lo", 0), ("CYPRIOT SYLLABLE U", "Lo", 0), ("CYPRIOT SYLLABLE JA", "Lo", 0), (), (), ("CYPRIOT SYLLABLE JO", "Lo", 0), (), ("CYPRIOT SYLLABLE KA", "Lo", 0), ("CYPRIOT SYLLABLE KE", "Lo", 0), ("CYPRIOT SYLLABLE KI", "Lo", 0), ("CYPRIOT SYLLABLE KO", "Lo", 0), ("CYPRIOT SYLLABLE KU", "Lo", 0), ("CYPRIOT SYLLABLE LA", "Lo", 0), ("CYPRIOT SYLLABLE LE", "Lo", 0), ("CYPRIOT SYLLABLE LI", "Lo", 0), ("CYPRIOT SYLLABLE LO", "Lo", 0), ("CYPRIOT SYLLABLE LU", "Lo", 0), ("CYPRIOT SYLLABLE MA", "Lo", 0), ("CYPRIOT SYLLABLE ME", "Lo", 0), ("CYPRIOT SYLLABLE MI", "Lo", 0), ("CYPRIOT SYLLABLE MO", "Lo", 0), ("CYPRIOT SYLLABLE MU", "Lo", 0), ("CYPRIOT SYLLABLE NA", "Lo", 0), ("CYPRIOT SYLLABLE NE", "Lo", 0), ("CYPRIOT SYLLABLE NI", "Lo", 0), ("CYPRIOT SYLLABLE NO", "Lo", 0), ("CYPRIOT SYLLABLE NU", "Lo", 0), ("CYPRIOT SYLLABLE PA", "Lo", 0), ("CYPRIOT SYLLABLE PE", "Lo", 0), ("CYPRIOT SYLLABLE PI", "Lo", 0), ("CYPRIOT SYLLABLE PO", "Lo", 0), ("CYPRIOT SYLLABLE PU", "Lo", 0), ("CYPRIOT SYLLABLE RA", "Lo", 0), ("CYPRIOT SYLLABLE RE", "Lo", 0), ("CYPRIOT SYLLABLE RI", "Lo", 0), ("CYPRIOT SYLLABLE RO", "Lo", 0), ("CYPRIOT SYLLABLE RU", "Lo", 0), ("CYPRIOT SYLLABLE SA", "Lo", 0), ("CYPRIOT SYLLABLE SE", "Lo", 0), ("CYPRIOT SYLLABLE SI", "Lo", 0), ("CYPRIOT SYLLABLE SO", "Lo", 0), ("CYPRIOT SYLLABLE SU", "Lo", 0), ("CYPRIOT SYLLABLE TA", "Lo", 0), ("CYPRIOT SYLLABLE TE", "Lo", 0), ("CYPRIOT SYLLABLE TI", "Lo", 0), ("CYPRIOT SYLLABLE TO", "Lo", 0), ("CYPRIOT SYLLABLE TU", "Lo", 0), ("CYPRIOT SYLLABLE WA", "Lo", 0), ("CYPRIOT SYLLABLE WE", "Lo", 0), ("CYPRIOT SYLLABLE WI", "Lo", 0), ("CYPRIOT SYLLABLE WO", "Lo", 0), (), ("CYPRIOT SYLLABLE XA", "Lo", 0), ("CYPRIOT SYLLABLE XE", "Lo", 0), (), (), (), ("CYPRIOT SYLLABLE ZA", "Lo", 0), (), (), ("CYPRIOT SYLLABLE ZO", "Lo", 0), )
https://github.com/dangh3014/postercise
https://raw.githubusercontent.com/dangh3014/postercise/main/utils/scripts.typ
typst
MIT License
// Color and size states #let color-primary = state("color-primary", teal) #let color-background = state("color-background", white) #let color-accent = state("color-accent", yellow) #let color-titletext = state("color-titletext", black) #let size-titletext = state("size-titletext", 2em) // Content states #let title-content = state("title-body") #let subtitle-content = state("subtitle-body") #let author-content = state("author-body") #let affiliation-content = state("affiliation-body") #let logo-1-content = state("logo-1-body") #let logo-2-content = state("logo-2-body") #let focus-content = state("focus-body") #let footer-content = state("footer-body") #let theme( primary-color: rgb(28,55,103), // Dark blue background-color: white, accent-color: rgb(243,163,30), // Yellow titletext-color: white, titletext-size: 2em, body, ) = { set page( margin: 0pt, ) color-primary.update(primary-color) color-background.update(background-color) color-accent.update(accent-color) color-titletext.update(color-titletext => titletext-color) size-titletext.update(size-titletext => titletext-size) body } #let poster-header( title: none, subtitle: none, authors: none, affiliation: none, logo-1: none, logo-2: none, // text-color: none, // body ) = { title-content.update(title-body => title) subtitle-content.update(subtitle-body => subtitle) author-content.update(author-body => authors) affiliation-content.update(affiliation-body => affiliation) logo-1-content.update(logo-1-body => logo-1) logo-2-content.update(logo-2-body => logo-2) } #let poster-footer( footer-kwargs: none, body ) = { footer-content.update(footer-body => body) }
https://github.com/typst/packages
https://raw.githubusercontent.com/typst/packages/main/packages/preview/valkyrie/0.1.1/CHANGELOG.md
markdown
Apache License 2.0
# [unreleased](https://github.com/jamesxX/valakyrie/releases/tags/) ## Added ## Removed ## Changed --- # [v0.1.1](https://github.com/jamesxX/valakyrie/releases/tags/v0.1.1) ## Changed - fixed syntax error in Typst 0.11+ because of internal context type --- # [v0.1.0](https://github.com/jamesxX/valakyrie/releases/tags/v0.1.0) Initial Release
https://github.com/freefrancisco/typst-resume
https://raw.githubusercontent.com/freefrancisco/typst-resume/main/cv.typ
typst
#import "resumetemplate.typ": * #show: cv.with( author: "<NAME>", contacts: ( [#icon("email.svg") #link("mailto:<EMAIL>")], [#icon("github.svg") #link("https://github.com/freefrancisco")[freefrancisco]], [#icon("linkedin.svg") #link("https://www.linkedin.com/in/francisco/")[francisco]], [#icon("home.svg") Miami Beach, FL], ) ) = Brief Software Engineer with 15+ years of experience. Proficient in Haskell and Python, interested in Julia, but comfortable with most popular languages. Currently excited about LLMs and generative AI. Best at the intersection between math and software. Looking to work remotely on mathematical applications such as AI, NLP, Machine Learning, Data Science, Quantitative Finance, Bioinformatics, Statistics, etc. = Experience #exp( "Tontine Trust", "Haskell Software Engineer / Quantitative Analyst", "Miami Beach, FL", "Nov 2023 – Present", [ - Developed financial actuarial models for a tontine, a new retirement product. - Developed quantitative finance forecasts and simulations in Haskell for robo-actuary software. - Skills: Haskell, Functional Programming, Quantitative Finance, Forecasting, Simulations, Statistics, Nix, Julia, Python, Spreadsheets. ] ) #exp( "Independent Study", "AI Research Engineer", "Miami Beach, FL", "Feb 2023 – Nov 2023", [ - Generative AI with LLMs, chatGPT, GPT-4, the OpenAI API, and the DeepLearning.AI classes. - Natural Language Processing Coursera specialization. - Skills: Generative AI, LLMs, NLP, RAG, OpenAI, Vector Databases, Python, Julia, Haskell, Javascript. ] ) #exp( "NFTYEA", "CTO / Co-founder", "Miami Beach, FL", "Jan 2022 – Jan 2023", [ - Main developer and technical co-founder for web3 artificial intelligence startup to aggregate, categorize, and summarize information from public sources for NFT/crypto investors using GPT-3. - Managed team of front end Javascript developers. Took application from idea to production. 
- Skills: Haskell, Functional Programming, IHP, LLMs, OpenAI, Web3, Nix, Javascript, Typescript, Julia, Python, Software Architecture, Leadership, Engineering Management, Entrepreneurship. ] ) #exp( "Independent Study", "Crypto Engineer", "Miami Beach, FL", "Aug 2021 – Dec 2021", [ - Bitcoin, Stacks, Cardano, and other crypto technologies with Chaincode seminars on Bitcoin Protocol Development and Lightning Network, stacks protocol clarity language camp, and online resources. - Skills: Crypto, Bitcoin, Stacks, Cardano, Web3, Clarity, Haskell, Plutus, Javascript ] ) #exp( "Capital One", "Master AI Software Engineer", "San Francisco, CA", "Apr 2018 – Jun 2021", [ - Functional programming in Haskell. Wrote simulations/agent based models to generate synthetic data that matches real distributions. Evangelized Haskell internally, mentored other developers on the use of Haskell and functional programming techniques. - Machine learning. Developed models for synthetic data product, mentored coworkers, led internal machine learning reading group, led team in reinforcement learning competition. - Internal entrepreneurship. Part of team that took synthetic data product from idea to production. - Five patent applications around agent based models and synthetic data, main author in two of them. - Skills: Haskell, Functional Programming, Python, Simulations, Agent Based Models, Probabilistic Graphical Models, Synthetic Data, Machine Learning, Reinforcement Learning, Mentoring, Leadership. ] ) #exp( "Independent Study", "Deep Learning Engineer, Haskell Functional Programmer", "San Francisco, CA", "2017 – 2018", [ - Deep Learning with Andrew Ng's classes, Coursera certification, and book by Goodfellow et al. - Deep Reinforcement Learning with class by <NAME>, and book by Sutton and Barto. - Haskell with book by Allen and Moronuki. - Skills: Haskell, Deep Learning, Reinforcement Learning, Linear Algebra, Statistics, Python. 
] ) #exp( "CollabRx", "Software Architect", "San Francisco, CA", "2016 – 2017", [ - Designed and developed precision medicine knowledge graph database to recommend cancer treatments based on cancer cells genotype. - Skills: Elixir, Phoenix, Neo4j, Python, Ruby, Graph Databases, Knowledge Engineering, Biology. ] ) #exp( "Gumroad", "Machine Learning Engineer", "San Francisco, CA", "2015 – 2016", [ - Main developer and maintainer of machine learning application for risk scoring and fraud detection. - Skills: Machine Learning, Fraud Detection, Python, Ruby, Scikit-learn, Numpy, Pandas, Flask. ] ) #exp( "Liquid Labs (Now Gigster)", "CTO / Co-founder", "San Francisco, CA", "2013 – 2014", [ - Main developer for all ideas in fast idea to prototype cycle. Projects included contact management tool, webRTC video chat application, experts exchange, and online stock market game. ] ) #exp( "Tapjoy", "Senior Software Engineer / Data Scientist", "San Francisco, CA", "2012 – 2013", [ - Developed collaborative filtering model to recommend mobile apps, and an A/B testing framework using Octave, R, and Ruby. ] ) #exp( "Kabam", "Lead Backend Engineer", "San Francisco, CA", "2011 - 2012", [ - Developed production code for fast growing online game, Dragons of Atlantis, using Ruby on Rails. ] ) #exp( "Biemedia", "Senior Software Engineer", "Denver, CO", "2009 - 2010", [ - Developed web and command line applications for order management, data scraping and aggregation, process management, automated link clicking, search, and automated video generation in Ruby. ] ) #exp( "Mercurio Marketing LLC", "Owner / President / CEO", "Denver, CO", "2008 - 2010", [ - Software development and consulting services. - Management of modeling agency, production of rock in Spanish concerts, and nightclub events. 
] ) #exp( "Inspekt Security", "Principal Scientist", "Denver, CO", "2007 - 2008", [ - Developed statistical pattern recognition software to detect hacking attempts in computer systems from abnormal activity patterns using Ruby and R. ] ) #exp( "Collective Intellect", "Director Of Research / Cofounder", "Denver, CO", "2005 - 2006", [ - Developed NLP machine learning software to extract sentiment, detect topics, and identify influencers in online message boards using Ruby, Python, and R. ] ) #exp( "Dante Group / WebMethods", "Senior Software Engineer", "Denver, CO", "2002 - 2005", [ - Developed statistical anomaly detection, failure prediction, and real time OLAP software for analysis engine of business activity monitoring and business intelligence product in Java and Python. ] ) #exp( "Quark", "Product Manager", "Denver, CO", "1999 - 2000", [ - Developed ad and content personalization technology for publishers leading a team of three people. Authored a patent application on the personalization technology. ] ) #exp( "MicroStrategy", "Associate", "Falls Church, VA", "1997 - 1999", [ - Associate program for recent college graduates included a 6 week bootcamp where we studied SQL, Data Warehousing and Decision Support Systems in depth, and then we had a rotation through different areas of the company. I worked in marketing, product management, and web development. ] ) #exp( "First Quadrant", "Equity Research Assistant", "Pasadena, CA", "Summer 1995, 1996 - 1997", [ - Analyzed historical market data to create computational financial models. - Wrote genetic algorithms in C++ during summer research internship. ] ) #exp( "Japan Development Bank", "Summer Intern", "Tokyo, Japan", "Summer 1996", [ - Wrote and presented report on Internet commerce, business opportunities created by the Internet, uses of the Internet in traditional businesses, and impact of the Internet on the global economy. - Experienced Japanese culture, customs, language, and business practices. 
] ) = Education #exp( "California Institute Of Technology", "B.S. Double Major: (1) Engineering & Applied Science, (2) Economics", "Pasadena, California", "1992 – 1996", [ - Double major with emphasis in physics, applied math, computer science, and quantitative finance. - Coursework included classical, statistical, and quantum mechanics, electromagnetism, relativity, thermodynamics, physics research, chemistry, biology, computer science, circuit design, linear systems analysis, signal processing, vector and multivariate calculus, probability, statistics, linear and abstract algebra, differential equations, real and complex analysis, microeconomics, macroeconomics, econometrics, political science, cooperative and non-cooperative game theory, corporate finance, accounting, option pricing theory, stochastic calculus, investment analysis, and laboratory work in electrical, mechanical, computer, and software engineering. - Research work included developing molecular dynamics simulations in C for the physics department, and genetic algorithms in C++ for internship through the economics department. ] ) #exp( "ITESM", "M.S. Computer Science, Intelligent Systems", "Monterrey, Mexico", "2000 – 2002", [ - Computer science program focusing on practical applications of artificial intelligence, machine learning, robotics, and automation in general. - Coursework included Bayesian networks, neural networks, genetic algorithms, simulated annealing, reinforcement learning, search and heuristics, logical inference systems, fuzzy logic, autonomous agents, decision trees, multi-agent systems, robotics, knowledge engineering, expert systems, rule-based systems, learning classifier systems, data mining, algorithm analysis, and statistics. 
- Published paper: “Biometrics and Data Mining: Comparison of Data Mining-Based Keystroke Dynamics Methods for Identity Verification”, Springer-Verlag Heidelberg, ISSN: 0302-9743, Volume 2313 / 2002, Book Title: MICAI 2002: Advances in Artificial Intelligence : Second Mexican International Conference on Artificial Intelligence Merida, Yucatan, Mexico, April 22-26, 2002. Proceedings ] ) #exp( "University Of Colorado Denver", "M.S. Applied Math Classes, Computational Biology Certification", "Denver, CO", "2004 – 2010", [ - Cross disciplinary program in applied math, computational biology, and statistics. - Coursework included Bayesian statistics, graph theory, linear algebra, stochastic processes, information theory, computational biology, bioinformatics, and biology. - Developed class projects with R, Python, Octave, and Matlab. ] ) #exp( "Coursera, DeepLearning.AI", "Deep Learning Specialization, Issued Apr 2018", "San Francisco, CA", "2017 – 2018", [ - Neural Networks and Deep Learning. Issued Sep 2017 - Improving Deep Neural Nets: Hyperparameter tuning, Regularization and Optimization, Sep 2017 - Structuring Machine Learning Projects. Issued Oct 2017 - Convolutional Neural Networks. Issued Feb 2018 - Sequence Models. Issued Apr 2018 ] ) #exp( "Coursera, DeepLearning.AI", "Natural Language Processing Specialization, In Progress", "Miami Beach, FL", "2023 – Present", [ - Generative AI with Large Language Models. In progress - Natural Language Processing with Classification and Vector Spaces. Issued Apr 2023 - Natural Language Processing with Probabilistic Models. Issued Jul 2023 - Natural Language Processing with Sequence Models. In Progress - Natural Language Processing With Attention Models. In progress ] )
https://github.com/mem-courses/calculus
https://raw.githubusercontent.com/mem-courses/calculus/main/homework-1/calculus-homework8.typ
typst
#import "../template.typ": * #show: project.with( title: "Calculus Homework #8", authors: (( name: "<NAME> (#47)", email: "<EMAIL>", phone: "3230104585" ),), date: "November 23, 2023", ) = P129 习题3-1 3 #prob[设 $f(x)=e^x$,求满足 $f(x+Dx)-f(x)=f'(x+theta Dx)Dx sp (0<theta<1)$ 的 $theta$ 的值.] 代入 $f(x) = e^x,sp f'(x) = e^x$ 得: $ e^(x + Dx) - e^x = e^(x + theta Dx) Dx => e^(theta Dx) = (e^Dx - 1)/(Dx) => theta = (ln(e^Dx - 1) - ln(Dx))/Dx $ 由拉格朗日中值定理,必定存在 $theta in (0,1)$ 满足条件,故此处解得的 $theta in (0,1)$. = P129 习题3-1 4 #prob[ 证明:若 $x>=0$,则 (1) $display(sqrt(x+1)-sqrt(x)=1/(2sqrt(x+theta(x))))$,其中 $display(1/4<=theta(x)<=1/2)$; (2) $display(lim_(x->0^+) theta(x)=1/4\,sp lim_(x->+oo)theta(x)=1/2)$. ] (1) 由已知: $ &2sqrt(x+theta(x))=1/(sqrt(x+1)-sqrt(x))=sqrt(x+1)+sqrt(x)\ => &4(x+theta(x)) = 2x+1+2sqrt(x(x+1))\ => &theta(x)=1/4+1/2(sqrt(x(x+1))-x) $ 其中 $theta(x)>=display(1/4)$ 显然成立,下证明 $theta(x)<=display(1/2)$: $ theta(x)<=1/2 arrow.l.double sqrt(x(x+1))-x<=1/2 arrow.l.double sqrt(x(x+1))<=1/2+x arrow.l.double x^2+x<=x^2+x+1/4 $ (2) 考虑 $g(x)=sqrt(x(x+1))-x$,有 $g'(x) = display((2x+1)/(2sqrt(x(x+1))) - 1)$,下证明 $g'(x)>0$: $ g'(x) > 0 arrow.l.double 2x+1>2sqrt(x(x+1)) arrow.l.double 4x^2+4x+1>4x^2+4x $ 所以 $g(x)$ 在 $[0,+oo)$ 上单调增且是连续函数.考虑: $ lim_(x->0^+) theta(x) &= theta(0) = 1/4\ lim_(x->+oo) theta(x) &= 1/4 + 1/2 lim_(x->+oo) x(sqrt(1+1/x)-1) = 1/4 + 1/2 lim_(x->0) (sqrt(1+x)-1)/x = 1/4 + 1/2 dot 1/2 = 1/2 $ = P129 习题3-1 5(2) #prob[利用中值定理证明不等式:$display(p y^(p-1)(x-y) < x^p - y^p < p x^(p-1) (x-y))$.] 不妨设 $x<y$.由于 $f(x) = x^p$ 在 $[x,y]$ 上满足拉格朗日条件,所以 $exists xi in (x,y) st x^p - y^p = (x-y) p xi^(p-1)$.注意到 $x-y<0$,故 $ x<xi<y =>& x^(p-1) < xi^(p-1) < y^(p-1)\ =>& (x-y)p y^(p-1) < (x-y)p xi^(p-1) < (x-y)p x^(p-1)\ =>& p y^(p-1)(x-y) < x^p - y^p < p x^(p-1)(x-y)\ $ = P129 习题3-1 5(4) #prob[利用中值定理证明不等式:$display((a-b)/a < ln a/b < (a-b)/b)$,其中 $0<b<a$.] 
由于 $f(x) = ln x$ 在 $[b,a]$ 上满足拉格朗日条件,所以 $exists xi in (b,a) st ln a - ln b = (a - b)/xi$.其中 $ b<xi<a => 1/a < 1/xi < 1/b => (a-b)/a < (a-b)/xi < (a-b)/b => (a-b)/a < ln(a/b) < (a-b)/b $ 即原命题得证. = P129 习题3-1 7 #prob[ 设函数 $f(x)$ 在闭区间 $[x_1,x_2]$ 上可微,并且 $x_1 x_2>0$,证明: $ 1/(x_1-x_2)[x_1 f(x_2) - x_2 f(x_1)] = f(xi)-xi f'(xi) $ 其中 $x_1<xi<x_2$. ] 令左式的分子分母同除 $x_1x_2 >0$ 可以得到: $ (f(x_2)/x_2 - f(x_1)/x_1)/(1/x_2 - 1/x_1) = f(xi)-xi f'(xi) $ 取 $g(x) = display(x f(1/x))$,那么 $ g'(x) = f(1/x) + x f'(1/x) (-1/x^2) = f(1/x) - 1/x f'(1/x) $ 根据拉格朗日中值定理,必存在 $display(1/x_2 < mu < 1/x_1) st g'(mu) = "左式"$.令 $xi=display(1/mu)$,可以得到: $ "左式" = g'(1/xi) = f(xi) - xi f'(xi) $ 且 $x_1<xi<x_2$,故原命题得证. = P129 习题3-1 10 #prob[证明:方程 $x^3-3x+b=0$($b$ 为常数)在区间 $(-1,1)$ 内至多只有一个实根.] 设 $g(x) = x^3 - 3x$,则 $g'(x) = 3x^2 - 3 = 3(x-1)(x+1)$.所以 $forall x in (-1,1),sp g'(x) < 0$,故 $g(x)$ 在 $(-1,1)$ 上单调递减.所以 $g(x)=-b$ 在 $(-1,1)$ 上至多只有一个解,即原方程在 $(-1,1)$ 上至多只有一个实根. = P130 习题3-1 18 #prob[ 设函数 $f(x)$ 和 $g(x)$ 在 $[a,b]$ 上存在二阶导数,并且 $g''(x)!=0$,$f(a)=f(b)=g(a)$=$g(b)=0$,试证: (1) 在开区间 $(a,b)$ 内,$g(x)!=0$; (2) 在开区间 $(a,b)$ 内,至少存在一点 $xi$,使 $display(f(xi)/g(xi) = (f''(xi))/(g''(xi)))$. ] (1) $g''$ 作为 $g'$ 的导函数,由达布定理具有介值性,又 $g''(x)!=0$,所以要么 $forall x in (a,b),sp g''(x)>0$;要么 $forall x in (a,b),sp g''(x)<0$. 对于第一种情况,由于 $g(a)=g(b)=0$,由罗尔定理存在 $xi in (a,b) st g'(xi)=0$,又 $g'$ 单调增,那么 $g(x)$ 在 $(a,xi)$ 上单调递减,$g(x)$ 在 $(xi,b)$ 上单调递增.所以不存在 $x in (a,b) st g(x)=0$,即 $g(x)!=0$.对于第二种情况,同理可证. (2) 由 (1) 可知,$forall x in (a,b),sp g(x) != 0$ 且 $g''(x) != 0$.令 $ h(x) &= f(x) g'(x) - f'(x) g(x)\ => h'(x) &= f'(x) g'(x) + f(x) g''(x) - f'(x) g'(x) - f''(x) g(x) = f(x) g''(x) - g(x) f''(x)\ $ 其中,由于 $f(a)=f(b)=g(a)=g(b)=0$,所以 $h(a)=h(b)=0$,且 $h(x)$ 在 $[a,b]$ 上满足罗尔定理条件,故至少存在一个 $xi in (a,b) st$ $ h'(xi) = 0 <=> f(xi) g''(xi) - g(xi) f''(xi) = 0 <=> (f(xi))/(g(xi)) = (f''(xi))/(g''(xi)) $ = P130 习题3-1 22 #prob[ 设 $f(x)$ 在 $[a,b]$ 上连续,在 $(a,b)$ 内可导,且 $f(a) = f(b) = 0$,则至少存在一点 $xi in (a,b)$,使 $k f(xi) = f'(xi)$($k$ 为给定实数). 
] 令 $g(x) = e^(-k x)f(x)$,则 $g(a) = g(b) = 0$,且 $g(x)$ 也满足在 $[a,b]$ 上连续,在 $(a,b)$ 内可导.故存在 $xi in (a,b) st g'(xi) = 0$,其中 $ g'(xi) = -k e^(-k xi) f(xi) + e^(-k xi) f'(xi) = 0 <=> f'(xi) - k f(xi) = 0 <=> k f(xi) = f'(xi) $ 故原命题得证. = P138 习题3-2 1(2) #prob[求极限:$display(lim_(x->0) (x cot x-1)/(x^2))$.] $ "原式" &= lim_(x->0) (x cos x - sin x)/(x^2 sin x) = lim_(x->0) (-x sin x)/(2x sin x + x^2 cos x)\ &= lim_(x->0) (-sin x)/(2 sin x + x cos x) = lim_(x->0) (-cos x)/(3 cos x - x sin x) = -1/3 $ = P138 习题3-2 1(4) #prob[求极限:$display(lim_(x->0) (a^x - a^(sin x))/(x^3)) sp(a>0,a!=1)$.] $ "原式" &= lim_(x->0) (a^x ln a - a^(sin x) ln a cos x)/(3 x^2)\ &= lim_(x->0) (a^x ln^2 a - a^(sin x) ln^2 a cos^2 x + a^(sin x) ln a sin x)/(6 x)\ &= lim_(x->0) (a^x ln^3 a - a^(sin x) ln^3 a cos^3 x + 3 a^(sin x) ln^2 a sin x cos x + a^(sin x) ln a cos x)/6\ &= (ln^3 a - ln^3 a + 0 + ln a)/6 = (ln a)/6 $ = P139 习题3-2 1(6) #prob[求极限:$display(lim_(x->0) (cos(sin x) - cos x)/(x^4))$.] $ "原式" &= lim_(x->0) ((1-1/2 sin^2 x + 1/24 sin^4 x + o(sin^4 x)) - (1-1/2 x^2 + 1/24 x^4 + o(x^4)))/(x^4)\ &= lim_(x->0) (1/2 (x^2 - sin^2 x) + 1/24 (sin^4 x - x^4))/(x^4)\ &= lim_(x->0) (1/2 (x^2 - (x - 1/6 x^3 + o(x^3))^2) + 1/24 ((x + o(x))^4 - x^4))/(x^4)\ &= lim_(x->0) (1/2 (x^2 - x^2 + 2 dot (1/6 x^4) + o(x^4)))/(x^4) = 1/6 $ = P139 习题3-2 2(3) #prob[求极限:$display(lim_(x->pi/2) (tan x)/(tan 3x))$.] $ "原式" &= lim_(x->pi/2) (tan x)/(tan 3x) = lim_(x->pi/2) (sin x cos 3x)/(cos x sin 3x) = - lim_(x->pi/2) (cos 3x)/(cos x)\ &= - lim_(x->pi/2) (sin (pi/2 - 3x))/(sin (pi/2 - x)) = lim_(x->0) (sin 3x)/(sin x) = lim_(x->0) (3x)/x = 3 $ = P139 习题3-2 2(4) #prob[求极限:$display(lim_(x->0^+) (ln x)/(cot x))$.] $ "原式" = lim_(x->0^+) (1/x)/(-1/(sin^2 x)) = lim_(x->0^+) (-(sin^2 x)/x) = 0 $ = P139 习题3-2 2(5) #prob[求极限:$display(lim_(x->0^+) (ln sin m x)/(ln sin x))$.] 
$ "原式" = lim_(x->0^+) ((m cos m x)/(sin m x))/((cos x)/(sin x)) = lim_(x->0^+) (m cos m x sin x)/(cos x sin m x) = lim_(x->0^+) (m sin x)/(sin m x) = 1 $ = P139 习题3-2 3(1) #prob[求极限:$display(lim_(x->1) (1/(ln x) - 1/(x-1)))$.] $ "原式" = lim_(x->1) ((x-1) - ln x)/(ln x(x-1)) = lim_(x->1) (1-1/x)/(1/x (x-1) + ln x) = lim_(x->1) (x-1)/((x-1) + x ln x) = lim_(x->1) 1/(ln x + 2) = 1/2 $ = P139 习题3-2 3(3) #prob[求极限:$display(lim_(x->0) [(1)/(ln(x+sqrt(1+x^2)))-1/(ln(1+x))])$.] 利用 $ln(x+sqrt(1+x^2)) = x - 1/6 x^3 + o(x^3)$ 与 $ln(1+x) = x - 1/2 x^2 + o(x^2)$: $ "原式" &= lim_(x->0) (ln(1+x) - ln(x+sqrt(1+x^2)))/(ln(x+sqrt(1+x^2)) ln(1+x))\ &= lim_(x->0) ((x - 1/2 x^2 + o(x^2)) - (x + o(x^2)))/(x^2 + o(x^2)) = -1/2 $ = P139 习题3-2 3(5) #prob[求极限:$display(lim_(x->+oo) x^2 e^(-0.1x))$.] $ "原式" = lim_(x->+oo) (x^2)/(e^(0.1x)) = lim_(x->+oo) (2x)/(0.1 e^(0.1x)) = lim_(x->+oo) (2)/(0.01 e^(0.1x)) = 0 $ = P139 习题3-2 3(7) #prob[求极限:$display(lim_(x->1^-) ln x ln (1-x))$.] $ "原式" &= lim_(x->1^-) (ln(1-x))/(1/(ln x)) = lim_(x->1^-) (-1/(1-x))/(-1/(x ln^2 x)) = lim_(x->1^-) (x ln^2 x)/(1-x) = lim_(x->1^-) (ln^2 x + 2 ln x)/(-1) = 0 $ = P139 习题3-2 4(2) #prob[求极限:$display(lim_(x->pi/4) (tan x)^(tan 2x))$.] $ "原式" &= exp lim_(x->pi/4) (ln tan x)/(1/(tan 2x)) = exp lim_(x->pi/4) (1/(tan x cos^2 x))/((-2)/(tan^2 2x cos^2 2x)) = exp lim_(x->pi/4) (tan^2 2x cos^2 2x)/(-2 tan x cos^2 x)\ &= exp (- (lim_(x->pi/2) tan x cos x)^2) = exp (- (lim_(x->0) tan (pi/2-x) sin x)^2)\ &= exp (- (lim_(x->0) (tan (pi/2-x))/(1/x))^2) = exp (- (lim_(x->0) (-1/(cos^2 (pi/2-x)))/(-1/(x^2)))^2)\ &= exp (- (lim_(x->0) (x^2)/(sin^2 x))^2) = 1/e $ = P139 习题3-2 4(4) #prob[求极限:$display(lim_(x->+oo) (2/pi arctan x)^x)$.] 利用 $arctan x = pi/2 - arctan 1/x$ 与 $ln(1+u) = u + o(u)$: $ "原式" &= exp lim_(x->+oo) x ln(1 - 2/pi arctan 1/x) = exp lim_(x->+oo) x (- 2/pi arctan 1/x)\ &= exp (- 2/pi lim_(x->+oo) (arctan 1/x)/(1/x)) = e^(-2/pi) $ = P139 习题3-2 4(5) #prob[求极限:$display(lim_(x->0) [(1+x)^(1/x)/e]^(1/x))$.] $ "原式" &= exp lim_(x->0) 1/x (1/x ln(1+x) - 1) = exp lim_(x->0) (ln(1+x) - x)/(x^2) = exp lim_(x->0) (1/(1+x) - 1)/(2x)\ &= exp lim_(x->0) (-x)/(2x(1+x)) = exp lim_(x->0) (-1)/(2(1+x)) = 1/sqrt(e) $ = P139 习题3-2 6(1) #prob[求极限:$display(lim_(n->oo) tan^n (pi/4 + 2/n))$.] 
$ "原式" &= exp lim_(n->oo) n ln(tan (pi/4 + 2/n)) = exp lim_(x->0) ln(tan (pi/4 + 2x))/x\ &= exp lim_(x->0) (2/(cos^2 (pi/4 + 2x)))/(tan (pi/4 + 2x)) = exp (2/(1/sqrt(2))^2)/1 = e^4 $ = P139 习题3-2 6(4) #prob[求极限:$display(lim_(x->0) [a/x - (1/(x^2)-a^2) ln(1+a x)])$.] $ "原式" &= lim_(x->0) (a x-(1-a^2 x^2)ln(1+a x))/(x^2) = a^2 lim_(x->0) (x-(1-x^2)ln(1+x))/(x^2)\ &= a^2 lim_(x->0) (1+2 x ln(1+x)-(1-x^2)/(1+x))/(2x) = a^2 lim_(x->0) (2 x ln(1+x)+x)/(2x)\ &= a^2 lim_(x->0) (2ln(1+x)+1)/2 = 1/2 a^2 $
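上面几题的结果可以用 sympy 做一个符号层面的校验(以下片段为补充示意,并非原作业内容;假定环境中已安装 sympy,变量名均为此处自拟):

```python
# 用 sympy 对上面几个极限的计算结果做符号校验(非原作业内容)
import sympy as sp

x = sp.symbols('x', positive=True)

# P138 习题3-2 1(2): lim_{x->0} (x cot x - 1)/x^2 = -1/3
l1 = sp.limit((x * sp.cot(x) - 1) / x**2, x, 0)

# P139 习题3-2 3(1): lim_{x->1} (1/ln x - 1/(x-1)) = 1/2
l2 = sp.limit(1 / sp.log(x) - 1 / (x - 1), x, 1)

# P139 习题3-2 3(5): lim_{x->+oo} x^2 e^(-0.1x) = 0
l3 = sp.limit(x**2 * sp.exp(-x / 10), x, sp.oo)

print(l1, l2, l3)  # -1/3 1/2 0
```

三个结果与手算一致,可作为对洛必达与泰勒展开步骤的交叉检查。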
https://github.com/Tiggax/zakljucna_naloga
https://raw.githubusercontent.com/Tiggax/zakljucna_naloga/main/src/sec/1uvod.typ
typst
#set heading(offset: 0) #import "../additional.typ": todo = Introduction #include "uvod/1. intro.typ" == Purpose #include "/src/sec/2namen.typ" = Theory == Bioreactors #include "uvod/2. bioreactors.typ" == Numerical Methods #include "uvod/3. num metode.typ" == Bioreactor Modeling #include "uvod/4. bioreactor modeling.typ"
https://github.com/typst/packages
https://raw.githubusercontent.com/typst/packages/main/packages/preview/fruitify/0.1.1/README.md
markdown
Apache License 2.0
# Fruitify Make your equations more fruity! This package automatically replaces any single letters in equations with fruit emoji. Refer to [`example-documentation.pdf`](https://codeberg.org/T0mstone/typst-fruitify/src/tag/0.1.1/example-documentation.pdf) for more detail. ## Emoji support Until 0.12, typst did not have good emoji support for PDF. This meant that even though this package worked as intended, the output would look very wrong when exporting to PDF. Therefore, it is recommended to stick with PNG export for those older typst versions.
https://github.com/hrmnjt/resume
https://raw.githubusercontent.com/hrmnjt/resume/main/resume.typ
typst
Apache License 2.0
// Copyright 2020-2023 <NAME> // This work is licensed under a Creative Commons // Attribution-NonCommercial-ShareAlike 4.0 International License. // Terms - https://creativecommons.org/licenses/by-nc-sa/4.0/legalcode // GLOBAL STYLING // using A4 page size and setting a 1.5cm square margin #set page( paper: "a4", margin: (x: 1.5cm, y: 1.5cm), ) // all links are underlined #show link: underline // macro to create an underline below subheadings // TODO: Need to extend the macro so that subheadings can default with an underline #let subheadingline() = { v(-5pt); line(length: 100%); v(-5pt) } #let sectionseparator() = { v(-5pt); line(length: 100%); v(-5pt) } // HEADER #grid( columns: (1fr, 1fr), align(left)[ = <NAME> #link("https://www.hrmnjt.dev") ], align(right)[ Email: #link("mailto:<EMAIL>") \ Phone: #link("tel:+971503937005") \ Github: #link("https://github.com/hrmnjt")[hrmnjt] - Linkedin: #link("https://www.linkedin.com/in/hrmnjt")[hrmnjt] ] ) #align(center)[ #set par(justify: false) *About* - Engineering Manager with a knack for building and scaling robust systems and data products. I'm passionate about open source software, developer experience and systems engineering. Consistently promoted and selected among top 1% performers, resulting in continuous development and role expansion. 
] == Experience #subheadingline() #grid( columns: (1fr, 1fr), align(left)[ *<NAME>uttaim* \ *#emph("Engineering Manager (Data)")* ], align(right)[ Dubai, UAE \ Sep 2023 - Present ] ) - Created an identity graph for MAF's diverse 65M customer records collected across 8 varied businesses - Conceptualized and built classification of customers based on their interactions with MAF businesses, allowing marketing and customer experience teams to win their "hearts, minds and wallets" - Got buy-in from leadership to start a dedicated team which aims to perfect quality, reliability and observability across data, data products and data infrastructure - *Tech Stack*: Spark, Splink, S3, Vertica, DuckDB, Airflow, Datahub, Kafka, Argo Workflows, Kubernetes, AWS #grid( columns: (1fr, 1fr), align(left)[ *#emph("Engineering Manager (Product)")* ], align(right)[ Sep 2021 - Sep 2023 ] ) - Led build for central business intelligence platform (internal name: Connect) for MAF which provides insights across market trends, competitive benchmarking and internal business performance - Led build and integration for internal MarTech and AdTech automation product aiming to improve, consolidate and standardize marketing and advertising workflow across MAF's retail ad network - *Tech Stack*: Spark, Cube.dev, S3, Vertica, Postgres, Airflow, NodeJS, React, ArgoCD, Kubernetes, AWS // - Leading a talented team of 10 engineers to self-organize, own and implement an active product roadmap. Ran a Spotify-inspired agile model to arrange team in tribe and squads - 2 squads building a solutions and 3rd squad implementing a commerical bought software // - Become the bridge between BD/Sales and PDT team to translate requirements into solutions; created a culture of writing technical design docs (called ADRs) and drive implementation within reasonable milestones // - Balance product delivery with sustaining engineering to ensure code can be maintained over time and iterations. 
Invested `~`30% average bandwidth on reducing technical debt and improving scalability // - Be a proponent of open source solutions and convince team to employ learning and practices on observability, orchestration, documentation and security to promote organization culture. Conducted 20+ architecture reviews and actively participated in `>`70 hiring discussions over last year #grid( columns: (1fr, 1fr), align(left)[ *#emph("Lead Data Engineer")* ], align(right)[ Jun 2019 - Sep 2021 ] ) - Started as a lead engineer to handle central data infrastructure for MAF i.e. integration with 30+ source/sink, data observability and managing databases, data-lake, reporting and event-bus infrastructure // only MAF engineer with 4 external vendor DE // batch data pipelines, data modelling and warehousing, ACL for postgres and vertica, data observability, golden customer record, infrastructure migration and optimization, data operations - Led and scaled a team from 5 to 18 engineers over 2 years to create the Data Engineering and Operations team backing up the data management and operations required for flagship programs for loyalty, marketing and operations. // Share program, BAU data operations for C4, Vox, Magic Planet, Ski, Shopping Malls // Work with MarTech tools - Braze integration, Oneview integration, OneTrust integration - Led build for an internal product for marketing platform which would crunch a customer base of 30M with 600 odd attributes to craft a campaign-specific audience with sub-second latency. - *Tech Stack*: Spark, S3, Vertica, Postgres, HDFS, Airflow, Jenkins, Docker Swarm, AWS // % Notes to explain this role // % - Joined MAF as a Senior Data Engineer and at the time MAF Holding was starting to centralize the data infrastructure they had. To understand this better, I should mention that MAF is an AED 18 Bn company operating in 18 countries. It has assets worth AED 63 Bn across Properties, Retail, Leisure & Entertainment and Lifestyle business. 
Because each OpCo was responsible for their own P&L, each business had different maturity with data and analytics. Holding's role (where I was hired) was to act like a chief architect and improve this maturity for all businesses. // % - Started with removing legacy infrastructure. Migrated from HDFS to S3; as nothing else from Hadoop stack was being used. Improved Vertica datawarehouse by fine-tuning netperf, ioperf, sysperf and data within: projections, aggregations. Migrated legacy ETL from Informatica to Airflow. Improved data observability i.e. monitoring and notifications for data. Maintained and built knowledge about Customer Master Data called GCR. // % - Created a framework with Airflow, Spark and Confluent to orchestrate data pipelines for ingestion, warehousing and analytics. Created loyalty data warehouse i.e. SHARE. Created an ACL for managing access to Vertica. Created integrations for MarTech (Braze Integration, Sendgrid integration, SAP c4c integration, etc.). Created an audience segmentation tool which can compete with Salesforce Marketing cloud but was much faster, cleaner and purpose built for MAF. // % - XSight had started as a business inside MAF Future Solution and we had conceptualized a couple of tools - Dataverse and XConnect. // % TODO: Explain XConnect. #sectionseparator() #grid( columns: (1fr, 1fr), align(left)[ *Mu Sigma Inc.* \ *#emph("Decision Scientist, Apprentice Leader")* ], align(right)[ Bangalore, India \ May 2015 - Jun 2019 ] ) - For a leading US airline, optimized a recommender system to create a significant 30% lift in yearly revenue. Worked on reverse engineering performance of real-time recommendations (based on Oracle RTD) used for the airline's check-in process - For a leading UK retailer, built data and platform infrastructure for retail pricing and inventory management. 
- For MAF, built backend and data pipelines for an internal employee application serving 46K employees with 150+ MAUs - For MAF, built a configuration-based framework to run data integration and ETL tasks. Enabled 140+ data pipelines from 15 varied data sources to enable BI and analytics use cases == Education #subheadingline() #grid( columns: (1fr, 1fr), align(left)[ *PES Institute of Technology* \ #emph("Bachelor of Engineering - Electrical and Electronics") ], align(right)[ Bangalore, India \ Jun 2011 - Jun 2015 ] ) == Skills #subheadingline() - *Domains*: Product Engineering, Data Engineering, Machine Learning, Business Intelligence, and DevSecOps - *Languages*: Proficient - Python, Shell, SQL; Familiar - Scala, Golang, JavaScript - *Data Stack*: Postgres, SQLite, Vertica, ElasticSearch, DuckDB, Spark, Confluent Kafka, Cube.dev, Trino, Airflow, Superset // About me // My full name is <NAME> but almost everyone calls me Harman. On web, you can find me with the handle hrmnjt i.e. my first name without any vowels. I'm originally from Jamshedpur (India) and I moved to Bangalore (India) for college and started my first job in Bangalore. I got a chance to move to Dubai in 2017 for work and post that I've been in Dubai. I consider myself a generalist and have had a diverse experience but my forte is data engineering. When I'm not working I'm scrolling twitter and hackernews, reading books; trying out new tech; going on long walks alone or with my wife and dog. // About work stuff // In college I was studying electrical and electronics where I was most fascinated by control systems. I started my career as an intern in a startup which was building a CRM solution for motor industries and it exposed me to the world of data science and software engineering. I could draw parallels between control systems and how software systems are created but was more interested in learning how to work with data. 
// After the internship and completing college, I joined a US-based consultancy which specialized in data analytics and data science called Mu Sigma. I got a lot of exposure to work on big data systems with United Airlines, with Tesco and with Majid Al Futtaim. This gave me a chance to learn a lot of data engineering, devops culture, infrastructure, automation, etc. // I started working with <NAME> as a consultant and later got an offer to join the team directly. For almost 4 years I've been working with the core data engineering team in MAF. 2 years back I was promoted to the Engineering Manager role and have been working on internal products primarily related to the digital marketing and advertisement space. // Explain core data engineering // Started with upgrading legacy infrastructure. Migrated from HDFS to S3; as nothing else from Hadoop stack was being used. // Improved Vertica performance - 2 ways, infra and data model // infra - fine-tuning netperf, ioperf, sysperf, changing node types, rebalancing, dba activities // data model - projections, aggregations, incremental data load logic. // Migrated legacy ETL from Informatica to Airflow // Improved data observability i.e. monitoring and notifications for data. Maintained and built knowledge about Customer Master Data called GCR. // Explain DLA // Internal tool to create audiences and activate them. Audiences are basically customer segments who satisfy a condition, such as age-range, gender, last_active_30days, did_trx_in_mall, avg_spend etc. 
// MAF active customer base = 15M across 16 countries (largest in UAE and KSA around 9M) // Around 200 attributes for each customer // 9 Business Units each with 3-4 marketing managers; no synergy between marketing efforts // consent, duplicate emails, incorrect targetting, no personalization // Explain Connect // It solves for many problems that MAF has - Semantic layer for business logic, repository for dashboards and reports, custom dashboards, audience creation activation, running market research (close loop), running ad campaigns (close loop) // Dashboard shows problem with data say sales is decreasing // Marketeer can run a market research i.e. survey, see results in a dashboard and understand product needs brand awareness // Marketeer can run an ad campaign for increasing brand awareness; see results in a dashboard for campaign effectiveness and understand change in sales.
https://github.com/maxdinech/typst-recipe
https://raw.githubusercontent.com/maxdinech/typst-recipe/main/README.md
markdown
# `typst-recipe` A template for cooking recipes. ## Example ![](result.png)
https://github.com/yhtq/Notes
https://raw.githubusercontent.com/yhtq/Notes/main/抽象代数/章节/群论.typ
typst
#import "../../template.typ": * #let chapterGroup = [ = 前言 - 抽象代数或名近世代数,一般以 Galois 理论为开端,在此之前的古典代数学着重于研究代数方程。而现代代数学逐渐转向研究各种代数结构。 - 本课程将学习三个基本代数结构:群、环、域,所谓代数结构主要研究的是代数结构的运算性质 - 所谓代数结构,即为给定一个抽象的非空集合以及一些满足一定规则(称为公理)的运算 - 代数上一些写序号的习惯:下指标表示随指标递增,上指标表示随指标递减 == 集合 ...... == 运算/映射 - #box[往往将集合上的二元映射(也即 $R times R -> R $)称为集合上的运算 \ (注:线性空间中数乘一般不能称为此种意义下的运算)\ ......\ 通常习惯于用乘法代替二元运算,也即记作 $a dot b 或 a b$\ 交换律:$forall a, b in A , a b = b a$。若运算没有交换性但两个元素可交换,则称两元素可交换\ 结合律:$forall a, b, c in A , (a b) c= a (b c)$,此时前式可记为 $a b c$ ] - #box[定理:若集合 $A$ 上的运算有结合律,则有广义结合律,即任意有限个元素的任意组合结果都相等\ 归纳证明即可] - #box[单位元:$exists e in A , forall a in A , e a = a e = a$\ 若单位元存在,则必然唯一] - #box[逆元:$exists b, a b = b a = e$\ 设运算满足结合律,则逆元必然唯一: 设 $a$ 有两个逆元 $b, c$,则: $ b (a c) = (b a) c = e c = c\ b (a c) = b e = b\ => b = c $ 一般而言,若逆元唯一,则记其为 $Inv(a)$\ 若 $a$ 可逆,则 $Inv(a)$ 显然也可逆\ 若 $a$ 有左逆或右逆且有结合律,则其左逆或右逆即为$Inv(a)$ 若 $a, b$均可逆且有结合律,则 $Inv(a b) = Inv(b) Inv(a)$\ ] - #box[设有结合律,可以自然的定义 $a$ 的方幂 $a^n$,其中 $n in NN$\ 若有单位元,则自然定义 $a^0 = e$\ 若 $a$ 可逆,则自然定义 $a^(-n) = Inv(a^n)$\ 对任意有定义的方幂,运算性质: $ a^n a^m = a^(n+m)\ (a^n)^m = a^(n m) $ 均成立 ] - 有时也将运算记为加法,此时逆元记为 $-a$,方幂记为数乘 == 等价关系和偏序关系 #definition[等价关系][ $A times A$ 的任一子集称为 $A$ 上的二元关系。称二元关系 $R$ 为等价关系,如果: - 自反性:$forall a in A , a R a$ - 对称性:$forall a, b in A , a R b => b R a$ - 传递性:$forall a, b, c in A , a R b and b R c => a R c$\ 此时若 $a R b$ ,也称 $a$ 和 $b$ 等价,记作 $a tilde b$ ] #definition[等价类][ 设 $R$ 为 $A$ 上的等价关系,对任意 $a in A$,称 $a$ 的等价类为 $a$ 在 $R$ 下的等价类,记作 $overline(a)$,定义为: $overline(a) = {b in A | a R b}$\ 也即 $a$ 在 $R$ 下的等价类为 $a$ 的所有等价元素的集合\ 所有等价类构成的集合记为 $A \/ R$ ] #theorem[][ 设 $R$ 是 $A$ 上的等价关系,则有: + $b in overline(a) => overline(b) = overline(a)$ + $overline(a)$ 与 $overline(b)$ 或者相等,或者不相交 + $A = union_x overline(x)$ 换言之,$A$ 可以划分为若干个不相交的等价类 ] #example[][ 设 $f: A -> B$, 则可诱导出 $A$ 上的等价关系 $R$,定义为: $a R b <=> f(a) = f(b)$\ 进一步,$f$ 可以诱导出 $A \/ R$ 到 $B$ 的一个单射 $overline(f): overline(f)(overline(a)) = f(a)$\ 再取 $ pi : A &-> A \/R \ a &-> overline(a) \ $ 为$A -> A\/R$ 
的自然映射或者典范映射(显然这是一个满射),则 $f = overline(f) pi$\ 也即任何映射都可以分解为一个满射和一个单射的复合 ] 设 $A$ 上定义了乘法运算,能否在商集上自然诱导出运算? 显然需要满足条件: $ x tilde x' and y tilde y' => x y tilde x' y' $ #definition[偏序集][ 称二元关系 $R$ 为偏序关系,如果: - 自反性:$forall a in A , a R a$ - 反对称性:$forall a, b in A , a R b and b R a => a = b$ - 传递性:$forall a, b, c in A , a R b and b R c => a R c$\ 将 $a R b$ 记作 $a <= b$ 或 $b >= a$\ 若 $a <= b$ 或 $b <= a$,则称 $a$ 和 $b$ 可比\ 若所有元素均可比,则称 $R$ 为全序关系\ 若 $forall a in A, b <= a => a = b$,则称 $b$ 为极大元\ 若 $forall a in B subset A :space a <= b$,则称 $b$ 为上界 ] #theorem[佐恩引理][ 设 $A$ 为非空偏序集,若 $A$ 中任何全序子集都有上界,则 $A$ 中必有极大元(没有其他元素比它更大) ] 佐恩引理等价于选择公理 = 基本代数结构 == 群 #definition[半群/幺半群][ 设 $A$ 为非空集合,$dot$ 为 $A$ 上的二元运算,若 $dot$ 满足结合律,则称 $A$ 为半群\ 若 $dot$ 还有单位元 $e$,则称 $A$ 为幺半群\ ] #definition[群][ 设 $A$ 为非空集合,$dot$ 为 $A$ 上的二元运算,若 $dot$ 满足结合律,且 $dot$ 有单位元 $e$,且 $A$ 中每个元素都可逆,则称 $A$ 为群\ 若 $dot$ 满足交换律,则称 $A$ 为交换群或阿贝尔群(交换群的运算更多写成加法)\ ] #example[][ - 幺半群中所有可逆元构成群 - 整数关于加法构成交换群,称为整数加法群 - $RR, QQ, CC$ 关于加法构成交换群 - $RR \/ {0}, QQ \/ {0}, CC \/ {0}$ 关于乘法构成交换群 - 数域 $K$ 上所有 $n$ 阶矩阵关于矩阵乘法构成幺半群,所有可逆矩阵构成群,一般记为 $"GL"_n (K)$ (一般线性群) - 将所有行列式为 $1$ 的矩阵记为 $"SL"_n (K)$,显然它也是一个群 - 设 $M$ 为一个集合,$T_M$ 表示 $M$ 上的所有变换,构成幺半群。$S_M$ 表示 $T_M$ 中所有可逆变换,构成群,称为 $M$ 上的全变换群 - 特别的,若 $M$ 有限,则可不妨设 $M = {1, 2, 3, ..., n} := \[n\]$,此时称$S_M$ 中的一个元素为一个 $n$ 元置换。事实上,这就是 $n$ 个元素的一个排列。此时,记 $S_n := S_(\[n\])$ 为 $n$ 元对称群 ] #proposition[][ - 群中有消去律,即: $ forall a, b, c in G, space a b = a c or b a = c a => b = c $ - 穿脱法则成立,即: $ Inv((a_1 a_2 .. a_n)) = Inv(a_n) Inv(a_(n-1)) .. 
Inv(a_1) $ 注:未必有 $(a b)^n = a^n b^n$,除非 $a, b$ 可交换 ] #definition[直积/direct product][ 设 $G_1, G_2$ 为群,定义 $G_1 times G_2$ 上的运算: $ (a_1, a_2) (b_1, b_2) = (a_1 b_1, a_2 b_2) $ 则 $G_1 times G_2$ 构成群,称为 $G_1$ 和 $G_2$ 的直积 ] #example[二面体群][ 记 $D_(2n)$ 为一个正 $n$ 边形的对称群(自身到自身的双射),称为二面体群。它可以被表示为: $ D_(2n) = {#box[ $e, r, r^2, ..., r^(n-1)$ \ $s = s_1, r s = s_2, r^2 s = s_3, ...$]} $ 并有 $s^2 = e, r^n = 1, s r s = r^(-1)$ ] #definition[生成元/set of generators][ $S$ 为 $G$ 一子集,若 $G$ 中任何元素都可写成 $S$ 中元素(或其逆)的乘积,则称 $S$ 为 $G$ 的生成元 ] 更进一步,有时我们用生成关系来表示群。也即: $ G = angle.l s_1, s_2, ..., s_n | R_1, R_2, ..., R_m angle.r $ 表示可被元素 $s_i$ 和规则 $R_i$ 推导出的群,例如: #example[][ $ D_(2n) = angle.l r, s | r^n = 1, s^2 = 1, s r s = r^(-1) angle.r \ Z_6 = angle.l r(2), s(3) | r^3 = s^2 = 1, r s = s r angle.r \ $ ] #example[轮换/cycle][ $(a_1 a_2 a_3 ... a_k)$ 表示 $S_n$ 中一元素,把 $a_i$ 映到 $a_(i+1)$,把 $a_k$ 映到 $a_1$,把其它元素映到自身。称 $(a_1 a_2 a_3 ... a_k)$ 为一个 $k$ 轮换,$k$ 称为轮换的长度。若 $k = 2$,则称为对换 ] #proposition[][ - 不交的轮换是可交换的 - $S_n$ 中任意元素都可写成不交的轮换的乘积 - $(i j), (i space i+1), {(1 space 2),(1 space 2 space 3 .. 
n)}$ 都是 $S_n$ 的生成元 ] #definition[子群/subgroup][ 设 $G$ 为群,$H$ 为 $G$ 的非空子集,若 $H$ 关于 $G$ 的运算构成群,则称 $H$ 为 $G$ 的子群 ] #definition[生成子群][ 设 $G$ 为群,$S$ 为 $G$ 的子集,$H$ 为 $G$ 中所有可由 $S$ 中元素(或其逆)构成的元素的集合,则 $H$ 为 $G$ 的生成子群(subgroup of G generated by S),记作 $angle.l S angle.r$。显然它也是包含 $S$ 的最小子群 ] #definition[阶/order][ 设 $G$ 为群,$a in G$,若存在最小的正整数 $n$ 使得 $a^n = e$,则称 $a$ 的阶为 $n$,记作 $|a| = n$。若不存在这样的 $n$,则称 $a$ 的阶为无穷 ] = 群论 == 群、商群、同态 #definition[左陪集/left coset][ 设 $G$ 为群,$H$ 为 $G$ 的子群,$a in G$,称 $a H$ 为 $a$ 在 $H$ 下的左陪集,定义为: $ a H = {a h | h in H} $ ] 类似可以定义右陪集 $H a$。在阿贝尔群中左右陪集没有区别。 #proposition[][ 两个左陪集要么不交,要么相等 ] #definition[商群][ 记 $ G\/H = {g H | g in G}\ H\\G = {H g | g in G} $ ] #lemma[][ $G = union_(g H in G\/H) g H$ ] #theorem[Lagrange's theorem][ 设 $G$ 为有限群,$H$ 为 $G$ 的子群 - $|G| = |G\/H| |H|$ - 特别的,$|H|$ 整除 $|G|$ - 若 $G$ 是有限群,则 $"ord"(x) | |G|$。进而,$x^(|G|) = e$ ] #example[Euler's theorem][ 设 $n > 1$,$a, n$ 互素,则 $a^(phi(n)) = 1 (mod n)$\ 其中 $phi(n)$ 表示小于 $n$ 且与 $n$ 互素的数的个数 ] #corollary[][ 若 $|G| = p$ 为一素数,则 $G$ 是循环群,进而是交换群 ] #definition[共轭/conjugate][ 设 $G$ 为群,$a, b in G$,若存在 $c in G$ 使得 $b = c a c^(-1)$,则称 $a$ 和 $b$ (通过 $c$)共轭(conjugate of a by c) ] #lemma[][ 若 $H <= G$,则 $g H Inv(g) <= G$ ] #definition[正规子群/normal subgroup][ 设 $H <= G$,若(容易证明三者等价): - $forallSa(g in G, H = g H Inv(g))$ - $H$ 的任意共轭是 $H$ 自身 - $H$ 的左右陪集相等 则称 $H$ 为 $G$ 的正规子群(normal subgroup of G),记作 $H norS G$ ] 引理:设 $psi$ 是同态,则 $ker(psi)$ 是正规子群 #theorem[第一同构定理][ $G\/ker(psi)$ 同构于 $im(psi)$ 证明:构造 $ G\/ker(psi) &-> im(psi)\ g ker(psi) &-> psi(g) $] 验证其良定义,同态,同构即可 注:它是满射是显然的。任何群上的同态是单射当且仅当 $ psi(g) = e <=> g = e $ #theorem[第二同构定理][ $A <= G, B lt.tri.eq G$, 则 $A B <= G, B lt.tri.eq A B, A sect B lt.tri.eq A$,并且: $ A B \/ B tilde.rev.equiv A \/ A sect B $] 证明:$A B <= G, B lt.tri.eq A B$ 证明容易\ 定义 $ phi: A &-> A B\/B \ a &-> a B $ 验证 $phi$ 是满射,$ker(phi) = A sect B$,由第一同构定理立得结论,同时有$ker(phi) = A sect B lt.tri.eq A$ #theorem[第三同构定理][$H lt.tri.eq K lt.tri.eq G$,则: $ (K\/H) lt.tri.eq (G\/H) $ 并且 $ (G\/H)\/(K\/H) 
tilde.rev.equiv G\/K $] 证明:用类似的思路,定义: $ phi: (G\/H) &-> G\/K \ g H &-> g H K = g K "(使用集合运算可以避免良定义的验证)" $ 验证 $phi$ 是满射,$ker(phi) = K\/H$,由第一同构定理立得结论,同时有$ker(phi) = K\/H lt.tri.eq (G\/H)$ #theorem[Lattice 同构定理][ $N lt.tri.eq G$,则: $ {H | N <= H <= G } &<-> {A\/N | A\/N <= G \/ N}\ A &<-> A\/N $ 保持包含关系,相交关系,商群关系] 注:一般来说,若有同态 $phi:G -> B$, $H lt.tri.eq G$, 我们希望定义映射 $ psi: G\/H &-> B \ g H &-> phi(g) $ 但此映射并不总是良定义的。事实上,它良定义当且仅当 $H subset ker(phi)$\ 当此条件成立时,则称 $phi$ factor through(穿过) $H$ 例:任意一个从$Z$到复数乘法群的同态$phi$,由同态性它由 $phi(1)$ 唯一确定\ 而对 $Z$ 的一个商群 $Z\/n Z$,它到复数乘法群的同态 $psi$ 对应于满足 $n Z subset ker(phi)$ 的同态 $phi$,即 $phi(1)$ 是 $n$ 次单位根 群论的终极目标:分类所有(满足给定条件的)有限群。由以上的同构定理和商群概念可以看出正规子群往往给出了原群的一种分类,从而应该抛掉正规子群使问题简化。 定义:没有非平凡正规子群的非平凡群称为单群 例:$Z_p$ 没有任何非平凡子群(事实上它是所有无非平凡子群的同构类)\ (交错群)$A_n (n >= 5)$ \ 无穷单群也是存在的,从略 Holder's program: + 分类所有有限单群(已完成)\ 所有奇数阶单群均同构于 $Z_p$ (236页的大型证明)\ 单群分类定理(一千页的大型证明):所有有限单群同构于18种群族之一或者26个零散单群之一 + 找到所有将单群组合成一般群(群扩张)的方法 == 合成群列 <series> 定义:(合成序列)若 ${e} = N_0 lt.tri.eq N_1 lt.tri.eq N_2 lt.tri.eq ... 
lt.tri.eq N_k = G$,并且 $N_(i+1) \/ N_(i)$ 是单的,则称它是一个合成序列,各商群 $N_(i+1)\/N_(i)$ 称为合成因子或 Jordan-Holder 因子 例:${e} lt.tri.eq angle.l r^2 angle.r lt.tri.eq angle.l r angle.r lt.tri.eq D_8$\ ${e} lt.tri.eq angle.l s angle.r lt.tri.eq angle.l s, r^2 angle.r lt.tri.eq D_8$ 下面的证明需要如下引理: #lemma[][ $A B <= G => A B = B A$ <commutable> ] #proof[ $ A B = Inv((A B)) = Inv(B) Inv(A) = B A $ ] #lemma[][ $B norS C => A sect B norS A sect C$ <sect> ] #proof[ $ forall a in A sect C:\ a (A sect B) Inv(a) = (a A Inv(a)) sect (a B Inv(a)) = A sect B $<subgroup> ] #lemma[][ $B norS A B arrow.l.double A subset N_G (B)$ ] #proof[ $ forall a in A, b in B:\ (a b) B Inv((a b)) = a (b B Inv(b)) Inv(a) = a B Inv(a) = B $ ] #lemma[][ $A(B sect C) <= G => A sect (B C) = (A sect B) (A sect C)$ ] #proof[ 容易注意到: $ A sect B subset A sect (B C) \ A sect C subset A sect (B C) \ $ 从而: $ (A sect B) (A sect C) subset A sect (B C) $ 对另一侧, ] #theorem[Jordan-Holder][ 设 $G$ 是非平凡有限群 + G 有合成群列 + G 的合成因子是唯一的。也就是说,G 的任意两个合成序列长度相等,且对应的合成因子在不计顺序但记数量的视角下同构] #proof[ (1): 若 $G$ 是单群,结论显然\ 若 $G$ 不是单群,则其存在一个非平凡正规子群 $N$,对 $N$ 和 $G\/N$ 归纳可得。\ 事实上,$N$ 和 $G\/N$ 的合成序列相连就是 $G$ 的合成序列。详细来说: $ {e} = N_0 lt.tri.eq N_1 lt.tri.eq N_2 lt.tri.eq ... lt.tri.eq N_k = N = pi^(-1)({e}) lt.tri.eq pi^(-1)(B_2) lt.tri.eq ... 
lt.tri.eq pi^(-1)(B_n) = G $ Lattice 同构定理保证了上述是一个合成序列($pi^(-1)(B_(i+1)) \/ pi^(-1)(B_(i)) tilde.rev.equiv B_(i+1)\/B_i$) (2):为了证明结论,可以先在集合的退化情形做类比。 #lemma[][对有限集合 $S$ 上两个滤链,将有: $ quotient(A_(i - 1) union (A_i sect B_j),(A_(i-1) union (A_i sect B_(j-1)))) = quotient(B_(j - 1) union (A_i sect B_j),(B_(j-1) union (A_(i-1) sect B_(j)))) $ ] #proof[ 画图可以看出,如果找到中间的桥梁 $quotient(A_i sect B_j, (A_i sect B_(j-1)) union (A_(i-1) sect B_j))$,分别证明其与两者都同构,则结论成立。 ] #lemma[][对有限群 $G$ 上两个滤链,将有: $ quotient(A_(i-1)(A_i sect B_j), A_(i-1)(A_i sect B_(j-1))) tilde.rev.equiv quotient(B_(j-1)(A_i sect B_j), B_(j-1)(A_(i-1) sect B_j)) $] #proof[ 类似上面的发现,往证: $ quotient(A_i sect B_j, (A_i sect B_(j-1))(A_(i-1) sect B_j)) tilde.rev.equiv quotient(A_(i-1)(A_i sect B_j), A_(i-1)(A_i sect B_(j-1))) $ ,另一侧同理。 ] - 首先证明,$A_(i-1)(A_i sect B_(j-1)) norS A_(i-1)(A_i sect B_j)$\ 断言 $A_i sect B_(j-1) norS A_i sect B_j$,从而子群关系成立。\ 再验证它确实是正规子群: $ forall a in A_(i-1), b in A_i sect B_j:\ &a b A_(i-1)(A_i sect B_(j-1)) Inv(b) Inv(a) \ =& a (b A_(i-1) Inv(b)) (b (A_i sect B_(j-1)) Inv(b)) Inv(a) \ =& a A_(i-1)(A_i sect B_(j-1)) Inv(a) \ =& A_(i-1)(A_i sect B_(j-1)) Inv(a) \ =& (A_i sect B_(j-1)) A_(i-1) Inv(a) \ ("这是因为" A_(i-1)(A_i sect B_(j-1)) &<= A_i =>A_(i-1)(A_i sect B_(j-1)) = (A_i sect B_(j-1))A_(i-1) ) \ =& (A_i sect B_(j-1)) A_(i-1) = A_(i-1)(A_i sect B_(j-1)) $ - 我们将要利用第一同构定理,定义: $ funcDef(phi, A_i sect B_j, quotient(A_(i-1)(A_i sect B_j), A_(i-1)(A_i sect B_(j-1))), x , x A_(i-1)(A_i sect B_(j-1)))\ $ 良定义及同态性显然。\ 最后,我们要验证 $ker(phi) = (A_(i-1) sect B_j)(A_i sect B_(j-1))$。显有: $ ker(phi) = (A_i sect B_j) sect (A_(i-1) (A_i sect B_(j-1)))\ $ ] 定义:(可解群)若在 $G$ 的一个合成序列中 $N_(i+1) \/ N_(i)$ 是交换的,则称 $G$ 是可解的\ 可换的单群仅有 $Z_p$,从而显然群 $G$ 可解当且仅当其 Jordan-Holder 因子交换 历史上,阿贝尔和伽罗瓦独立证明了代数方程有根式解当且仅当对应的群是可解的 例:取 $G L(n)$ 中可逆的上三角矩阵 $B$,它是可解的: - 取 $N$ 是严格上三角矩阵(对角线是 $1$),则 $N lt.tri.eq B$ - 取 $N_1 = {mat(1,0,*; 0, 1, 0; 0, 0, 1)}$ 则$N_1 lt.tri.eq N$ - 
... - 可以验证这是一个合成群列,而相邻商是同构于实数乘法群的次幂的 == 交错群 我们的目标是给出一个合成群列: $ {e} <= A_n <= S_n, n >= 5 $ #definition[][ $forall sigma in S_n, sigma(Delta) = product_(1<=i<j<=n)(X_(sigma(i)) - X_(sigma(j))) = plus.minus Delta$ ] #definition[][ $sgn(sigma) := sigma(Delta) / Delta$ ] #lemma[][ $sgn(sigma tau) = sgn(sigma) sgn(tau)$,换言之,它是 $S_n -> {plus.minus 1}$ 的同态 ] #proof[ $ sgn(sigma tau) &= (product_(1<=i<j<=n)(X_(sigma tau(i)) - X_(sigma tau(j)))) / (product_(1<=i<j<=n)(X_i - X_j)))\ sgn(sigma) sgn(tau) &= (product_(1<=i<j<=n)(X_(sigma(i)) - X_(sigma(j)))) / (product_(1<=i<j<=n)(X_i - X_j)) dot (product_(1<=i<j<=n)(X_(tau(i)) - X_(tau(j)))) / (product_(1<=i<j<=n)(X_i - X_j))\ &= (product_(1<=i<j<=n)(X_(sigma(tau(i))) - X_(sigma(tau(j))))) / (product_(1<=i<j<=n)(X_tau(i) - X_tau(j))) dot (product_(1<=i<j<=n)(X_(tau(i)) - X_(tau(j)))) / (product_(1<=i<j<=n)(X_i - X_j))\ &"(换元中造成的前后颠倒会在分母上下同时消掉)"\ &= sgn(sigma tau) $ ] #proposition[][ - $sgn((i j)) = -1$ - 若 $sigma$ 可以写成 $k$ 个对换的积,则 $sgn(sigma) = (-1)^k$ - $sgn(sigma) = 1 <=> sigma "是偶数个对换的乘积" <=> sigma "是偶置换"$ ] #definition[][ 称 $A_n = ker("sgn")$ 也即所有偶置换构成的群为交错群。显然它是 $S_n$ 的正规子群。 ] #proposition[][ $|A_n| = n! / 2$ ] #proposition[][ - $A_3 tilde.eq Z_3$ - $Z_2 times Z_2 norS A_4$ ] #lemma[共轭][ $tau(a_1 a_2 ... a_n)Inv(tau) = (tau(a_1) tau(a_2) ... tau(a_n))$ ] #theorem[][ $A_n, n>=5$ 是单群 ] #proof[ 考虑三轮换 $(i j k)$\ + $A_n$ 是由三轮换生成的 $ (a b)(c d) = (a c b)(a c d)\ (a b)(a c) = (a c b)\ $ + 若 $N norS A_n$ 且 $N$ 包含一个三轮换$(i j k)$,则它包含所有三轮换(从而 $N = A_n)$\ 显然,只需证明所有三轮换在 $A_n$ 中共轭即可。\ 设若 $sigma (i j k) Inv(sigma)$ - 若 $sigma in A_n$,则已证毕 - 若 $sigma in.not A_n$,则考虑: $ sigma(i j)(i j k)(i j)Inv(sigma) = sigma(j i k)Inv(sigma) \ => (sigma(i j))^2 "就是我们所求的共轭轮换" $ + 证明 $N$ 中有三轮换:固定 $sigma in N$ - 若 $sigma$ 是三轮换,则已证毕 - 若 $sigma$ 写成不相交轮换乘积时,最长串长度 $>= 4$\ 从而: $ sigma = tau (a_1 a_2 ... a_t) $ $ (a_1 a_2 a_3) sigma Inv((a_1 a_2 a_3)) = tau (a_2 a_3 a-1 a_4 ... a_5) := sigma' in N $ 结合上两式我们有: $ Inv((a_1 ... a_5))(a_1 a_2 a_3 ... 
a_t) = (a_1 a_3 a_5) in N $ + 讨论余下情况,$sigma$ 由一些二轮换和三轮换生成 #set enum(numbering: "Step 4.1.") - 简单起见,分别进行二次方和三次方,可以化为均由二轮换生成和均由三轮换生成的情况 - 设 $sigma$ 均由二轮换生成,显然至少有两个二轮换。注意到 $A_4$ 不是单群,从而必须引入除两个二轮换外的其他元素\ $ sigma = tau(a_1 a_2)(a_3 a_4)\ (a_1 a_2 a_3) sigma Inv((a_1 a_2 a_3)) = (a_1 a_3)(a_2 a_4) := sigma' in N $ 在此利用 $N >=5$ (这是我们第一次利用此条件),取 $a_5$ 异于前四个元素,有: $ (a_1 a_2 a_5)sigma'Inv((a_1 a_2 a_5)) = (a_1 a_2 a_3 a_4 a_5) in N $ 技术上,我们通过用三轮换取共轭的方式延长了轮换长度,这就回到了 Step 3 的情况。 - 设 $sigma$ 均由三轮换生成 ] == 交换群 #definition[有限生成群][ 称群是有限生成的,如果它可以写成一些有限个元素及它们的逆的任意顺序的乘积 ] #theorem[有限生成群分解定理][ 设 $G$ 是有限生成的阿贝尔群,则 $G$ 可以唯一的被表示为: $ G tilde.eq (Z)^r times Z_(n_1) times Z_(n_2) times ... times Z_(n_k), "with" r>=0, 2<=n_1 | n_2 | n_3 | ... | n_k $ 通常将 $r$ 称为 $G$ 的秩,将 $n_1, n_2, ..., n_k$ 称为 $G$ 的不变因子。$(Z)^r$ 称为自由部分,$Z_(n_1) times Z_(n_2) times ... times Z_(n_k)$ 称为挠部分 ] #proof[ 阿贝尔群 = $ZZ-"module"$\ 将在后面利用有限生成模的结构定理证明 ] #lemma[][ $gcd(m, n) = 1 => Z_(m n) tilde.eq Z_m times Z_n$ ] #proof[ 定义: $ funcDef(phi, Z_(m n), Z_m times Z_n, a mod m n, (a mod n ,b mod m)) $ 验证它是同态,且 $ker(phi) = {0}$,并由元素个数知它也是同构,证毕 ] 由上述引理,立得有限生成阿贝尔群分解的另一种形式: #theorem[有限生成群分解定理2][ 设 $G$ 是有限生成的阿贝尔群,则 $G$ 可以唯一的被表示为: $ G tilde.eq (Z)^r times (Z_(p_1^alpha_11) times Z_(p_2^alpha_12) ... Z_(p_n^alpha_1n)) times ... 
$ ]

== 群作用

群作用的最简单案例是 $S_n$ 在 $(1,2,3, ...,n)$ 上的作用
#definition[群(左)作用][
设 $G$ 为一个群,$X$ 是一个集合。称:
$ funcDef(phi, G times X, X, (g, x), g x) $
为一个群作用,如果:($phi(g, x)$ 常记作 $g x$)
+ $forall x in X, space e x = x$
+ $g (h x) = (g h) x$
也称 $G$ 作用在 $X$ 上
]
右作用非常类似,要求结合律在右侧即可。注意到左右作用本质上是完全不同的,因为它们的定义中 $g$ 和 $h$ 作用的顺序不同。
#proposition[][
+ 群作用也可视作 $G ->(X -> X)$
+ $[x -> g x]$ 一定是 $X$ 上的双射。显然它的逆恰为 $[x -> Inv(g) x]$
]
#proposition[][
若有右作用 $phi: G -> (X -> X)$,则容易诱导出自然的左作用:
$ phi' = [g -> phi(Inv(g))] $
大多数时候人们不愿考虑右作用
]
#proposition[][
群作用 $phi: G -> (X -> X)$ 给出 $G -> S_X$ 的群同态。其中 $S_X$ 是 $X$ 上所有双射构成的对称群。这由 $phi(g h)(x) = phi(g)(phi(h))(x) = (phi(g) dot phi(h))(x)$ 显然给出。\
该命题反之亦成立,也即若 $phi$ 给出如上的一个同态,则它也是一个群作用。\
]
#definition[][
+ 如果 $phi$ 是平凡同态(全部映成单位元),则称对应的群作用也是平凡的。
+ 如果 $phi$ 是单同态,则称群作用是忠实(faithful)的。
注:“忠实”的含义可以理解为忠实地反映了对方的信息,也即 $S_X$ 中两个不一样的元素在 $G$ 中来看也是不一样的。
]
#example[][
群 $G$ 在自身上显然有很多作用
+ 左平移:$[g -> [x -> g x]]$
+ 右平移:$[g -> [x -> x Inv(g)]]$ (为了保证它是左作用特地选取 $Inv(g)$)\
验证:\
$g(k (x)) = k(x)Inv(g) = x Inv(k) Inv(g) = x Inv((g k)) = (g k)(x)$\
如果没有选取逆,计算可验证它没有左结合性,此时它是右作用。这里也可体现出左右作用的区别。
+ 共轭作用:$[g -> [x -> g x Inv(g)]]$\
它有更好的性质:事实上,$[x -> g x Inv(g)]$ 一定是 $G$ 上的同构。一般将共轭作用对应的映射 $G -> S_G$ 记为 $Ad(g)$
]
#theorem[Cayley][
每个群都同构于某个置换群的子群。特别地,若 $|G| = n$,则 $G$ 同构于 $S_n$ 的一个子群
]
这个定理很平凡,更多的是出于历史和哲学上的考虑:历史上人们最早将群定义为置换群的子群,而不是现在的抽象定义,定理表明这两者是等价的。这个定理也可以看作是群论的一个基本定理,它告诉我们,群论的研究可以归结为置换群的研究。或者说,我们可以通过作用由具体的置换研究具体的群的性质。
#definition[自同构(automorphism)][
称 $phi: G -> G$ 是 $G$ 的一个自同构,如果它是一个同构。所有的自同构构成群 $Aut(G)$。它显然是集合 $G$ 上置换群的一个子群
]
#example[][
$G$ 在自身上的共轭作用给出 $G$ 到 $Aut(G)$ 的同态。
]
对上述例子略微推广,便可引出下节中半直积的概念。
#example[][
$G$ 到 $G$ 的所有子群构成的集合上也有自然的作用,取共轭即可。
]
#definition[][
- 若群 $G$ 作用在 $X$ 上,则称 $X$ 是 $G-$集合
- $Stab_G(x) := {g in G | g x = x}$,称为 $x$ 处的“稳定子”(Stabilizer)群
- $Orb_G(x) := G x = {g x | g in G}$
]
#proposition[][
- $Stab_G (x) <= G$
- $Orb(x)$ 与 $Orb(y)$ 要么不交,要么相等。进而 $X$ 可以分解为 $Orb(x)$ 的不交并,即 $X = tack.b.double_("orbite" OO) OO$
- 若 $y in Orb_G(x)$,也即 $y = g x$,我们有:
$Stab_G (x) = Inv(g) Stab_G (y) g$
]
#proof[
前两个容易验证。验证第三个:
$ h y = y <=> h g x = g x <=> g^(-1) h g x = x $
从而:
$ h in Stab_G (y) <=> g^(-1) h g in Stab_G (x) $
]
#example[][
$H <= G$,$H$ 在 $G$ 上左乘自然产生一个群作用:\
$Orb(g) = H g$(这里变成了右陪集)\
$Stab(x) = {e}$
]
我们的目标是给出轨道的一些描述。
#definition[$G-$等变映射][
设 $G$ 作用在 $X, Y$ 上,$f: X -> Y$。称 $f$ 是 $G-$等变的,如果:
$ forall g in G, forall x in X, f(g x) = g f(x) $
]
#definition[传递][
称 $G$ 是传递的,如果 $forall x, y in X, exists g: y = g x$
]
#proposition[][
若 $G$ 是传递的,则令 $H = Stab_G (x)$,那么:
$ funcDef(phi, G \/ H, X, g H, g x) $
是一个 $G-$ 等变的双射
]
#proof[
+ 良好定义:
$ g_1 x = g_2 h x = g_2 x $
+ 满射 由传递性易知
+ 单射 $g_1 x = g_2 x => Inv(g_2) g_1 x = x => Inv(g_2) g_1 in H$
+ $GEquiv(G)$ :
$ phi(g' (g H)) = g' g x = g' phi(g H) $
]
一般而言,$quotient(G, Stab_G (x)) tilde.eq Orb_G (x)$,因此轨道分解也可写成商集分解。
#definition[中心化子/ 正规化子][
- $C_G (g) = {h in G | h g = g h} <= G$。它就是共轭作用下的稳定子。
- 若 $S <= G$,记 $C_G (S) = sect_(s in S) C_G (s) = {g in G | forall s in S: conjugateRight(g,s) = s}$
- 记 $Z(G) = C_G (G) = ker("Ad")$ 为群的中心
- 若 $H <= G$,记 $N_G (H) = {g in G | conjugateRight(g,H) = H}$ 为 $H$ 的正规化子。显然它包含 $C_G (H)$。它可以看作 $G$ 以共轭作用作用于所有子群之上的稳定子群
]
#definition[共轭(conjugate)关系][
称 $a, b in G$ 共轭,如果 $exists g: b = conjugateRight(g, a)$,等价于它们在 $G$ 共轭作用中处于同一轨道
]
#example[][
- $G$ 是交换群,则与 $a$ 共轭的元素只有它本身。
- $"GL"_n (CC)$ 中的共轭类就是按约当标准型分类
- $S_n$ 中的共轭类就是按轮换类型分类
]
#theorem[类方程][
设 $G$ 是有限群,以共轭作用作用于自身
- 取 $g_i$ 是各个轨道的代表元,有:
$ |G| = sum_(i=1)^k |Orb(g_i)| = sum_(i=1)^k |G : C_G (g_i)| $
- 取 $g_i not in Z(G)$,显然 $Orb(g_i)$ 不是单点集,从而:
$ |G| = |Z(G)| + sum_(i=1)^k |G : C_G (g_i)| $
]
#example[$S_5$ 的分类计数][
- 单位元 $1$,中心化子为 $S_5$
- 二轮换 $(a b)$,中心化子为 $S_3 times S_2$
- 三轮换 $(a b c)$,中心化子为 $S_2 times ZZ_3$
- 两个二轮换,中心化子为 $ZZ_2^2 times.r ZZ_2$
- 四轮换,中心化子为 $ZZ_4$
- 一个二轮换一个三轮换,中心化子为 $ZZ_2 times ZZ_3$
- 五轮换,中心化子为 $ZZ_5$
]
#theorem[][
设 $p$ 为素数,称 $G$ 为 $p$ 群,如果 $|G| = p^n$。对于非平凡的 $p$ 群,一定有 $Z(G)$ 非平凡
]
#proof[
$|G| = |Z(G)| + sum_(i=1)^k |G : C_G (g_i)|$\
等式左端是 $p$ 的幂,右边求和的每一项都是 $p$ 的幂(且大于 $1$),从而 $p$ 整除 $|Z(G)|$,因此它非平凡。
]
#definition[自同构群][
显然所有的共轭作用都是自同构。注意到 $ker("Ad") = Z(G)$,定义 $"Inn"(G) = im("Ad")$ 为 $G$ 的内自同构群。\
断言 $"Inn"(G) norS Aut(G)$,它对应的商群称为外自同构群,记为 $"Out"(G)$
]
#proof[
我们需要证明 $forall sigma in Aut(G), forall Ad_g in "Inn"(G): conjugateRight(sigma, "Ad"_g) in "Inn"(G)$\
事实上,显然 $conjugateRight(sigma, "Ad"_g)$ 是自同构。断言 $conjugateRight(sigma, "Ad"_g) = "Ad"_(sigma(g))$,足以说明内自同构群是正规子群。\
验证:
$ sigma(Ad1(Inv(sigma)(h), g)) = sigma(g Inv(sigma)(h)Inv(g)) = sigma(g) sigma(Inv(sigma)(h))sigma(Inv(g)) = sigma(g) h sigma(Inv(g))\
= Ad1(h, sigma(g)) $
]
#example[][
- 考虑 $G = "GL"_n (QQ)$,考虑其上的共轭作用 $Ad$。 $ker("Ad")$ 是与其他矩阵都交换的矩阵,显然为数量矩阵。\
$"Inn"(G) = quotient(G, ker("Ad"))$\
它的外自同构群本质上只有一个非平凡元素 $psi = [A -> Inv((A^T))]$。它显然不是共轭作用,因为共轭作用不改变特征值,而它把特征值变成了逆。\
事实上,记 $"PGL"_n (QQ) := "Inn"(G)$,则
$ "PGL"_n (QQ) times.r {1, psi} $
中的元素均为自同构,其中 $psi(g) = (Inv(g))^T$\
另一方面,本质来说 $"GL"_n (QQ) tilde.eq "SL"_n (QQ) times.r QQ^times$,其中 $"SL"_n (QQ)$ 是行列式为 $1$ 的矩阵。事实上:
$ Aut("SL"_n (QQ)) = "PGL"_n (QQ) times.r {1, psi} $
- 在所有 $S_n, n>= 4$ 中,除 $n = 6$ 外所有自同构都是内自同构。
]

== 群的直积,半直积

以下定理给出我们如何将群分解成子群的直积
#theorem[][
设 $H, K norS G$,满足:$H sect K = {e}$,则有:
$ H K tilde.eq H times K $
]
#proof[
由前面的引理,$H K <= G$\
定义:
$ funcDef(phi, H times K, H K, (h, k), h k) $
验证:
+ $phi$ 是同态
$ phi((h_1, k_1) (h_2, k_2)) = phi((h_1 h_2, k_1 k_2)) = h_1 h_2 k_1 k_2 space ?
= space h_1 k_1 h_2 k_2 = phi((h_1, k_1)) phi((h_2, k_2)) $
只要证明 $h, k$ 交换即可。事实上:
$ & h_2 k_1 = k_1 h_2\
& <=> h_2 k_1 h_2^(-1) = k_1\
& <=> h_2 k_1 h_2^(-1) k_1^(-1) = e $
注意到:
$ h_2 k_1 Inv(h_2) in K, Inv(k_1) in K => h_2 k_1 h_2^(-1) k_1^(-1) in K\
h_2 in H, k_1 Inv(h_2) Inv(k_1) in H => h_2 k_1 h_2^(-1) k_1^(-1) in H $
从而由 $H sect K = {e}$ 立得原结论
+ $phi$ 是满射:显然
+ $ker(phi) = {e}$
$ &phi((h, k)) = e \
<=>& h k = e\
<=>& h = k^(-1)\
=>& h, k in H sect K\
=>& h = k = e $
]
我们将要给出半直积的概念,通过一个正规子群,另一个子群,以及子群在正规子群上的共轭作用重建群。
#definition[半直积][
设 $N, H$ 是群,且 $phi: H -> Aut(N)$ 是一个同态($phi$ 也是 $H$ 在 $N$ 上的作用)。对 $h in H$,定义 $phi_h = phi(h)$\
定义半直积(semi-direct product)$N times.r H := N times.r_phi H$ 为在 $N times H$ 上定义乘法运算:
$ (n_1, h_1)(n_2, h_2) = (n_1 phi_(h_1)(n_2), h_1 h_2) $
它确实是群,验证结合律:
$ ((n_1, h_1)(n_2, h_2))(n_3, h_3) &= (n_1 phi_(h_1)(n_2), h_1 h_2)(n_3, h_3)\
&= (n_1 phi_(h_1)(n_2) phi_(h_1 h_2)(n_3), h_1 h_2 h_3)\
$
$ (n_1, h_1) ((n_2, h_2)(n_3, h_3)) &= (n_1, h_1)(n_2 phi_(h_2)(n_3), h_2 h_3)\
&= (n_1 phi_(h_1)(n_2 phi_(h_2)(n_3)), h_1 h_2 h_3)\
&= (n_1 phi_(h_1)(n_2) phi_(h_1) (phi_(h_2)(n_3)), h_1 h_2 h_3)\
&= (n_1 phi_(h_1)(n_2) phi_(h_1 h_2)(n_3) , h_1 h_2 h_3)\
$
]
#proposition[][
$N$ 是 $N times.r H$ 的正规子群,$H$ 是 $N times.r H$ 的子群
]
半直积的本质是我们希望在 $N times H$ 上复刻 $N H$。显然我们需要考虑如何看待 $h n Inv(h)$。本质来讲,我们的目标就是:
$ h n Inv(h) = phi_h (n) $
这里也可以看出半直积和共轭作用是一脉相承的。很自然的我们有:
#proposition[][
设 $G$ 是群,$N norS G, H <= G$ 且 $H sect N = {e}$。从而:
+ $N H = H N <= G$
+ $N H tilde.eq N times.r H$
]
#example[][
$ZZ_n^times := {a mod n | gcd(a, n) = 1}$ (乘法运算)作用在 $Z_n$ 上。
+ $Z_n times.r {plus.minus 1} tilde.eq D_(2n)$\
$(a, 1) -> r^a$\
$(0, -1) -> s $
+ $Z_q^times$ ($q$ 为素数)为循环群,阶为 $q - 1$。\
当 $p | q - 1$ 时,取其一个 $p$ 阶子群便得 $Z_q times.r Z_p$。\
例如:$Z_7 times.r Z_3$,取 $phi_2(1) = 2$ 或 $phi_4(1) = 4$ 均可得到半直积。它们看起来不同,但可以找到它们之间的同构:
$ funcDef(Phi, Z_7 times.r_(phi_2) Z_3, Z_7 times.r_(phi_4) Z_3, (a, b), (a, 2b)) $
这是因为:
$ Phi((a, b)(c,d)) = Phi(a + 2^b c, b+d)=(a+2^b c, 2 b + 2 d) \
= (a + 4 ^(2 b) c , 2
b + 2 d) = ... = Phi((a, b))Phi((c, d)) $ ] == Sylow 定理及其应用 某种意义上,这是在用完全抽象的方法研究有限群,类似于整数的素分解。本节中所有群都是有限群。 目标:给定素数 $p$,称群 $G$ 是 $p$ 群,如果 $|G| = p^n$。我们希望: + 研究 $p$ 群的结构。 + 对阶形如 $p^alpha m$ 的群 $G$,其中 $gcd(p, m) = 1$。对 $H <= G$,若 $|H| = p^alpha$ ,则称之为 Sylow-p 子群。记 $G$ 中所有 Sylow-p 子群为 $"Sylow"_p (G)$,我们希望把群分解到 Sylow-p 子群上。 #theorem[Sylow 第一定理][ 任意群 $G$,Sylow-p 子群是存在的。 ] #proof[ 这个定理的证明有很多种,且难度都很大。这里给出一种想法更加结构化的方法。\ 抽象的想法是用可能的方法降低 $|G|$,再用各种方法将子群的 Sylow-p 子群与原群的 Sylow-p 子群联系起来。\ 我们对 $|G|$ 进行归纳: + $|G| = 1$,结论是显然的 + 假设小于 $|G|$ 的情况已经证明: - 若 $p | |Z(G)|$,且 $Z(G)$ 是有限交换群。由有限交换群的分类定理,可以将 $Z(G)$ 分解为: $ Z_(p^(t_1)) times Z_(p^(t_2)) times ... times Z_(p^(t_n)) times R $ 考虑 $G' := quotient(G, Z(G))$,注意到: $ |G'| = (|G|) / (|Z(G)|) $ 对它进行归纳假设,它有 Sylow-p 子群 $H'$,取其原像 $H$,断言: $ H times Z_(p^(t_1)) times Z_(p^(t_2)) times ... times Z_(p^(t_n)) $ 就是我们想要的 Sylow-p 子群。 - 令 $K = Z_(p^(t_1)) times Z_(p^(t_2)) times ... times Z_(p^(t_n))$,由定义知 $K subset Z(G)$ 与 $H$ 中的元素全部可交换,从而 $H times K tilde.eq H K <= G$。\ - $|H K| = |H| |K| = p^alpha m'$,证毕 - 如若不然,考虑类方程: $ |G| = |Z_G(G)| + sum_(i=1)^k |G : Stab_G(g_i)| $ 假设表明后者的求和式中一定有某个 $|G : Stab_G(g_i)|$ 不含 $p$,从而: $ |Stab_G(g_i)| = p^alpha m' < |G| $ 由归纳假设它有 Sylow-p 子群,而它的 Sylow-p 子群一定也是 $G$ 的 Sylow-p 子群 ] #theorem[Sylow 第二定理][ 若 $P$ 是 Sylow-p 子群,$Q <= G$ 是 $p$ 群,则 $exists g in G$,使得: $ Q <= g P Inv(g) $ 这个结论等价于以下事实成立: - 所有 Sylow-p 子群都共轭。 - 所有 $p-$ 子群一定包含于某个 Sylow-p 子群中。 ] #proof[ 这个定理的证明有一些通用的方法:\ 在此我们忽略一些简单情况。\ 令 $Q$ 以左乘作用于 $quotient(G, P)$,并设: $ |Q| = p^(r) $ 则: $ m = |quotient(G, P)| = sum_("orbit" OO) OO $ 但我们注意到该作用下的都是 $Q$ 的某个陪集,这些陪集的元素个数要么是 $p$ 的幂,要么是 $1$\ 从而我们可以得到一定有一个元素的轨道: $ OO = {g P} $ 满足: $ forall q in Q:\ q g P = g P => Inv(g) q g P = P => Inv(g) q g in P => q in g P Inv(g) $ 换言之,这给出了 $Q subset g P Inv(g)$,这就证明了结论。 ] #corollary[][ - 若某个 Sylow-p 子群正规,则它是唯一的 Sylow-p 子群 - 若 Sylow-p 子群唯一,则它一定正规 ] #corollary[][ 设 $P$ 是 Sylow-p 子群,则有: $ N_G (P) = N_G (N_G (P)) $ 且 $N_G (P)$ 包含唯一的 Sylow-p 子群 ] #proof[ 显然 $P norS N_G (P) <= G$,这给出 $P$ 也是 $N_G 
(P)$ 的 Sylow-p 子群。 对于第一个结论,只需证: $ N_G (P) supset N_G (N_G (P)) $ 任取 $x in N_G (N_G (P))$,将有: $ x N_G (P) Inv(x) = N_G (P) $ 而: $ x P Inv(x) norS x N_G (P Inv(x) = N_G (P)) $ 由 Sylow-p 子群唯一性,知 $x P Inv(x) = P$,进而 $x in N_G (p)$,证毕。 ] #theorem[Sylow 第三定理][ 记 $|"Sylow"_p (G)| = n_p$ - $p | n_p - 1$ - $n_p | m$ ] #proof[ 考虑 $G$ 以共轭作用于 $"Sylow"(G)$。由第二定理知作用是传递的,我们有: $ n_p = (|G|) / (|Stab_G (P)|) $ 由上面的推论,$Stab_G (P) = N_G (P)$ 包含唯一的 Sylow-p 子群,这就说明结论 2 成立。\ 对于结论 1,考虑将 Sylow-p 子群 $P$ 作用 于$"Sylow"(G)$,有: $ n_p = sum((|P|) / (|Stab_P (Q)|)) $ 我们发现: - 若 $Stab_P (Q) != P$,则对应项一定是 $p$ 的倍数; - 若 $Stab_P (Q) = P$,则 $P <= N_G (Q)$。由引理,这将给出 $P = Q$,显然右端有且仅有一项成立; 以上事实给出 $n_p = k p + 1$,证毕。 ] == Sylow 定理的应用 Sylow 定理十分重要,例如判断单群。 #example[][ $132$ 阶群不可能是单群。 ] #proof[ 假设 $|G| = 132 = 3 dot 4 dot 11$,且是单群。\ 显然 $n_3, n_4, n_11 > 1$,否则对应群将成为正规子群。 - $n_4$ : $ 2 | n_4 - 1\ n_4 | 33\ $ 给出 $n_4 = 33$(不可能) 或 $n_4 = 11$ 或 $n_4 = 3$\ 注意到每个 $4$ 阶群至少给出两个 $2$ 阶元,从而总计至少给出 $6$ 个二阶元\ - $n_3$ : $ 3 | n_3 - 1\ n_3 | 44\ $ 给出 $n_3 = 4$。\ 这意味着我们有 $4$ 个 $3-$ 子群,它们将给出 $2 dot 4 = 8$ 个三阶元。\ - $n_11$: $ 11 | n_11 - 1\ n_11 | 12\ $ 给出 $n_11 = 12$。\ 类似的,可以给出 $12$ 个 $11-$ 子群,它们将给出 $10 dot 12 = 120$ 个 $11-$ 阶元。\ 总计给出了 $134$ 个非平凡元,矛盾! 
] #example[][ 设 $|G| = p q$,则 $G$ 只能为以下两种: - $Z_p times Z_q$ - $p | q - 1$,则 $Z_p <= Aut(Z_q)$,此时可以构造两者之间的半直积(而且是唯一的) - ] #example[][ 设 $N norS G$,$P$ 是 Sylow-p 子群。取 $G -> quotient(G, N)$ 的自然同态 $pi$,则: - $pi(P)$ - $P sect N$ 都是 Sylow-p 子群 ] #proof[ 注意到: $ |P| = |P sect N| |pi(P)| $ ] #example[][ 设 $|G| = 105$ 包含正规的 Sylow-3 子群,则它是循环群 ] #proof[ 105 = 3 dot 5 dot 7\ 类似可得: $ n_5 = 1, 21\ n_7 = 1, 15\ $ 考虑 $G$ 共轭作用于 $P_3$,显然 $G$ 将给出其上的自同构。\ 另一方面,$P_3$ 的自同构只有两个,这意味着存在 $G -> ZZ_2$ 的同态。而 |G| 是奇数,进而这些作用都是平凡。\ 换言之,$G$ 中所有元素都与 $P_3$ 中元素交换,因此: $ P_3 subset Z (G) $ 回到 Sylow 第三定理的证明,我们有: $ n_5 = (|G|) / (C_G (g)) $ 我们将有: $ P_3 <= Z (G) <= C_G (g) => 3 | C_G (g) $ 另一方面,$5 | C_G (g)$,因此 $n_5$ 无法取 $21$\ 同理可得 $n_7$ 无法取 $15$\ ] == 交换子群 #definition[交换子][ 对 $x, y in G$,定义它们的交换化子: $ [x, y] = Inv(x) Inv(y) x y $ ] #proposition[][ - $[x, y] = e <=> x, y$ 可交换 - $g [x, y] Inv(g) = [g x Inv(g), g y Inv(g)]$ ] #definition[导出子群][ 称: $ G^"der" = G' := generatedBy([x,y]\, x\, y in G) $ 为 $G$ 的导出子群(derived subgroup) ] 注意:导出子群是交换化子生成的群,但其中元素未必都是交换化子 #proposition[][ - $G' norS G$,由交换化子的共轭性质可得 - $quotient(G, G')$ 是交换群 ] #lemma[][ 若 $A$ 是交换群,$phi: G -> A$ 是同态,则有: $ G' subset ker(phi) $ 进而,$phi$ 可分解成: $ G ->^pi quotient(G, G') ->^(overline(phi)) A $ 或者说: $ Hom(G, A) tilde Hom(quotient(G, G'), A) $ ] #corollary[][ $quotient(G, G')$ 是所有商群中最大的可交换的商群 ] #example[][ 取 $G = D_(2 n)$,则: - $[r, s] = r^(-2) in G'$ - $generatedBy(r^2) subset G'$ - 考虑同态: $ funcDef(psi, G, {1,-1}, r\,s, 1\,-1,) $ 从而 $G' subset ker psi = generatedBy(r)$ - 若 $n$ 是奇数,则 $generatedBy(r^2) = generatedBy(r)$,进而 $G' = generatedBy(r)$ - 若 $n$ 是偶数,考虑同态: $ funcDef(psi, G, {1,-1} times {1, -1}, r\,s, (-1, 1)\,(1, -1),) $ 类似可以证明 $G' subset ker psi = generatedBy(r^2)$ - 由此,容易给出 $D_(2n)$ 到任意交换群的同态 ] #definition[导出序列][ 对于任意群 $G$,定义: $ G^0 = G, G^1 = [G, G], ..., G^(n+1) = [G^n, G^n] $ 为 $G$ 的导出序列(derived series) ] 可解群定义于@series \ 导出序列的动机来源于可解群。为了验证一个群是否可解,我们当然希望让商群尽可能大,剩下的群尽可能小,很自然的我们应该考虑 $[G, G]$,以此类推即可。 #theorem[][ 群 $G$ 可解当且仅当 $G^n = {e}$ 对某个 
$n$ 成立 ] #proof[ <=: 注意到每次取交换化子得到之前群的正规子群,因此直接取导出序列即可\ =>: 设 ${1} lt.tri.eq H_1 lt.tri.eq ... lt.tri.eq H_k = G$是满足可解群要求的序列。 - 显然 $[H_i, H_i] subset H_(i+1)$,从而: - $G^1 = [G, G] = [H_k, H_k] subset H_(k-1)$ - $G^2 = [G^1, G^1] subset [H_(k-1), H_(k-1)] subset H_(k-2)$ - ... 依此类推,我们有 $G^n subset H_(k-n)$,从而 $G^n = {e}$,证毕 ] #definition[特征子群][ 称 $H <= G$ 是特征子群,如果 $forall phi in Aut(G): phi(H) = H$ ] 特征子群是 $G$ 的骨架,任何自同构都无法改变它。\ #proposition[][ - 若 $G$ 只有一个某阶的子群,显然它是特征子群。\ - 特征子群显然是正规子群(在共轭作用/内自同构下保持不变) ] #lemma[][ $G^i$ 是 $G$ 的特征子群 ] #proof[ - $ G_1 = [G, G] = generatedBy([x,y]\, x\, y in G)\ phi(G_1) = generatedBy([phi(x),phi(y)]\, x\, y in G) = [G, G] = G_1 $ - 类似归纳证明: $phi(G^i) = phi([G^(i-1), G^(i-1)]) = [phi(G^(i-1)), phi(G^(i-1))] = [G^(i-1), G^(i-1)] = G^i$ ] #proposition[][ - 若 $H <= G$,显然 $H^i <= G^i$,从而 $G$ 可解 $=> H$ 可解 - 若 $G -> K$ 存在商同态,则 $pi(G^(i)) = K^i$,从而 $G$ 可解 $=> K$ 可解 - 若 $N norS G$,且 $N, quotient(G, N)$ 都可解,则 $G$ 可解 ] == 幂零群 #definition[幂零群][ 对群 $G$,定义: $ G^0 = G, G^1 = [G, G], G^2 = [G, G^1], ..., G^(n+1) = [G, G^n] $ 显然它比导出序列的每项略大,称为 lower central series。\ 类似可以证明它的每项也是 $G$ 的特征子群。\ 称 $G$ 是幂零的,如果 $G^n = {e}$ 对某个 $n$ 成立 ] #example[][ 之前在@series 构造的上三角矩阵群是经典的可解但不幂零的群。\ ] #proposition[][ 幂零群一定是可解群 ] #definition[upper central series][ 对于群 $G$,考虑 $quotient(G, ZZ(G)) := overline(G)$,它产生了新的中心 $ZZ(quotient(G, ZZ(G)))$,称它在 $G$ 中的原像为 $G$ 的第二中心,记为 $ZZ_2(G)$,它是正规的。\ 以此类推,定义 $ZZ_n (G)$ 为 $ZZ(quotient(G, ZZ_(n-1)(G)))$ 在 $G$ 中的原像。\ 这个序列称为 upper central series ] #theorem[][ - 群 $G$ 幂零当且仅当 upper central series 终止于 $G$。更准确地说,$G^c = {1} <=> ZZ_c (G) = G$。 - 上式成立时,我们有: $ G^(c-1-i) <= ZZ_i (G) $ ] 注:这说明两个序列类似于对偶的关系。很容易产生 $ZZ_i (G) <= G^(c- i - 1)$的想法,但它是错误。 #proof[ 对 $c$ 做归纳,考虑 $overline(G) = quotient(G, ZZ (G))$\ 为了证明结论,我们需要: + $G^c = {1} <=> overline(G)^(c-1) = {1}$\ 注意到: $ G^(c) = [G, G^(c-1)] = {1} <=> G^(c-1) subset ZZ(G) $ + $ZZ_(c-1)(overline(G)) = overline(G) <=> ZZ_(c)(G) = G$\ 归纳证明: $ Inv(pi)(ZZ_i (overline(G))) = ZZ_(i+1)(G) $ 以上两个事实便可给出第一个结论 对于结论二,注意到: $ 
G^(c-i-1) <= Inv(pi)(overline(G^(c-i-1))) = Inv(pi)(ZZ_i(overline(G))) = ZZ_(i+1)(G) $
#theorem[][
设 $P$ 是 $p-$ 群
- $P$ 是幂零的。事实上,它的子群还是 $p-$ 群,且中心一直非平凡,因此结论显然
- 若 $H norS P, H != {1}$,则 $H sect Z(P) != {1}$\
- 若 $H lt.nequiv P$,则 $H lt.nequiv N_P (H)$
]
#proof[
结论二:用 $P$ 在 $H$ 上的共轭作用的类方程,单点轨道就是 $H sect Z(P)$,非单点轨道的元素个数都是 $p$ 的倍数
结论三: 用归纳法
+ 若 $Z(P) subset.not H$,结论是显然的。
+ 若不然,考虑 $quotient(H, Z(P))$,归纳结论应用于商群便可得到结论
]
#theorem[幂零群结构定理][
设 $|G| = p_1^(alpha_1) p_2^(alpha_2) ... p_n^(alpha_n)$,则以下事实等价:
+ $G$ 是幂零群
+ 若 $H lt.nequiv G$,则 $H lt.nequiv N_G (H)$
+ 它的 Sylow 子群全部正规。
+ $G$ 是各个 Sylow 子群的直积
]
#proof[
- $3 => 4$ 是容易的\
- $4 => 1$ 注意到所有 $p-$ 群都幂零\
- $2 => 3$ 回忆对 Sylow-p 子群 $P$,将有:
$ N_G (P) = N_G (N_G (P)) $
结合 2 立得 $N_G (P) = G$ 从而 $P$ 正规
- $1 => 2$ 类似之前 p-群幂零的证明,注意到幂零群的中心非平凡,仿照其证明即可
]
#theorem[Schur-Zassenhaus][
设 $G$ 是有限群,$N norS G$。若:
$ gcd(|N|, |quotient(G, N)|) = 1 $
则将有:
$ G tilde.eq N times.r quotient(G, N) $
]
]

== 总结

群论部分我们依次研究了:
+ 循环群,极其简单
+ 交换群,有有限交换群分类定理
+ 上升序列和下降序列可以给出群中结构简单的部分
+ 对于单群,以上分解都是无效的,它们是群中相对较为复杂而核心的部分
]
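作为对前文 Sylow 计数论证(如 $|G| = 132$ 的例子)的补充,下面给出一个简要的 Python 验证草稿:它只依赖 Sylow 第三定理的两个数值条件($n_p equiv 1 space (mod p)$ 且 $n_p | m$),枚举全部候选并检查元素计数矛盾。其中 `sylow_counts` 是这里为演示而引入的假设性辅助函数,并非任何现成库的 API。

```python
# 假设性示例:仅用 Sylow 第三定理的数值条件,验证 132 阶群不可能是单群。
# sylow_counts 是为演示引入的辅助函数,不来自任何现成库。

def sylow_counts(order, p):
    """返回所有满足 n | m 且 n ≡ 1 (mod p) 的候选 n_p,其中 m 是 order 去掉 p 幂后的部分。"""
    m = order
    while m % p == 0:
        m //= p
    return [n for n in range(1, m + 1) if m % n == 0 and n % p == 1]

order = 132  # 132 = 2^2 * 3 * 11

# 若 G 是单群,则每个 n_p > 1(n_p = 1 会给出正规的 Sylow 子群)
n3_options = [n for n in sylow_counts(order, 3) if n > 1]
n11_options = [n for n in sylow_counts(order, 11) if n > 1]

for n3 in n3_options:
    for n11 in n11_options:
        # 不同的 Sylow-11(或 Sylow-3)子群两两交于 {e},
        # 故 11 阶元恰有 10*n11 个,3 阶元至少有 2*n3 个,加上单位元:
        used = 1 + 10 * n11 + 2 * n3
        # 剩余位置必须容纳所有 Sylow-2 子群的 3 个非单位元;
        # 若剩余 <= 3,则至多装得下一个 4 阶子群,即 n_2 = 1,
        # 此时 Sylow-2 子群正规,与单群假设矛盾。
        assert order - used <= 3, (n3, n11)

print("所有候选 (n_3, n_11) 均导出矛盾,故 132 阶群必有正规的 Sylow 子群。")
```

这与正文的论证一致:无论 $n_3, n_11$ 取哪一组允许值,剩下的元素都容不下多于一个 Sylow-2 子群。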
https://github.com/Myriad-Dreamin/typst.ts
https://raw.githubusercontent.com/Myriad-Dreamin/typst.ts/main/fuzzers/corpora/math/long-clip-arrow.typ
typst
Apache License 2.0
// The project function defines how your document looks. // It takes your content and some metadata and formats it. // Go ahead and customize it to your liking! #let project(title: "", authors: (), body) = { // Set the document's basic properties. set document(author: authors, title: title) set page(numbering: "1", number-align: center) set text(font: "New Computer Modern", lang: "en") show math.equation: set text(weight: 400) // Title row. align(center)[ #block(text(weight: 700, 1.75em, title)) ] // Author information. pad( top: 0.5em, bottom: 0.5em, x: 2em, grid( columns: (1fr,) * calc.min(3, authors.len()), gutter: 1em, ..authors.map(author => align(center, text(font: "New Computer Modern Sans", author))), ), ) // Main body. set par(justify: true) body } #show: project.with( title: "Math Document", ) #let long-symbol(sym, factor) = { assert(type(sym) == "symbol", message: "Input needs to be a symbol") assert(type(factor) == "integer" or type(factor) == "float", message: "Scale factor must be a number") assert(factor >= 1, message: "Scale factor must be >= 1") factor = 5*factor - 4 let body = [#sym] style(styles => { let (body-w,body-h) = measure(body,styles).values() align(left)[ #box(width: body-w*2/5,height: body-h,clip: true)[ #align(left)[ #body ] ] #h(0cm) #box(height: body-h, width: body-w*1/5*factor)[ #scale(x: factor*100%,origin:left)[ #box(height: body-h, width: body-w*1/5,clip:true)[ #align(center)[ #body ] ] ] ] #h(0cm) #box(width: body-w*2/5,clip: true)[ #align(right)[ #body ] ] ] }) } $ f: X attach(limits(#long-symbol(sym.arrow.r.hook,8)), t: "injective map") Y\ #long-symbol(sym.arrow.l.r.double.long,10) \ #long-symbol(sym.arrow.l.filled,5) $
https://github.com/kevung/bkgm-typst
https://raw.githubusercontent.com/kevung/bkgm-typst/main/a.typ
typst
MIT License
#let conf = ( board : ( direction: right, thickness: 3pt, outerColor: black, innerColor: yellow, ), cube : ( size: 32pt, color: black, textSize: 24pt, textColor: white, offset: 1.5pt, ), checker : ( size: 30pt, color1: white, color2: black ), point: ( textSize: 19pt, color1: blue, color2: red ), ) #let _drawCube(value, textSize: 24pt) = square( size: conf.cube.size, fill: conf.cube.color, radius: 5pt, [ #set align(center + horizon) #set text(size: conf.cube.textSize , fill: conf.cube.textColor) #value ]) #let cube = ( direction: conf.board.direction.inv() ) #let board = ( width: 13*conf.checker.size, height: 11*conf.checker.size, ) #let pointNumber(arg, vertAlign) = square( size: conf.checker.size, [ #set align(center + vertAlign) #set text(size: conf.point.textSize) #arg ] ) #let quadrantNumber(num) = { if num == 1 [ #if conf.board.direction == right [ #stack( dir: rtl, pointNumber(1, top), pointNumber(2, top), pointNumber(3, top), pointNumber(4, top), pointNumber(5, top), pointNumber(6, top), ) ] else [ #stack( dir: ltr, pointNumber(1, top), pointNumber(2, top), pointNumber(3, top), pointNumber(4, top), pointNumber(5, top), pointNumber(6, top), ) ] ] else if num == 2 [ #if conf.board.direction == right [ #stack( dir: rtl, pointNumber(7, top), pointNumber(8, top), pointNumber(9, top), pointNumber(10, top), pointNumber(11, top), pointNumber(12, top), ) ] else [ #stack( dir: ltr, pointNumber(7, top), pointNumber(8, top), pointNumber(9, top), pointNumber(10, top), pointNumber(11, top), pointNumber(12, top), ) ] ] else if num == 3 [ #if conf.board.direction == right [ #stack( dir: ltr, pointNumber(13, bottom), pointNumber(14, bottom), pointNumber(15, bottom), pointNumber(16, bottom), pointNumber(17, bottom), pointNumber(18, bottom), ) ] else [ #stack( dir: rtl, pointNumber(13, bottom), pointNumber(14, bottom), pointNumber(15, bottom), pointNumber(16, bottom), pointNumber(17, bottom), pointNumber(18, bottom), ) ] ] else [ #if conf.board.direction == right [ #stack( 
dir: ltr, pointNumber(19, bottom), pointNumber(20, bottom), pointNumber(21, bottom), pointNumber(22, bottom), pointNumber(23, bottom), pointNumber(24, bottom), ) ] else [ #stack( dir: rtl, pointNumber(19, bottom), pointNumber(20, bottom), pointNumber(21, bottom), pointNumber(22, bottom), pointNumber(23, bottom), pointNumber(24, bottom), ) ] ] } #let _drawBoard = rect( width: board.width + 2*conf.board.thickness, height: board.height +2*conf.board.thickness, fill: conf.board.outerColor, [ #let bar = rect( width: 1*conf.checker.size, height: 11*conf.checker.size, ) #let triangle = ( id1: polygon( fill: conf.point.color1.lighten(80%), stroke: conf.point.color1, (0%, 0pt), (conf.checker.size, 0pt), (0.5*conf.checker.size, 5*conf.checker.size), ), id2: polygon( fill: conf.point.color2.lighten(80%), stroke: conf.point.color2, (0%, 0pt), (conf.checker.size, 0pt), (0.5*conf.checker.size, 5*conf.checker.size), ) ) #let quadrant = stack( dir: ltr, triangle.id1, triangle.id2, triangle.id1, triangle.id2, triangle.id1, triangle.id2, ) #let checker = ( id1: circle( radius: 0.5 * conf.checker.size, fill: conf.checker.color1, stroke: black), id2: circle( radius: 0.5 * conf.checker.size, fill: conf.checker.color1, stroke: black) ) #place(horizon + center, rect( width: board.width, height: board.height, fill: conf.board.innerColor, inset: 0pt, [ #place(top + center, bar) #if conf.board.direction == right [ #place(bottom + cube.direction, dx: -conf.cube.size -conf.board.thickness -conf.cube.offset, _drawCube(2)) ] else [ #place(bottom + cube.direction, dx: conf.cube.size +conf.board.thickness +conf.cube.offset, _drawCube(2)) ] #place(top + left, quadrant) #place(top + right, quadrant) #place(bottom + left, rotate(180deg)[#quadrant]) #place(bottom + right, rotate(180deg)[#quadrant]) // quadrant number #if conf.board.direction == right [ #place(top + left, dy: -conf.checker.size -conf.board.thickness, quadrantNumber(3)) #place(top + right, dy: -conf.checker.size -conf.board.thickness, 
quadrantNumber(4)) #place(bottom + left, dy: conf.checker.size +conf.board.thickness, quadrantNumber(2)) #place(bottom + right, dy: conf.checker.size +conf.board.thickness, quadrantNumber(1)) ] else if conf.board.direction == left [ #place(top + right, dy: -conf.checker.size -conf.board.thickness, quadrantNumber(3)) #place(top + left, dy: -conf.checker.size -conf.board.thickness, quadrantNumber(4)) #place(bottom + right, dy: conf.checker.size +conf.board.thickness, quadrantNumber(2)) #place(bottom + left, dy: conf.checker.size +conf.board.thickness, quadrantNumber(1)) ] //white checkers #place( bottom + left, dx: (6 -1) * conf.checker.size, dy: 0 * conf.checker.size, stack( dir:ttb, checker.id2, checker.id2, ) ) #place( top + right, dx: (1 -2) * conf.checker.size, dy: 0 * conf.checker.size, stack( dir:ttb, checker.id2, checker.id2, ) ) #place( top + right, dx: (1 -1) * conf.checker.size, dy: 0 * conf.checker.size, stack( dir:ttb, checker.id2, checker.id2, checker.id2, checker.id2, circle(radius: 0.5 * conf.checker.size, fill: white, stroke: black, [ #set align(center + horizon) #set text(size: 19pt) 7 ]), ) ) #place( top + right, dx: 1.1 * conf.checker.size, dy: 0 * conf.checker.size, stack( dir:ttb, spacing: 2.3pt, rect(width: conf.checker.size, height: 0.13 * conf.checker.size, fill: white, stroke: black), rect(width: conf.checker.size, height: 0.13 * conf.checker.size, fill: white, stroke: black), rect(width: conf.checker.size, height: 0.13 * conf.checker.size, fill: white, stroke: black), rect(width: conf.checker.size, height: 0.13 * conf.checker.size, fill: white, stroke: black), rect(width: conf.checker.size, height: 0.13 * conf.checker.size, fill: white, stroke: black), ) ) // black checkers #place( center + horizon, dx: 0 * conf.checker.size, dy: -1 * conf.checker.size, stack( dir:ttb, circle(radius: 0.5 * conf.checker.size, fill: black, stroke: black), ) ) #place( top + right, dx: (1 -5) * conf.checker.size, dy: 0 * conf.checker.size, stack( dir:ttb, 
circle(radius: 0.5 * conf.checker.size, fill: black, stroke: black), ) ) #place( bottom + left, dx: (1 -1) * conf.checker.size, dy: 0 * conf.checker.size, stack( dir:ttb, circle(radius: 0.5 * conf.checker.size, fill: black, stroke: black), ) ) #place( bottom + left, dx: (2 -1) * conf.checker.size, dy: 0 * conf.checker.size, stack( dir:ttb, circle(radius: 0.5 * conf.checker.size, fill: black, stroke: black), ) ) #place( bottom + right, dx: (1 -6) * conf.checker.size, dy: 0 * conf.checker.size, stack( dir:ttb, circle(radius: 0.5 * conf.checker.size, fill: black, stroke: black), circle(radius: 0.5 * conf.checker.size, fill: black, stroke: black), ) ) #place( bottom + right, dx: (1 -5) * conf.checker.size, dy: 0 * conf.checker.size, stack( dir:ttb, circle(radius: 0.5 * conf.checker.size, fill: black, stroke: black), circle(radius: 0.5 * conf.checker.size, fill: black, stroke: black), ) ) #place( bottom + right, dx: (1 -4) * conf.checker.size, dy: 0 * conf.checker.size, stack( dir:ttb, circle(radius: 0.5 * conf.checker.size, fill: black, stroke: black), circle(radius: 0.5 * conf.checker.size, fill: black, stroke: black), ) ) #place( bottom + right, dx: (1 -3) * conf.checker.size, dy: 0 * conf.checker.size, stack( dir:ttb, circle(radius: 0.5 * conf.checker.size, fill: black, stroke: black), circle(radius: 0.5 * conf.checker.size, fill: black, stroke: black), ) ) #place( bottom + right, dx: (1 -2) * conf.checker.size, dy: 0 * conf.checker.size, stack( dir:ttb, circle(radius: 0.5 * conf.checker.size, fill: black, stroke: black), circle(radius: 0.5 * conf.checker.size, fill: black, stroke: black), ) ) ] ) ) ] ) #let infoDisplay(msg, alignDir, textSize: 12pt) = rect( width: (board.width +2*conf.checker.size +2*conf.board.thickness)/2, height: conf.checker.size, [ #set text(size: textSize) #set align(alignDir) #msg ] ) #set align(center) #grid( rows: (conf.checker.size, auto, conf.checker.size), gutter: 3pt, grid( columns: (auto, auto), gutter: 3pt, infoDisplay("Unlimited 
match", left + horizon), infoDisplay("(5 checkers off) pip: 29", right + horizon), ), rect( width: board.width +2*conf.checker.size +2*conf.board.thickness, height: board.height +2*conf.checker.size +2*conf.board.thickness, inset: 0pt, [ #place(horizon + center, _drawBoard) ] ), grid( columns: (auto, auto), gutter: 3pt, infoDisplay("", left + horizon), infoDisplay("pip: 114", right + horizon), ), ) // end grid
https://github.com/typst-community/mantodea
https://raw.githubusercontent.com/typst-community/mantodea/main/src/component/table-of-contents.typ
typst
MIT License
#import "/src/_pkg.typ" #import "/src/_valid.typ" #import "/src/theme.typ" as _theme #let _columns = columns #let _fill(arr, len, new) = if arr.len() >= len { arr } else { arr + ((new,) * (len - arr.len())) } /// Generate the default table of contents. /// /// - title (str, content, none): The title to use, if this is a `heading` /// itself it will not be passed to `outline`. /// - target (selector, function): The target to show, this is primarily used /// to restrict the search space of the default selector. If something other /// than a heading selector is passed, then this may not work. /// - columns (int): The amount of columns to use for the outline. /// - theme (theme): The theme to use for this table of contents. /// -> content #let make-table-of-contents( title: heading(outlined: false, numbering: none)[Table of Contents], target: heading.where(outlined: true), columns: 1, theme: _theme.default, _validate: true, ) = { if _validate { import _valid as z _ = z.parse(title, z.content(), scope: ("title",)) _ = z.parse(columns, z.integer(), scope: ("columns",)) _ = z.parse(theme, _theme.schema(), scope: ("theme",)) _pkg.t4t.assert.any-type(selector, function, target) } // NOTE: unsure if this looks good, this also doens't work in CI for now // set text(font: theme.fonts.headings) set block(spacing: 0.65em) set _columns(columns) if title != none { title } let _numbering(part) = { let nums = counter(heading).at(part.location()) if part.level == 1 { strong(numbering("I", nums.first())) } else { numbering("1.1", ..nums.slice(1)) } } let _page(part) = { let loc = part.location() let num = _pkg.t4t.def.if-none("1", loc.page-numbering()) numbering(num, counter(page).at(loc).first()) } context { let indent-stack = () let parts = query(target) let page-max = 1em for part in parts { if part.level >= indent-stack.len() { indent-stack = _fill(indent-stack, part.level, 0em) } let level-max = indent-stack.at(part.level - 1) let new = measure(_numbering(part)).width if 
level-max.to-absolute() <= new { indent-stack.at(part.level - 1) = new } let new = measure(_page(part)).width if page-max.to-absolute() <= new { page-max = new } } let front(part) = { let loc = part.location() let body = link(loc, { let max = indent-stack.at(part.level - 1) box(width: max, align(right, _numbering(part))) h(0.6em) part.body }) if part.level == 1 { strong(body) } else { let indent = indent-stack.slice(0, part.level - 1) h(0.6em) indent.map(h).join(h(0.6em)) body } } for part in parts { if part.level == 1 { v(0.85em, weak: true) front(part) } else { let loc = part.location() linebreak() front(part) [ ] box(width: 1fr, repeat(" . ")) [ ] link(loc, box(width: page-max, align(right, _page(part)))) } } } }
https://github.com/sitandr/typst-examples-book
https://raw.githubusercontent.com/sitandr/typst-examples-book/main/src/basics/extra.md
markdown
MIT License
# Extra

## Bibliography

Typst supports bibliographies via a BibLaTeX `.bib` file or its own Hayagriva `.yml` format. BibLaTeX is more widely supported, but Hayagriva is easier to work with.

> Link to the Hayagriva [documentation](https://github.com/typst/hayagriva/blob/main/docs/file-format.md) and some [examples](https://github.com/typst/hayagriva/blob/main/tests/data/basic.yml).

### Citation Style

The citation style can be customized via CSL, the Citation Style Language, with more than 10 000 styles available online. See the [official repository](https://github.com/citation-style-language/styles).
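A minimal sketch of both steps (the file name and entry key here are hypothetical):

```typ
// Cite an entry from the bibliography file by its key …
Typst bibliographies are driven by Hayagriva @my-entry.

// … then render the reference list. `style` accepts built-in style names
// such as "ieee" or "apa", or a path to a CSL file.
#bibliography("works.yml", style: "ieee")
```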
https://github.com/7sDream/fonts-and-layout-zhCN
https://raw.githubusercontent.com/7sDream/fonts-and-layout-zhCN/master/chapters/01-history/postscript.typ
typst
Other
#import "/template/template.typ": web-page-template #import "/template/components.typ": note #import "/lib/glossary.typ": tr #show: web-page-template // ## PostScript == PostScript // Also in 1978, <NAME> was working at Xerox PARC in Palo Alto. His research project, on which he worked together with <NAME>, was to develop a way of driving laser printers, describing way that the graphics and text should be laid out on a page and how they should look. Prior to this point, every printer manufacturer had their own graphics format; the idea behind the Interpress project was to create a language which could be shared between different printers. However, other manufacturers were not interested in the Interpress idea, and it also had a "careful silence on the issue of fonts",[^7] so the project was shelved. 同样是在1978年,<NAME>在施乐公司的帕洛阿尔托研究中心工作。他和同事<NAME>负责研究一种描述图形和文字在页面上的排布和呈现的方法,以及由此驱动激光打印机的技术。在这之前,每家打印机制造商都有自己的图形格式,因而这个被称为Interpress的项目背后的理念就是创造一种能在不同打印机之间共享的语言。然而,其他的打印机制造商对Interpress并不感兴趣,而且因为它“对字体保持了谨慎的沉默”@Reid.PostScriptInterpress.1985,最终遭到了搁置。 // Still, it was a good idea, and Geschke and Warnock left Xerox in 1982 to found their own company to further develop the ideas of Interpress into a system called PostScript. The company was named after a small stream at the back of Warnock's house: Adobe. Adobe PostScript debuted in 1984, with the first PostScript printer appearing in 1985. 不过这仍然是一个极好的想法。随着Geschke和Warnock在1982年离开施乐并创办自己的公司,Interpress进一步发展成为了PostScript。这家公司由 Warnock家后面的一条小溪命名,它就是Adobe。Adobe PostScript在1984年首次亮相,而第一台PostScript打印机则在1985年问世。 // The first version of PostScript was not *at all* silent on the issue of fonts, and contained a number of innovations which have shaped font technology to this day. The first is the development of *hinting*. Although vector fonts can be scaled up and down in ways that bitmap fonts cannot, we mentioned that at some point, they need to be rasterized. 
This is evidently the case for on-screen view, where the screen is made up of discrete phosphor-coated glass, liquid crystal or LED cells; but it is also the case when the designs are printed: digital printers also work on the concept of discrete "dots" of ink. PostScript的第一个版本对字体不再沉默,并且包含了一系列时至今日依然深刻影响字体技术的创新。其一就是*#tr[hinting]*。正如我们之前所提及的,相比#tr[bitmap]字体,尽管矢量字体可以任意放大缩小,但它们依然需要被#tr[rasterization]。显然这是出于屏幕显示的考量,毕竟屏幕是由离散的荧光玻璃、液晶或者LED单元组成的;但它也适用于印刷场景:数字打印机的工作原理也是基于离散“墨点”的。 // One way to rasterize would be to simply say that a pixel is "on" if more than a certain proportion - 50%, for example - of its surface is covered by the outline. Unfortunately, when the pixels are quite large compared to the outline (when the font is being displayed at small sizes), this strategy leads to quite unattractive results. 一种最简单的#tr[rasterization]可以描述为,如果一个像素的一部分(比如50%)落在了#tr[outline]包围的区域内,它就被“开启”。不幸的是,当像素相对于#tr[outline]比较大的时候(也就是当字体以较小尺寸显示时),这一策略将会导致糟糕的结果。 #figure(caption: [ // The perils of simplistic rasterization 过于简单的#tr[rasterization]带来的问题。 ])[#image("bad-rasterization.png")] <figure:bad-rasterization> // We see that key parts of serifs and strokes are missing, and that stem widths have become uneven. Rasterization can also lead to points which should be aligned (such as the tops of the downstrokes of the m) becoming misaligned, although this has not happened in this example. 从@figure:bad-rasterization 中可以看出,#tr[serif]和笔画的关键部分都有缺失,#tr[stem]的粗细也不均匀。#tr[rasterization]还会使本应对齐的点(比如m中后两笔的顶端)不再对齐,虽然图中并没有反映出来。 // *Hinting* is a way of giving information to the rasterization process to help communicate what is important about the design: which stems need to be preserved and have the same width as others, where the baselines, x-heights, cap heights and other important places are that pixels should align in nice straight lines. PostScript's hinting improved the rasterization of fonts at small point sizes. 
*#tr[hinting]*会告诉#tr[rasterization]程序,对于设计师而言哪些因素是重要的:哪些#tr[stem]需要保留;哪些笔画的宽度要一致;#tr[baseline]、#tr[x-height]、#tr[cap height]和其他重要的位置信息,像素又该怎样与之对齐。通过PostScript的#tr[hinting],小字号下的字体#tr[rasterization]效果得以大幅提升。 // The second innovation was the way that PostScript implemented curves. Whereas IKARUS used sections of circles joined together[^8] and METAFONT used cleverly computed cubic splines,[^9] PostScript represented shapes in terms of *Bézier curves*. 第二项创新则是PostScript中曲线的描述方式。与IKARUS用相连的圆弧#footnote[技术上被称为<EMAIL>。]和METAFONT用精心计算的三次样条曲线#footnote[我在这里略做了简化。METAFONT84和PostScript一样使用了三次贝塞尔曲线,但它的控制点是隐式的,由John Hobby算法自动生成,而PostScript则是显式指定。我不了解METAFONT79中的情况。]都不同,PostScript用*贝塞尔曲线*来描述形状。 // ### The Bézier Curve === 贝塞尔曲线 // <NAME> was an engineer at Renault who specializing in the design of tools; initially these were manual tools, but in the mid 1950s he became interested in automated tools for precise drilling and milling. As manager of the technical development divison of Renault, he began work on a CADCAM (computed-aided design and manufacturing system) called UNISURF. UNISURF needed a way to allow designers to draw and manipulate curves and surfaces on a computer, both for technical drawing and design and for conversion into programs for computer-controlled machine tools. <NAME>是雷诺汽车的一位工程师,他专精于工具的设计和制造。起初都是一些手工工具,但在1950年代中期,他逐渐对能够精密钻铣的自动化工具产生兴趣。作为雷诺技术开发部主任,他开始着手研发一套CADCAM(计算机辅助设计与制造)软件,称为UNISURF。在UNISURF中,既要让设计师能在计算机上绘制和操作曲线,同时又要保证能把曲线转换为用于控制机械工具的计算机程序。 // To do this, Bézier adapted an algorithm invented by <NAME>, who was doing the same kind of things as Bézier was but over at the *other* French car company, Citroën. De Casteljau's algorithm takes a mathematical description of a curve and - essentially - rasterizes it: turning the *equation* which describes the curve into a *set of points* which can be joined together to draw the curve. 
Bézier's contribution - as well as popularising the de Casteljau algorithm - was to make it suitable for designers to manipulate by introducing the notion of *control points*. Bézier curves have a start point and an end point and then one or more *offcurve points* which "pull" the curve towards them. 为此,Bézier改良了一套由<NAME>发明的算法,后者和Bézier从事类似的工作,不过是在*另一家*法国汽车公司雪铁龙。de Casteljau的算法给出了曲线的一种数学描述,或者说是#tr[rasterization]的手段,即将描述曲线的*方程*转化为*一组点*,这些点可以连接在一起构成曲线。Bézier的贡献,除了推广de Casteljau的算法,还有引入了*控制点*,使得设计师能够上手操作。贝塞尔曲线包含一个起始点和一个终结点,以及一个或多个可以“拉动”曲线的*线外点*。 // Here is a Bézier curve of order three (start and end, and two off-curve points). It also known as a cubic Bézier curve because it is described using an equation which is a cubic polynomial. We'll talk about this in more detail in the next chapter. @figure:cubic-bezier 是一条三阶贝塞尔曲线(一个起始点、一个终结点和两个线外点)。它也被称为三次贝塞尔曲线,因为也可以使用三次多项式来描述它。我们将在下一章探讨更多的细节。 #figure(caption: [ // A cubic Bézier curve 一条三次贝塞尔曲线。 ])[#include "decasteljau.typ"] <figure:cubic-bezier> // I have also shown three points generated by applying de Casteljau's algorithm in three places, and an approximation of the curve generated by joining the de Casteljau points with straight lines - in a sense, doing something similar to what the Linotron 202 did with the outlines of its font data. 图中我还标出了三个根据de Casteljau算法生成出来的点以及连接它们的直线,这是原曲线的一个近似——某种意义上,这和Linotron 202对字体#tr[outline]数据做的处理是类似的。 // Bézier curves were found to give the designer of curves - such as those used for Renault's car bodies - an easy and intuitive way to express the shapes they wanted. By dragging around control points, designers can "sculpt" a curve. The cubic Bézier curve, with its start and end point and two off-curve control points, was adopted into PostScript as a command for graphics drawing, and from there, it made its way into the design of digital fonts. 事实证明,贝塞尔曲线为设计师提供了一种方便且直观的方式来表达脑海中的形象——比如雷诺汽车的外轮廓。通过拖拽控制点,设计师可以自由地调整曲线。带有起始点、终结点和两个线外点的三次贝塞尔曲线被吸收进了PostScript,并作为一条图形指令。从此,它也就走进了数字字体设计的世界。
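The de Casteljau evaluation described above — repeated linear interpolation between neighboring control points until a single point remains — can be sketched in a few lines (control points chosen arbitrarily for illustration):

```python
def de_casteljau(points, t):
    """Evaluate a Bézier curve of any order at parameter t in [0, 1]
    by repeatedly interpolating between neighboring control points."""
    pts = [tuple(float(c) for c in p) for p in points]
    while len(pts) > 1:
        pts = [
            ((1 - t) * x0 + t * x1, (1 - t) * y0 + t * y1)
            for (x0, y0), (x1, y1) in zip(pts, pts[1:])
        ]
    return pts[0]

# A cubic Bézier: start point, two off-curve control points, end point.
control = [(0, 0), (1, 2), (3, 2), (4, 0)]
print(de_casteljau(control, 0.5))  # → (2.0, 1.5)
```

Sampling `t` at many values and joining the resulting points with straight lines gives exactly the kind of polyline approximation shown in the figure.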
https://github.com/alberto-lazari/cv
https://raw.githubusercontent.com/alberto-lazari/cv/main/alberto-lazari-cv.typ
typst
#import "common.typ": * // Treat headings as sections #show heading.where(level: 1): it => section(it.body.text) #show: cv #modules( "education", "experience", "volunteering", "certifications", "honors", "skills", )
https://github.com/platformer/typst-homework-template
https://raw.githubusercontent.com/platformer/typst-homework-template/main/hw-example.typ
typst
MIT License
#import "hw-template.typ": hw, problem // using hw template #show: doc => hw( doc, number: 1, class: "Typst 101 Fall 2023", authors: ( "<NAME> (id 00001)", ), date: "August 30, 2023" ) #problem()[ #lorem(100) ] // "break" starts the problem on the next page #problem(break-strategy: "break")[ #lorem(100) ] // "no-break" starts the problem on the current page #problem(break-strategy: "no-break")[ #lorem(400) ] // "fit" will only start the problem on the next page // if it doesn't fit on the current page #problem(break-strategy: "fit")[ #set enum(numbering: "a)") + #lorem(10) + #lorem(10) + #lorem(10) ] #problem(break-strategy: "fit")[ #lorem(50) ] #problem(break-strategy: "fit")[ #lorem(200) ] #problem(title: "Bonus", break-strategy: "break")[ #lorem(300) ]
https://github.com/goshakowska/Typstdiff
https://raw.githubusercontent.com/goshakowska/Typstdiff/main/tests/test_complex/ordered_list/ordered_list_inserted_result.typ
typst
+ The climate#underline[ ];#underline[intensive] - Precipitation#underline[ ];#underline[important] - Temperature#underline[ ];#underline[scales] + degree - hot - cold - warm - #underline[really];#underline[ ];#underline[hot] + sun + #underline[cloud] + #underline[wind] + The geology + #underline[something];#underline[ ];#underline[new]
https://github.com/typst/packages
https://raw.githubusercontent.com/typst/packages/main/packages/preview/unichar/0.1.0/ucd/block-12480.typ
typst
Apache License 2.0
#let data = ( ("CUNEIFORM SIGN AB TIMES NUN TENU", "Lo", 0), ("CUNEIFORM SIGN AB TIMES SHU2", "Lo", 0), ("CUNEIFORM SIGN AD TIMES ESH2", "Lo", 0), ("CUNEIFORM SIGN BAD TIMES DISH TENU", "Lo", 0), ("CUNEIFORM SIGN BAHAR2 TIMES AB2", "Lo", 0), ("CUNEIFORM SIGN BAHAR2 TIMES NI", "Lo", 0), ("CUNEIFORM SIGN BAHAR2 TIMES ZA", "Lo", 0), ("CUNEIFORM SIGN BU OVER BU TIMES NA2", "Lo", 0), ("CUNEIFORM SIGN DA TIMES TAK4", "Lo", 0), ("CUNEIFORM SIGN DAG TIMES KUR", "Lo", 0), ("CUNEIFORM SIGN DIM TIMES IGI", "Lo", 0), ("CUNEIFORM SIGN DIM TIMES U U U", "Lo", 0), ("CUNEIFORM SIGN DIM2 TIMES UD", "Lo", 0), ("CUNEIFORM SIGN DUG TIMES ANSHE", "Lo", 0), ("CUNEIFORM SIGN DUG TIMES ASH", "Lo", 0), ("CUNEIFORM SIGN DUG TIMES ASH AT LEFT", "Lo", 0), ("CUNEIFORM SIGN DUG TIMES DIN", "Lo", 0), ("CUNEIFORM SIGN DUG TIMES DUN", "Lo", 0), ("CUNEIFORM SIGN DUG TIMES ERIN2", "Lo", 0), ("CUNEIFORM SIGN DUG TIMES GA", "Lo", 0), ("CUNEIFORM SIGN DUG TIMES GI", "Lo", 0), ("CUNEIFORM SIGN DUG TIMES GIR2 GUNU", "Lo", 0), ("CUNEIFORM SIGN DUG TIMES GISH", "Lo", 0), ("CUNEIFORM SIGN DUG TIMES HA", "Lo", 0), ("CUNEIFORM SIGN DUG TIMES HI", "Lo", 0), ("CUNEIFORM SIGN DUG TIMES IGI GUNU", "Lo", 0), ("CUNEIFORM SIGN DUG TIMES KASKAL", "Lo", 0), ("CUNEIFORM SIGN DUG TIMES KUR", "Lo", 0), ("CUNEIFORM SIGN DUG TIMES KUSHU2", "Lo", 0), ("CUNEIFORM SIGN DUG TIMES KUSHU2 PLUS KASKAL", "Lo", 0), ("CUNEIFORM SIGN DUG TIMES LAK-020", "Lo", 0), ("CUNEIFORM SIGN DUG TIMES LAM", "Lo", 0), ("CUNEIFORM SIGN DUG TIMES LAM TIMES KUR", "Lo", 0), ("CUNEIFORM SIGN DUG TIMES LUH PLUS GISH", "Lo", 0), ("CUNEIFORM SIGN DUG TIMES MASH", "Lo", 0), ("CUNEIFORM SIGN DUG TIMES MES", "Lo", 0), ("CUNEIFORM SIGN DUG TIMES MI", "Lo", 0), ("CUNEIFORM SIGN DUG TIMES NI", "Lo", 0), ("CUNEIFORM SIGN DUG TIMES PI", "Lo", 0), ("CUNEIFORM SIGN DUG TIMES SHE", "Lo", 0), ("CUNEIFORM SIGN DUG TIMES SI GUNU", "Lo", 0), ("CUNEIFORM SIGN E2 TIMES KUR", "Lo", 0), ("CUNEIFORM SIGN E2 TIMES PAP", "Lo", 0), ("CUNEIFORM SIGN ERIN2 X", "Lo", 0), 
("CUNEIFORM SIGN ESH2 CROSSING ESH2", "Lo", 0), ("CUNEIFORM SIGN EZEN SHESHIG TIMES ASH", "Lo", 0), ("CUNEIFORM SIGN EZEN SHESHIG TIMES HI", "Lo", 0), ("CUNEIFORM SIGN EZEN SHESHIG TIMES IGI GUNU", "Lo", 0), ("CUNEIFORM SIGN EZEN SHESHIG TIMES LA", "Lo", 0), ("CUNEIFORM SIGN EZEN SHESHIG TIMES LAL", "Lo", 0), ("CUNEIFORM SIGN EZEN SHESHIG TIMES ME", "Lo", 0), ("CUNEIFORM SIGN EZEN SHESHIG TIMES MES", "Lo", 0), ("CUNEIFORM SIGN EZEN SHESHIG TIMES SU", "Lo", 0), ("CUNEIFORM SIGN EZEN TIMES SU", "Lo", 0), ("CUNEIFORM SIGN GA2 TIMES BAHAR2", "Lo", 0), ("CUNEIFORM SIGN GA2 TIMES DIM GUNU", "Lo", 0), ("CUNEIFORM SIGN GA2 TIMES DUG TIMES IGI GUNU", "Lo", 0), ("CUNEIFORM SIGN GA2 TIMES DUG TIMES KASKAL", "Lo", 0), ("CUNEIFORM SIGN GA2 TIMES EREN", "Lo", 0), ("CUNEIFORM SIGN GA2 TIMES GA", "Lo", 0), ("CUNEIFORM SIGN GA2 TIMES GAR PLUS DI", "Lo", 0), ("CUNEIFORM SIGN GA2 TIMES GAR PLUS NE", "Lo", 0), ("CUNEIFORM SIGN GA2 TIMES HA PLUS A", "Lo", 0), ("CUNEIFORM SIGN GA2 TIMES KUSHU2 PLUS KASKAL", "Lo", 0), ("CUNEIFORM SIGN GA2 TIMES LAM", "Lo", 0), ("CUNEIFORM SIGN GA2 TIMES LAM TIMES KUR", "Lo", 0), ("CUNEIFORM SIGN GA2 TIMES LUH", "Lo", 0), ("CUNEIFORM SIGN GA2 TIMES MUSH", "Lo", 0), ("CUNEIFORM SIGN GA2 TIMES NE", "Lo", 0), ("CUNEIFORM SIGN GA2 TIMES NE PLUS E2", "Lo", 0), ("CUNEIFORM SIGN GA2 TIMES NE PLUS GI", "Lo", 0), ("CUNEIFORM SIGN GA2 TIMES SHIM", "Lo", 0), ("CUNEIFORM SIGN GA2 TIMES ZIZ2", "Lo", 0), ("CUNEIFORM SIGN GABA ROTATED NINETY DEGREES", "Lo", 0), ("CUNEIFORM SIGN GESHTIN TIMES U", "Lo", 0), ("CUNEIFORM SIGN GISH TIMES GISH CROSSING GISH", "Lo", 0), ("CUNEIFORM SIGN GU2 TIMES IGI GUNU", "Lo", 0), ("CUNEIFORM SIGN GUD PLUS GISH TIMES TAK4", "Lo", 0), ("CUNEIFORM SIGN HA TENU GUNU", "Lo", 0), ("CUNEIFORM SIGN HI TIMES ASH OVER HI TIMES ASH", "Lo", 0), ("CUNEIFORM SIGN KA TIMES BU", "Lo", 0), ("CUNEIFORM SIGN KA TIMES KA", "Lo", 0), ("CUNEIFORM SIGN KA TIMES U U U", "Lo", 0), ("CUNEIFORM SIGN KA TIMES UR", "Lo", 0), ("CUNEIFORM SIGN LAGAB TIMES ZU OVER ZU", 
"Lo", 0), ("CUNEIFORM SIGN LAK-003", "Lo", 0), ("CUNEIFORM SIGN LAK-021", "Lo", 0), ("CUNEIFORM SIGN LAK-025", "Lo", 0), ("CUNEIFORM SIGN LAK-030", "Lo", 0), ("CUNEIFORM SIGN LAK-050", "Lo", 0), ("CUNEIFORM SIGN LAK-051", "Lo", 0), ("CUNEIFORM SIGN LAK-062", "Lo", 0), ("CUNEIFORM SIGN LAK-079 OVER LAK-079 GUNU", "Lo", 0), ("CUNEIFORM SIGN LAK-080", "Lo", 0), ("CUNEIFORM SIGN LAK-081 OVER LAK-081", "Lo", 0), ("CUNEIFORM SIGN LAK-092", "Lo", 0), ("CUNEIFORM SIGN LAK-130", "Lo", 0), ("CUNEIFORM SIGN LAK-142", "Lo", 0), ("CUNEIFORM SIGN LAK-210", "Lo", 0), ("CUNEIFORM SIGN LAK-219", "Lo", 0), ("CUNEIFORM SIGN LAK-220", "Lo", 0), ("CUNEIFORM SIGN LAK-225", "Lo", 0), ("CUNEIFORM SIGN LAK-228", "Lo", 0), ("CUNEIFORM SIGN LAK-238", "Lo", 0), ("CUNEIFORM SIGN LAK-265", "Lo", 0), ("CUNEIFORM SIGN LAK-266", "Lo", 0), ("CUNEIFORM SIGN LAK-343", "Lo", 0), ("CUNEIFORM SIGN LAK-347", "Lo", 0), ("CUNEIFORM SIGN LAK-348", "Lo", 0), ("CUNEIFORM SIGN LAK-383", "Lo", 0), ("CUNEIFORM SIGN LAK-384", "Lo", 0), ("CUNEIFORM SIGN LAK-390", "Lo", 0), ("CUNEIFORM SIGN LAK-441", "Lo", 0), ("CUNEIFORM SIGN LAK-449", "Lo", 0), ("CUNEIFORM SIGN LAK-449 TIMES GU", "Lo", 0), ("CUNEIFORM SIGN LAK-449 TIMES IGI", "Lo", 0), ("CUNEIFORM SIGN LAK-449 TIMES PAP PLUS LU3", "Lo", 0), ("CUNEIFORM SIGN LAK-449 TIMES PAP PLUS PAP PLUS LU3", "Lo", 0), ("CUNEIFORM SIGN LAK-449 TIMES U2 PLUS BA", "Lo", 0), ("CUNEIFORM SIGN LAK-450", "Lo", 0), ("CUNEIFORM SIGN LAK-457", "Lo", 0), ("CUNEIFORM SIGN LAK-470", "Lo", 0), ("CUNEIFORM SIGN LAK-483", "Lo", 0), ("CUNEIFORM SIGN LAK-490", "Lo", 0), ("CUNEIFORM SIGN LAK-492", "Lo", 0), ("CUNEIFORM SIGN LAK-493", "Lo", 0), ("CUNEIFORM SIGN LAK-495", "Lo", 0), ("CUNEIFORM SIGN LAK-550", "Lo", 0), ("CUNEIFORM SIGN LAK-608", "Lo", 0), ("CUNEIFORM SIGN LAK-617", "Lo", 0), ("CUNEIFORM SIGN LAK-617 TIMES ASH", "Lo", 0), ("CUNEIFORM SIGN LAK-617 TIMES BAD", "Lo", 0), ("CUNEIFORM SIGN LAK-617 TIMES DUN3 GUNU GUNU", "Lo", 0), ("CUNEIFORM SIGN LAK-617 TIMES KU3", "Lo", 0), ("CUNEIFORM 
SIGN LAK-617 TIMES LA", "Lo", 0), ("CUNEIFORM SIGN LAK-617 TIMES TAR", "Lo", 0), ("CUNEIFORM SIGN LAK-617 TIMES TE", "Lo", 0), ("CUNEIFORM SIGN LAK-617 TIMES U2", "Lo", 0), ("CUNEIFORM SIGN LAK-617 TIMES UD", "Lo", 0), ("CUNEIFORM SIGN LAK-617 TIMES URUDA", "Lo", 0), ("CUNEIFORM SIGN LAK-636", "Lo", 0), ("CUNEIFORM SIGN LAK-648", "Lo", 0), ("CUNEIFORM SIGN LAK-648 TIMES DUB", "Lo", 0), ("CUNEIFORM SIGN LAK-648 TIMES GA", "Lo", 0), ("CUNEIFORM SIGN LAK-648 TIMES IGI", "Lo", 0), ("CUNEIFORM SIGN LAK-648 TIMES IGI GUNU", "Lo", 0), ("CUNEIFORM SIGN LAK-648 TIMES NI", "Lo", 0), ("CUNEIFORM SIGN LAK-648 TIMES PAP PLUS PAP PLUS LU3", "Lo", 0), ("CUNEIFORM SIGN LAK-648 TIMES SHESH PLUS KI", "Lo", 0), ("CUNEIFORM SIGN LAK-648 TIMES UD", "Lo", 0), ("CUNEIFORM SIGN LAK-648 TIMES URUDA", "Lo", 0), ("CUNEIFORM SIGN LAK-724", "Lo", 0), ("CUNEIFORM SIGN LAK-749", "Lo", 0), ("CUNEIFORM SIGN LU2 GUNU TIMES ASH", "Lo", 0), ("CUNEIFORM SIGN LU2 TIMES DISH", "Lo", 0), ("CUNEIFORM SIGN LU2 TIMES HAL", "Lo", 0), ("CUNEIFORM SIGN LU2 TIMES PAP", "Lo", 0), ("CUNEIFORM SIGN LU2 TIMES PAP PLUS PAP PLUS LU3", "Lo", 0), ("CUNEIFORM SIGN LU2 TIMES TAK4", "Lo", 0), ("CUNEIFORM SIGN MI PLUS ZA7", "Lo", 0), ("CUNEIFORM SIGN MUSH OVER MUSH TIMES GA", "Lo", 0), ("CUNEIFORM SIGN MUSH OVER MUSH TIMES KAK", "Lo", 0), ("CUNEIFORM SIGN NINDA2 TIMES DIM GUNU", "Lo", 0), ("CUNEIFORM SIGN NINDA2 TIMES GISH", "Lo", 0), ("CUNEIFORM SIGN NINDA2 TIMES GUL", "Lo", 0), ("CUNEIFORM SIGN NINDA2 TIMES HI", "Lo", 0), ("CUNEIFORM SIGN NINDA2 TIMES KESH2", "Lo", 0), ("CUNEIFORM SIGN NINDA2 TIMES LAK-050", "Lo", 0), ("CUNEIFORM SIGN NINDA2 TIMES MASH", "Lo", 0), ("CUNEIFORM SIGN NINDA2 TIMES PAP PLUS PAP", "Lo", 0), ("CUNEIFORM SIGN NINDA2 TIMES U", "Lo", 0), ("CUNEIFORM SIGN NINDA2 TIMES U PLUS U", "Lo", 0), ("CUNEIFORM SIGN NINDA2 TIMES URUDA", "Lo", 0), ("CUNEIFORM SIGN SAG GUNU TIMES HA", "Lo", 0), ("CUNEIFORM SIGN SAG TIMES EN", "Lo", 0), ("CUNEIFORM SIGN SAG TIMES SHE AT LEFT", "Lo", 0), ("CUNEIFORM SIGN SAG 
TIMES TAK4", "Lo", 0), ("CUNEIFORM SIGN SHA6 TENU", "Lo", 0), ("CUNEIFORM SIGN SHE OVER SHE", "Lo", 0), ("CUNEIFORM SIGN SHE PLUS HUB2", "Lo", 0), ("CUNEIFORM SIGN SHE PLUS NAM2", "Lo", 0), ("CUNEIFORM SIGN SHE PLUS SAR", "Lo", 0), ("CUNEIFORM SIGN SHU2 PLUS DUG TIMES NI", "Lo", 0), ("CUNEIFORM SIGN SHU2 PLUS E2 TIMES AN", "Lo", 0), ("CUNEIFORM SIGN SI TIMES TAK4", "Lo", 0), ("CUNEIFORM SIGN TAK4 PLUS SAG", "Lo", 0), ("CUNEIFORM SIGN TUM TIMES GAN2 TENU", "Lo", 0), ("CUNEIFORM SIGN TUM TIMES THREE DISH", "Lo", 0), ("CUNEIFORM SIGN UR2 INVERTED", "Lo", 0), ("CUNEIFORM SIGN UR2 TIMES UD", "Lo", 0), ("CUNEIFORM SIGN URU TIMES DARA3", "Lo", 0), ("CUNEIFORM SIGN URU TIMES LAK-668", "Lo", 0), ("CUNEIFORM SIGN URU TIMES LU3", "Lo", 0), ("CUNEIFORM SIGN ZA7", "Lo", 0), ("CUNEIFORM SIGN ZU OVER ZU PLUS SAR", "Lo", 0), ("CUNEIFORM SIGN ZU5 TIMES THREE DISH TENU", "Lo", 0), )
https://github.com/NiklasEi/ndc_spin
https://raw.githubusercontent.com/NiklasEi/ndc_spin/main/templates/package/file.typ
typst
#import "@preview/cetz:0.2.2" #let input = if sys.inputs.keys().contains("input") {json.decode(sys.inputs.input);} else {json("sample.json");} #let data = ( ([Belgium], 24), ([Germany], input.germany), ([Greece], 18), ([Spain], 21), ([France], 23), ([Hungary], 18), ([Netherlands], 27), ([Romania], 17), ([Finland], 26), ([Turkey], 13), ) #cetz.canvas({ import cetz.chart import cetz.draw: * let colors = gradient.linear(red, blue, green, yellow) chart.piechart( data, value-key: 1, label-key: 0, radius: 4, slice-style: colors, inner-radius: 1, outset: 3, inner-label: (content: (value, label) => [#text(white, str(value))], radius: 110%), outer-label: (content: "%", radius: 110%)) })
https://github.com/mariuslb/thesis
https://raw.githubusercontent.com/mariuslb/thesis/main/content/01-einleitung.typ
typst
= Introduction

== Context and Relevance of the Topic

Climate change is one of the most pressing global challenges, and the transformation of the mobility sector is crucial to significantly reducing the ecological footprint. Vehicles in general must become more efficient in order to minimize energy consumption and lower CO2 emissions. Electric vehicles (EVs) are said to be more efficient than vehicles with internal combustion engines (ICEs), particularly with regard to energy use and the associated emissions. Moreover, the shift from ICEs to EVs marks a paradigm change that is becoming increasingly important from both an ecological and an economic perspective @IEA.

In 2023, almost 25% of new vehicle registrations in Europe were electric vehicles, illustrating the continuing trend toward more sustainable mobility. This development is supported by numerous policy measures, ranging from tax incentives to infrastructure funding, intended to further promote the adoption of electric vehicles @IEA.

Vehicle efficiency is primarily determined by energy consumption. EVs are considered potentially more efficient because they convert electrical energy directly into propulsion, whereas ICEs convert chemical energy into mechanical energy through combustion, which entails higher energy losses.

This thesis aims to obtain evidence-based insights into the efficiency and sustainability of electric vehicles compared to combustion engines through a detailed analysis of comparison drives with passenger cars of both drivetrain types, standardized according to the ISO 23795-1:2022 standard. Beyond that, the collected data is intended to help assess the accuracy and real-world applicability of consumption standards such as the Worldwide Harmonized Light Vehicles Test Procedure (WLTP).
Verifying these standard values is crucial to enable a more precise assessment of the energy consumption and CO2 emissions of vehicles in real-world operation.

== Objective of the Thesis

This bachelor thesis aims to examine the efficiency and sustainability of electric vehicles compared to combustion engines through a detailed analysis of comparison drives, standardized according to the ISO 23795-1:2022 standard. In particular, the accuracy and real-world applicability of consumption standards such as the @wltp are assessed in order to enable a more precise estimate of real-world energy consumption @ISO.

== Practical Use Case of the Thesis

A key aspect of this thesis is the practical application of the product @lcmm by @tsi. @lcmm supports the aggregation and analysis of driving data in accordance with the ISO 23795-1:2022 standard and enables efficient monitoring and management of vehicle fleets with respect to energy consumption and CO2 emissions. By using LCMM, customers can not only improve the efficiency of their fleets but also achieve significant savings in operating costs and contribute to reducing environmental impact @LCMM_2022. The goal is to provide a solid scientific basis for the further development of @lcmm with regard to the integration of electric vehicles.

According to the data presented by Willenbrock (2023), LCMM enables the measurement and evaluation of vehicle driving behavior and provides an efficiency profile based on an algorithm that will serve as the foundation for a dedicated ISO standard. Fuel savings can be achieved either through individual driving recommendations given to the driver directly in the LCMM app while driving ("eco-driving training"), or by having dispatchers analyze the trips recorded with LCMM in the web app and optimize routes accordingly @willenbrock2023lcmm.
== Research Question and Hypotheses

The research questions addressed in this thesis focus on how electric vehicles differ from combustion-engine vehicles in their energy consumption under real-world driving conditions, and how these differences relate to the WLTP standard values. Two central hypotheses follow from this:

#table(
  stroke: none,
  columns: (auto, auto),
  inset: 10pt,
  [*H1*], [*Electric vehicles demonstrate increased energy efficiency compared to combustion-engine vehicles under a wide range of real-world driving conditions.*],
  [*H2*], [*There is a significant discrepancy between the WLTP standard values and the energy consumption measured under real-world driving conditions.*]
)

== Structure of the Thesis (tbd)

The thesis is divided into seven main chapters, covering the research process from the introduction through to the discussion and outlook. The introduction provides a comprehensive overview of the context and relevance of the topic; here, the objective of the thesis, the practical use case, and the research questions are clearly defined. The second chapter then introduces the theoretical foundations and concepts that are central to the study. The third chapter is dedicated to the methodology and describes in detail the research approach, the methods of data collection and analysis, and the statistical procedures used. The fourth chapter forms the core of the study and comprises the systematic processing and analysis of the collected data. The fifth chapter presents the results of the study and discusses the key findings. The sixth chapter goes a step further, interpreting the results, reflecting on possible limitations and points of self-criticism, and placing the results in the context of the existing state of research.
Finally, the seventh chapter summarizes the most important findings, derives practical implications for the stakeholders involved, and offers suggestions for future research.

#pagebreak()