https://github.com/knuesel/typst-minideck
https://raw.githubusercontent.com/knuesel/typst-minideck/main/README.md
markdown
MIT License
# typst-minideck

A small package for dynamic slides in typst.

minideck provides basic functionality for dynamic slides (`pause`, `uncover`, ...), integration with [fletcher](https://typst.app/universe/package/fletcher) and [CeTZ](https://typst.app/universe/package/cetz/), and some minimal infrastructure for theming.

## Usage

Call `minideck.config` to get the functions you want to use:

```typst
#import "@preview/minideck:0.2.1"

#let (template, slide, title-slide, pause, uncover, only) = minideck.config()
#show: template

#title-slide[
  = Slides with `minideck`
  == Some examples

  <NAME>

  #datetime.today().display()
]

#slide[
  = Some title

  Some content

  #show: pause

  More content

  1. One
  2. Two

  #show: pause

  3. Three
]
```

This will show three subslides with progressively more content. (Note that the default theme uses the font Libertinus Sans from the [Libertinus](https://github.com/alerque/libertinus) family, so you may want to install it.)

Instead of `#show: pause`, you can use `#uncover(2,3)[...]` to make content visible only on subslides 2 and 3, or `#uncover(from: 2)[...]` to make it visible on subslide 2 and following. The `only` function is similar to `uncover`, but instead of hiding the content (without affecting the layout), it removes it.

```typst
#slide[
  = `uncover` and `only`

  #uncover(1, from:3)[
    Content visible on subslides 1 and 3+ (space reserved on 2).
  ]
  #only(2,3)[
    Content included on subslides 2 and 3 (no space reserved on 1).
  ]
  Content always visible.
]
```

Unlike `pause`, the `uncover` and `only` functions also work in math mode:

```typst
#slide[
  = Dynamic equation

  $
  f(x) &= x^2 + 2x + 1 \
  #uncover(2, $&= (x + 1)^2$)
  $
]
```

When mixing `pause` with `uncover`/`only`, the sequence of pauses should be taken as the reference for the meaning of subslide indices. For example, content after the first pause always appears on the second subslide, even if it's preceded by a call to `#uncover(from: 3)[...]`.
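The pause-as-reference rule can be sketched like this (an illustrative example based on the behavior described above, using the same functions as the earlier examples):

```typst
#slide[
  = Mixing `pause` with `uncover`

  // Visible from subslide 3 onward, but does not shift the pause sequence:
  #uncover(from: 3)[A remark that arrives late]

  #show: pause

  // Still appears on subslide 2: indices follow the pauses,
  // not the preceding `uncover(from: 3)` call.
  Content after the first pause
]
```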
The package also works well with [pinit](https://typst.app/universe/package/pinit):

```typst
#import "@preview/pinit:0.1.4": *

#slide[
  = Works well with `pinit`

  Pythagorean theorem:

  $ #pin(1)a^2#pin(2) + #pin(3)b^2#pin(4) = #pin(5)c^2#pin(6) $

  #show: pause

  $a^2$ and $b^2$ : squares of triangle legs

  #only(2, {
    pinit-highlight(1,2)
    pinit-highlight(3,4)
  })

  #show: pause

  $c^2$ : square of hypotenuse

  #pinit-highlight(5,6, fill: green.transparentize(80%))
  #pinit-point-from(6)[larger than $a^2$ and $b^2$]
]
```

### Handout mode

minideck can make a handout version of the document, in which dynamic behavior is disabled: the content of all subslides is shown together in a single slide.

To compile a handout version, pass `--input handout=true` on the command line:

```bash
typst compile --input handout=true myfile.typ
```

It is also possible to enable handout mode from within the document, as shown in the next section.

### Configuration

The behavior of the minideck functions depends on the settings passed to `minideck.config`. For example, handout mode can also be enabled like this:

```typst
#import "@preview/minideck:0.2.1"

#let (template, slide, pause) = minideck.config(handout: true)
#show: template

#slide[
  = Slide title

  Some text

  #show: pause

  More text
]
```

(The default value of `handout` is `auto`, in which case minideck checks for a command line setting as in the previous section.)

`minideck.config` accepts the following named arguments:

* `paper`: a string for one of the paper size names recognized by [`page.paper`](https://typst.app/docs/reference/layout/page/#parameters-paper), or one of the shorthands `"16:9"` or `"4:3"`. Default: `"4:3"`.
* `landscape`: use the paper size in landscape orientation. Default: `true`.
* `width`: page width as an absolute length. Takes precedence over `paper` and `landscape`.
* `height`: page height as an absolute length. Takes precedence over `paper` and `landscape`.
* `handout`: whether to make a document for handout, with content of all subslides shown together in a single slide.
* `theme`: the theme (see below).
* `cetz`: the CeTZ module (see below).
* `fletcher`: the fletcher module (see below).

For example, to make slides with a 16:9 aspect ratio, use `minideck.config(paper: "16:9")`.

### Themes

Use `minideck.config(theme: ...)` to select a theme. Currently there is only one built-in: `minideck.themes.simple`. However, you can also pass a theme implemented by yourself or from a third-party package. See the [theme documentation](themes/README.md) for how that works.

Themes are functions and can be configured using the standard [`with` method](https://typst.app/docs/reference/foundations/function/#definitions-with):

* The `simple` theme has a `variant` setting with values `"light"` (default) and `"dark"`.

Here's an example:

```typst
#import "@preview/minideck:0.2.1"

#let (template, slide) = minideck.config(
  theme: minideck.themes.simple.with(variant: "dark"),
)
#show: template

#slide[
  = Slide with dark theme

  Some text
]
```

Note that you can override part of a theme with show and set rules:

```typst
#import "@preview/minideck:0.2.1"

#let (template, slide) = minideck.config(
  theme: minideck.themes.simple.with(variant: "dark"),
)
#show: template

#set page(footer: none) // get rid of the page number
#show heading: it => text(style: "italic", it)
#set text(red)

#slide[
  = Slide with dark theme and red text

  Some text
]
```

### Integration with CeTZ

You can use `uncover` and `only` (but not `pause`) in CeTZ figures, with the following extra steps:

* Get the CeTZ-specific functions `cetz-uncover` and `cetz-only` by passing the CeTZ module to `minideck.config` (see example below). This ensures that minideck uses CeTZ functions from the correct version of CeTZ.
* Add a `context` keyword outside the `canvas` call.
This is required to access the minideck subslide state from within the canvas without making the content opaque (CeTZ needs to inspect the canvas content to make the drawing).

Example:

```typst
#import "@preview/cetz:0.2.2" as cetz: *
#import "@preview/minideck:0.2.1"

#let (template, slide, only, cetz-uncover, cetz-only) = minideck.config(cetz: cetz)
#show: template

#slide[
  = In a CeTZ figure

  Above canvas

  #context canvas({
    import draw: *
    cetz-only(3, rect((0,-2), (14,4), stroke: 3pt))
    cetz-uncover(from: 2, rect((0,-2), (16,2), stroke: blue+3pt))
    content((8,0), box(stroke: red+3pt, inset: 1em)[
      A typst box #only(2)[on 2nd subslide]
    ])
  })

  Below canvas
]
```

### Integration with fletcher

The same steps are required as for CeTZ integration (passing the fletcher module to get fletcher-specific functions), plus an additional step:

* Give the number of subslides explicitly to the `slide` function.

This is required because I could not find a reliable way to update a typst state from within a fletcher diagram, so I cannot rely on the state to keep track of the number of subslides.

Example:

```typst
#import "@preview/fletcher:0.5.0" as fletcher: diagram, node, edge
#import "@preview/minideck:0.2.1"

#let (template, slide, fletcher-uncover) = minideck.config(fletcher: fletcher)
#show: template

#slide(steps: 2)[
  = In a fletcher diagram

  #set align(center)

  Above diagram

  #context diagram(
    node-stroke: 1pt,
    node((0,0), [Start], corner-radius: 2pt, extrude: (0, 3)),
    edge("-|>"),
    node((1,0), align(center)[A]),
    fletcher-uncover(from: 2, edge("d,r,u,l", "-|>", [x], label-pos: 0.1))
  )

  Below diagram
]
```

## Comparison with other slides packages

Performance: minideck is currently faster than [Polylux](https://typst.app/universe/package/polylux/) when using `pause`, especially for incremental compilation, but a bit slower than [Touying](https://typst.app/universe/package/touying), according to my tests.
Features: Polylux and Touying have more themes and more features, for example support for [pdfpc](https://pdfpc.github.io/), which provides speaker notes and more. Minideck allows using `uncover` and `only` in CeTZ figures and fletcher diagrams, which Polylux currently doesn't support.

Syntax: package configuration is simpler in minideck than in Touying but a bit more involved than in Polylux. The minideck `pause` is more cumbersome to use: one must write `#show: pause` instead of `#pause`. On the other hand, minideck's `uncover` and `only` can be used directly in equations without requiring a special math environment as in Touying (I think).

Other: minideck sometimes has better error messages than Touying due to implementation differences: the minideck stack trace points back to the user's code, while Touying errors sometimes point only to an output page number.

## Limitations

* `pause`, `uncover` and `only` work in enumerations, but they require explicit enum indices (`1. ...` rather than `+ ...`) as they cause a reset of the list index.
* Usage in a CeTZ canvas or fletcher diagram requires a `context` keyword in front of the `canvas`/`diagram` call (see above).
* fletcher diagrams also require specifying the number of subslides explicitly (see above).
* `pause` doesn't work in CeTZ figures, fletcher diagrams and math mode.
* `pause` requires writing `#show: pause` and its effect is lost after the `#show` call goes out of scope. For example, this means that one can use `pause` inside a grid, but further `pause` calls after the grid (in the same slide) won't work as intended.

## Internals

The package uses states with the following keys:

`__minideck-subslide-count`: an array of two integers for counting pauses and keeping track of the subslide number automatically. The first value is the number of subslides so far referenced in the current slide. The second value is the number of pauses seen so far in the current slide.
Both values are kept in one state so that an update function can update the number of subslides based on the number of pauses, without requiring a context. This avoids problems with layout convergence.

`__minideck-subslide-step`: the current subslide being generated while processing a slide.
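The two-counters-in-one-state trick can be sketched as follows. This is purely illustrative, not the package's actual code: the state key matches the description above, but `register-pause` is a hypothetical name and the update rule is a guess at the general idea.

```typst
// Illustrative sketch: one state holding (subslides, pauses). Because both
// live in the same state, a single update closure can derive the new
// subslide count from the pause count without needing a context block.
#let subslide-count = state("__minideck-subslide-count", (1, 0))

#let register-pause() = subslide-count.update(c => (
  // One more pause implies at least pauses + 2 subslides in this slide.
  calc.max(c.at(0), c.at(1) + 2),
  c.at(1) + 1,
))
```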
https://github.com/chen-qingyu/Typst-Code
https://raw.githubusercontent.com/chen-qingyu/Typst-Code/master/gamma.typ
typst
#let LF = {v(3em); linebreak()} $ Gamma(s) &= integral_0^(+infinity) e^(-x) x^(s-1) dif x space (s > 0) LF &= integral_0^(+infinity) -x^(s-1) dif (e^(-x)) LF &= 0 - integral_0^(+infinity) e^(-x) dif (-x^(s-1)) LF &= integral_0^(+infinity) e^(-x) dif (x^(s-1)) LF &= integral_0^(+infinity) e^(-x) (s - 1) x^(s-2) dif x LF &= (s - 1) integral_0^(+infinity) e^(-x) x^(s-2) dif x LF &= (s - 1) Gamma(s - 1) LF &= (s - 1) (s - 2) Gamma(s - 2) LF &= (s - 1) (s - 2) dots.c Gamma(1) LF &= (s - 1) (s - 2) dots.c 1 LF &= (s - 1)! LF $ (The recurrence $Gamma(s) = (s - 1) Gamma(s - 1)$ holds for all $s > 1$; the final reduction to $Gamma(1)$ and the factorial assumes $s$ is a positive integer.)
https://github.com/typst-community/mantodea
https://raw.githubusercontent.com/typst-community/mantodea/main/docs/util.typ
typst
MIT License
#import "@preview/mantys:0.1.4" #let package = toml("/typst.toml").package #let issue(num) = text(eastern, link(package.repository + "/issues/" + str(num))[hydra\##num]) #let issues = text(eastern, link(package.repository + "/issues/")[GitHub:typst-community/mantodea])
https://github.com/Caellian/UNIRI_voxels_doc
https://raw.githubusercontent.com/Caellian/UNIRI_voxels_doc/trunk/content/prijevremeno.typ
typst
= Ahead-of-time rendering - Applications: medicine, static scenes and images == Ray casting - It is almost real-time now; I think it is noisy, has light propagation delay, ... == Splatting Transparent layers for passing quickly through the slices of an MRI image. == Shear warp #pagebreak()
https://github.com/thudep/award-cert-printer
https://raw.githubusercontent.com/thudep/award-cert-printer/master/README.md
markdown
# Award Certificates Printer

A sh script to print certificates for some contests.

## How to use

### Install dependencies

- `typst`: a LaTeX alternative; its [package name](https://repology.org/project/typst/versions) hardly varies across platforms
- Source Han Serif: the font used in the template

### Provide a name list

The input file `list.CSV` should look like this:

```text
清小华,三等奖,2024地球中微子暑期学校
华小清,二等奖,2024地球中微子暑期学校
```

### (Optional) Modify the template

You may modify `template.typ` to customize the style.

### Run the script

There are two ways to run the script. Here are some examples; use `-h` to show the help message.

#### 1. Use the default secret (stored in `secret.txt`) to generate the sha1

Create a `secret.txt` file with the secret in it, then run:

```bash
./generate.sh
```

#### 2. Use a custom secret to generate the sha1

```bash
./generate.sh -s <your_secret>
```
https://github.com/Henriquelay/pathsec-checker
https://raw.githubusercontent.com/Henriquelay/pathsec-checker/main/presentation/figures/skipping_boxes.typ
typst
#set page(width: auto, height: auto, margin: (x: 0pt, y: 0pt)) #set text(font: "DejaVu Sans Mono") #let myswitch(name, digest, expected) = { box[ #table( columns: 2, fill: if (digest == expected) { lime } else { red }, align: (right, left), [name], [#name], [digest], [#digest], [expected], [#expected], ) ] } #table( columns: 4, stroke: none, inset: 3pt, myswitch([e1], [0X61E8D6E7], [0X61E8D6E7]), myswitch([s1], [0XAE91434C], [0XAE91434C]), myswitch([s2], [0X08C97F5F], [0X08C97F5F]), myswitch([s3], [0XEFF1AAD2], [0XEFF1AAD2]), myswitch([s4], [0X08040C89], [0X08040C89]), myswitch([s5], [skipped], [0XAA99AE2E]), myswitch([s6], [0XB0437A53], [0X7669685E]), myswitch([s7], [0X63589D0A], [0X03E1E388]), myswitch([s8], [0X629B7B3B], [0X2138FFD3]), myswitch([s9], [0XBD53E851], [0X1EF2CBBE]), myswitch([s10], [0X90BDF731], [0X99C5FE05]), ) #pagebreak() #table( columns: 4, stroke: none, inset: 3pt, myswitch([e10], [0XABADCAFE], [0XABADCAFE]), myswitch([s10], [0X2247084B], [0X2247084B]), myswitch([s9], [0X81C417A7], [0X81C417A7]), myswitch([s8], [0XF4E82C1E], [0XF4E82C1E]), myswitch([s7], [0X31881788], [0X31881788]), myswitch([s6], [0X7C79E406], [0X7C79E406]), myswitch([s5], [skipped], [0X8E4AB525]), myswitch([s4], [0XD0944C2B], [0XDB11D4A9]), myswitch([s3], [0XBC7166C4], [0X36A0DAB0]), myswitch([s2], [0X27ACDB88], [0X361B8641]), myswitch([s1], [0X96D7328F], [0X53CEB3ED]), )
https://github.com/andreasKroepelin/TypstJlyfish.jl
https://raw.githubusercontent.com/andreasKroepelin/TypstJlyfish.jl/main/examples/typstry.typ
typst
MIT License
#import "../typst/lib.typ": * #read-julia-output(json("typstry-jlyfish.json")) #jl-pkg("Typstry") #jl(```julia using Typstry ```) #jl(result: false, ```julia A = rand(1:5, 4, 3) .// 3 B = rand(1:5, 3, 5) .// 2 C = A * B ```) An example of a matrix-matrix product: $ #jl(`Typst(A)`) #jl(`Typst(B)`) = #jl(`Typst(C)`) $ The largest noninteger unit fraction: #jl(preferred-mimes: "text/typst", `typst"$1 / 2$"`)
https://github.com/typst/packages
https://raw.githubusercontent.com/typst/packages/main/packages/preview/unichar/0.1.0/ucd/block-E0100.typ
typst
Apache License 2.0
#let data = ( ("VARIATION SELECTOR-17", "Mn", 0), ("VARIATION SELECTOR-18", "Mn", 0), ("VARIATION SELECTOR-19", "Mn", 0), ("VARIATION SELECTOR-20", "Mn", 0), ("VARIATION SELECTOR-21", "Mn", 0), ("VARIATION SELECTOR-22", "Mn", 0), ("VARIATION SELECTOR-23", "Mn", 0), ("VARIATION SELECTOR-24", "Mn", 0), ("VARIATION SELECTOR-25", "Mn", 0), ("VARIATION SELECTOR-26", "Mn", 0), ("VARIATION SELECTOR-27", "Mn", 0), ("VARIATION SELECTOR-28", "Mn", 0), ("VARIATION SELECTOR-29", "Mn", 0), ("VARIATION SELECTOR-30", "Mn", 0), ("VARIATION SELECTOR-31", "Mn", 0), ("VARIATION SELECTOR-32", "Mn", 0), ("VARIATION SELECTOR-33", "Mn", 0), ("VARIATION SELECTOR-34", "Mn", 0), ("VARIATION SELECTOR-35", "Mn", 0), ("VARIATION SELECTOR-36", "Mn", 0), ("VARIATION SELECTOR-37", "Mn", 0), ("VARIATION SELECTOR-38", "Mn", 0), ("VARIATION SELECTOR-39", "Mn", 0), ("VARIATION SELECTOR-40", "Mn", 0), ("VARIATION SELECTOR-41", "Mn", 0), ("VARIATION SELECTOR-42", "Mn", 0), ("VARIATION SELECTOR-43", "Mn", 0), ("VARIATION SELECTOR-44", "Mn", 0), ("VARIATION SELECTOR-45", "Mn", 0), ("VARIATION SELECTOR-46", "Mn", 0), ("VARIATION SELECTOR-47", "Mn", 0), ("VARIATION SELECTOR-48", "Mn", 0), ("VARIATION SELECTOR-49", "Mn", 0), ("VARIATION SELECTOR-50", "Mn", 0), ("VARIATION SELECTOR-51", "Mn", 0), ("VARIATION SELECTOR-52", "Mn", 0), ("VARIATION SELECTOR-53", "Mn", 0), ("VARIATION SELECTOR-54", "Mn", 0), ("VARIATION SELECTOR-55", "Mn", 0), ("VARIATION SELECTOR-56", "Mn", 0), ("VARIATION SELECTOR-57", "Mn", 0), ("VARIATION SELECTOR-58", "Mn", 0), ("VARIATION SELECTOR-59", "Mn", 0), ("VARIATION SELECTOR-60", "Mn", 0), ("VARIATION SELECTOR-61", "Mn", 0), ("VARIATION SELECTOR-62", "Mn", 0), ("VARIATION SELECTOR-63", "Mn", 0), ("VARIATION SELECTOR-64", "Mn", 0), ("VARIATION SELECTOR-65", "Mn", 0), ("VARIATION SELECTOR-66", "Mn", 0), ("VARIATION SELECTOR-67", "Mn", 0), ("VARIATION SELECTOR-68", "Mn", 0), ("VARIATION SELECTOR-69", "Mn", 0), ("VARIATION SELECTOR-70", "Mn", 0), ("VARIATION SELECTOR-71", "Mn", 0), 
("VARIATION SELECTOR-72", "Mn", 0), ("VARIATION SELECTOR-73", "Mn", 0), ("VARIATION SELECTOR-74", "Mn", 0), ("VARIATION SELECTOR-75", "Mn", 0), ("VARIATION SELECTOR-76", "Mn", 0), ("VARIATION SELECTOR-77", "Mn", 0), ("VARIATION SELECTOR-78", "Mn", 0), ("VARIATION SELECTOR-79", "Mn", 0), ("VARIATION SELECTOR-80", "Mn", 0), ("VARIATION SELECTOR-81", "Mn", 0), ("VARIATION SELECTOR-82", "Mn", 0), ("VARIATION SELECTOR-83", "Mn", 0), ("VARIATION SELECTOR-84", "Mn", 0), ("VARIATION SELECTOR-85", "Mn", 0), ("VARIATION SELECTOR-86", "Mn", 0), ("VARIATION SELECTOR-87", "Mn", 0), ("VARIATION SELECTOR-88", "Mn", 0), ("VARIATION SELECTOR-89", "Mn", 0), ("VARIATION SELECTOR-90", "Mn", 0), ("VARIATION SELECTOR-91", "Mn", 0), ("VARIATION SELECTOR-92", "Mn", 0), ("VARIATION SELECTOR-93", "Mn", 0), ("VARIATION SELECTOR-94", "Mn", 0), ("VARIATION SELECTOR-95", "Mn", 0), ("VARIATION SELECTOR-96", "Mn", 0), ("VARIATION SELECTOR-97", "Mn", 0), ("VARIATION SELECTOR-98", "Mn", 0), ("VARIATION SELECTOR-99", "Mn", 0), ("VARIATION SELECTOR-100", "Mn", 0), ("VARIATION SELECTOR-101", "Mn", 0), ("VARIATION SELECTOR-102", "Mn", 0), ("VARIATION SELECTOR-103", "Mn", 0), ("VARIATION SELECTOR-104", "Mn", 0), ("VARIATION SELECTOR-105", "Mn", 0), ("VARIATION SELECTOR-106", "Mn", 0), ("VARIATION SELECTOR-107", "Mn", 0), ("VARIATION SELECTOR-108", "Mn", 0), ("VARIATION SELECTOR-109", "Mn", 0), ("VARIATION SELECTOR-110", "Mn", 0), ("VARIATION SELECTOR-111", "Mn", 0), ("VARIATION SELECTOR-112", "Mn", 0), ("VARIATION SELECTOR-113", "Mn", 0), ("VARIATION SELECTOR-114", "Mn", 0), ("VARIATION SELECTOR-115", "Mn", 0), ("VARIATION SELECTOR-116", "Mn", 0), ("VARIATION SELECTOR-117", "Mn", 0), ("VARIATION SELECTOR-118", "Mn", 0), ("VARIATION SELECTOR-119", "Mn", 0), ("VARIATION SELECTOR-120", "Mn", 0), ("VARIATION SELECTOR-121", "Mn", 0), ("VARIATION SELECTOR-122", "Mn", 0), ("VARIATION SELECTOR-123", "Mn", 0), ("VARIATION SELECTOR-124", "Mn", 0), ("VARIATION SELECTOR-125", "Mn", 0), ("VARIATION SELECTOR-126", 
"Mn", 0), ("VARIATION SELECTOR-127", "Mn", 0), ("VARIATION SELECTOR-128", "Mn", 0), ("VARIATION SELECTOR-129", "Mn", 0), ("VARIATION SELECTOR-130", "Mn", 0), ("VARIATION SELECTOR-131", "Mn", 0), ("VARIATION SELECTOR-132", "Mn", 0), ("VARIATION SELECTOR-133", "Mn", 0), ("VARIATION SELECTOR-134", "Mn", 0), ("VARIATION SELECTOR-135", "Mn", 0), ("VARIATION SELECTOR-136", "Mn", 0), ("VARIATION SELECTOR-137", "Mn", 0), ("VARIATION SELECTOR-138", "Mn", 0), ("VARIATION SELECTOR-139", "Mn", 0), ("VARIATION SELECTOR-140", "Mn", 0), ("VARIATION SELECTOR-141", "Mn", 0), ("VARIATION SELECTOR-142", "Mn", 0), ("VARIATION SELECTOR-143", "Mn", 0), ("VARIATION SELECTOR-144", "Mn", 0), ("VARIATION SELECTOR-145", "Mn", 0), ("VARIATION SELECTOR-146", "Mn", 0), ("VARIATION SELECTOR-147", "Mn", 0), ("VARIATION SELECTOR-148", "Mn", 0), ("VARIATION SELECTOR-149", "Mn", 0), ("VARIATION SELECTOR-150", "Mn", 0), ("VARIATION SELECTOR-151", "Mn", 0), ("VARIATION SELECTOR-152", "Mn", 0), ("VARIATION SELECTOR-153", "Mn", 0), ("VARIATION SELECTOR-154", "Mn", 0), ("VARIATION SELECTOR-155", "Mn", 0), ("VARIATION SELECTOR-156", "Mn", 0), ("VARIATION SELECTOR-157", "Mn", 0), ("VARIATION SELECTOR-158", "Mn", 0), ("VARIATION SELECTOR-159", "Mn", 0), ("VARIATION SELECTOR-160", "Mn", 0), ("VARIATION SELECTOR-161", "Mn", 0), ("VARIATION SELECTOR-162", "Mn", 0), ("VARIATION SELECTOR-163", "Mn", 0), ("VARIATION SELECTOR-164", "Mn", 0), ("VARIATION SELECTOR-165", "Mn", 0), ("VARIATION SELECTOR-166", "Mn", 0), ("VARIATION SELECTOR-167", "Mn", 0), ("VARIATION SELECTOR-168", "Mn", 0), ("VARIATION SELECTOR-169", "Mn", 0), ("VARIATION SELECTOR-170", "Mn", 0), ("VARIATION SELECTOR-171", "Mn", 0), ("VARIATION SELECTOR-172", "Mn", 0), ("VARIATION SELECTOR-173", "Mn", 0), ("VARIATION SELECTOR-174", "Mn", 0), ("VARIATION SELECTOR-175", "Mn", 0), ("VARIATION SELECTOR-176", "Mn", 0), ("VARIATION SELECTOR-177", "Mn", 0), ("VARIATION SELECTOR-178", "Mn", 0), ("VARIATION SELECTOR-179", "Mn", 0), ("VARIATION SELECTOR-180", 
"Mn", 0), ("VARIATION SELECTOR-181", "Mn", 0), ("VARIATION SELECTOR-182", "Mn", 0), ("VARIATION SELECTOR-183", "Mn", 0), ("VARIATION SELECTOR-184", "Mn", 0), ("VARIATION SELECTOR-185", "Mn", 0), ("VARIATION SELECTOR-186", "Mn", 0), ("VARIATION SELECTOR-187", "Mn", 0), ("VARIATION SELECTOR-188", "Mn", 0), ("VARIATION SELECTOR-189", "Mn", 0), ("VARIATION SELECTOR-190", "Mn", 0), ("VARIATION SELECTOR-191", "Mn", 0), ("VARIATION SELECTOR-192", "Mn", 0), ("VARIATION SELECTOR-193", "Mn", 0), ("VARIATION SELECTOR-194", "Mn", 0), ("VARIATION SELECTOR-195", "Mn", 0), ("VARIATION SELECTOR-196", "Mn", 0), ("VARIATION SELECTOR-197", "Mn", 0), ("VARIATION SELECTOR-198", "Mn", 0), ("VARIATION SELECTOR-199", "Mn", 0), ("VARIATION SELECTOR-200", "Mn", 0), ("VARIATION SELECTOR-201", "Mn", 0), ("VARIATION SELECTOR-202", "Mn", 0), ("VARIATION SELECTOR-203", "Mn", 0), ("VARIATION SELECTOR-204", "Mn", 0), ("VARIATION SELECTOR-205", "Mn", 0), ("VARIATION SELECTOR-206", "Mn", 0), ("VARIATION SELECTOR-207", "Mn", 0), ("VARIATION SELECTOR-208", "Mn", 0), ("VARIATION SELECTOR-209", "Mn", 0), ("VARIATION SELECTOR-210", "Mn", 0), ("VARIATION SELECTOR-211", "Mn", 0), ("VARIATION SELECTOR-212", "Mn", 0), ("VARIATION SELECTOR-213", "Mn", 0), ("VARIATION SELECTOR-214", "Mn", 0), ("VARIATION SELECTOR-215", "Mn", 0), ("VARIATION SELECTOR-216", "Mn", 0), ("VARIATION SELECTOR-217", "Mn", 0), ("VARIATION SELECTOR-218", "Mn", 0), ("VARIATION SELECTOR-219", "Mn", 0), ("VARIATION SELECTOR-220", "Mn", 0), ("VARIATION SELECTOR-221", "Mn", 0), ("VARIATION SELECTOR-222", "Mn", 0), ("VARIATION SELECTOR-223", "Mn", 0), ("VARIATION SELECTOR-224", "Mn", 0), ("VARIATION SELECTOR-225", "Mn", 0), ("VARIATION SELECTOR-226", "Mn", 0), ("VARIATION SELECTOR-227", "Mn", 0), ("VARIATION SELECTOR-228", "Mn", 0), ("VARIATION SELECTOR-229", "Mn", 0), ("VARIATION SELECTOR-230", "Mn", 0), ("VARIATION SELECTOR-231", "Mn", 0), ("VARIATION SELECTOR-232", "Mn", 0), ("VARIATION SELECTOR-233", "Mn", 0), ("VARIATION SELECTOR-234", 
"Mn", 0), ("VARIATION SELECTOR-235", "Mn", 0), ("VARIATION SELECTOR-236", "Mn", 0), ("VARIATION SELECTOR-237", "Mn", 0), ("VARIATION SELECTOR-238", "Mn", 0), ("VARIATION SELECTOR-239", "Mn", 0), ("VARIATION SELECTOR-240", "Mn", 0), ("VARIATION SELECTOR-241", "Mn", 0), ("VARIATION SELECTOR-242", "Mn", 0), ("VARIATION SELECTOR-243", "Mn", 0), ("VARIATION SELECTOR-244", "Mn", 0), ("VARIATION SELECTOR-245", "Mn", 0), ("VARIATION SELECTOR-246", "Mn", 0), ("VARIATION SELECTOR-247", "Mn", 0), ("VARIATION SELECTOR-248", "Mn", 0), ("VARIATION SELECTOR-249", "Mn", 0), ("VARIATION SELECTOR-250", "Mn", 0), ("VARIATION SELECTOR-251", "Mn", 0), ("VARIATION SELECTOR-252", "Mn", 0), ("VARIATION SELECTOR-253", "Mn", 0), ("VARIATION SELECTOR-254", "Mn", 0), ("VARIATION SELECTOR-255", "Mn", 0), ("VARIATION SELECTOR-256", "Mn", 0), )
https://github.com/tingerrr/hydra
https://raw.githubusercontent.com/tingerrr/hydra/main/doc/examples/skip/content.typ
typst
MIT License
= Chapter A #lorem(50) = Chapter B #lorem(20)
https://github.com/DieracDelta/presentations
https://raw.githubusercontent.com/DieracDelta/presentations/master/polylux/book/src/utils/progress.md
markdown
# How much longer? 🥱

There are a handful of features that let you display the progress of the presentation.

The most simple one is directly displaying the current slide number. Remember that each slide might produce an arbitrary number of subslides, i.e. PDF pages, so we cannot rely on the builtin page counter. Instead, there is the `logical-slide` counter in the `logic` module. Therefore, you can use

```typ
#logic.logical-slide.display()
```

to see what the current slide number is. If you want to put that into relation to how many slides there are in total, you can also display

```typ
#utils.last-slide-number
```

which is a short-hand way of getting the final value of `logic.logical-slide`. Note that both these things are content, though, so you can only display them and not calculate with the numbers.

A common calculation you might want to do is finding their ratio, i.e. the current slide number divided by the total number of slides. To that end, you can use the function `utils.polylux-progress`. You can pass a function to it that turns the current ratio into some content. For example:

```typ
#utils.polylux-progress( ratio => [
  You already made it through #calc.round(ratio * 100) #sym.percent
  of the presentation!
])
```

Some themes utilise this to display a little progress bar, for example.
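The two displayable values above can also be combined directly, e.g. for a "current / total" footer. A small sketch (remember both values are content, so they are displayed side by side rather than computed with):

```typ
// Renders something like "3 / 12" in a slide footer.
#logic.logical-slide.display() / #utils.last-slide-number
```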
https://github.com/zadigus/math
https://raw.githubusercontent.com/zadigus/math/main/number-theory/page-6/main.typ
typst
#import "template.typ": * #show: project.with( title: "test", authors: ( "zadigus", ), ) #lemma[ $p > 3$ prime $=> 3 divides p+2 or 3 divides p+4$ ] #proof[ Consider the triple $(n, n+1, n+2) forall n in NN^*$. Then, necessarily, $ 3 divides n or 3 divides n+1 or 3 divides n+2. $ Consequently, if $p > 3$ is prime, then $3 divides.not p$, and $ 3 divides p+1 or 3 divides p+2, $ which is equivalent to $ 3 divides p+4 or 3 divides p+2. $ ]
https://github.com/Myriad-Dreamin/typst.ts
https://raw.githubusercontent.com/Myriad-Dreamin/typst.ts/main/fuzzers/corpora/bugs/math-realize_02.typ
typst
Apache License 2.0
#import "/contrib/templates/std-tests/preset.typ": * #show: test-page // Test equations can embed equation pieces built by functions #let foo(v1, v2) = { // Return an equation piece that would've been rendered in // inline style if the piece is not embedded $v1 v2^2$ } #let bar(v1, v2) = { // Return an equation piece that would've been rendered in // block style if the piece is not embedded $ v1 v2^2 $ } #let baz(..sink) = { // Return an equation piece built by joining arrays sink.pos().map(x => $hat(#x)$).join(sym.and) } Inline $2 foo(alpha, (M+foo(a, b)))$. Inline $2 bar(alpha, (M+foo(a, b)))$. Inline $2 baz(x,y,baz(u, v))$. $ 2 foo(alpha, (M+foo(a, b))) $ $ 2 bar(alpha, (M+foo(a, b))) $ $ 2 baz(x,y,baz(u, v)) $
https://github.com/mkhoatd/Typst-CV-Resume
https://raw.githubusercontent.com/mkhoatd/Typst-CV-Resume/main/CoverLetter/typstcoverletter.typ
typst
MIT License
#let date_colour= rgb("#666666") #let primary_colour= rgb("#2b2b2b") #let headings_colour= rgb("#6A6A6A") #let subheadings_colour= rgb("#333333") // Set font type for all text #let fonttype = "macfont" #let font_head = { if fonttype == "macfont" { "Helvetica Neue" } else if fonttype == "openfont" { "PT Sans" } else { "Times New Roman" } } #let font_term = { if fonttype == "macfont" { "Heiti TC" } else if fonttype == "openfont" { "PT Sans" } else { "Times New Roman" } } #let font_descript = { if fonttype == "macfont" { "Heiti SC" } else if fonttype == "openfont" { "PT Sans" } else { "Times New Roman" } } #let font_info = { if fonttype == "macfont" { "Helvetica" } else if fonttype == "openfont" { "PT Sans" } else { "Times New Roman" } } #let sectionsep = { [#v(5pt)] } #let subsectionsep = { [#v(2pt)] } #let recepient(date, department, university, address, postcode) = { align(left,{ text(10pt,font: "Helvetica", fill: subheadings_colour,weight: "bold", )[#department] h(1fr) text(10pt,font: "Helvetica", fill: primary_colour,weight: "light", )[#date\ ] text(10pt,font: "Helvetica", fill: subheadings_colour,weight: "bold", )[#university\ ] text(10pt,font: "Helvetica", fill: headings_colour,weight: "light", )[#address\ ] text(10pt,font: "Helvetica", fill: headings_colour,weight: "light", )[#postcode ] } ) } #let main( name: "", address: "", contacts: (), mainbody, ) = { // show contact details let display(contacts) = { set text(11pt,font:font_term,fill:headings_colour, weight: "medium",top-edge:"baseline",bottom-edge:"baseline",baseline: 2pt) contacts.map(contact =>{ if contact.link == none [#contact.text] else { link(contact.link)[#{contact.text}] } } ).join(" | ") } set page( margin: ( left: 2cm, right: 2cm, top: 3.2cm, bottom: 1.5cm, ), header:{ // Head Name Section text(25pt,font: font_head,fill:primary_colour, weight:"light",top-edge:"baseline",bottom-edge:"baseline",baseline: 12pt)[#align(center,[#name])] text(11pt,font:font_descript,fill:headings_colour, weight:
"medium",top-edge:"baseline",bottom-edge:"baseline")[#align(center,[#address])] align(center)[#display(contacts)] line(length: 100%, stroke:0.5pt + primary_colour) }, header-ascent: 1em, ) mainbody set text(11pt,font: "Helvetica", fill: primary_colour, weight: "regular", ) [Sincerely,\ ] [*#name*] }
https://github.com/marisbaier/Typst_eineTeXAlternative
https://raw.githubusercontent.com/marisbaier/Typst_eineTeXAlternative/main/Introduction.typ
typst
= This is a heading // fractions are done with / $ a/b $ // \quad is just quad $ RR quad "blablabla" $ // the align environment is just integrated in regular math: $ a &= 5 dot cos(x) - 20 dot Gamma(2) \ b &= integral_RR e^(-x^2) dif x $
https://github.com/devraza/warehouse
https://raw.githubusercontent.com/devraza/warehouse/main/blog/template.typ
typst
MIT License
#let conf(title: none, doc) = { show link: underline show raw: set text(font: "<NAME>") set text( font: "EtBembo", size: 10pt ) set page( paper: "a4", margin: 1cm, ) set par( justify: true, leading: 0.52em, ) align(left, text(font: "EtBembo", size: 24pt, weight: 900)[ #title ]) line(length: 70%) set align(left) doc }
https://github.com/tingerrr/masters-thesis
https://raw.githubusercontent.com/tingerrr/masters-thesis/main/src/figures.typ
typst
#import "figures/util.typ": * #let ( list, t4gl, vector, tree, b-tree, finger-tree, srb-tree, ) = { import "figures/algorithms.typ": * import "figures/figures.typ": * import "figures/listings.typ": * import "figures/math.typ": * import "figures/tables.typ": * ( ( new: list-new, copy: list-copy, pop: list-pop, push: list-push, ), ( ex: ( array1: t4gl-ex-array1, array2: t4gl-ex-array2, ), layers: ( new: t4gl-layers-new, shallow: t4gl-layers-shallow, deep-new: t4gl-layers-deep-new, deep-mut: t4gl-layers-deep-mut, ), analogies: t4gl-analogies, ), ( repr: vector-repr, example: vector-ex, ), ( new: tree-new, shared: tree-shared, ), ( node: b-tree-node, ), ( repr: finger-tree, ranges: finger-tree-ranges, def: ( old: finger-tree-def-old, new: finger-tree-def-new, illegal: finger-tree-def-illegal, node: finger-tree-def-node, digits: finger-tree-def-digits, self: finger-tree-def-self, ), alg: ( search: finger-tree-alg-search, pushl: finger-tree-alg-push-left, popl: finger-tree-alg-pop-left, appendl: finger-tree-alg-append-left, takel: finger-tree-alg-take-left, nodes: finger-tree-alg-nodes, concat: finger-tree-alg-concat, split: finger-tree-alg-split, insert: finger-tree-alg-insert, remove: finger-tree-alg-remove, ), ), srb-tree, ) }
https://github.com/Myriad-Dreamin/typst.ts
https://raw.githubusercontent.com/Myriad-Dreamin/typst.ts/main/fuzzers/corpora/math/style_03.typ
typst
Apache License 2.0
#import "/contrib/templates/std-tests/preset.typ": * #show: test-page // Test a few style exceptions. $h, bb(N), cal(R), Theta, italic(Theta), sans(Theta), sans(italic(Theta))$
https://github.com/crd2333/Astro_typst_notebook
https://raw.githubusercontent.com/crd2333/Astro_typst_notebook/main/src/docs/here/index.typ
typst
--- order: 1 --- #set page(margin: 1em, height: auto) #let typst = { text(font: "Linux Libertine", weight: "semibold", fill: eastern)[typst] } = Typst Notes == #typst: Compose paper faster $<-$ See the test pages on the left for more examples $ cases( dot(x) = A x + B u = mat(delim: "[", 0, 0, dots.h.c, 0, - a_n; 1, 0, dots.h.c, 0, - a_(n - 1); 0, 1, dots.h.c, 0, - a_(n - 2); dots.v, dots.v, dots.down, dots.v, dots.v; 0, 0, dots.h.c, 1, - a_1) x + mat(delim: "[", b_n; b_(n - 1); b_(n - 2); dots.v; b_1) u, y = C x = mat(delim: "[", 0, 0, dots.h.c, 1) x ) $ #set text(font: ("Garamond", "Noto Serif CJK SC")) #import "@preview/tablem:0.1.0": tablem #tablem[ | *English* | *German* | *Chinese* | *Japanese* | | --------- | -------- | --------- | ---------- | | Cat | Katze | 猫 | 猫 | | Fish | Fisch | 鱼 | 魚 | ] #import "/src/components/TypstTemplate/lib.typ": clorem #clorem(200)
https://github.com/Area-53-Robotics/53E-Notebook-Over-Under-2023-2024
https://raw.githubusercontent.com/Area-53-Robotics/53E-Notebook-Over-Under-2023-2024/giga-notebook/entries/tournament-roosevelt/entry.typ
typst
Creative Commons Attribution Share Alike 4.0 International
#import "/packages.typ": notebookinator, diagraph #import notebookinator: * #import themes.radial.components: * #import diagraph: * #show: create-body-entry.with( title: "Capital Beltway Challenge", type: "test", date: datetime(year: 2023, month: 12, day: 16), author: "<NAME>", witness: "<NAME>", ) = Qualification Matches #tournament(( match: "Q1", red-alliance: (teams: ("676D", "17760X"), score: 38), blue-alliance: (teams: ("53E", "3700A"), score: 38), won: false, auton: false, awp: false, ), ( match: "Q13", red-alliance: (teams: ("53B", "9080R"), score: 119), blue-alliance: (teams: ("20096G", "53E"), score: 89), won: false, auton: true, awp: false, ), ( match: "Q19", red-alliance: (teams: ("53E", "7135D"), score: 121), blue-alliance: (teams: ("9080S", "53C"), score: 50), won: true, auton: true, awp: false, ), ( match: "Q27", red-alliance: (teams: ("53E", "5588R"), score: 4), blue-alliance: (teams: ("9080C", "53A"), score: 154), won: false, auton: true, awp: false, ), ( match: "Q37", red-alliance: (teams: ("5588E", "53D"), score: 101), blue-alliance: (teams: ("23098A", "53E"), score: 64), won: false, auton: false, awp: false, ), ( match: "Q43", red-alliance: (teams: ("53E", "9080H"), score: 138), blue-alliance: (teams: ("5588H", "17760Y"), score: 21), won: true, auton: false, awp: false, )) = Alliance Selection We were in 21st place going into alliance selection, so the most we could do was hope for another team to choose us. Early on, 9080C reached out to us and showed interest in pairing with us. However, they later ended up pairing with 53A. Our best hope was then to be picked by 53C, but we were approached by team 9080H, who picked us in the end. 
#raw-render[```dot digraph { rankdir=LR; start->"9080C picks us" "9080C picks us"->"53C picks us" [label = "no"] "9080C picks us"->"end" [label = "yes"] "53C picks us"->"9080H picks us" [label = "no"] "53C picks us"->"end" [label = "yes"] "9080H picks us"->"We pick 53C" [label = "no"] "9080H picks us"-> end [label = "yes"] "We pick 53C"->end start[shape=Mdiamond] end[shape=Msquare] } ```] The above flow chart represents our decision-making plan. In the end, 9080C picked 53A and 9080H ended up picking us. = Elimination Matches #tournament(( match: "R16 #8-1", red-alliance: (teams: ("9080H", "53E"), score: 154), blue-alliance: (teams: ("5588H", "5525A"), score: 62), won: true, auton: true, awp: false, ), ( match: "QF #4-1", red-alliance: (teams: ("7135D", "9080S"), score: 66), blue-alliance: (teams: ("9080H", "53E"), score: 99), won: true, auton: true, awp: false, ), ( match: "SF #2-1", red-alliance: (teams: ("7135S", "9080R"), score: 75), blue-alliance: (teams: ("9080H", "53E"), score: 142), won: true, auton: true, awp: false, ), ( match: "F #1-1", red-alliance: (teams: ("9080C", "53A"), score: 147), blue-alliance: (teams: ("9080H", "53E"), score: 93), won: false, auton: false, awp: false, )) = Reflection #grid( columns: (1fr, 1fr), pie-chart( (value: 5, color: green, name: "wins"), (value: 1, color: orange, name: "ties"), (value: 4, color: red, name: "losses"), ), [ Ultimately, the qualification matches were a bit of a struggle for our bot this competition, but we were able to make a comeback during the elimination rounds, taking us to the finals. ], ) Our overall robot performance was as follows: #pro-con( pros: [ - Flywheel was able to accurately and consistently fire triballs across the field. - Intake and wedges were both used efficiently and stayed strong. ], cons: [ - We did not have a working hang mechanism. - We had no skills auton and didn't have a consistent auton that could allow us to earn the AWP. 
], ) Overall, our main goal in the coming meetings should be to build a strong, consistent, and reliable hanging mechanism for our robot. We should also have a solid autonomous route coded and ready for both skills and matches. The auton should also be ready for both offensive and defensive sides, with the main goal to win the AWP.
https://github.com/typst/packages
https://raw.githubusercontent.com/typst/packages/main/packages/preview/cetz/0.2.0/src/mark.typ
typst
Apache License 2.0
#let typst-length = length #import "drawable.typ" #import "vector.typ" #import "matrix.typ" #import "util.typ" #import "path-util.typ" #import "styles.typ" #import "mark-shapes.typ": get-mark #let check-mark(style) = style != none and (style.start, style.end, style.symbol).any(v => v != none) #let process-style(ctx, style, root, path-length) = { let base-style = ( symbol: auto, fill: auto, stroke: auto, slant: auto, harpoon: auto, flip: auto, reverse: auto, inset: auto, width: auto, scale: auto, length: auto, sep: auto, pos: auto, offset: auto, flex: auto, xy-up: auto, z-up: auto, shorten-to: auto, position-samples: auto ) if type(style.at(root)) != array { style.at(root) = (style.at(root),) } if type(style.symbol) != array { style.symbol = (style.symbol,) } let out = () for i in range(calc.max(style.at(root).len(), style.symbol.len())) { let style = style style.symbol = style.symbol.at(i, default: auto) style.at(root) = style.at(root).at(i, default: auto) if type(style.symbol) == dictionary { style = styles.resolve(style, merge: style.symbol) } if type(style.at(root)) == str { style.symbol = style.at(root) } else if type(style.at(root)) == dictionary { style = styles.resolve(style, root: root, base: base-style) } style.stroke = util.resolve-stroke(style.stroke) style.stroke.thickness = util.resolve-number(ctx, style.stroke.thickness) if "angle" in style and type(style.angle) == angle { style.width = calc.tan(style.angle / 2) * style.length * 2 } // Stroke thickness relative attributes for (k, v) in style { if k in ("length", "width", "inset", "sep") { style.insert(k, if type(v) == ratio { style.stroke.thickness * v / 100% } else { util.resolve-number(ctx, v) } * style.scale) } } // Path length relative attributes for k in ("offset", "pos",) { let v = style.at(k) if v != none and v != auto { style.insert(k, if type(v) == ratio { v * path-length / 100% } else { util.resolve-number(ctx, v) }) } } out.push(style) } return out } #let transform-mark(style, mark, pos, 
dir, flip: false, reverse: false, slant: none, harpoon: false) = { let up = style.xy-up if dir.at(2) != 0 { up = style.z-up } mark.drawables = drawable.apply-transform( matrix.mul-mat( ..( matrix.transform-translate(..pos), matrix.transform-rotate-dir(dir, up), matrix.transform-rotate-z(90deg), matrix.transform-translate(if reverse { mark.length } else { mark.tip-offset }, 0, 0), if slant not in (none, 0%) { if type(slant) == ratio { slant /= 100% } matrix.transform-shear-x(slant) }, if flip or reverse { matrix.transform-scale({ if flip { (y: -1) } if reverse { (x: -1) } }) } ).filter(e => e != none) ), mark.drawables ) return mark } #let place-mark-on-path(ctx, styles, segments, is-end: false) = { if type(styles) != array { styles = (styles,) } let distance = 0 let shorten-distance = 0 let shorten-pos = none let drawables = () for (i, style) in styles.enumerate() { let is-last = i + 1 == styles.len() if style.symbol == none { continue } // Override position, if set if style.pos != none { distance = style.pos } // Apply mark offset distance += style.offset let (mark-fn, reverse) = get-mark(ctx, style.symbol) style.reverse = (style.reverse or reverse) and not (style.reverse and reverse) let mark = mark-fn(style) mark.length = mark.distance + if style.reverse { mark.at("base-offset", default: style.stroke.thickness / 2) } else { mark.at("tip-offset", default: style.stroke.thickness / 2) } let pos = if style.flex { path-util.point-on-path( segments, if distance != 0 { distance * if is-end { -1 } else { 1 } } else { if is-end { 100% } else { 0% } }, extrapolate: true) } else { let (_, dir) = path-util.direction( segments, if is-end { 100% } else { 0% }, clamp: true) let pt = if is-end { path-util.segment-end(segments.last()) } else { path-util.segment-start(segments.first()) } vector.sub(pt, vector.scale(vector.norm(dir), distance * if is-end { 1 } else { -1 })) } assert.ne(pos, none, message: "Could not determine mark position") let dir = if style.flex { let a = pos 
let b = path-util.point-on-path( segments, (mark.length + distance) * if is-end { -1 } else { 1 }, samples: style.position-samples, extrapolate: true) if b != none and a != b { vector.sub(b, a) } else { let (_, dir) = path-util.direction( segments, distance, clamp: true) vector.scale(dir, if is-end { -1 } else { 1 }) } } else { let (_, dir) = path-util.direction( segments, if is-end { 100% } else { 0% }, clamp: true) if dir != none { vector.scale(dir, if is-end { -1 } else { 1 }) } } assert.ne(pos, none, message: "Could not determine mark direction") mark = transform-mark( style, mark, pos, dir, reverse: style.reverse, slant: style.slant, flip: style.flip, harpoon: style.harpoon, ) // Shorten path to this mark let inset = mark.at("inset", default: 0) if style.shorten-to != none and (style.shorten-to == auto or i <= style.shorten-to) { shorten-distance = distance + mark.length - inset shorten-pos = vector.add(pos, vector.scale(vector.norm(dir), mark.length - inset)) } drawables += mark.drawables distance += mark.length // Add separator distance += style.sep } return ( drawables: drawables, distance: shorten-distance, pos: shorten-pos ) } #let place-marks-along-path(ctx, style, segments) = { let distance = (0, 0) let snap-to = (none, none) let drawables = () if style.start != none or style.symbol != none { let (drawables: start-drawables, distance: start-distance, pos: pt) = place-mark-on-path( ctx, process-style(ctx, style, "start", path-util.length(segments)), segments ) drawables += start-drawables distance.first() = start-distance snap-to.first() = pt } if style.end != none or style.symbol != none { let (drawables: end-drawables, distance: end-distance, pos: pt) = place-mark-on-path( ctx, process-style(ctx, style, "end", path-util.length(segments)), segments, is-end: true ) drawables += end-drawables distance.last() = end-distance snap-to.last() = pt } if distance != (0, 0) { segments = path-util.shorten-path( segments, ..distance, mode: if style.flex { "CURVED" 
} else { "LINEAR" }, samples: style.position-samples, snap-to: snap-to) } return (drawables, segments) }
https://github.com/j10ccc/zjut-social-practice-report-typst
https://raw.githubusercontent.com/j10ccc/zjut-social-practice-report-typst/main/layout/project.typ
typst
#import "/layout/cover.typ": cover #import "/layout/catalog.typ": catalog #import "/constants/fonts.typ": font_family #let project( body, ..meta ) = { set page( paper: "a4", numbering: "1" ) set heading(numbering: "1.") set text( size: 11pt, font: font_family.songti ) // cover cover(meta.named()) pagebreak() // catalog catalog() pagebreak() // main content show par: set block(above: 1.4em, below: 2em) set par( leading: 20pt, linebreaks: "simple", first-line-indent: 2em ) show heading: it => { let empty_par = { v(-1em) box() } let margin = (above: 0em, below: 0em); if it.level == 1 { margin = (above: 3em, below: 1.2em) } else if it.level == 2 { margin = (above: 2.6em, below: 1em) } set block(..margin) it empty_par } [#body] }
https://github.com/goshakowska/Typstdiff
https://raw.githubusercontent.com/goshakowska/Typstdiff/main/tests/test_complex/all_types_working/all_types_working.typ
typst
= Introduction In this report, we will explore the various factors that influence _fluid dynamics_ in glaciers and how they contribute to the formation and behaviour of these natural structures. The equation $Q = rho A v + C$ defines the glacial flow rate. The flow rate of a glacier is defined by the following equation: $ Q = rho A v + C $ The flow rate of a glacier is given by the following equation: $ Q = rho A v + "time offset" $ Total displaced soil by glacial flow: $ 7.32 beta + sum_(i=0)^nabla (Q_i (a_i - epsilon)) / 2 $ $ v := vec(x_1, x_2, x_3) $ $ a arrow.squiggly b $ #lorem(10) /* MODES */ Number: #(1 + 2) $-x$ is the opposite of $x$ let name = [*Typst!*] /* MARKUP */ *strong* `print(1)` https://typst.app/ <intro> = Heading $x^2$ 'single' or "double" ~, --- /* MATH MODE */ $x^2$ $ x^2 $ $x_1$ $x^2$ $1 + (a+b)/5$ $x \ y$ $x &= 2 \ &= 3$ $pi$ $arrow.r.long$\ $x y$ $->, !=$ $a "is natural"$ $floor(x)$ #lorem(30) /* SCRIPTING */ #emph[Hello] \ #"hello".len() // simple variables in a block #{ let a = [from] let b = [*world*] [hello ] a + [ the ] + b } // variables and functions #let name = "Typst" This is #name's documentation. It explains #name. #let add(x, y) = x + y Sum is #add(2, 3). // tuples, lists, dictionaries #let (x, y) = (1, 2) The coordinates are #x, #y. #let (a, .., b) = (1, 2, 3, 4) The first element is #a. The last element is #b. #let books = ( Shakespeare: "Hamlet", Homer: "The Odyssey", Austen: "Persuasion", ) #let (Austen,) = books Austen wrote #Austen. #let (Homer: h) = books Homer wrote #h. // variables with _ #let (_, y, _) = (1, 2, 3) The y coordinate is #y. // zipping variables and calling a function on the values // formats well, but typst automatically adds color, so it partly works and partly doesn't #let left = (2, 4, 5) #let right = (3, 2, 6) #left.zip(right).map( ((a,b)) => a + b ) // conditional statements #if 1 < 2 [ This is shown ] else [ This is not. 
] // for loop with break (formats correctly only letters, as in the example) // longer text - adds a new line // numbers - does not preserve typst's highlighting #for letter in "abc nope" { if letter == " " { break } letter } // dictionaries and a variable's body work #let dict = (greet: "Hello") #dict.greet \ #let it = [= Heading] #it.body \ // methods on text 3 is the same as #"abc".len() // methods on specific variables #let array = (1, 2, 3, 4) #array.pop() \ #array.len() \ #("a, b, c" .split(", ") .join[ --- ]) // All operators work #if 3 in (1, 2, 3) [ Good ] else [ Not good ] *Date:* 26.12.2022 \ *Topic:* Infrastructure Test \ *Severity:* High \ #lower("ABC") \ #lower[*My Text*] \ #lower[already low] #upper("abc") \ #upper[*my text*] \ #upper[ALREADY HIGH] "This is in quotes." #set text(lang: "de") "Das ist in Anführungszeichen." #set text(lang: "fr") "C'est entre guillemets." 1#super[st] try! #text(font: "Linux Libertine", style: "italic")[Italic] #text(font: "DejaVu Sans", style: "oblique")[Oblique] This is #underline[important]. Take #underline( stroke: 1.5pt + red, offset: 2pt, [care], )
https://github.com/Myriad-Dreamin/typst.ts
https://raw.githubusercontent.com/Myriad-Dreamin/typst.ts/main/fuzzers/corpora/layout/enum_00.typ
typst
Apache License 2.0
#import "/contrib/templates/std-tests/preset.typ": * #show: test-page #enum[Embrace][Extend][Extinguish]
https://github.com/jamesrswift/ionio-illustrate
https://raw.githubusercontent.com/jamesrswift/ionio-illustrate/main/src/extras.typ
typst
MIT License
#import "extras/callout-above.typ": callout-above #import "extras/callout-aside.typ": callout-aside #import "extras/title.typ": title #import "extras/callipers.typ": callipers #import "extras/cetz-raw.typ": cetz-raw #import "extras/content.typ": content
https://github.com/DaAlbrecht/lecture-notes
https://raw.githubusercontent.com/DaAlbrecht/lecture-notes/main/computer_networks/ipv6.typ
typst
MIT License
#import "../template.typ": * = IPv6 (Internet Protocol version 6) The predecessor of IPv4 was the Network Control Program (NCP), operational since 1972. TCP (Transmission Control Protocol) was developed between 1973 and 1978 and was later split into TCP and IP. IPv6, initially developed under the name IP Next Generation (IPng), was standardized in 1998 and is now the modern standard, described in RFC 8200. IPv6 addresses the limitation of IPv4's address space, offering around 340 undecillion (about $3.4 times 10^38$) addresses compared to IPv4's 4 billion. IPv6 goals included: - Simplifying the protocol for faster router processing, - Reducing routing table size, - Supporting multimedia and real-time applications, - Enhancing security, - Improving multicasting, - Supporting virtual connections and mobile hosts. IPv6 was designed to coexist with IPv4 during the transition. Despite its advantages, IPv6 adoption is still partial, with most traffic relying on IPv4. As of January 2018, over 22% of Google users used IPv6. == Representation IPv6 addresses are 128 bits long, divided into eight 16-bit blocks separated by colons. Each block is represented in hexadecimal, with a range of 0 to FFFF (0 to 65535 in decimal). The following rules apply to simplify IPv6 address representation: - Leading zeros in a block can be omitted. - A single sequence of consecutive blocks of zeros can be replaced with two colons (::) to represent multiple blocks of zeros. - The address can be shortened to its smallest representation. 
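As a cross-check, the shortening rules above can be exercised with Python's standard `ipaddress` module (used here purely as an illustration; note that its canonical form follows RFC 5952 and only collapses runs of two or more zero blocks into `::`):

```python
import ipaddress

# Full (exploded) vs. shortened (compressed) forms of the same address.
addr = ipaddress.IPv6Address("2001:0db8:0000:0000:0000:8a2e:0370:7334")

# Rule 1: leading zeros in each block are dropped.
# Rule 2: one run of consecutive zero blocks becomes "::".
print(addr.compressed)    # 2001:db8::8a2e:370:7334
print(addr.exploded)      # 2001:0db8:0000:0000:0000:8a2e:0370:7334

loopback = ipaddress.IPv6Address("::1")
print(loopback.exploded)  # 0000:0000:0000:0000:0000:0000:0000:0001
```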
#example[ #set text(font: "Berkeley Mono", size: 10pt, spacing: 80%) #show table.cell.where(y: 0): strong #table( columns: (2fr,1.5fr), table.header([IPv6 Address], [Shortened Address]), [0000:0000:0000:0000:0000:0000:0000:0001], [::1], [2001:0db8:0000:0042:0000:8a2e:0370:7334], [2001:db8:0:42:0:8a2e:370:7334], [2001:0db8:0000:0042:0000:8a2e:0370:7334], [2001:db8::42:0:8a2e:370:7334], [2001:0db8:0000:0042:0000:8a2e:0370:7334], [2001:db8:0:42::8a2e:370:7334], [fdf8:f53e:61e4:0000:0000:0000:0000:0018], [fdf8:f53e:61e4::18] ) ] #pagebreak() == IPv6 Header The IPv6 header is simpler than the IPv4 header, with fewer fields and a fixed size of 40 bytes. #figure( image("../resources/Ipv6_header.svg.png", width: 70%), caption: [IPv6 header structure], ) <ipv6header> #pagebreak() == Addressing IPv6 addresses are 128 bits in length, unlike IPv4, which uses subnetting to maximize its smaller address space. In IPv6, subnets use 64 bits for the host portion (interface identifier) and 64 bits for the routing prefix. The large address space is considered sufficient for the future. IPv6 addresses are divided into three types: - Unicast: Identifies a single interface. - Multicast: Identifies multiple interfaces. - Anycast: Identifies multiple interfaces, but the packet is delivered to the nearest interface. === Scope *Link-Local Address* Link-local addresses are used for communication on a single link, such as a local network segment. They are automatically configured and do not require manual configuration. Link-local addresses are identified by the prefix `fe80::/10`. *Local Addressing* Local addresses are used for communication within a local network. *Global Addressing* Global addresses are used for communication across the internet. The procedure for assigning global IPv6 addresses is similar to constructing local addresses, with the prefix being provided by router advertisements. Multiple prefix announcements can configure multiple addresses. 
Stateless Address Autoconfiguration (SLAAC) requires a /64 block. == Neighbor Discovery IPv6 uses Neighbor Discovery (ND) to manage the link layer, replacing ARP (Address Resolution Protocol) and ICMPv4. ND includes the following functions: - Router Discovery: Routers advertise their presence. - Neighbor Discovery: Nodes discover neighbors on the same link. - Address Autoconfiguration: Nodes automatically configure their addresses. - Duplicate Address Detection: Nodes verify the uniqueness of their addresses. - Redirect: Routers inform nodes of a better first-hop router. - Optimal Path MTU Discovery: Nodes discover the maximum transmission unit (MTU) size. #pagebreak() == Solicited-Node Multicast Address IPv6 uses the Solicited-Node Multicast Address to resolve the MAC address of a node. The Solicited-Node Multicast Address is formed by appending the last 24 bits of the unicast address to the prefix `ff02::1:ff00:0/104`. #example[ #set text(font: "Berkeley Mono", size: 10pt, spacing: 80%) #show table.cell.where(y: 0): strong #table( columns: (1fr,1fr), table.header([Unicast Address], [Solicited-Node Multicast Address]), [2001:db8::aaa:bffb:ac2c:1fad], [ff02::1:ff2c:1fad], [2001:db8::1], [ff02::1:ff00:1] ) ] == Coexistence with IPv4 IPv6 was designed to coexist with IPv4 during the transition period. Several mechanisms were developed to facilitate this coexistence: - Dual-Stack: Nodes support both IPv4 and IPv6. - Tunneling: IPv6 packets are encapsulated in IPv4 packets. - Translation: IPv6 packets are translated into IPv4 packets. == Security IPv6 includes security features to address the limitations of IPv4: - IPsec: Integrated into the protocol, IPsec provides authentication, encryption, and data integrity. - Secure Neighbor Discovery (SEND): Protects against Neighbor Discovery attacks. - Cryptographically Generated Addresses (CGA): Prevents address spoofing.
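A solicited-node multicast address takes the last 24 bits of the unicast address and appends them to the well-known prefix `ff02::1:ff00:0/104`. This derivation can be sketched in Python; the helper name below is illustrative, not part of any standard API:

```python
import ipaddress

def solicited_node(unicast: str) -> str:
    """Map a unicast IPv6 address to its solicited-node multicast address:
    the last 24 bits of the address, appended to ff02::1:ff00:0/104."""
    low24 = int(ipaddress.IPv6Address(unicast)) & 0xFFFFFF
    prefix = int(ipaddress.IPv6Address("ff02::1:ff00:0"))
    return str(ipaddress.IPv6Address(prefix | low24))

print(solicited_node("2001:db8::1"))                   # ff02::1:ff00:1
print(solicited_node("2001:db8::aaa:bffb:ac2c:1fad"))  # ff02::1:ff2c:1fad
```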
https://github.com/JarKz/math_analysis_with_typst
https://raw.githubusercontent.com/JarKz/math_analysis_with_typst/main/main.typ
typst
MIT License
#set text(size: 14pt) #set par(justify: true) #include("groups/first.typ") #include("groups/second.typ") #include("groups/third.typ") #include("groups/fourth.typ") #include("groups/fifth.typ")
https://github.com/TypstApp-team/typst
https://raw.githubusercontent.com/TypstApp-team/typst/master/tests/typ/compiler/plugin.typ
typst
Apache License 2.0
// Test WebAssembly plugins. // Ref: false --- #let p = plugin("/files/hello.wasm") #test(p.hello(), bytes("Hello from wasm!!!")) #test(p.double_it(bytes("hey!")), bytes("hey!.hey!")) #test( p.shuffle(bytes("value1"), bytes("value2"), bytes("value3")), bytes("value3-value1-value2"), ) --- #let p = plugin("/files/hello.wasm") // Error: 2-20 plugin function takes 0 arguments, but 1 was given #p.hello(bytes("")) --- #let p = plugin("/files/hello.wasm") // Error: 10-14 expected bytes, found boolean // Error: 27-29 expected bytes, found integer #p.hello(true, bytes(()), 10) --- #let p = plugin("/files/hello.wasm") // Error: 2-17 plugin errored with: This is an `Err` #p.returns_err() --- #let p = plugin("/files/hello.wasm") // Error: 2-16 plugin panicked: wasm `unreachable` instruction executed #p.will_panic()
https://github.com/yhtq/Notes
https://raw.githubusercontent.com/yhtq/Notes/main/常微分方程/作业/hw1.typ
typst
#import "../../template.typ": proof, note, corollary, lemma, theorem, definition, example, remark, proposition, der, partialDer, Spec #import "../../template.typ": * // Take a look at the file `template.typ` in the file panel // to customize this template and discover how it works. #show: note.with( title: "Homework 1", author: "YHTQ", date: none, logo: none, withOutlined : false, withTitle :false, ) #set heading(numbering: none) = P6 == 1 (2) Substituting gives: $ y' + y/x = (x cos x - sin x)/x^2 + (sin x)/x^2 = (cos x)/x\ $ so it is indeed a solution of the original differential equation. == 3 $ y = c_1 e^x + c_2 x e^x\ y' = c_1 e^x + c_2 e^x + c_2 x e^x = c_1 e^x + c_2 (x + 1) e^x\ y'' = c_1 e^x + c_2 (x+2) e^x $ Eliminating $c_1, c_2$: $ y' - y = c_2 e^x \ y'' - y' = c_2 e^x\ => y' - y = y'' - y' $ This is the required differential equation. == 4 $ &0 = c_1 x + (y - c_2)^2\ =>& 0 = dif (c_1 x + (y - c_2)^2) = c_1 dif x + 2 (y - c_2) dif y\ =>& 0 = c_1 + 2 (y - c_2) der(y, x)\ =>& 0 = der((c_1 + 2 (y - c_2) der(y, x)), x)\ =>& 0 = 2 (der(y, x))^2 + 2(y - c_2) (dif^2 y)/(dif x^2)\ =>& 0 = (der(y, x))^2 + (y - c_2) (dif^2 y)/(dif x^2) $ Rearranging gives: $ y - c_2 = - y'^2/y''\ c_1 = -2(y - c_2) y' = 2 y'^3/y''\ $ Substituting into the original equation: $ 0 = 2 y'^3/y'' x + y'^4/y''^2\ <=> 0 = 2 y'' x + y' $ This is the required differential equation. == 5 The equation of any circle in the plane has the form: $ (x - c_1)^2 + (y-c_2)^2 = c_3 $ Taking differentials and derivatives gives: $ 0 = dif ((x - c_1)^2 + (y-c_2)^2) = 2(x - c_1) dif x + 2(y - c_2) dif y\ => 0 = (x - c_1) + (y - c_2) y'\ => 0 = der(((x - c_1) + (y - c_2) y'), x)\ => 0 = 1 + y'^2 + (y - c_2) y''\ => 0 = 1/y'' + y'^2/y'' + (y - c_2) "(clearly " y'' != 0 ")"\ => 0 = der((1/y'' + y'^2/y'' + (y - c_2)), x)\ => 0 = -1/y''^2 y''' + (2 y' y''^2 - y'^2 y''')/y''^2 + y'\ => 0 = - y''' + 2 y' y''^2 - y'^2 y''' + y' y''^2\ => y''' = 3 y' y''^2 - y'^2 y''' $ = P10 == 2.(2) Note that: $ y' = k <=> x^2 + y^2 = k $ These level curves are circles in the plane. In other words, the slope of the direction field at a point equals the square of its distance to the origin. Hence the solutions of the equation should be monotonically increasing, and grow faster the farther they are from the origin. == 4. An inflection point must satisfy: $ y'' = 0\ <=> (y')' = 0\ <=> (y - x^2)' = 0\ <=> y' = 2x\ <=> y - x^2 = 2x $ This is the equation satisfied by the inflection points. = P18 #let properEquationCheck(p, q) = $partialDer(#p, y) - partialDer(#q, x)$ == 1. 
$ partialDer(4 x^2 y - y, y) - partialDer(3 x + y, x) = 4x^2 - 1 - 3 = 4x^2 - 4 != 0 $ Not an exact equation. == 3. $ properEquationCheck((a x - b y), (b x - c y)) = - b - b = - 2 b $ This is an exact equation if and only if $b = 0$, in which case the equation has the form: $ a x dif x - c y dif y = 0\ <=> dif (a x^2 - c y^2) = 0 $ The general solution is $a x^2 - c y^2 = C$ == 5. $ properEquationCheck((3 x^2 (1 + ln y)), (- 2 y + x^3 / y)) = 3x^2/y - 3 x^2 /y = 0 $ This is an exact equation.\ Let its solution be $phi$ and note that: $ partialDer(phi, x) = 3 x^2 (1 + ln y) =>& phi = x^3 (1 + ln y) + C(y)\ =>& partialDer(phi, y) = x^3/y + C'(y) = - 2 y + x^3 / y\ =>& C'(y) = - 2 y\ =>& C(y) = - y^2 + C_1\ =>& phi = x^3 (1 + ln y) - y^2 + C_1 $ == 7. $ properEquationCheck((2x(1 + sqrt(x^2 - y))), (-sqrt(x^2 - y))) = -x/sqrt(x^2 - y) + x/sqrt(x^2 - y) = 0 $ This is an exact equation.\ Try regrouping into exact differentials: $ (2x(1 + sqrt(x^2 - y))) dif x + (-sqrt(x^2 - y)) dif y = 0\ dif x^2 + 2x sqrt(x^2 - y) dif x +(-sqrt(x^2 - y)) dif y = 0\ dif x^2 + 2/3 dif (x^2 - y)^(3/2) = 0\ dif (x^2 + 2/3 (x^2 - y)^(3/2)) = 0\ x^2 + 2/3 (x^2 - y)^(3/2) = C $ == 9. $ properEquationCheck(y/x, (y^3 + ln x)) = 1/x - 1/x = 0 $ This is an exact equation; try regrouping into exact differentials: $ y/x dif x + (y^3 + ln x)dif y = 0\ y dif ln x + ln x dif y + 1/4 dif y^4 = 0\ dif (y ln x) + 1/4 dif (y^4) = 0\ y ln x + 1/4 y^4 = C $ = P42 1. 
== (1) It is easy to verify that this is not an exact equation. Guess an integrating factor of the form $f(x)$, which must satisfy: $ properEquationCheck(y^2 f(x), y f(x)) = 0\ <=> 2y f(x) = y f'(x)\ $ It suffices to take $f(x) = e^(2 x)$; the original equation becomes: $ e^(2x)(x^2 + x) dif x + e^(2x) y^2 dif x + e^(2x) y dif y = 0\ e^(2x)(x^2 + x) dif x + 1/2 y^2 dif e^(2x) + 1/2 e^(2x) dif y^2 = 0\ dif (integral e^(2x)(x^2 + x) dif x + 1/2 y^2 e^(2x)) = 0\ integral e^(2x)(x^2 + x) dif x + 1/2 y^2 e^(2x) = C \ e^(2x) (1/2 x^2 - 1/2 x + 1/4) + 1/2 y^2 e^(2x) = C $ == (3) $ y/sqrt(1 + y^2) dif y = x dif y + y dif x\ dif (sqrt(1 + y^2)) = dif(x y)\ sqrt(1 + y^2) = x y + C $ == (5) - $y = 0$ is a solution of the original equation - When $y != 0$: $ (y dif x - x dif y)/(x^2) = 2x tan y/x dif x\ 2x tan y/x dif x + dif y/x = 0\ 2x dif x + 1/(tan y/x) dif y/x = 0\ dif x^2 + dif (integral 1/(tan y/x) dif y/x) = 0 $ where: $ integral 1/(tan u) dif u = integral (cos u)/(sin u) dif u = ln |sin u| + C $ The above becomes: $ x^2 + ln |sin y/x| = C $ Clearly $y$ should keep a fixed sign along such a curve, so: $ x^2 + ln (sin y/x) = C\ x^2 + ln (- sin y/x) = C\ $ are both possible general solutions. == (7) Guess an integrating factor of the form $f(x)$ and compute: $ 0 = properEquationCheck(f(x)(x^2-y^2+y), x f(x) (2y-1)) = (1-2y)f(x) - (2y - 1)(x f(x))' $ It suffices to take $f(x) = 1/x^2$, a solution of $(x f(x))' = -f(x), x f'(x) + 2 f(x) = 0$; the original equation becomes: $ (1 - (y^2 - y)/x^2) dif x + (2y - 1)/x dif y = 0\ dif x + (x(2y - 1) dif y - (y^2 - y) dif x)/x^2 = 0\ dif x + dif ((y^2 - y)/x) = 0\ x^2 + y^2 - y = C x\ $ = P27 == 1. 
=== (4) Let $t = y + 2 x - 1/2$; then: $ t' = y' + 2 = sqrt(2 t) + 2\ $ Clearly $sqrt(2t) + 2 > 0$, so: $ (dif t) / (sqrt(2 t) + 2) = dif x\ integral (dif t) / (sqrt(2 t) + 2) = x + C\ $ To compute $integral (dif t) / (sqrt(2 t) + 2)$, let $u = sqrt(2 t)$; then: $ dif u = sqrt(2)/(2 sqrt(t)) dif t\ integral (dif t) / (sqrt(2 t) + 2) = integral (sqrt(2t) dif u)/(sqrt(2t) + 2) = integral (u dif u)/(u + 2) \ = integral 1 - 2/(u + 2) dif u \ = u - 2 ln |u + 2| + C\ = u - 2 ln (u + 2) + C\ = sqrt(2 t) - 2 ln (sqrt(2 t) + 2) + C\ $ In summary, the general solution of the original equation is: $ sqrt(4x + 2y - 1) - 2 ln(sqrt(4x + 2y - 1) + 2) = x + C $ Substituting the point $(0, 1)$ gives: $ 1 - 2 ln 3 = C\ sqrt(4x + 2y - 1) - 2 ln(sqrt(4x + 2y - 1) + 2) = x + 1 - 2 ln 3 $ === (7) $ 3y^2 dif y + (16 x - 2x y^3) dif x = 0\ dif y^3 - y^3 2x dif x + 16 x dif x = 0\ e^(-x^2) dif y^3 + y^3 dif(e^(-x^2)) + 16x e^(-x^2) dif x = 0\ dif (y^3 e^(-x^2)) + 8 e^(-x^2) dif (x^2) = 0\ y^3 e^(-x^2) - 8 e^(-x^2) = C\ y^3 - 8 = C e^(x^2) $ Clearly the solution is bounded as $x -> infinity$ if and only if $C = 0$, in which case $y^3 = 8$, i.e. $y = 2$. == 3. Suppose $f(x)$ is continuous on a neighborhood $U$ of $a$, and let the solution $y(x)$ of the equation be defined on a set $X$.\ It is easy to see that $y = a$ is always a solution of the original equation, and for any $x_0$ this solution meets the requirement.\ Hence the claim to be proved is equivalent to: - $forall x_0 in RR$, there is no nontrivial solution such that $y(x_0) = a$ Take any solution $y(x)$; we claim: - $Inv(y)(a)$ is connected\ Take any $x_1, x_2$ with $y(x_1) = y(x_2) = a$ and let $x$ be a maximum point of $y$ on $[x_1, x_2]$; clearly: $ y'(x) = 0 <=> f(y(x)) = 0 <=> y(x) = a $ The same holds for a minimum point, so $y(x)$ is identically $a$ on $[x_1, x_2]$ - $Inv(y)(a)$ is closed\ This is clear from the continuity of $y$ - $Inv(y)(a)$ must therefore be a closed interval - $y$ is monotone Suppose $y(x_1) = y(x_2) = y_0$. When $y_0 = a$, $y$ is constant on $[x_1, x_2]$; below assume $y_0 != a$.\ - Claim: $exists.not x_3 in (x_1, x_2)$ such that $y(x_3) = y_0$. Otherwise, by the mean value theorem there would be zeros $x_(1 3), x_(2 3)$ of the derivative in $[x_1, x_3]$ and $[x_3, x_2]$ respectively, which means: $ y(x_(1 3)) = a, y(x_(2 3)) = a => forall x in [x_(1 3), x_(2 3)], y(x) = a => y(x_3) = a $ Contradiction! 
- By the differential equation we clearly have $y'(x_1) = y'(x_2) = f(y_0) != 0$; without loss of generality assume both are positive. However, this implies: $ exists x_3, x_4 in (x_1, x_2), y(x_4) < y_0 < y(x_3) $ To obtain $x_4$, it suffices to consider a right half-neighborhood of $x_1$: its points cannot all be less than or equal to $y_0$, for otherwise the right derivative at $x_1$ would be nonpositive, a contradiction. $x_3$ can be obtained similarly. Then, by the intermediate value theorem, some point between $x_3$ and $x_4$ takes the value $y_0$, contradicting the previous conclusion!\ - In fact, the argument above also shows that if $y$ equals $a$ at no more than one point of an interval $I$, then it is strictly monotone on $I$\ // We could take a sequence of subintervals $I_n$ of $U$ such that: // - $I_n subset I_(n+1)$ // - $a in.not I_n$ // - $I_n -> (a, +infinity) sect U$ or $(-infinity, a) sect U$\ // Concretely, for any $y in (a, +infinity) sect U \/ (-infinity, a) sect U$ there exists $n$ such that $y in I_n$\ // (the upper and lower half-interval cases are analogous and are treated together here) Let $I$ be the upper half-interval $I^+ = (a, +infinity) sect U$ or the lower half-interval $I^- = (-infinity, a) sect U$, pick any point $y_0 in I$, and define: $ F_I (y) = integral_(y_0)^y 1/f(t) dif t, forall y in I $ This guarantees that $F(y)$ is well defined (the integral on the right is always a proper integral), with $F'(y) = 1/f(y)$\ Note that $F'(y)$ keeps a constant sign and does not vanish on its domain, so $F(y)$, being a continuous strictly monotone function, has a differentiable inverse $F^(-1)(x)$\ Let $I_x$ be an interval such that $a in.not y(I_x)$; we showed above that $y$ must then be strictly monotone on $I_x$, and $1/f(y)$ also keeps a fixed sign and is nonzero. Take $I$ to be the interval of $y$-values corresponding to $y(I_x)$ (above or below $a$); on $I_x$ the differential equation gives: $ 1/f(y) dif y = dif x\ x = integral 1/f(y) dif y + C\ x = F_I (y) + C\ y = F^(-1)_I (x - C) $<equation> Moreover, the condition in the problem: $ |integral_(a)^(a plus.minus epsilon) 1/f(y) dif y| = + infinity, forall epsilon > 0 $ is exactly equivalent to the following (note that $1/f(y)$ has a fixed sign, so the improper integral either exists or is infinite; likewise the limit of the variable-upper-limit integral $F(y)$ exists or is infinite): $ "for " I = I^+ " and " I^-, quad | lim_(y -> a plus.minus) F_I (y) | = + infinity $ - Suppose this condition holds; we show that $exists x_0, y(x_0) = a$ is impossible. Otherwise, since the solution is nontrivial, we may take $y(x_1) != a$\ Since: $ Inv(y)(a) sect [x_0, x_1] != [x_0, x_1], emptyset "and is closed" $ it is not open, so its complement $[x_0, x_1] - Inv(y)(a)$ is not closed.\ In other words, there exist $x_i -> eta in X$ such that: $ y(x_i) != a\ y(eta) = a $ Without loss of generality all $x_i$ lie on the same side of $eta$; since we proved above that $y$ must be monotone, all $y(x_i)$ lie on the same side of $a$. Let $y(x_i) in I = I^+$ or $I^-$; they must satisfy @equation: $ x_i = F_I (y(x_i)) + C $ But letting $i -> +infinity$, this yields the contradiction $eta = infinity + C$ - If the condition does not hold, assume without loss of generality that $F = F_(I^+)$ satisfies: $ lim_(y -> a minus) F (y) = A in RR $ Take any $x_0 in RR$; we attempt to construct a nontrivial solution such that $y(x_0) = a$. Let: $ g(x) = cases( a quad x <= x_0, F_I^(-1)(x - C) quad x > x_0 ) $ where $C = x_0 - A$\ We only need to verify that it satisfies the differential equation. For $x != x_0$ this is obvious; it only remains to compute 
the derivative at $x_0$ and check that $y' = f(y(x_0)) = 0$ (in fact only the right derivative is needed)\ It is well known that if the right limit of the derivative of $f(x)$ at $x_0$ exists, then the right derivative exists and equals that limit. For $g(x)$: since for $x > x_0$ the function satisfies: $ g' = f(g) $ we certainly have $g'(x) -> f(g(x_0)) = 0$ as $x -> x_0$, so by the theorem just stated we indeed have: $ g'(x_0) = 0 $ This verifies that $g$ is a solution of the original differential equation with the required property. == 4. First assume $x > 0$ and make the change of variables: $ x = t^(alpha s) \ y = t^(beta s) v $ Then: $ dif x = alpha s t^(alpha s - 1) dif t\ dif y = beta s t^(beta s - 1) v dif t + t^(beta s) dif v $ The original equation becomes: $ P(t^(alpha s), t^(beta s) v) (alpha s t^(alpha s - 1) dif t) + Q(t^(alpha s), t^(beta s) v) (beta s t^(beta s - 1) v dif t + t^(beta s) dif v ) = 0\ t^(d_0 s + alpha s - 1)P(1, v) alpha s dif t + t^(d_1 s + beta s - 1) Q(1, v) beta s v dif t + t^(d_1 s +beta s) Q(1, v) dif v = 0\ t^(d_0 s + alpha s - 1 - d_1 s - beta s) P(1, v) alpha s dif t + t^(- 1) Q(1, v) beta s v dif t + Q(1, v) dif v = 0\ t^(- 1) P(1, v) alpha s dif t + t^(- 1) Q(1, v) beta s v dif t + Q(1, v) dif v = 0 "(using " d_0 + alpha = d_1 + beta ")"\ t^(- 1) dif t + t^(- 1) dif t + Q(1, v)/(P(1, v) alpha s + Q(1, v) beta s v) dif v = 0 \ $ This is in separable form and can be solved by elementary integration, after which $x, y$ can also be recovered.\ The case $x < 0$ is completely analogous.
jq BNF: ``` defs = module def* main = module term module = "module" term ";" (("include" cstr | "import" cstr "as" var) ";")* term(op) = atom (op atom)* (("as" var)? "|" term(op))? term = term(bin_op | "and" | "or" | ",") term_no_comma = term(bin_op | "and" | "or" ) atom = atom_head "?"? path atom_head = | num | str | def term | "-" atom | "if" term "then" term ("elif" term "then" term)* ("else" term)? "end" | "try" atom ("catch" atom)? | "label" var "|" term | "break" var | fold atom "as" var args | var | const args? | "[" term? "]" | "{" (obj_entry ("," obj_entry)* ","?)? "}" | "." key_opt? path | ".." | "(" term ")" def = "def" const args? ":" term ";" fold = "reduce" | "foreach" args = "(" term ("," term)* ")" num = str = @const? cstr cstr = bin_op = const = var = obj_entry = | "(" term ")" ":" term_no_comma | (var | key | str) (":" term_no_comma)? path = path_part_opt* ("." key_opt path_part_opt*)* path_part_opt = "[" path_part "]" "?"? key_opt = key "?"? path_part = term | term ":" term | term ":" | ":" term ``` We will now create a bridge between the concrete jq syntax and the high-level intermediate representation. In particular, we will simplify the following constructions of the jq syntax: - Shadowed definitions: We can define a filter with the same name and arity multiple times; for example, if we define `def one: 1; def two: one + one; def one: [1]`, then `two` will yield `2` and `one` will yield `[1]`. We can always rename definitions to eliminate such shadowing; e.g. by `def one: 1; def two: one + one; def one_: [1]`. - Definitions with variable bindings: The jq language allows for definitions of the shape `def x(a_1; ...; a_n): g`, where for any `i`, `a_i` may be either an identifier (without a leading `$`) or a variable (with leading `$`). 
We can always transform definitions to a semantically equivalent form where all arguments are non-variables by the following procedure: We repeat the following as long as there is a largest `i` such that `a_i` is a variable: We come up with a fresh identifier `b_i`, replace `g` by `b_i as a_i | g`, and replace the argument `a_i` by `b_i`. For example, this could replace `def f($x; g): $x + g` by `def f( x; g): x as $x | $x + g`.
- Nested definitions: We can nest filter definitions. This is more than just syntactic sugar to limit the scope of an auxiliary filter; for example, consider the definition `def repeat(f): f, repeat(f)`, which repeats the output of the filter `f` ad infinitum. Most jq implementations to date take quadratic time to evaluate $n$ outputs of `repeat(0)`, because every time that `repeat(f)` calls `repeat(f)`, it creates a new closure around `f` to yield the `f` for the recursive call.#footnote[
  In principle, such calls could be detected and optimized. For example, in Haskell, we can express `repeat` by
  #set raw(lang: "haskell")
  `f x = x () : f (\ () -> x ())`
  and see that `f (\ () -> 0)` executes in linear time. However, when we change the definition of `f` to `f x = x () : f (\ () -> 1 + x ())` (adding 1 to every call of `x ()`), then `f (\ () -> 0)` executes in quadratic time. This is because when the $n$-th recursive call of `f` calls `x()`, it evaluates to `1 + ... + 1 + 0`, where this sum consists of $n$ summands.
  ]
  However, nested definitions allow the same filter to be written as `def repeat(f): def rec: f, rec; rec`. This makes it clear that `f` remains the same for all recursive calls, and allows evaluation of $n$ outputs of `repeat(0)` in linear time.
  For the sake of this specification, however, we assume that no nested definitions are present. We can always extract a nested definition from its parent definition by adding all arguments from ancestor definitions to its arguments.
For our improved `repeat` example, this would yield `def repeat_rec(f): f, repeat_rec(f); def repeat(f): repeat_rec(f)`. - Conditional expressions with multiple branches: if-then-else expressions have the shape - `if c then t`, followed by arbitrarily many instances of - `elif c then t`, potentially followed by - `else e`, and terminated by - `end`. Here, `c`, `t`, and `e` denote expressions. For example: ``` if c_0 then t_0 elif c_1 then t_1 ... elif c_n then t_n else e end ``` We write such an expression equivalently as: ``` if c_0 then t_0 else if c_1 then t_1 ... else if c_n then t_n else e end ... end end ``` When `else e` is not given, then we assume that `else .` was given. Finally, in HIR, we omit the trailing `end`.
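The elif-elimination just described can be sketched as a small transformation on a tuple-encoded AST (the encoding is the editor's illustration, not part of the spec):

```python
# Sketch of the elif-elimination above (hypothetical AST shapes, not
# from the spec): a chain of (condition, then-branch) pairs becomes
# nested two-branch conditionals, and a missing else-branch defaults
# to the identity filter ".".
def desugar_if(branches, else_branch="."):
    (cond, then), rest = branches[0], branches[1:]
    tail = desugar_if(rest, else_branch) if rest else else_branch
    return ("if", cond, then, tail)

# if c0 then t0 elif c1 then t1 else e end
# ~> if c0 then t0 else if c1 then t1 else e end end
assert desugar_if([("c0", "t0"), ("c1", "t1")], "e") == \
    ("if", "c0", "t0", ("if", "c1", "t1", "e"))
# a missing else-branch defaults to "."
assert desugar_if([("c0", "t0")]) == ("if", "c0", "t0", ".")
```

The recursion mirrors the rewriting rule directly: each `elif` becomes the `if` of the nested else-branch.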
//
#let thesis_title = "Properties of relative recursive enumerability"
#let author = "<NAME>"

////////////////////////////////////////
// Theorem environment
// https://github.com/sahasatvik/typst-theorems
#import "theorems.typ": *

////////////////////////////////////////
// Tablex
// Extended table support
// https://github.com/PgBiel/typst-tablex
#import "tablex.typ": tablex, gridx, hlinex, colspanx

#let myresult = thmbox.with(
  base_level: 1,
  titlefmt: strong,
  bodyfmt: emph,
  inset: 0em,
  padding: (top: 0.0em),
  separator: [#h(0.5em)] // using my change to theorems.typ - pull requested
)

#let theorem = myresult("theorem", "Theorem")
#let lemma = myresult("theorem", "Lemma", bodyfmt: text)
#let proposition = myresult("theorem", "Proposition")
#let conjecture = myresult("theorem", "Natural Conjecture")

#let qed = [#h(1fr) $square$]
#let proof = thmplain(
  none,
  "Proof",
  titlefmt: strong,
  bodyfmt: body => [
    #body #qed
  ],
  padding: (top: 0em, bottom: 0em),
  inset: 0em,
  separator: [:#h(1em)]
).with(numbering: none)

#let theoremRef(name) = thmref(name)[Theorem]
#let lemmaRef(name) = thmref(name)[Lemma]
#let chapRef(num) = ref(label("chapter" + str(num)), supplement: "Chapter")

// Convenience symbols
#let phi = sym.phi.alt
#let epsilon = sym.epsilon.alt
#let join = sym.plus.circle
#let neq = sym.eq.not // not equal
#let leq = sym.lt.eq // less than or equal
#let geq = sym.gt.eq // greater than or equal
#let st = sym.bar.v // vertical bar: "such that"

////////////////////////////////////////
// Some standard notation

// Set difference
#let setdiff(a, b) = $#a tilde.op #b$
// Turing interval
#let turinginterval(a, b) = $[#a, #b]_T$
// Turing less than and leq. Note that we have extra space after this symbol. See https://github.com/typst/typst/issues/877. The
// workaround is to specify 0 space ourselves.
#let ltt = $scripts(<)_T$ #let leqt = $scripts(lt.eq)_T$ #let geqt = $scripts(gt.eq)_T$ #let notleqt = $scripts(lt.eq.not)_T$ #let equivt = $scripts(equiv)_T$ #let notequivt = $scripts(equiv.not)_T$ // "Zero jump" #let zerojump = $emptyset'$ // Pseudojump V applied to X #let pseudojump(X, V) = $#X join #V^(#X)$ // Logical implication, informally #let implies = $arrow.r.double$ #let iff = $arrow.l.r.double$ // Calculation converges #let converge = $#h(0em) arrow.b #h(0.05em)$ #let diverge = $#h(0em) arrow.t #h(0.05em)$ // "State transition" #let trans(a, b) = $#a arrow.r.bar #b$ // r.e.[Z] #let reIn(z) = $"r.e."[#z]$ // REA[Z] #let reInAbove(z) = $upright("REA")[#z]$ // dREA[Z] #let dreInAbove(z) = $upright("d")reInAbove(#z)$ // Tuple with angle brackets #let angletup(..z) = $lr(angle.l #z.pos().join([, ]) angle.r)$ // Standard pairing function #let pair(a, b) = $angletup(#a, #b)$ // Restriction of a to b #let restr(a, b) = $#a harpoon.tr #b$ // Concatenation of sequences a and b #let concat(a, b) = $#a paren.t #b$ #let concatone(a, b) = $concat(#a, #angletup(b))$ // Functional F relativized by Y, evaluated at x // Define a macro because of the tedium of backslashing the semicolon // Note the semicolon after #F to force the parser out of code-mode. #let fff(F, Y, x) = $#F;(#Y\; #x)$ #let setconcat(M, N) = $#M\; #N$ // Inline 1/2. Typst does a bad job with fractions inline, insisting on using a vertical layout. It is surprising. #let inlinefrac(n,m) = $#n #h(0em) \/ #h(0em) #m$ #let halfof(n) = inlinefrac(n,2) #let half = halfof(1) // "Finite sequences of" #let finseq(a) = $#a^(< infinity)$ // A "column" of a set: those pairs selected on the second coordinate by b. #let column(a, b) = $#a^([#b])$ // Row j of an omega^2 set of cycles, and a more general "slice" of a higher-dimensional set #let row(j) = $cal(R)_#j$ #let slice(..j) = $cal(S)_(#j.pos().join([,]))$ // A cycle pattern. 
Note awkward negative space to get good placement of the subscript #let pattern(s) = $cal(P)#h(-0.2em)_#s$ // State/stage/strategy/row numbers/names, with nonbreaking space //#let set-normal(z) = text(style: "normal")[#z] #let state(num) = [state~#num] // it's oddly difficult to make the prefix upright in all contexts. I couldn't work out how to use text styling: it kept coming out // italic inside math mode. (See #set-normal, just above.) #let named-state(prefix, num) = [state~$upright(prefix)$#num] #let nstate(num) = named-state("N", num) #let pstate(num) = named-state("P", num) #let strat(s) = [strategy~#s] #let stalpha = [#strat($alpha$)] #let stg(num) = [stage~#num] #let theRow(j) = [row~$row(#j)$] #let cycle(name) = [cycle~#name] // The "equality" property #let Eq(x, y) = $sans("Eq")(#x, #y)$ #let blankEq = $Eq(ast.op, ast.op)$ // with stars as arguments //////////////////////////////////////// // The names of things in the Pattern Lemmas #let patternName(n) = $sans(#n)$ #let prelimCrampedRow = patternName("prelimCrampedRow") #let finalCrampedRow = patternName("finalCrampedRow") #let crampedRow = patternName("crampedRow") #let uncrampedRow = patternName("uncrampedRow") #let abandonedRow = patternName("abandonedRow") #let prelimRow = patternName("prelimRow") #let finalRow = patternName("finalRow") #let validPattern = patternName("validPattern") #let validPatternForP = patternName("validPatternForP") //////////////////////////////////////// // Small-scale layout things #let stage-hdr(name) = [Stage #name: #h(1em)] #let case(name) = [#smallcaps([Case #name]) #h(1em)] #let phase(name) = { set text(font: "Sans Serif") [Phase #name #h(1em)] } #let squad = h(1em) // The and wedge doesn't get enough space around it in display math. 
// Try this
#let sand = $#h(0.5em) and #h(0.5em)$

////////////////////////////////////////
// Global formatting
#set par(justify: true)
#set text(font:"New Computer Modern")

// Based on an answer in the Discord from PgSuper (2023-04-13 1:43 PM)
// See issue #9 on my GitHub
#let setupenum(doc, prefix: "", formats: ("1.", "(a)", "i.")) = {
  set enum(
    full: true,
    numbering: (..n) => {
      let n = n.pos()
      if n.len() > 2 {
        numbering(formats.at(2), n.last())
      } else if n.len() == 2 {
        numbering(formats.at(1), n.last())
      } else {
        numbering(prefix + formats.at(0), ..n)
      }
    }
  )
  doc
}
#let defEnum(..fmts) = {
  show: doc => setupenum(doc, formats: fmts)
}
#show: doc => setupenum(doc)

////////////////////////////////////////
// Title page
#align(horizon + center)[
  #[
    #set text(weight: "bold", size: 16pt)
    #thesis_title
  ]
  #v(1in)
  by \
  #author

  #v(1in)
  A dissertation submitted in partial fulfillment \
  of the requirements for the degree of \
  Doctor of Philosophy \
  (Mathematics) \
  in The University of Michigan \
  1999

  #v(1in)
  #box(
    width: 3in
  )[
    #align(left)[
      #set par(hanging-indent: 0.5in)
      Doctoral Committee: \
      Professor <NAME>, Chair \
      Professor <NAME>, \
      Professor <NAME>, \
      Assistant Professor <NAME>, \
      Associate Professor <NAME>
    ]
  ]
  #pagebreak()
  #pagebreak()
  #grid(
    columns: (0.5in, 2in),
    sym.copyright,
    align(horizon + left)[
      #underline[<NAME>] #h(1em) 1999 \
      All Rights Reserved
    ]
  )
  #pagebreak()
]

#heading(numbering: none, "ACKNOWLEDGEMENTS")
#v(0.5in)

I would first like to thank my adviser, <NAME>, without whose patience, encouragement, and help this thesis would not exist.

Thanks also go to all my friends, both inside and outside the Department, and to my family. Without their constant support these past six years would have been long indeed.

Finally, I will be eternally grateful to the people of Michigan and the other United States, whose generosity made my studies at the University possible, and whose unfailing hospitality made me feel welcome.
#pagebreak()
#outline()
#pagebreak()

// Experimental. Based on something I found in the Discord from user Heinenen 04/05/2023
#set heading(numbering: "1.")
#show heading.where(level: 1): it => {
  set heading(numbering: "I")
  pagebreak(weak: true)
  v(2in)
  set text(weight: "bold")
  align(center)[
    CHAPTER #counter(heading).display()\
    #v(0.5em)
    #it.body
    #v(0.8in)
  ]
  // We number footnotes by chapter.
  // This doesn't really belong here, in a formatting function, but where else?
  // Maybe define a new function #chapter that makes the header and resets this counter.
  counter(footnote).update(0)
}

= Introduction

== Definitions and notation

The notation used in this paper is largely standard, and the reader is directed to @Soare1987 for an exposition. We note the following.

Uppercase Greek letters, $Phi, Psi, dots$ will denote recursive functionals, with associated uses $phi, psi, dots$ where the oracle will be understood from context. Without loss of generality we assume that $phi(x, s)$ is increasing in both arguments.

We use $subset$ to denote the subset relation, and $subset.neq$ to denote a proper subset. Set difference is denoted $setdiff(X, Y)$. It will be convenient to use the notation $turinginterval(X, Y) = { Z st X leqt Z leqt Y }$.

We will make frequent use of Lachlan's hat-trick. Given an enumeration ${C_s}_(s geq 0)$ of an r.e. set $C$ define for each stage $s geq 0$
$ c_s = cases(
  min(setdiff(C_s, C_(s-1))) quad &"if" setdiff(C_s, C_(s-1)) neq emptyset\,,
  max(C_s union {s}) &"otherwise,"
) $
where we take $C_(-1) = emptyset$. We say that the stage $s$ is $C$-_true_ if $restr(C_s, c_s) = restr(C, c_s)$. Now for the $C$-recursive function $Phi(C)$ we define
$ hat(Phi)_s(C; x) = cases(
  Phi_s(C_s; x) quad & "if this computation converges and" phi(x, s) < c_s\,,
  "undefined" & "otherwise,"
) $
and
$ hat(phi)(x,s) = cases(
  phi(x, s) quad & "if" hat(Phi)(C_s; x) converge\,,
  0 & "otherwise."
) $
The point of all this is the following.
If $Phi\(C; x) converge$, then cofinitely often $hat(Phi)_s(C; x) converge$, and for every $C$-true stage $s$, $hat(Phi)_s(C_s; x) arrow.r.double hat(Phi)(C; x) converge$. The hat-trick serves to weed out at $C$-true stages all but the correct computations. Finite sequences are denoted variously with parentheses, $(x_0, dots, x_(n-1))$ and angle brackets $angle.l x_0, dots, x_(n-1) angle.r$. The length of the sequence $alpha$ is denoted $|alpha|$. The empty sequence, $angle.l angle.r$, is written as $emptyset$. The concatenation of the finite sequences $sigma$ and $gamma$ is written as $concat(sigma, gamma)$. For $e lt.eq |alpha|$, $restr(alpha, e)$ is the initial segment of $alpha$ of length $e$. We will commonly be constructing a set, $X$, to be recursively enumerable relative to a given set, C ("$X$ is $reIn(C)$".) The most convenient way to do this is as follows. We actually construct an r.e. set, $U$, of _axioms_. Each axiom is (a code for) an ordered triple $(D, x, e)$ where $D$ is a finite set and $x$ and $e$ are both natural numbers. Then the set $ X = U^C = { x st (exists e)[ (restr(C, e), x, e) in U] } $ is $reIn(C)$. The axiom $(restr(C, e), x, e)$ _witnesses_ the fact that $x in U^C$, and $e$ is the _use_ of the enumeration. All $reIn(C)$ sets are realizable in this way (up to degree). Note that, once it is defined, $U$ does not depend essentially in any way on $C$. Thus we may consider, for _any_ set $Y$, the $reIn(Y)$ set $U^Y$. $U$ then becomes a _pseudojump operator_, $U : Y arrow.r.bar pseudojump(Y, U)$. These operators will appear in #chapRef(6). A set $Y$ is _recursively enumerable in, and above_ $X$ ("Y is $reInAbove(X)$") if $Y$ is $reIn(X)$ and $X leqt Y$. If, instead, $Y$ is the difference of two $reIn(X)$ sets, and $X leqt Y$ then Y is said to be $dreInAbove(X)$. = A patched proof of the weak density of the properly d.r.e. degrees <chapter2> == Introduction In @CLW1989 a proof is given of the weak density of the properly d.r.e. 
degrees:

#theorem[
Given recursively enumerable sets $C ltt G$ there is a d.r.e. set $D$ not of r.e. degree such that $C ltt D ltt G$.
<theorem2.1>
]

The proof given in @CLW1989 has two technical but important flaws. The first, involving the timing of injuries caused by different strategies competing on the priority tree, was noted and fixed by LaForte in @LaForte. The second, involving the claim that the various functionals defined in the construction (specifically, the $Delta(C)$ functionals) are always defined consistently, was noted by the present author and is discussed here.

We assume the reader has access to a copy of @CLW1989. When discussing the construction in @CLW1989 during the remainder of this section we will use notation matching the rest of this thesis. This notation varies slightly from that used in @CLW1989. We do, however, refer to the cycle-state numbers as defined in @CLW1989, rather than their equivalents (if any) in this paper.

=== The central claim

The argument in @CLW1989 constructs a d.r.e. set $A$ satisfying each of the requirements
$ R_e: quad A neq Theta_e(E_e) or E_e neq Phi_e(C join A) $
where $E_e$ is an r.e. set, and $Theta_e$ and $Phi_e$ are partial recursive functionals.

The basic module presented to satisfy $R_e$ consists of an infinite collection of _cycles_, indexed by $omega^2$. Together, these cycles attempt to define functionals $Delta(C)$ and $Gamma_j(C)$ (for $j in omega$) such that, if the strategy fails to satisfy $R_e$, one of these functionals demonstrates $G leqt C$, contrary to assumption. Cycle $(j, k)$ is allowed to define the values $Delta(C\; j)$ and $Gamma_j(C; k)$.

After the description of the basic module (@CLW1989[p141]) two claims are made:

+ "Whenever cycle $(j,k)$ is started, any previous version of it has been cancelled and its functionals have become undefined through $C$-changes."
+ Because of 1, "$Gamma_j$ and $Delta$ are defined consistently."

We will demonstrate that both of these claims are false.
In the case of claim 2 this means that, even if claim 1 were true, this still wouldn't be enough to show that the functional $Delta$ is defined consistently.

=== Counterexamples

Consider the case in which $C = emptyset$, so that no $C$-change ever occurs, and once we define a value for a functional we are stuck with it. Write $Delta(j)$ for $Delta(emptyset \; j)$.

We first show that 1 does not hold. Consider the situation in which, at stage $t$, cycle $(j,k)$ is in state (5), cycle $(j, k+1)$ is in state (10) and cycle $(j+1,0)$ is in state (7). Now suppose that there are stages $t < s < s' < s''$, which are the next three stages at which any of the cycles of the strategy act, such that those actions are:

- Stage $s$: #h(1em) Cycle $(j+1, 0)$ defines $Delta(j+1)$ with use $v$.
- Stage $s'$: #h(1em) Cycle $(j, k)$ sees the $G$-permission it has been waiting for and stops cycles $(j, k+1)$ and $(j+1, 0)$. At this point, cycle $(j,k)$ advances to state (7).
- Stage $s''$: #h(1em) Cycle $(j,k)$ sees the stage (which it calls $s_2$) it has been waiting for, and so (re)starts cycle $(j+1, 0)$.

The value for $Delta(j+1)$ that cycle $(j+1, 0)$ defined at stage $s$ has not become undefined, and claim 1 is false.

#v(1em)

Now suppose that somehow we patch the algorithm so that claim 1 holds, without changing any of the other essential features of the construction. We show that it still may be that the functional $Delta$ is not defined consistently. Now the problem is that, for a given value $j$, any of the cycles $(j, k)$ (for $k in omega$) may define $Delta(j)$, and it is these definitions which clash.

So consider the situation in which, at stage $t$, cycle $(j, k)$ is in state (5) and cycle $(j, k+1)$ is in state (7).
Suppose also that there are stages $t < s < s' < s''$, which are the next three stages at which any of the cycles of the strategy act, such that these actions are: - Stage $s$: #h(1em) Cycle $(j, k+1)$ sees the stage (called $s_2$) it is waiting for, and so defines $Delta(j)$ with use $v'$, advancing to state (10). - Stage $s'$: #h(1em) Cycle $(j, k)$ gets the $G$-permission it has been waiting for and advances to state (7), stopping cycle $(j, k+1)$. - Stage $s''$: #h(1em) Cycle $(j, k)$ sees _its_ version of stage $s_2$ (this is what it waits for in state (7)), and so attempts to define its own value of $Delta(j)$. We further suppose that $G_s(j) neq G_(s'')(j)$ (this assumption is independent of any of the activity at stages $s$, $s'$ and $s''$). Then the values of $Delta(j)$ that cycles $(j,k)$ and $(j, k+1)$ define will differ, but will both be present at stage $s''$. When boiled down, the problem is the tension between the definitions of the functions $Delta(C)$ and $Gamma_j(C)$. The apparent need to keep the definition of $Gamma_j(C)$ synchronized with enumerations into the set $G$ conflicts with the more subdued approach needed to keep $Delta(C)$ consistent. The inconsistency sneaks in when we "back the wrong horse," in committing to the wrong $G$-change, rather than waiting for the one associated with a $Delta(C \; j)$-definition to pan out. Now, we have no way of knowing ahead of time which horse to back: there are no Pharlaps #footnote[Gratuitous Australian reference] here. If we hold back and hope that the $Delta$ route pans out we may be left dealing with the fact that we have ignored a (now) vital $G$-change. If we jump at the $G$-change though (as in @CLW1989) we are left with the possibility of the inconsistency of $Delta(C)$. The author tried, and failed, for some time to reconcile these conflicting demands. 
I thank my adviser, <NAME>, for suggesting the correct compromise: we back both horses, hedging our bets until we have a better idea which is likely to be the right one.

This chapter, then, gives a correct proof of #theoremRef(<theorem2.1>), slightly strengthening it to obtain the following result:

#theorem[
Given r.e. sets $C ltt G$ there are d.r.e. sets $D ltt F$ such that $turinginterval(D, F) subset turinginterval(C, G)$ and there is no r.e. set $E in turinginterval(D, F)$.
<theorem2.2>
]

== The construction

We will construct d.r.e. sets $A$ and $B$ such that $D = C join A$ and $F = C join A join B$ satisfy the theorem. To do this we satisfy all requirements of the form
$ R_e: quad A neq Phi_e(E_e) or thin E_e neq Psi_e(C join A join B) $
and
$ P_e: quad B neq Theta_e(C join A) $
where ${angle.l E_e, Phi_e, Psi_e angle.r}_(e geq 0)$ enumerates all triples in which $E_e$ is an r.e. set and $Phi_e$ and $Psi_e$ are recursive functionals. ${Theta_e}_(e geq 0)$ merely enumerates the recursive functionals. We will ensure that $A leqt G$ and $B leqt G$ by delayed, direct permitting.

The first thing we do is to give basic modules for each of the two types of requirement. It is useful to note here that elements are enumerated into or out of $A$ only in satisfying $R_e$ requirements, and $B$ receives elements only in satisfying $P_e$ requirements. We also note that $B$ turns out to be r.e., and not just d.r.e., as we never need to remove elements from $B$ once they are enumerated in.

=== The basic module for $R_e$ <basicModuleRe>

The basic module is very nearly the same as the one given in @CLW1989. (It appears to be somewhat different here, as we use slightly different notation, and a reduction in the number of states.) There is an extra state necessary to avoid $Delta$-inconsistency.

Suppose $e$ is fixed and write $angletup(E, Phi, Psi)$ for $angletup(E_e, Phi_e, Psi_e)$. We will describe the strategy for satisfying $R_e$.
It consists of a $(omega^2)$-sequence of cycles ordered lexicographically. Cycle $(0,0)$ starts first, and each cycle $(j,k)$ may start cycles $(j, k+1)$ and $(j+1, 0)$, as well as stopping all cycles $> (j,k)$. The strategy as a whole threatens to demonstrate that, if no cycle satisfies the requirement, then $G leqt C$ _via_ one of the functionals $Gamma_j(C)$ (for $j in omega$) or $Delta(C)$. The cycle $(j, k)$ may define the values $Gamma_j(C\; k)$ and $Delta(C\; j)$. We refer to the collection $row(j) = { (j, k) st k in omega }$ as the _$j$-th row of cycles_.

All cycles begin in state 0. A cycle is _started_ by letting it pass from state 0 to another state, as determined by its history. In starting, a given cycle $(j, k)$ may in fact start subsequent cycles at the same stage, depending on whether cycle $(j, k)$ has been abandoned in the past. This may start a "cascade" of cycle-startings. See state 0, below.

A cycle is _reset_ by putting it back into state 0, returning its restraints to 0 and undefining the values of its parameters $u$ and $v$.
// (Note that the paper @CLW1989 uses "_cancelled_" for this operation. We reserve this word for another purpose: see the
// description of the priority tree construction in @section2.2.3 below.)
A cycle is _abandoned_ by returning its restraints to 0 and stopping all activity for that cycle. This is done when a cycle has categorically failed to satisfy $R_e$, due to the squandering of the various $G$-changes to which it has access. We gain through this failure the correct definition of a value for one of the functionals $Gamma_j(C)$ or $Delta(C)$.

A cycle is said to _act_ whenever it moves from one state to another. An exception to this is the transition from state~2 to state~3: this transition is made purely for bookkeeping purposes.
Also, when (say) cycle $(j, k)$ acts and in doing so resets cycles to its right, we entirely discard any functionals $Gamma_l(C)$ for $l > j$, starting them completely afresh if ever needed.

Cycle $(j,k)$ of the strategy proceeds as follows.

0. Until given the go-ahead, do nothing. When told to start, if $k = 0$ we check if row $row(j)$ has been previously abandoned _en masse_. If so, advance directly to state~8 and follow the instructions at that state. Otherwise check if cycle $(j, k)$ itself has been abandoned. If so, there is no point in trying to satisfy $R_e$ with this cycle, so jump straight to state~7 and follow the instructions at that state. Otherwise, choose a new witness $x$ larger than any number used so far in the construction (including all currently imposed $A$-restraints, and the current stage) and larger than both $j$ and $k$. Advance to state~1.

+ Wait for a stage $s_1$ at which the following statement, which we call $Eq(x, s_1)$, holds:
$ ( A(x) = Phi(E \; x) )[s_1] sand ( restr(E, phi(x)) = restr(hat(Psi)(C join A join B), phi(x)) )[s_1] $
[Note that if $s_1$ doesn't exist, we automatically satisfy the requirement.]
If $G_(s_1)(k) = 1$ we jump straight to state~7 and follow the instructions there. Otherwise put $u = (hat(psi) phi(x))[s_1]$. Restrain $restr(A, u)$ and $restr(B, u)$, put $Gamma_j(C; k) = G_(s_1)(k) thin (= 0)$ with use $gamma_j(k) = u$ and start cycle $(j, k+1)$ to run simultaneously. Advance to state~2.

+ Wait for a stage $t_1$ at which either
  + $restr(C_(t_1), u) neq restr(C_(s_1), u)$; or
  + $G_(t_1)(k) neq G_(s_1)(k)$.

  [Note that we do not wait for a stage $t_1$ at which $C_(t_1) neq C_(t_1 - 1)$, (or where there is similar change in $G$) but rather for a change from the situation at stage $s_1$. In either case, once we combine the various strategies using a priority tree (see @section2.2.3 below) #stalpha is not "accessible" at every stage.
There may be times at which a relevant $G$- or $C$-change occurs but $alpha$ is not accessible, only to become accessible later. The reaction to the change, and hence permission, is "delayed" until the strategy is accessible. It is common in these situations to account for the "gaps" in the accessibility of $alpha$ by defining for each node $beta$ in the priority tree an auxiliary enumeration for the r.e. set $C$:
$ C_s^beta = cases( C_s &"if node" beta "is accessible at stage" s\, , C_(s-1)^beta quad &"otherwise" ) $
where we take $C_(-1)^beta = emptyset$. Here we do _not_ use this construct. The part of the verification argument, below, which deals with the permission delays inherent with our set up (#lemmaRef(<lemma2.25>)) would only be complicated by the use of such variant enumerations.]

  Now, if
  + $restr(C, u)$ changes first, reset all cycles $> (j, k)$, drop the $A$- and $B$-restraint of cycle $(j, k)$ back to 0, and return to state~1. While if
  + $G(k)$ changes first, it is time to see if we need to hedge our bets. There are two subcases.
    + If some cycle $(j, k')$ of $row(j)$ is currently in state~5 or 6 (there is at most one, by #lemmaRef(<lemma2.3>) below) we cannot act on the $G(k)$ change yet. We set the marker $mu(x) = v_(s_1)(j, k')$, defined below, (with the intention of allowing $x$ to enter $A$ later with a $restr(C, mu(x))$ change) and advance to state~3. Recall that this transition does _not_ count as an action.
    + If no such $(j, k')$ exists we reset all cycles $> (j, k)$, enumerate $x$ into $A$ and advance to state~4.

+ Wait for a stage $t_2$ such that $restr(C_(t_2), mu(x)) neq restr(C_(t_1), mu(x))$. (The idea here is that the change in $restr(C, mu(x))$ has undefined the computation of $Delta(j)$ previously set by cycle $(j, k')$, allowing it to be redefined in the future. This is how we avoid the $Delta$-inconsistency of the original paper, @CLW1989.) Reset all cycles $> (j, k)$, enumerate $x$ into $A$ and advance to state~4.
+ Wait for a stage $s_2$ such that $Eq(x, s_2)$. [As before, if $s_2$ doesn't exist we automatically satisfy the requirement.] If $G_(s_2)(j) = 1$ we jump straight to state~8 and follow the instructions there. Otherwise, we note that since $ (Phi(E\; x))[s_2] = A_(s_2)(x) neq A_(s_1)(x) = (Phi(E\; x))[s_1] $ we must have $restr(E_(s_2), phi_(s_1)(x)) neq restr(E_(s_1), phi_(s_1)(x))$, and since $E$ is r.e. this change is permanent and hence a target. Put $v = (hat(psi) phi(x))[s_2]$, restrain $restr(A, v)$ and $restr(B, v)$, put $Delta(C\; j) = G_(s_2)(j) thick (= 0)$ with use $delta(j) = v$ and start cycle $(j+1, 0)$ to run simultaneously. Advance to state~5. + Wait for a stage $t_3$ at which either + $restr(C_(t_3), v) neq restr(C_(s_2), v)$; or + $G_(t_3)(j) neq G_(s_2)(j)$. On reaching stage $t_3$ reset all cycles $> (j, k)$. Then + If $restr(C, v)$ changes first, return the $A$- and $B$-restraints to $u$, and return to state~4. + Otherwise, remove $x$ from $A$ and advance to state~6. Note that now $restr(A_(t_3 + 1), u) = restr(A_(s_1), u)$. + Wait for $restr(C_(t_4), u) neq restr(C_(s_1), u)$. If this ever occurs, advance to state~7. If $restr(C, u) = restr(C_(s_1), u)$ we satisfy the requirement by $ restr(hat(Psi)(C join A join B), phi_(s_1)(x)) &= (restr(hat(Psi)(C join A join B), phi(x)))[s_1] \ &= (restr(E, phi(x)))[s_1] \ &= restr(E, phi_(s_1)(x)) $ + We only reach this state if it is safe (in terms of the consistency of $Gamma_j(C)$) and accurate to set $Gamma_j(C; k) = 1$ with use 0. Do so, unless it has already been done, (permanently) abandon cycle $(j, k)$ and start cycle $(j, k+1)$. Once we reach this state, we define a value for $Gamma_j(C; k)$ which we _know_ to be correct, since $G(k)$ has already changed, and won't change again, $G$ being r.e. 
Also, the "once-off" nature of the $G$-change means that the only way cycle $(j,k)$ is going to be able to satisfy requirement $R_e$ in the future, even with a new witness, is by being infinitely often in state~1; it cannot enumerate its witness into $A$, as the $G$-change it needs has already come and gone. Although it is possible that $(j, k)$ will be able to succeed in this manner, it is improbable. More likely is that cycle $(j, k)$ will be eventually stuck in state~2, waiting forlornly for an impossible $G$-change, but in the meantime computing a correct value for $Gamma_j(C; k)$. We may as well cut our losses and simplify by abandoning this cycle: we content ourselves with the modest gain of a single correct value for $Gamma_j(C; k)$ and the knowledge that if we end up permanently abandoning _all_ cycles like this, we'll be able to compute $G$ from $C$ (see #lemmaRef(<lemma2.17>) below), a contradiction.

+ We only reach _this_ state if it is similarly safe to set $Delta(C\; j) = 1$ with use 0. Do so, unless it has already been done. We permanently abandon the whole of row $row(j)$, and since there is no need to keep any of this row in business, it is convenient for technical reasons to reset every cycle in row $row(j)$, put cycle $(j, 0)$ into state~8, and start cycle $(j+1, 0)$.

  The same comments as in state~7 just above apply here, but the result of the failure of cycle $(j, k)$ is even more stark. Now we have defined a correct value for $Delta(C; j)$, and have seen (and "wasted") the only change in $G(j)$ that will ever occur. Thus all cycles which rely on a change in $G(j)$ at some point are out of luck in the future, and we may as well not bother with them. These cycles include _all_ of row $row(j)$, which is why we permanently abandon this whole row. We content ourselves now with the single correct value $Delta(C\; j)$.
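The lexicographic bookkeeping that recurs throughout this module (a cycle that acts resets every cycle strictly to its right) can be sketched in a few lines; the dictionary-of-states representation below is only the editor's illustration, not part of the construction:

```python
# Illustrative bookkeeping only (not the construction itself): cycles
# are indexed by pairs (j, k) ordered lexicographically, and when cycle
# (j, k) acts it resets every cycle strictly to its right to state 0.
def reset_right(states, jk):
    """Return states with every cycle > jk (lexicographically) reset to 0."""
    # Python compares tuples lexicographically, matching the cycle order.
    return {cycle: (0 if cycle > jk else st) for cycle, st in states.items()}

states = {(0, 0): 5, (0, 1): 7, (1, 0): 2}
after = reset_right(states, (0, 0))
assert after == {(0, 0): 5, (0, 1): 0, (1, 0): 0}
```

Tuple comparison gives exactly the ordering in which cycle $(j, k)$ precedes $(j, k+1)$ and every cycle of row $j+1$.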
=== The basic module for $P_e$

The $P_e$ requirements are simpler than those of the first kind, and we implement a standard diagonalization approach to satisfy them. To ensure that $B leqt G$ we again use a system of cycles, but now we only have a one-dimensional arrangement. Again, suppose $e$ is fixed, and write $Theta$ for $Theta_e$. We have an $omega$-sequence of cycles, and again threaten to show $G leqt C$, by means of a functional $Xi(C)$.

_Starting_ and _abandoning_ have the same definitions as before. _Resetting_ is similar, but now we need only worry about the single parameter, $u$. _Acting_ now happens with any change of state, as we have no equivalent of the bookkeeping state~3 to worry about. To distinguish the names of the states from those in the module for the $R_e$-requirements we will prefix the numbers here with the letter P. Cycle $k$ proceeds as follows.

// TODO: this is hacky. We set up num for the rest of the document with a P prefix, and then undo that below. How can we restrict
// the scope?
#show: doc => setupenum(doc, prefix: "P")

0. Until given the go-ahead, do nothing. When told to start, check whether cycle $k$ has been abandoned in the past. If so, jump straight to state P4 and follow the instructions there. Otherwise, choose a new witness $y$ larger than any number mentioned in the construction so far (including all currently defined $B$-restraints, and the current stage) and larger than $k$. Advance to state P1.
+ Wait for a stage $s_1$ at which $ (B(y) = hat(Theta)(C join A\; y))[s_1] $ and let $u = hat(theta)_(s_1)(y)$. Restrain $restr(A, u)$, put $Xi(C\; k) = G_(s_1)(k)$ with use $xi(k) = u$ and start cycle $k+1$ to run simultaneously. Advance to state P2. [Note that if there is no such stage $s_1$ we immediately satisfy the requirement, by diagonalization.]
+ Wait for a stage $t_1$ at which either
  + $restr(C_(t_1), u) neq restr(C_(s_1), u)$; or
  + $G_(t_1)(k) neq G_(s_1)(k)$.
  On reaching $t_1$, reset all cycles $k' > k$.
Then
  + If $restr(C, u)$ changes first, set the $B$-restraint of this cycle back to 0 and return to state P1.
  + Otherwise, enumerate $y$ into $B$. This has been permitted (perhaps after a delay) by the change in $G(k)$. Proceed to state P3.
+ Wait for a stage $s_2$ at which $ (B(y) = hat(Theta)(C join A\; y))[s_2] $ If there is no such stage, $y$ again witnesses the success of our strategy. If such an $s_2$ exists, note that we have $ (hat(Theta)(C join A\; y))[s_2] = B_(s_2)(y) = 1 neq 0 = B_(s_1)(y) = (hat(Theta)(C join A\; y))[s_1]. $ By the restraint on $A$, $restr(A_(s_2), u) = restr(A_(s_1), u)$ so we must have $restr(C_(s_2), u) neq restr(C_(s_1), u)$. This change in $C$ allows us to redefine $Xi(C\; k)$, which we do after advancing to state P4.
+ It is now safe and correct to define $Xi(C\; k) = 1$ with use 0. Do so, unless this has already been done, permanently abandon cycle $k$, and start cycle $k+1$. [This is just like state~7 in the basic module for the $R_e$ requirements.]

// TODO: hacky (see above)
#show: doc => setupenum(doc)

=== Combining the modules <section2.2.3>

Now that we have described the strategy for satisfying a single requirement in isolation we must consider how to satisfy all requirements simultaneously. Since each strategy may well act infinitely often we must use a _priority tree_ to manage this. The standard reference for this technique is Chapter XIV of Soare @Soare1987.

#let outcome = $concatone(alpha, (j, k))$
In @LaForte LaForte introduced a path restraint to deal with a problem in the original construction in @CLW1989. Basically, that construction worked the tree angle in an "obvious" way. As soon as #stalpha's cycle $(j, k)$ became "active", #outcome was used as the outcome; this happened as soon as cycle $(j, k)$ chose a witness. (For the moment, consider the case of $R_e$-strategies.)
However, if cycle $(j, k)$ later sees a relevant computation converge and imposes a restraint $r$, those strategies in the subtree below #outcome started in the meantime will not have chosen witnesses to respect this new restraint. This is naturally a Bad Thing. LaForte ingeniously solves the problem by introducing the path restraint: as the new restraint is imposed it is incorporated into the path restraint for strategies below #outcome and respected "after the fact." Strategies below #outcome constantly check the extent of the path restraint being imposed on them.
#let outcome = none
This method works fine, as seen in @LaForte. However, it is not particularly pretty. In particular, the point of tree-based arguments is to remove the need for strategies to themselves keep an eye on the restraints set by other strategies. If possible, we would like to avoid the path restraint, and there is a simple trick that lets us do so. We only follow a child corresponding to cycle $(j,k)$ when cycle $(j, k)$ has actually imposed a restraint. Until that happens we follow a child corresponding to the rightmost cycle to the left of $(j, k)$ which imposes restraint. This is perfectly safe, as, so long as $(j, k)$ imposes no restraint, we cannot injure any computations by letting the strategies below the leftward cycle operate. Once such a restraint is imposed, we automatically respect it by starting to follow a child corresponding to $(j, k)$. The only trick we actually need is to add a new child, $concatone(alpha, -1)$, to be followed when no cycles at all of #stalpha impose a restraint.

Each cycle can impose restraint in two "waves". By seeing $Eq(x, s_1)$ cycle $(j, k)$ restrains $restr(A, u)$ and $restr(B, u)$. Later, on seeing $Eq(x, s_2)$, it further restrains $A$ and $B$ as far as $v$. Thus, corresponding to each cycle $(j, k)$ we will have _two_ outcomes, $((j, k), 1)$ and $((j, k), 2)$, progressively used to respect these two waves of restraint.
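With this trick the child to follow can be read off from the cycle states alone. A minimal sketch (the function name and the encoding of cycles as pairs are ours; the state sets record which states hold the first wave of restraint, 2–4, and which the second, 5–6):

```python
# Outcome selection for an R_e-strategy: follow the rightmost
# restraint-imposing cycle, flagged with the wave of restraint it holds.
def select_outcome(states):
    """states: dict mapping a cycle (j, k) to its current state number."""
    restraining = [c for c, st in sorted(states.items()) if st in (2, 3, 4, 5, 6)]
    if not restraining:
        return -1                 # the new child: no restraint at all
    lam = restraining[-1]         # rightmost cycle imposing restraint
    wave = 1 if states[lam] in (2, 3, 4) else 2
    return (lam, wave)
```

For a $P_e$-strategy the same idea applies with the wave flag dropped, since there is only one wave of restraint.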
$P_e$-strategies impose only one wave of restraint and so need only one outcome per cycle on the tree.

So, let $Lambda = {-1} union ((omega^2) times {1, 2}) union omega$. We partially order $Lambda$ lexicographically on $(omega^2) times {1, 2}$, with the natural ordering on $omega$, and making -1 come before everything. We don't define any relative order between elements of $(omega^2) times {1, 2}$ and $omega$, as this won't be necessary. Let $ T = {f in Lambda^(< omega) st f(n) in {-1} union (omega^2) times {1, 2} "if" n "is even", f(n) in {-1} union omega "if" n "is odd" } $ be the priority tree, with the standard partial ordering $<_L$ inherited from the order imposed on $Lambda$ above. If $alpha in T$ has even length $|alpha| = 2e$ then $alpha$ aims to satisfy requirement $R_e$, while if $|alpha| = 2e + 1$ then $alpha$ works towards satisfying $P_e$. Recall that we make no distinction between a node on the tree and the (instance of the) strategy it is using.

A strategy is _cancelled_ by resetting all of its cycles and discarding any functionals it may have (partially) defined. Any parameter, once defined, keeps that value until it is redefined or undefined. The construction proceeds as follows.

Stage 0: #h(1em) All parameters are undefined or $emptyset$ as appropriate, and all cycles are in state 0 or state P0.

#let nextval = $f_(s+1)(t)$
Stage $s+1$: #h(1em) We define, in substages $t < s$, a finite path $f_(s+1)$ through the tree, of length $s$. So, suppose $alpha = (restr(f_(s+1), t)) in T$ is defined. If no cycle of #stalpha has been started since $alpha$ was last cancelled, start $alpha$'s cycle $(0, 0)$ or $0$, as appropriate, and put $nextval = -1$. Otherwise, first suppose that $|alpha|$ is even, so that $alpha$ is using an $R_e$ strategy. Allow any cycles of #stalpha able to make the transition from state~2 to state~3 to do so. Now there are two cases.
- #smallcaps("Case 1") #h(1em) Some least cycle $nu$ of #stalpha is able (or forced by a $C$-change) to act. We allow cycle $nu$ to act. Let $lambda$ be the rightmost cycle of #stalpha now imposing restraint (if there is any such cycle). It is not necessarily the case that $lambda = nu$. If cycle $lambda$ is now in state~2, 3, or 4 then put $nextval = (lambda, 1)$. If, instead, $lambda$ is in state~5 or 6 then put $nextval = (lambda, 2)$. Cancel all strategies $beta$ with $concatone(alpha, nextval) <_L beta$. If $lambda = nu$ and the action of cycle $nu$ involved enumerating a number into or out of $A$ or into $B$ we also cancel all strategies $beta supset concatone(alpha, nextval)$. If there is no such cycle $lambda$ then put $nextval = -1$ and cancel all strategies $beta$ with $concatone(alpha, -1) <_L beta$.
- #smallcaps("Case 2") #h(1em) No cycle of #stalpha is able, or forced, to act. We do nothing. Define $nextval$ just as above. No strategies need to be cancelled.

If $|alpha|$ is odd, then we behave similarly. Now, given the rightmost cycle, $lambda$, imposing restraint, we simply put $nextval = lambda$, rather than worrying about two kinds of restraint.

If $t + 1 < s$ we advance to substage $t + 1$. We say that the strategies $alpha subset f_(s+1)$ are _accessible_ at stage $s+1$.

== Verification

The verification of the construction is a long and tedious one, and is broken up into a sequence of lemmas. As the arguments for the two types of module are of necessity quite different, for the first part of the verification we discuss the modules separately. We will refer to the parameters associated with cycle $nu$ of #stalpha as they are defined at stage $s$ like so: $u_s(alpha, nu)$, $v_s(alpha, nu)$, _etc_. When the strategy is clear from context (as it usually will be), we will drop it.
=== Lemmas for the $R_e$ strategy <section2.3.1>

==== The layout of the cycle states

We begin with a sequence of lemmas which describe the possible arrangements of the states of the various cycles at any time. The aim is to formalize the intuitive ideas that develop from an understanding of the way the construction works. We assume that we have a certain, fixed strategy, $alpha$, of even length in mind, and that all cycles mentioned belong to this strategy. Also, we ignore the fact that #stalpha may not be accessible at all (or even all sufficiently large) stages: we just treat the stages mentioned as being the successive ones at which #stalpha _is_ accessible. It will be convenient to refer to a cycle which is in either state~5 or state~6 as being "in state~5/6".

#lemma[
For any row $row(j)$, at most one cycle $(j, k)$ is in state~5/6. <lemma2.3>
]
#proof[
We show that if cycle $(j, k)$ is in state~5 or state~6 at stage $s$ then nothing to the right of $(j, k)$ in row $row(j)$ (namely, a cycle $(j, k') > (j, k)$) is in either of these states at stage $s$. If cycle $(j, k)$ entered state~5 from state~4 (and there is no other way), no cycles to the right of $(j, k)$ are in any state other than 0 at the start of stage $s$, because by entering state~4, cycle $(j, k)$ reset every cycle to its right, and no new cycles were started so long as $(j, k)$ remained in state~4. Upon entering state~5, cycle $(j, k)$ starts cycle $(j+1, 0)$, and no cycle to the right of $(j, k)$ in row $row(j)$ is started so long as $(j, k)$ stays in state~5. On entering state~6, cycle $(j, k)$ resets every cycle to its right (including those in rows $row(j')$ for $j' > j$), and no cycle to its right will be started so long as $(j, k)$ remains in this state.
]
// p.14
#lemma[
Suppose cycle $(j, k)$ enters state~3 at stage $s$ due to cycle $(j, k')$ being in state~5/6.
If at stage $t > s$ cycle $(j, k')$ leaves state~5/6 for the first time, for any reason, then cycle $(j, k)$ is no longer in state~3 at the end of stage $t$. <lemma2.4>
]
#proof[
Note that $mu_s(x(j, k)) = v_s(j, k')$. Cycle $(j, k')$ leaves state~5/6 either through acting or through being reset. If $(j, k') < (j, k)$ then we see that the action/resetting of $(j, k')$ also resets $(j, k)$, and the latter is no longer in state~3. (It will turn out later that a cycle can't be in state~3 when something in the same row to its left is in state~5/6, but we can't rule out that possibility yet.)

If $(j, k) < (j, k')$ we must work substantially harder. In this case, if $(j, k')$ is able to get out of state~5/6 without being reset, we must have a change in $restr(C, v_t(j, k'))$ (if $(j, k')$ goes to state~4) or in $restr(C, u_t(j, k')) subset restr(C, v_t(j, k'))$ (if it goes to state~7). This very change in $C$ allows $(j, k)$ to move to state~4, unless another cycle to its left acts for this same reason, resetting cycle $(j, k)$ completely. If $(j, k')$ is reset at $t$ by the action of a cycle to the left of $(j, k)$, cycle $(j, k)$ is reset also. Thus, aiming for a contradiction, we need only consider the case in which for some $k''$ with $k < k'' < k'$, cycle $(j, k'')$ acts at stage $t$, but $restr(C, v_t(j, k'))$ does not change at stage $t$. (Note that $v_t(j, k') = v_s(j, k')$.) Without loss of generality we may assume that $t$ is minimal in witnessing the failure of this lemma.

Since cycle $(j, k')$ is "awake" (that is, in a state other than 0) between stages $s$ and $t$, cycle $(j, k'')$ must be in one of the states 2, 3 or 7, and cannot change states (other than going from 2 to 3) during this time, for otherwise cycle $(j, k')$ would be reset. We may immediately discount the possibility that $(j, k'')$ is in state~7, because a cycle in this state cannot act. Thus, as stage $t$ starts, cycle $(j, k'')$ is in state~2 or state~3.
We first claim that $(j, k'')$ can't make the transition from state~2 to state~1. Indeed, such a transition indicates a change in $restr(C, u(j, k''))$. But cycle $(j, k')$ starts after cycle $(j, k'')$ enters state~2, so by construction, $v_t(j, k') > x(j, k') > u(j, k'')$, and we have a change in $restr(C, v_t(j, k'))$ at stage $t$, which is a contradiction. Cycle $(j, k'')$ can't go from state~2 to state~3 at stage $t$, as this does not count as an action, so the only remaining possibility is the $3 arrow.r.bar 4$ transition, so that there is a change in $restr(C, mu_t(x(j, k'')))$. We claim that $mu_t(x(j, k'')) = v_t(j, k')$, and obtain the contradiction of a change in $restr(C, v_t(j, k'))$.

Suppose otherwise, so that $(j, k'')$ enters state~3 for the sake of yet another cycle $(j, k''')$ being in state~5/6, or for another "incarnation" of cycle $(j, k')$; that is, for the sake of cycle $(j, k')$ being in state~5/6 based on another computation. Well, if we are in the former case, cycle $(j, k''')$ must leave state~5/6 by stage $s$, by #lemmaRef(<lemma2.3>), forcing cycle $(j, k'')$ out of state~3, by the assumption of the minimality of $t$. The same argument applies to another "incarnation" of cycle $(j, k')$. Thus, cycle $(j, k'')$ enters state~3 for the sake of the same $(j, k')$-related computations that force cycle $(j, k)$ to do likewise, and $mu_t(x(j, k'')) = mu_s(x(j, k)) = v_s(j, k') = v_t(j, k')$. We are done.
]
#lemma[
For all $j$, if cycles $(j, k) neq (j, k')$ are both in state~3 at stage $s$, then $(mu(x(j, k)))[s] = (mu(x(j, k')))[s]$. <lemma2.5>
]
#proof[
Suppose $(j, k)$ enters state~3 at stage $t$ and remains there until $(j, k')$ does the same at stage $t' > t$, and that they both stay in this state until at least stage $s$. By Lemmas #thmref(<lemma2.3>) and #thmref(<lemma2.4>), both cycles must enter state~3 for the sake of the same cycle being in state~5/6, and for the same computations. The lemma follows.
]

We are now ready to describe the various patterns made by the successive cycle-states.#footnote[Such as Athens, Sparta, Hamburg, #sym.dots . Oh, no, that's something else.] To do this we first need to introduce some definitions and notation.

Consider a stage $s$, and the states that all the various cycles of #stalpha are in at the end of stage $s$. We will call this arrangement the _pattern of #stalpha at stage $s$_, and denote it by $pattern(s) = pattern(s)(alpha)$. The notation used to represent patterns is based on the row structure of the cycles. $pattern(s)$ will be given as a finite sequence, one term each for those rows $row(j)$ of the strategy with at least one cycle in a state other than 0. Each term in this sequence will itself be a finite sequence, one term each for the cycles of row $row(j)$ (say) in a state other than 0.

Let $X = {0, 1, 2, dots, 8}$. For sets $M, N$ of finite sequences (of unspecified type) we let $setconcat(M, N) = {concat(theta, sigma) st theta in M and sigma in N}$, the finite sequences obtained by appending a sequence from $N$ to a sequence from $M$. For convenience we also allow the notation $angletup(M) = { angletup(theta) st theta in M }$, the length-1 sequences consisting of single terms from $M$. We define the following subsets of $finseq(X)$: $ prelimCrampedRow &= setconcat(finseq({2, 3, 7}), angletup({5})), \ finalCrampedRow &= setconcat(finseq({2, 3, 7}), angletup({6})), \ crampedRow &= prelimCrampedRow union finalCrampedRow, \ uncrampedRow &= setconcat(finseq({2, 7}), angletup({1, 4})), \ abandonedRow &= angletup({8}), \ prelimRow &= prelimCrampedRow union abandonedRow, \ finalRow &= finalCrampedRow union uncrampedRow, $ and a subset of $finseq((finseq(X)))$ $ validPattern = setconcat(finseq(prelimRow), angletup(finalRow)). $ The names are intended to be somewhat mnemonic. "Cramped" refers to a row in which cycles are prevented from reaching state~4 by the presence of a cycle in that row in state~5/6.
These cycles have their style cramped: they must bide their time in state~3 waiting for the chance to go to state~4 later. A "#patternName("prelim")" row is one that isn't the last in the list: the row after it also has at least one cycle not in state 0. When we want to make it clear how long a finite sequence is, we subscript the sequence with its length, like so: $angletup(0, 1, dots, 7)_8$.

The claim is now that if #stalpha has been started since last being cancelled, its pattern is "valid":

#lemma(name: "Pattern Lemma")[
If #stalpha has at least one cycle not in state 0 at stage $s$, $pattern(s) in validPattern$. <patternLemma>
]
#proof[
#let angle8 = angletup(8)
We proceed by induction on the number of stages since the last time #stalpha had a cycle started after previously being cancelled. When a strategy is started up (perhaps not for the first time), at stage $s$, cycle $(0, 0)$ is started. If this cycle, or row $row(0)$, has been abandoned before, subsequent cycles are automatically started as well in the cascading effect mentioned at the start of @basicModuleRe. Let $j = min_iota{ "row" row(iota) "never abandoned" }$, and let $k = min_kappa { "cycle" (j, kappa) "never abandoned" }$. Then the pattern at stage $s$ is $ pattern(s) = angletup(angle8, angle8, dots, angle8, angletup(7, 7, dots, 7, 1)_(k+1))_(j+1). $ This is a valid pattern, as $angle8 in prelimRow$ and $angletup(7, dots, 7, 1) in uncrampedRow subset finalRow$.

Now suppose that $alpha$'s pattern is valid coming into stage $s$, that #stalpha is not cancelled at $s$, and that something actually happens: some cycle of the strategy changes state. We let $pattern(s-1) = angletup(p_0, p_1, dots, p_n, f)$, where $p_i in prelimRow$ and $f in finalRow$. First consider any $2 arrow.r.bar 3$ transitions. These can occur only in a crampedRow, as only such rows have anything in state~5/6.
But a 3 in place of a 2 leaves the type of crampedRow (either #patternName("prelim") or #patternName("final")) unchanged, so the pattern is still valid after such changes. From now on let $pattern(s-1)$ represent the pattern after all $2 arrow.r.bar 3$ state transitions are taken into account, but before any action is recorded. If no cycle of the strategy actually acts at stage $s$ we are done. Otherwise, let $(j, k)$ be the leftmost cycle which acts. We have a large collection of cases and subcases.
#show: doc => setupenum(doc, formats: ("I.", "1.", "a."))
+ $j = n + 1$, so the action is in the last row.
  + Row $row(j)$ is cramped: $f = angletup(h_0, h_1, dots, h_m, 6)$, $h_i in {2, 3, 7}$.
    + $k = m + 1$, so cycle $(j, k)$ is in state~6. The only way for cycle $(j, k)$ to act is to go to state~7, starting cycle $(j, k+1)$. This means that we can't have any $h_i = 3$ for $i leq m$, since any cycle $(j, i) < (j, k)$ in state~3 at the start of stage $s$ would have left that state, by #lemmaRef(<lemma2.4>). Let $k' = min_(kappa > k) { "cycle" (j, kappa) "never abandoned" }$. Then the new pattern for row $row(j)$ is $f' = angletup(h_0, dots, h_m, 7, dots, 7, 1)_(k' + 1) in finalRow$. Thus $pattern(s) = angletup(p_0, dots, p_n, f') in validPattern$.
    + $k < m+1$ and cycle $(j, k)$ is in state~2. The action of $(j, k)$ can't just be to enter state~3, as such a transition does not count as an action. Neither can $(j, k)$ enter state~4, as the row is cramped, and such a transition is prohibited. Thus the action is to go back to state~1 due to a change in $restr(C, u_s(j, k))$. Now, since the action of cycle $(j, k)$ resets cycle $(j, m+1)$, by #lemmaRef(<lemma2.4>) we cannot have $h_i = 3$ for any $i < k$. Thus $h_i in {2, 7}$ for $i < k$. But now the new pattern for row $row(j)$ is $f' = angletup(h_0, dots, h_(k-1), 1) in uncrampedRow$ and again $pattern(s) in validPattern$.
    + $k < m+1$ and cycle $(j, k)$ is in state~3.
First note that, as in the previous case, $h_i in {2, 7}$ for $i < k$. In entering state~4 cycle $(j, k)$ resets all cycles to its right, so the new pattern for row $row(j)$ is $f' = angletup(h_0, dots, h_(k-1), 4) in uncrampedRow$. Again $pattern(s) in validPattern$. We don't need a case for cycle $(j, k)$ being in state~7, as such a state can't act.
  + Row $row(j)$ is uncramped: $f = angletup(h_0, h_1, dots, h_m, b)$, $h_i in {2, 7}, b in {1, 4}$.
    + $k = m + 1$, and cycle $(j, k)$ is in state~1. The action of $(j, k)$ must take it to state~2, starting cycle $(j, k+1)$. Let $k' = min_(kappa > k) { "cycle" (j, kappa) "never abandoned" }$. The new pattern for row $row(j)$ is $f' = angletup(h_0, dots, h_m, 7, dots, 7, 1)_(k' + 1) in finalRow$. Thus $pattern(s) in validPattern$.
    + $k = m + 1$, and cycle $(j, k)$ is in state~4. If the action of $(j, k)$ is to go to state~5, the new pattern for row $row(j)$ must be $f' = angletup(h_0, dots, h_m, 5) in prelimCrampedRow subset prelimRow$. In the same way as above, let $j' = min_(iota > j) { "row" row(iota) "never abandoned" }$ and $k' = min_kappa { "cycle" (j', kappa) "never abandoned" }$. Then the new pattern for the strategy is $ pattern(s) = angletup(p_0, dots, p_n, f', angle8, dots, angle8, angletup(7, dots, 7, 1)_(k' + 1))_(j' + 1). $ If, instead, $(j, k)$'s action is to go to state~8, row $row(j)$ is abandoned as a whole, and its new pattern will simply be $angle8$. Define $j'$ and $k'$ as before and the new pattern for the strategy will be $ pattern(s) = angletup(p_0, dots, p_n, angle8, dots, angle8, angletup(7, dots, 7, 1)_(k' + 1))_(j' + 1). $ In either case, the new pattern is valid.
    + $k < m+1$. We have that cycle $(j, k)$ is in state~2, since a cycle can't act if it is in state~7. This action can take cycle $(j, k)$ to either state~1 or state~4.
In either case, all cycles to the right of $(j, k)$ are reset and the new pattern for row $row(j)$ is $f' = angletup(h_0, dots, h_(k-1), b')$, where $b' = 1$ or $b' = 4$ according to how the cycle
// p.18
acted. In either case, $f' in uncrampedRow$, and the new pattern for the strategy, $pattern(s) = angletup(p_0, dots, p_n, f')$, is still valid.
+ $j < n+1$. Row $row(j)$ can't ever have been abandoned, as otherwise no cycle in it could act, so the part of the pattern corresponding to #theRow($j$) is $p_j = angletup(h_0, dots, h_m, 5) in prelimCrampedRow$. Note that for $i leq m$, $h_i in {2, 3, 7}$.
  + $k = m+1$, so cycle $(j, k)$ is in state~5. If the action consists of returning to state~4, no cycles in row $row(j)$ to the left of $(j, k)$ can still be in state~3, by Lemmas #thmref(<lemma2.4>) and #thmref(<lemma2.5>). Thus $h_i in {2, 7}$ for $i < m+1$. The action resets all cycles to the right of $(j, k)$ (including those in rows $row(l)$, $l > j$), so the new pattern for row $row(j)$ is $p'_j = angletup(h_0, dots, h_m, 4) in uncrampedRow$, and the pattern for the whole strategy is $pattern(s) = angletup(p_0, dots, p_(j-1), p'_j) in validPattern$. If instead the action of $(j, k)$ consists of advancing to state~6, again all cycles to the right of $(j, k)$ are reset, but now we can't say anything new about the status of cycles in row $row(j)$ to the left of $(j, k)$. This doesn't matter though, since the new pattern for row $row(j)$ is $p'_j = angletup(h_0, dots, h_m, 6) in finalCrampedRow$, and the pattern for the strategy is $pattern(s) = angletup(p_0, dots, p_(j-1), p'_j) in validPattern$.
  + $k < m + 1$. Clearly $h_k neq 7$, so we have two cases, $h_k = 2$ and $h_k = 3$.
    + $h_k = 2$. Cycle $(j, k)$ can't advance to state~4, as the row is cramped, and it can't go to state~3, as such a transition doesn't count as acting. Thus the action must consist of cycle $(j, k)$ returning to state~1 on the basis of a change in $restr(C, u_s(j, k))$.
As above (case I.1.b) we conclude that the new pattern for row $row(j)$ is $p'_j = angletup(h_0, dots, h_(k-1), 1) in uncrampedRow$. As all cycles to the right of $(j, k)$ are reset, the pattern for the whole strategy is therefore $pattern(s) = angletup(p_0, dots, p_(j-1), p'_j) in validPattern$.
    + $h_k = 3$. This case is to case I.1.c as case II.2.a is to case I.1.b.

Thus the new pattern $pattern(s)$ is valid, and we are done.
]
#show: doc => setupenum(doc)

==== Consistency of the functions $Gamma_j(C)$ and $Delta(C)$

Here we show that the functionals $Gamma_j$ and $Delta$ are defined in such a way that at every stage $s$, $(Gamma_j(C))[s]$ and $(Delta(C))[s]$ are well defined (partial) functions. The failure of this property was one of the technical flaws in the original paper @CLW1989. For the following lemmas we again assume that we have fixed in our minds a specific node/strategy of the construction, and restrict our attention to the functionals associated with this strategy.
// p.19
#lemma[
For all $j$ and $k$, if cycle $(j, k)$ is in state~5 at stage $s$, then $(Delta(C\; j))[s] converge$. The same conclusion holds if row $row(j)$ was abandoned at some stage before $s$. <lemma2.7>
]
#proof[
If cycle $(j, k)$ is in state~5, we must have in particular $restr(C_s, v(j, k)) = restr(C_(s_2), v(j, k))$. But $v(j, k) = delta_(s_2)(j)$, so the computation for $Delta(C\; j)$ that was defined by cycle $(j, k)$ when it entered state~5 is still defined. If, instead, row $row(j)$ was abandoned at some earlier stage, this abandonment was accompanied by a definition of $Delta(C\; j)$ with use 0. Such a computation can never become undefined, and must persist to stage $s$.
]
#lemma[
If some cycle $(j, k)$ acts at stage $s$ to define a computation for $Delta(C\; j)$, then for each $i < j$, $(Delta(C\; i))[s] converge$. <lemma2.8>
]
#proof[
Such a cycle can act in this way only by moving from state~4 to either state~5 or state~8.
In either case, the pattern corresponding to row $row(j)$ coming into stage $s$ must have been an uncrampedRow. So, by the Pattern Lemma, for each $i < j$, the pattern for row $row(i)$ must either be $angletup(8)$ (indicating that row $row(i)$ was abandoned at some time) or an element of prelimCrampedRow. In the latter case, each row $row(i)$ has some cycle in state~5 as we enter stage $s$. But no cycle in any row $row(i)$, $i < j$, acts at stage $s$, as otherwise cycle $(j, k)$ would be reset and unable to act. Thus the pattern of each of these rows is unchanged (except perhaps for 2 changing to 3) during stage $s$, and each has a cycle in state~5 at the end of the stage. So, by #lemmaRef(<lemma2.7>), $(Delta(C\; i))[s] converge$.
]

A similar argument establishes the following.

#lemma[
If some cycle $(j, k)$ acts at stage $s$ to define a computation for $Gamma_j(C; k)$, then for each $i < k$, $(Gamma_j(C; i))[s] converge$. <lemma2.9>
]

Now we can prove that the functionals are defined consistently.
#show: doc => setupenum(doc, formats: ("(I)", "a"))
#lemma[
For all $j in omega$,
+ For $i < j$, if $(Delta(C\; i))[s] converge$ and $(Delta(C\; j))[s] converge$ with $delta_s(j) > 0$ then $delta_s(i) < delta_s(j)$.
+ Row $row(j)$ defines a computation for $Delta(C\; j)$ only when no other such computation is currently defined.
<lemma2.10>
]
#proof[
Notice that we may assume that the strategy in question is not cancelled during any of the stages of interest to us in this lemma. If such a cancellation were to occur, all functionals associated with our strategy would be discarded and the lemma follows trivially.

We proceed by induction. Assume that (I) and (II) hold for $0, 1, dots, j-1$. First note that when any cycle $(j, k)$ of row $row(j)$ starts it chooses a witness $x(j, k)$ larger than any number mentioned so far. In particular, $x(j, k)$ is larger than the use of any $Delta(C\; i)$ computation (for $i < j$) still defined when the witness is chosen.
As the definition of such a new computation would involve the resetting of cycle $(j, k)$ (by the Pattern Lemma), $x(j, k)$ will remain larger than the use of any currently defined $Delta(C\; i)$ computation. But if cycle $(j, k)$ ever defines a computation for $Delta(C\; j)$, then $delta(j) = v(j, k) > x(j, k)$ will also be larger than the uses of prior $Delta(C\; i)$ computations. So (I) will never be violated by a row defining a $Delta$-computation with a use smaller than the uses of computations defined by earlier rows. So, suppose that for the first time, at stage $s$, (I) is about to be violated by a row defining a computation with a use larger than the use currently defined by a later row: $Delta(C\; j)[s] converge$, $i < j$, and we are about to define $Delta(C\; i)$ such that $delta_s(i) geq delta_s(j) > 0$. Let $t < s$ be the stage at which the computation $(Delta(C\; j))[s]$ was defined. By #lemmaRef(<lemma2.8>), $(Delta(C\; i))[t] converge$ and by the minimality of $s$, $delta_t(i) < delta_t(j) = delta_s(j)$. By the inductive hypothesis, $(Delta(C\; i))[t]$ must get undefined before stage $s$, as we redefine this value at stage $s$. But such an undefinition#footnote[This is only marginally a word.] can only occur through a change in $restr(C, delta_t(i))$ which implies a change in $restr(C, delta_s(j))$ and the undefinition of the computation $(Delta(C\; j))[t]$, contradicting our assumption. This verifies (I) for $j$. For the sake of contradiction, suppose (II) fails at $j$. That is, suppose that at stage $s$ cycle $(j, k)$ defines $Delta(C\; j)$, and another computation for this value is still current, having been defined at stage $t < s$ by cycle $(j, k')$ with use $delta_t(j) = v_t(j, k')$. We note the following: - $restr(C_s, v_t(j, k')) = restr(C_t, v_t(j, k'))$. - For all $i < j$, $(Delta(C\; i))[t] converge$ and by (I) $delta_t(i) < delta_t(j) = v_t(j, k')$. 
- We know that $delta_t(j) > 0$, for a definition of $Delta(C\; j)$ with use 0 would have led to the abandoning of row $row(j)$ in its entirety at stage $t$ and the consequent impossibility of cycle $(j, k)$ acting now.
- All cycles in row $row(j)$ to the left of $(j, k')$ must be in state~2 or state~7 at stage $t$. This follows from the Pattern Lemma, as the definition of the $Delta(C\; j)$ computation at stage $t$ implies that cycle $(j, k')$ started that stage in state~4, and so the pattern for row $row(j)$ formed an #uncrampedRow at $t$.
- Because of the constraints listed at state~2, no cycle of row $row(j)$, except perhaps for cycle $(j, k')$ itself, may act so long as cycle $(j, k')$ remains in state~5 or 6. (The $C$-change which allows a cycle to the left of $(j, k')$ to leave state~3 is precisely the change that forces cycle $(j, k')$ out of state~5 or 6. The cycles in row $row(j)$ to the right of cycle $(j, k')$ cannot act, as the "next" cycle to the right of $(j, k')$ allowed to act under these circumstances is cycle $(j+1, 0)$, in row $row(j+1)$.)

Now, cycle $(j, k')$ does not exit state~5/6 before stage $s$ "under its own steam", as this would involve a change in $restr(C, v_t(j, k'))$, which we have seen does not occur. Thus the only way that _any_
// p.21
cycle in row $row(j)$ can act at stage $s$ is if all the cycles of the row are first reset by the action of a cycle in row $row(i)$, for $i < j$, at stage $t'$, where $t < t' < s$. When row $row(i)$ later starts row $row(i+1)$ (which it must do before stage $s$) it in the process defines a new computation for $Delta(C\; i)$. By the inductive hypothesis, the previous computation must have become undefined, which means that $restr(C_s, delta_t(i)) neq restr(C_t, delta_t(i))$ and hence $restr(C_s, v_t(j, k')) neq restr(C_t, v_t(j, k'))$, contradicting our assumption. Thus such an attempted redefinition never occurs, and the inductive step is complete.
]

We have the analogous result for the $Gamma$ functionals.

#lemma[
For each $j$ and each $k$
+ For $i < k$, if $(Gamma_j(C; i))[s] converge$ and $(Gamma_j(C; k))[s] converge$ with $gamma_(j,s)(k) > 0$ then $gamma_(j,s)(i) < gamma_(j,s)(k)$.
+ Cycle $(j, k)$ defines a computation for $Gamma_j(C; k)$ only when no other such computation is currently defined.
<lemma2.11>
]

#proof[
Again we may assume that the strategy is not cancelled during stages that concern us. We proceed as before, by induction. So, fix $j$ and assume that (I) and (II) hold for $0, 1, dots, k-1$. The comments at the start of the proof of #lemmaRef(<lemma2.10>) are valid here too: a computation for $Gamma_j(C; k)$ will never be defined with a non-zero use less than the use of a previously defined computation for $Gamma_j(C; i)$, where $i < k$.

Suppose that at stage $s$, $(Gamma_j(C; k))[s] converge$ and for the first time we are about to violate (I): we define $Gamma_j(C; i)$ with $i < k$ such that $gamma_(j,s)(i) geq gamma_(j,s)(k) > 0$. Let $t < s$ be the stage at which the current computation for $Gamma_j(C; k)$ was defined. By #lemmaRef(<lemma2.9>), $(Gamma_j(C; i))[t] converge$ and by the minimality of $s$, $gamma_(j,t)(i) < gamma_(j,t)(k) = gamma_(j,s)(k)$. But the computation for $Gamma_j(C; i)$ valid at stage $t$ must get undefined before stage $s$, by the inductive hypothesis, so $restr(C_s, gamma_(j,t)(i)) neq restr(C_t, gamma_(j,t)(i))$ which implies $restr(C_s, gamma_(j,s)(k)) neq restr(C_t, gamma_(j,s)(k))$. This means that the computation $(Gamma_j(C; k))[s]$ actually becomes undefined at some stage between $t$ and $s$, a contradiction. This establishes (I) for $k$.

Now suppose that (II) fails for $k$: at stage $s$ cycle $(j, k)$ defines $Gamma_j(C; k)$ but another computation, $(Gamma_j(C; k))[t]$, exists from an earlier stage $t < s$. Note that $restr(C_s, u_t(j, k)) = restr(C_t, u_t(j, k))$.
Note also that $gamma_(j,t)(k) > 0$, since the definition of a computation of use 0 would lead to the permanent abandonment of cycle $(j, k)$ at stage $t$. This cycle would therefore be unable to act at stage $s$. Now, only cycle $(j, k)$ can define a computation for $Gamma_j(C; k)$. It cannot merely have returned to state~1 and again to state~2 between stages $t$ and $s$, as this requires a change in $restr(C, u_t(j, k))$. Neither can it advance from state~2 to state~7 between stages $t$ and $s$, as entering state~7 entails the same $C$-change. Thus in order to have another crack at // p.22 defining $Gamma_j(C; k)$, cycle $(j, k)$ must be reset and later restarted. If ever something in row $row(i)$, for $i < j$, acts, the functional $Gamma_j(C)$ is discarded wholesale, preventing any conflicting definition at stage $s$. So, at some stage $t' in (t, s)$ some cycle $(j, k') < (j, k)$ acts, resetting $(j, k)$ (if it hadn't been reset since stage $t$ already.) By #lemmaRef(<lemma2.9>), $(Gamma_j(C; k'))[t'] converge$ and by part (I) $gamma_(j,t')(k') < gamma_(j,t')(k)$. Before #stg($s$) #cycle($(j, k')$) must restart #cycle($(j, k' + 1)$), and at the same time define a new computation for $Gamma_j(C; k')$. But by the inductive hypothesis the previous such computation (_i.e._ that valid at stage $t$) must have become undefined. This means that there has been a change since stage $t$ in $restr(C, gamma_(j,t)(k')) subset restr(C, gamma_(j,t)(k))$. But $gamma_(j,t)(k) = u_t(j, k)$, so this is a contradiction. The lemma is proved. ] === Lemmas for the $P_e$ strategy <section2.3.2> @section2.3.1 was a long and complicated one. As the $P_e$ strategy is so much simpler than the $R_e$ one, the corresponding set of lemmas is also. We assume we have fixed a #stalpha of odd length. Again we treat all stages mentioned as being the successive ones at which #stalpha is actually accessible. We start by discussing the patterns that the cycle states can make. 
We again refer to the pattern at stage $s$ as $pattern(s)$. As the $P_e$ strategy involves a one-dimensional array of cycles, the pattern formed by the cycle-states in this case is simply a finite sequence of state-names. There is no need for the sequence of sequences used in the $R_e$ strategy argument. #let plabel(n) = $upright(P)#n$ Let $Y = {plabel(0), plabel(1), ..., plabel(4)}$. Using the same notation as in the definition of #validPattern we may define a single subset of $finseq(Y)$: $ validPatternForP = setconcat(finseq({plabel(2), plabel(4)}), angletup({plabel(1), plabel(3)})). $ We then have the following analogue to the Pattern Lemma.#footnote[We don't refer to this result as a "Pattern Lemma", as it is too simple to deserve a name.] #lemma[ If #stalpha has at least one cycle not in state #plabel(0) at stage $s$, $pattern(s) in validPatternForP$. <lemma2.12> ] #show: doc => setupenum(doc, formats: ("I.", "1.", "a.")) #proof[ If #stalpha is started at stage $s$, cycle 0 is started, perhaps having been abandoned in the past. Let $j = min_iota{ "cycle" iota "never abandoned" }$. Then the pattern at the end of stage $s$ is $pattern(s) = angletup(plabel(4), dots, plabel(4), plabel(1))_(j+1) in validPatternForP$. Now suppose that $pattern(s-1)$, $alpha$'s pattern coming into stage $s$, was valid and that #stalpha is not cancelled at $s$. If no cycle of $alpha$ acts at stage $s$ then $pattern(s) = pattern(s-1)$ and there is nothing to prove. So, suppose some cycle does act, let $k$ be the leftmost one, and write $pattern(s-1) = angletup(h_0, dots, h_m, b)$, where $h_i in {plabel(2), plabel(4)}$ and $b in {plabel(1), plabel(3)}$. We again have several cases. + $k = m+1$. There are two subcases. + $b = plabel(1)$. Cycle $k$ must act by advancing to state #plabel(2), starting cycle $k+1$ in the process. Let $j = min_(j' > k){"cycle" j' "never abandoned"}$. 
Then the new pattern is
$ pattern(s) = angletup(h_0, dots, h_m, plabel(2), plabel(4), dots, plabel(4), plabel(1))_(j+1) in validPatternForP. $
+ $b = plabel(3)$. Now cycle $k$ acts by advancing to state $plabel(4)$, again starting cycle $k+1$. Using the same definition for $j$ as in the previous case we have
$ pattern(s) = angletup(h_0, dots, h_m, plabel(4), dots, plabel(4), plabel(1))_(j+1) in validPatternForP. $
+ $k < m + 1$. Now there is only one case, $h_k = plabel(2)$, as a cycle already in state $plabel(4)$ cannot act. However cycle $k$ acts, all cycles to the right of $k$ are reset. If cycle $k$ acts by returning to state #plabel(1) the new pattern is
$ pattern(s) = angletup(h_0, dots, h_(k-1), plabel(1)). $
If, however, cycle $k$ enters state #plabel(3) we have
$ pattern(s) = angletup(h_0, dots, h_(k-1), plabel(3)). $
Either way, $pattern(s) in validPatternForP$, and we are done.
]

Now there are some results corresponding to Lemmas #thmref(<lemma2.7>)--#thmref(<lemma2.11>).

#lemma[
If cycle $k$ is in state #plabel(2) or state #plabel(4) at stage $s$ then $fff(Xi, C, k)[s] converge$.
<lemma2.13>
]

#proof[
If cycle $k$ is in state #plabel(2) at stage $s$ then, in particular, $restr(C_s, u_t(k)) = restr(C_t, u_t(k))$ where $t < s$ is the stage at which cycle $k$ last entered state #plabel(2). This means that $(Xi(C\; k))[t]$, the computation defined at $t$, is still valid at $s$.

If cycle $k$ is in state #plabel(4) then it must have been abandoned at some earlier stage. The abandonment was accompanied by a definition of $Xi(C\; k)$ with use 0, so this computation must persist to stage $s$.
]

#lemma[
If cycle $k$ is in any state other than #plabel(0) at stage $s$, then $(Xi(C\; k'))[s] converge$ for all $k' < k$.
<lemma2.14>
]

#proof[
Lemmas #thmref(<lemma2.12>) and #thmref(<lemma2.13>).
]

#lemma[
#show: doc => setupenum(doc, formats: ("(I)", "1.", "a."))
+ For $i < k$, if $(Xi(C\; i))[s] converge$ and $(Xi(C\; k))[s] converge$ with $xi_s(k) > 0$ then $xi_s(i) < xi_s(k)$.
+ Cycle $k$ defines a computation for $Xi(C\; k)$ only when no other such computation is currently defined.
<lemma2.15>
]

#proof[
As in Lemmas~#thmref(<lemma2.10>) and~#thmref(<lemma2.11>) we may assume that #stalpha is not cancelled during the stages that concern us. We proceed by induction. Fix $k$ and assume (I) and (II) for $0, 1, dots, k-1$. Statement (I) holds for $k$ by the same argument as in #lemmaRef(<lemma2.11>), which we don't repeat.

Suppose that (II) fails for $k$: that is, at some (least) stage $s$, cycle $k$ defines a computation $Xi(C\; k)$ while another computation $(Xi(C\; k))[t]$ is still valid from an earlier stage. Note that $restr(C_s, u_t(k)) = restr(C_t, u_t(k))$ and that $u_t(k) = xi_t(k) > 0$.

Now, since $C$ did not change below $u_t(k)$ between stages $t$ and $s$, cycle $k$ cannot merely have returned to state #plabel(1) and then attempted to redefine $Xi(C\; k)$. The only possibility is that cycle $k$ is reset by the action of some cycle to the left, at some stage between $t$ and $s$. Let $k' < k$ be the leftmost cycle to act between stages $t$ and $s$, and let it so act for the first time at stage $t' in (t, s)$. Since it is leftmost in acting, it is not itself reset between $t$ and $s$. We note that $(Xi(C\; k'))[t] converge$ (by #lemmaRef(<lemma2.14>)) and $u_t(k') = xi_t(k') < xi_t(k)$, by (I).

Now cycle $k'$ must have been in state #plabel(2) at stage~$t$, by #lemmaRef(<lemma2.12>), as cycles in state~#plabel(4) cannot act again before being reset. Cycle $k'$ cannot act at $t'$ by returning to state~#plabel(1) as this would mean a change in $restr(C, xi_t(k')) subset restr(C, xi_t(k))$ which contradicts our assumption. Thus cycle $k'$ acts by reaching state~#plabel(3).
As cycle $k'$ is not reset before stage $s$ it cannot reenter state~#plabel(1) before $s$. It must therefore enter #plabel(4) before stage~$s$, by #lemmaRef(<lemma2.12>), as cycle $k$ is not in state~#plabel(0) at stage~$s$. But cycle $k'$ passes from state~#plabel(3) to state~#plabel(4) only when it sees a change in $restr(C, xi_t(k'))$, which again leads to the contradiction of a change in $restr(C, xi_t(k))$. We are done.
]

=== Satisfaction of the requirements <sec2.3.3>

The following sequence of lemmas derives from~@LaForte, but we do not here concern ourselves with path restraint. At this point, many of the fundamental differences between the two types of strategy have been abstracted away into the preceding lemmas. From now on we discuss both types in each lemma: there is no need for separation.

The key object in the verification is the _true path_, $f$, through the priority tree $T$, defined by $f(n) = xi$, where $concatone((restr(f, n)), xi)$ is the leftmost successor of $restr(f, n)$ accessible infinitely often. The following result is the key one.

#proposition[
#show: doc => setupenum(doc, formats: ("1.", "a."))
For all $n in omega$
+ $f(n)$ is defined;
+ $restr(f, (n+1))$ is cancelled only finitely often (note that $restr(f, 0) = emptyset$ is never cancelled);
+ #strat($restr(f, n)$) satisfies the requirement towards which it works; and
+ for all sufficiently large $C$-true stages $t$, $restr(f, (n+1)) subset f_t$.
<prop2.16>
]

So, inductively assume 1, 2, 3, and 4 for $n = eta - 1$, and let $alpha = restr(f, eta)$. Fix a stage $s_0$ so large that $alpha$ is not cancelled after $s_0$, and for every $C$-true stage $t > s_0$, $alpha subset f_t$.

We say that #stalpha _acts finitely_ if there is a stage $s$ after which no cycle of $alpha$ ever acts. Otherwise we say that $alpha$ _acts infinitely_.

#lemma[
If $alpha$ acts infinitely then some specific cycle of $alpha$ acts infinitely often.
<lemma2.17> ] #proof[ Suppose otherwise, and begin by assuming that $|alpha| = 2e$. Infinitely many individual cycles of $alpha$ eventually act, but each does so only finitely often. So, each of these cycles must eventually get permanently stuck in a state which does not prevent subsequent cycles from acting in turn. There are two basic possibilities. #show: doc => setupenum(doc, formats: ("(A)",)) + Some (leftmost) row $row(j)$ acts infinitely often. That is, infinitely often a cycle of the form $(j, k)$ acts, but no single cycle of this form acts infinitely often. + Every row acts, but each acts only finitely often. We consider (A) first. Fix $j$ minimal so that row $row(j)$ acts infinitely, and let $t_0 > s_0$ be so large that no cycle of the form $(i, k)$ for $i < j$ acts after stage~$t_0$. Since row~$row(j)$ acts infinitely, but each cycle in it acts only finitely often, _every_ cycle $(j, k)$ must eventually act, and get stuck in a way which does not prevent cycle~$(j, k+1)$ from acting. This means that each cycle in the row must eventually get permanently stuck in state~2 or state~3, or is abandoned. By #lemmaRef(<lemma2.4>) a cycle gets permanently stuck in #state(3) only if another cycle in its row gets permanently stuck in #state(5) or #state(6), which we have seen does not happen to row~$row(j)$. Thus in fact every cycle of row~$row(j)$ eventually gets permanently stuck in #state(2) or is abandoned in #state(7). In the latter case, $Gamma_j(C; k)$ is correctly defined to be $1 = G(k)$ with use~0. We claim that the cycles which get permanently stuck in #state(2) also compute a correct value. Well, suppose that $(j, k)$ gets so stuck. It must be that $restr(C, u(j,k)) = restr(C_(s_1(j,k)), u(j, k))$ _and_ $G(k) = G_(s_1(j,k))(k)$. But then $ Gamma_j(C; k) &= (Gamma_j(C; k))[s_1] \ &= (G(k))[s_1] \ &= G(k) $ and we have a correct definition again. Now, by~#lemmaRef(<lemma2.11>), $Gamma_j(C)$ is a well-defined, $C$-recursive function. 
By the argument above, for all $k in omega$, $G(k) = Gamma_j(C; k)$, and we see that $G leqt C$, a contradiction.

Suppose outcome (B) happens. We aim for a similar contradiction. Each row acts only finitely, but every row eventually acts, so given $j in omega$ there are $k_j$ and $t_j > t_(j-1)$ such that cycle $(j, k_j)$ starts cycle $(j+1, 0)$ at stage $t_j$ and no cycle of row $row(j)$ ever acts again. So, at stage $t_j$, cycle $(j, k_j)$ must enter #state(5) never to leave, or #state(8), abandoning row $row(j)$. In the latter case, $Delta(C\; j) = 1 = G(j)$. In the former, we may argue as above that each such cycle computes a value for $Delta(C\; j)$ which agrees with $G(j)$. So again $G leqt C$, which is a contradiction, and outcome (B) cannot occur.

For the case $|alpha| = 2e + 1$ the argument is much simpler. Since the cycle sequence is one-dimensional, _every_ cycle must end up getting stuck in #state(plabel(2)) or #state(plabel(4)). As before, each cycle will compute a value for $Xi(C)$ which agrees with $G$, and we see that $G leqt C$ again.

The lemma is proved.
]

We extract part of the proof of the preceding lemma as a separate result.

#lemma[
Given a #stalpha, if $chi$ is the leftmost cycle of #stalpha to act infinitely often then only finitely often can _any_ cycle to the left of $chi$ act.
<lemma2.18>
]

#proof[
Suppose otherwise, and write $chi = (j, k)$. If there is infinitely much action involving cycles to the left of cycle~$(j, 0)$ then one of the rows $row(0), dots, row(j-1)$ must act infinitely, and we find ourselves in the impossible case (A) of~#lemmaRef(<lemma2.17>). So now there must be infinite action among the cycles $(j, 0), dots, (j, k-1)$. But this is impossible, as one of these cycles must act infinitely often, contradicting the definition of~$chi$.
]

#lemma[
Some (leftmost) successor of $alpha$ is accessible infinitely often.
<lemma2.19>
]

#proof[
If $alpha$ acts finitely, then after some stage~$s$ no cycle of $alpha$ ever acts again. If $nu$ is the rightmost cycle of $alpha$ imposing restraint at stage~$s$, then $concatone(alpha, (nu, 1))$, $concatone(alpha, (nu, 2))$, or $concatone(alpha, nu)$ (as appropriate to the parity of $|alpha|$ and the behavior of cycle $nu$) will be accessible at every stage after $s$ at which $alpha$ is accessible. If there is no such $nu$ then $concatone(alpha, -1)$ will be accessible in the same way. But $alpha$ is accessible at every $C$-true stage $t > s_0$.

Otherwise $alpha$ acts infinitely. Suppose first that $|alpha|$ is even and (by #lemmaRef(<lemma2.17>)) let $nu$ be the leftmost cycle of #stalpha which acts infinitely often. By #lemmaRef(<lemma2.18>) choose $s > s_0$ large enough that cycle $nu$ is not reset after stage $s$ by the action of any $alpha$-cycles to its left. Suppose for the moment that $nu^-$ is the rightmost cycle of $alpha$ to the left of $nu$ imposing restraint at stage $s$. (That is, suppose such $nu^-$ exists.) Note that cycle~$nu^-$ will never change state after stage $s$, and so will impose the same restraint forever more. Cycle $nu$ must return
// p.27
infinitely often either to #state(1) (at which time either $concatone(alpha, (nu^-, 1))$ or $concatone(alpha, (nu^-, 2))$ will be accessible as appropriate to the state in which $nu^-$ finds itself) or to #state(4) (so that $concatone(alpha, (nu, 1))$ will be accessible). If there is no such $nu^-$ then the respective cases find $concatone(alpha, -1)$ and $concatone(alpha, (nu, 1))$ accessible.

If $|alpha|$ is odd then the argument is similar, simpler, and omitted.
]

This establishes part 1 of the Proposition for $n = eta$ and we may assume that there is a value, $epsilon$, for $f(eta)$.
We write this value variously as $epsilon = (nu_eta, i_eta)$ (for some $nu_eta in omega^2$ and $i_eta in {1, 2}$, if $|alpha|$ is even), $epsilon = nu_eta in omega$ (if $|alpha|$ is odd), or $epsilon = -1$ (if appropriate). If there is a cycle of #stalpha which acts infinitely often then we denote the leftmost one by $nu^+$. It will be convenient to make the following definition. If $|alpha|$ is even and $i = 1$ or~2, then we say that cycle $nu$ of #stalpha is _lacking for $i$ at stage~$s$_ if, at that stage, cycle $nu$ imposes less restraint than is indicated by an outcome of $(nu, i)$. That is, if $i = 1$ and $nu$ imposes no restraint at stage $s$, or if $i = 2$ and $nu$ imposes only the restraints $restr(A, u)$ and $restr(B, u)$. If $|alpha|$ is odd then we say that cycle $nu$ is _lacking at stage~$s$_ if it imposes no restraint at that stage. #lemma[ Suppose that $nu_eta$ is defined (that is, $epsilon neq -1$). If $|alpha|$ is even then $nu_eta$ is lacking for $i_eta$ at only finitely many stages. If $|alpha|$ is odd then $nu_eta$ is lacking at only finitely many stages. <lemma2.20> ] #proof[ If $alpha$ acts only finitely then the result is trivial. Otherwise, by #lemmaRef(<lemma2.17>) $nu^+$ is defined, and $nu^+ geq nu_eta$. Thus, by #lemmaRef(<lemma2.18>) we may choose $s > s_0$ so large that no $alpha$-cycle to the left of $nu_eta$ acts after stage~$s$. We first give the argument for $|alpha|$ even. Cycle~$nu_eta$ can be lacking at infinitely many stages after $s$ in two ways. #show: doc => setupenum(doc, formats: ("1.",)) + $i_eta = 1$ and $nu_eta$ returns infinitely often to #state(1). But in this case, either $concatone(alpha, nu_eta^-)$ is accessible infinitely often (if there is such a $nu_eta^-$ as defined in the proof of #lemmaRef(<lemma2.19>)) or $concatone(alpha, -1)$ is accessible infinitely often. Both of these contradict the definition of $nu_eta$. + $i_eta = 2$ and $nu_eta$ is infinitely often in a state numbered less than 4. 
Once cycle $nu_eta$ reaches #state(4) it can only return to a lower numbered state by being reset. But by definition this cycle is not reset after stage~$s$, so the only way it can infinitely often be in a state numbered less than 4 is if it never reaches #state(4). This contradicts the definition of $i_eta$.

In the case $|alpha| = 2e + 1$ the argument is again similar and simpler.
]

#lemma[
$restr(f, (eta + 1)) = concatone(alpha, epsilon)$ is cancelled only finitely often.
<lemma2.21>
]

#proof[
If $alpha$ acts only finitely, then $concatone(alpha, epsilon)$ is certainly not cancelled after the last time any cycle of #stalpha acts.

Otherwise, we note that by assumption $alpha$ is cancelled only finitely often, so after stage $s_0$, $concatone(alpha, -1)$ is never cancelled. For other possible values of $epsilon$, $concatone(alpha, epsilon)$ is cancelled only when
#show: doc => setupenum(doc, formats: ("1.",))
+ cycle $nu_eta$ is lacking for $i_eta$ (or just lacking, if $|alpha|$ is odd); or
+ cycle $nu_eta$ of #stalpha enumerates something into or out of $A$, or something into $B$.

By #lemmaRef(<lemma2.20>) cancellations of the first kind happen only finitely often. We claim also that cancellations of the second kind can happen only finitely often. By #lemmaRef(<lemma2.20>) choose $s > s_0$ so large that $nu_eta$ is not lacking for $i_eta$ (or just lacking) after stage~$s$. In particular, cycle $nu_eta$ is not reset after stage~$s$, as in being reset it would (temporarily) be lacking. Thus $nu_eta$ works only with its final witness, $x$ (resp.~$y$), after $s$. But the worst $nu_eta$ can now do is enumerate $x$ into $A$ and out again (or into $B$) once. Thus $concatone(alpha, epsilon)$ is cancelled only finitely often.
]

This establishes part 2 of the Proposition for $n = eta$.

#lemma[
#stalpha satisfies the requirement towards which it was working.
<lemma2.22> ] #proof[ By Lemmas #thmref(<lemma2.17>), #thmref(<lemma2.18>), and #thmref(<lemma2.21>) we have just two possibilities. #show: doc => setupenum(doc, formats: ("1.",)) + Only finitely often does any cycle of #stalpha act. + Either $epsilon neq -1$ and cycle $nu^+$ acts infinitely often, but is only reset finitely often, or $epsilon = -1$ and cycle $(0, 0)$ (resp. 0) returns infinitely often to #state(1) (resp. #state(plabel(0))). We start with the argument for $|alpha| = 2e$. In the first case, some cycle gets permanently stuck in #state(1), #state(4), or #state(6). In any of these cases, we satisfy the requirement through a successful diagonalization. In the second case, let $sigma = nu^+$ if $epsilon neq -1$, and $sigma = (0,0)$ otherwise. Let $s > s_0$ be large enough that cycle~$sigma$ is not reset after stage~$s$, so that it works with the same witness, $x$, after~$s$. The only way that cycle $sigma$ can act infinitely often is if it alternates infinitely between states~1 and~2, or (if $epsilon neq -1$) between states~4 and~5. This implies that at least one of $Phi_e(E_e)$ and $Psi_e(C join A join B)$ is partial. Indeed, suppose that both functions are total and that we bounce infinitely often between #state(1) and #state(2). Let $t > s$ be so large that $(Phi_e(E_e; x))[t] converge$, $restr(E_(e,t), phi_t(x)) = restr(E_e, phi_t(x)) = (restr(hat(Psi)_e(C join A join B), phi_t(x)))[t]$, and each of $C$, $A$, and $B$ have reached their final values on the use of the $hat(Psi)_e(C join A join B)$ computations. Then there is no way that a $C$-change will ever subsequently cause cycle $sigma$ to jump from #state(2) back to #state(1). The case in which $epsilon neq -1$ and $sigma$ alternates infinitely between states~4 and~5 is the same. If $|alpha| = 2e + 1$, finite action again leads to success through diagonalization. The only way that cycle~$nu^+$ can act infinitely often is if it alternates infinitely often between states~#plabel(1) and~#plabel(2). 
We argue as above that in this case $Xi_e(C join A)$ is partial.
]

This establishes part 3 of the Proposition for $n = eta$.

Naturally, #lemmaRef(<lemma2.22>) describes what "really" happens to #stalpha: the construction of $Gamma_j$ and $Delta$ is only a threat to ensure that we get $G$-changes when we need them, and not too many $C$-changes. If $Phi(E)$ and $Psi(C join A join B)$ are both total, then we actually win by diagonalization. If not, we track down a witness to the partialness.

#lemma[
For all sufficiently large $C$-true stages $t$, $restr(f, (eta + 1)) = concatone(alpha, epsilon) subset f_t$.
<lemma2.23>
]

#proof[
Again we start with the case $|alpha| = 2e$. First suppose that $alpha$ acts finitely. Let $q > s_0$ be so large that no cycle of $alpha$ acts after stage~$q$. Then $concatone(alpha, epsilon) subset f_t$ whenever $alpha$ is accessible, provided that also $t > q$. But $alpha subset f_t$ at every $C$-true stage $t$ after $s_0$.

Now suppose that $alpha$ acts infinitely, and that $alpha$'s cycle $nu^+$ jumps infinitely often between states~1 and~2. By #lemmaRef(<lemma2.18>) we may choose $q > s_0$ so that no cycle $(j, k) < nu_eta$ acts after stage $q$, and cycle $nu_eta$ is not in #state(0) at $q$. Notice that, since cycle~$nu_eta$ is not reset after stage $q$, it can never advance past #state(2), for otherwise nothing could ever cause it to return to #state(1).

Now, if $nu^+$ remains in #state(2) at a $C$-true stage $t > q$ then it will never subsequently see a change in $restr(C, u)$. (We use $hat(Psi)$ instead of just $Psi$ in the definition of $Eq(x, s_1)$ for just this purpose.) But this means that $nu^+$ will never return to #state(1). So, as $nu^+$ doesn't advance past #state(2), and doesn't get reset (and hence returned to #state(0)) after $q$, $nu^+$ must be in #state(1) at stage~$t$. But $nu^+$'s being in #state(1) implies that $nu_eta = (nu^+)^-$ (if any) is rightmost in imposing restraint, and $concatone(alpha, epsilon) subset f_t$.
If there is no cycle to the left of $nu^+$ imposing restraint after $q$ then $epsilon = -1$ and $concatone(alpha, -1) subset f_t$.

If $nu^+$ jumps between states~4 and~5 then a similar argument shows that $nu^+$ is in #state(4) at every $C$-true stage $t > q$. But in this case $nu^+ = nu_eta$ and $i_eta = 1$, so at such stages~$t$, $concatone(alpha, epsilon) = concatone(alpha, (nu_eta, i_eta)) subset f_t$.

The case for $|alpha| = 2e + 1$ is essentially identical, and is omitted. The lemma is proved.
]

This establishes the last part of the Proposition for $n = eta$ and the inductive step is complete. #thmref(<prop2.16>)[Proposition] is proved.
#qed

Thus all the requirements are satisfied, and we have constructed $D = C join A$ and $F = C join A join B$, d.r.e.~sets forming a proper interval $turinginterval(D, F) subset turinginterval(C, zerojump)$ free of r.e.~degrees. It remains to show that there is enough permitting in the construction to ensure that $F leqt G$. We follow the method and notation of~@LaForte. For $alpha in T$ we let
$ e^alpha = max({j, k st (exists beta in T)[concatone(beta, (j, k)) subset alpha]} union {k st (exists beta in T)[concatone(beta, k) subset alpha]}) $
be the largest number which occurs in the path leading to $alpha$. We also define
$ s^alpha = min{s st restr(G_s, e^alpha) = restr(G, e^alpha)}. $
(Note that the function $lambda alpha[s^alpha]$ is $G$-recursive.) The point is that none of the cycles (of the strategies) leading to $alpha$ will ever see any of the $G$-changes they are looking for after stage $s^alpha$.

The following two lemmas are technical, but basically say that if $t > s^alpha$ is a $C$-true stage, then either #stalpha is accessible at stage~$t$, or is cancelled before ever being accessible again. This allows us to get a handle on the delayed permitting nature of the argument.
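The bookkeeping behind $e^alpha$ and $s^alpha$ is simple enough to sketch concretely. The following Python fragment is purely illustrative and is not part of the construction; the names `e_alpha` and `s_alpha` are ours, a node is represented as the list of outcome labels along the path to it, and (as a simplifying assumption) the last entry of a finite list of stage-by-stage approximations stands in for the true $G$.

```python
# Illustrative sketch only. `path` is the list of outcome labels leading to
# alpha: each is a pair (j, k) (an R-strategy outcome), a single number k
# (a P-strategy outcome), or -1 (which contributes no numbers).
def e_alpha(path):
    nums = []
    for outcome in path:
        if outcome == -1:
            continue
        if isinstance(outcome, tuple):
            nums.extend(outcome)   # both j and k occur along the path
        else:
            nums.append(outcome)   # just k
    return max(nums, default=0)

# `G_stages[s]` is the stage-s approximation to G (a set of naturals); we
# pretend the final entry is correct, standing in for the true G.
def s_alpha(G_stages, e):
    target = {x for x in G_stages[-1] if x < e}
    for s, G_s in enumerate(G_stages):
        if {x for x in G_s if x < e} == target:
            return s   # least stage at which G_s is correct below e
```

The point of the sketch is only that $s^alpha$ depends on $G$ solely through its restriction below $e^alpha$, which is why a cycle leading to $alpha$ sees no further $G$-changes after stage $s^alpha$.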
#lemma[ Suppose that $t > s^(concatone(alpha, nu))$ is a $C$-true stage, and that $alpha$'s cycle $nu$ is in a state other than 0, 1, and~4 (if $|alpha|$ is even), or a state other than #plabel(0) and~#plabel(1) (if $|alpha|$ is odd). Then if cycle $nu$ does not act at stage $t$ it will never act subsequently without first being reset. <lemma2.24> ] #proof[ We consider the case $|alpha| = 2e$. The case $|alpha| = 2e + 1$ is much the same, and simpler, as we don't have to worry about the parameter~$mu$. We immediately dispense with the case in which $nu$ is in #state(7) or #state(8) at stage~$t$, as by construction such a cycle needs to be reset to act again. Thus $nu$ is in #state(2), 3, 5, or~6. Since $t$ is $C$-true, $nu$'s failure to act at $t$ due to a $C$-change (so as to make a state-transition #trans(2, 1), #trans(3, 4), #trans(5, 4), or #trans(6, 7)) means that such action is also impossible in the future. Also, $t > s^(concatone(alpha, nu))$, so (writing $nu = (j,k)$), ${j, k} sect G = {j, k} sect G_t$, and so by stage~$t$ cycle~$nu$ will have seen all of the explicit $G$-permission it will ever see. Finally, if $nu$ makes the transition #trans(2, 3) at stage~$t$, then the value of $mu$ just calculated is based on some computations in some cycle to the right, and these computations will never be subsequently injured by a $C$-change, as $t$ is $C$-true. Thus cycle $nu$ will be stuck in #state(3) until it is reset. The upshot of all of this is that by not acting at $t$, cycle $nu$ has demonstrated that it is unable ever subsequently to act without first being reset. ] #lemma[ Suppose that $alpha subset f_s$, $t > max{s, s^alpha}$ is $C$-true, and $s' > t$. Then for $beta subset alpha$, if $beta subset.not f_t$ but $beta subset f_(s')$ then there is a $tau in (s, s']$ such that $beta$ is cancelled at stage~$tau$. <lemma2.25> ] #proof[ We proceed by induction on the length of $beta subset alpha$. 
As $emptyset$ is always accessible we assume the result for $beta$ and first consider $beta^+ = concatone(beta, nu) subset alpha$. So assume $beta^+ subset f_(s')$ but $beta^+ subset.not f_t$. If also $beta subset.not f_t$ then by the inductive hypothesis $beta$ is cancelled at some stage in $(s, s']$ which leads to $beta^+$ being cancelled as well. So it suffices to assume that $beta subset f_t$ and that $beta$ is never cancelled in $(s, s']$.

Suppose cycle~$nu$ of #strat($beta$) is reset at some $tau in (s, s']$. As $beta$ isn't cancelled at $tau$, $nu$ is reset by the action at $tau$ of some cycle $nu' < nu$ of #strat($beta$). By construction, this leads to the cancellation of node $beta^+$.

(In what follows it will be convenient to refer to a cycle which is not in #state(0) or #state(plabel(0)) as _awake_. Cycles in #state(0) or #state(plabel(0)) are _asleep_.)

So it remains to consider the case in which $nu$ is not reset at any $tau in (s, s']$. The following argument applies necessarily to the case $|alpha| = 2e$. The case $|alpha| = 2e + 1$ is much the same. It is also simpler because we do not have to worry about the parameter~$mu$. Write $nu = (j, k)$.

$beta^+ = concatone(beta, nu) subset f_s$, so cycle $nu$ is awake at stage $s$. As it is not reset in $(s, s']$ it remains awake during this period, and in particular is awake at stage~$t$. But $beta^+ subset.not f_t$, so some cycle to the right of $nu$ must also be awake at $t$. This means that $(j, k)$ must be in one of the states 2, 3, 5, 7, or~8 by the Pattern Lemma. Now, $t > s^alpha geq s^(beta^+)$, so we may apply #lemmaRef(<lemma2.24>) to see that cycle~$nu$ does not act before being first reset. As it is not reset in $(s, s']$, it cannot act at or before $s'$, and $concatone(beta, (j, k)) subset.not f_(s')$, a contradiction.

If, instead, $beta^+ = concatone(beta, -1) subset alpha$, assume that $beta^+ subset.not f_t$.
This means that, at stage $t$, some (leftmost) cycle~$chi$ of #strat($beta$) is imposing restraint $r$. As $t$ is $C$-true this restraint is based on computations which will never be injured by a later $C$-change. Thus $chi$ will always impose at least $r$-much restraint unless #strat($beta$) (and hence #strat($beta^+$)) is cancelled. Thus, if $beta^+ subset f_(s')$ then #strat($beta^+$) is cancelled by stage~$s'$. ] Now we can show that the permitting works. #lemma[ $A join B leqt G$. <lemma2.26> ] #proof[ Let $x in omega$. If $x$ is not chosen as a witness by stage~$x$ then it never will be, and $x in.not A union B$. Otherwise, suppose $x$ is chosen at stage $s_0$ to be the witness for a cycle $nu = (j,k)$ of #stalpha of even length. Note that $alpha subset f_(s_0)$, and that $x in.not B$. If $k in.not G$ or $G_(s_0)(k) = 1$ then $alpha$'s cycle~$nu$ will never get the first permission that it needs, and $x in.not A$. Suppose now that $k in setdiff(G_s, G_(s-1))$. Let $t$ be the first $C$-true stage larger than each of $s$, $s_0$, and $s^(concatone(alpha, nu))$. We claim that if $x$ is not enumerated into $A$ by stage $t$ it never will be. Well, if $alpha subset.not f_t$ then by #lemmaRef(<lemma2.25>) #stalpha will be cancelled (and witness $x$ forgotten) before $alpha$ gets a chance to act again. So if $x$ hasn't entered $A$ before~$t$, we must have $alpha subset f_t$ if $x$ is ever to have a chance. If some cycle $(j', k') < nu$ of #stalpha acts at $t$ then cycle~$nu$ will be reset, and its witness forgotten. Otherwise, if cycle $nu$ acts at or after stage~$t$ due only to $Eq(x, s_1)$ holding, then certainly $x in.not A$, as by construction cycle~$(j,k)$ will jump straight to #state(7) rather than attempt to enumerate $x$ into $A$. If $nu$ is in #state(4) // p.32 at stage~$t$ then $x$ would have already entered $A$. 
So we may assume that cycle~$nu$ is in a state other than 0, 1, or~4 at stage~$t$, and by #lemmaRef(<lemma2.24>) is unable ever to act again without getting reset first. So if $x in.not A_t$, $x in.not A$. If $x in A_t$ we must check to see if $x$ ever gets removed from $A$. If $j in.not G$ then cycle $nu$ will never see the necessary permission, and $x in A$. Otherwise, let $j in setdiff(G_w, G_(w-1))$. Let $t'$ be the first $C$-true stage greater than both $t > s^(concatone(alpha, nu))$ and $w$. The same reasoning as before shows that $x$ will have been removed from $A$ by stage $t'$ if it ever will be. Thus $A(x) = A_(t')(x)$.

If $x$ is chosen at $s_0$ to be a witness for cycle $k$ of #stalpha of _odd_ length then the same basic argument applies, but now we need not worry about $x$ being enumerated out of $B$: we just check if it ever gets enumerated in.

All of the above can be done by asking questions of a $C$ oracle and a $G$ oracle. As $C ltt G$, a $G$ oracle suffices, and $A join B leqt G$.
]

////////////////////////////////////////
// Chapter 3
// p.33

= Avoiding $n$-r.e. degrees with dREA sets <chapter3>

== Introduction

Soare and Stob prove (see @SoareStob1982)

#theorem[
Given a nonrecursive, r.e. set $G$ there is an $reInAbove(G)$ set $F$ not of r.e. degree.
<theorem3.1>
]

A question arises: what other sorts of degrees can we avoid in this way? For example, can we always construct $F$ to be not of d.r.e. degree? The answer is no:

#theorem(name: [Arslanov, LaForte, and Slaman; @ALS1998])[
There exists an r.e. set $G notequivt emptyset$ such that every $reInAbove(G)$ set $F$ is of d.r.e. degree.
<theorem3.2>
]

The question is then how we might relax the requirements on the construction of $F$. Rather than work with a fixed "base" set $G$, Jockusch and Shore consider what happens if the choice of r.e.~$G$ is completely free (see @JockuschShore1984[Thm1.6a]). That is, $F$ is required only to be 2-REA, _i.e._ $reInAbove(G)$ for _some_ r.e.
set~$G$, which we are free to construct:

#theorem(name: [Jockusch and Shore])[
Let $A_0, A_1, dots$ be uniformly recursive in $zerojump$. Then there is a 2-REA set $F leqt zerojump$ such that for all $i geq 0$, $F notequivt A_i$.
<theorem3.3>
]

Here "uniformly recursive in $zerojump$" means that there is a $zerojump$-recursive function $f$ such that $A_i(x) = f(i, x)$ for all $i$ and $x$. This is an important result, as many interesting families of sets are uniformly recursive in~$zerojump$. Examples are the $n$-r.e. sets, for each $n$, and the union over $n$ of these families. So we immediately have

#theorem(name: [Jockusch and Shore])[
For each $n$, there is a 2-REA set $F_n leqt zerojump$ not of $n$-r.e. degree. In fact, there is a single 2-REA set $F leqt zerojump$ which fails to be of $n$-r.e. degree for any $n geq 0$.
<theorem3.4>
]

Rather than give up control over $G$ we will give up some rigidity in the way $F$ is enumerated from $G$. When constructing $F$ to be $dreInAbove(G)$, the following result is obtained.

#theorem(name: [Cholak and Hinman; @CholakHinman])[
Given any nonrecursive, r.e. set $G$ there is a $dreInAbove(G)$ set $F$ not of d.r.e. degree.
<theorem3.5>
]

This result has been strengthened by Hinman, @Hinman:

#theorem[
Given a nonrecursive, r.e. set $G$ there is a $dreInAbove(G)$ set $F$ not of 3-r.e. degree.
<theorem3.6>
]

Can we avoid 4-r.e. degrees _via_ $dreInAbove(G)$ sets in this way? $n$-r.e. degrees in general? We cannot answer these questions at the moment. However, if we drop the requirement that the constructed set be Turing-above $G$, we can avoid $n$-r.e. degrees, and at the same time place the "base set" $G$ (which we now call $D$) in a prescribed r.e.~interval.

#theorem[
For any $n in omega$ and any r.e. sets $C ltt G$ there are an r.e. set $D in turinginterval(C, G)$ and a $dreInAbove(D)$ set $F$ such that $F$ is not of $n$-r.e. degree.
<theorem3.7>
]

Note that $D leqt F$, but we do not know whether or not we can ensure $G leqt F$.
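To fix intuitions about the hierarchy being avoided, the $n$-r.e. condition can be sketched computationally. The fragment below is illustrative only and is not part of any construction in this thesis: it models one column of a recursive approximation, $A_0(x), A_1(x), dots$, as a list of 0/1 values, starting at 0 and changing its mind at most $n$ times.

```python
# Illustrative sketch (not part of the construction): a set is n-r.e. if it
# has a recursive approximation A_0, A_1, ... with A_0(x) = 0 which changes
# its mind at most n times on each argument x.  We model the column
# A_0(x), A_1(x), ... as a list of 0/1 values.
def mind_changes(column):
    return sum(1 for a, b in zip(column, column[1:]) if a != b)

def consistent_with_n_re(column, n):
    return column[0] == 0 and mind_changes(column) <= n

r_e   = [0, 0, 1, 1, 1]     # one change: r.e.-style (1-r.e.)
d_r_e = [0, 1, 1, 0, 0]     # in, then out again: d.r.e.-style (2-r.e.)
four  = [0, 1, 0, 1, 0]     # four changes: needs n >= 4

assert consistent_with_n_re(r_e, 1)
assert consistent_with_n_re(d_r_e, 2) and not consistent_with_n_re(d_r_e, 1)
assert consistent_with_n_re(four, 4) and not consistent_with_n_re(four, 3)
```

In these terms, the strategy of @section3.2 defeats a 4-r.e. opponent by forcing a fifth mind-change, which the opponent cannot afford.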
This result is in some sense a middle point between Theorems~#thmref(<theorem3.1>) and~#thmref(<theorem3.4>). We maintain some control over the base set by allowing more flexibility in the construction of $F$ from it.

== The construction for the case $n = 4$ <section3.2>

#let udvd = $setdiff(U^D, V^D)$

We start by giving a proof for the case $n = 4$. In @section3.4 we comment on the changes needed for larger values of $n$.

We will construct $D = C join A leqt G$ and $F = C join A join (udvd)$ with the required properties. We must meet all of the requirements
$
R_e: quad udvd neq Phi_e(E_e) thick or thick E_e neq Psi_e(C join A join (udvd))
$
in which ${angletup(E_e, Phi_e, Psi_e)}_(e geq 0)$ enumerates all triples in which $E_e$ is a 4-r.e. set and $Phi_e$ and $Psi_e$ are recursive functionals. We will ensure that $D leqt G$ by direct permitting. As in #chapRef(2) this permitting is delayed, as there will be "gaps" in the stages at which any particular strategy is accessible.

It will be convenient to enumerate elements into $U^(C join A)$ and $V^(C join A)$ with separate $C$- and $A$-uses. Thus we will actually be enumerating into r.e. sets $U$ and $V$ axioms of the form $angletup(x, Z_1, Z_2)$, where $Z_1$ and $Z_2$ are finite sets thought of as initial segments (correct at some stage $s$) of $C$ and $A$ respectively. Where the structure $D = C join A$ is important, we will write it out in full. In other places we will just use $D$.

=== The basic module <section3.2.1>

The construction used to satisfy the requirements is (loosely) based on the basic module given in @CLW1989. It is similar to the module in #chapRef(2). The strategy for a single requirement consists of a
// p.35
(potentially unbounded) number of cycles, each of which makes a very simplistic attempt to satisfy the requirement. We argue that if no cycle succeeds then we have $G leqt C$, a contradiction.

So, fix $e in omega$. We describe the strategy for satisfying requirement $R_e$.
To simplify notation we write $angletup(R, Phi, Psi)$ for $angletup(E_e, Phi_e, Psi_e)$.

In #chapRef(2) we avoided an r.e.~opponent by changing our constructed set twice. When avoiding a 4-r.e. set we must change our set 5 times. This is not as bad as it seems, as we have sweeping powers over the set $F$ that we construct. Firstly, $F$ is (the join of an r.e.~set with) the difference of two r.e.[$D$] sets, and membership of individual numbers in such sets may change many times during a construction due to changes in $D$. Furthermore, $D = C join A$ and we have complete control over $A$. This will allow us to eject elements from $udvd$ with great flexibility.

Now, as we wish to ensure $A leqt G$ we must ask for $G$-permission each time we put an element into~$A$. It turns out that in the $n = 4$ case we must do this twice, which leads to a two-dimensional cycle layout, as in #chapRef(2). Thus, the strategy consists of an $(omega^2)$-sequence of cycles ordered lexicographically. Cycle $(0,0)$ starts first. Cycle $chi = (j, k)$ may start $(j, k+1)$ and $(j+1, 0)$, as well as stop all cycles $> chi$. Cycle $chi$ may define the values $Gamma_j(C; k)$ and $Delta(C\; j)$. Again we refer to rows of cycles, $row(j) = {(j,k) st k in omega}$.

Cycles may declare various numbers to be _levers_. These are used when we want to remove some element, $x$, from $V^D$. When $x$ is enumerated into $V^D$ we choose some new large element, $lambda$, not already a member of $D$ (actually, not a member of $A$, over which we have control) and put $x$ into $V^D$ with an $A$-use that is larger than $lambda$. When it comes time to remove $x$ from $V^D$ we "pull the lever": we enumerate $lambda$ into $A$, thus ejecting $x$ from $V^D$.

Each cycle begins in #state(0). A cycle is _started_ by letting it pass from #state(0) to another state, as determined by its history in much the same way as in #chapRef(2); we have the same cascading effect.
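The lever mechanism can be sketched in miniature. In the fragment below (illustrative only; the class and all names are ours, not part of the construction) an axiom records the part of $A$ it depends on, so that enumerating the lever $lambda$ into $A$ invalidates the axiom and ejects the witness from $V^A$.

```python
# Miniature sketch of the lever mechanism (hypothetical names, illustration
# only).  An axiom keeps x in V^A only while A still agrees, below the
# A-use, with the A-segment recorded when the axiom was enumerated; so
# enumerating the lever lam into A ejects x.
class REInA:
    def __init__(self):
        self.axioms = []          # entries (x, A_segment, use)

    def axiom(self, x, A, use):   # enumerate x, recording A below `use`
        self.axioms.append((x, frozenset(a for a in A if a < use), use))

    def members(self, A):
        return {x for (x, seg, use) in self.axioms
                if frozenset(a for a in A if a < use) == seg}

A = set()
V = REInA()
lam = 5                        # the lever: new, not yet in A
V.axiom(3, A, lam + 1)         # put witness x = 3 into V^A with A-use lam + 1
assert 3 in V.members(A)
A.add(lam)                     # "pull the lever": enumerate lam into A
assert 3 not in V.members(A)   # x is ejected from V^A
```

The real construction enumerates axioms with separate $C$- and $A$-parts, but the $A$-part alone already shows why ejection costs one $G$-permission: it is the enumeration of $lambda$ into $A$ that must be permitted.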
A cycle is _reset_ by putting it back into #state(0), returning its restraints to 0 and undefining its parameters $x, u, tilde(u), v, tilde(v), lambda^1(x)$, and $lambda^2(x)$. A cycle is _abandoned_ by returning its restraints to 0 and stopping all activity for that cycle. This is done in much the same situations as in #chapRef(2): when a cycle has failed in its attempt to satisfy $R_e$. A cycle is said to _act_ whenever it moves from one state to another, except in the case of the bookkeeping transition from #state(4) to #state(5).

Cycle $chi = (j, k)$ proceeds as follows.

#show: doc => setupenum(doc, formats: ("1.", "(i)",))

0. Until given the go-ahead, do nothing. When told to start, if $k=0$ and row $R_j$ has previously been abandoned _in toto_, advance directly to #state(11) and follow the instructions there. Otherwise, check if cycle $chi$ has been abandoned in the past. In this case jump straight to #state(10) and follow the instructions there. Otherwise, choose a witness~$x$,
  // p.36
  larger than any number mentioned in the construction so far, including all currently defined $(udvd)$-restraints, and larger than both $j$ and~$k$. Advance to #state(1).

+ Let $Eq(x, s)$ denote the condition
  $
  ((udvd)(x))[s] = (Phi(E\; x))[s] #h(1fr) \
  #h(1fr) and (restr(E, phi(x)))[s] = (restr(hat(Psi)(C join A join (udvd)), phi(x)))[s].
  $
  Wait for a stage $s_1$ at which $Eq(x, s_1)$ holds.

  There are two kinds of computation use we must consider. The first is $u = (hat(psi) phi(x))[s_1]$, the total (direct) use of the $hat(Psi)(C join A join udvd)$ computations. Also implied here are the $C$- and $A$-uses needed to enumerate that part of $setdiff(U^(C join A), V^(C join A))$ used in the computation. So, we define
  $
  tilde(u) = max({eta_C^U(x), eta_A^U(x) st x in restr(U^(C join A), u)} union {eta_C^V(x), eta_A^V(x) st x in restr(V^(C join A), u)})
  $
  where $eta_C^U(x)$ is the $C$-use of the axiom which witnesses the membership of $x$ in $U^(C join A)$, and the other terms are defined analogously.
  The point is that a $C$- or $A$-change below $tilde(u)$ may destroy the important computations. Conversely, the definition ensures that a change in $restr((udvd), u)$ is accompanied (nay, caused!) by a change in $restr(C, tilde(u))$ or $restr(A, tilde(u))$.

  We restrain $restr((udvd), u)$ and $restr(A, tilde(u))$, enumerate $x$ into $U^(C join A)$ with $C$-use $tilde(u)$ and $A$-use~0, and advance to #state(2).

  [Note that if $s_1$ does not exist then $x$ is already a witness to the success of our strategy. The same comment applies to $s_2, dots, s_5$ below.]

+ Wait for a stage $s_2$ at which either
  + $restr(C_(s_2), tilde(u)) neq restr(C_(s_1), tilde(u))$; or
  + $restr(C_(s_2), tilde(u)) = restr(C_(s_1), tilde(u))$ and $Eq(x, s_2)$ holds.

  If (i) occurs then return to #state(1), setting the $(udvd)$- and $A$-restraints back to~0. Note that the change in $C$ automatically ejects the witness $x$ from $U^D$.

  If we have (ii) let $v = (hat(psi)phi(x))[s_2] > u$, the total use of the $hat(Psi)(C join A join udvd)$ computations at stage~$s_2$, and define $tilde(v) > tilde(u)$ analogously to~$tilde(u)$. Note that because of the enumeration at #state(1) into $(udvd)$ we have $(Phi(E\; x))[s_2] = 1 neq 0 = (Phi(E\; x))[s_1]$, so that $restr(E_(s_2), phi_(s_1)(x)) neq restr(E_(s_1), phi_(s_1)(x))$. Also note that by reaching this point we still have $restr(C_(s_2), tilde(u)) = restr(C_(s_1), tilde(u))$. We set
  $
  lambda^1(x) = (min lambda)[lambda > tilde(v) sand lambda > k sand lambda > s_2 sand lambda in.not A_(s_2) \
  and lambda "is larger than any number mentioned in the construction so far"].
  $
  // p.37
  Declare $lambda^1(x)$ to be a lever, restrain $restr((udvd), v)$ and $restr(A, lambda^1(x) + 1)$, and enumerate $x$ into $V^(C join A)$ with $C$-use $tilde(v)$ and $A$-use $lambda^1(x) + 1$. Note that now, since we have just removed $x$ from $udvd$, we have
  $
  (restr((C join A join (udvd)), u))[s_2 + 1] = (restr((C join A join (udvd)), u))[s_1].
  $
  Advance to #state(3).
+ Wait for a stage $s_3$ at which either
  + $restr(C_(s_3), tilde(u)) neq restr(C_(s_1), tilde(u))$;
  + $restr(C_(s_3), tilde(u)) = restr(C_(s_1), tilde(u))$ but $restr(C_(s_3), tilde(v)) neq restr(C_(s_2), tilde(v))$; or
  + we see no appropriate $C$-change, but $Eq(x, s_3)$ holds.

  In case (i), return to #state(1), setting the cycle's restraints back to 0. In case (ii), return to #state(2), setting the $(udvd)$-restraint to $u$, and the $A$-restraint to $tilde(u)$. In either of these cases we also discard our choice of the lever, $lambda^1(x)$. Note that in case~(i) (resp.~(ii)), $x$ has been ejected from both $U^D$ and $V^D$ (resp. from $V^D$) by the change in $C$. In either of these cases we also reset all cycles $> chi$.

  In case (iii) we have $restr(E_(s_3), phi_(s_1)(x)) = restr(E_(s_1), phi_(s_1)(x))$, so there is a $y < phi_(s_1)(x)$ such that $E_(s_3)(y) = E_(s_1)(y) neq E_(s_2)(y)$. Thus $E$ has changed (at least) twice on $y$ so far. Fix this $y$ in subsequent discussion.

  We wish to continue our tactic of reacting to changes in $E$ by changing $(udvd)(x)$. The witness $x$ is already in both of $U^D$ and $V^D$, so to get it back into the difference we must remove it from $V^D$. We have a mechanism for doing this: pulling the lever $lambda^1(x)$. However, enumerating $lambda^1(x)$ into $A$ means asking for $G$-permission. We do this now. If $G_(s_3)(k) = 1$ we have no hope of getting the $G$-change we rely on; jump straight to #state(10) and follow the instructions there. Otherwise we prepare to wait for $G(k)$ to change to get the permission we need. Define $Gamma_j(C; k) = G_(s_3)(k) thin (=0)$ with use $tilde(v)$ and start cycle $(j, k+1)$ to run simultaneously. Advance to #state(4).

+ Wait for a stage $t_1$ at which either
  + $restr(C_(t_1), tilde(u)) neq restr(C_(s_1), tilde(u))$;
  + $restr(C_(t_1), tilde(u)) = restr(C_(s_1), tilde(u))$ but $restr(C_(t_1), tilde(v)) neq restr(C_(s_2), tilde(v))$; or
  + $G_(t_1)(k) neq G_(s_3)(k)$.
// p.38 In cases (i) and (ii) we reset all the cycles $> chi$ and behave as we did in #state(3), returning to #state(1) or #state(2) as appropriate. We also declare $lambda^1(x)$ not to be a lever any more. In case (iii) we have two subcases, just as in #state(2) of the strategy for $R_e$-requirements in #chapRef(2): #[ #show: doc => setupenum(doc, formats: ("1.", "(a)",)) + If some cycle $chi'$ of row $row(j)$ is currently in #state(8) or #state(9) (as in #chapRef(2) there will be at most one such cycle) we set the marker $mu(x) = tilde(v)_(t_1)(chi')$ and advance to #state(5). This transition does not count as an action. + Otherwise no such $chi'$ exists and we reset all cycles $> chi$, enumerate $lambda^1(x)$ into $A$ (so that $x$ re-enters $udvd$) and advance to #state(6). ] + Wait for a stage $t_2$ such that either + $restr(C_(t_2), tilde(u)) neq restr(C_(s_1), tilde(u))$; + $restr(C_(t_2), tilde(u)) = restr(C_(s_1), tilde(u))$ but $restr(C_(t_2), tilde(v)) neq restr(C_(s_2), tilde(v))$; or + $restr(C_(t_2), tilde(v)) = restr(C_(s_2), tilde(v))$ but $restr(C_(t_2), mu(x)) neq restr(C_(t_1), mu(x))$. In cases (i) and (ii) we behave as we did in #state(3). In case (iii) reset all cycles $> chi$, enumerate $lambda^1(x)$ into $A$ and advance to #state(6). + [Once we reach this point, any subsequent change in $restr(C, tilde(v))$ from its shape at stage~$s_2$ is disastrous to our underlying computations. By taking advantage of the change in $G(k)$ to enumerate our lever we have passed the point of no return and cannot cope with a $C$-change by going back to #state(1) or #state(2). However, as in #chapRef(2) such a $C$-change gives us the small victory of a correct definition of the value $Gamma_j(C; k)$. So, if we ever subsequently see such a change in $restr(C, tilde(v))$, reset all cycles $> chi$ and jump straight to #state(10). This instruction is implicit in all the states that follow, up to #state(10) itself.] 
  Wait for a stage $s_4$ such that $Eq(x, s_4)$ holds. Now, since
  $
  (restr((C join A join (udvd)), v))[s_4] = (restr((C join A join (udvd)), v))[s_2]
  $
  we must have that $restr(E_(s_4), phi_(s_1)(x)) = restr(E_(s_2), phi_(s_1)(x))$. Thus $E_(s_4)(y) = E_(s_2)(y) neq E_(s_3)(y) = E_(s_1)(y)$, and $E$ has now changed 3~times on $y$.

  We prepare to enumerate $x$ back into $V^D$ by defining another lever:
  $
  lambda^2(x) = (min lambda)[lambda > tilde(v) sand lambda > j sand lambda > s_4 sand lambda in.not A_(s_4) \
  and lambda "is larger than any number mentioned in the construction so far"].
  $
  Declare $lambda^2(x)$ to be a lever and restrain $restr(A, lambda^2(x) + 1)$. (The restraint $restr((udvd), v)$ is still in place from before.) Enumerate $x$ into $V^(C join A)$ with $C$-use $tilde(v)$ and $A$-use $lambda^2(x) + 1$.
  // p.39
  This enumeration ensures that
  $
  (restr((C join A join (udvd)), u))[s_4 + 1] = (restr((C join A join (udvd)), u))[s_1].
  $
  Advance to #state(7).

+ Wait for a stage $s_5$ such that $Eq(x, s_5)$ holds. Now we have $restr(E_(s_5), phi_(s_1)(x)) = restr(E_(s_1), phi_(s_1)(x))$ so that $E_(s_5)(y) = E_(s_3)(y) = E_(s_1)(y) neq E_(s_4)(y) = E_(s_2)(y)$. $E$ has changed 4~times on $y$ and, being 4-r.e., can't change again.

  We want to put $x$ back into $udvd$ to take advantage of the fact that $restr(E, phi_(s_1)(x))$ can't return to its $s_2$ shape. This entails pulling the lever $lambda^2(x)$, which means asking for $G$-permission again. If $G_(s_5)(j) = 1$ already, jump straight to #state(11) and follow the instructions there. Otherwise set $Delta(C \; j) = G_(s_5)(j) thin (= 0)$ with use $tilde(v)$, and start cycle $(j+1, 0)$ to run simultaneously. Advance to #state(8).

+ Wait for a stage $t_3$ such that $G_(t_3)(j) neq G_(s_5)(j)$. Then reset all cycles to the right of $chi$, enumerate $lambda^2(x)$ into $A$, and advance to #state(9).

+ Wait for a stage $t_4$ such that $restr(C_(t_4), tilde(v)) neq restr(C_(s_2), tilde(v))$.
  (We make explicit the implicit instruction mentioned in #state(6).) If this happens, advance to #state(11). Otherwise $restr(C, tilde(v)) = restr(C_(s_2), tilde(v))$ and we satisfy the requirement because $E(y)$ cannot change any more:
  $
  restr(E, phi_(s_1)(x)) & neq (restr(E, phi_(s_1)(x)))[s_2] \
  & = (restr(hat(Psi)(C join A join (udvd)), phi_(s_1)(x)))[s_2] \
  & = restr(Psi(C join A join (udvd)), phi_(s_1)(x)).
  $

+ This state is analogous to #state(7) in #chapRef(2). If we arrive here it is safe and accurate to set $Gamma_j(C; k) = 1$ with use~0. Do so (unless it has already been done), (permanently) abandon cycle $(j, k)$, and start cycle $(j, k+1)$.

+ Arriving here means we can with confidence set $Delta(C\; j) = 1$ with use~0. Do so (unless it has already been done), (permanently) abandon row $row(j)$, and start cycle $(j+1, 0)$. For technical reasons also reset every cycle in row $row(j)$ and put cycle $(j, 0)$ into #state(11).

=== Combining the modules

The basic modules are combined in much the same way as in #chapRef(2), with a tree. However, there is a very important difference. In #chapRef(2) a cycle could act infinitely often without being reset: it could bounce back and forth between states 1 and~2, or between states 4 and~5. It was important in that construction that such infinite action was not accompanied by any enumerations into or out of the sets
// p.40
under construction. The proof of #lemmaRef(<lemma2.21>) depended on this fact: after a cycle is reset for the last time it can only cause finitely much enumeration. In the present construction, however, this is not true. A cycle returning infinitely often to #state(1) (or to #state(2)) must infinitely often change the value of $(udvd)(x)$, only to have it changed back again when a $C$-change causes the return to #state(1). We need a way to deal with this. In #chapRef(2) we used multiple outcomes for each cycle.
We make use of them again, both to remove the need for a path restraint, and to deal with the potentially infinite changes in $(udvd)(x)$ mentioned above. For each cycle~$nu$ of the basic strategy there are six fundamentally different situations at stage~$s$.

#[
#set align(center)
#tablex(
  columns: (1.3in,) * 4,
  rows: 3em,
  align: horizon + center,
  [$nu$'s state], [$x in (udvd)$?], [Restraint on \ $(udvd)$], [Restraint on $A$],
  $0, 1, 10, 11$, [doesn't\ matter], $0$, $0$,
  $2$, [yes], $u$, $tilde(u)$,
  $3, 4, 5$, [no], $v$, $(lambda^1(x) + 1)[s]$,
  $6$, [yes], $v$, $(lambda^1(x) + 1)[s]$,
  $7, 8$, [no], $v$, $(lambda^2(x) + 1)[s]$,
  $9$, [yes], $v$, $(lambda^2(x) + 1)[s]$
)
]

(The only state in the first row to which $nu$ can return infinitely often without being reset infinitely often is #state(1), and whenever $nu$ is in this state $x(nu) in.not (udvd)[s]$. This is why we have a "doesn't matter" in this row.)

We will have a separate outcome for each of these possibilities but the first. This first possibility is dealt with, as in #chapRef(2), by using the rightmost cycle to the left which imposes restraint. So let $Lambda = {-1} union (omega^2 times {1, 2, 3, 4, 5})$, ordered lexicographically with $-1$ coming first. Now let $T = finseq(Lambda)$ with the standard partial order $<_L$. As before, we make no distinction between a node of the tree and (instance of the) strategy it is implementing. The node $alpha in T$ attempts to satisfy requirement $R_(|alpha|)$.

A strategy is _cancelled_ by resetting all of its cycles and discarding any functionals it may have partially defined. Any parameter of a strategy keeps its assigned value until that value is redefined or undefined.

The construction proceeds as follows.

#stage-hdr(0) All strategies are cancelled.

// p.41
#stage-hdr($s+1$) We define, in substages $t < s$, a finite path $f_(s+1)$ through the tree, of length $s$. Suppose $alpha = (restr(f_(s+1), t)) in T$ has been defined by substage $t-1$.
If no cycle of #stalpha has been started since $alpha$ was last cancelled then start $alpha$'s cycle $(0,0)$ and set $nextval = -1$. Otherwise let any cycles of #stalpha able to make the transition from #state(4) to #state(5) do so. Let any cycle forced solely by a $C$-change to change state do so. There are now two cases. - #case(1) Some leftmost cycle $nu$ of #stalpha is able to act. #let bigS = $sans(upright("S"))$ We allow cycle $nu$ to act. Let $lambda$ be the rightmost cycle of #stalpha now imposing restraint of some sort (if there is such a cycle). Let $lambda$ be in state~#bigS (note that $bigS neq 0, 1, 10, 11$) and let $i$ be defined by $ i = cases( 1 quad & "if" bigS = 2\,, 2 & "if" bigS = 3\, 4\, "or" 5\,, 3 & "if" bigS = 6\,, 4 & "if" bigS = 7 "or" 8\,, 5 & "if" bigS = 9. ) $ Now set $nextval = (nu, i)$. If there is no such cycle $lambda$ put $nextval = -1$. In any case, cancel all strategies $beta$ with $concatone(alpha, nextval) <_L beta$. - #case(2) No cycle of #stalpha is able to act. We do nothing at this substage. Define $nextval$ just as above. There is nothing to cancel. If $t + 1 < s$ we advance to substage $t+1$. A node $alpha$ is _accessible_ at stage $s+1$ if $alpha subset f_(s+1)$. One of the points of multiple outcomes for each cycle is to cope with the coming and going of elements of $udvd$ as $C$ changes. It is important to observe that every time $concatone(alpha, (nu, i))$ ($i = 1, 2, 3, 4, 5$) is accessible, $(udvd)(x(alpha, nu))$ is the same, where $x(alpha, nu)$ is the witness chosen by cycle~$nu$ of #stalpha. == Verification for $n = 4$ <section3.3> At heart this construction is very like the one in #chapRef(2). We use the same mechanism to avoid $Delta$-inconsistency, and the underlying aim is the same: change the constructed set frequently enough that our opponent (previously an r.e.~set; here a 4-r.e.~set) cannot keep up with us. As such, we would expect the verification to take largely the same tack. This is the case. 
The verification argument given in #chapRef(2) is detailed#footnote[The less charitable reader may prefer another word.] and it would please no-one to go through the same sort of thing again in its entirety. So, when an argument follows the
// p.42
same lines as the corresponding discussion in #chapRef(2) we will just indicate the essential modifications, if any.

As in #chapRef(2), we will refer to parameters associated with cycle~$nu$ of #stalpha as they are defined at stage~$s$ like so: $u_s(alpha, nu)$, $lambda^1_s(alpha, x(nu))$. Whenever context allows we will drop the strategy name.

=== Layout of the cycle states

We begin again with a description of the possible state-arrangements, and state a Pattern Lemma. We assume we have a certain, fixed #stalpha in mind, and all cycles mentioned are assumed to belong to it. As before, we regard the stages mentioned as being the successive ones at which #stalpha is accessible.

Just as in #chapRef(2), we refer to a special "double state": a cycle in either #state(8) or #state(9) is said to be "in state~8/9".

#lemma[
For any row $row(j)$, at most one cycle $(j, k)$ of the row is in state~8/9.
<lemma3.8>
]
#proof[As #lemmaRef(<lemma2.3>).]

#lemma[
Suppose cycle $chi = (j, k)$ enters #state(5) at stage $s$ due to cycle $chi'$ being in state~8/9. If at stage $t > s$ cycle~$chi'$ leaves state~8/9 for the first time after~$s$, for any reason, $chi$ is no longer in #state(5) at the end of stage~$t$.
<lemma3.9>
]
#proof[
We start by noting that $mu(x(chi)) = tilde(v)_s(chi') = tilde(v)_t(chi')$.

Now, cycle $chi'$ only leaves state~8/9 through acting or being reset. If $chi' < chi$ then the action/resetting of $chi'$ also resets $chi$, by construction. We consider the case $chi < chi'$.

If cycle $chi'$ leaves state~8/9 without being reset it must reach either #state(10) (if it sees a change in $restr(C, tilde(v)_t(chi'))$ while in #state(8)) or #state(11) (if that $C$-change is seen while in #state(9)).
In either case there is a change in $restr(C, mu(x(chi)))$, and cycle~$chi$ will change state, or be reset by the action of a cycle to its left. The case left to consider is that there is a third cycle, $chi''$ with $chi < chi'' < chi'$, which acts at stage~$t$. To reach a contradiction we assume that this action is not accompanied by a change in $restr(C, tilde(v)_t(chi'))$. Without loss of generality we may assume that $t$ is minimal in witnessing the failure of the lemma in this way. Now, as cycle~$chi'$ is not in #state(0) at stage~$s$, cycle~$chi''$ must be in one of the following states at that time: 4, 5, or~10. Cycle~$chi''$ cannot change state between stages~$s$ and~$t$ (except for the transition~#trans(4,5)) as to do so would reset cycle~$chi'$, contradicting the definition of~$t$. We may discard the possibility that $chi''$ is in #state(10) at stage~$s$, as such a cycle can never act again without first being reset. Cycle $chi''$ can't make the transition~#trans(4,5) at stage~$t$, as such a transition doesn't count as an action. The transitions~#trans(4,1) and~#trans(4,2) entail a change in $restr(C, tilde(v)_t(chi''))$. But $tilde(v)_t(chi'') < tilde(v)_t(chi')$ since cycle~$chi'$ starts after $chi''$ reaches #state(2) and $tilde(v)_t(chi'')$ is defined. Thus such a $C$-change is impossible. // p.42 Thus, the only possible transition left is~#trans(5,6). That this is impossible follows from the same argument as was used for the #trans(3,4) transition in #chapRef(2). ] #lemma[ Given $j$, if cycles $chi, chi' in row(j)$ are both in #state(5) at stage~$s$ then $(mu(x(chi)))[s] = (mu(x(chi')))[s]$. <lemma3.10> ] #proof[ As #lemmaRef(<lemma2.5>). ] We are now ready to state the Pattern Lemma for this construction. Let $X = {0, 1, dots, 11}$ and recall that for sets $M, N$ of finite sequences (of unspecified type) we set $setconcat(M, N) = {concat(theta, sigma) | theta in M and sigma in N}$, and $angletup(M) = {angletup(theta) | theta in M}$. 
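Modelling finite sequences as tuples, the two operations just recalled are straightforward; the rendering below is purely illustrative.

```python
# Illustrative model of the sequence-set operations just defined, with
# finite sequences as tuples: setconcat(M, N) concatenates one sequence
# from each set, and angletup(M) forms the length-one sequences.
def setconcat(M, N):
    return {theta + sigma for theta in M for sigma in N}

def angletup(M):
    return {(theta,) for theta in M}

# e.g. part of a "cramped row": any run over {4, 5, 10} followed by an 8
assert setconcat({(4,), (4, 10)}, angletup({8})) == {(4, 8), (4, 10, 8)}
assert angletup({8, 9}) == {(8,), (9,)}
```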
The constructions being very similar, the set of valid patterns to be defined is all-but-isomorphic to that in #chapRef(2). We define the following subsets of $finseq(X)$:
#let commabr = [\,\ ]
$
prelimCrampedRow &= setconcat(finseq({4, 5, 10}), angletup({8})) commabr
finalCrampedRow &= setconcat(finseq({4, 5, 10}), angletup({9})) commabr
crampedRow &= prelimCrampedRow union finalCrampedRow commabr
uncrampedRow &= setconcat(finseq({4, 10}), angletup({1, 2, 3, 6, 7})) commabr
abandonedRow &= angletup({11}) commabr
prelimRow &= prelimCrampedRow union abandonedRow commabr
finalRow &= finalCrampedRow union uncrampedRow\,
$
and a subset of $finseq((finseq(X)))$,
$
validPattern = setconcat(finseq(prelimRow), angletup(finalRow)).
$

As in #chapRef(2) we denote by $pattern(s)(alpha)$ the cycle-state arrangement of #stalpha at stage~$s$. We also refer to the cycle arrangements of individual rows as "patterns".

#lemma[
If #stalpha has at least one cycle not in #state(0) at #stg($s$), $pattern(s)(alpha) in validPattern$.
<lemma3.11>
]
#proof[
The arguments are very similar to those in the corresponding proof in #chapRef(2), and consist of an exhaustion of cases. The same follow-your-nose approach works just fine; nothing is to be gained by repeating it.
]

=== Consistency of the functions $Gamma_j(C)$ and $Delta(C)$

Now we prove the consistency of the constructed functions $Gamma_j(C)$ and $Delta(C)$. The proofs need little beyond the corresponding ones in #chapRef(2). The only change necessary is typically
// p.44
a slightly more involved exhaustion of possibilities, brought about by the fact that each cycle has five corresponding outcomes, rather than the two of the earlier chapter. Again we assume that we have a specific strategy, $alpha$, in mind.

#lemma[
If cycle~$(j,k)$ is in #state(8) at #stg($s$), then $(Delta(C\; j))[s] converge$. The same conclusion may be reached if #theRow($j$) was abandoned at some stage before~$s$.
<lemma3.12>
]
#proof[
As #lemmaRef(<lemma2.7>).
]

#lemma[
If some cycle $(j, k)$ acts at #stg($s$) to define $Delta(C\; j)$ then for each $i < j$, $(Delta(C\; i))[s] converge$.
<lemma3.13>
]
#proof[
As #lemmaRef(<lemma2.8>).
]

Similarly we have

#lemma[
If some cycle $(j, k)$ acts at #stg($s$) to define $Gamma_j(C; k)$ then for each $i < k$, $(Gamma_j(C; i))[s] converge$.
<lemma3.14>
]

The consistency of $Delta(C)$ and $Gamma_j(C)$ is proved just as it was in #chapRef(2).

#lemma[
For all $j in omega$, row~$row(j)$ defines a computation for $Delta(C\; j)$ only when no other such computation is currently defined.
<lemma3.15>
]
#proof[
As #lemmaRef(<lemma2.10>).
]

#lemma[
Cycle $(j, k)$ defines a computation for $Gamma_j(C; k)$ only when no other such computation is currently defined.
<lemma3.16>
]
#proof[
As #lemmaRef(<lemma2.11>).
]

=== Satisfaction of the requirements

As in #chapRef(2) we now prove that all the requirements are satisfied. All that will then remain is to check that $A leqt G$. Again we define the true path, $f$, through the priority tree: $f(n) = xi$ where $concatone((restr(f, n)), xi)$ is the leftmost successor of $restr(f, n)$ accessible infinitely often. We have the same proposition as before.

#proposition[
#show: doc => setupenum(doc, formats: ("1.", "a."))
For all $n in omega$
+ $f(n)$ is defined;
+ $restr(f, (n+1))$ is cancelled only finitely often (note that $restr(f, 0) = emptyset$ is never cancelled);
+ #strat($restr(f, n)$) satisfies requirement $R_n$;
// p.45
+ for all sufficiently large $C$-true stages $t$, $restr(f, (n+1)) subset f_t$.
<prop3.17>
]

So, inductively assume 1, 2, 3, and 4 for $n = eta - 1$ and let $alpha = restr(f, eta)$. Fix a #stg($s_0$) so large that $alpha$ is not cancelled after~$s_0$ and that for every $C$-true stage $t > s_0$, $alpha subset f_t$.
// This notation is mystifying. See github ticket #48
//, $rho(alpha, t) = liminf_s rho(alpha, s)$,
//and $tilde(rho)(alpha, t) = liminf_s tilde(rho)(alpha, s)$.
Recall that we say _$alpha$ acts finitely_ if there is a stage after which no cycle of #stalpha acts, and otherwise we say that _$alpha$ acts infinitely_. #lemma[ If $alpha$ acts infinitely then some specific cycle of $alpha$ acts infinitely often. <lemma3.18> ] #proof[ As #lemmaRef(<lemma2.17>). ] The next result follows as it did in #chapRef(2). #lemma[ Given a #stalpha, if $chi$ is the leftmost cycle of #stalpha to act infinitely often then only finitely often can _any_ cycle to the left of $chi$ act. <lemma3.19> ] #lemma[ Some (leftmost) successor of $alpha$ is accessible infinitely often. <lemma3.20> ] #proof[ As #lemmaRef(<lemma2.19>). ] This establishes part~1 of the Proposition for $n = eta$ and we may assume that there is a value $f(eta) = epsilon in Lambda$. Write $epsilon = (nu_eta, i_eta)$ or $epsilon = -1$ as appropriate, where $i_eta = 1, 2, 3, 4, "or" 5$. It will again be convenient to define what it means for a cycle to be _lacking_ at #stg($s$). We say that #cycle($nu$) of #stalpha is lacking for~$i$ at #stg($s$) if $nu$ is in #state(10) or #state(11), or (a)~$i=1$ and $nu$ is in a state numbered less than 2, (b)~$i=2$ and $nu$ is in a state numbered less than 3, (c)~$i=3$ and $nu$ is in a state numbered less than 6, (d)~$i=4$ and $nu$ is in a state numbered less than 7, or (e)~$i=5$ and $nu$ is in a state numbered less than 9. Then we have the following results, proved as were Lemmas #thmref(<lemma2.20>)-#thmref(<lemma2.23>) from the definition of $nu_eta$. #lemma[ If $nu_eta$ is defined, then it is lacking for $i_eta$ only finitely often. <lemma3.21> ] #lemma[ $restr(f, (eta+1)) = concatone(alpha, epsilon)$ is cancelled only finitely often. <lemma3.22> ] #lemma[ #stalpha satisfies requirement $R_(|alpha|)$. <lemma3.23> ] #lemma[ For all sufficiently large $C$-true stages $t$, $restr(f, (eta+1)) = concatone(alpha, epsilon) subset f_t$. <lemma3.24> ] These results establish parts 2-4 of the Proposition and complete the inductive step. 
#thmref(<prop3.17>)[Proposition] is proved. #qed

Thus all of the requirements are satisfied, and we have constructed r.e. $D geqt C$ and two r.e.[$D$] sets $U^D$ and~$V^D$ such that $D join (udvd)$ is not of 4-r.e. degree. It remains only to show that in fact $D leqt G$. We use the same method as we did in #chapRef(2).
// p.46
For $alpha in T$ we set
$ e^alpha = max ( {j, k | (exists beta in T, i = 1, dots, 5)[concatone(beta, ((j, k), i)) subset alpha]}) $
the largest number which occurs in the path leading to $alpha$ and which may be called upon by a cycle of some strategy on that path to be a witness to a $G$-change. We set
$ s^alpha = min { s | restr(G_s, e^alpha) = restr(G, e^alpha)} $
and recall that $lambda alpha [s^alpha]$ is $G$-recursive.
#lemma[ Suppose that $t > s^(concatone(alpha, nu))$ is a $C$-true stage, and that $alpha$'s cycle $nu$ is in state 4, 5, 8, 10, or 11. Then if #cycle($nu$) does not act at #stg($t$) it will never act subsequently without first being reset. <lemma3.25> ]
#proof[ As #lemmaRef(<lemma2.24>). ]
#lemma[ Suppose that $alpha subset f_s$, $t > max{s, s^alpha}$ is $C$-true, and $s' > t$. Then for $beta subset alpha$, if $beta subset.not f_t$ but $beta subset f_(s')$ then there is a $tau in (s, s']$ such that $beta$ is cancelled at #stg($tau$). <lemma3.26> ]
#proof[ As #lemmaRef(<lemma2.25>). ]
We can now prove that the delayed permitting works.
#lemma[ $A leqt G$. <lemma3.27> ]
#proof[
Let $y in omega$. As the construction always picks levers to be larger than the current stage, if $y$ has not been chosen as a lever by #stg($y$) it never will be and $y in.not A$. Otherwise, suppose that $y$ is chosen at #stg($s_0$) to be a lever for cycle~$chi = (j,k)$ of #stalpha. Note that $alpha subset f_(s_0)$. Assume that $y$ is actually chosen as $lambda^1_(s_0)(x(chi))$. If $k in.not G$ or $k in G_(s_0)$ then #cycle($chi$) will never get the $G$-permission it needs to enumerate $y$ into $A$ and $y in.not A$.
Otherwise let $k in setdiff(G_s, G_(s-1))$ and let $t$ be the first $C$-true stage larger than each of $s_0$, $s$, and $s^(concatone(alpha, chi))$. We claim that $y$ is enumerated into $A$ by #stg($t$) or not at all, so that $A(y) = A_t(y)$. If $alpha subset.not f_t$ then by #lemmaRef(<lemma3.26>) #stalpha will be cancelled before being accessible again, and $y$ will be lost. If some cycle $chi' < chi$ of #stalpha acts at #stg($t$) then $chi$ will be reset and again $y$ will be lost. Otherwise, if $chi$ is in #state(1) or~2 at #stg($t$) then the lever~$y$ has already been discarded since being chosen, and will never get a chance after $t$ to be enumerated. If $chi$ is in #state(3) then, since $G_t(k) = 1$, $y$ will never be enumerated into~$A$. If $chi$ is in #state(6), 7, or~8 then by construction, $y in A_t$. Otherwise we apply #lemmaRef(<lemma3.25>) to see that #cycle($chi$) must act at #stg($t$) if it ever will without first being reset, and lever~$y$ is lost.
// p.47
If $y$ is chosen as $lambda^2_(s_0)(x(chi))$ the argument is similar, with $j$ replacing $k$. ]

== The cases $n > 4$ <section3.4>
The complications which arise as $n$ gets larger are of notation, rather than of approach. When avoiding $n$-r.e. sets we must change our constructed set $n+1$ times, leading to a cycle structure of dimension $floor(halfof(n))$. This leads to an increase in the number of times that we must ask for $G$-permission for the levers corresponding to a given witness~$x$, and in the number of different functionals we construct. We will not attempt to give anything more than the briefest indications of how to adapt the $n=4$ construction to larger values of~$n$.

We will start by calculating how many times our basic module must ask for $G$-permission for a given witness. Our basic approach remains the same. Given an $n$-r.e.
set~$E$ and a witness~$x$, we aim to defeat any agreement of the type $Eq(x, s)$ by changing $n+1$ times the membership of $x$ in $udvd$, thus exhausting $E$'s ability to react. The question is thus: while pushing $x$ in and out of $udvd$, how many times must we actually enumerate a lever into~$A$, thus requiring $G$-permission?

Well, suppose first that $n = 2m + 1$ is odd, so we must make $n+1 = 2m+2$ changes to $(udvd)(x)$. Our method is the same as before: we make the first change by putting $x$ into $U^D$, and subsequent ones by pushing $x$ in and out of $V^D$. Only the "out of $V^D$" action requires $G$-permission:
#[
#set align(center)
#gridx(
columns: (1in, 1in, 1in),
align: (col, row) => { bottom + if col == 1 { left } else { center } },
// [Action on\ $udvd$], [Method], [Permission?],
hlinex(),
[1. #h(1fr) in:], [$x$ into $U^D$], [],
[2. #h(1fr) out:], [$x$ into $V^D$], [],
[3. #h(1fr) in:], [$x$ out of $V^D$], [(yes)],
[4. #h(1fr) out:], [$x$ into $V^D$], [],
[5. #h(1fr) in:], [$x$ out of $V^D$], [(yes)],
[6. #h(1fr) out:], [$x$ into $V^D$], [],
colspanx(2)[$dots.v$], (),
[],
[$2m+1$. #h(1fr) in:], [$x$ out of $V^D$], [(yes)],
[$2m+2$. #h(1fr) out:], [$x$ into $V^D$], []
)
]
There are thus $m+1$ pairs of actions, each (except the first) needing exactly one "layer" of $G$-permission. Thus the number of times that we must ask for $G$-permission is just $m = halfof((n-1))$. In the case that $n = 2m$ is even, the only change to the table above is the removal of the $(2m+2)$nd line, and we still need permission $m$ times. Thus, given any $n$, we need permission $m = floor(halfof(n))$ times for a given witness. (Notice that in the $n=4$ case we seek permission $2 = floor(halfof(4))$ times for each witness.)

Suppose that $n=7$. What needs to be done to adapt the basic $n=4$ module?
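Before answering, the permission count just derived can be sanity-checked with a short sketch (the helper name is ours, not part of the construction; the rule encoded is the one read off the table above):

```python
def permissions_needed(n):
    """Count the G-permissions a single witness x needs when avoiding
    an n-r.e. set E.

    The membership of x in U^D minus V^D changes n+1 times: change 1
    puts x into U^D, even-numbered changes put x into V^D, and the
    odd-numbered changes from the third onward pull x back out of V^D;
    only those last require G-permission.
    """
    return sum(1 for change in range(1, n + 2)
               if change >= 3 and change % 2 == 1)

# Agrees with the bound floor(n/2) from the text; in particular the
# n = 4 construction asks for permission twice per witness.
assert all(permissions_needed(n) == n // 2 for n in range(1, 100))
```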
// p.48
Well, most obviously, the cycle structure will now be an $(omega^3)$-sequence of cycles $chi = (j, k, l)$, to accommodate the 3 layers of permission that we will (potentially) need for each witness. Secondly, in addition to constructing functions $Gamma_j(C)$ and $Delta(C)$ we will need a third tier, $Upsilon_(j,k)$, to handle the extra layer of $G$-permission.

With the extra dimension, we need a more general concept than "row". In general, for the $n$-dimensional structure $omega^n$, we define an $(n-i)$-dimensional _slice_ by specifying the first $i < n$ components:
$ slice(c_1, dots, c_i) = {(c_1, dots, c_i, c_(i+1), dots, c_n) | c_(i+1), dots, c_n in omega}. $

Just as before we had a $Delta$-protecting, "waiting" state, 5, which was used to prevent the over-eager employment of $G$-changes leading to the inconsistent definition of $Delta(C)$, we must now have states which protect both $Gamma_j$ and $Delta$. Before, the trigger for entering #state(5) was the existence of some cycle of slice~#slice($j$) in state~8/9. To allow us some abstraction, call this double state the _endgame for $Delta$_. In the new construction, there will be endgames for $Gamma_j$ and $Delta$. In each case, the endgame consists of the two states immediately following the definition of $Gamma_j(C;k)$ and $Delta(C\; j)$, respectively, in which the functional value just defined is still valid, and remains an important part of our overall approach. While some cycle is in an endgame like this we cannot have cycles to the left acting impetuously, compromising the consistency of $Gamma_j(C)$ and $Delta(C)$. Now action on $G$-permission corresponding to definitions of values for $Upsilon_(j,k)(C)$ must wait until there are no cycles to the right in endgames.
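The slice notation is simply prefix matching on tuples; a minimal illustration (the helper name is ours):

```python
def in_slice(cycle, prefix):
    """A cycle (a tuple in omega^n) lies in the (n - i)-dimensional
    slice determined by the fixed components (c_1, ..., c_i) exactly
    when its first i components agree with them."""
    return cycle[:len(prefix)] == tuple(prefix)

# Rows are the special case i = 1: slice(j) collects the cycles
# whose first component is j.
assert in_slice((3, 0, 5), (3,))
assert in_slice((1, 2, 7), (1, 2))
assert not in_slice((0, 2, 7), (1,))
```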
The subcases for behavior upon seeing a $G$-change in #state(4) will now look something like this:
#show: doc => setupenum(doc, formats: ("(a)",))
+ If some (leftmost) cycle $chi'$ of slice #slice($j$) is currently in an endgame for $Gamma_j$ or for $Delta$ then set the marker $mu^1(x) = tilde(v)_(t_3)(chi')$ and advance to #state(5). This transition does not count as an action.
+ [As it was before.]

Note that we need only one $Upsilon_(j,k)$-related waiting state to protect both $Gamma_j$ and $Delta$ computations. We set $mu^1(x)$ to keep our eye on the leftmost cycle, $chi'$, to our right in a $Gamma_j$- or $Delta$-endgame. If $chi'$ should leave its endgame, the monotonicity of the computation function will ensure that all cycles to _its_ right will leave their respective endgames as well.

Now, $G$-changes corresponding to $Gamma_j$ definitions must also be treated with caution: action upon these changes must still respect $Delta$-endgames. Thus the state immediately after the definition of the value $Gamma_j(C; k)$ will have similar subcases to determine the response to a $G$-change. In this case we need only keep an eye out for $Delta$-endgames. (Note that the extra waiting state that will follow means that a $Gamma_j$-endgame actually consists of the _three_ states immediately following the functional definition. We lied before.)

Apart from that, the same construction will work for $n=7$, and may be adapted for any $n > 4$.
// p.49
Every time an increase in $n$ requires the addition of an extra dimension to the cycle structure (that is, every time $n$ is increased from $2m-1$ to $2m$), we just "bolt" one to the front: add an extra tier of functionals, with a corresponding waiting state to protect all of the existing tiers.

== Further comments
As all of the strategies are self-contained, it does not hurt to combine strategies corresponding to different values of $n$, so long as we associate their enumerations with different $dreInAbove(D)$ sets.
So, those strategies concerning themselves with avoiding 5-r.e. degrees (say) work with the set $C join A join (setdiff(U^(D_5)_5, V^(D_5)_5))$, while those avoiding 13-r.e. degrees work with the separate $C join A join (setdiff(U^(D_(13))_(13), V^(D_(13))_(13)))$. A description of the priority tree then becomes more complicated (as different nodes will have different successor-sequences), but in principle the construction is no different. Indeed, as all of the strategies for all $n$ can be combined, we can actually construct a single $D leqt G$ for which, given $n$, there is a $dreInAbove(D)$ set not of $n$-r.e. degree:
#theorem[ Given r.e. sets $C ltt G$ there is r.e. $D in turinginterval(C, G)$ such that for all $n in omega$ there is a $dreInAbove(D)$ set $F_n$ not of $n$-r.e. degree. <theorem3.28> ]

In fact, there is no need to keep enumerations for different values of $n$ separate: we can construct a single $dreInAbove(D)$ set $F$ which is not $n$-r.e. for any $n$:
#theorem[ Given r.e. sets $C ltt G$ there is r.e. $D in turinginterval(C, G)$ and a $dreInAbove(D)$ set $F$ not of $n$-r.e. degree for any $n in omega$. <theorem3.29> ]

It is also interesting to note that the sets $F_n$ we construct are just barely $dreInAbove(D)$. In the construction, elements are only ever enumerated into $U^D$ once, at least modulo "unwanted" ejections due to $C$-changes. In the absence of these $C$-changes the set $U^D$ would be recursively enumerable. Hence the set(s) $udvd$ are really the difference of an (almost) r.e.~set and a $C$-r.e. one. Then the question is, how much do we really use the full strength of "$dreInAbove(D)$"? Can we get the same final result using just $reInAbove(D)$ in place of $dreInAbove(D)$? Using the present technique, the answer is "no".
The key point in the construction is that we know, ahead of time, how many times we will have to change the membership of a particular witness in the set $udvd$, and hence the number of times we will have to ask for corresponding $G$-permissions. This means that, for a specific basic module, we can specify ahead of time what the cycle structure is going to look like, $omega^(floor(halfof(n)))$, and hence what the possible outcomes will be. All of this is possible because for each witness we have two _anchor-points_, $restr(E_(s_1), phi_(s_1)(x))$ and $restr(E_(s_2), phi_(s_2)(x))$, to which we return over and over. Since we force $restr(E, phi_(s_1)(x))$ to repeatedly flip-flop between these two states, we see that there is a _fixed_ number (called $y$ in the construction) on which $E$ must change each and every time. As we know how many times $E(y)$ can change we can therefore
// p.50
put a bound, _before any cycle of the strategy starts_, on how many times we will have to flip-flop.

Consider what would happen here if we tried to construct $F = C join A join V^D$, an $reInAbove(D)$ set. At our equivalent of #state(1) we would choose a lever $lambda^1(x)$ larger than $tilde(u)$ and enumerate $x$ into $V^D$ with use $lambda^1(x) + 1$. Then, when we see $s_2$, the next stage of agreement, with total use $tilde(v)$, we would (after waiting for permission) kick $x$ out of $V^D$ by pulling the lever~$lambda^1(x)$. However, it is impossible for us to restrain $restr(A, tilde(v))$ from #stg($s_2$) onwards, as we cannot be sure that $lambda^1(x) > tilde(v)$. The very act of returning $V^D$ to its $s_1$ shape may change $restr(A, tilde(v))$. Thus, instead of two anchor-points, we will only have one, $restr(E_(s_1), phi_(s_1)(x))$, to which we can be sure of returning each time.
Thus, while we can say each time that $ restr(E_(s_("odd")), phi_(s_1)(x)) = restr(E_1, phi_(s_1)(x)) neq restr(E_(s_("even")), phi_(s_1)(x)), $ there is no coordination between $E$ at the $s_("even")$ stages,#footnote[Of course, $s_("even")$ should in no way be confused with #[_s_]even or even _seven_.] and we cannot be sure that $E$ changes on the same element each time. Thus, instead of the number of changes in $restr(E, phi_(s_1)(x))$ that we must wait for being bounded in advance by~$n$, we must allow $E$ (potentially) to change $n$ times on each element less than $phi_(s_1)(x)$. Thus, instead of needing to change our constructed set $n+1$ times, we may need to change it $m(x) = m(x, s_1) = (1 + n dot phi_(s_1)(x))$ times for the witness~$x$. This bound is clearly not known before the cycle starts: we have to wait until stage~$s_1$ to find it. This, of course, is where the problems start. Before, we knew that all the witnesses chosen by a given strategy would be content with just $n+1$ changes and could therefore do with an $omega^(floor(n\/2))$ cycle structure. Now, as we choose larger and larger witnesses for the various cycles, the potential number of times that we must seek permission may grow without bound. This fact by itself does not make the construction impossible: we can use $omega^(< omega)$ (ordered lexicographically) to organize our cycles, and we can speak of slices of all finite dimensions. Define the _slice dimension_ of a #cycle($chi$) as the dimension of the smallest slice containing $chi$ and all of its predecessors. Thus the slice dimension of cycle $(1, 1, 1)$ is 3, while that of $(1, 0, 0, 0)$ is 4. Various cycles will now have varying (finite) numbers of internal states (determined by each cycle dynamically as soon as $phi_(s_1)(x)$ is calculated), and the strategy as a whole may have infinitely many different ones. 
There is now a fundamentally different kind of possibility that must be considered in the proof of #lemmaRef(<lemma3.18>) (which is really just the proof of #lemmaRef(<lemma2.17>)):

($infinity$) #h(0.5em) For all $i in omega$ a non-zero number of cycles of slice dimension greater than $i$ act.

The author has been unable to turn a failure of this type into a demonstration that $G leqt C$. In the $dreInAbove(D)$ construction, possibility~(A), say, of #lemmaRef(<lemma3.18>) (actually of #lemmaRef(<lemma2.17>))
// p.51
led to a computation of $G$ from $C$ "along the first component". In general, for any $n$, a failure of the $dreInAbove(D)$ construction leads to a computation of $G$ along one of the components. Outcome ($infinity$) allows no such computation.

The same problems occur, even when we allow $F$ to be $dreInAbove(D)$, if we try to avoid $omega$-r.e. degrees, as again the number of flip-flops depends on the particular witness chosen.

=== The special case $C = emptyset$
The case in which $C = emptyset$ was the first to be proved by the author. It was obtained before the method was developed to ensure the consistency of the $Delta$ functional, as that method is not needed in the special case. The overall construction is in any case vastly simplified. To see why, consider what would happen in the construction if $C = emptyset$. In particular, we never experience a $C$-change. At no time would a #cycle($chi$) need to return to an earlier numbered state due to a computation being destroyed. So long as it is not reset, $chi$ will only ever make progress, or (at worst) stay put. This means that no strategy will act infinitely often. (Otherwise, by Lemmas~#thmref(<lemma3.18>) and~#thmref(<lemma3.19>), some cycle would act infinitely often while being reset only finitely often.) In other words, _each strategy causes only finitely much injury._ Once we have a finite injury argument, we can do away with the entire apparatus of the priority tree.
The finite injury nature of the construction also means that the functionals $Gamma_j$ and $Delta$ do not need to be constructed "on the fly", but can be extracted without too much trouble after the fact, in the verification, under the assumption that the construction has failed. This allows us to completely avoid the problems of $Delta$-inconsistency in the original Cooper, Lempp, and Watson method. Hence there was no need for the special method we used above.

An interesting artifact of the finite injury construction is that witnesses are enumerated out of our set~$F$ only when we want them to be, in forcing the opponent set, $E$, to change. That is, when avoiding an $n$-r.e. set, the value of $(udvd)(x)$, for a given witness, will change at most $n+1$ times: the constructed set will be $(n+1)$-r.e. Hence we have the following.
#theorem[ Given an r.e. set $G notequivt emptyset$, there is r.e. $D leqt G$ such that for all $n in omega$ there is a set $F$ which is simultaneously $dreInAbove(D)$ and $(n+1)$-r.e. but not of $n$-r.e. degree. <theorem3.30> ]

When combining requirements corresponding to different values of $n$ we can tell for the sake of which $n$-value a specific witness $x$ was chosen, and hence the maximum number of times that $(udvd)(x)$ will change. We therefore have the following result, which corresponds to #thmref(<theorem3.29>) just as #thmref(<theorem3.30>) corresponds to #thmref(<theorem3.28>)[Theorem].
#theorem[ Given an r.e. set $G notequivt emptyset$, there is r.e. $D leqt G$ and a set $F$ which is simultaneously $dreInAbove(D)$ and $omega$-r.e., but not of $n$-r.e. degree for any $n in omega$. <theorem3.31> ]
// p.52

=== A related result
In @ALS1998 the following is proved.
#theorem(name: [Arslanov, LaForte, and Slaman])[ Any $omega$-r.e. set which is 2-REA is actually of 2-r.e. degree. <theorem3.32> ]

The question then arises: does the same behavior occur for numbers greater than 2?
The same paper answers the question negatively:
#theorem(name: [Arslanov, LaForte, and Slaman])[ There is a set $F$ which is both 3-REA and $(n+1)$-r.e. but fails to be of $n$-r.e. degree. <theorem3.33> ]

In that paper, $F$ is constructed to be 3-REA by making it recursively enumerable in, and above, a d.r.e. set $D$. (The names for these sets are different in~@ALS1998.) In the present paper we also construct such a set, $F$. Our $F$ is certainly 3-REA, as it is above and d.r.e. in (and hence 2-REA in) an r.e. set $D$. By using the construction corresponding to #thmref(<theorem3.30>)[Theorem] we can take $F$ to be $(n+1)$-r.e., and the whole point of $F$ is that it is not of $n$-r.e. degree.

The proof in @ALS1998 of #theoremRef(<theorem3.33>) is of finite injury, and involves a construction using two anchor-points. Thus an adaptation is possible, involving cycles, which will find the d.r.e. set $D$ below any previously given, non-recursive, r.e. set~$G$. Therefore Theorem~11 in @ALS1998 may be slightly strengthened to read
#theorem[ Given any non-recursive, r.e. set $G$ there is a d.r.e. set $D leqt G$ such that, for every $n geq 3$, there exists a set $F_n$ which is simultaneously $reInAbove(D)$ and $(n+1)$-r.e. but is not of $n$-r.e. degree. <theorem3.34> ]

////////////////////////////////////////
// Chapter IV
// p.53
= For high $C$ the properly $reInAbove(C)$ intervals are weakly dense <chapter4>

== Introduction
In #chapRef(3) we gave a generalization of (a weaker form of) the original Soare and Stob result. In #chapRef(5) we will prove a generalization in another direction:
// TODO: label this 5.2
#theorem[ For any non-recursive r.e. set $C$, there are $reInAbove(C)$ sets $A$ and $B$ such that $A ltt B$ and there is no r.e. set $D in turinginterval(A, B)$. ]

In this chapter we consider the latter result from the point of view of density: can such r.e.-free intervals be found densely in the r.e. degrees?
#conjecture[ For all r.e.
sets $C$, $G$, and $H$ such that $emptyset ltt C leqt G ltt H$ there are $reInAbove(C)$ sets $D ltt F$ such that $turinginterval(D, F) subset turinginterval(G, H)$ and there is no r.e. set $E in turinginterval(D, F)$. <conjecture4.1> ]

This (and even the weaker version in which we allow $D = F$) is false because of
#theorem(name: [Arslanov, Lempp and Shore, @ALS1996])[ There is a recursively enumerable set $C$ with $emptyset ltt C ltt emptyset'$ such that every $reInAbove(C)$ set $A$ with $C leqt A leqt emptyset'$ is of r.e. degree. <theorem4.2> ]

However, we can succeed if $C$ is high:
#theorem[ If $C$ is r.e. and high and $G ltt H$ are r.e. with $C ltt H$ there are $reInAbove(C)$ sets $D ltt F$ such that $turinginterval(D, F) subset turinginterval(G, H)$ and there is no r.e. set $E$ with $E in turinginterval(D, F)$. Furthermore, $D$ and $F$ may be chosen to be d.r.e. <theorem4.3> ]

== The construction
The proof we give is derived from one given in @ALS1996 of the similar statement
// This appears to be Thm 2.1 in the other paper
#theorem[ If $C ltt H$ are r.e. and high (that is, $C' equivt H' equivt emptyset''$), there is a d.r.e. set $E$ which is $reInAbove(C)$ but not of r.e. degree such that $C ltt E ltt H$. <theorem4.4> ]
// p.54
That proof suffers from several flaws. It is based on the original proof of #theoremRef(<theorem2.1>) given in~@CLW1989, and hence has the same problems: injury caused by "weaker" strategies (noted and fixed by LaForte) and $Delta$-inconsistency. It also has a flaw all its own (see @section4.4[Section] below). As well as strengthening #theoremRef(<theorem4.4>) we simplify the argument, fixing at a stroke both the $Delta$-consistency problem and the local one. The method of previous chapters is used for the remaining obstacle.

The simplification which allows us to do away with $Delta$-inconsistency without the need for the "waiting states" of earlier chapters is the reduction of the cycle structure from two dimensions to one.
For each witness we still need two layers of $H$-permission, but we get the second one (almost) for free, from the "high permitting" argument inherent in the construction. The use in~@ALS1996 of a two-dimensional cycle structure is overkill.

To see how we will use the hypothesis that $C$ is high, consider the following. By a result of Robinson~@Robinson we may assume (perhaps by replacing $C$ with a Turing equivalent r.e. set) that we can find a recursive enumeration ${C_s st s in omega}$ such that the _computation function_
$ c_C(x) = (mu s)[restr(C_s, x) = restr(C, x)] $
dominates all (total) recursive functions. That is, if $f$ is a total, recursive function, there is $n in omega$ such that $(forall m > n)[c_C(m) > f(m)]$. (Roughly speaking, our construction will require $C$-permission "late" in the strategy, and the fact that $c_C$ dominates every recursive function means that we get this permission when we need it. Whenever you (recursively) guess how long it takes initial segments of $C$ to converge, you are wrong cofinitely often.)

Fix such an enumeration, and enumerations ${G_s}_(s geq 0)$ and ${H_s}_(s geq 0)$. We will construct an auxiliary r.e. set $B$ and arrange things so that $D = G join A$ and $F = G join A join B$ have the required properties. That $D$ and $F$ are d.r.e. will follow from the fact that the approximations to the set~$A$ will change at most twice on any element (at worst, $x$ will be enumerated into $A$, and later removed forever).

We must satisfy all the requirements of the form
$ R_e: quad A neq Phi_e(E_e) thick or thick E_e neq Psi_e(G join A join B) $
and
$ N_e: quad B neq Theta_e(G join A) $
where ${angletup(E_e, Phi_e, Psi_e)}_(e geq 0)$ enumerates all triples in which $E_e$ is an r.e. set and $Phi_e$ and $Psi_e$ are recursive functionals, and ${Theta_e}_(e geq 0)$ simply enumerates all recursive functionals. We will ensure that $A leqt H$ by a combination of direct permitting and the high permitting used to make $A$ r.e. in~$C$.
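The computation function $c_C$ can be illustrated on a finite, increasing stagewise enumeration (a toy sketch; `snapshots` is our stand-in for the sequence ${C_s}$, with the last entry playing the role of $C$):

```python
def computation_function(snapshots, x):
    """c_C(x): the least stage s at which C_s agrees with C below x.
    Because the enumeration is increasing, agreement below x, once
    reached, persists."""
    final = {n for n in snapshots[-1] if n < x}
    for s, stage_set in enumerate(snapshots):
        if {n for n in stage_set if n < x} == final:
            return s

# An enumeration of {0, 2}: 2 enters at stage 1, 0 enters at stage 3.
stages = [set(), {2}, {2}, {0, 2}]
assert computation_function(stages, 1) == 3  # must wait for 0 to appear
assert computation_function(stages, 3) == 3
assert computation_function(stages, 0) == 0  # nothing below 0 to settle
```

The highness hypothesis enters only through the growth rate of this function: for a suitable enumeration of a high set, $c_C$ eventually majorizes every total recursive function.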
We ensure $B leqt H$ by direct permitting. As in earlier chapters all of the permission is potentially delayed. // p.55 === The Basic Modules ==== The $R_e$ requirements For the requirements of the first type (the "r.e.-avoiding" requirements) the basic module is a simplified version of the one used in @ALS1996[Theorem 2.1]. This in turn is basically the approach used in @CLW1989[Theorem 1], but incorporating high permitting. The strategy used to satisfy $R_e$ consists of a (potentially unbounded) number of _cycles_, each of which tries to satisfy the requirement in a very simplistic way. If each cycle fails, we argue that $H leqt G$, contradicting the hypothesis of the theorem. Suppose $e$ is fixed, and write $angletup(E, Phi, Psi)$ for $angletup(E_e, Phi_e, Psi_e)$. We will describe the strategy for satisfying~$R_e$. It consists of an $omega$-sequence of cycles. Cycle~0 starts first, and each #cycle($k$) can start cycle $k+1$, as well as stopping all cycles $k' > k$. The strategy as a whole threatens to demonstrate that $H leqt G$ by constructing a functional $Gamma(G) = H$. The #cycle($k$) may define the value $Gamma(G\; k)$. The strategy also defines values for an auxiliary (partial) recursive function~$m$, used in the high permitting part of the argument. All cycles begin in #state(0). A cycle is _started_ by letting it pass from #state(0) to another state, depending on its history, as in earlier chapters. Again, a cascade of cycle-startings might occur. A cycle is _reset_ by putting it back into #state(0), returning its restraints to 0, and undefining the values of its parameters, $u$, $x$, and $p$. A cycle is _abandoned_ by returning its restraints to 0 and (permanently) stopping all activity for that cycle. This is done when a cycle has categorically failed to satisfy $R_e$, as in the earlier chapters. A cycle is said to _act_ when it moves from one state to another. Cycle~$k$ proceeds as follows. // reset to default #show: doc => setupenum(doc) 0. 
Until given the go-ahead, just hang out with the other cycles. When told to start, first check whether #cycle($k$) has been abandoned at any time. If so, jump straight to #state(6) and follow the instructions there. Otherwise choose a witness, $x$, larger than any number mentioned in the construction so far (including all currently defined $A$-restraints and the current stage) and larger than $k$.
+ Let $Eq(x, s)$ denote the condition
  $ (A(x) = Phi(E\; x))[s] squad and squad (restr(E, phi(x)) = restr(hat(Psi)(G join A join B), phi(x)))[s] $
  Wait for a stage $s_1$ at which $Eq(x, s_1)$ holds and let $u = (hat(psi) phi(x))[s_1]$, the total use of the $hat(Psi)(G join A join B)$ computations. [Note that if no such $s_1$ exists, then $x$ witnesses the success of our strategy.] If $H_(s_1)(k) = 1$ then we have no hope ever of seeing the $H$-change we need for permission, so go to #state(6). Otherwise restrain $restr(A, u)$ and $restr(B, u)$ from now on, set $Gamma(G\; k) = H_(s_1)(k) (= 0)$ with
  // p.56
  use~$u$, and start cycle $k+1$ to run simultaneously. Advance to #state(2).
+ Wait for a stage $s'$ at which either
  + $restr(G_(s'), u) neq restr(G_(s_1), u)$, or
  + $H_(s')(k) neq H_(s_1)(k)$.

  On reaching $s'$, reset all cycles $k' > k$ of the strategy. Then
  + if $restr(G, u)$ changes first, drop the $A$- and $B$-restraints of #cycle($k$) back to 0 and return to #state(1). (Note that the change in $G$ will automatically undefine the values $Gamma(G\; k')$ for $k' geq k$, allowing these values to be redefined later, keeping $Gamma$ consistent.) While
  + if $H(k)$ changes first, let $p in omega$ be least such that $m(p)$ is not defined yet. Enumerate $x$ into $A$ with $C$-use $p$. This enumeration has just been permitted by the change in $restr(H, x)$ (since, by choice, $x > k$). Proceed to #state(3). Note that we now know that $H(k) = 1$: it will never change again, as $H$ is r.e.
  Thus, if we ever subsequently get a change in $G$ below $u$, we may redefine $Gamma(G\; k) = 1$ with use~0, and be sure that $Gamma(G\; k) = H(k)$. From now on, if we see such a change in $G$, jump straight to #state(6).
+ Set the marker $xi(x) = p$ and wait for a stage $s_2$ at which either
  + $restr(G_(s_2), u) neq restr(G_(s_1), u)$; or
  + $restr(G_(s_2), u) = restr(G_(s_1), u)$ and $Eq(x, s_2)$.

  Note that if such an $s_2$ does not exist, $x$ again witnesses our success. It is entirely possible that while we are waiting for $s_2$, $C$ changes below $xi(x)$, ejecting $x$ from $A$. We want $x$ to remain in $A$ for now, so we "artificially" keep it there by enumerating new axioms $angletup(x, restr(C_t, p))$ into $U$ (where we are constructing $A = U^C$) whenever $restr(C_t, p) neq restr(C_(t-1), p)$. (Note that this is enough to keep $A$ d.r.e. We consider $x$'s ejection from $A$ a transitory phenomenon, not affecting the enumeration of $A$ that our algorithm defines. To decide whether $x in A$ at #stg($s$), check if $x in A$ at the _end_ of the stage.)

  When we reach such a stage, set $m(p) = s_2$. If we have just seen a $G$-change, jump straight to #state(6). Otherwise, proceed to #state(4). Note that in the latter case we have $A_(s_1)(x) = 0$ and $A_(s_2)(x) = 1$ and so we must have $restr(E_(s_1), phi_(s_1)(x)) neq restr(E_(s_2), phi_(s_1)(x))$. This change is irreversible, as $E$ is r.e., and we attempt to exploit this by waiting for $x$ to be enumerated out of $A$ by a $C$-change. Start #cycle($k+1$) to run simultaneously. (Note that, although in #state(2) we reset all cycles $k' > k$, this resetting cannot destroy the computations $Gamma(G \; k+1), Gamma(G \; k+2), dots$ that these cycles may have defined:
  // p.57
  there has not been a convenient $G$-change. Thus the restarted $k+1$ (and its cronies $k+2, k+3, dots$) may produce values for $Gamma(G)$ at points where it is already defined.
  We will argue that such multiple definitions only persist when #cycle($k$) gets permanently stuck in #state(4) and this will only happen to finitely many cycles.)
+ Wait for a #stg($s''$) at which $restr(C, xi(x))$ changes. Let this change in $C$ remove $x$ from $A$, reset all cycles $k' > k$ and go to #state(5). Now if $restr(G, u) = restr(G_(s_1), u)$, $x$ finally witnesses the success of our strategy, since
  $ restr(hat(Psi)(G join A join B), phi_(s_1)(x)) &= (restr(hat(Psi)(G join A join B), phi(x)))[s_1] \
  &= restr(E_(s_1), phi_(s_1)(x)) \
  &neq restr(E, phi_(s_1)(x)) $
  Note that we are potentially stuck with the disagreement $Gamma(G\; k) neq H(k)$ (if we don't see the desired change in $restr(C, xi(x))$, or a change in $G$ below $u$ which would allow us to redefine our (incorrect) value $Gamma(G\; k)$.) The fact that the computation function $c_C$ is dominant means that we will only have to put up with this finitely often (see #lemmaRef(<lemma4.6>)), and we will still be able to threaten to compute $H$ recursively from $G$.
+ Wait for $restr(G, u) neq restr(G_(s_1), u)$. If this never happens, the strategy succeeds by the argument in #state(4), above. If it does happen, reset all cycles $k' > k$ and advance to #state(6) to redefine $Gamma(G\; k)$ as a value we now know to be correct, and abandon the cycle. (Note that the change in $G$ automatically undefines any values $Gamma(G\; k+1), Gamma(G\; k+2), dots$ which were defined while #cycle($k$) was waiting in #state(4). Thus, so long as we don't get permanently stuck in~4, the extraneous $Gamma$ values that are defined while we wait for the $G$-change don't persist. Of course, leaving~4 but failing to reach 6 means we get stuck in 5, which leads to success.)
+ Redefine $Gamma(G\; k) = H(k) = 1$ with use 0, abandon #cycle($k$) and start #cycle($k+1$).

==== The $N_e$ requirements
The requirements $N_e$ are simpler than those of the first kind, and we implement a standard diagonalization approach to satisfy them.
This is slightly complicated by the fact that we must ensure that $B leqt H$, but we can just use a stripped-down version of the Cooper, Lempp and Watson method. Again, suppose $e$ is fixed, and write $Theta$ for $Theta_e$. The strategy for $N_e$ has the same cycle structure as that for $R_e$. Cycle~0 starts first. We again threaten to show $H leqt G$, this time by constructing a functional $Delta(G) = H$. We don't need any auxiliary function like~$m$. _Starting_, _resetting_, _abandoning_, and _acting_ all have the same definitions as before. Cycle~$k$ proceeds as follows. // p.58 #show: doc => setupenum(doc, prefix: "N") 0. Until given the go-ahead, do nothing. When given the signal to proceed, check if #cycle($k$) has been abandoned in the past. If so, jump straight to #nstate(4). Otherwise choose a witness, $y$, larger than any number mentioned so far in the construction (including all currently defined $B$-restraints and the current stage), and larger than~$k$. + Wait for a stage $s_1$ at which $ (B(y) = hat(Theta)(G join A; y))[s_1] $ and let $v = hat(theta)_(s_1)(y)$, the use of the $hat(Theta)(G join A)$ computation. Restrain $restr(A, v)$ from now on. Set $Delta(G\; k) = H_(s_1)(k)$ with use $delta(k) = v$ and start cycle $k+1$ to run simultaneously. Advance to #nstate(2). + Wait for a stage $s'$ at which + $restr(G_(s'), v) neq restr(G_(s_1), v)$, or + $H_(s')(k) neq H_(s_1)(k)$. On reaching $s'$, reset all cycles $k' > k$. Then + if $restr(G, v)$ changes first, return the $A$-restraint of this cycle to 0 and return to #nstate(1). (As before, the $G$-change undefines $Delta(G\; k')$ for $k' > k$.) While + if $H(k)$ changes first then enumerate $y$ into $B$. This has just been permitted by the change in $restr(H, y)$. Proceed to #nstate(3). + Wait for a stage $s_2$ at which $ (B(y) = hat(Theta)(G join A; y))[s_2] $ If there is no such stage, $y$ witnesses the success of our strategy.
If such an $s_2$ exists, note that we have $ (Theta(G join A\; y))[s_2] = B_(s_2)(y) = 1 neq 0 = B_(s_1)(y) = (Theta(G join A\; y))[s_1] $ By the restraint on $A$, $restr(A_(s_1), v) = restr(A_(s_2), v)$, so we must have $restr(G_(s_1), v) neq restr(G_(s_2), v)$. We reset all cycles $k' > k$ and advance to #nstate(4). Note that the $G$-change has undefined all computations for $Delta(k')$, $k' > k$, except those computations with 0 use (which are correct anyway). + Set $Delta(G\; k) = 1$ (with use 0), a value we now know to be correct. Start cycle $k+1$ and abandon #cycle($k$). Note that $Delta(G)$ is defined consistently at all times. // p.59 === Combining the modules We use much the same tree argument as in previous chapters to combine our strategies. As each cycle (in either basic strategy) imposes only one "wave" of restraint, we need only one outcome corresponding to each cycle. Let $Lambda = {-1} union omega$ with the natural ordering and let $T = finseq(Lambda)$ be the tree of strategies with the standard partial ordering $<_L$. If $alpha in T$ is of length $|alpha| = 2e$ then $alpha$ will work towards satisfying requirement $R_e$, while if $|alpha| = 2e+1$, $alpha$ will work towards satisfying $N_e$. We make no distinction between a node and the strategy it is employing. A strategy/node is _cancelled_ by resetting all of its cycles and discarding any functional it may have (partially) defined. Any parameter, once defined, keeps that value until it is redefined or undefined. The construction proceeds as follows. Stage 0: #h(1em) All parameters are undefined or $emptyset$, as appropriate, all functionals are completely undefined, and all cycles are in #state(0) or #nstate(0), as appropriate. Stage $s+1$: #h(1em) We define, in substages $t < s$, a finite path, $f_(s+1)$, through $T$, of length $s$. We think of $f_(s+1)$ as our approximation to the "true" path defined at stage $s+1$. 
So, suppose we have reached substage~$t$, and $alpha = restr(f_(s+1), t)$ is already defined. If no cycle of #stalpha has been started, we start $alpha$'s #cycle(0) (which may lead to a cascade of other cycles being started), and set $f_(s+1)(t) = -1$. Otherwise, we have two cases. - #case(1) Some (least) #cycle($k$) of $alpha$ is able (or forced, by a $G$-injury) to act. We allow #cycle($k$) to act. Let $l$ be the rightmost cycle of #stalpha now imposing restraint (if there is any such cycle) and put $f_(s+1)(t) = l$. If there is no such #cycle($l$) then put $f_(s+1)(t) = -1$. Cancel all strategies $beta$ with $concatone(alpha, f_(s+1)(t)) <_L beta$. If $l$ exists and the action of #cycle($l$) involved enumerating a number into or out of $A$ or into $B$ then also cancel all strategies $beta supset concatone(alpha, f_(s+1)(t))$. - #case(2) No cycle of #stalpha is able, or forced, to act. We do nothing, and there are no strategies to cancel. Define $f_(s+1)(t)$ just as above. If $t + 1 < s$, we advance to substage $t+1$. The strategies $alpha subset f_(s+1)$ are said to be _accessible_ at stage $s+1$. == Verification We do not state a Pattern Lemma or proofs of the consistency of the functions $Delta(G)$. These are immediate from the construction. (Note that we do not claim that the functionals $Gamma(G)$ are consistent, as we may indeed have multiple definitions. We prove below that this happens only finitely often.) As usual in an infinite injury construction, the key object in the verification is the _true path_ through the tree~$T$, defined by $f(n) = k$, where $concatone((restr(f, n)), k)$ is the leftmost successor of $restr(f, n)$ accessible infinitely often. // p.60 When needed, we will refer to parameters associated with a particular cycle, $k$, of a particular strategy, $alpha$, like so: $x(alpha, k)$, $u(alpha, k)$, etc. We will drop the strategy name whenever possible. To show that all the requirements are satisfied we prove the following, now familiar, result.
#proposition[ #show: doc => setupenum(doc, formats: ("1.", "a.")) For all $n in omega$ + $f(n)$ is defined; + $restr(f, (n+1))$ is cancelled only finitely often (note that $restr(f, 0) = emptyset$ is never cancelled); + #strat($restr(f, n)$) satisfies the requirement towards which it works; and + for all sufficiently large $G$-true stages $t$, $restr(f, (n+1)) subset f_t$. <proposition4.5> ] So, inductively assume the proposition for $n = eta - 1$. Let $alpha = restr(f, eta)$ and let $s_0$ be a stage so large that $alpha$ is not cancelled after~$s_0$ and that for every $G$-true stage $t > s_0$, $alpha subset f_t$. We say that #stalpha _acts finitely_ if there is a #stg($s$) after which no cycle of $alpha$ acts. Otherwise, we say that $alpha$ _acts infinitely_. #lemma[ If $alpha$ acts infinitely then some single cycle of $alpha$ acts infinitely often. <lemma4.6> ] #proof[ Suppose otherwise, and suppose that $|alpha| = 2e$, so that #stalpha works towards satisfying requirement~$R_e$. Each #cycle($k$) must end up getting stuck in a state such that #cycle($k+1$) is not prevented from acting. Thus each $k$ either (a) gets stuck in #state(2), (b) gets stuck in #state(4), or (c) is abandoned. Suppose first that (b) happens to only finitely many cycles. Let $s > s_0$ be a stage so large that all the cycles which eventually get stuck in #state(4) have already entered that state for the last time by #stg($s$), never to leave. Consider the finite set $ cal(G) = {x st Gamma(G\; x) "is defined at " stg(s)}. $ If $x in cal(G)$ it is possible that $Gamma(G\; x)$ is (or will end up being) defined more than once: if #cycle($k$) gets permanently stuck in~4, cycles $k+1, k+2, dots$ may "write-over" values for $Gamma(G\; k+1), dots$ defined earlier. However, if $g = max(cal(G))$ and $macron(k) > g$, cycle $macron(k)$ does not get stuck in~4. Indeed, if it did it would have done so by #stg($s$), and then $x(alpha, macron(k)) in cal(G)$.
But by construction $macron(k) < x(alpha, macron(k))$, which contradicts the definition of $macron(k)$. Thus either $macron(k)$ gets stuck in #state(2), or successfully passes through~4 and reaches~6. Either way, #cycle($macron(k)$) successfully computes a value $Gamma(G\; macron(k))$ which agrees with $H(macron(k))$. Thus $ H(k) = cases( H(k) & "if" k leq g\,, Gamma(G\; k) quad & "otherwise"\,, ) $ // p.61 contradicting the assumption that $G ltt H$. Hence we need only consider the case in which infinitely many cycles get stuck in #state(4). We show that this leads to a contradiction. In this case, each of these cycles will have defined a value for the function~$m$ (this is done in #state(3)). As each cycle chooses its~$p$ to be the least without an $m$-value, the construction ensures that the function~$m$ will be _total_ recursive, and hence dominated by $c_C$. For each #cycle($k$) that gets stuck in~4, let $x_k$ be the last witness it ever chooses#footnote[There must be a last, as $k$ only gets to choose a new one after it is reset.], let $p_k$ be the final value for which #cycle($k$) defines $m(p_k)$, and let $s_(2,k)$ be the final stage at which #cycle($k$) passes from #state(3) to #state(4) before getting stuck there. Note that $xi(x_k) = p_k$ and that $m(p_k) = s_(2,k)$. Now, since $m$ is dominated by $c_C$, there is a $macron(p)$ such that $(forall p geq macron(p))[m(p) < c_C(p)]$ and so, by the definition of $c_C$, $ (forall p geq macron(p))[restr(C_(m(p)), p) neq restr(C, p)] $ and $ (forall p geq macron(p))[restr(C_(m(p)), xi(x)) neq restr(C, xi(x))] $ whenever $xi(x) geq p$. Now let $k_0$ be minimal such that $p_(k_0) geq macron(p)$ and #cycle($k_0$) gets stuck in~4. Then $m(p_(k_0)) = s_(2,k_0)$ and $ restr(C_(s_(2,k_0)), xi(x_(k_0))) neq restr(C, xi(x_(k_0))), $ so #cycle($k_0$) does in fact see a $C$-change after #stg($s_(2,k_0)$). Let $q$ be the first $G$-true stage after $q'$, the first $C$-true stage after $s_(2,k_0)$.
As $q'$ involves a $C$-change on the smallest element for which a change is still to take place, it must involve a change in $C$ below $xi(x_(k_0))$. By assumption, $alpha$ will be accessible after $q'$ no later than #stg($q$), and so #cycle($k_0$) will be released from #state(4), contradicting our assumption about #cycle($k_0$). If $|alpha| = 2e + 1$, so that $alpha$ works towards $N_e$, the proof is much simpler. Now every cycle must end up getting stuck in #nstate(2) or being permanently abandoned. In either case, #cycle($k$) computes a value $Delta(G\; k)$ which agrees with $H(k)$, and we demonstrate $H leqt G$, a contradiction. The lemma is proved. ] The rest of the verification follows just as in #chapRef(2). The arguments are somewhat simpler, as we have only a one-dimensional cycle structure to worry about. #lemma[ Some (leftmost) successor of $alpha$ is accessible infinitely often. <lemma4.7> ] #proof[ As #lemmaRef(<lemma2.19>). ] // p.62 This establishes part 1 of the Proposition, and we assume we have a value $k_eta$ for $f(eta)$. #lemma[ $restr(f, (eta+1)) = concatone(alpha, k_eta)$ is cancelled only finitely often. <lemma4.8> ] #proof[ As #lemmaRef(<lemma2.21>). ] This establishes part 2 of the Proposition for $n = eta$. #lemma[ #stalpha satisfies its corresponding requirement. <lemma4.9> ] #proof[ As #lemmaRef(<lemma2.22>). ] This establishes part 3 of the Proposition for $n = eta$. #lemma[ For sufficiently large $G$-true stages $t$, $restr(f, (eta+1)) = concatone(alpha, k_eta) subset f_t$. <lemma4.10> ] #proof[ As #lemmaRef(<lemma2.23>). ] This establishes part 4 of #thmref(<proposition4.5>) for $n = eta$, and the Proposition is proved. #qed Thus all of the requirements are satisfied, and $turinginterval(D, F)$ forms a proper interval with $reInAbove(C)$ endpoints, free of r.e.~degrees. It remains to show that the permitting used in the construction sufficed to ensure that $D ltt H$ and $F ltt H$.
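As an aside, the reduction threatened in the proof of #lemmaRef(<lemma4.6>), namely finitely much hard-coded information about $H$ combined with the values $Gamma(G\; k)$, can be caricatured in a few lines of Python. This is purely illustrative and is not part of the construction: `table` and `gamma` are hypothetical stand-ins for the finitely many settled values of $H$ and for the functional $Gamma(G)$, respectively.

```python
# Illustrative sketch only: the shape of the reduction H <= G threatened in
# Lemma 4.6. `table` plays the role of the finitely many values H(k), k <= g,
# known directly; `gamma` is a hypothetical stand-in for k |-> Gamma(G; k),
# assumed to agree with H beyond g. Nothing here computes either object.

def threatened_reduction(table, gamma):
    """Return a total function: the finite table below g, gamma above it."""
    g = max(table)  # the largest argument settled by the finite table

    def h(k):
        if k <= g:
            return table[k]  # finitely much information, hard-coded
        return gamma(k)      # for k > g, use the value Gamma(G; k)

    return h

# A toy instance: three hard-coded values, then an arbitrary total gamma.
h = threatened_reduction({0: 1, 1: 0, 2: 1}, gamma=lambda k: k % 2)
```

Of course, in the construction the values $Gamma(G\; k)$ are only known to be correct beyond $g$ when case (b) occurs finitely often; the sketch shows only the shape of the resulting contradiction with $G ltt H$.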
We use the same method as in chapters @chapter2 and @chapter3. For $alpha in T$ let $ e^alpha = max { n st (exists beta in T)[concatone(beta, n) subset alpha] }, $ the largest number which occurs as an outcome in the path leading to $alpha$. We also define $ s^alpha = (mu s)[restr(H_s, e^alpha) = restr(H, e^alpha)]. $ Our technical lemma is the same as before. #lemma[ Suppose that $alpha subset f_s$, $t > max{s, s^alpha}$ is a $C$-true stage, $t' geq t$ is a $G$-true stage, and $s' > t'$. Then for $beta subset alpha$, if $beta subset.not f_t$ and $beta subset f_(s')$ then there is a stage $tau$ such that $s < tau leq s'$ and $beta$ is cancelled at #stg($tau$). <lemma4.11> ] #proof[ As #lemmaRef(<lemma2.25>). We use the assumption that $s'$ is greater than the first $G$-true stage after the first $C$-true stage after both $s$ and $s^alpha$ when we argue that it is "now or never" for some action out of #state(4). (In this state the cycle is waiting for a $C$-change, rather than a $G$-change.) ] Now permitting follows much as it did before. // p.63 #lemma[ $A join B leqt H$. <lemma4.12> ] #proof[ The argument proceeds in almost the same way as #lemmaRef(<lemma2.26>). Note that, in line with the statement of #lemmaRef(<lemma4.11>), we must work with the first $G$-true stage after the first $C$-true stage, etc. This stage may be computed with the use of both a $G$-oracle and a $C$-oracle. As, by assumption, $C join G leqt H$, we can still answer all the necessary questions with an $H$-oracle. ] == The flaw in the proof of #thmref(<theorem4.4>) <section4.4> As well as simplifying it, the proof of #theoremRef(<theorem4.3>) given above appears to correct an error in the proof of #theoremRef(<theorem4.4>) given in @ALS1996. That proof adopts the two levels of $H$-permission that are used in @CLW1989 and #chapRef(2) of this thesis, and the two-dimensional cycle structure.
But the second layer of $H$-permission is redundant, as the $C$-permitting actions of #state(4) give $H$-permission by default, since $C leqt H$ by assumption. In @ALS1996 #state(3) defines a second-tier functional, which we will here call $Upsilon$. There is also an extra state (which we will call #state(3.5)) between our states 3 and~4, corresponding to #chapRef(2)'s #state(5). As in the earlier chapter, @ALS1996's construction returns from #state(3.5) to #state(3) upon certain $C$-changes. As well as being redundant, this seems to break the $C$-permission argument, or rather the argument that shows that $C$-permission does not interfere with our attempts to threaten $H leqt C$. We must be able to show that we don't get stuck infinitely often in #state(4). To do this, in the original proof of~@ALS1996, we must show that (apart from finitely many cases) we get a $C$-change after the _last_ time that we pass out of #state(3). (Since we can have several $restr(C, u_2)$ changes, we may jump from 3.5 back to 3 several times.) But this is now a problem: we know that $c_C$ dominates the function $m$, but $m$ no longer records the _last_ time we find a stage $s_2$ in 3, but merely the first. We can't redefine $m(p)$ each time we return to~3, as $m$ must be recursive. We can't use larger and larger values for $p$ each time, as the argument requires that $xi(x) geq p$. We can't even redefine $xi(x)$ each time we get a $C$-change, as we would then not be able to $H$-recursively compute the final value for $xi(x)$, needed for the proof of #lemmaRef(<lemma4.12>). //////////////////////////////////////// // Chapter V // p.64 = A Theorem of Soare and Stob in an Interval <chapter5> == Introduction In @CholakHinman Cholak and Hinman give a proof of a result of Soare and Stob. #theorem(name: [Soare and Stob, @SoareStob1982])[ For any non-recursive r.e. set $C$, there exists an $reInAbove(C)$ set which is not of r.e. degree.
<theorem5.1> ] In this chapter we will prove the following generalization: #theorem[ For any non-recursive r.e. set $C$, there are $reInAbove(C)$ sets $D ltt F$ such that there is no r.e.~set $E in turinginterval(D, F)$. <theorem5.2> ] The proof is based closely on that of Cholak and Hinman, but we recast it in the cycle method of earlier chapters. == The construction As shown in Corollary 4.3 of @SoareStob1982 we cannot find the sets $D$ and $F$ "effectively" from the set~$C$. That is, given (an index for) $C$ we cannot recursively compute indices for the $C$-recursive enumerations of $D$ and~$F$. We actually construct two pairs of $reInAbove(C)$ sets, $(D_0, F_0)$ and $(D_1, F_1)$, and prove that one pair satisfies the requirements of the theorem, without knowing which one. We will satisfy all requirements $ R_e: quad & or.big_(i=0,1) [A_i neq Phi_(e,i)(E_(e,i)) or E_(e,i) neq Psi_(e,i)(C join A_i join B_i)],\ P_(2e): quad & B_0 neq Theta_e(C join A_0),\ P_(2e+1): quad & B_1 neq Theta_e(C join A_1). $ Here ${angletup(Phi_(e,0), Phi_(e,1), Psi_(e,0), Psi_(e,1), E_(e,0), E_(e,1))}_(e geq 0)$ enumerates all sextuples of four recursive functionals and two r.e. sets, and ${Theta_e}_(e geq 0)$ simply enumerates the recursive functionals. The // p.65 $R$-requirements ensure that at least one of the intervals $turinginterval(D_i, F_i) = turinginterval(C join A_i, C join A_i join B_i)$ is free of r.e. sets, while the $P$-requirements make sure that both of these intervals are proper. === The basic module for $R_e$ Fix $e geq 0$ and write $angletup(Phi_0, Phi_1, Psi_0, Psi_1, E_0, E_1)$ for $angletup(Phi_(e,0), Phi_(e,1), Psi_(e,0), Psi_(e,1), E_(e,0), E_(e,1))$. The basic module again consists of cycles, this time in an $omega$-sequence, each trying simplistically to satisfy requirement~$R_e$. // As noted below (@section5.2.3[section]) // we do not need to observe path restraint as we did in earlier constructions. Cycle~0 starts first.
Cycle~$k$ may start cycle $k+1$ and stop all cycles $k' > k$. In addition, if $k > 0$, cycle $k$ may instruct cycle $k-1$ to perform an enumeration. This is detailed below. Even-numbered cycles will enumerate witnesses into $A_0$, while the odd-numbered will enumerate into $A_1$. To this end it will be useful to use the notation $pi(l)$ for the parity of $l$. All cycles begin in #state(0). A cycle is _started_ by letting it pass from #state(0) to #state(1), choosing a witness on the way. A cycle is _reset_ by putting it back into #state(0), returning its restraint to 0, and undefining the values of its parameters $x$, $u$, and $v$. Unlike earlier constructions, a cycle is never permanently abandoned. Where necessary we will indicate the value of a parameter corresponding to #cycle($l$) like so: $x(l)$, $u(l)$, $v(l)$. Cycle $k$ proceeds as follows. #show: doc => setupenum(doc) 0. Until told to start, do nothing. When started, choose a new witness~$x$ larger than any number mentioned in the construction so far (including all currently imposed $A$-restraints). Advance to #state(1). + Denote by $Eq(x, s)$ the condition $ and.big_(i=0,1) [(restr(A_i, x + 1))[s] = (restr(Phi_i(E_i), x+1))[s] and #h(1fr) \ (restr(E_i, hat(phi)(x)))[s] = (restr(hat(Psi)_i(C join A_i join B_i), hat(phi)(x)))[s]], $ where $phi_s(x) = max{phi_(0,s)(x), phi_(1,s)(x)}$. (Note that $Eq(x, s)$ implies $Eq(y, s)$ for all $y < x$.) Wait for a stage $s_1$ at which $Eq(x, s_1)$ holds. Set $u = phi_(s_1)(x)$ and put $ v = max{(hat(psi)_0(phi(x)))[s_1], (hat(psi)_1(phi(x)))[s_1]}. $ If $k > 0$ then instruct cycle $k-1$ to enumerate its witness $x(k-1)$ into $A_(pi(k-1))$ with use $v(k)$. Restrain $restr((A_i join B_i), v(k))$ for $i = 0, 1$, start cycle $k+1$ to run simultaneously, and advance to #state(2). [If there is no such stage then the requirement is satisfied.]
// p.66 + Wait for a #stg($s_2$) at which + $restr(C_(s_2), v) neq restr(C_(s_1), v)$, or + #cycle($k$) is instructed by #cycle($k+1$) to enumerate $x$. If we have case (i), reset all cycles $l > k$, forget the current values of $u$ and~$v$, drop this cycle's restraint back to~0, return to #state(1), and (if $k > 0$) return cycle $k-1$ to #state(2). If we have case (ii), enumerate $x$ into $A_(pi(k), s_2+1)$ with use $v(k+1)$ and advance to #state(3). (Note that we do _not_ reset any cycles to the right.) + Wait for a #stg($s_3$) at which $restr(C_(s_3), v) neq restr(C_(s_1), v)$. If this happens, reset all cycles to the right, forget the values of $u$ and~$v$, drop this cycle's restraint back to~0, return to #state(1), and (if $k > 0$) return cycle $k-1$ to #state(2). The $C$-change ejects $x(k-1)$ from $A_(pi(k-1))$. (Note further that since $v < x(k+1) leq v(k+1)$, $x$ is ejected from $A_(pi(k))$ by this change in $C$.) We note (without proof) that the valid patterns corresponding to this module are exactly $setconcat(finseq({3}), {angletup(2, 1)})$. === The basic module for the $P$-requirements Consider the requirement $P_(2e)$. ($P_(2e+1)$ is the same, with $A_0$ and $B_0$ replaced with $A_1$ and $B_1$.) Write $Theta$ for~$Theta_e$. The strategy for satisfying $P_(2e)$ has no cycle structure, but has 3 internal states. #show: doc => setupenum(doc, prefix: "P") 0. Wait for the strategy to be started. When it is, choose a new witness, $y$, larger than any number mentioned in the construction so far (in particular, larger than any currently imposed $B$-restraint). Advance to #pstate(1). + Wait for a #stg($t_1$) such that $ (hat(Theta)(C join A_0; y))[t_1] = 0. $ Put $w = hat(theta)_(t_1)(y)$, restrain $restr(A_0, w)$, and enumerate $y$ into $B_(0, t_1 +1)$ with $C$-use~$w$. Advance to #pstate(2). + Wait for a #stg($t_2$) at which $restr(C_(t_2), w) neq restr(C_(t_1), w)$.
If this happens, forget the value of~$w$, drop this cycle's restraint back to~0, and return to #pstate(1). (Note that $y$ has been ejected from $B_0$ by the $C$-change.) // p.67 === Combining the modules <section5.2.3> As usual, we combine the basic modules by means of a strategy tree, $T$. Let $Lambda = {-1} union omega$, and define the strategy tree $T = {f in finseq(Lambda) st n "odd" arrow.r.double f(n) in {0, 1} }$, with the standard partial ordering $<_L$. If $alpha in T$ has even length $|alpha| = 2e$ then $alpha$ aims to satisfy requirement~$R_e$, while if $|alpha| = 2e + 1$ it works towards satisfying~$P_e$. (Hence the definition of the priority tree: strategies for satisfying $P_e$ will have just two outcomes, 0 and~1.) As before we make no distinction between a node in the tree and (the instance of) the strategy. An $R$-strategy is cancelled by resetting all of its cycles. A $P$-strategy is cancelled merely by discarding its witness, reducing its restraint to zero, and putting it into #pstate(0). The construction proceeds as follows. Stage 0: #h(1em) All strategies start in #state(0) or #pstate(0), as appropriate, with entirely undefined functionals and witnesses. Stage $s+1$: #h(1em) We define, in substages $t < s$, a finite path $f_(s+1)$, through the tree, of length~$s$. So, suppose $alpha = restr(f_(s+1), t) in T$ is defined. We have two cases to consider, depending on the parity of~$t$. - #case(1) $t$ is even, so #stalpha is trying to satisfy an $R$-requirement. If no cycle of #stalpha has been started since $alpha$ was last cancelled, start $alpha$'s #cycle(0). Otherwise see if there is a leftmost #cycle($k$) able to make one of the state transitions #trans(1, 2), #trans(2, 1), or #trans(3, 1). If so, let this cycle make this transition, resetting cycles to the right if indicated. In the case of the #trans(1,2) transition, if $k > 0$ we also let #cycle($k-1$) perform the required enumeration and move from #state(2) to #state(3).
If one of the other transitions takes place, cancel all strategies $beta$ with $concatone(alpha, k) subset beta$ or $concatone(alpha, k-1) <_L beta$. If also $k > 0$, we return #cycle($k-1$) to #state(2). If there is no such #cycle($k$), there is nothing to do at this substage. In any case, let $l$ be the rightmost cycle of $alpha$ in a state other than~0. (This cycle must be in #state(1), and if $l > 0$ then cycle $l - 1$ is in #state(2) and imposing restraint.) Let $f_(s+1)(t) = l - 1$. - #case(2) $t$ is odd, so #stalpha is trying to satisfy a $P$-requirement. If #stalpha is in #pstate(0) then let it advance to #pstate(1), choosing a witness on the way. Otherwise, if $alpha$ is able to make a state transition #trans("P1", "P2") or #trans("P2", "P1") let it do so. If the latter transition was made, we cancel all strategies $beta supset concatone(alpha, 1)$. In any case, let $f_(s+1)(t) = 0$ if $alpha$ is now in #pstate(1), and $f_(s+1)(t) = 1$ otherwise (that is, if $alpha$ is in #pstate(2)). If $t + 1 < s$, advance to substage $t+1$. If $alpha subset f_(s+1)$ then $alpha$ is _accessible_ at stage $s+1$. // p.68 == Verification In what follows we will denote the values at #stg($s$) of parameters associated with #cycle($k$) of #stalpha like so: $u_s(alpha, k)$, $x_s(alpha, k)$. For parameters associated with $P$-requirement strategies we naturally omit any reference to the cycle. Wherever context makes it possible, we omit as many of the stage, the strategy, and the cycle as we can. Our verification follows Cholak and Hinman, @CholakHinman. #lemma[ #show: doc => setupenum(doc, formats: ("(i)",)) For $alpha, beta in T$ with $|alpha| = 2e$ and $|beta| = 2e' + 1$ we have: + $x_s(alpha, k) in A_(pi(k),s)$ iff #cycle($k$) of #stalpha is in #state(3) at #stg($s$). Similarly, $y_s(beta) in B_(pi(e'),s)$ iff #strat($beta$) is in #pstate(2) at #stg($s$).
+ For stages $t < s$, if $x_t(alpha, k)$ is defined, and either $x_t(alpha, k) neq x_s(alpha, k)$, or $x_s(alpha, k)$ is undefined, then for all $s' geq s$, $x_t(alpha, k) in.not A_(pi(k),s')$. Similarly for any witness $y_t(beta)$ and the set $B_(pi(e'))$. <lemma5.3> ] #proof[ We start with (i). Certainly $x_s(alpha, k)$ is enumerated into $A_(pi(k))$ exactly when $alpha$'s #cycle($k$) enters #state(3). We claim that whenever this cycle leaves #state(3), for whatever reason, $x_s(alpha, k)$ is ejected from $A_(pi(k))$. There are three ways that this cycle could leave #state(3). #show: doc => setupenum(doc, formats: ("(a)",)) + Cycle $k$ sees a #stg($s_3$) such that $restr(C_(s_3), v) neq restr(C_(s_1), v)$. Well, the $C$-use of the enumeration that put $x_s(alpha, k)$ into $A_(pi(k))$ is $v(k+1) geq x(k+1) > v$ so this $C$-change indeed ejects $x_s(alpha, k)$ from $A_(pi(k))$. + Some cycle $l < k$ of #stalpha resets $k$ due to a change in $restr(C, v(l))$. But $v(l) < x_s(alpha, k) leq v(k) < v(k+1)$ as before, so again $x_s(alpha, k)$ is ejected from $A_(pi(k))$. + Strategy $alpha$ is cancelled by some strategy $gamma subset.neq alpha$ or $gamma <_L alpha$ seeing a $C$-change below $v(gamma, l)$ for some~$l$ (or below $w(gamma)$, as appropriate). We consider the case where $|gamma|$ is even, so that the $C$-change is below $v(gamma, l)$. The $|gamma|$ odd case is the same. If $gamma subset.neq alpha$ then, by construction, $x_s(alpha, k)$ is chosen to be larger than $v(gamma, l)$, as $alpha$ becomes accessible only after #cycle($l$) of #strat($gamma$) imposes restraint and defines $v(gamma, l)$. But now, as above, $v(alpha, k+1) > v(gamma, l)$ and $x_s(alpha, k)$ is again ejected. If $gamma <_L alpha$ then let $delta$ be the longest common initial segment of $alpha$ and~$gamma$. At the stage $t < s$ when $v(gamma, l)$ is defined, either the action of #strat($delta$) cancelled #stalpha, or no cycle of $alpha$ had been started since #stalpha was last cancelled.
Thus $x_s(alpha, k)$ is chosen after $v(gamma, l)$ is defined, so $x_s(alpha, k) > v(gamma, l)$, and as before we have the result we want. // p.69 A similar argument establishes the result for the $y$-witness. For (ii), notice that the witness $x(alpha, k)$ changes or becomes undefined only when #stalpha's #cycle($k$) is reset. When this occurs, #cycle($k$) leaves #state(3) (if it was there) so by part~(i), $x(alpha, k)$ certainly leaves $A_(pi(k))$. As it will never again be chosen as a witness, it never reenters $A_(pi(k))$. ] The rest of the verification revolves around the true path, $f$, through the priority tree, defined as usual by $f(n) = nu$, where $concatone((restr(f, n)), nu)$ is the leftmost successor of $restr(f, n)$ accessible infinitely often. We have the following (now familiar) result. #proposition[ #show: doc => setupenum(doc, formats: ("1.",)) For all $n in omega$ + $f(n)$ is defined; + $restr(f, (n+1))$ is cancelled only finitely often (note that $restr(f, 0) = emptyset$ is never cancelled); + #strat($restr(f, n)$) satisfies the requirement towards which it works; and + for all sufficiently large $C$-true stages $t$, $restr(f, (n+1)) subset f_t$. <proposition5.4> ] We proceed by induction. Assume 1, 2, 3, and 4 for $n = eta - 1$ and let $alpha = restr(f, eta)$. Fix a stage $s_0$ so large that $alpha$ is not cancelled after $s_0$, and for every $C$-true stage $t > s_0$, $alpha subset f_t$. For the moment, assume $eta$ is even. For $l geq -1$ define $ s(l) = (mu s geq s_0)(forall t > s)["cycle" l+1 "of" alpha "is in state 1, 2, or 3 at stage" t], $ if such exists. Clearly, if $s(l)$ is defined then so is $s(l')$, and $s(l') < s(l)$, for all $l' < l$. When it exists, $s(l)$ has all sorts of nice properties.
#lemma[ #show: doc => setupenum(doc, formats: ("(i)",)) Suppose $s(l)$ exists. Then + for each $k leq l$, $x_t(alpha, k)$, $u_t(alpha, k)$, and $v_t(alpha, k)$ have the same values for all $t > s(l)$ --- these fixed values will be referred to as $macron(x)(alpha, k)$, $macron(u)(alpha, k)$, and $macron(v)(alpha, k)$; + $Eq(macron(x)(alpha, l), s(l))$ holds; + $restr(C_(s(l)), macron(v)(alpha, l)) = restr(C, macron(v)(alpha, l))$; + for all $s geq s(l)$ and $0 leq k leq l - 2$, $macron(x)(alpha, k) in A_(pi(k),s)$; and + if $l > 0$ then $macron(x)(alpha, l - 1) in.not A_(pi(l-1), s(l))$ but for all $s > s(l)$, $macron(x)(alpha, l - 1) in A_(pi(l-1), s)$. <lemma5.5> ] #proof[ For (i), note that since cycle $l+1$ is in a state numbered 1 or higher for $s > s(l)$, all cycles $k leq l$ are in #state(2) or~3 for these stages. Thus the values of the parameters $x$, $u$, and $v$ associated with these cycles will have no chance to change value. // p.70 (ii) follows from the minimality of $s(l)$. For (iii), if $restr(C, macron(v)(alpha, l))$ changes after #stg($s(l)$) then $alpha$'s #cycle($l$) would return to #state(1), resetting cycle $l+1$ and contradicting the definition of~$s(l)$. For all $s geq s(l)$ and all $k leq l-2$, $alpha$'s #cycle($k$) is in #state(3) at #stg($s$), so by #lemmaRef(<lemma5.3>);(i), $macron(x)(alpha, k) in A_(pi(k), s)$. This establishes~(iv). (v) follows as did (iv) once we note that, by the minimality of $s(l)$, $alpha$'s #cycle($l-1$) is still in #state(2) at #stg($s(l)$), only reaching 3 at #stg($s(l) + 1$). ] #lemma[ For all $l$ and $s > s_0$, if #cycle($l+1$) of #stalpha is in a state numbered 1 or higher at #stg($s$) and $restr(C_s, v_s(alpha, l)) = restr(C, v_s(alpha, l))$ then $s(l)$ exists and is less than~$s$. <lemma5.6> ] #proof[ Cycle $l+1$ can only be reset (and hence sent to a state other than 1, 2, or 3) by a change in~$C$.
The argument in #lemmaRef(<lemma5.3>);(i) shows that this change must be below $v_s(alpha, l)$, a contradiction. ] The following result is vitally important, if tedious to prove. #lemma[ If $s(l+1)$ exists then $ restr(A_(pi(l), s(l+1)), macron(v)(alpha, l)) = restr(A_(pi(l), s(l)), macron(v)(alpha, l)),\ restr(B_(pi(l), s(l+1)), macron(v)(alpha, l)) = restr(B_(pi(l), s(l)), macron(v)(alpha, l)). $ <lemma5.7> ] #proof[ We give the argument for $A_(pi(l))$. The $B_(pi(l))$ case is essentially the same. It suffices to show that for all $beta in T$ with $|beta|$ even, and all $t leq s(l+1)$ and $k in omega$ with $x_t(beta, k) < macron(v)(alpha, l)$, that $ A_(pi(l),s(l+1))(x_t(beta, k)) = A_(pi(l),s(l))(x_t(beta, k)). $ If $concatone(alpha, l+1) <_L beta$ then strategy $beta$ is cancelled at #stg($s(l)$), and we use #lemmaRef(<lemma5.3>);(ii). Otherwise we actually have to do some work. We have several cases to consider. - $concatone(alpha, j) subset beta$ for some $j leq l+1$ Note that we can actually assume that $j < l$, since if $j = l$ or $j = l+1$ then by construction #strat($beta$) automatically respects the restraint $macron(v)(alpha, l)$. If $t leq s(l)$ and $x_t(beta, k) in A_(pi(l), s(l))$ then by #lemmaRef(<lemma5.3>);(i) #strat($beta$)'s #cycle($k$) is in #state(3) at #stg($s(l)$). For $x_t(beta, k)$ to leave $A_(pi(l))$ between stages $s(l)$ and $s(l+1)$, $beta$'s #cycle($k$) must leave #state(3), due to a $C$-change, necessarily below $macron(v)(alpha, l)$. But this implies that $alpha$'s #cycle($l+1$) is reset, contradicting the definition of~$s(l)$. // p.71 If $x_t(beta, k) in.not A_(pi(l), s(l))$ then $x_t(beta, k)$ can enter $A_(pi(l))$ only by $beta$'s #cycle($k$) entering #state(3). But this requires $beta$ being accessible, and hence the accessibility of $concatone(alpha, j)$. This can happen only if $alpha$'s #cycle($l+1$) is in #state(0), which does not happen after #stg($s(l)$).
Note that no new witness $x_t(beta, k)$ is chosen for $s(l) < t leq s(l+1)$, as this would again imply $alpha$'s #cycle($l+1$) being in #state(0).
- $beta <_L alpha$
Any change in the value of $A_(pi(l))(x_t(beta, k))$ between stages $s(l)$ and $s(l+1)$ implies the accessibility or cancellation of #strat($beta$), and hence the cancellation of #stalpha, which by assumption does not happen.
- $beta subset.neq alpha$
If $t leq s(l)$ and $x_t(beta, k) in A_(pi(l), s(l))$, then $concatone(beta, k) <_L alpha$ or $concatone(beta, k) subset alpha$. (Otherwise #cycle($k$) of $beta$ is reset at #stg($s(l)$), as $alpha$ is inaccessible at $s(l)$.) Note that $beta$'s #cycle($k$) is in #state(3) at #stg($s(l)$). If this changes at any time before #stg($s(l+1)$) then $alpha$ is cancelled, by construction, which would contradict the definition of~$s(l)$.
Suppose $t leq s(l)$ and $x_t(beta, k) in.not A_(pi(l), s(l))$, but that $x_t(beta, k)$ enters $A_(pi(l))$ at some stage $t'$ with $s(l) < t' leq s(l+1)$. Then either $concatone(beta, k) subset alpha$ or $alpha <_L concatone(beta, k)$, as otherwise the entry of $x_t(beta, k)$ entails the cancellation of~$alpha$. But $alpha$ is accessible at #stg($s(l+1)$), so either #cycle($k$) of $beta$ is reset by $s(l+1)$ (if $alpha <_L concatone(beta, k)$), or $beta$'s #cycle($k$) is in #state(2) at #stg($s(l+1)$). In either case, $x_t(beta, k) in.not A_(pi(l), s(l+1))$, by #lemmaRef(<lemma5.3>);(i).
If $s(l) < t leq s(l+1)$ (so $x_t(beta, k) in.not A_(pi(l), s(l))$) then $alpha <_L concatone(beta, k)$, for otherwise (since $concatone(beta, k-1)$ is accessible at~$t$) $alpha$ would have been cancelled between stages $s(l)$ and~$t$. Thus, since $alpha$ is accessible at $s(l+1)$, $beta$'s #cycle($k$) is in #state(0) at #stg($s(l+1)$), and $x_t(beta, k) in.not A_(pi(l), s(l+1))$.
- $beta = alpha$
If $j > l+1$ we just use #lemmaRef(<lemma5.3>);(ii). No $x_t(alpha, l+1)$ is ever in $A_(pi(l))$, as $pi(l+1) neq pi(l)$.
If $j leq l$, $t < s(l)$, and $x_t(alpha, j) neq macron(x)(alpha, j)$ then again we use #lemmaRef(<lemma5.3>);(ii).
So we are just left to consider the witnesses $macron(x)(alpha, j)$ for $j leq l$ and, by parity, we need only consider those with $j equiv l thin (mod 2)$. Well, for such $j leq l - 2$, $macron(x)(alpha, j) in A_(pi(l), s)$ for all $s geq s(l)$ by #lemmaRef(<lemma5.5>);(iv).
Now $alpha$'s #cycle($l$) is in #state(1) at #stg($s(l)$) so, by #lemmaRef(<lemma5.3>);(i), $macron(x)(alpha, l) in.not A_(pi(l), s(l))$. By #lemmaRef(<lemma5.5>);(v), $macron(x)(alpha, l) in.not A_(pi(l),s(l+1))$.
]
This result is used to prove the crucial
// p.72
#lemma[
#show: doc => setupenum(doc, formats: ("(i)",))
+ If $s(l+1)$ exists then $restr(E_(pi(l), s(l+1)), macron(u)(alpha, l)) = restr(E_(pi(l), s(l)), macron(u)(alpha, l))$.
+ If $s(l+2)$ exists then there is a $t > s(l)$ such that $restr(E_(pi(l), t), macron(u)(alpha, l)) neq restr(E_(pi(l), s(l)), macron(u)(alpha, l))$, and if $t(l)$ is the least such~$t$, then $s(l+1) < t(l)$.
<lemma5.8>
]
#proof[
For (i), if $s(l+1)$ exists then both $Eq(macron(x)(alpha, l), s(l+1))$ and $Eq(macron(x)(alpha, l), s(l))$ hold,
// note: Eq(*,*) hold in previous line by Lemma 5.5(ii)
so
$
restr(E_(pi(l),s(l+1)), macron(u)(alpha, l)) &= (restr(hat(Psi)_(pi(l))(C join A_(pi(l)) join B_(pi(l))), macron(u)(alpha, l)))[s(l+1)] \
&= (restr(hat(Psi)_(pi(l))(C join A_(pi(l)) join B_(pi(l))), macron(u)(alpha, l)))[s(l)] \
&= restr(E_(pi(l),s(l)), macron(u)(alpha, l)).
$
For (ii), if $s(l+2)$ exists, then since both $Eq(macron(x)(alpha, l), s(l+2))$ and $Eq(macron(x)(alpha, l), s(l))$ hold, if equality held for $s = s(l+2)$ we would have
$
0 &= A_(pi(l),s(l))(macron(x)(alpha,l)) \
&= (hat(Phi)_(pi(l))(E_(pi(l)); macron(x)(alpha,l)))[s(l)] \
&= (hat(Phi)_(pi(l))(E_(pi(l)); macron(x)(alpha,l)))[s(l+2)] \
&= A_(pi(l),s(l+2))(macron(x)(alpha,l)) \
&= 1.
$
By part (i) we must have $s(l+1) < t(l)$.
]
#lemma[
If $alpha$ acts infinitely (that is, if infinitely often some cycle of #stalpha changes state) then some (leftmost) cycle of $alpha$ must change state infinitely often.
<lemma5.9>
]
#proof[
Suppose not. Then each cycle must eventually get stuck in #state(3), never to leave. Thus $s(l)$ exists for all~$l$, and also each~$t(l)$. But $s(-1) = 0$ and $s(l+1) = (mu s < t(l))[Eq(macron(x)(alpha, l+1), s) sand restr(C_s, v_s(alpha, l+1)) = restr(C_(t(l)), v_s(alpha, l+1))]$. Moreover, by Lemmas~#thmref(<lemma5.8>);(ii) and~#thmref(<lemma5.5>);(i) both $t(l)$ and $macron(x)(alpha, l)$ are recursively computable from $s(l)$, so the function $lambda l[s(l)]$ is recursive. But $restr(C_(s(l)), macron(v)(alpha, l)) = restr(C, macron(v)(alpha, l))$ by #lemmaRef(<lemma5.5>);(iii), and $macron(v)(alpha, l) geq macron(x)(alpha, l) > l$ so $C(l) = C_(s(l))(l)$, and $C$ is a recursive set, which contradicts the assumption of the theorem.
]
From now on we consider both $eta$ even and $eta$ odd.
#lemma[
Some (leftmost) successor of $alpha$ is accessible infinitely often.
<lemma5.10>
]
#proof[
If $eta$ is even and $alpha$ acts only finitely, then after some stage $s geq s_0$ no cycle of $alpha$ ever changes state again. If $k$ is the rightmost cycle of $alpha$ in a state other than 0 at #stg($s+1$)
// p.73
then $concatone(alpha, k-1)$ will be accessible whenever $alpha$ is after #stg($s$). But by the inductive hypothesis $alpha$ is accessible at every $C$-true stage after $s_0$, of which there are infinitely many.
If $eta$ is even and $alpha$ acts infinitely, then by #lemmaRef(<lemma5.9>) some leftmost #cycle($k$) changes state infinitely often. Strategy~$alpha$ is not cancelled after #stg($s_0$), so it must be that either (a)~#cycle($k$) eventually switches infinitely often between states 2 and~3, and is never in another state; or (b)~$k = 0$ and #cycle($k$) returns infinitely often to #state(1).
In case~(a), $concatone(alpha, k)$ is accessible infinitely often, while in case~(b), $concatone(alpha, -1)$ is. Note that in case~(a) #cycle($k+1$) is reset only finitely often.
If $eta$ is odd, then let
$ h = cases(
0\, quad & "if" stalpha "is in" pstate(1) "infinitely often,",
1\, quad & "otherwise (that is, if" alpha "is in" pstate(2) "cofinitely often)".
) $
Then $concatone(alpha, h)$ is accessible infinitely often.
]
Thus we have established part 1 of the Proposition for $n = eta$, and that we have a value $epsilon$ for $f(n)$.
#lemma[
$restr(f, (eta+1)) = concatone(alpha, epsilon)$ is cancelled only finitely often.
<lemma5.11>
]
#proof[
By assumption, after #stg($s_0$) #stalpha is not cancelled.
If $eta$ is even, then by construction $concatone(alpha, epsilon)$ is cancelled after #stg($s_0$) only if $epsilon geq 0$, and even then only when #cycle($epsilon$) of #stalpha is reset or returns to #state(1). The argument in the proof of #lemmaRef(<lemma5.10>) shows that this happens only finitely often.
If $eta$ is odd, then if $epsilon = 0$, #strat($concatone(alpha, epsilon)$) is never cancelled after #stg($s_0$). If $epsilon = 1$ then there is a stage $s > s_0$ after which #stalpha is never in a state other than #pstate(2), and $concatone(alpha, 1)$ is not cancelled after #stg($s$).
]
This establishes part 2 of the Proposition.
#lemma[
Strategy $alpha$ satisfies the requirement to which it works.
<lemma5.12>
]
#proof[
First consider $eta = 2e$. If $alpha$ acts only finitely, then $alpha$'s #cycle($epsilon + 1$) must get permanently stuck in #state(1). In particular, it never reaches its instance of $blankEq$. Thus either $A_i neq Phi_(e,i)(E_(e,i))$ or $E_(e,i) neq Psi_(e,i)(C join A_i join B_i)$ (for some $i = 0,1$) and the requirement is satisfied.
If $alpha$ acts infinitely, let $s > s_0$ be so large that #cycle($epsilon + 1$) of #stalpha is not reset after #stg($s$). Thus #cycle($epsilon + 1$) works with the same witness, $x$, from now on.
As this cycle returns infinitely often to #state(1) we must have that one of the functions $Phi_(e,i)(E_(e,i))$, $Psi_(e,i)(C join A_i join B_i)$ (for some $i = 0, 1$) is partial. (The argument is the same as in the proof of #lemmaRef(<lemma2.22>).)
// p.74
Now consider $eta = 2e+1$. Without loss of generality we assume that $e = 2e'$ is even, so that #stalpha works with $A_0$ and~$B_0$. Since $alpha$ is not cancelled after #stg($s_0$), the strategy works with the same witness, $y$, forever after~$s_0$.
If $epsilon = 0$ then $alpha$ is infinitely often in #pstate(1). Thus either $Theta_(e')(C join A_0; y)$ converges to something other than~0 (if only finitely often does #stalpha advance to #pstate(2)); or it diverges. But if $alpha$ is in #pstate(1) infinitely often, then $y in.not B_0$, so either way $Theta_(e')(C join A_0; y) neq B_0(y)$.
If $epsilon = 1$ then there is a stage $s > s_0$ after which #stalpha remains in #pstate(2). Then $Theta_(e')(C join A_0; y) = 0 neq 1 = B_0(y)$.
]
This establishes part 3 of the Proposition. Only one part now remains.
#lemma[
For all sufficiently large $C$-true stages~$t$, $concatone(alpha, epsilon) subset f_t$.
<lemma5.13>
]
#proof[
First suppose that $eta$ is even. Let $s > s_0$ be a stage so large that #cycle($epsilon + 1$) is not reset after #stg($s$) and is in a state other than~0 at #stg($s$).
Suppose $t > s$ is a $C$-true stage such that #cycle($epsilon + 1$) finished #stg($t$) in a state numbered greater than~1. Then #cycle($epsilon + 1$) will never again return to #state(1), as the only way it can do so is through a change in $restr(C, v)$. But the combination of the use of the hat-trick functional $hat(Psi)$ in the definition of $blankEq$, and the fact that $t$ is a $C$-true stage, means that this cannot happen after #stg($t$). But this is a contradiction, as by construction #cycle($epsilon + 1$) is infinitely often in #state(1).
If $eta$ is odd then a similar argument shows that (with #stg($s$) defined as above) #stalpha ends every $C$-true #stg($t$) after $s$ in #state($epsilon$). ] This concludes the proof of the Proposition, and of #theoremRef(<theorem5.2>). // Chapter 6 // p.75 = A generalization of a result of Coles et al <chapter6> == Introduction In previous chapters we have typically been given a starting r.e. set $X$ and have constructed an $reInAbove(X)$ set~$Y$ by specifying an algorithm to do so. In each case we have (essentially explicitly) specified a pseudojump operator~$V$ such that $Y = pseudojump(X, V)$. In #chapRef(3) we also constructed our base set~$X$; we only had to ensure $X in turinginterval(C, G)$. There are four different ways we may be asked to go about building $V^X$: #[ #set align(center) #tablex( columns: (1.3in,) * 3, rows: (auto, 3em), align: horizon + center, $X$, $V$, [Example], [we construct], [we construct], [#theoremRef(<theorem3.7>)], [given], [we construct], [Theorems #thmref(<theorem2.2>), #thmref(<theorem4.3>)], [we construct], [given], [Hmmmm #sym.dots], [given], [given], [Boring:\ $V^X$ is fixed] ) ] In this chapter we consider the situation in which we are given a pseudojump operator~$V$. Jockusch and Shore, in @JockuschShore1983[Thm 3.1], show the following. #theorem[ Given a pseudojump operator $V$ there is a non-recursive r.e. set $A$ such that $pseudojump(A, V) equivt K$. <theorem6.1> ] This is generalized in @CDJF to obtain #theorem[ Given a pseudojump operator $V$ such that $V^X notleqt X$ for all r.e. sets $X$, there exist incomparable r.e. sets $A$ and $B$ such that $pseudojump(A, V) equivt pseudojump(B, V) equivt K$. <theorem6.2> ] A stronger conclusion is possible from a weaker assumption, with a simpler proof. // p.76 #theorem[ Given a pseudojump operator $V$ such that $V^X notleqt X$ for all recursively enumerable sets $X equivt K$, there exist pairwise incomparable r.e. 
sets $D_0, D_1, D_2, dots$ such that for each $i$, $pseudojump(D_i, V) equivt K$.
<theorem6.3>
]
== The construction
As pointed out in @CDJF, #theoremRef(<theorem6.2>) may be proved as follows. By Corollary~4.2 in @JockuschShore1983 we have
#theorem[
For any low r.e. set $L$ and any pseudojump operator~$V$, there is an r.e. set $A geqt L$ such that $pseudojump(A, V) equivt K$.
<theorem6.4>
]
Now, using the Sacks splitting theorem we can split $K$ into two low r.e. sets $L_1$ and~$L_2$. Taking a pseudojump~$V$ which acts non-trivially on all r.e. sets and applying #theoremRef(<theorem6.4>) to $L_1$ and $L_2$ in turn we obtain r.e. sets $A_1$ and $A_2$ such that $A_1 join A_2 equivt K$ and $pseudojump(A_i, V) equivt K$. We have that $A_1$ and $A_2$ are incomparable, because if, say, $A_1 leqt A_2$ then $A_2 equivt K$ and $V$ would not act non-trivially upon it.
The direct proof in @CDJF is a complex one and is at times difficult to follow. The proof given here uses a more naïve approach. We simply combine proofs of #theoremRef(<theorem6.4>) and of (an extremely weak version of) the Sacks splitting theorem. These proofs mesh quite well and the combination generalizes easily to produce infinitely many incomparable sets. Therefore, the present author didn't so much discover the proof as assemble it from bits that were lying around.
We will construct for each $i in omega$ an r.e. set $D_i = A_i join B_i join C_i$. The sets~$A_i$ will receive the elements of $K$ along the lines of the splitting theorem in such a way that if $0 leq i < j$ we have $A_i union A_j = K$ so $K equivt A_i join A_j$ and $K equivt D_i join D_j$. The sets $C_i$ will receive trace-markers which will attempt to encode $K$ into each $D_i$ separately, using the method of @JockuschShore1983. The sets $B_i$ receive enumerations needed for the recovery of these encodings.
We represent elements $(x, y)$ of $omega times omega$ using the standard pairing function $pair(x, y)$.
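As a concrete aside (not part of the construction itself), the monotonicity property of the pairing function is easy to check mechanically. The thesis does not fix a particular coding, so the sketch below assumes the standard Cantor pairing $pair(x, y) = (x + y)(x + y + 1) \/ 2 + y$; the function `pair` here is an illustrative assumption, and any pairing strictly increasing in its second coordinate would do.

```python
def pair(x, y):
    # Assumed coding: the standard Cantor pairing function.
    # Any pairing strictly increasing in its second coordinate
    # works equally well for the construction.
    return (x + y) * (x + y + 1) // 2 + y

# For a fixed first coordinate i, x < y implies pair(i, x) < pair(i, y),
# which is exactly the property the construction relies on.
assert all(pair(i, x) < pair(i, y)
           for i in range(20)
           for x in range(20)
           for y in range(x + 1, 20))
```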
Note that for all $i in omega$ we have $x < y implies pair(i, x) < pair(i,y)$.
We aim to satisfy the following requirements, for all $i, x in omega$.
$
N_(i,x): quad & (exists^infinity s)[x in V^(D_i)[s]] implies x in V^(D_i),\
P_x: quad & x in K iff x "is missing from at most one of" A_0, A_1, A_2, dots,\
R_(i,x): quad & x in K iff (exists y leq h(i,x))[y in column(omega, x) sand y in C_i],
$
where $h(i,x)$ is a function satisfying $lambda x[h(i,x)] leqt pseudojump(D_i, V)$.
We assume that we have an enumeration ${K_s}_(s geq 0)$ such that $(forall s)[thin |setdiff(K_(s+1), K_s)| leq 1]$. Following~@CDJF we also assume that the pseudojump $V$ has the property that for all r.e.
// p.77
sets $X$ and all $x$, $s$, and $t > s$:
$ [x in V^X[s] sand restr(X_t, r_X(x, s)) neq restr(X_s, r_X(x, s))] implies (exists u)[s < u leq t sand x in.not V^X[u]] $
where $r_X(x, s)$ is the $X$-use of the axiom witnessing $x in V^X[s]$. Essentially, we identify $V$ with its hat-trick counterpart,~$hat(V)$.
The construction progresses as follows.
#let stage-hdr-local(name) = [Stage #name: #h(1em)]
#stage-hdr-local(0)
For $i in omega$, $A_i = B_i = C_i = emptyset$. Also, for all $i, s, x in omega$ put (as boundary conditions) $h(i, x, -1) = l(i, x, -1) = 0$ and $h(i, -1, s) = -1$.
#let shalf = $s + half$ // See #88
#stage-hdr-local($s+1$)
Each stage has two "phases". The first aims to satisfy the requirements $P_x$ and the second to satisfy the $R_(i,x)$. We will call the point between the two phases stage $shalf$.
If $setdiff(K_(s+1), K_s) = emptyset$ then there is nothing to do. Otherwise let $setdiff(K_(s+1), K_s) = {k}$ and proceed as described.
#phase("I")
For all $i, x in omega$ define
$
r(i, x, s) = cases(
"the" D_i"-use of" x in V^(D_i)[s] quad & "if" x in V^(D_i)[s]\,,
0 & "otherwise"
)
$
Now put $rho(i, x, s) = max{r(i, y, s) st y leq x}$ and $rho^-(i, x, s) = max{r(i, y, s) st y < x}$.
Define $ l(i,x,s) = (mu y) [ y in column(omega, x) sand y in.not B_(i,s) sand y geq l(i, x, s-1) sand y > rho^-(i, x, s)]. $ Let $pair(i_0, x_0)$ be the least pair $pair(i, x)$ such that $k leq r(i, x, s)$. We now mimic the proof of the splitting theorem and "protect" the pair $pair(i_0, x_0)$ by enumerating $k$ "everywhere else". For each $j neq i_0$ do the following: #show: doc => setupenum(doc) + Enumerate $k$ into $A_(j, shalf)$, + Let $z$ be least such that $k leq r(j, z, s)$ and enumerate $l(j, z, s)$ into $B_(j, shalf)$. If there is no such $z$, do nothing here.#footnote[ Strictly speaking, we are being a little loose. After all, it is not $k$ that gets enumerated into $D_j$, but its encoding, $2k$, and we are only concerned with things if _this_ enumeration injures some restraint $r(j, z, s)$. However, if $2k < r(j, z, s)$ then certainly $k < r(j, z, s)$, and it does not seem worth the trouble to keep track of the difference between $k in A_i$ and what that means to $D_j$. ] (This is the purpose of the enumerations into $B_j^([z])$: they witness the fact that $N_(j,z)$ was injured by an enumeration forced on us to protect a higher-priority pair. We do this only for the least pair $pair(j, z)$ so affected, because if $pair(j, z') > pair(j, z)$ is also affected, this can be detected ($pseudojump(D_i, V)$)-recursively _via_ the implied change in $h(j, z, s)$. See #lemmaRef(<lemma6.7>) below.) If there is no such $pair(i_0, x_0)$, then just enumerate $k$ into $A_(i,shalf)$ for every $i in omega$. In this case there is no enumeration into any $B_i$. // p.78 #phase("II") For every $i$ such that $A_i$ just received an enumeration, we recompute $r(i, x, s)$, and hence $rho(i, x, s)$ based on $D_(i,shalf)$. (Even in the case where $k$ doesn't thereby injure anything we might get a new element in $setdiff(V^(D_i)[shalf], V^(D_i)[s])$, so we may as well recompute.) 
Now, for all $i, x in omega$, (even~$i_0, x_0$) put $ h(i, x, s) = (mu y)[ y in column(omega,x) sand y > h(i, x-1, s) sand y geq h(i, x, s-1) sand y > rho(i, x, s)] $ and enumerate $h(i, k, s)$ into $C_(i, s + 1)$. This ends the description of the construction. == Verification We have a sequence of lemmas which together demonstrate that all of the requirements are met. #lemma[ If $i neq j$ then $K = A_i union A_j$. Thus each requirement $P_x$ is satisfied. <lemma6.5> ] #proof[ Only elements appearing in $K$ at some time are ever enumerated into any $A_k$, and if (say) $y in K$ and $y in.not A_i$ then by construction, $y$ was enumerated into $A_j$. ] Now for each pair $pair(i, x)$ define the injury set of $N_(i,x)$ as $ I_(i,x) = { y st (exists s)[y in setdiff(D_(i,s+1), D_(i,s)) sand y leq r(i, x, s)]}. $ #lemma[ Each $I_(i,x)$ is finite, and each $r(i, x) =^"dfn" lim_s r(i, x, s)$ exists. Thus each requirement $N_(i,x)$ is satisfied. <lemma6.6> ] #proof[ We proceed by induction. Fix $i$ and $x$, and suppose that the lemma holds for all $pair(j, y) < pair(i, x)$. Let $s$ be so large that for all $pair(j, y) < pair(i, x)$ - $I_(j,y) subset D_(j,s)$ (so that $N_(j,y)$ will never subsequently be injured), and - $(forall s' geq s)[r(j, y, s') = r(j, y)]$. Let $r = max {r(j,y) | pair(j,y) < pair(i,x)}$ and let $t geq s$ be so large that $restr(K_t, r) = restr(K, r)$ and $restr(K_t, x) = restr(K, x)$. This means that no pair $pair(j, y) < pair(i, x)$ will ever need protection again. Thus, for any $t' > t$, if $y in setdiff(K_(t'+1), K_(t'))$ and $y leq r(i, x, t')$ then $pair(i, x)$ is the least such pair threatened with injury and $y$ will be enumerated into all of the $A_j$'s _except_ $A_i$. So nothing enumerated into $A_i$ after #stg($t$) will ever injure $N_(i,x)$ and thereby enter $I_(i,x)$. 
As numbers of the form $l(i, z, t')$ with $z leq x$ will only be enumerated into $B_i$ when a pair $pair(j, y) < pair(i, x)$ needs protection, no such number will ever be enumerated again. Also, since $restr(K_t, x) = restr(K, x)$, no number of the form $h(i, z, macron(t))$ with $z < x$ will enter $C_i$ after #stg($t$).
By construction,
$ (forall t' > t)(forall z geq x)[h(i, z, t') > r(i,z,t')] $
// p.79
and
$ (forall t' > t)(forall z > x)[l(i, z, t') > r(i, z, t')] $
so none of these will ever injure $N_(i,x)$.
We have shown that no element enters $I_(i,x)$ after stage $t$ and hence that this set is finite. If there is a stage $t_0 > t$ at which $x in V^(D_i)[t_0]$ then this will never later be injured, $x in V^(D_i)$, and $r(i, x) = r(i, x, t_0)$. By the same token, if $x in.not V^(D_i)$ then $(forall t_0 > t)[x in.not V^(D_i)[t_0]]$, and $r(i, x) = 0$.
]
Now define $h(i, x) = lim_s h(i, x, s)$.
#lemma[
For $i in omega$, $lambda x [h(i,x)] leqt pseudojump(D_i, V)$.
<lemma6.7>
]
#proof[
Fix $i$. Write $m_r$ and $m_h$ for the moduli of the functions $r(i,x,s)$ and $h(i, x, s)$ respectively.
// That is, (treating $i$ as a constant because we have fixed it.)
//
// m_r(x) = m_r(i, x) and m_h(x) = m_h(i, x) are such that
//
// (forall s > m_r(x))[r(i, x, s) = r(i, x)]
// (forall s > m_h(x))[h(i, x, s) = h(i, x)].
//
// See Soare p.56
We show how to $(pseudojump(D_i, V))$-recursively compute $h(i,x)$, $m_r$, and $m_h$ by induction.
(In fact, it turns out that we end up with $m_r = m_h$, but it seems more natural to refer to these functions separately.)
So, suppose we know $m_r(y)$, $r(i, y)$, $m_h(y)$, and $h(i,y)$ for all $y < x$. Let $s$ be so large that for all $y < x$
- $s > m_r(y)$,
- $s > m_h(y)$, and
- $h(i, y) in C_(i,s)$ for all $y < x$ such that $h(i, y) in C_i$.
By definition, for all $s' > s$, $rho^-(i, x, s') = rho^-(i, x, s)$.
Now, if $x in V^(D_i)$ find a $t > s$ such that $x in V^(D_i)[t]$ and $restr(D_(i,t), r(i, x, t)) = restr(D_i, r(i, x, t))$.
Then never again will $r(i, x, dot)$ or $h(i, x, dot)$ change and we can define $m_r(x) = m_h(x) = t$, so that $r(i, x) = r(i, x, t)$ and $h(i, x) = h(i, x, t)$.
If $x in.not V^(D_i)$ we must work a little harder.
Let $s' > s$ be arbitrary. First note that there can be no (least) $y < x$ such that $l(i, y, s' + 1) neq l(i, y, s')$. Well, suppose otherwise. For this least $y$, $l(i, y, ast.op)$ can change only if $l(i, y, s')$ enters $B_i$ at #stg($s'+1$) to mark an injury to $N_(i,y)$. As this requirement is injured we must have in particular that $y in (V^(D_i))[s']$ and $r(i, y, s') > 0$. But then $restr(D_(i,s'+1), r(i, y, s')) neq restr(D_(i,s'), r(i, y, s'))$ and so by the assumption we made about the operator~$V$, $y in.not (V^(D_i))[s'+1]$ and $r(i, y, s'+1) = 0 neq r(i,y,s')$, contradicting the definition of~$s$.
As $rho^-(i, x, s') = rho^-(i, x, s)$, it follows that the only way that we can have $l(i, x, s'+1) neq l(i, x, s')$ is if $l(i, x, s')$ was enumerated into $B_(i,s'+1)$ because some pair $pair(j, y) < pair(i, x)$ received protection, thus injuring $N_(i,x)$. In this case, $l(i, x, s'+1)$ is chosen to be the next element after $l(i, x, s')$ in the column~$column(omega, x)$. So, writing $l(i,x,s) = pair(x, lambda_0)$, the only values that will ever subsequently be taken on by $l(i, x, s')$ will be (in order) $pair(x, lambda_0)$, $pair(x, lambda_0 + 1)$, $pair(x, lambda_0 + 2)$, ... As the value will change only when $N_(i,x)$ is injured, by #lemmaRef(<lemma6.6>) $l(i, x, s')$ will only take on finitely many of these values.
// p.80
So, let $J_x = { pair(x, k) | k geq lambda_0 sand pair(x, k) in B_i }$. Note that this set is finite, may perhaps be empty, and may be computed from $B_i$. Let $t$ be the least stage greater than~$s$ such that $J_x subset B_(i,t)$. Thus there are no injuries to $N_(i,x)$ after $t$ due to higher priority pairs being protected.
By the definition of~$s$, there are also no subsequent injuries due to enumerations of $h(i,y)$ for $y < x$. As these are the only two ways in which injuries can occur, there are no injuries at all after #stg($t$), and we may conclude that $m_r(x) = m_h(x) = t$, $r(i, x) = 0$, and $h(i, x) = h(i, x, t)$.
We are done.
]
Now, by #lemmaRef(<lemma6.6>), all of the requirements of the form $N_(i,x)$ are satisfied, so for each $i$, $pseudojump(D_i, V) leqt K$. By construction each $R$-requirement is satisfied, and since by #lemmaRef(<lemma6.7>) we can compute $h(i, x)$ from $pseudojump(D_i, V)$ we may conclude that also $K leqt pseudojump(D_i, V)$.
It remains only to show that $D_0, D_1, D_2, dots$ are pairwise incomparable. Well, suppose $i neq j$ and $D_i leqt D_j$. By #lemmaRef(<lemma6.5>) $K equivt A_i join A_j$ so also $K equivt D_i join D_j equivt D_j$. But we already know that $pseudojump(D_j, V) equivt K$, so this would contradict the non-triviality assumption we have made on the pseudojump~$V$.
#bibliography("works.yml", style: "ieee")
// LocalWords: basicModuleRe blankEq equivt CDJF notleqt lim ast yml
The Airbnbs

December 2020

To celebrate Airbnb's IPO and to help future founders, I thought it might be useful to explain what was special about Airbnb.

What was special about the Airbnbs was how earnest they were. They did nothing half-way, and we could sense this even in the interview. Sometimes after we interviewed a startup we'd be uncertain what to do, and have to talk it over. Other times we'd just look at one another and smile. The Airbnbs' interview was that kind. We didn't even like the idea that much. Nor did users, at that stage; they had no growth. But the founders seemed so full of energy that it was impossible not to like them.

That first impression was not misleading. During the batch our nickname for <NAME> was The Tasmanian Devil, because like the cartoon character he seemed a tornado of energy. All three of them were like that. No one ever worked harder during YC than the Airbnbs did. When you talked to the Airbnbs, they took notes. If you suggested an idea to them in office hours, the next time you talked to them they'd not only have implemented it, but also implemented two new ideas they had in the process. "They probably have the best attitude of any startup we've funded" I wrote to <NAME> during the batch.

They're still like that. Jessica and I had dinner with Brian in the summer of 2018, just the three of us. By this point the company is ten years old. He took a page of notes about ideas for new things Airbnb could do.

What we didn't realize when we first met Brian and Joe and Nate was that Airbnb was on its last legs. After working on the company for a year and getting no growth, they'd agreed to give it one last shot. They'd try this Y Combinator thing, and if the company still didn't take off, they'd give up.

Any normal person would have given up already. They'd been funding the company with credit cards. They had a binder full of credit cards they'd maxed out. Investors didn't think much of the idea.
One investor they met in a cafe walked out in the middle of meeting with them. They thought he was going to the bathroom, but he never came back. "He didn't even finish his smoothie," Brian said. And now, in late 2008, it was the worst recession in decades. The stock market was in free fall and wouldn't hit bottom for another four months.

Why hadn't they given up? This is a useful question to ask. People, like matter, reveal their nature under extreme conditions. One thing that's clear is that they weren't doing this just for the money. As a money-making scheme, this was pretty lousy: a year's work and all they had to show for it was a binder full of maxed-out credit cards. So why were they still working on this startup? Because of the experience they'd had as the first hosts.

When they first tried renting out airbeds on their floor during a design convention, all they were hoping for was to make enough money to pay their rent that month. But something surprising happened: they enjoyed having those first three guests staying with them. And the guests enjoyed it too. Both they and the guests had done it because they were in a sense forced to, and yet they'd all had a great experience. Clearly there was something new here: for hosts, a new way to make money that had literally been right under their noses, and for guests, a new way to travel that was in many ways better than hotels.

That experience was why the Airbnbs didn't give up. They knew they'd discovered something. They'd seen a glimpse of the future, and they couldn't let it go.

They knew that once people tried staying in what is now called "an airbnb," they would also realize that this was the future. But only if they tried it, and they weren't. That was the problem during Y Combinator: to get growth started.

Airbnb's goal during YC was to reach what we call ramen profitability, which means making enough money that the company can pay the founders' living expenses, if they live on ramen noodles.
Ramen profitability is not, obviously, the end goal of any startup, but it's the most important threshold on the way, because this is the point where you're airborne. This is the point where you no longer need investors' permission to continue existing. For the Airbnbs, ramen profitability was $4000 a month: $3500 for rent, and $500 for food. They taped this goal to the mirror in the bathroom of their apartment.

The way to get growth started in something like Airbnb is to focus on the hottest subset of the market. If you can get growth started there, it will spread to the rest. When I asked the Airbnbs where there was most demand, they knew from searches: New York City. So they focused on New York. They went there in person to visit their hosts and help them make their listings more attractive. A big part of that was better pictures. So Joe and Brian rented a professional camera and took pictures of the hosts' places themselves.

This didn't just make the listings better. It also taught them about their hosts. When they came back from their first trip to New York, I asked what they'd noticed about hosts that surprised them, and they said the biggest surprise was how many of the hosts were in the same position they'd been in: they needed this money to pay their rent. This was, remember, the worst recession in decades, and it had hit New York first. It definitely added to the Airbnbs' sense of mission to feel that people needed them.

In late January 2009, about three weeks into Y Combinator, their efforts started to show results, and their numbers crept upward. But it was hard to say for sure whether it was growth or just random fluctuation. By February it was clear that it was real growth. They made $460 in fees in the first week of February, $897 in the second, and $1428 in the third. That was it: they were airborne.
Brian sent me an email on February 22 announcing that they were ramen profitable and giving the last three weeks' numbers.

"I assume you know what you've now set yourself up for next week," I responded.

Brian's reply was seven words: "We are not going to slow down."
#import "@preview/polylux:0.3.1": * #import "@preview/fontawesome:0.1.0": * #import themes.metropolis: * #show: metropolis-theme.with( aspect-ratio: "16-9", ) #set text(font: "Fira Sans", weight: 350, size: 20pt) #show math.equation: set text(font: "Fira Math") #set strong(delta: 200) #set par(justify: true) #set quote(block: true) #show quote: set align(left) #show quote: set pad(x: 2em, y: -0.8em) #set raw(tab-size: 4) #show raw.where(block: true): block.with( fill: luma(240), inset: 1em, radius: 0.7em, width: 100%, ) #show bibliography: set text(size: 0.8em) #show footnote.entry: it => { block(inset: (x: 2em, y: 0.1em))[#text(size: 0.75em)[#it.note.body]] } #let fcite(clabel) = { footnote(cite(form: "full", label(clabel))) } #let author = block(inset: 0.1em)[ #table(inset: 0.5em, stroke: none, columns: (auto, 4fr), align: (left, left), [#alert[*<NAME>*]], [`<EMAIL>`], ) #place(right, dy:-1.5em)[ #figure(image("images/disi.svg", width:40%)) ] ] #title-slide( title: "Introduction to Computer Architectures and Operating Systems", subtitle: "Digital Transformation Management @ 2024", author: author, // date: datetime.today().display("[day] [month repr:long] [year]"), ) #focus-slide[ Hardware and software ] #slide(title: "A stacked architecture")[ #figure(image("images/stack-arc.svg")) ] #slide(title: "Hardware")[ - #alert[Hardware]: set of physical devices of a computer - Circuits: motherboard, network cards, ... // - Memories: HDDs, SSDs, cache, ... - Devices: mouse, keyboard, printer, ... 
- #alert[Rigid] with respect to #alert[changes]
  - due to, for instance, compatibility or costs
  - this is an important aspect when you make decisions about the hardware you want to buy

#side-by-side[
  #figure(image("images/cpu.jpeg", width: 75%))
][
  #figure(image("images/gpu.jpeg", width: 75%))
][
  #figure(image("images/motherboard.jpg", width: 75%))
]
]

#slide(title: "Software")[
- #alert[Software]: a set of instructions and algorithms used by a computer to solve a given problem
  - The software allows the hardware to do useful work
  - Instructions are coded in a specific language that can be used by a computer (Assembly, Python, Kotlin, Java, ...)
- #alert[Algorithm]: the steps that need to be followed in some order to solve a problem
  - A piece of software may include one or more algorithms
  - Algorithms are more general concepts that have to be translated into a specific programming language
]

#slide(title: "Software (2)")[
```
1) Look at the first element in the list
2) Compare the current number with the next number
3) If the next number is smaller, swap the two elements
4) Continue until no swaps are made
```

```python
def bubble_sort(arr):
    for n in range(len(arr) - 1, 0, -1):
        swapped = False
        for i in range(n):
            if arr[i] > arr[i + 1]:
                arr[i], arr[i + 1] = arr[i + 1], arr[i]
                swapped = True
        if not swapped:  # no swaps: the list is sorted
            break
```
]

#slide(title: "Hardware vs Software")[
- _Hardware is just petrified software_
- The software is #alert[equivalent], from a logical point of view, to the hardware
- Hardware can also be #alert[simulated] in software, and any operation performed by software can also #alert[be built directly] into the hardware
- The decision to put certain functions in hardware and others in software is based on:
  - #underline[Cost], HW > SW
  - #underline[Speed], HW > SW
  - #underline[Reliability], HW > SW
  - #underline[Frequency of expected changes], HW < SW
]

#slide(title: "Hardware vs Software (2)")[
- In a project, it is important to decide what is software and what is hardware
- Generally, for #alert[clearly definable] and #alert[repetitive] actions,
creating a #alert[dedicated HW solution] will speed up operations when compared to SW running on a general-purpose processor
- Examples of HW accelerators:
  - Encryption
  - (Pseudo)Random Number Generator

#side-by-side[
  #figure(image("images/RNG.png", width: 50%))
][
  #figure(image("images/encryption.jpg", width: 50%))
]
]

#focus-slide[
  Let's look deeper into the hardware components
]

#slide(title: "Motherboard")[
- It is the #alert[main] printed circuit board
- It #alert[holds] many of the #alert[crucial] electronic components of a system and allows #alert[communication] between them
- It #alert[distributes the power], received from the power supply, to all components
#figure(image("images/MB-explained.png", width: 40%))
]

#slide(title: "Motherboard (2)")[
- #alert[Peripheral Component Interconnect] (PCI)
  - An expansion slot used for connecting expansion cards to a computer's motherboard
  - These expansion cards provide additional functionality or capabilities to a computer system

#side-by-side[
- Different types of PCI, for example PCI and PCIe
  - PCIe is not backward compatible with PCI
  - Different bandwidth (MB/s vs GB/s)
][
  #figure(image("images/pci.jpeg", width: 80%))
]
]

#slide(title: "Motherboard (3)")[
#align(center)[
  === Examples of cards for PCI
]
#v(2em)
#side-by-side[
  #align(center)[
    Graphics processing unit
    #figure(image("images/gpu.jpeg", width: 75%))
  ]
][
  #align(center)[
    Ethernet card
    #figure(image("images/ethernet.png", width: 95%))
  ]
][
  #align(center)[
    Wi-Fi card
    #figure(image("images/wifi.png", width: 50%))
  ]
]
]

#slide(title: "Motherboard (4)")[
- There are #alert[different types] of motherboards designed for different applications:
  - Multiple PCI Express slots for multiple GPUs, for example for HPC, deep learning, mining, ...
  - ATX (Advanced Technology eXtended) for desktop computers
    - Different shapes and sizes
  - Motherboards built for embedded systems, rackmount servers, mobiles, vehicles, ...
#figure(image("images/atx.jpg", width: 45%))
]

#slide(title: "Connectors")[
- A huge #alert[variety] of #alert[connectors] is available for different uses:
  - #alert[Video] (VGA, HDMI, DisplayPort, ...)
  - #alert[Data] (FireWire, USB, ...)
  - ...
- This is so for several reasons, for instance #alert[technological] and #alert[marketing] ones

#side-by-side[
- The EU has started to #alert[regulate] the connector market
  - iPhone 15: from Lightning to USB-C
][
  #figure(image("images/connectors.jpeg", width: 60%))
]
]

#slide(title: "Central Processing Unit (CPU)")[
- The #alert[brain] of a system: every single #alert[operation] that you do with the computer is #alert[processed] by the CPU
- A CPU controls the following functions:
  - #alert[Instruction management]
    - Fetching and interpretation of the program instructions
  - #alert[Data computation]
    - Computation of binary operations such as, for example, arithmetic and logic operations (but not only)
  - #alert[Data memorization]
    - Permanent and volatile data memorization
  - #alert[Data transfer]
    - Input and output functionalities
#place(right, dy: -5.5em)[
  #figure(image("images/cpus.png", width: 40%))
]
]

#slide(title: "Central Processing Unit (2)")[
#align(center)[The CPU is programmed as a #alert[Finite State Machine] (FSM) with 3 states]
#figure(image("images/cpu-cycle.png", width: 66%))
]

#slide(title: "Central Processing Unit (3)")[
- There are 3 types of instructions run by the CPU:
  - #alert[ALU operations]: all the computations on the data through the computation unit (Arithmetic Logic Unit)
  - #alert[Load/Store operations]: I/O operations on memories and I/O devices
  - #alert[Control operations]: all the operations to control the internal units of the CPU and the ones needed by the Operating System
]

#slide(title: "Central Processing Unit (4)")[
#side-by-side[
- Two main units:
  - #alert[Control Unit] (CU)
  - #alert[Arithmetic Logic Unit] (ALU)
- A CPU also has several internal memories called #alert[registers]
- A CPU communicates with memories and other
devices through the #alert[BUS]
][
  #figure(image("images/cpu-arch.jpg", width: 90%))
]
]

#slide(title: "Central Processing Unit (5)")[
- We can see the Control Unit, as the name suggests, as the #alert[main controller] of all operations
- The ALU is the #alert[real worker], which carries out the instructions received
- #alert[Registers] are essential in order to get and store data and instructions
  - Registers are #alert[inside] the CPU
  - Registers are the #alert[fastest] memory type
  - Registers are #alert[very small]
- Without the CU, the ALU is not able to work
]

#slide(title: "A 1-bit ALU")[
#figure(image("images/1b-alu.png", width: 45%))
]

#slide(title: "Control Unit")[
- It #alert[directs the operations] of the other units by providing #alert[timing] and #alert[control signals]
- It tells the computer's memory, arithmetic logic unit and input and output devices how to respond to the instructions that have been sent to the processor
- There are two types of CUs:
  - #alert[Hardwired]: implemented through the use of combinational logic units, featuring a finite number of gates that can generate specific results
  - #alert[Microprogrammed]: the CU has a special control memory to store microinstructions. The control operations are therefore not done directly by hardware; they are a sort of program, called a microprogram
]

#slide(title: "Control Unit: Hardwired vs Microprogrammed")[
- #alert[Hardwired]
  - Generally faster than the microprogrammed ones
  - Fixed architecture: it requires changes in the wiring if the instruction set is modified or changed
  - High speed but very limited flexibility
- #alert[Microprogrammed]
  - Simplicity of its structure
  - The microprogram can be debugged and replaced much like software
  - Flexibility
]

#slide(title: "Memory")[
#side-by-side[
- A memory is the place in which #alert[information is stored]
- The #alert[basic unit] of memory is the binary digit, called a bit.
A bit may contain a 0 or a 1
- Memories consist of a number of #alert[cells], each of which can store this piece of information
- Each cell has a number, called its #alert[address], by which programs can refer to it
][
  #figure(image("images/memory.png", width: 80%))
]
]

#slide(title: "Memory (2)")[
- In a computer, there are 2 types of memory:
- #alert[Primary memory]
  - It includes the _Random Access Memory_ (RAM) and the _Read Only Memory_ (ROM)
  - Usually primary memory is very #underline[fast] and #underline[small-sized] and located #underline[close to the processor]
  - RAM is _#underline[volatile]_ (data disappear when the power goes out)
  - ROM is #underline[non-volatile] (the information is maintained even if the component loses power)
- #alert[Secondary memory]
  - It includes the _HDD_ (Hard Disk) and the _SSD_ (Solid State Disk)
  - Usually it is #underline[slower] than RAM/ROM
  - It has a #underline[greater capacity] than primary memory (1, 2, 8 TB vs 64, 128, 256 GB)
  - It is #underline[not close] to the processor (it can be located in an external separate storage device)
  - It is #underline[not volatile]
]

#slide(title: "Random Access Memory")[
#align(center)[
  RAM is the #alert[hardware location] in a computer where the #underline[programs], the #underline[operating system] and the #underline[data in current use] are kept so that they can be #underline[quickly reached] by the computer's processor
]
#figure(image("images/ram.jpg", width: 60%))
]

#slide(title: "Random Access Memory (2)")[
- It is called Random Access Memory since #alert[any storage location can be accessed directly] in the same amount of time
- There are 2 types of RAM:
  - #alert[Static RAM] (SRAM)
  - #alert[Dynamic RAM] (DRAM)
- Static vs Dynamic
  - Static is #underline[faster]
  - Dynamic is #underline[less expensive]
  - Dynamic has a #underline[lower power consumption]
- SRAM is used to create the CPU's speed-sensitive cache, while DRAM forms the larger system RAM space
]

#slide(title: "Read Only Memory")[
- It is used as the computer
begins to #alert[boot up], in order to #underline[transfer data from the hard disk to the RAM memory]
- It is used to #underline[store the start-up instructions] for a computer
- When you press the power button, the RAM memory is #underline[empty]
  - #alert[It is important to store the essential start-up instructions in a ROM memory]
- These start-up instructions are the Basic Input-Output System (BIOS) or the more recent Unified Extensible Firmware Interface (UEFI)
  - The BIOS instructs the CPU to start reading code at specific memory locations
  - In some modern devices, the BIOS also checks the components through the Power-On Self-Test (POST)
]

#slide(title: "Read Only Memory (2)")[
- There are #alert[3 different types] of ROM:
  - #underline[Programmable Read Only Memory] (PROM), also known as One Time Programmable (OTP)
  - #underline[Erasable Programmable Read Only Memory] (EPROM), erasable through an ultraviolet light source
  - #underline[Electrically Erasable Programmable Read-Only Memory] (EEPROM), erasable through electricity.
#figure(image("images/eprom.jpeg", width: 30%))
]

#slide(title: "Hard Disks")[
#side-by-side[
- It #alert[stores and retrieves] digital data using #underline[magnetic storage] and one or more #underline[rigid rotating platters] coated with magnetic material
- Data are stored in #alert[logic units] defined as sectors, clusters, tracks and cylinders
- It is an #alert[electro-mechanical] data storage device, composed of many components
- Data, in terms of 0/1, are stored as #alert[magnetic signals] read and written by a head
][
  #figure(image("images/hdd.png", width: 80%))
]
]

#slide(title: "HDD vs SSD")[
- A #alert[Solid State Disk] (SSD) is a #underline[completely electronic] memory based on flash memories
- Advantages of SSDs with respect to HDDs:
  - #underline[Higher speed]
    - Access Time (time spent retrieving data): μs vs ms
    - Transfer Rate (amount of data transferred per second): MB/s vs GB/s
  - #underline[SSDs are more reliable]
    - No mechanical parts in movement
    - Higher impact resistance and less heat produced
- BUT
  - HDDs are cheaper: 0,1€/GB vs 0,02€/GB (up to 2020)
  - SSDs have a shorter lifespan under frequent write operations
  - It is very hard to retrieve information from a broken SSD (easier for HDDs)
]

#slide(title: "Recap: memory organization")[
#figure(image("images/mem-org.png", width: 70%))
]

#slide(title: "Memory hierarchy")[
- We can define a #alert[memory hierarchy] based on:
  - #underline[Speed]: the amount of time that it takes the memory to receive a request and then read or write data
  - #underline[Size]: the amount of the space that can be written on the memory
  - #underline[Cost]
#figure(image("images/mem-hier.png", width: 41%))
]

#slide(title: "Cache memory")[
- Historically, #alert[CPUs have always been faster than memories].
Memory is a #underline[bottleneck] for the CPU performance
- Actually, the problem is both #alert[technology and economics]
  - Engineers know how to build memories that are as fast as CPUs, BUT #underline[they have to be located on the CPU chip]
    - Going over the bus to memory is #underline[very slow]
  - Putting a large memory on the CPU chip makes it #underline[bigger], which makes it #underline[more expensive], and even if cost were not an issue, there are #underline[limits to how big a CPU chip can be made]
- The #alert[solution] is having a #underline[small amount of fast memory] and a #underline[large amount of slow memory]
  - This small and fast memory is called the #alert[cache]
#place(right, dy: -0.5em)[
  #figure(image("images/cache.png", width: 25%))
]
]

#slide(title: "Cache memory (2)")[
- The basic idea behind a cache is simple: the #alert[most heavily used memory words are kept in the cache]. When the CPU needs a word, it first looks in the cache. Only if the word is not there does it go to main memory
- What are the “most heavily used memory words”? We can rely on 2 #underline[principles]:
  - #alert[Spatial locality]: if a particular storage location is referenced at a particular time, then it is likely that nearby memory locations will be referenced in the near future
  - #alert[Temporal locality]: if at one point a particular memory location is referenced, then it is likely that the same location will be referenced again in the near future
- We have a #alert[hit] when a word is in the cache, otherwise there is a miss
]

#slide(title: "Cache memory (3)")[
- Some issues:
  - #alert[Cache size]: the bigger the cache, the better it performs, but also the more it costs
  - #alert[Cache organization]: if the cache is full, how to define which spaces to free?
We need specific policies
  - #alert[Number of caches]: usually chips have a primary (L1) cache on chip, a secondary (L2) cache off chip but in the same package, and a third (L3) cache still further away
    - #underline[Different levels of cache memory for different speeds], since they are closer to the CPU and built with different technologies
#place(right, dy: -0.5em)[
  #figure(image("images/cache-levels.png", width: 30%))
]
]

#slide(title: "Cache memory (4)")[
#figure(image("images/cache-i7.png", width: 70%))
]

#focus-slide[
  How are all these things organised inside a computer?
]

#slide(title: "Computer architecture")[
- All the components introduced are #alert[connected] through one or more #alert[buses]
- A bus is a #alert[communication system] that transfers data between components inside a computer, or even between computers
- There are different buses since there are two types of organizations for memories (and in general computers): the #alert[Von Neumann] architecture and the #alert[Harvard] architecture
]

#slide(title: "Von Neumann Architecture")[
- It is a computer architecture in which the #alert[memory stores both data and instructions]
  - Program instructions and data are stored in the RAM of the system
- This architecture is simpler than the Harvard architecture
#figure(image("images/von.png", width: 35%))
]

#slide(title: "Harvard Architecture")[
It is a computer architecture in which #alert[there are separate storage and signal pathways (buses) for instructions and data]
#figure(image("images/harvard.png", width: 45%))
]

#slide(title: "Von Neumann vs Harvard")[
- #alert[Recent processors use the von Neumann architecture] BUT, for performance reasons, #underline[they usually have separate processor memories (caches) for instructions and data]
- In this way they overcome the von Neumann bottleneck (an instruction fetch and a data operation cannot occur at the same time), which limits the performance of the system
]

#focus-slide[
  How have computers evolved over time?
]

#slide(title: "Generations of computers")[
- We define different #alert[generations] of computers based on the #alert[technology] used
  - The evolution of switches is the evolution of generations:
    - #underline[Mechanical]
    - #underline[Electromechanical]
    - #underline[Electronic]
#v(3em)
#place(right, dy: -5.5em)[
  #figure(image("images/hdd-big.png", width: 35%))
]
]

#slide(title: "0th Generation: Mechanical Computers (1642-1945)")[
- In 1642, <NAME> was the first person to build a working #alert[calculating machine]
  - He built this calculator to help his father with taxes
  - The programming language _Pascal_ is named in his honor
- The name of the calculator was Pascalina, and it was a sort of #alert[simple calculator]
  - #underline[Entirely mechanical]
  - Powered by a hand-operated crank
  - Only addition and subtraction operations (multiplication and division through repeated additions or subtractions)
- Thirty years later the German mathematician von Leibniz built the #alert[Stepped Reckoner]
  - #underline[Multiplication and division] operations added
  - The #underline[first real calculator]
  - First example of #underline[memory]
#place(right, dy: -2.5em)[
  #figure(image("images/stepped.png", width: 35%))
]
]

#slide(title: "0th Generation: Mechanical Computers (1642-1945)")[
- In 1834 <NAME> built the #underline[Analytical Engine]
  - Entirely #underline[mechanical]
  - It was a #underline[general-purpose] machine, i.e. it was not specialized to perform only single operations
  - It read instructions from #underline[punched cards] and carried them out
- #alert[The programming age was born]: by punching a different program on the input cards, it was possible to have the Analytical Engine perform different computations.
- To produce the (simple) software, Babbage hired a young woman named <NAME>
  - #alert[<NAME> was thus the world's first computer programmer]
#figure(image("images/lovelace.png", width: 42%))
]

#slide(title: "1st Generation: Vacuum Tubes (1945-1955)")[
- #alert[Passage from mechanics to electricity]
- Key problem: how to handle and store electrical binary signals (0/1)
  - It is possible to perform arithmetic operations thanks to Boolean algebra
- #underline[Binary signals map naturally onto electricity]
  - A 1 is a wire carrying current, i.e. a closed switch
- Therefore, research focused on different types of electromechanical and electrical switches (relays, vacuum tubes, transistors, atoms, ...)
#figure(image("images/transistor.png", width: 60%))
]

#slide(title: "1st Generation: Vacuum Tubes (1945-1955)")[
- The main stimulus for the electronic 1st generation computers was #underline[World War II]
  - Goal: decode the German messages produced by the ENIGMA
  - Solution: the British government set up a top secret laboratory that built an electronic computer
- Its name was Colossus, designed by <NAME> in 1944
  - Based on vacuum tubes
  - It was a general purpose calculator
#figure(image("images/turing.png", width: 60%))
]

#slide(title: "1st Generation: Vacuum Tubes (1945-1955)")[
- #alert[ENIAC] (Electronic Numerical Integrator and Computer)
  - Built in 1946 in the USA, to compute range tables for heavy artillery
  - It was the first real #underline[programmable, electronic, general-purpose digital computer]
  - 30 tons and consumed 140 kW of power
  - 18k vacuum tubes and 1.5k relays
- #alert[IAS] (Institute for Advanced Study) machines
  - Built in Princeton (USA) in 1951
  - Also known as the #underline[von Neumann machine], since for the first time data and instructions were stored in the memory
  - Before that, programmers had to set all the switches and cables to program the computer, since the program was not stored in memory
]

#slide(title: "Bugs")[
- Problems with the high
number of vacuum tubes:
  - #alert[Malfunctions] (vacuum tubes break easily)
  - #alert[Heat dissipation], which requires the use of air conditioning in the computer rooms
  - #alert[NB] Heat dissipation is still a key concern today in data centers and server rooms, since it is related to power consumption and failure prevention (a GPU can reach up to 90-100°C at full load)
- The use of the term #alert[“bug”] to describe #alert[software errors] is common
  - The term derives from the fact that in 1946 operators traced an error in the Mark II to a #alert[moth trapped in a relay], coining the term “bug”
  - Thus, #alert[debugging] is the process of finding and resolving bugs within computer programs, software, or systems
]

#slide(title: "2nd Generation: Transistors (1955-1965)")[
- 1948: _<NAME>, <NAME>, and <NAME>_ invented the #alert[transistor] at Bell Labs
  - They were awarded the 1956 Nobel Prize in Physics
- Transistor vs vacuum tube:
  - Transistors are much #underline[more durable]
  - Transistors are much #underline[smaller]
  - Transistors require much #underline[less voltage] to function
#place(right, dy: -4.5em)[
  #figure(image("images/trans.jpg", width: 30%))
]
]

#slide(title: "2nd Generation: Transistors (1955-1965)")[
- #alert[PDP-1] (1960): on its screen, MIT students programmed the first computer game (Spacewar)
#figure(image("images/pdp.png", width: 70%))
- #alert[Olivetti]: an Italian company that was a leader in the computer market in the '60s
  - #alert[Programma-101]: the first programmable #alert[desktop] computer (also used by NASA for the Apollo program!)
#place(right, dy: -0.5em)[
  #figure(image("images/programma.jpg", width: 20%))
]
]

#slide(title: "3rd Generation: Integrated Circuits (1965-1980)")[
- This generation is still based on transistors, but now they are “organized” in circuits containing a #alert[large number] of them (#alert[Integrated Circuits], ICs)
- #alert[Silicon Integrated Circuits]
  - Dozens of transistors put on a single chip
- ICs made it possible to build computers that were:
  - #underline[Smaller]
  - #underline[Faster]
  - #underline[Cheaper]
#place(right, dy: -3.5em)[
  #figure(image("images/ic.jpg", width: 35%))
]
]

#slide(title: "3rd Generation: Integrated Circuits (1965-1980)")[
- #alert[IBM 360] (1964): used the new integrated circuit technology
- It introduced the paradigm of #alert[multiprogramming]: several programs are stored in memory, so when one was waiting for input/output to complete, another could compute
  - Therefore, #underline[CPU utilization is improved]
#figure(image("images/ibm360.png", width: 35%))
]

#slide(title: "4th Generation: Very Large Scale Integration (1980-?)")[
- 1971: the Italian _<NAME>_ created the first #underline[microprocessor], the Intel 4004
- #underline[Millions] (not dozens) of transistors on a single chip.
This development soon led to #underline[smaller] and #underline[faster] computers: the beginning of the #alert[Personal Computer (PC)] era
#figure(image("images/faggin.png", width: 55%))
#side-by-side[
- #alert[IBM 5150] (IBM PC - 1981)
  - Operating System: MS-DOS (developed by a small company owned by _<NAME>_…)
][
  #figure(image("images/bill.png", width: 50%))
]
]

#slide(title: "5th Generation: Invisible Computers")[
- The real fifth generation is more a #alert[paradigm shift] than a specific new architecture
- We can also refer to this generation as _Ubiquitous Computing_ or _Pervasive Computing_
  - Rather than having a _small number of very powerful computing devices_ in your life (laptops, tablets, phones, music players and so on), you might have a _large number of smart devices_ which are perhaps less powerful (“simpler”)
#figure(image("images/iot.png", width: 35%))
]

#focus-slide[
  How can we communicate with a computer?
]

#slide(title: "Communicating with computers")[
- #underline[Computers understand only machine language] (a sequence of binary symbols 0 and 1)
- Why? Simplifying, with machine language we are #underline[turning on/off] the circuits and functional units inside a computer
#figure(image("images/communication.png", width: 45%))
]

#slide(title: "Machine language")[
- #alert[Machine language:] a set of #underline[instructions directly executed by the hardware]
- Machine language is a (very) #underline[low-level programming language] that generally consists of numbers, i.e.
the binary code
- Execution is #underline[extremely fast], but the instruction set is limited due to cost and complexity
- Unfortunately, #alert[machine language is really hard for humans]
#figure(image("images/machinelanguage.png", width: 30%))
]

#slide(title: "Machine Language")[
- The #alert[first generation of software] (late 1940s) had its #underline[instructions written directly in machine language]
- Unfortunately, #alert[programming in machine language is extremely difficult], especially when programs become more complex
- A #alert[solution] to the complexity of machine language is the use of an #alert[intermediate language] that is #alert[comprehensible to humans]
- Then, we have to #alert[translate this new language] into machine language
]

#slide(title: "Machine Language conversion")[
#align(center)[
  The same happens with #alert[humans]
]
#figure(image("images/translation.png", width: 75%))
]

#slide(title: "Assembly Language")[
- One of the first solutions was to use the #alert[assembly language]
- Indeed, in assembly language instructions can be represented with the help of #alert[mnemonics] (such as mov, add, sub, end, ...) that are #alert[human-readable commands]
#figure(image("images/assembly.png", width: 40%))
- Unfortunately, #alert[assembly language, too, became hard to use] for programmers writing more and more complex programs
]

#slide(title: "Assembly Language")[
- Why? Because with a more human-friendly language #alert[we started to build more complex systems], which in turn required even friendlier languages
- The solution is to create #alert[a new “easier” language] (L1) and a “translator” to translate the easier language into the assembly (or machine) language (L0).
L1 will be run on a sort of #alert[Virtual Machine] (e.g., the Java Virtual Machine)
]

#slide(title: "Multilevel Machines")[
#align(center)[This solution can be #alert[repeated] again to create #alert[new higher-level languages] that are easier for humans and closer to the application to be developed]
#figure(image("images/multilevel.png", width: 80%))
]

#slide(title: "Compilers vs Interpreters")[
- There are two types of translators: #alert[compilers] and #alert[interpreters]
- Both compilers and interpreters #alert[do the same job], which is converting higher-level programming languages to lower logical levels
- However, there are some differences:
  - #alert[Compiler]: converts the code into machine code before the program runs
  - #alert[Interpreter]: converts code into machine code while the program is running (a sort of “simultaneous translation”)
- Compiler vs Interpreter
  - Compiled code #underline[runs faster], since it can be better optimized for a specific architecture
  - Interpreted code runs slower but has #underline[better portability]
]

#slide(title: "Compilers vs Interpreters")[
#figure(image("images/com-inter.png", width: 80%))
]

#focus-slide[
  Operating Systems
]

#slide(title: "Introduction")[
- #alert[Operating systems] (OS) are one of the most essential components of modern computers
- They #alert[manage computer hardware and software resources], and provide services for computer programs
- Without an OS, a computer cannot function efficiently
]

#slide(title: "What is an OS?")[
- An operating system is software #alert[that acts as an intermediary] between computer hardware and users
- It #alert[manages hardware components] such as the CPU, memory, and input/output devices, ensuring they are used efficiently
- It allows users to interact with the computer through user interfaces such as command-line interfaces (CLI) or graphical user interfaces (GUI)
- Examples of operating systems include Windows, macOS, Linux, Android, and iOS
]

#slide(title: "Main Functions of an OS")[
-
#alert[Process Management:] It manages the execution of processes, schedules tasks, and handles multitasking.
- #alert[Memory Management:] It allocates and deallocates memory spaces as needed by different programs and processes
- #alert[File System Management:] It organizes, stores, retrieves, and manages data in files on storage devices
- #alert[Device Management:] It manages input and output devices such as keyboards, printers, and monitors
- #alert[Security and Protection:] The OS ensures the integrity and confidentiality of data by managing access permissions and protecting the system from external threats
]

#slide(title: "Types of OS")[
- There are several types of operating systems designed for different purposes:
  - #alert[Desktop Operating Systems:] Used on personal computers. Examples include Windows, macOS, and Linux
  - #alert[Mobile Operating Systems:] Designed for smartphones and tablets. Examples are Android and iOS
  - #alert[Embedded Operating Systems:] Found in devices like cars, appliances, and industrial machines, often with real-time processing needs
  - #alert[Real-Time Operating Systems (RTOS):] Provide immediate processing for applications that require quick responses, such as medical systems or aircraft controls
  - #alert[Network Operating Systems:] Manage and coordinate network resources, commonly used in servers and data centers
]

#slide(title: "Architecture of an OS")[
- The architecture of an operating system is typically divided into layers:
  - #alert[Kernel:] It manages the system hardware and acts as a bridge between the hardware and application layers.
It controls low-level tasks like memory management, process scheduling, and device I/O
    - #alert[Monolithic Kernel]: Large and complex, with many functions built into the kernel itself (e.g., Linux)
    - #alert[Microkernel:] Minimalist design where only essential services run in the kernel, with other services running in user space (e.g., Minix)
  - #alert[User Space:] This includes everything outside the kernel, where user applications and processes run. The user interacts with the operating system through interfaces provided in this space
]

#slide(title: "Process Management")[
- #alert[A process is an instance of a program in execution]. The OS is responsible for managing processes in the following ways:
  - #alert[Process Scheduling:] Determines the order in which processes are executed. Common algorithms include:
    - #alert[First Come First Served (FCFS):] Processes are executed in the order they arrive
    - #alert[Round-Robin (RR):] Each process gets a fixed time slice to execute
    - #alert[Priority Scheduling:] Processes with higher priorities are executed before lower-priority ones
  - #alert[Creation and Termination:] The OS creates processes through system calls and terminates them once completed
  - #alert[Multitasking and Concurrency:] The OS can switch between multiple processes, giving the illusion of parallel execution on single-core systems, or actual parallelism on multi-core systems
]

#slide(title: "Disclaimer")[
  Most of the slides are based on the slides of prof. <NAME>
]
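The Round-Robin policy from the Process Management slide can be sketched in a few lines of Python. The sketch below is a minimal, hypothetical simulator (the function name, process names and burst times are made up for illustration); it shows only the queueing logic, not how a real OS scheduler is implemented:

```python
from collections import deque

def round_robin(bursts, quantum):
    """Simulate round-robin scheduling.

    bursts: dict mapping process name -> CPU time still needed.
    quantum: fixed time slice granted to each process per turn.
    Returns the order in which processes finish.
    """
    ready = deque(bursts.items())  # FIFO ready queue
    finished = []
    while ready:
        name, remaining = ready.popleft()
        if remaining > quantum:
            # The process used its whole slice: requeue the leftover work
            ready.append((name, remaining - quantum))
        else:
            # The process completes within this slice
            finished.append(name)
    return finished

# A needs 5 time units, B needs 2, C needs 3; the quantum is 2
print(round_robin({"A": 5, "B": 2, "C": 3}, 2))  # → ['B', 'C', 'A']
```

With a shorter quantum, processes alternate more often (better responsiveness) at the price of more context switches; with a very long quantum, Round-Robin degenerates into First Come First Served.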
https://github.com/ukihot/igonna
https://raw.githubusercontent.com/ukihot/igonna/main/articles/web/scripts.typ
typst
#import "@preview/codly:0.2.0": *

#let icon(codepoint) = {
  box(
    height: 0.8em,
    baseline: 0.05em,
    image(codepoint)
  )
  h(0.1em)
}

#show: codly-init.with()
#codly(languages: (
  html: (name: "HTML", icon: icon("html5.svg"), color: rgb("#ff6a25")),
))

== JavaScript

JavaScript was developed in 1995 by <NAME>.
It was born as a scripting language that runs on the client side#footnote[That is, on the user's mobile device or PC rather than on a server.] and brought motion and interactivity to HTML.
One of its roles is to manipulate and modify elements defined in HTML in response to events and conditions.
For example, it can display an alert when a button is clicked, validate the values of an input form, swap images, and even implement animations and games.
Sophisticated web applications are also developed with the help of JavaScript libraries#footnote[Their rise and fall can be tracked at https://stateofjs.com.].
All the well-known social networks, too, run JavaScript in the browser.

== JavaScript Engines

The core program that parses and runs JavaScript code is called a JavaScript engine.
A web browser can execute JavaScript because it ships with such an engine.

#table(
  columns: (1fr, auto),
  inset: 1em,
  align: horizon,
  [*Browser*], [*Engine*],
  "Google Chrome", "V8",
  "Microsoft Edge", "V8",
  "Firefox", "SpiderMonkey",
  "Safari", "JavaScriptCore"
)

There are also projects that reimplement a JavaScript engine from scratch#footnote[https://github.com/boa-dev/boa].

== JavaScript Runtimes

A JavaScript execution environment that includes a JavaScript engine is called a runtime.

- Node.js (uses V8)
- Bun.sh (uses JavaScriptCore)
- Deno (uses V8)

== Running JavaScript

Create a new HTML file in a directory of your choice.

```sh
touch index.html
```

The following script implements a so-called toast notification.
Copy and paste it as-is and open `index.html` in a web browser to observe JavaScript in action.
An HTML file with embedded JavaScript needs no special command or operation: the web browser executes it automatically.

```html
<!DOCTYPE html>
<html lang="ja">
  <head>
    <meta charset="UTF-8" />
    <meta name="viewport" content="width=device-width, initial-scale=1.0" />
    <title>Example</title>
  </head>
  <body>
    <!-- Start hidden so the toast only appears after the button is clicked -->
    <div id="toast" style="visibility: hidden">
      <p>Hello, JavaScript!</p>
    </div>
    <button class="button">Enter</button>
    <script>
      document.addEventListener("DOMContentLoaded", function () {
        const toast = document.querySelector("#toast");
        const button = document.querySelector(".button");
        button.addEventListener("click", () => {
          toast.style.visibility = "visible";
          setTimeout(function () {
            toast.style.visibility = "hidden";
          }, 3000);
        });
      });
    </script>
  </body>
</html>
```
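The show/hide timing used by the toast above can be isolated in a few lines of plain JavaScript. This is a minimal sketch, not part of the original page: the DOM element is replaced by an ordinary object (the names `toast` and `showToast` are illustrative) so that it can also run under Node:

```javascript
// Stand-in for the #toast element of the HTML example above
const toast = { style: { visibility: "hidden" } };

function showToast(durationMs) {
  toast.style.visibility = "visible"; // show the toast immediately
  setTimeout(() => {
    toast.style.visibility = "hidden"; // hide it again after the delay
  }, durationMs);
}

showToast(3000); // the HTML example uses a 3000 ms delay
console.log(toast.style.visibility); // → visible
```

In the browser version, `showToast` corresponds to the `click` listener that is registered once the `DOMContentLoaded` event has fired.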
The part enclosed by the `<script>` tags is the JavaScript. This book does not cover JavaScript syntax, but if you have understood the `Coding` section you will have no trouble here. If `Rust` has already been promoted to something like a native language for you among programming languages, you can probably read `JavaScript` to a certain level as well.

== DOM
https://github.com/TypstApp-team/typst
https://raw.githubusercontent.com/TypstApp-team/typst/master/tests/typ/meta/numbering.typ
typst
Apache License 2.0
// Test integrated numbering patterns. --- #for i in range(0, 9) { numbering("*", i) [ and ] numbering("I.a", i, i) [ for #i \ ] } --- #for i in range(0, 4) { numbering("A", i) [ for #i \ ] } ... \ #for i in range(26, 30) { numbering("A", i) [ for #i \ ] } ... \ #for i in range(702, 706) { numbering("A", i) [ for #i \ ] } --- #set text(lang: "he") #for i in range(9, 21, step: 2) { numbering("א.", i) [ עבור #i \ ] } --- #set text(lang: "zh", font: ("Linux Libertine", "Noto Serif CJK SC")) #for i in range(9, 21, step: 2){ numbering("一", i) [ and ] numbering("壹", i) [ for #i \ ] } --- #set text(lang: "jp", font: ("Linux Libertine", "Noto Serif CJK JP")) #for i in range(0, 4) { numbering("イ", i) [ (or ] numbering("い", i) [) for #i \ ] } ... \ #for i in range(47, 51) { numbering("イ", i) [ (or ] numbering("い", i) [) for #i \ ] } ... \ #for i in range(2256, 2260) { numbering("イ", i) [ for #i \ ] } --- #set text(lang: "ko", font: ("Linux Libertine", "Noto Serif CJK KR")) #for i in range(0, 4) { numbering("가", i) [ (or ] numbering("ㄱ", i) [) for #i \ ] } ... \ #for i in range(47, 51) { numbering("가", i) [ (or ] numbering("ㄱ", i) [) for #i \ ] } ... \ #for i in range(2256, 2260) { numbering("ㄱ", i) [ for #i \ ] } --- // Error: 17-19 number must be at least zero #numbering("1", -1)
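The alphabetic test ranges above (26–30 and 702–706) probe the rollover points of the "A" pattern, which counts in bijective base-26 (..., Z, AA, AB, ..., ZZ, AAA, ...). As an aside, here is a small Python sketch (an illustration, not part of the test suite) of that mapping for positive counts:

```python
def alpha(n: int) -> str:
    """Bijective base-26 numbering: 1 -> A, 26 -> Z, 27 -> AA, 703 -> AAA."""
    assert n >= 1
    digits = []
    while n > 0:
        # Subtract 1 before dividing: there is no zero digit in this system.
        n, rem = divmod(n - 1, 26)
        digits.append(chr(ord("A") + rem))
    return "".join(reversed(digits))

print(alpha(26), alpha(27), alpha(702), alpha(703))  # Z AA ZZ AAA
```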
https://github.com/WinstonMDP/math
https://raw.githubusercontent.com/WinstonMDP/math/main/exers/b.typ
typst
#import "../cfg.typ": * #show: cfg $ "Prove that" ex(lim_cal(B)_X f\, lim_cal(B)_Y g) -> (all(B_Y in cal(B)_Y) ex(B_X in cal(B)_X): f[B_X] subset.eq B_Y) -> lim_cal(B)_X g f = lim_cal(B)_Y g $ That is, $all(V(lim_cal(B)_Y g)) ex(B in cal(B)_X): g f[B] subset.eq V(lim_cal(B)_Y g)$ $ex(B in cal(B)_Y): g[B] subset.eq V(lim_cal(B)_Y g)$ $ex(B' in cal(B)_X): f[B'] subset.eq B$ $g f[B'] subset.eq V(lim_cal(B)_Y g)$
https://github.com/EGmux/PCOM-2023.2
https://raw.githubusercontent.com/EGmux/PCOM-2023.2/main/lista2/lista2q15.typ
typst
=== The RZ AMI signaling waveform representing the binary sequence 0100101011 is transmitted over a very noisy channel. The received waveform is shown in Fig. 1, which contains a single error. Locate the position of this error and justify your answer.
\
#figure(
  image("../assets/fig1.png", width: 80%),
  caption: [Waveform of the received sequence],
) <fig-fig1>

The error is at time instant 7: an alternation of polarity would be expected there for a bit 1, and we know the transmitted symbol was a bit 1 because a bit 0 produces no pulse in RZ AMI signaling.
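The justification above amounts to scanning the received marks for a bipolar violation: in RZ AMI, every bit 1 must invert the polarity of the previous mark, while bit 0 carries no pulse. A Python sketch of this check (the pulse values below are an illustration consistent with the described error, not the actual samples of Fig. 1):

```python
def find_ami_violation(pulses):
    """Return the 1-based index of the first bipolar violation in an
    RZ AMI pulse sequence (values +1, -1, or 0), or None if valid."""
    last = None  # polarity of the previous mark (pulse for a '1')
    for i, p in enumerate(pulses, start=1):
        if p == 0:
            continue  # a '0' carries no pulse
        if last is not None and p == last:
            return i  # two consecutive marks with the same polarity
        last = p
    return None

# Hypothetical received sequence for 0100101011 with the error described
# in the text: the mark at instant 7 repeats the previous mark's polarity.
received = [0, +1, 0, 0, -1, 0, -1, 0, +1, -1]
print(find_ami_violation(received))  # -> 7
```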
https://github.com/emfeltham/Typst.jl
https://raw.githubusercontent.com/emfeltham/Typst.jl/main/README.md
markdown
MIT License
# Typst.jl Include [Julia](https://julialang.org) outputs as formatted elements of [Typst](https://typst.app/docs) documents. ## Introduction [Typst](https://typst.app/docs) is a new system for markup-based typesetting, touted as an alternative to LaTeX. Typst is desirable as a system that is very fast to compile (near instantaneous live rendering), relatively simple to use with clear and clean syntax, and easily extensible with user-contributed packages. It has also recently been [incorporated into Quarto](https://quarto.org/docs/output-formats/typst.html). However, do note that this system is new and limited in various ways (e.g., Typst currently only renders to PDF). This package will be useful to Julia users engaged in academic and professional writing, and may be a more parsimonious alternative to other common approaches to writing documents that rely on the outputs of programs (e.g., Jupyter[^rdme-1], Markdown-based solutions[^rdme-2], LaTeX). [^rdme-1]: Which I find to be slow, and difficult to work with. Perhaps more importantly, they have been found [difficult to replicate](https://arxiv.org/abs/2209.04308). [^rdme-2]: These solutions (e.g., Quarto) require conversion of Markdown to TeX to render to PDF, which may complicate detailed formatting and is consequently subject to slow compilation. While Quarto now supports Typst, users may find that indirectly producing Typst documents causes similar complications and defeats the purpose of Typst's clean user-end syntax and customizability. The underlying philosophy here is to provide a simple set of functions that produce properly formatted Typst documents with captions, labels, and other features that may be easily embedded into larger Typst documents. These documents should be easily updated as models are re-estimated, figures and tables are changed, or reported values in the text change as the research and writing process takes its course. 
The point is to provide a relatively simple framework that does [more with less](https://yihui.org/en/2024/01/bye-rstudio/). The above-mentioned solutions make it possible to do this[^rdme-3]. However, I believe that it is useful to provide an option that does not additionally conflate program execution and writing. Quarto or RMarkdown documents may become unwieldy when execution times extend beyond those of simple toy examples. Generally, one does not want to re-run large models or execute long-running code to render a document. The outputs of programs will reasonably be updated asynchronously in relation to changes to the text. Accordingly, the goal here is to provide both a set of ready-made functions common to an academic writing workflow (e.g., creating regression tables, including figures) and more general means to customize outputs to Typst documents (e.g., export a custom table using simple defined types). [^rdme-3]: Though, the support seems much greater for R than for Julia. For example, it is much more awkward to write Quarto documents that include text that updates based on the values of variables. ## Tables Tables are built using the [typst-tablex](https://github.com/PgBiel/typst-tablex) package, which allows for the construction of more sophisticated tables than [those made in base Typst](https://typst.app/docs/reference/model/table/). Table construction for Typst is built around structs that correspond to tablex functions, grouped under the `TableX` abstract type, including `CellX` and `HLineX`, which constitute the building blocks of tables. ## Regression tables[^rdme-4] [^rdme-4]: Example adapted from [GLM.jl](https://juliastats.org/GLM.jl/stable/examples/). Set up a simple data set and fit OLS models. 
```{julia} using Typst using DataFrames using GLM data = DataFrame(X=[1,2,3], Y=[2,4,7], Z = [4,5,6]) ols1 = lm(@formula(Y ~ X), data) ols2 = lm(@formula(Y ~ Z), data) ols3 = lm(@formula(Y ~ X + Z), data) ms = [ols1, ols2, ols3]; ``` Construct a formatted regression table for Typst: ```{julia} regtable_typ( ms, "exampletable"; caption = "Models of Y." ) ``` This produces the following output, saved to the specified ".typ" file (here, "exampletable.typ"). The output contains code that imports the typst-tablex package, and defines two variables that control table column widths, which may be easily altered by the user. The table itself[^rdme-5] is embedded in a `figure` object, and is specified as a `table` type[^rdme-6]. [^rdme-5]: `gridx` objects are identical to `tablex` objects, but do not include vertical and horizontal lines by default. [^rdme-6]: Which is useful if you want to, for example, set captions above all table objects in your document, but not for other sorts of figures. ```{typst} #import "@preview/tablex:0.0.8": tablex, gridx, hlinex, vlinex, colspanx, rowspanx, cellx #let col1width = 12em #let coliwidth = auto #figure( kind: table, gridx( columns: (col1width, coliwidth, coliwidth, coliwidth, ), rows: (0.2em, 1.5em), align: center + horizon, hlinex(y: 0, stroke: 0.05em), cellx(x: 0, y: 0, colspan: 4)[], hlinex(y: 1, stroke: 0.05em), cellx(x: 1, y: 1)[(1)], cellx(x: 2, y: 1)[(2)], cellx(x: 3, y: 1)[(3)], hlinex(start: 1, y: 2, stroke: 0.05em), cellx(x: 0, y: 6, align: left)[Z], cellx(x: 2, y: 6)[2.5#super[$+$]], cellx(x: 2, y: 7)[(0.289)], cellx(x: 3, y: 6)[-0.222], cellx(x: 3, y: 7)[(0.208)], cellx(x: 0, y: 4, align: left)[X], cellx(x: 1, y: 4)[2.5#super[$+$]], cellx(x: 1, y: 5)[(0.289)], cellx(x: 3, y: 4)[2.722], cellx(x: 3, y: 5)[(0.487)], cellx(x: 0, y: 2, align: left)[(Intercept)], cellx(x: 1, y: 2)[-0.667], cellx(x: 1, y: 3)[(0.624)], cellx(x: 2, y: 2)[-8.167], cellx(x: 2, y: 3)[(1.462)], cellx(x: 3, y: 2)[0.0], cellx(x: 3, y: 3)[(NaN)], hlinex(start: 
1, y: 8, stroke: 0.05em), cellx(x: 0, y: 8, align: left)[N], cellx(x: 0, y: 12, align: left)[BIC], cellx(x: 0, y: 9, align: left)[$R^2$], cellx(x: 0, y: 11, align: left)[AIC], cellx(x: 0, y: 10, align: left)[Adjusted $R^2$], cellx(x: 1, y: 8)[3], cellx(x: 1, y: 12)[3.138], cellx(x: 1, y: 9)[0.987], cellx(x: 1, y: 11)[5.843], cellx(x: 1, y: 10)[0.974], cellx(x: 2, y: 8)[3], cellx(x: 2, y: 12)[3.138], cellx(x: 2, y: 9)[0.987], cellx(x: 2, y: 11)[5.843], cellx(x: 2, y: 10)[0.974], cellx(x: 3, y: 8)[3], cellx(x: 3, y: 12)[3.138], cellx(x: 3, y: 9)[0.987], cellx(x: 3, y: 11)[5.843], cellx(x: 3, y: 10)[0.974], hlinex(y: 13, stroke: 0.1em), cellx(y: 13, colspan: 4, align: left)[_Note:_ $#super[+]p<0.10$; $#super[$star.op$]p<0.05$; $#super[$star.op star.op$]p<0.01$, $#super[$star.op star.op star.op$]p<0.001$] ), caption: [Models of Y.] ) ```

This file should be incorporated into your document via `#include("exampletable.typ")`.

## Figures

The package also provides a function to easily write a ".typ" file that will load a corresponding figure, formatted with a caption and label.

```julia
using DataFrames
using CairoMakie

data = DataFrame(X=[1,2,3], Y=[2,4,7], Z = [4,5,6]);
fg, ax, pl = scatter(data.X, data.Y)
```

The function will automatically handle directories on the path. N.B., the figure file extension is included in `filenamepath`. Also observe that the output ".typ" file that loads the figure expects the figure file to appear in the same directory (in the example below, "plot.svg" should be saved in "dir/"). `figure_export` will automatically save them in the same place.

```julia
filenamepath = "dir/plot.svg"

# Short captions are used in the list of figures or the list of tables
short_caption = "Cap"

# Long captions appear with the figure itself
caption = "Plot caption."

#=
If desired, define a modified version of the Makie `save` function with
whatever specified options. Otherwise, just input `save`.
=#
@inline save2(name, fg) = save(name, fg; pt_per_unit = 2)

#=
generate two files: (1) a ".typ" file that includes figure information for
Typst, and (2) the image file (e.g., "plot.svg") that is called in the
".typ" file.
=#
figure_export(
    filenamepath,
    fg,    # Makie figure
    save2; # Makie save function
    caption,
    short_caption,
)
```

The following output is produced in "dir/plot.typ":

```typst
#figure(
    image("plot.svg", width: 100%),
    caption: flex-caption(
        [Plot caption.],
        [Cap]
    )
)
<plot>
```

This file should be incorporated into your document via `#include("dir/plot.typ")`.

## Tasks

- [/] real and updated documentation (the documentation is **not** current)
- [/] updated for figure export
- [ ] DataFrame export
- [ ] update regression table export
- [ ] update examples to match code changes (N.B., the examples are very out of date and not correct)
- [X] short captions

### Regression tables

- [X] basic support for MixedModels
- [X] labels
- [X] better alignment
- [ ] improve table row spacing

### Tables

- [X] objects for other tablex functions
- [X] `vlinex`
- [ ] `rowspanx`, `colspanx`
- [ ] functions for other kinds of tables
- [X] simple display of an array, DataFrame
- [ ] NamedArrays
- [X] `gridx` option (cf. 
`autolines`) - [ ] adjust import statement above to only include functions needed for current table - [ ] option to include tables with tablecomponents not explicitly indexed by (x, y) - [ ] regularize regression table with table_export workflow ### Figures - [X] figure export function - [ ] documentation for figure export ### Dynamic text - [ ] export variables from Julia into Typst (_e.g._, so that the text can reference exported variables that update based on Julia code execution) (probably use dicts) ### Types - [X] define types to hold contents relevant Typst functions, start with those relevant to table and figure production (they should have same print workflow)
https://github.com/xrarch/books
https://raw.githubusercontent.com/xrarch/books/main/documents/xlospec/chapintro.typ
typst
#import "@preview/tablex:0.0.6": tablex, cellx, colspanx, rowspanx = Overview The XR/SDK Linkable Object *(XLO)* file format is the native object file format of the XR/SDK suite of tools. It is the only file format emitted by the XR/ASM assembler, and is the only file format accepted by the XR/LINK linker. It is suitable for use as an intermediate object code format, as an executable file format, as a static library format, and as a dynamic library format. *XLO* is a portable format, with current support for the RISC XR/17032 and CISC fox32 architectures, and with planned support for the Aphelion 64-bit architecture. The format is simple, and architecture-specific details are limited to the definition of new relocation entry types. It is designed for flexibility, with the ability to specify arbitrary sections. The tables are laid out for rapid iteration of relevant entries during the process of load-time program relocation and dynamic linking, and an "optimistic" relocation scheme helps avoid both of these costs entirely. = Module Format The overall format of an *XLO* module file is a header, followed by a number of tables linked together via file pointers (i.e. byte offsets into the file), and finally the data for each section. The metadata contained within the file, that is, the header and the tables, is referred to collectively as the "head" of the file. A maximal module file might have the following layout: ``` +--------+---------------+-------------------+--------------+--------------+------------ | Header | Section Table | Relocation Tables | Symbol Table | Extern Table | Unr. 
Fixups +--------+---------------+-------------------+--------------+--------------+------------ -+--------------+--------+--------------+--------------+--------------+ | Import Table | Fixups | String Table | Text Section | Data Section | -+--------------+--------+--------------+--------------+--------------+ ``` The "head length" of this file is the length of all of the contents up to the end of the last metadata; in this case, the string table. #box([ = Header ``` STRUCT XloHeader Magic : ULONG, SymbolTableOffset : ULONG, SymbolCount : ULONG, StringTableOffset : ULONG, StringTableSize : ULONG, TargetArchitecture : ULONG, HeadLength : ULONG, ImportTableOffset : ULONG, Flags : ULONG, Timestamp : ULONG, SectionTableOffset : ULONG, ExternTableOffset : ULONG, ExternCount : ULONG, SectionCount : UINT, ImportCount : UINT, END STRUCT XloHeaderExtended Hdr : XloHeader, UnresolvedFixupTableOffset : ULONG, UnresolvedFixupCount : ULONG, END ``` The header of an *XLO* file contains general information about the module file, and provides information required to find and parse the tables of metadata. There are two variants of the header, "normal" and "extended". These vary only by the extended header featuring two extra fields. The extended header is present within "fragment" modules, that is, modules that have the *XLO_FILE_FRAGMENT* flag (bit 0) set within the Flags field of the normal header. In future revisions, extra fields may be added beyond the end of the extended header, but the header's length _must_ remain 64-bit aligned. ]) == Magic The 32-bit magic number in the normal header should read 0x6174737F. == SymbolTableOffset, SymbolCount SymbolTableOffset contains the file pointer of the table which describes the symbols exposed by the module. SymbolCount contains the number of entries within that table. If SymbolCount is zero, SymbolTableOffset has undefined meaning. 
== StringTableOffset, StringTableSize StringTableOffset contains the file pointer of the "string table", which is the hunk of all null-terminated ASCII strings used by metadata within the module. StringTableSize contains its length, up to (and including) the null terminator of the final string. If StringTableSize is zero, StringTableOffset has undefined meaning. == TargetArchitecture This field contains the 32-bit "architecture code" indicating which instruction set the code within the module is for. Currently defined codes are: #box([ #tablex( columns: (1fr, 8fr), cellx([ #set text(fill: white) #set align(center) *Code* ], fill: rgb(0,0,0,255)), cellx([ #set text(fill: white) #set align(center) *Architecture* ], fill: rgb(0,0,0,255)), [00000000], [Unknown], [00000001], [XR/17032], [00000002], [Fox32], [00000003], [Aphelion], [00000004], [AMD64], ) ]) == HeadLength This field contains the length in bytes of all of the metadata for the module. It must therefore be grouped together at the beginning of the file to form a region known as the "head" that precedes all section data. == ImportTableOffset, ImportCount ImportTableOffset contains the file pointer of the "import table", a flat array of entries which describe the dynamically linked libraries that are depended upon by this module. ImportCount contains the 16-bit count (range [0, 65535]) of entries in this table. If the module is a fragment (*XLO_FILE_FRAGMENT* is set in the Flags field), ImportCount must be zero. When ImportCount is zero, the meaning of ImportTableOffset is undefined. == Flags This field contains up to 32 flags indicating characteristics of the module file. 
Currently defined flags are: #box([ #tablex( columns: (1fr, 5fr, 16fr), cellx([ #set text(fill: white) #set align(center) *Bit* ], fill: rgb(0,0,0,255)), cellx([ #set text(fill: white) #set align(center) *Name* ], fill: rgb(0,0,0,255)), cellx([ #set text(fill: white) #set align(center) *Meaning* ], fill: rgb(0,0,0,255)), [0], [*XLO_FILE_FRAGMENT*], [This file is a fragment; it has an extended header and is not yet suitable for relocation or dynamic linking. These files are produced directly by the assembler.], [1], [*XLO_FILE_STRIPPED*], [This file was stripped of its internal relocations. Its sections cannot be loaded elsewhere in the address space.] ) ]) == Timestamp This field contains a 32-bit Unix Epoch timestamp (in seconds) of when the module file was encoded to disk. It is intended primarily to provide a unique versioning among multiple versions of the same dynamic library. When a dynamic library is linked against, its Timestamp field is captured in the import table entry. Mismatched timestamps indicate to the runtime dynamic linker that the library was updated, and that any modules that reference the old version must be fixed up. == SectionTableOffset, SectionCount SectionTableOffset contains the file pointer of the "section table", a flat array of "section headers" that describe the sections contained within the module file. SectionCount contains the 8-bit number (range [0, 255]) of entries in this table. When it is zero, the meaning of SectionTableOffset is undefined. SectionCount can physically contain a 16-bit count, but other fields within the format limit the number of sections in a single module to 255. == ExternTableOffset, ExternCount ExternTableOffset contains the file pointer of the "extern table", a flat array that describes all required symbols that reside in other modules. ExternCount is the 16-bit number (range [0, 65535]) of entries in this table. When it is zero, the meaning of ExternTableOffset is undefined. 
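As a reading aid (not part of the specification), the normal header above can be decoded with a short script. This sketch assumes ULONG is a little-endian 32-bit field and UINT a 16-bit field, which is consistent with the value ranges stated above:

```python
import struct
from collections import namedtuple

# Field order follows the XloHeader struct above:
# 13 ULONG fields, then SectionCount and ImportCount as UINTs.
FIELDS = (
    "Magic SymbolTableOffset SymbolCount StringTableOffset StringTableSize "
    "TargetArchitecture HeadLength ImportTableOffset Flags Timestamp "
    "SectionTableOffset ExternTableOffset ExternCount SectionCount ImportCount"
)
XloHeader = namedtuple("XloHeader", FIELDS)
HEADER_FMT = "<13I2H"  # 13 x ULONG + 2 x UINT = 56 bytes

XLO_MAGIC = 0x6174737F

def parse_header(data: bytes) -> XloHeader:
    """Unpack the normal XLO header from the start of a module image."""
    hdr = XloHeader(*struct.unpack_from(HEADER_FMT, data))
    if hdr.Magic != XLO_MAGIC:
        raise ValueError(f"bad magic: {hdr.Magic:#010x}")
    return hdr
```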
== UnresolvedFixupTableOffset, UnresolvedFixupCount These two entries reside in the extended header and therefore only exist in fragment modules. UnresolvedFixupTableOffset contains the file pointer of the "unresolved fixup table", a flat array of relocation entries that depend on the value of unresolved extern symbols in order to be processed. UnresolvedFixupCount contains the number of entries in this table. If it is zero, the meaning of UnresolvedFixupTableOffset is undefined. #box([ = Symbol Table ``` STRUCT XloSymbolEntry SectionIndex : UBYTE, Type : UBYTE, Flags : UBYTE, Padding : UBYTE, SectionOffset : ULONG, NameOffset : ULONG, END ``` The symbol table is an array of symbol entries, each representing a named value that is exposed by the module. This structure is essential for linking (both static and dynamic) and debugging (for stack traces, etc). A symbol normally corresponds to a function, variable, or data structure defined in a high-level language like Jackal. ]) == SectionIndex The 8-bit index (range [0, 255]) into the section table of the section that this symbol resides in; i.e. the section that the SectionOffset field is relative to. == Type The 8-bit type code indicating properties of the symbol. Currently defined types are: #box([ #tablex( columns: (1fr, 1fr, 8fr), cellx([ #set text(fill: white) #set align(center) *Code* ], fill: rgb(0,0,0,255)), cellx([ #set text(fill: white) #set align(center) *Name* ], fill: rgb(0,0,0,255)), cellx([ #set text(fill: white) #set align(center) *Meaning* ], fill: rgb(0,0,0,255)), [0x01], [*GLOBAL*], [This symbol is visible to other modules in a statically linked compilation unit, but will not be included in the symbol table of a final executable or dynamic library.], [0x02], [*EXPORT*], [This symbol is visible to other modules in both a statically and dynamically linked unit. Is included in a final symbol table after linking.] ) ]) == Flags Up to 8 flags indicating characteristics of the symbol. 
No symbol flags are currently defined. == SectionOffset The offset within the section at which the symbol resides. == NameOffset The offset from the base of the string table at which the null-terminated ASCII symbol name resides. #box([ = Import Table ``` STRUCT XloImportEntry NameOffset : ULONG, ExpectedTimestamp : ULONG, FixupTableOffset : ULONG, FixupCount : ULONG, END ``` The import table is the array of entries that describe the dynamic libraries upon which this module depends at runtime. ]) == NameOffset The offset from the base of the string table at which the null-terminated ASCII dynamic library name resides. == ExpectedTimestamp This field contains a 32-bit Unix Epoch timestamp (in seconds), captured from the Timestamp field of the dynamic library's header. It is intended primarily to provide a unique versioning among multiple versions of the same dynamic library. When a dynamic library is linked against, its Timestamp field is captured here. Mismatched timestamps indicate to the runtime dynamic linker that the library was updated, and that this module must be fixed up. == FixupTableOffset, FixupCount FixupTableOffset contains the file pointer of a "fixup table", containing all of the relocations that must be performed at runtime should this dynamic library have a mismatched version, or fail to load at its preferred base address. FixupCount contains the number of entries in this table. #box([ = Relocation and Fixup Tables There are several "relocation tables" within the *XLO* format: - The per-section relocation tables, describing all of the "internal" relocations that must be performed if that section is moved in the virtual address space. - The unresolved fixup table, containing all of the external relocations that must be performed against the value of extern symbols that are still of totally unknown origin. These are common in fragment modules that have just been produced by an assembler and are destined to be linked into an executable or library. 
- The per-import fixup tables, containing all of the "external" relocations that must be performed if that imported dynamic library is of an unexpected version, or if it fails to load at its preferred base address. The entries of the per-section relocation tables and the unresolved fixup table share a common format: ``` STRUCT XloRelocEntry SectionOffset : ULONG, ExternIndex : UINT, Type : UBYTE, SectionIndex : UBYTE, END ``` ]) #box([ The import fixup table entries are the same, except they have an *OriginalValue* field: ``` STRUCT XloImportFixupEntry SectionOffset : ULONG, ExternIndex : UINT, Type : UBYTE, SectionIndex : UBYTE, OriginalValue : ULONG, END ``` ]) It's important to note that all relocations except for import fixups are performed relative to the value that is already encoded in that location. For instance, if a section is relocated from virtual address 0x10000000 to 0x10010000, the relocations in that section's table will be performed by adding the difference (0x10000) to all of the values already encoded there. Import fixups are performed by calculating the address of the referenced symbol, adding the sign-extended contents of the *OriginalValue* field of the fixup to it, and replacing the value entirely. == SectionOffset Indicates the offset within the "target section" of the pointer that must be relocated. == ExternIndex Indicates the 16-bit index (range [0, 65535]) of the entry within the extern table that describes the external symbol this relocation relies upon. This field has no meaning and is unused if this is an internal (i.e. per-section table) relocation. #box([ == Type Indicates the 8-bit type code (range [0, 255]) of the pointer that must be relocated. 
The currently defined types are: #tablex( columns: (5fr, 14fr, 48fr), cellx([ #set text(fill: white) #set align(center) *Code* ], fill: rgb(0,0,0,255)), cellx([ #set text(fill: white) #set align(center) *Name* ], fill: rgb(0,0,0,255)), cellx([ #set text(fill: white) #set align(center) *Meaning* ], fill: rgb(0,0,0,255)), [0x01], [*PTR*], [32 or 64-bit pointer, depending on the bitness of the module's target architecture.], [0x02], [*XR17032_ABSJ*], [An XR/17032 absolute jump instruction.], [0x03], [*XR17032_LA*], [An XR/17032 *LA* pseudo-instruction.], [0x04], [*XR17032_FAR_INT*], [An XR/17032 far-int access pseudo-instruction.], [0x05], [*XR17032_FAR_LONG*], [An XR/17032 far-long access pseudo-instruction.], [0x06], [*FOX32_CALL*], [A fox32 *CALL* instruction.] ) ]) ]) == SectionIndex The 8-bit index (range [0, 255]) into the section table of the "target section" that this relocation modifies; i.e., the section that the SectionOffset is relative to. #box([ = Extern Table ``` STRUCT XloExternEntry NameOffset : ULONG, Type : UBYTE, Padding : UBYTE, ImportIndex : UINT, Padding2 : ULONG, Padding3 : ULONG, END ``` The extern table is an array of "external symbol" entries, each representing a named value that is external to, but depended upon by the module. This structure is essential for linking. An extern normally corresponds to a function, variable, or data structure defined in a high-level language like Jackal. ]) == NameOffset The offset from the base of the string table at which the null-terminated ASCII name of the external symbol resides. == Type The 8-bit type code indicating properties of the extern. 
Currently defined types are: #box([ #tablex( columns: (2fr, 6fr, 30fr), cellx([ #set text(fill: white) #set align(center) *Type* ], fill: rgb(0,0,0,255)), cellx([ #set text(fill: white) #set align(center) *Name* ], fill: rgb(0,0,0,255)), cellx([ #set text(fill: white) #set align(center) *Meaning* ], fill: rgb(0,0,0,255)), [1], [*UNRESOLVED*], [This external symbol is completely unresolved.], [2], [*IMPORTED*], [This external symbol resides in a known dynamic library.] ) ]) == ImportIndex The 16-bit index (range [0, 65535]) of the import table entry that describes the dynamic library this external symbol resides in. If this external symbol is not of type *IMPORTED*, this field has no meaning. #box([ = Section Table ``` STRUCT XloSectionHeader #IF ( == BITS 64 ) VirtualAddress : UQUAD, #ELSE VirtualAddress : ULONG, Reserved : ULONG, #END NameOffset : ULONG, FileOffset : ULONG, DataLength : ULONG, RelocTableOffset : ULONG, RelocCount : ULONG, Flags : ULONG, END ``` The section table is a flat array of "section headers" that describe hunks of data and code contained by this module. The file pointer of the section table must be 64-bit aligned, as the section header contains a 64-bit field. ]) == VirtualAddress VirtualAddress contains the "link-time" base address at which the section has been placed; that is, the "assumed" address that all pointers to the section have been offset by. If at runtime the section cannot be placed at this address, internal relocations for this module (and external fixups for other modules that may be dynamically linked to it) must be performed. This field is either 32 bits or 64 bits depending on the bitness of the target architecture. This allows sections to be located anywhere within a 64-bit address space, but their sizes are still limited to 4GB each, due to pervasive use of 32-bit section offsets. 
For 32-bit modules, the space where the upper 32 bits of the virtual address would be should be zero, to ensure compatibility with 64-bit tools. == NameOffset The offset from the base of the string table at which the null-terminated ASCII name of the section resides. == FileOffset The file pointer of the section contents within the module. == DataLength The length of the section contents. == RelocTableOffset and RelocCount RelocTableOffset contains the file pointer of the section's relocation table, containing all of the internal relocations that must be performed at runtime should this section fail to be placed at its preferred virtual address. RelocCount contains the number of entries within this table. #box([ == Flags Up to 32 flags that indicate characteristics of the section. Currently defined flags are: #box([ #tablex( columns: (1fr, 5fr, 12fr), cellx([ #set text(fill: white) #set align(center) *Bit* ], fill: rgb(0,0,0,255)), cellx([ #set text(fill: white) #set align(center) *Name* ], fill: rgb(0,0,0,255)), cellx([ #set text(fill: white) #set align(center) *Meaning* ], fill: rgb(0,0,0,255)), [0], [*XLO_SECTION_ZERO*], [The section has no on-disk data and is full of zeroes. This flag is primarily a hint to the linker.], [1], [*XLO_SECTION_CODE*], [The section contains code and should be mapped as executable.], [2], [*XLO_SECTION_MAP*], [The section has in-memory presence at load time. If this isn't set, it only has on-disk data such as debug information.], [3], [*XLO_SECTION_PAGED*], [The section is pageable. This is only relevant to the _MINTIA_ Executive and modules thereof.] ) ]) ])
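To make the relocation semantics described earlier concrete, here is a small Python sketch (an illustration for a 32-bit target and the plain *PTR* type, not normative): internal relocations adjust the already-encoded value by the distance the section moved, while import fixups replace the value outright with the symbol address plus the sign-extended *OriginalValue*:

```python
def apply_internal_reloc(encoded: int, old_base: int, new_base: int) -> int:
    """Internal relocation: shift the encoded value by the distance
    the section moved (sketched here for a 32-bit PTR relocation)."""
    return (encoded + (new_base - old_base)) & 0xFFFFFFFF

def apply_import_fixup(symbol_addr: int, original_value: int) -> int:
    """Import fixup: the encoded value is replaced entirely with the
    symbol's address plus the sign-extended OriginalValue field."""
    if original_value & 0x80000000:  # sign-extend the 32-bit field
        original_value -= 1 << 32
    return (symbol_addr + original_value) & 0xFFFFFFFF

# Section relocated from 0x10000000 to 0x10010000, as in the example above:
print(hex(apply_internal_reloc(0x10000040, 0x10000000, 0x10010000)))  # 0x10010040
```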
https://github.com/binhtran432k/ungrammar-docs
https://raw.githubusercontent.com/binhtran432k/ungrammar-docs/main/contents/literature-review/ungrammar.typ
typst
#import "/components/glossary.typ": gls == Ungrammar This section explores the suitability of Ungrammar, a language for defining grammars, for implementing a #gls("lsp") for a specific language. Lezer (@sec-lezer), a parser generator, will be used in conjunction with Ungrammar to achieve efficient parsing and analysis. === Ungrammar in Rust Analyzer Ungrammar is a #gls("dsl") developed by the Rust Analyzer team to define the syntax structure of the Rust programming language. It serves as a declarative way to specify the grammar rules without delving into the complexities of parser implementation @bib-ungrammar. Key Role of Ungrammar in Rust Analyzer: - *Grammar Specification*: Ungrammar is used to precisely define the #gls("cst") of Rust code. This includes tokens, keywords, operators, and the hierarchical structure of the language. - *Parser Generation*: While Ungrammar itself doesn't generate a parser, it provides a blueprint for constructing a parser. Tools like Lezer can utilize Ungrammar grammars to create efficient parsers. - *Syntax Tree Construction*: The generated parser, often in conjunction with Lezer, constructs a #gls("cst") based on the Ungrammar grammar. This #gls("cst") is the foundation for subsequent language analysis tasks. - *Language Server Integration*: The #gls("cst"), along with additional semantic information, is used to power #gls("lsp") features like syntax highlighting, code completion, and error checking. Benefits of Using Ungrammar: - *Readability*: Ungrammar's syntax is designed for human readability, making it easier to understand and maintain grammar definitions. - *Maintainability*: By separating grammar specification from parser implementation, changes to the language syntax can be made more easily. - *Efficiency*: The focus on defining the #gls("cst") structure allows for optimized parser generation and efficient syntax analysis. 
By adopting Ungrammar, Rust Analyzer has established a solid foundation for parsing and analyzing Rust code, contributing to the overall performance and accuracy of the language server.

=== LSP Integration with Ungrammar and Lezer

While Ungrammar excels at defining grammars, directly using its output for full #gls("lsp") features like semantic analysis might have limitations. Lezer, with its efficient parsing capabilities, can be a valuable tool in conjunction with Ungrammar.

Here's how they can work together:

+ *Grammar definition in Ungrammar*: The language's syntax is defined using Ungrammar, leveraging its conciseness and expressiveness.
+ *Parser generation with Lezer*: Lezer processes the Ungrammar grammar to generate a parser specifically tailored for the target language.
+ *Syntax analysis with Lezer*: Lezer parses the code, generating a #gls("nast", mode: "full") that captures the syntactic structure.
+ *Potential challenges*:
  - _NAST limitations_: As mentioned previously, #gls("nast") might not be sufficient for advanced semantic analysis required by #gls("lsp"). Additional processing or extensions to the #gls("nast") might be necessary.
  - _Custom tokenization_: Depending on the language's complexity, Lezer's #gls("lr") parsing approach might require custom tokenization logic for accurate parsing.

This approach leverages the strengths of both Ungrammar and Lezer:

- Ungrammar provides a clear and expressive way to define the language's grammar.
- Lezer offers efficient parsing and a solid foundation for syntax analysis.

By combining these tools, we can establish a robust foundation for building an #gls("lsp") server for our target language.

=== Future Research and Exploration

While Ungrammar and Lezer present a promising solution, further exploration is necessary:

- *Strategies for overcoming NAST limitations*: Investigate techniques for enriching the #gls("nast") or integrating semantic analysis tools to address #gls("lsp") requirements.
- *Evaluation of Ungrammar for complex languages*: Analyze the suitability of Ungrammar for defining grammars of languages with intricate syntax or semantics.
- *Community resources*: Explore existing projects using Ungrammar and Lezer in conjunction with #gls("lsp") to learn from best practices and potential pitfalls.

By conducting further research and leveraging the existing developer community, we can effectively utilize Ungrammar and Lezer to build a comprehensive #gls("lsp") server for our target language.
https://github.com/CheneyNine/Resume-in-Typst-Template
https://raw.githubusercontent.com/CheneyNine/Resume-in-Typst-Template/main/resume.typ
typst
#import "template.typ": *

// Icon setup; icons from: https://fontawesome.com/icons/
// To change an icon's color, edit the fill="rgb(38, 38, 125)" attribute in the SVG file,
// e.g. using VS Code's global find-and-replace.
#let faAward = icon("icons/fa-award.svg")
#let faBuildingColumns = icon("icons/fa-building-columns.svg")
#let faCode = icon("icons/fa-code.svg")
#let faEnvelope = icon("icons/fa-envelope.svg")
#let faGithub = icon("icons/fa-github.svg")
#let faGraduationCap = icon("icons/fa-graduation-cap.svg")
#let faLinux = icon("icons/fa-linux.svg")
#let faPhone = icon("icons/fa-phone.svg")
#let faWindows = icon("icons/fa-windows.svg")
#let faWrench = icon("icons/fa-wrench.svg")
#let faWechat = icon("icons/fa-wechat.svg")
#let faLinkin = icon("icons/linkedin.png")
#let faLink = icon("icons/fa-link.svg")
#let faCertificate = icon("icons/fa-certificate.svg")
#let faScroll = icon("icons/fa-scroll.svg")
#let faUser = icon("icons/fa-user.svg")
#let faPen = icon("icons/fa-pen.svg")
#let faHouse = icon("icons/fa-house.svg")
#let faBook = icon("icons/fa-book.svg")
#let faBilibili = icon("icons/fa-bilibili.svg")
#let faYoutube = icon("icons/fa-youtube.svg")
#let faScroll_f = icon("icons/fa-scroll-flat.svg")
#let faCertificate_f = icon("icons/fa-certificate-flat.svg")

// Theme color
#let themeColor = rgb(38, 38, 125)

// Resume options and header
#show: resume.with(
  // Base font size
  size: 10pt,
  // Heading color
  themeColor: themeColor,
  // Page margins
  top: 1.65cm,
  bottom: 1cm,
  left: 1cm,
  right: 1cm,
  // Comment out or delete the following parameter if no photo is needed
  photograph: "photo.jpg",
  backgroundImage: "CQUresumebg.jpg",
  photographWidth: 5.5em,
  gutterWidth: 2em,
)[
  = Name

  #info(
    color: themeColor,
    (
      icon: faPhone,
      content: "(+86)123 4567 8999"
    ),
    (
      icon: faUser,
      content: "Political status",
    ),
    (
      icon: faGraduationCap,
      content: "Discipline / major",
    ),
    (
      icon: faEnvelope,
      content: "<EMAIL>",
      link: "mailto:<EMAIL>"
    ),
    (
      icon: faGithub,
      content: "GitHub profile",
      link: "https://github.com",
    ),
    (
      icon: faWechat,
      content: "WeChat ID",
    ),
    (
      icon: faLinkin,
      content: "LinkedIn profile",
      link: "https://www.linkedin.com/",
    ),
    (
      icon: faHouse,
      content: "Personal homepage",
    ),
    (
      icon: faHouse,
      content: "Custom content",
    )
  )

  #grid(columns: (100%), rows: (0pt),
    cell(height: 100%)[#h(2em)A short personal statement covering your discipline and major, areas of knowledge, research interests, undergraduate research experience, and so on.#linebreak()#h(2em)A short personal statement covering your discipline and major, areas of knowledge, research interests, undergraduate research experience, and so on.]
  )
]

#show: set block(spacing: 0.5em)
#show: set par(justify: true, leading: 0.45em)

== #faGraduationCap Education

#grid(columns: (20%, 1fr, auto), rows: (5pt, auto),
  cell(height: 100%)[*Chongqing University* | Undergraduate],
  cell(height: 100%)[College name, major, GPA: x.xx/4.00, major rank: xx/xxx],
  cell(height: 100%)[2021-present],
)

== #faCode Research Experience

#item(
  link(
    "https://github.com/",
    [*Research project title*]
  ),
  [ *14th Student Research Training Program (SRTP)* ],
  date[ 2022.03–2023.04 ]
)
Describe the project experience and its outcomes. Describe the project experience and its outcomes. Describe the project experience and its outcomes.
#linebreak()
Links to certificate images, project repositories, demo videos, etc. can be added on the right. #box({link("https://github.com/", faCertificate_f)})

#item(
  link(
    "https://github.com/",
    [ *Research project title*]
  ),
  [ *2023 First National Undergraduate Innovation Training Program* ],
  date[ 2023.06–2024.06 ]
)
Describe the project experience and its outcomes. Describe the project experience and its outcomes. Describe the project experience and its outcomes.
#linebreak()
Links to certificate images, project repositories, demo videos, etc. can be added on the right. #box({link("https://github.com/", faAward)})

#item(
  link(
    "https://github.com/",
    [ *Research project title*]
  ),
  [ *2023 First National Undergraduate Innovation Training Program* ],
  date[ 2023.06–2024.06 ]
)
Describe the project experience and its outcomes. Describe the project experience and its outcomes. Describe the project experience and its outcomes.
#linebreak()
Links to certificate images, project repositories, demo videos, etc. can be added on the right. #box({link("https://github.com/", faScroll_f)})

== #faWrench Internship Experience

#item(
  [ *Department* ],
  [*Company*],
  date[ 2023.09–2023.12 ]
)
Describe the internship content. Describe the internship content.
#linebreak()
Links to certificate images, project repositories, demo videos, etc. can be added on the right. #box({link("https://github.com/", faGithub)})

#item(
  [ *Department* ],
  [*Company*],
  date[ 2023.09–2023.12 ]
)
Describe the internship content. Describe the internship content.
#linebreak()
Links to certificate images, project repositories, demo videos, etc. can be added on the right. #box({link("https://github.com/", faYoutube)})

#item(
  [ *Department* ],
  [*Company*],
  date[ 2023.09–2023.12 ]
)
Describe the internship content. Describe the internship content.
#linebreak()
Links to certificate images, project repositories, demo videos, etc. can be added on the right. #box({link("https://github.com/", faBilibili)})

== #faBuildingColumns Campus Experience

#item(
  [*Position*],
  [*Organization*],
  date[ 2021.09–2024.03 ]
)
Describe the duties of the position. Honorary titles received, etc. #box({link("https://github.com/", faLink)})

#item(
  [*Position*],
  [*Organization*],
  date[2021.10–2024.03 ]
)Describe the duties of the position. Honorary titles received, etc. #box({link("https://github.com/",
faLink)})

#item(
  [*Position*],
  [*Organization*],
  date[ 2022.07–2023.08]
)
Describe the duties of the position. Honorary titles received, etc. #link("https://www.bilibili.com/")[(Representative work can be attached)]

#let rank(body) = block({
  set text(size: 5pt)
  body
})

== #faAward Honors and Awards

+ 2024 Chongqing Merit Student
+ 2024 Chongqing Merit Student
+ 2024 Chongqing Merit Student
+ 2024 Chongqing Merit Student
+ 2024 Chongqing Merit Student

== #faPen Skills

#grid(columns: (10%, 1fr),
  //rows: (1.45em, auto),
  row-gutter: 0.45em,
  cell[*Programming*], cell[Python, Matlab, Java, C/C++, HTML+JavaScript+CSS],
  cell[*Frameworks*], cell[PyTorch, TensorFlow, Git, MySQL],
  cell[*Tools*], cell[Photoshop, Premiere, LaTeX],
  cell[*Languages*], cell[English (CET-6 XXX | TOEFL XXX | IELTS XXX), Mandarin (XXX)],
)
https://github.com/jgm/typst-hs
https://raw.githubusercontent.com/jgm/typst-hs/main/test/typ/compiler/string-01.typ
typst
Other
// Test the `first` and `last` methods.
#test("Hello".first(), "H")
#test("Hello".last(), "o")
#test("🏳️‍🌈A🏳️‍⚧️".first(), "🏳️‍🌈")
#test("🏳️‍🌈A🏳️‍⚧️".last(), "🏳️‍⚧️")
https://github.com/MRoiban/stat-proba-f315
https://raw.githubusercontent.com/MRoiban/stat-proba-f315/main/main.typ
typst
#import "@preview/equate:0.2.0": equate
#import "@preview/showybox:2.0.1": showybox
#import "@preview/lovelace:0.3.0": *
#import "@preview/ilm:1.1.2": *

#show: ilm.with(
  title: [Probability & Statistics],
  author: "<NAME>",
  date: datetime(year: 2024, month: 07, day: 26),
  abstract: [],
  preface: [],
  figure-index: (enabled: true),
  table-index: (enabled: true),
  listing-index: (enabled: true)
)

#let showbox(title, body) = box()[
  #showybox(
    shadow: (
      offset: 3.5pt
    ),
    title: title,
    [#body]
  )
]

#let pseudocodeblock(title, code) = box[#text(
  font: "FiraCode Nerd Font Mono",
  ligatures: true
)[
  #showybox(
    shadow: (
      offset: 3.5pt
    ),
    title: title,
    [#code]
  )
]]

#let CQFD = box[$space square.filled$]

= Probability

== Exercises

=== Ex 1

*Density function*

$
f_(X)(x) = cases(
  k cos(x) space space &#[if $-pi/2<=X<=pi/2$],
  0 space space &#[else]
)
$

*Value of k for which $f_X (x)$ is a probability density function*

We know that a probability density function must integrate to 1, so we set the integral of $f_X (x)$ equal to 1:

$
integral^(pi/2)_(-pi/2) f_X (x) d x &= integral^(pi/2)_(-pi/2) k cos(x) d x = 1 \
&= k integral^(pi/2)_(-pi/2) cos(x) d x \
&= k [sin(x)]^(pi/2)_(-pi/2) \
&= k[sin(pi/2) - sin(-pi/2)] \
&= k[1 - (-1)] \
&= 2k = 1 \
k &= 1/2
$

1. $P(0<=X<=pi/4)$

$
P(0<=X<=pi/4) &= integral_0^(pi/4) 1/2 cos(x) d x \
&= 1/2 [sin(pi/4) - sin(0)] \
&= 1/2 [sqrt(2)/2] \
&= sqrt(2)/4
$

2. $P(0 <= X <= pi/4 | 0 <= X <= pi/2) = sqrt(2)/2$

#showbox(
  "Conditional Probability",
  [
    When we have a conditional probability of the form
    $
    P(X|Y) = frac(P(X and Y), P(Y))
    $
    we will take into account 3 possible cases.

    *Case 1*

    $X$ and $Y$ are independent, so the conditional probability behaves differently:
    $
    P(X|Y) &= frac(P(X and Y), P(Y)) \
    &= frac(P(X)P(Y), P(Y)) \
    &= P(X)
    $
    We will calculate the $and$ probability separately!

    *Case 2*

    One event is contained in the other, so either $X subset Y$ or $Y subset X$:

    if $X subset Y$ (then $X and Y = X$):
    $
    P(X|Y) &= P(X and Y)/P(Y) \
    &= P(X)/P(Y)
    $

    if $Y subset X$ (then $X and Y = Y$):
    $
    P(X|Y) &= P(X and Y)/P(Y) \
    &= 1
    $

    *Case 3*

    ...
] )
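The derivation above is easy to sanity-check numerically. The following standalone Python sketch (separate from the Typst source; it only re-evaluates the closed forms from the exercise) confirms $k = 1/2$, $P(0 <= X <= pi/4) = sqrt(2)\/4$ and the conditional probability $sqrt(2)\/2$:

```python
import math

# k chosen so the density integrates to 1:
# ∫_{-π/2}^{π/2} k·cos(x) dx = k·[sin(π/2) - sin(-π/2)] = 2k = 1  →  k = 1/2
k = 1 / (math.sin(math.pi / 2) - math.sin(-math.pi / 2))

# P(0 ≤ X ≤ π/4) = k·(sin(π/4) - sin(0)) = √2/4
p = k * (math.sin(math.pi / 4) - math.sin(0.0))

# P(0 ≤ X ≤ π/4 | 0 ≤ X ≤ π/2) = P(0 ≤ X ≤ π/4) / P(0 ≤ X ≤ π/2) = √2/2
p_cond = p / (k * (math.sin(math.pi / 2) - math.sin(0.0)))

print(k, p, p_cond)
```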
https://github.com/yhtq/Notes
https://raw.githubusercontent.com/yhtq/Notes/main/数学模型/论文/pkuthss-typst/changelog.typ
typst
#import "template.typ": *
#import "contributors.typ": *

#let issue(id) = link("https://github.com/lucifer1004/pkuthss-typst/issues/" + str(id), text(fill: purple)[\##id])

#set enum(indent: 0em)
#set list(indent: 0em)

#heading(level: 2, numbering: none, "2023-11-22")

+ Further improved the behavior when `alwaysstartodd=true`: inserted blank pages no longer show headers or footers.

#heading(level: 2, numbering: none, "2023-11-20")

+ Fixed a compilation failure when `blind=true`, caused by using the removed `textbf` function (#issue(14)).
+ Fixed the handling logic for `alwaysstartodd=true` (#issue(4)).
+ Fixed incorrect figure and table captions under `v0.9`.

#heading(level: 2, numbering: none, "2023-05-30")

+ Footnote numbering now restarts from 1 in each chapter.
  - Also added a footnote example.

#heading(level: 2, numbering: none, "2023-05-22")

+ Fixed `booktab` not being referenceable (#issue(12)).

#heading(level: 2, numbering: none, "2023-05-06")

+ Changed how whitespace around #strong[bold] and #emph[italic] text is handled:
  - To avoid surrounding whitespace, use `#strong[bold]` or `#emph[italic]` directly instead of `*bold*` or `_italic_`;
  - Due to parsing quirks, `_Chinese text_` is only recognized as `emph` when surrounded by spaces, but those spaces are then also rendered. The previous workaround inserted `h(0em, weak: true)`, which in turn swallowed a manually typed space whenever a `strong` or `emph` block ended with a Latin letter;
  - The current approach should accommodate more use cases.
+ Fixed #emph[italic] having no effect on Latin letters (#issue(10)).

#heading(level: 2, numbering: none, "2023-05-03")

+ Replaced the `zip` function in `helpers.typ` with the `zip` method on the `array` type introduced in Typst `v0.3.0`.

#heading(level: 2, numbering: none, "2023-04-26")

+ Adapted to Typst `v0.3.0` by renaming `calc.mod` to `calc.rem`.
+ Simplified the logic in `show ref`:
  - `element` is now provided, saving one `query` call.

#heading(level: 2, numbering: none, "2023-04-20")

+ No longer attaches links to the gap-filling `repeat([.])` in the table of contents and index pages (see #link("https://github.com/typst/typst/issues/758", text(fill: purple)[typst/typst\#758])).

#heading(level: 2, numbering: none, "2023-04-19")

+ Fixed a compilation error when a display equation appeared in an appendix without a level-1 heading (#issue(7)).

#heading(level: 2, numbering: none, "2023-04-18")

+ Fully implemented the blind-review thesis format (#issue(5)):
  - With `blind = true`, a blind-review cover page is now generated correctly.
+ Changed the logic of the `lengthceil` helper function:
  - It now uses `math.ceil` directly instead of a loop.

#heading(level: 2, numbering: none, "2023-04-16")

+ Added the font files required for compilation (#contributors.TeddyHuang-00).
+ Fixed the thesis title style (#contributors.TeddyHuang-00):
  - A title split across two lines now also correctly applies the `bold` option.
+
Added more font-size presets (#contributors.TeddyHuang-00):
  - Covering all Word font sizes from 初号 down to 小七.

#heading(level: 2, numbering: none, "2023-04-14")

+ Adapted to the next version's changes to the `query` function:
  - This makes the template incompatible with Typst `v0.2.0`. If you are using Typst `v0.2.0`, please use an earlier version of the template.

#heading(level: 2, numbering: none, "2023-04-13")

+ Fixed the abstract page not showing a page number when `alwaysstartodd` is `false`.
+ Removed extra spaces in the copyright statement.
+ Added acknowledgements and originality-declaration pages.
+ Added the `blind` option; setting it to `true` produces a blind-review thesis. Currently it only removes the acknowledgements and originality declaration and still needs further work.

#heading(level: 2, numbering: none, "2023-04-12")

+ Changed the preferred code-block font to `New Computer Modern Mono`:
  - Typst `v0.2.0` bundles the `New Computer Modern` font, though not `New Computer Modern Mono`, so the template's code-block font was adjusted accordingly. The `New Computer Modern Mono` font files are now provided in the `fonts` directory, and the previous `CMU Typewriter Text` font files were removed.

#heading(level: 2, numbering: none, "2023-04-11")

+ Changed the preferred code-block font to `CMU Typewriter Text`:
  - The `CMU Typewriter Text` font files were added to the `fonts` directory and can be used by passing `--font-path` pointing at the `fonts` directory when running Typst.

#heading(level: 2, numbering: none, "2023-04-10")

+ Set the language correctly:
  - Now `zh`; it was previously mis-set to `cn`.
+ Set the first-line indent correctly:
  - Body paragraphs now indent by #2em.
+ Fixed the extra whitespace produced around references to figures, tables, equations, etc.:
  - `h(0em, weak: true)` is now inserted around //"@web" and similar references.
+ Fixed the font of equation numbers:
  - Now set to #字体.宋体.
+ Fixed the font size of figure and table captions:
  - Now set to #字号.五号.
+ Fixed the table of contents not filtering on `outlined`:
  - Only entries with `outlined: true` are now listed.
+ Added support for booktabs-style tables:
  - Three-line tables can now be inserted with the `booktab` command.
+ Added support for captioned code blocks:
  - Code blocks can now be inserted with the `codeblock` command.
+ Added image, table, and code indices:
  - Image index: toggled with the `listofimage` option;
  - Table index: toggled with the `listoftable` option;
  - Code index: toggled with the `listofcode` option.
+ Initial support for starting chapters on odd pages:
  - Toggled with the `alwaysstartodd` option.
https://github.com/ralphmb/My-Dissertation
https://raw.githubusercontent.com/ralphmb/My-Dissertation/main/sections/introduction.typ
typst
Creative Commons Zero v1.0 Universal
In this project we hope to examine generalised linear regression models and see how they can be applied to data from the English Premier League. \
We will first look at logistic regression models, to see how match result is affected by a few different effects. Logistic regression can be used to model Bernoulli processes whose success probability depends on different factors, making it a useful tool for predicting outcomes in sport. \
Afterwards we will look at proportional hazards models and see how they can be used for the analysis of first-goal times. The proportional hazards models we hope to focus on are parametric models of exponential or Weibull type, and the Cox proportional-hazards model. \

#heading([Notes], outlined: false, numbering: none) // No numbering - Don't want this to show up in contents or change numbering of later sections.

Word count excluding bibliography and appendices is approx 11.7k.\
This document was typeset using Typst. The code used for generating page headers was written by <NAME>, and used under the terms of the Apache open source license. Their original work and the associated license can be found at the following link. #link("https://github.com/zagoli/simple-typst-thesis")\
All code pertaining to statistics was written in R, though some helper scripts were written in Python. All relevant code will be in the appendices, and some time after the Turnitin check all source files will be uploaded to my GitHub: #link("https://github.com/ralphmb")

This project received ethics approval from Coventry University; the certificate can be found at the end of the appendices.

Thank you to Prof. <NAME> for his practical support and guidance, and thank you to Deimantė Bogužaitė for her moral support and cooking. \
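The logistic model mentioned above maps a linear predictor to a win probability through the inverse-logit link. The dissertation's statistics code is written in R; the small standalone Python sketch below is only to make the link function concrete, and its covariates and coefficients are invented for illustration, not fitted values:

```python
import math

def win_probability(home: float, form_diff: float,
                    b0: float = -0.2, b1: float = 0.6, b2: float = 0.8) -> float:
    """P(win) under a logistic regression: inverse logit of the linear predictor."""
    eta = b0 + b1 * home + b2 * form_diff  # linear predictor
    return 1.0 / (1.0 + math.exp(-eta))   # inverse logit maps R onto (0, 1)

# A home side with a small form advantage
print(win_probability(1.0, 0.5))
```

Whatever the coefficients, the inverse logit guarantees a valid probability in (0, 1), which is what makes the model suitable for Bernoulli outcomes such as match results.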
https://github.com/jamesrswift/frackable
https://raw.githubusercontent.com/jamesrswift/frackable/main/tests/test1/test.typ
typst
The Unlicense
#import "/src/lib.typ" as frackable: frackable, generator
// #import "@local/frackable:0.1.0": *

#set page(
  width: auto,
  height: auto,
  margin: 1cm,
  background: none
)

#frackable(1, 2)
#frackable(1, 3)
#frackable(3, 4, whole: 9)
#frackable(9, 16)
#frackable(31, 32)
#frackable(0, "000")

#let my-frackable = generator()

#my-frackable(1, 2)
#my-frackable(1, 3)
#my-frackable(3, 4, whole: 9)
#my-frackable(9, 16)
#my-frackable(31, 32)
#my-frackable(0, "000")
https://github.com/floriandejonckheere/utu-thesis
https://raw.githubusercontent.com/floriandejonckheere/utu-thesis/master/thesis/chapters/06-automated-modularization/03-algorithms.typ
typst
#import "@preview/acrostiche:0.3.1": *

#import "/helpers.typ": *

=== Algorithms <slr_algorithms>

Microservice candidate identification is a problem that is commonly solved by representing the architecture as a directed weighted graph. The graph exposes the relationships between the elements of the application. The vertices of the graph represent the classes, modules, or components, and the edges the function or method calls between them. Often the edges are weighted, representing the frequency or cost of the calls. Based on the information contained within, the graph is then divided into several clusters, each encapsulating a microservice candidate. The goal is to find a partitioning of the graph that minimizes the number of edges between clusters and maximizes the number of edges within clusters.

We categorized #total(algorithms) algorithms in the literature into three main classes: clustering algorithms, evolutionary algorithms, and graph algorithms. Algorithms that do not fit into one of these categories, as well as custom algorithms, are grouped in a single _Other_ category.

#grid(
  columns: (57%, 43%),
  gutter: 1em,
  [
    #figure(
      include("/figures/06-automated-modularization/algorithms.typ"),
      caption: [SDLC algorithm categories]
    ) <slr_algorithms_chart>
  ],
  [
    The majority of the algorithms identified in the literature are clustering algorithms (#count(algorithms, "clustering")\; #percentage(algorithms, "clustering")), followed by evolutionary algorithms (#count(algorithms, "evolutionary")\; #percentage(algorithms, "evolutionary")) and graph algorithms (#count(algorithms, "graph")\; #percentage(algorithms, "graph")). The remaining algorithms are grouped in the _Other algorithms_ category (#count(algorithms, "other")\; #percentage(algorithms, "other")).
  ]
)

@slr_algorithms_chart presents a visual representation of the distribution of the algorithms in the identified categories.
#pagebreak()

The identified classes of microservice candidate identification algorithms and the relevant publications are listed in @slr_algorithms_table.

#figure(
  table(
    columns: (20%, auto, 35%),
    inset: 10pt,
    stroke: (x: none),
    align: (left, left, left),
    [*Type*], [*Example algorithms*], [*Publications*],

    "Clustering algorithms",
    "K-Means, DBSCAN, Hierarchical Agglomerative Clustering, Affinity Propagation",
    [
      #algorithms.at("clustering").map(p => ref(label(p))).join()
    ],

    "Evolutionary algorithms",
    "NSGA-II, NSGA-III",
    [
      #algorithms.at("evolutionary").map(p => ref(label(p))).join()
    ],

    "Graph algorithms",
    "Kruskal, Louvain algorithm, Leiden algorithm, Label Propagation",
    [
      #algorithms.at("graph").map(p => ref(label(p))).join()
    ],

    "Other algorithms",
    "Linear optimization, custom algorithms",
    [
      #algorithms.at("other").map(p => ref(label(p))).join()
    ],
  ),
  caption: [Microservice candidate identification algorithms]
) <slr_algorithms_table>

==== Clustering algorithms

The first class of algorithms identified in the literature is clustering algorithms. Clustering algorithms are unsupervised machine learning algorithms that aim to find an optimal partitioning of the graph. Typical clustering algorithms used for this purpose are K-Means clustering and agglomerative clustering.

Examples of publications that used K-Means clustering to identify microservice candidates are #cite_full(<saidi_etal_2023>), #cite_full(<wu_zhang_2022>), #cite_full(<romani_etal_2022>), and #cite_full(<hao_etal_2023>).

#cite_full(<al_debagy_martinek_2020>) used Affinity Propagation @frey_dueck_2007 to cluster vector representations of operation names in a software system. Affinity Propagation is a clustering algorithm that identifies exemplars in the data, which are used to represent the clusters @frey_dueck_2007.

Hierarchical clustering approaches are used in various publications @selmadji_etal_2020, @zaragoza_etal_2022, @santos_silva_2022, @ma_etal_2022, @lourenco_silva_2023, @bandara_perera_2020.
#cite_full(<lourenco_silva_2023>) used similarity between domain entity accesses and the development history of source code files as a guiding measure for the clustering algorithm, while #cite_full(<zaragoza_etal_2022>) used structural and data cohesion of microservices.

#cite_full(<daoud_etal_2020>) extended the Hierarchical Agglomerative Clustering (HAC) algorithm @murtagh_legendre_2014 with a collaborative approach, where the clustering is performed by multiple homogeneous clustering nodes, each responsible for a subset of the data. They named the algorithm collaborative Hierarchical Agglomerative Clustering with Collaborative Nodes (cHAC).

#cite_full(<selmadji_etal_2020>) proposed two possible algorithms for microservice candidate identification: a hierarchical clustering algorithm, and a clustering algorithm based on gravity centers.

#cite_full(<sellami_etal_2022>) used the Density-Based Spatial Clustering of Applications with Noise (DBSCAN) algorithm @ester_etal_1996 to identify microservice candidates.

==== Evolutionary algorithms

Evolutionary algorithms are the second class of algorithms identified in the literature. Evolutionary algorithms, and in particular genetic algorithms, are algorithms aimed at solving optimization problems by borrowing techniques from natural selection and genetics @vikhar_2016. These algorithms typically operate iteratively, selecting the best solutions from a population at each iteration (also called a generation), and then combining the selected solutions to create new combinations for the next generation. The process is then repeated until certain criteria are met, for example a maximum number of generations, convergence of the population, or fulfillment of a quality indicator.
Examples of publications that used the Non-dominated Sorting Genetic Algorithm II (NSGA-II) as a multi-objective optimization algorithm to identify microservice candidates are #cite_full(<zhou_xiong_2022>), #cite_full(<kinoshita_kanuka_2022>), #cite_full(<zhang_etal_2020>), #cite_full(<jin_etal_2021>), and #cite_full(<li_etal_2023>). #cite_full(<carvalho_etal_2020>) used the next generation of NSGA, NSGA-III, in order to find a solution for the problem.

#cite_full(<amiri_2018>) relied on a genetic algorithm using Turbo-MQ @mitchell_etal_2001 as fitness function to measure the quality of the clusters.

==== Graph algorithms

Another common approach to identify microservice candidates is to use classical algorithms from graph theory. For example, #cite_full(<mazlami_etal_2017>) and #cite_full(<yang_etal_2022>) used Kruskal's algorithm @kleinberg_tardos_2006 to partition the graph into connected clusters. Kruskal's algorithm is a greedy algorithm that finds the minimum spanning forest of an undirected weighted graph @kleinberg_tardos_2006.

#cite_full(<filippone_etal_2023>) applied the Louvain community detection algorithm @blondel_etal_2008 to obtain the granularity of the microservices, and high-cohesive communities of vertices. The Louvain algorithm is a greedy optimization algorithm that aims to extract non-overlapping communities from a graph, using the modularity value as optimization target @blondel_etal_2008. #cite_full(<hasan_etal_2023>) used the Leiden algorithm @traag_etal_2019, an improvement of the Louvain algorithm that uses a refinement step to improve the quality of the communities.

#cite_full(<cao_zhang_2022>) used both the Leiden algorithm and the hierarchical clustering algorithm to identify microservice candidates. First, the Leiden algorithm is used to detect cohesive communities in static and dynamic analysis data, and then the hierarchical clustering algorithm is used to merge the communities into microservice candidates based on a call relation matrix.
#cite_full(<nitin_etal_2022>) used Context-sensitive Label Propagation (CARGO), an algorithm built on the principles of the Label Propagation algorithm @zhu_ghahramani_2003. CARGO is a community detection algorithm that is able to leverage the context embedded in the dependency graph to increase the cohesiveness of the communities.

==== Other algorithms

Other publications using algorithms that do not fit into one of the previous categories are grouped in a single category. These algorithms are often custom algorithms, or algorithms from other domains that are adapted to the problem of microservice candidate identification.

Some authors define one or multiple objectives to optimize, then use a single- or multi-objective optimization algorithm to find a Pareto-optimal solution to the problem. For example, the authors of #cite_full(<quattrocchi_etal_2024>) incorporated a Mixed Integer Linear Programming (MILP) solver in their solution. The MILP solver is used to find a solution for an optimization problem that decomposes the software system into microservices, based on the placement of operations and data entities according to the users' needs. #cite_full(<filippone_etal_2021>) used a linear optimization algorithm to solve a combinatorial optimization problem.

The approach taken by #cite_full(<kamimura_etal_2018>) is to use a custom clustering algorithm named SArF @kobayashi_etal_2012, which aims at identifying software subsystems without the need for human intervention. #cite_full(<escobar_etal_2016>) also used a custom clustering algorithm, detecting optimal microservice candidates based on a meta-model of the class hierarchy.

#cite_full(<agarwal_etal_2021>) proposed an algorithm based on seed expansion. The seed classes are detected using formal concept analysis. Then, using a seed expansion algorithm, clusters are created around the seeds by pulling in related code artefacts based on the implementation structure of the software system @agarwal_etal_2021.
#cite_full(<eyitemi_reiff_marganiec_2020>) used a rule-based approach to microservice candidate identification. The six proposed rules are based on the principles of high cohesion and low coupling and, following a step-based protocol, can be used to manually decompose a monolithic system into microservices.
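To illustrate the graph-based idea behind the approaches above, the following Python sketch partitions a toy weighted dependency graph with a Kruskal-style union-find, merging the most strongly coupled pairs first until $k$ clusters remain. This is one simple reading of MST-style clustering, not the method of any cited publication; the class names and coupling weights are invented, and real approaches derive edge weights from coupling metrics:

```python
def kruskal_clusters(edges, nodes, k):
    """Merge components in decreasing edge-weight order until k remain."""
    parent = {n: n for n in nodes}

    def find(x):
        # Union-find lookup with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    components = len(nodes)
    # Heavier weight = stronger coupling, so merge the strongest pairs first
    for w, a, b in sorted(edges, reverse=True):
        if components == k:
            break
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[ra] = rb
            components -= 1

    clusters = {}
    for n in nodes:
        clusters.setdefault(find(n), set()).add(n)
    return list(clusters.values())

# Hypothetical classes and call weights (illustrative only)
edges = [(5, "Order", "Invoice"), (4, "User", "Auth"),
         (1, "Order", "User"), (2, "Invoice", "Auth")]
nodes = ["Order", "Invoice", "User", "Auth"]
print(kruskal_clusters(edges, nodes, 2))
```

With these toy weights the graph splits into a billing-oriented and an identity-oriented cluster, which matches the stated goal of maximizing intra-cluster edges and minimizing inter-cluster edges.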
https://github.com/typst/packages
https://raw.githubusercontent.com/typst/packages/main/packages/preview/cetz/0.2.0/src/draw/shapes.typ
typst
Apache License 2.0
#let typst-angle = angle
#let typst-rotate = rotate

#import "/src/coordinate.typ"
#import "/src/drawable.typ"
#import "/src/styles.typ"
#import "/src/path-util.typ"
#import "/src/util.typ"
#import "/src/vector.typ"
#import "/src/matrix.typ"
#import "/src/process.typ"
#import "/src/bezier.typ" as bezier_
#import "/src/hobby.typ" as hobby_
#import "/src/anchor.typ" as anchor_
#import "/src/mark.typ" as mark_
#import "/src/mark-shapes.typ" as mark-shapes_
#import "/src/aabb.typ"

#import "transformations.typ": *
#import "styling.typ": *
#import "grouping.typ": *

/// Draws a circle or ellipse.
///
/// #example(```
/// circle((0,0))
/// // Draws an ellipse
/// circle((0,-2), radius: (0.75, 0.5))
/// ```)
///
/// = parameters
///
/// = Styling
/// *Root:* `circle`
/// == Keys
///   #show-parameter-block("radius", ("number", "array"), [A number that defines the size of the circle's radius. Can also be set to a tuple of two numbers to define the radii of an ellipse, the first number is the `x` radius and the second is the `y` radius.], default: 1)
///
/// = Anchors
///   Supports border and path anchors. The "center" anchor is the default.
///
/// - position (coordinate): The position to place the circle on.
/// - name (none,string):
/// - anchor (none, string):
/// - ..style (style):
#let circle(position, name: none, anchor: none, ..style) = {
  // No extra positional arguments from the style sink
  assert.eq(
    style.pos(),
    (),
    message: "Unexpected positional arguments: " + repr(style.pos()),
  )
  let style = style.named()

  (ctx => {
    let (ctx, pos) = coordinate.resolve(ctx, position)
    let style = styles.resolve(ctx.style, merge: style, root: "circle")
    let (rx, ry) = util.resolve-radius(style.radius).map(util.resolve-number.with(ctx))
    let (cx, cy, cz) = pos
    let (ox, oy) = (calc.cos(45deg) * rx, calc.sin(45deg) * ry)

    let drawables = (drawable.ellipse(
      cx, cy, cz,
      rx, ry,
      fill: style.fill,
      stroke: style.stroke),)

    let (transform, anchors) = anchor_.setup(
      auto,
      (),
      default: "center",
      name: name,
      offset-anchor: anchor,
      transform: ctx.transform,
      border-anchors: true,
      path-anchors: true,
      center: (cx, cy, cz),
      radii: (rx*2, ry*2),
      path: drawables.first(),
    )

    return (
      ctx: ctx,
      name: name,
      anchors: anchors,
      drawables: drawable.apply-transform(transform, drawables),
    )
  },)
}

/// Draws a circle through three coordinates.
///
/// #example(```
/// let (a, b, c) = ((0,0), (2,-.5), (1,1))
/// line(a, b, c, close: true, stroke: gray)
/// circle-through(a, b, c, name: "c")
/// circle("c.center", radius: .05, fill: red)
/// ```)
///
/// = parameters
///
/// = Styling
/// *Root:* `circle`
///
/// `circle-through` has the same styling as @@circle() except for `radius` as the circle's radius is calculated by the given coordinates.
///
/// = Anchors
///   Supports the same anchors as `circle` as well as:
///   / a: Coordinate a
///   / b: Coordinate b
///   / c: Coordinate c
///
/// - a (coordinate): Coordinate a.
/// - b (coordinate): Coordinate b.
/// - c (coordinate): Coordinate c.
/// - name (none,string):
/// - anchor (none,string):
/// - ..style (style):
#let circle-through(a, b, c, name: none, anchor: none, ..style) = {
  assert.eq(style.pos(), (),
    message: "Unexpected positional arguments: " + repr(style.pos()))
  style = style.named()

  (a, b, c).map(coordinate.resolve-system)

  return (ctx => {
    let (ctx, a, b, c) = coordinate.resolve(ctx, a, b, c)

    let center = util.calculate-circle-center-3pt(a, b, c)

    let style = styles.resolve(ctx.style, merge: style, root: "circle")
    let (cx, cy, cz) = center
    let r = vector.dist(a, (cx, cy))
    let (ox, oy) = (calc.cos(45deg) * r, calc.sin(45deg) * r)

    let drawables = (drawable.ellipse(
      cx, cy, 0,
      r, r,
      fill: style.fill,
      stroke: style.stroke),)

    let (transform, anchors) = anchor_.setup(
      auto,
      (),
      default: "center",
      name: name,
      offset-anchor: anchor,
      transform: ctx.transform,
      border-anchors: true,
      path-anchors: true,
      center: center,
      radii: (r*2, r*2),
      path: drawables.first(),
    )

    return (
      ctx: ctx,
      name: name,
      anchors: anchors,
      drawables: drawable.apply-transform(
        transform,
        drawables
      )
    )
  },)
}

/// Draws a circular segment.
///
/// #example(```
/// arc((0,0), start: 45deg, stop: 135deg)
/// arc((0,-0.5), start: 45deg, delta: 90deg, mode: "CLOSE")
/// arc((0,-1), stop: 135deg, delta: 90deg, mode: "PIE")
/// ```)
///
/// Note that two of the three angle arguments (`start`, `stop` and `delta`) must be set.
/// The current position `()` gets updated to the arc's end coordinate (anchor `arc-end`).
///
/// == parameters
///
/// = Styling
/// *Root:* `arc`\
/// == Keys
///   #show-parameter-block("radius", ("number", "array"), [The radius of the arc.
///   An elliptical arc can be created by passing a tuple of numbers where the first element is the x radius and the second element is the y radius.], default: 1)
///   #show-parameter-block("mode", ("string",), [The options are: "OPEN" no additional lines are drawn so just the arc is shown; "CLOSE" a line is drawn from the start to the end of the arc creating a circular segment; "PIE" lines are drawn from the start and end of the arc to the origin creating a circular sector.], default: "OPEN")
///   #show-parameter-block("update-position", ("bool",), [Update the current canvas position to the arc's end point (anchor `"arc-end"`). This overrides the default of `true`, that allows chaining of (arc) elements.], default: true)
///
/// = Anchors
///   Supports border and path anchors.
///   / center: The center of the arc, this is the default anchor.
///   / arc-center: The midpoint of the arc's curve.
///   / chord-center: Center of the chord of the arc drawn between the start and end point.
///   / origin: The origin of the arc's circle.
///   / arc-start: The position at which the arc's curve starts.
///   / arc-end: The position of the arc's curve end.
///
/// - position (coordinate): Position to place the arc at.
/// - start (auto,angle): The angle at which the arc should start. Remember that `0deg` points directly towards the right and `90deg` points up.
/// - stop (auto,angle): The angle at which the arc should stop.
/// - delta (auto,angle): The change in angle away from start or stop.
/// - name (none,string): /// - anchor (none, string): /// - ..style (style): #let arc( position, start: auto, stop: auto, delta: auto, name: none, anchor: none, ..style, ) = { // Start, stop, delta check assert( (start, stop, delta).filter(it => { it == auto }).len() == 1, message: "Exactly two of three options start, stop and delta should be defined.", ) // No extra positional arguments from the style sink assert.eq( style.pos(), (), message: "Unexpected positional arguments: " + repr(style.pos()), ) let style = style.named() // Coordinate check let t = coordinate.resolve-system(position) let start-angle = if start == auto { stop - delta } else { start } let stop-angle = if stop == auto { start + delta } else { stop } // Border angles can break if the angle is 0. assert.ne(start-angle, stop-angle, message: "Angle must be greater than 0deg") return (ctx => { let style = styles.resolve(ctx.style, merge: style, root: "arc") assert(style.mode in ("OPEN", "PIE", "CLOSE")) let (ctx, arc-start) = coordinate.resolve(ctx, position) let (rx, ry) = util.resolve-radius(style.radius).map(util.resolve-number.with(ctx)) let (x, y, z) = arc-start let drawables = (drawable.arc( ..arc-start, start-angle, stop-angle, rx, ry, stroke: style.stroke, fill: style.fill, mode: style.mode),) let sector-center = ( x - rx * calc.cos(start-angle), y - ry * calc.sin(start-angle), z ) let arc-end = ( sector-center.first() + rx * calc.cos(stop-angle), sector-center.at(1) + ry * calc.sin(stop-angle), z ) let chord-center = vector.lerp(arc-start, arc-end, 0.5) let arc-center = ( sector-center.first() + rx * calc.cos((stop-angle + start-angle)/2), sector-center.at(1) + ry * calc.sin((stop-angle + start-angle)/2), z ) // Set the last position to arc-end if style.update-position { ctx.prev.pt = arc-end } // Center is calculated based on observations of tikz's circular sector and semi circle shapes. 
let center = if style.mode != "CLOSE" {
      // A circular sector's center anchor is placed half way between the
      // sector-center and arc-center when the angle is 180deg. At 60deg it is
      // placed 1/3 of the way between; this is mirrored at 300deg.
      vector.lerp(
        arc-center,
        sector-center,
        if (stop-angle + start-angle) > 180deg { (stop-angle + start-angle) } else { (stop-angle + start-angle) + 180deg } / 720deg
      )
    } else {
      // A semi circle's center anchor is placed half way between the
      // sector-center and arc-center, so it is always `center` when the arc
      // is closed. Otherwise the point from which compass anchors are
      // calculated would lie outside the lines.
      vector.lerp(
        arc-center,
        chord-center,
        0.5
      )
    }

    // Compass anchors are placed on the shape's border in TikZ, so a
    // prototype version is set up for use here.
    let border = anchor_.border.with(
      center,
      2*rx, 2*ry,
      drawables + if style.mode == "OPEN" {
        (drawable.path((
          path-util.line-segment((arc-start, sector-center, arc-end)),
        )),)
      }
    )

    let (transform, anchors) = anchor_.setup(
      anchor => {
        let pt = if type(anchor) == str {(
          arc-start: arc-start,
          origin: sector-center,
          arc-end: arc-end,
          arc-center: arc-center,
          chord-center: chord-center,
          center: center,
        ).at(anchor, default: none)}
        if pt != none {
          return pt
        }
        if style.mode == "OPEN" {
          if type(anchor) == str and anchor in anchor_.compass-directions {
            // Compass anchors are placed on the shape's border in
            // TikZ, so a prototype version is set up for use here.
            let drawables = drawables.first()
            drawables.segments.push(path-util.line-segment(
              (path-util.segment-start(drawables.segments.first()),
               sector-center,
               path-util.segment-end(drawables.segments.last()))))
            return anchor_.calculate-border-anchor(
              anchor, center, 2 * rx, 2 * ry, drawables)
          } else {
            return anchor_.calculate-path-anchor(
              anchor, drawables.first())
          }
        } else {
          let pt = anchor_.calculate-border-anchor(
            anchor, center, 2 * rx, 2 * ry, drawables.first())
          if pt != none {
            return pt
          }
          return anchor_.calculate-path-anchor(
            anchor,
drawables.first()) } }, ("arc-center", "chord-center", "origin", "arc-start", "arc-end") + anchor_.closed-shape-names, default: "arc-start", name: name, offset-anchor: anchor, transform: ctx.transform, ) if mark_.check-mark(style.mark) { let (marks, segments) = mark_.place-marks-along-path(ctx, style.mark, drawables.first().segments) drawables.first().segments = segments drawables += marks } return ( ctx: ctx, name: name, anchors: anchors, drawables: drawable.apply-transform( transform, drawables, ) ) },) } /// Draws an arc that passes through three points a, b and c. /// /// Note that all three points must not lie on a straight line, otherwise /// the function fails. /// /// #example(``` /// arc-through((0,1), (1,1), (1,0)) /// ```) /// == parameters /// /// = Styling /// *Root*: `arc` /// /// Uses the same styling as @@arc() /// /// = Anchors /// For anchors see @@arc(). /// /// - a (coordinate): Start position of the arc /// - b (coordinate): Position the arc passes through /// - c (coordinate): End position of the arc /// - name (none, string): /// - ..style (style): #let arc-through( a, b, c, name: none, ..style, ) = get-ctx(ctx => { let (ctx, a, b, c) = coordinate.resolve(ctx, a, b, c) assert(a.at(2) == b.at(2) and b.at(2) == c.at(2), message: "The z coordinate of all points must be equal, but is: " + repr((a, b, c).map(v => v.at(2)))) // Calculate the circle center from three points or fails if all // three points are on one straight line. let center = util.calculate-circle-center-3pt(a, b, c) let radius = vector.dist(center, a) // Find the start and inner angle between a-center-c let start = vector.angle2(center, a) let delta = vector.angle(a, center, c) // Returns a negative number if pt is left of the line a-b, // if pt is right to a-b, a positive number is returned, // otherwise zero. let side-on-line(a, b, pt) = { let (x1, y1, ..) = a let (x2, y2, ..) = b let (x, y, ..) 
= pt
    return (x - x1) * (y2 - y1) - (y - y1) * (x2 - x1)
  }

  // Center & b      b is left,
  // are left        center is not
  //
  //    +-b-+         +-b-+
  //   /     \       /     \
  //  |   C   |   --a-------c--
  //   \     /       \  C  /
  // ---a---c---      +---+
  //
  // If b and C are on the same side of a-c, the arc's angle is >= 180deg,
  // otherwise the angle is < 180deg.
  let center-is-left = side-on-line(a, c, center) < 0
  let b-is-left = side-on-line(a, c, b) < 0

  // If the center and point b are on the same side of a-c,
  // the arc's delta must be > 180deg. Note that delta is
  // the inner angle between a-center-c, so we need to calculate
  // the outer angle by subtracting from 360deg.
  if center-is-left == b-is-left {
    delta = 360deg - delta
  }

  // If b is left of a-c, swap a-c to c-a by using a negative delta
  if b-is-left {
    delta *= -1
  }

  return arc(a, start: start, delta: delta, radius: radius, anchor: "arc-start",
    name: name, ..style)
})

/// Draws a single mark pointing towards a target coordinate.
///
/// #example(```
/// mark((0,0), (1,0), symbol: ">", fill: black)
/// mark((0,0), (1,1), symbol: "stealth", scale: 3, fill: black)
/// ```)
///
/// = parameters
///
/// = Styling
/// *Root:* `mark`\
/// You can directly use the styling from @styling-mark.
///
/// - from (coordinate): The position to place the mark.
/// - to (coordinate): The position the mark should point towards.
/// - ..style (style):
#let mark(from, to, ..style) = {
  assert.eq(
    style.pos(), (),
    message: "Unexpected positional arguments: " + repr(style.pos()),
  )
  let style = style.named()
  (from, to).map(coordinate.resolve-system)

  return (ctx => {
    let (ctx, ..pts) = coordinate.resolve(ctx, from, to)
    let style = styles.resolve(ctx.style, merge: style, root: "mark")
    if style.end == none {
      style.end = style.symbol
    }
    style.start = none
    style.symbol = none

    let segments = (path-util.line-segment(pts),)
    let (drawables, _) = mark_.place-marks-along-path(ctx, style, segments)
    return (ctx: ctx, drawables: drawables)
  },)
}

/// Draws a line; more than two points can be given to create a line-strip.
///
/// #example(```
/// line((-1.5, 0), (1.5, 0))
/// line((0, -1.5), (0, 1.5))
/// line((-1, -1), (-0.5, 0.5), (0.5, 0.5), (1, -1), close: true)
/// ```)
///
/// If the first or last coordinate is given as the name of an element that
/// has a `"default"` anchor, the intersection of that element's border
/// and a line from the first or last two coordinates given is used as coordinate.
/// This is useful to span a line between the borders of two elements.
///
/// #example(```
/// circle((1,2), radius: .5, name: "a")
/// rect((2,1), (rel: (1,1)), name: "b")
/// line("a", "b")
/// ```)
///
/// = parameters
///
/// = Styling
/// *Root:* `line`
///
/// Supports mark styling.
///
/// = Anchors
/// Supports path anchors.
///
/// - ..pts-style (coordinates, style): Two or more positional coordinates to draw lines between. Accepts style key-value pairs.
/// - close (bool): If true, the line-strip gets closed to form a polygon.
/// - name (none,string):
#let line(..pts-style, close: false, name: none) = {
  // Extra positional arguments from the pts-style sink are interpreted as coordinates.
let pts = pts-style.pos()
  let style = pts-style.named()

  assert(pts.len() >= 2, message: "Line must have a minimum of two points")

  // Coordinate check
  let pts-system = pts.map(coordinate.resolve-system)

  // Find the intersection between line a-b next to b.
  // If no intersection could be found, return a.
  let element-line-intersection(ctx, elem, a, b) = {
    // Vectors a and b are not transformed yet, but the vectors of the
    // drawable are.
    let (ta, tb) = util.apply-transform(ctx.transform, a, b)

    let pts = ()
    for drawable in elem.at("drawables", default: ()) {
      pts += intersection.line-path(ta, tb, drawable)
    }

    return if pts == () {
      a
    } else {
      // Find the nearest point
      let pt = util.sort-points-by-distance(tb, pts).first()

      // Reverse the transformation
      return util.revert-transform(ctx.transform, pt)
    }
  }

  return (ctx => {
    let first-elem = pts.first()
    let last-elem = pts.last()
    let (ctx, ..pts) = coordinate.resolve(ctx, ..pts)

    // If the first/last coordinate is an element, test for the intersection
    // of that element and a line through the first/last two coordinates of
    // this line strip.
if pts-system.first() == "element" {
      let elem = ctx.nodes.at(first-elem)
      pts.first() = element-line-intersection(ctx, elem, ..pts.slice(0, 2))
    }
    if pts-system.last() == "element" {
      let elem = ctx.nodes.at(last-elem)
      pts.last() = element-line-intersection(ctx, elem, ..pts.slice(-2).rev())
    }

    let style = styles.resolve(ctx.style, merge: style, root: "line")
    let drawables = drawable.path(
      (path-util.line-segment(pts),),
      fill: style.fill,
      stroke: style.stroke,
      close: close)

    // Get bounds
    let (transform, anchors) = anchor_.setup(
      auto,
      (),
      name: name,
      transform: ctx.transform,
      path-anchors: true,
      path: drawables
    )

    // Place marks and adjust segments
    if mark_.check-mark(style.mark) {
      let (marks, segments) = mark_.place-marks-along-path(ctx, style.mark, drawables.segments)
      drawables.segments = segments
      drawables = (drawables,) + marks
    }

    return (
      ctx: ctx,
      name: name,
      anchors: anchors,
      drawables: drawable.apply-transform(transform, drawables)
    )
  },)
}

/// Draws a grid between two coordinates.
///
/// #example(```
/// // Draw a grid
/// grid((0,0), (2,2))
///
/// // Draw a smaller blue grid
/// grid((1,1), (2,2), stroke: blue, step: .25)
/// ```)
///
/// = parameters
///
/// = Styling
/// *Root:* `grid`
/// == Keys
/// #show-parameter-block("step", ("number", "tuple", "dictionary"), [
///   Distance between grid lines. A distance of $1$ means to draw a
///   grid line every $1$ length units in x- and y-direction.
///   If given a dictionary with `x` and `y` keys or a tuple,
///   the step is set per axis.
/// ], default: 1)
/// #show-parameter-block("help-lines", ("bool",), [
///   If true, force the stroke style to `gray + 0.2pt`.
/// ], default: false)
///
/// = Anchors
/// Supports border anchors.
/// /// - from (coordinate): The top left of the grid /// - to (coordinate): The bottom right of the grid /// - name (none,string): /// - ..style (style): #let grid(from, to, name: none, ..style) = { (from, to).map(coordinate.resolve-system) assert.eq(style.pos(), (), message: "Unexpected positional arguments: " + repr(style.pos())) style = style.named() return (ctx => { let (ctx, from, to) = coordinate.resolve(ctx, from, to) (from, to) = ( (calc.min(from.at(0), to.at(0)), calc.min(from.at(1), to.at(1))), (calc.max(from.at(0), to.at(0)), calc.max(from.at(1), to.at(1))) ) let style = styles.resolve(ctx.style, merge: style, root: "grid", base: ( step: 1, stroke: auto, help-lines: false, )) if style.help-lines { style.stroke = 0.2pt + gray } let (x-step, y-step) = if type(style.step) == dictionary { (style.step.at("x", default: 1), style.step.at("y", default: 1)) } else if type(style.step) == array { style.step } else { (style.step, style.step) }.map(util.resolve-number.with(ctx)) let drawables = { if x-step != 0 { range(int((to.at(0) - from.at(0)) / x-step)+1).map(x => { x *= x-step x += from.at(0) drawable.path( path-util.line-segment(((x, from.at(1)), (x, to.at(1)))), stroke: style.stroke ) }) } else { () } if y-step != 0 { range(int((to.at(1) - from.at(1)) / y-step)+1).map(y => { y *= y-step y += from.at(1) drawable.path( path-util.line-segment(((from.at(0), y), (to.at(0), y))), stroke: style.stroke ) }) } else { () } } let center = ((from.first() + to.first()) / 2, (from.last() + to.last()) / 2) let (transform, anchors) = anchor_.setup( anchor => { if type(anchor) == str { ( north: (center.first(), to.last()), north-east: to, east: (to.first(), center.last()), south-east: (to.first(), from.last()), south: (center.first(), from.last()), south-west: from, west: (from.first(), center.last()), north-west: (from.first(), to.last()), center: center, ).at(anchor) (0,) } }, ( "north", "north-east", "east", "south-east", "south", "south-west", "west", "north-west", 
"center" ), name: name, transform: ctx.transform ) return ( ctx: ctx, name: name, anchors: anchors, drawables: drawable.apply-transform( transform, drawables ) ) },) } /// Positions Typst content in the canvas. Note that the content itself is not transformed only its position is. /// /// #example(``` /// content((0,0), [Hello World!]) /// ```) /// To put text on a line you can let the function calculate the angle between its position and a second coordinate by passing it to `angle`: /// /// #example(``` /// line((0, 0), (3, 1), name: "line") /// content( /// ("line.start", 0.5, "line.end"), /// angle: "line.end", /// padding: .1, /// anchor: "south", /// [Text on a line] /// ) /// ```) /// /// #example(``` /// // Place content in a rect between two coordinates /// content((0, 0), (2, 2), box(par(justify: false)[This is a long text.], stroke: 1pt, width: 100%, height: 100%, inset: 1em)) /// ```) /// /// /// = parameters /// = Styling /// *Root:* `content` /// == Keys /// #show-parameter-block("padding", ("number", "dictionary"), default: 0, [Sets the spacing around content. Can be a single number to set padding on all sides or a dictionary to specify each side specifically. The dictionary follows Typst's `pad` function: https://typst.app/docs/reference/layout/pad/]) /// #show-parameter-block("frame", ("string", "none"), default: none, [Sets the frame style. Can be `none`, "rect" or "circle" and inherits the `stroke` and `fill` style.]) /// /// = Anchors /// Supports border anchors. /// /// - ..args-style (coordinate, content, style): When one coordinate is given as a positional argument, the content will be placed at that position. When two coordinates are given as positional arguments, the content will be placed inside a rectangle between the two positions. All named arguments are styling and any additional positional arguments will panic. /// - angle (angle,coordinate): Rotates the content by the given angle. 
A coordinate can be given to rotate the content by the angle between it and the first coordinate given in `args`. This effectively points the right hand side of the content towards the coordinate. This currently exists because Typst's rotate function does not change the width and height of content. /// - anchor (none, string): /// - name (none, string): #let content( ..args-style, angle: 0deg, anchor: none, name: none, ) = { let (args, style) = (args-style.pos(), args-style.named()) let (a, b, body) = if args.len() == 2 { args.insert(1, auto) args } else if args.len() == 3 { args } else { panic("Expected 2 or 3 positional arguments, got " + str(args.len())) } coordinate.resolve-system(a) if b != auto { coordinate.resolve-system(b) } if type(angle) != typst-angle { coordinate.resolve-system(angle) } return (ctx => { let style = styles.resolve(ctx.style, merge: style, root: "content") let padding = util.as-padding-dict(style.padding) for (k, v) in padding { padding.insert(k, util.resolve-number(ctx, v)) } let (ctx, a) = coordinate.resolve(ctx, a) let b = b let auto-size = b == auto if not auto-size { (ctx, b) = coordinate.resolve(ctx, b) } let angle = if type(angle) != typst-angle { let c (ctx, c) = coordinate.resolve(ctx, angle) vector.angle2(a, c) } else { angle } // Typst's `rotate` function is clockwise relative to x-axis, which is backwards from us angle = angle * -1 let (width, height, ..) = if auto-size { util.measure(ctx, body) } else { vector.sub(b, a) } width = (calc.abs(width) + padding.at("left", default: 0) + padding.at("right", default: 0)) height = (calc.abs(height) + padding.at("top", default: 0) + padding.at("bottom", default: 0)) let anchors = { let w = width/2 let h = height/2 let center = if auto-size { a } else { vector.add(a, (w, -h)) } // Only the center anchor gets transformed. All other anchors // must be calculated relative to the transformed center! 
center = matrix.mul-vec(ctx.transform, vector.as-vec(center, init: (0,0,0,1))) let north = (calc.sin(angle)*h, -calc.cos(angle)*h,0) let east = (calc.cos(-angle)*w, -calc.sin(-angle)*w,0) let south = vector.scale(north, -1) let west = vector.scale(east, -1) ( center: center, north: vector.add(center, north), north-east: vector.add(center, vector.add(north, east)), east: vector.add(center, east), south-east: vector.add(center, vector.add(south, east)), south: vector.add(center, south), south-west: vector.add(center, vector.add(south, west)), west: vector.add(center, west), north-west: vector.add(center, vector.add(north, west)), ) } let border = if style.frame in (none, "rect") { drawable.path( path-util.line-segment(( anchors.north-west, anchors.north-east, anchors.south-east, anchors.south-west )), close: true, stroke: style.stroke, fill: style.fill) } else if style.frame == "circle" { let (x, y, z) = util.calculate-circle-center-3pt(anchors.north-west, anchors.south-west, anchors.south-east) let r = vector.dist((x, y, z), anchors.north-west) drawable.ellipse( x, y, z, r, r, stroke: style.stroke, fill: style.fill ) } let (aabb-width, aabb-height, ..) 
= aabb.size(aabb.aabb( (anchors.north-west, anchors.north-east, anchors.south-west, anchors.south-east))) let corners = (anchors.north-east, anchors.north-west, anchors.south-west, anchors.south-east) let drawables = () if style.frame != none { drawables.push(border) } drawables.push( drawable.content( anchors.center, aabb-width, aabb-height, border.segments, typst-rotate(angle, block( width: width * ctx.length, height: height * ctx.length, inset: ( top: padding.at("top", default: 0) * ctx.length, left: padding.at("left", default: 0) * ctx.length, bottom: padding.at("bottom", default: 0) * ctx.length, right: padding.at("right", default: 0) * ctx.length, ), body ) ) ) ) let (transform, anchors) = anchor_.setup( anchor => { if type(anchor) == str { anchors.at(anchor) } }, anchors.keys(), default: if auto-size { "center" } else { "north-west" }, offset-anchor: anchor, transform: none, // Content does not get transformed, see the calculation of anchors. name: name, ) return ( ctx: ctx, name: name, anchors: anchors, drawables: drawable.apply-transform( transform, drawables ) ) },) } /// Draws a rectangle between two coordinates. /// #example(``` /// rect((0,0), (1,1)) /// rect((-.5, -.5), (rel: (2, 2)), /// radius: (north-east: (100%, .5), /// south-west: (100%, .5), rest: .2), /// stroke: red) /// rect((-1, -1), (rel: (3, 3)), radius: .5, stroke: blue) /// ```) /// /// = parameters /// /// = Styling /// *Root* `rect` /// == Keys /// #show-parameter-block("radius", ("number", "ratio", "dictionary"), default: 0, [ /// The rectangles corner radius. If set to a single number, that radius is applied /// to all four corners of the rectangle. If passed a dictionary you can set the /// radii per corner. The following keys support either a `<number, ratio>` or an array of `<number, ratio>` /// for specifying a different x- and y-radius: /// `north`, `east`, `south`, `west`, `north-west`, `north-east`, `south-west` and `south-east`. 
To set a default value for /// remaining corners, the `rest` key can be used. /// /// Ratio values are relative to the rects width/height. /// /// #example(``` /// rect((0,0), (rel: (1,1)), radius: 0) /// rect((2,0), (rel: (1,1)), radius: 25%) /// rect((4,0), (rel: (1,1)), radius: (north: 50%)) /// rect((6,0), (rel: (1,1)), radius: (north-east: 50%)) /// rect((8,0), (rel: (1,1)), radius: (south-west: 0, rest: 50%)) /// rect((10,0), (rel: (1,1)), radius: (rest: (20%, 50%))) /// ```, vertical: true) /// ]) /// /// = Anchors /// Supports border and path anchors. /// /// - a (coordinate): Coordinate of the bottom left corner of the rectangle. /// - b (coordinate): Coordinate of the top right corner of the rectangle. You can draw a rectangle with a specified width and height by using relative coordinates for this parameter `(rel: (width, height))`. /// - name (none,string): /// - anchor (none, string): /// - ..style (style): #let rect(a, b, name: none, anchor: none, ..style) = { // Coordinate check let t = (a, b).map(coordinate.resolve-system) // No extra positional arguments from the style sink assert.eq( style.pos(), (), message: "Unexpected positional arguments: " + repr(style.pos()), ) let style = style.named() return ( ctx => { let ctx = ctx let (ctx, a, b) = coordinate.resolve(ctx, a, b) assert(a.at(2) == b.at(2), message: "Both rectangle points must have the same z value.") (a, b) = { let lo = ( calc.min(a.at(0), b.at(0)), calc.min(a.at(1), b.at(1)), calc.min(a.at(2), b.at(2)), ) let hi = ( calc.max(a.at(0), b.at(0)), calc.max(a.at(1), b.at(1)), calc.max(a.at(2), b.at(2)), ) (lo, hi) } let style = styles.resolve(ctx.style, merge: style, root: "rect") let (x1, y1, z1) = a let (x2, y2, z2) = b let size = (calc.abs(x2 - x1), calc.abs(y2 - y1)) let (north-west: nw, north-east: ne, south-west: sw, south-east: se) = util.as-corner-radius-dict(ctx, style.radius, size) let drawables = { let z = z1 // Compute two corner points offset by radius from origin pt. 
// // x radius * a // |----| // --p1←--pt --- // | | y radius * b // ↓ | // p2 --- // | // // parameters a and b function as direction vectors in which // direction the resulting points p1 and p2 should get offset to. // // The point pt is the corner point of the non-rounded rectangle. // If the radius is zero, we can just return that point for both // new corners. let get-corner-pts(radius, pt, a, b) = { let (rx, ry) = radius if rx > 0 or ry > 0 { let (xa, ya) = a let (xb, yb) = b (vector.add(pt, (xa * rx, ya * ry)), vector.add(pt, (xb * rx, yb * ry))) } else { (pt, pt) } } // Get segments for arc between start- and stop angle, starting // at point. If radius is zero for both axes, x and y, nothing // gets returned. // // s----p0/ // p1 // | // e // // Returns a cubic bezier curve between s and e // with the control points pointing from s in direction // p0 * radius and from e in direction p1 * radius. // The bezier approximates a 90 degree arc. let corner-arc(radius, s, e, p0, p1) = { let (rx, ry) = radius if rx > 0 or ry > 0 { let m = 0.551784 let p0 = (p0.at(0) * m * radius.at(0), p0.at(1) * m * radius.at(1)) let p1 = (p1.at(0) * m * radius.at(0), p1.at(1) * m * radius.at(1)) (path-util.cubic-segment(s, e, vector.add(s, p0), vector.add(e, p1)),) } } // Compute all eight corner points: // // p1-------p2 // / | | \ // p0--+ +--p3 // | | // p7--+ +--p4 // \ | | / // p6-------p5 // // If a corner has radius (0,0), both of its // corner points are the same. See the comment on get-corner-pts // on how the corners get computed. 
let (p0, p1) = get-corner-pts(nw, (x1, y2, z), ( 0,-1), ( 1, 0)) let (p2, p3) = get-corner-pts(ne, (x2, y2, z), (-1, 0), ( 0,-1)) let (p4, p5) = get-corner-pts(se, (x2, y1, z), ( 0, 1), (-1, 0)) let (p6, p7) = get-corner-pts(sw, (x1, y1, z), ( 1, 0), ( 0, 1)) let segments = () segments += corner-arc(nw, p0, p1, (0, 1), (-1, 0)) if p1 != p2 { segments += (path-util.line-segment((p1, p2)),) } segments += corner-arc(ne, p2, p3, (1, 0), (0, 1)) if p3 != p4 { segments += (path-util.line-segment((p3, p4)),) } segments += corner-arc(se, p4, p5, (0, -1), (1, 0)) if p5 != p6 { segments += (path-util.line-segment((p5, p6)),) } segments += corner-arc(sw, p6, p7, (-1, 0), (0,-1)) if p7 != p0 { segments += (path-util.line-segment((p7, p0)),) } drawable.path(segments, fill: style.fill, stroke: style.stroke, close: true) } // Calculate border anchors let center = vector.scale(vector.add(a, b), .5) let (width, height, ..) = size let (transform, anchors) = anchor_.setup( auto, (), default: "center", name: name, offset-anchor: anchor, transform: ctx.transform, border-anchors: true, path-anchors: true, center: center, radii: (width, height), path: drawables, ) return ( ctx: ctx, name: name, anchors: anchors, drawables: drawable.apply-transform(transform, drawables), ) }, ) } /// Draws a quadratic or cubic bezier curve /// /// #example(``` /// let (a, b, c) = ((0, 0), (2, 0), (1, 1)) /// line(a, c, b, stroke: gray) /// bezier(a, b, c) /// /// let (a, b, c, d) = ((0, -1), (2, -1), (.5, -2), (1.5, 0)) /// line(a, c, d, b, stroke: gray) /// bezier(a, b, c, d) /// ```) /// /// = parameters /// /// = Styling /// *Root* `bezier` /// /// Supports marks. /// /// = Anchors /// Supports path anchors. 
/// / ctrl-n: nth control point where n is an integer starting at 0 /// /// - start (coordinate): Start position /// - end (coordinate): End position (last coordinate) /// - name (none,string): /// - ..ctrl-style (coordinate,style): The first two positional arguments are taken as cubic bezier control points, where the first is the start control point and the second is the end control point. One control point can be given for a quadratic bezier curve instead. Named arguments are for styling. #let bezier(start, end, ..ctrl-style, name: none) = { // Extra positional arguments are treated like control points. let (ctrl, style) = (ctrl-style.pos(), ctrl-style.named()) // Control point check let len = ctrl.len() assert( len in (1, 2), message: "Bezier curve expects 1 or 2 control points. Got " + str(len), ) let coordinates = (start, ..ctrl, end) // Coordinates check let t = coordinates.map(coordinate.resolve-system) return ( ctx => { let (ctx, start, ..ctrl, end) = coordinate.resolve(ctx, ..coordinates) if ctrl.len() == 1 { (start, end, ..ctrl) = bezier_.quadratic-to-cubic(start, end, ..ctrl) } let style = styles.resolve(ctx.style, merge: style, root: "bezier") let drawables = drawable.path( (path-util.cubic-segment(start, end, ..ctrl),), fill: style.fill, stroke: style.stroke, ) let (transform, anchors) = anchor_.setup( anchor => { if anchor == "ctrl-0" { return ctrl.at(0) } else if anchor == "ctrl-1" { return ctrl.at(1) } }, ("ctrl-0", "ctrl-1"), default: "start", name: name, transform: ctx.transform, path-anchors: true, path: drawables, ) if mark_.check-mark(style.mark) { let (marks, segments) = mark_.place-marks-along-path(ctx, style.mark, drawables.segments) drawables.segments = segments drawables = (drawables,) + marks } return ( ctx: ctx, name: name, anchors: anchors, drawables: drawable.apply-transform( transform, drawables ) ) }, ) } /// Draws a cubic bezier curve through a set of three points. See `bezier` for style and anchor details. 
/// /// #example(``` /// let (a, b, c) = ((0, 0), (1, 1), (2, -1)) /// line(a, b, c, stroke: gray) /// bezier-through(a, b, c, name: "b") /// /// // Show calculated control points /// line(a, "b.ctrl-0", "b.ctrl-1", c, stroke: gray) /// ```) /// /// - start (coordinate): The position to start the curve. /// - pass-through (coordinate): The position to pass the curve through. /// - end (coordinate): The position to end the curve. /// - name (none,string): /// - ..style (style): #let bezier-through(start, pass-through, end, name: none, ..style) = { assert.eq(style.pos(), (), message: "Unexpected positional arguments: " + repr(style.pos())) style = style.named() return (ctx => { let (ctx, start, pass-through, end) = coordinate.resolve(ctx, start, pass-through, end) let (start, end, ..control) = bezier_.cubic-through-3points(start, pass-through, end) return bezier(start, end, ..control, ..style, name: name).first()(ctx) },) } /// Draws a Catmull-Rom curve through a set of points. /// /// #example(``` /// catmull((0,0), (1,1), (2,-1), (3,0), tension: .4, stroke: blue) /// catmull((0,0), (1,1), (2,-1), (3,0), tension: .5, stroke: red) /// ```) /// /// = parameters /// /// = Styling /// *Root* `catmull`\ /// Supports marks. /// == Keys /// #show-parameter-block("tension", "float", [How tight the curve should fit to the points. The higher the tension the less curvy the curve.], default: 0.5) /// /// = Anchors /// Supports path anchors. /// / pt-n: The nth given position (0 indexed so "pt-0" is equal to "start") /// /// - ..pts-style (coordinate,style): Positional arguments should be coordinates that the curve should pass through. Named arguments are for styling. /// - close (bool): Closes the curve with a straight line between the start and end of the curve. /// - name (none,string): #let catmull(..pts-style, close: false, name: none) = { let (pts, style) = (pts-style.pos(), pts-style.named()) assert(pts.len() >= 2, message: "Catmull-rom curve requires at least two points. 
Got " + repr(pts.len()) + " instead.")

  pts.map(coordinate.resolve-system)

  return (ctx => {
    let (ctx, ..pts) = coordinate.resolve(ctx, ..pts)
    let style = styles.resolve(ctx.style, merge: style, root: "catmull")
    let curves = bezier_.catmull-to-cubic(
      pts,
      style.tension,
      close: close)

    let segments = curves.map(c => path-util.cubic-segment(..c))
    let drawables = drawable.path(
      segments, fill: style.fill, stroke: style.stroke, close: close)

    let (transform, anchors) = {
      let a = (:)
      for (i, pt) in pts.enumerate() {
        a.insert("pt-" + str(i), pt)
      }
      anchor_.setup(
        anchor => {
          if type(anchor) == str and anchor in a {
            return a.at(anchor)
          }
        },
        a.keys(),
        name: name,
        default: "start",
        transform: ctx.transform,
        path-anchors: true,
        path: drawables,
      )
    }

    if mark_.check-mark(style.mark) {
      let (marks, segments) = mark_.place-marks-along-path(ctx, style.mark, drawables.segments)
      drawables.segments = segments
      drawables = (drawables,) + marks
    }

    return (
      ctx: ctx,
      name: name,
      anchors: anchors,
      drawables: drawable.apply-transform(
        transform,
        drawables
      )
    )
  },)
}

/// Draws a Hobby curve through a set of points.
///
/// #example(```
/// hobby((0, 0), (1, 1), (2, -1), (3, 0), omega: 0, stroke: blue)
/// hobby((0, 0), (1, 1), (2, -1), (3, 0), omega: 1, stroke: red)
/// ```)
///
/// = parameters
///
/// = Styling
/// *Root* `hobby`\
/// Supports marks.
/// == Keys
/// #show-parameter-block("omega", ("tuple of float",), [How curly the curve should be at each endpoint. When the curl is close to zero, the spline approaches a straight line near the endpoints. When the curl is close to one, it approaches a circular arc.], default: (1, 1))
///
/// = Anchors
/// Supports path anchors.
/// / pt-n: The nth given position (0 indexed, so "pt-0" is equal to "start")
///
/// - ..pts-style (coordinate,style): Positional arguments are the coordinates to draw the curve through; a minimum of two is required. Named arguments are for styling.
/// - tb (auto,array): Incoming tension at `pts.at(n+1)` from `pts.at(n)` to `pts.at(n+1)`. The number of tensions given must be one less than the number of points.
/// - ta (auto, array): Outgoing tension at `pts.at(n)` from `pts.at(n)` to `pts.at(n+1)`. The number of tensions given must be one less than the number of points.
/// - close (bool): Closes the curve with a proper smooth curve between the start and end of the curve.
/// - name (none,string):
#let hobby(..pts-style, ta: auto, tb: auto, close: false, name: none) = {
  let (pts, style) = (pts-style.pos(), pts-style.named())

  assert(pts.len() >= 2, message: "Hobby curve requires at least two points. Got " + repr(pts.len()) + " instead.")

  pts.map(coordinate.resolve-system)

  return (ctx => {
    let (ctx, ..pts) = coordinate.resolve(ctx, ..pts)
    let style = styles.resolve(ctx.style, merge: style, root: "hobby")
    let curves = hobby_.hobby-to-cubic(
      pts,
      ta: ta,
      tb: tb,
      omega: style.omega,
      close: close)

    let segments = curves.map(c => path-util.cubic-segment(..c))
    let drawables = drawable.path(
      segments, fill: style.fill, stroke: style.stroke, close: close)

    let (transform, anchors) = {
      let a = (:)
      for (i, pt) in pts.enumerate() {
        a.insert("pt-" + str(i), pt)
      }
      anchor_.setup(
        anchor => {
          if type(anchor) == str and anchor in a {
            return a.at(anchor)
          }
        },
        a.keys(),
        name: name,
        default: "start",
        transform: ctx.transform,
        path-anchors: true,
        path: drawables,
      )
    }

    if mark_.check-mark(style.mark) {
      let (marks, segments) = mark_.place-marks-along-path(ctx, style.mark, drawables.segments)
      drawables.segments = segments
      drawables = (drawables,) + marks
    }

    return (
      ctx: ctx,
      name: name,
      anchors: anchors,
      drawables: drawable.apply-transform(
        transform,
        drawables
      )
    )
  },)
}

/// Merges two or more paths by concatenating their elements. Anchors and visual styling, such as `stroke` and `fill`, are not preserved.
When an element's path does not start at the same position the previous element's path ended, a straight line is drawn between them so that the final path is continuous. You must then pay attention to the direction in which element paths are drawn. /// /// #example(``` /// merge-path(fill: white, { /// line((0, 0), (1, 0)) /// bezier((), (0, 0), (1,1), (0,1)) /// }) /// ```) /// /// Elements hidden via @@hide() are ignored. /// /// = parameters /// /// = Anchors /// Supports path anchors. /// /// - body (elements): Elements with paths to be merged together. /// - close (bool): Close the path with a straight line from the start of the path to its end. /// - name (none,string): /// - ..style (style): #let merge-path(body, close: false, name: none, ..style) = { // No extra positional arguments from the style sink assert.eq( style.pos(), (), message: "Unexpected positional arguments: " + repr(style.pos()), ) let style = style.named() return ( ctx => { let ctx = ctx let segments = () for element in body { let r = process.element(ctx, element) if r != none { ctx = r.ctx if segments != () and r.drawables != () { assert.eq(r.drawables.first().type, "path") let start = path-util.segment-end(segments.last()) let end = path-util.segment-start(r.drawables.first().segments.first()) if vector.dist(start, end) > 0 { segments.push(path-util.line-segment((start, end,))) } } for drawable in r.drawables { if drawable.hidden { continue } assert.eq(drawable.type, "path") segments += drawable.segments } } } let style = styles.resolve(ctx.style, merge: style) let drawables = drawable.path(fill: style.fill, stroke: style.stroke, close: close, segments) let (transform, anchors) = anchor_.setup( auto, (), name: name, transform: ctx.transform, path-anchors: true, path: drawables, ) return ( ctx: ctx, name: name, anchors: anchors, drawables: drawables, ) }, ) }
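For readers unfamiliar with how the spline points above become drawable segments: the conversion that `bezier_.catmull-to-cubic` performs can be sketched in plain Python. This is an illustrative cardinal-spline formulation; the tension convention and endpoint handling in the actual CetZ helper may differ.

```python
# Sketch: convert a point list to cubic Bézier segments using a cardinal
# (tensioned Catmull-Rom) spline. Illustration only -- NOT the CetZ code;
# the endpoint duplication and tension scaling here are assumptions.

def catmull_to_cubic(pts, tension=0.5):
    """Return a list of cubic Bézier segments (p1, c1, c2, p2)."""
    if len(pts) < 2:
        raise ValueError("need at least two points")
    # Duplicate first/last point so endpoints get zero-length tangents.
    ext = [pts[0]] + list(pts) + [pts[-1]]
    k = (1.0 - tension) / 2.0  # cardinal-spline tangent scale
    segs = []
    for i in range(1, len(ext) - 2):
        p0, p1, p2, p3 = ext[i - 1], ext[i], ext[i + 1], ext[i + 2]
        m1 = tuple(k * (b - a) for a, b in zip(p0, p2))  # tangent at p1
        m2 = tuple(k * (b - a) for a, b in zip(p1, p3))  # tangent at p2
        c1 = tuple(p + m / 3.0 for p, m in zip(p1, m1))
        c2 = tuple(p - m / 3.0 for p, m in zip(p2, m2))
        segs.append((p1, c1, c2, p2))
    return segs
```

Each returned segment starts and ends on consecutive input points, which is why the anchors `pt-0`, `pt-1`, … in the code above can simply reuse the resolved coordinates.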
https://github.com/LuminolT/SHU-Bachelor-Thesis-Typst
https://raw.githubusercontent.com/LuminolT/SHU-Bachelor-Thesis-Typst/main/body/abstract.typ
typst
#let abstract_zh = [ 我好想做嘉然小姐的狗啊。可是嘉然小姐说她喜欢的是猫,我哭了。我知道既不是狗也不是猫的我为什么要哭的。因为我其实是一只老鼠。我从没奢望嘉然小姐能喜欢自己。我明白的,所有人都喜欢理解余裕上手天才打钱的萌萌的狗狗或者猫猫,没有人会喜欢阴湿带病的老鼠。但我还是问了嘉然小姐:「我能不能做你的狗?」我知道我是注定做不了狗的。但如果她喜欢狗,我就可以一直在身边看着她了,哪怕她怀里抱着的永远都是狗。可是她说喜欢的是猫。她现在还在看着我,还在逗我开心,是因为猫还没有出现,只有我这老鼠每天蹑手蹑脚地从洞里爬出来,远远地和她对视。等她喜欢的猫来了的时候,我就该重新滚回我的洞了吧。但我还是好喜欢她,她能在我还在她身边的时候多看我几眼吗?嘉然小姐说接下来的每个圣诞夜都要和大家一起过。我不知道大家指哪些人。好希望这个集合能够对我做一次胞吞。 猫猫还在害怕嘉然小姐。我会去把她爱的猫猫引来的。我知道稍有不慎,我就会葬身猫口。那时候嘉然小姐大概会把我的身体好好地装起来扔到门外吧。那我就成了一包鼠条,嘻嘻我希望她能把我扔得近一点,因为我还是好喜欢她。会一直喜欢下去的。 我的灵魂透过窗户向里面看去,挂着的铃铛在轻轻鸣响,嘉然小姐慵懒地靠在沙发上,表演得非常温顺的橘猫坐在她的肩膀。壁炉的火光照在她的脸庞,我冻僵的心脏在风里微微发烫。 ] #let keywords_zh = ("嘉然小姐", "羽毛球", "矮子") #let abstract_en = [ I want to be <NAME>'s dog. But <NAME> said that she likes cats, so I cried. I know the reason why I cried as neither a cat nor a dog. I am actually a mouse instead. I have never hope that <NAME> would like me. I understand, that everyone likes dogs or cats who are outstanding, smart, talented, rich and adorable. No one likes cold and ill mice. But I still asked <NAME>: "Can I be your dog?" I know I can't be a dog anyways. But if she likes dogs, I can always watch her by her side, even if she always hug a real dog. But she said that she likes cats. She was still looking at me now, still making me laugh because the cat hasn't appeared yet, only a mouse like me crawl out of the hole every day, looking at her from a distance. When the cat which she likes comes, I shall then return to my hole again. But I still like her so much. Will she gave me more attention when I am still around her? <NAME> said that she would spend every Christmas Eve with us. I don't know who the word "us" includes, but I hope I will be one of this collection. Catty are still afraid of <NAME>. I would attract the kitten for her. I know that I would be eaten by the cat if I am not cautious enough. By that time, <NAME> would probably pack up my body in a nice way and throw me out of the door. Then I become a pack of rat chips, lol. 
I hope she throws me somewhere close, because I still like her so much, and I will keep liking her forever.

My soul looks inside through the window, where the hanging bells ring softly, and <NAME> leans back lazily on the sofa with an orange cat, acting perfectly tame, sitting on her shoulder. The firelight from the hearth falls on her face, while my frozen heart burns faintly in the wind.
]

#let keywords_en = ("Genshin Impact", "The Legend of Zelda", "Tencent")
https://github.com/RaphGL/ElectronicsFromBasics
https://raw.githubusercontent.com/RaphGL/ElectronicsFromBasics/main/DC/chap7/2_analysis_technique.typ
typst
Other
#import "../../core/core.typ"

=== Analysis technique

The goal of series-parallel resistor circuit analysis is to be able to determine all voltage drops, currents, and power dissipations in a circuit. The general strategy to accomplish this goal is as follows:

- *Step 1*: Assess which resistors in a circuit are connected together in simple series or simple parallel.
- *Step 2*: Re-draw the circuit, replacing each of those series or parallel resistor combinations identified in step 1 with a single, equivalent-value resistor. If using a table to manage variables, make a new table column for each resistance equivalent.
- *Step 3*: Repeat steps 1 and 2 until the entire circuit is reduced to one equivalent resistor.
- *Step 4*: Calculate total current from total voltage and total resistance ($I=E/R$).
- *Step 5*: Taking total voltage and total current values, go back to the last step in the circuit reduction process and insert those values where applicable.
- *Step 6*: From known resistances and total voltage / total current values from step 5, use Ohm\'s Law to calculate unknown values (voltage or current) ($E=I R$ or $I=E/R$).
- *Step 7*: Repeat steps 5 and 6 until all values for voltage and current are known in the original circuit configuration. Essentially, you will proceed step-by-step from the simplified version of the circuit back into its original, complex form, plugging in values of voltage and current where appropriate until all values of voltage and current are known.
- *Step 8*: Calculate power dissipations from known voltage, current, and/or resistance values.

This may sound like an intimidating process, but it's much more easily understood through example than through description.

#image("static/00123.png")

#image("static/10126.png")

In the example circuit above, R#sub[1] and R#sub[2] are connected in a simple parallel arrangement, as are R#sub[3] and R#sub[4].
Having been identified, these sections need to be converted into equivalent single resistors, and the circuit re-drawn: #image("static/00124.png") The double slash (\/\/) symbols represent \"parallel\" to show that the equivalent resistor values were calculated using the 1/(1/R) formula. The 71.429 $Omega$ resistor at the top of the circuit is the equivalent of R#sub[1] and R#sub[2] in parallel with each other. The 127.27 $Omega$ resistor at the bottom is the equivalent of R#sub[3] and R#sub[4] in parallel with each other. Our table can be expanded to include these resistor equivalents in their own columns: #image("static/10127.png") It should be apparent now that the circuit has been reduced to a simple series configuration with only two (equivalent) resistances. The final step in reduction is to add these two resistances to come up with a total circuit resistance. When we add those two equivalent resistances, we get a resistance of 198.70 $Omega$. Now, we can re-draw the circuit as a single equivalent resistance and add the total resistance figure to the rightmost column of our table. Note that the \"Total\" column has been relabeled ($R_1 \/\/ R_2 - R_3 \/\/ R_4$) to indicate how it relates electrically to the other columns of figures. The \"--\" symbol is used here to represent \"series,\" just as the \"\/\/\" symbol is used to represent \"parallel.\" #image("static/00125.png") #image("static/10128.png") Now, total circuit current can be determined by applying Ohm\'s Law ($I=E/R$) to the \"Total\" column in the table: #image("static/10129.png") Back to our equivalent circuit drawing, our total current value of 120.78 milliamps is shown as the only current here: #image("static/00126.png") Now we start to work backwards in our progression of circuit re-drawings to the original configuration. 
The next step is to go to the circuit where $R_1 \/\/ R_2$ and $R_3 \/\/ R_4$ are in series: #image("static/00127.png") Since $R_1 \/\/ R_2$ and $R_3 \/\/ R_4$ are in series with each other, the current through those two sets of equivalent resistances must be the same. Furthermore, the current through them must be the same as the total current, so we can fill in our table with the appropriate current values, simply copying the current figure from the Total column to the $R_1 \/\/ R_2$ and $R_3 \/\/ R_4$ columns: #image("static/10130.png") Now, knowing the current through the equivalent resistors $R_1 \/\/ R_2$ and $R_3 \/\/ R_4$, we can apply Ohm\'s Law ($E=I R$) to the two right vertical columns to find voltage drops across them: #image("static/00128.png") #image("static/10131.png") Because we know $R_1 \/\/ R_2$ and $R_3 \/\/ R_4$ are parallel resistor equivalents, and we know that voltage drops in parallel circuits are the same, we can transfer the respective voltage drops to the appropriate columns on the table for those individual resistors. In other words, we take another step backwards in our drawing sequence to the original configuration, and complete the table accordingly: #image("static/00129.png") #image("static/10132.png") Finally, the original section of the table (columns R#sub[1] through R#sub[4]) is complete with enough values to finish. Applying Ohm\'s Law to the remaining vertical columns (I\=E/R), we can determine the currents through R#sub[1], R#sub[2], R#sub[3], and R#sub[4] individually: #image("static/10133.png") Having found all voltage and current values for this circuit, we can show those values in the schematic diagram as such: #image("static/00130.png") As a final check of our work, we can see if the calculated current values add up as they should to the total. Since R#sub[1] and R#sub[2] are in parallel, their combined currents should add up to the total of 120.78 mA. 
Likewise, since R#sub[3] and R#sub[4] are in parallel, their combined currents should also add up to the total of 120.78 mA. You can check for yourself to verify that these figures do add up as expected.

A computer simulation can also be used to verify the accuracy of these figures. The following SPICE analysis will show all resistor voltages and currents (note the current-sensing vi1, vi2, . . . \"dummy\" voltage sources in series with each resistor in the netlist, necessary for the SPICE computer program to track current through each path). These voltage sources will be set to have values of zero volts each so they will not affect the circuit in any way.

#image("static/00131.png")

```
series-parallel circuit
v1 1 0
vi1 1 2 dc 0
vi2 1 3 dc 0
r1 2 4 100
r2 3 4 250
vi3 4 5 dc 0
vi4 4 6 dc 0
r3 5 0 350
r4 6 0 200
.dc v1 24 24 1
.print dc v(2,4) v(3,4) v(5,0) v(6,0)
.print dc i(vi1) i(vi2) i(vi3) i(vi4)
.end
```

I\'ve annotated SPICE\'s output figures to make them more readable, denoting which voltage and current figures belong to which resistors.

```
v1          v(2,4)      v(3,4)      v(5)        v(6)
2.400E+01   8.627E+00   8.627E+00   1.537E+01   1.537E+01

Battery     R1 voltage  R2 voltage  R3 voltage  R4 voltage
voltage
```

```
v1          i(vi1)      i(vi2)      i(vi3)      i(vi4)
2.400E+01   8.627E-02   3.451E-02   4.392E-02   7.686E-02

Battery     R1 current  R2 current  R3 current  R4 current
voltage
```

As you can see, all the figures do agree with our calculated values.

#core.review[
  - To analyze a series-parallel combination circuit, follow these steps:
  - Reduce the original circuit to a single equivalent resistor, re-drawing the circuit in each step of reduction as simple series and simple parallel parts are reduced to single, equivalent resistors.
  - Solve for total resistance.
  - Solve for total current (I\=E/R).
  - Determine equivalent resistor voltage drops and branch currents one stage at a time, working backwards to the original circuit configuration again.
]
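The arithmetic in this example can be cross-checked with a few lines of Python (resistor and source values taken from the SPICE netlist above):

```python
# Cross-check of the worked example: R1=100, R2=250, R3=350, R4=200 ohms,
# 24 V battery. Values match the figures quoted in the text and the
# annotated SPICE output.

def parallel(*rs):
    """Equivalent resistance of resistors in parallel: 1 / sum(1/R)."""
    return 1.0 / sum(1.0 / r for r in rs)

r12 = parallel(100, 250)   # ~71.429 ohms (R1 // R2)
r34 = parallel(350, 200)   # ~127.27 ohms (R3 // R4)
r_total = r12 + r34        # ~198.70 ohms (the two equivalents in series)
i_total = 24.0 / r_total   # ~120.78 mA

# Voltage drops across each parallel pair (the total current flows
# through both series-connected equivalents).
v12 = i_total * r12        # ~8.627 V across R1 and R2
v34 = i_total * r34        # ~15.37 V across R3 and R4

# Branch currents through the individual resistors.
i1, i2 = v12 / 100, v12 / 250
i3, i4 = v34 / 350, v34 / 200
```

The branch currents `i1 + i2` and `i3 + i4` each recover the total current, confirming the Kirchhoff check described above.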
https://github.com/SeniorMars/tree-sitter-typst
https://raw.githubusercontent.com/SeniorMars/tree-sitter-typst/main/examples/layout/table.typ
typst
MIT License
// Test tables. --- #set page(height: 70pt) #set table(fill: (x, y) => if calc.even(x + y) { rgb("aaa") }) #table( columns: (1fr,) * 3, stroke: 2pt + rgb("333"), [A], [B], [C], [], [], [D \ E \ F \ \ \ G], [H], ) --- #table(columns: 3, stroke: none, fill: green, [A], [B], [C]) --- // Ref: false #table() --- // Error: 14-19 expected color, none, or function, found string #table(fill: "hey")
https://github.com/typst/packages
https://raw.githubusercontent.com/typst/packages/main/packages/preview/numblex/0.1.0/lib/numblex.typ
typst
Apache License 2.0
/// Numblex main function
///
/// - `numberings`: A tuple of numbering styles. Each style can be a string or a function.
/// - `depth`: A tuple of depths for each numbering style.
/// - `..styles`: A list of styles. If provided, `numberings` and `depth` will be ignored.
///
/// The depth of a style is the number of (upper) levels shown in its numbering; the default is 1 (show only the current level).
///
/// Example usage:
/// ```typst
/// set heading(numbering: numblex(numberings: ("一.", "1.", "(1).", circle_numbers), depth: (1, 1, 2, 4)))
///
/// set heading(numbering: numblex(
///   "一.",
///   "1.",
///   (numbering: "(1).", depth: 2),
///   (numbering: circle_numbers, depth: 4),
/// ))
/// ```
#let numblex(numberings: (none,), depth: (1,), ..styles) = {
  styles = styles.pos()
  if styles.len() > 0 {
    numberings = ()
    depth = ()
    for s in styles {
      if type(s) == str {
        s = (numbering: s, depth: 1)
      }
      assert("numbering" in s, message: "numblex: style must have a 'numbering' field")
      assert("depth" in s, message: "numblex: style must have a 'depth' field")
      assert(type(s.numbering) in (function, str), message: "numblex: 'numbering' field must be a function or a string")
      assert(type(s.depth) == int and s.depth > 0, message: "numblex: 'depth' field must be a positive integer")
      numberings.push(s.numbering)
      depth.push(s.depth)
    }
  }
  assert(numberings.len() > 0, message: "numblex: at least one numbering style must be provided")
  assert(depth.len() > 0, message: "numblex: at least one depth must be provided")
  let get_repeat_last(arr, ind) = {
    arr.at(ind, default: arr.at(-1))
  }
  return (..numbers) => {
    let nums = numbers.pos()
    let max_level = nums.len()
    let this_depth = get_repeat_last(depth, nums.len() - 1)
    let ans = ""
    for level in range(max_level - this_depth, max_level) {
      ans = ans + numbering(get_repeat_last(numberings, level), nums.at(level))
    }
    return ans
  }
}
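The depth-window logic above can be mirrored in a small Python analogue. Plain format functions stand in for Typst numbering patterns here; the names are illustrative, not part of the package.

```python
# Sketch of numblex's numbering logic: each heading level has a style and
# a depth; the depth-sized window of trailing counters is rendered with
# the styles of the corresponding levels. Styles/depths past the end of
# the lists repeat the last entry (like get_repeat_last above).

def repeat_last(arr, i):
    return arr[i] if i < len(arr) else arr[-1]

def numblex(numberings, depths):
    def number(*nums):
        level = len(nums)
        depth = repeat_last(depths, level - 1)
        out = ""
        for lvl in range(level - depth, level):
            out += repeat_last(numberings, lvl)(nums[lvl])
        return out
    return number

# Level 3 shows a two-level window; deeper levels reuse the last style.
fmt = numblex(
    [lambda n: f"{n}.", lambda n: f"{n}.", lambda n: f"({n})."],
    [1, 1, 2],
)
```

For example, a depth of 2 at the third level renders both the parent and the current counter, while the first two levels show only their own counter.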
https://github.com/sitandr/typst-examples-book
https://raw.githubusercontent.com/sitandr/typst-examples-book/main/src/typstonomicon/totally-empty.md
markdown
MIT License
# Empty pages without numbering ## Empty pages before chapters starting at odd pages <div class="warning"> This snippet has been broken on 0.12.0. If someone will help fixing it, this would be cool. </div> `````typ -norender // author: janekfleper #set page(height: 20em) #let find-labels(name) = { return query(name).map(label => label.location().page()) } #let page-header = context { let empty-pages = find-labels(<empty-page>) let new-chapters = find-labels(<new-chapter>) if new-chapters.len() > 0 { if new-chapters.contains(here().page()) [ _a new chapter starts on this page_ #return ] // get the index of the next <new-chapter> label let new-chapter-index = new-chapters.position(page => page > here().page()) if new-chapter-index != none { let empty-page = empty-pages.at(new-chapter-index) if empty-page < here().page() [ _this is an empty page to make the next chapter start on an odd page_ #return ] } } [and this would be a regular header] line(length: 100%) } #let page-footer = context { // since the page breaks in chapter-heading() are inserted after the <empty-page> label, // the selector has to look "before" the current page to find the relevant label let empty-page-labels = query(selector(<empty-page>).before(here())) if empty-page-labels.len() > 0 { let empty-page = empty-page-labels.last().location().page() // look back at the most recent <new-chapter> label let new-chapter = query(selector(<new-chapter>).before(here())).last().location().page() // check that there is no <new-chapter> label on the current page if (new-chapter != here().page()) and (empty-page + 1 == here().page()) [ _this is an empty page where the page number should be omitted_ #return ] } let page-display = counter(page).display(here().page-numbering()) h(1fr) + page-display + h(1fr) } #show heading.where(level: 1): it => [ #[] <empty-page> #pagebreak(to: "even", weak: true) #[] <new-chapter> #pagebreak(to: "odd", weak: true) #it.body #v(2em) ] #show outline.entry.where(level: 1): it => { 
// reverse the results of the label queries to find the last <empty-page> label for the targeted page // the method array.position() will always return the first one... let empty-pages = find-labels(<empty-page>).rev() let new-chapters = query(<new-chapter>).rev() let empty-page-index = empty-pages.position(page => page == int(it.page.text)) let new-chapter = new-chapters.at(empty-page-index) link(new-chapter.location())[#it.body #box(width: 1fr)[#it.fill] #new-chapter.location().page()] } #set page(header: page-header, footer: page-footer, numbering: "1") #outline() = The explanation ``` These queries reveal where the corresponding tags are found. The actual empty page is always at the location of the label <empty-page> + 1. If an empty page is actually inserted by the pagebreaks, the two labels will cover the page of the heading and one page before that. If no empty page was inserted, both labels will point to the same page which is not an issue either. And even then we can check for the <new-chapter> label first to give it a higher priority. The first <empty-page> label is always on page 1 and can just be ignored since it points to the (non-existing) empty page before the first chapter. pages with the label <empty-page>: #context find-labels(<empty-page>) pages with the label <new-chapter>: #context find-labels(<new-chapter>) ``` = A heading #lorem(190) = Another heading #lorem(100) = The last heading #lorem(400) `````
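The footer's decision rule can be restated in plain Python for clarity. This mirrors only the literal condition in `page-footer` above; whether the labels actually land on the pages the snippet expects is exactly the part reported as broken.

```python
# Sketch of the page-footer rule: given the pages on which the
# <empty-page> and <new-chapter> labels landed, decide whether a page is
# an inserted blank whose page number should be omitted.

def is_inserted_blank(page, empty_pages, chapter_pages):
    """empty_pages / chapter_pages: sorted page numbers of the labels."""
    prior_empty = [p for p in empty_pages if p <= page]   # .before(here())
    prior_chap = [p for p in chapter_pages if p <= page]
    if not prior_empty or not prior_chap:
        return False
    # Blank page: one past the last <empty-page> label, and not a page
    # that starts a new chapter itself.
    return prior_chap[-1] != page and prior_empty[-1] + 1 == page
```

With labels at pages `[1, 4]` and chapters at `[1, 6]`, only page 5 is classified as an inserted blank.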
https://github.com/typst-community/mantodea
https://raw.githubusercontent.com/typst-community/mantodea/main/tests/link/test.typ
typst
MIT License
#import "/src/link.typ"

#show heading.where(level: 1): it => pagebreak(weak: true) + it
#set page(width: auto, height: auto, header: counter(footnote).update(0))

= GitHub
#link.forge("https://github.com", "author/repository")
#link.github("author/repository")

= GitLab
#link.forge("https://gitlab.com", "author/repository")
#link.gitlab("author/repository")

= Codeberg
#link.forge("https://codeberg.org", "author/repository")
#link.codeberg("author/repository")

= Generic Git Subdomain
#link.forge("https://git.university.com")
#link.forge("https://git.university.com", label: "git@university")
#link.forge("https://git.university.com", "author/repository")
#link.forge("https://notgit.university.com")

= Packages
#link.package("Hydra")
#link.package-repo("Hydra", version(0, 1, 0))

= Types from Values
#link.type(with-footnote: true, str)
#link.type(with-footnote: true, red)
#link.type(with-footnote: true, left + horizon)

= Types
#let l = link.type.with(with-footnote: true)
#block(width: 200pt,
  for t in link.typst.types.keys() [ #l(t) ]
)
https://github.com/UntimelyCreation/typst-neat-cv
https://raw.githubusercontent.com/UntimelyCreation/typst-neat-cv/main/src/content/fr/interests.typ
typst
MIT License
#import "../../template.typ": * #cvSection("Intérêts personnels") #cvInterestTags( tags: ("Logiciel libre", "Musique", "Lecture", "Sport"), ) #divider() #cvInterestDescription( description: list( [#lorem(15)] ) )
https://github.com/yingziyu-llt/blog
https://raw.githubusercontent.com/yingziyu-llt/blog/main/archived/Linear-Algebra-C1.typ
typst
#set document(title:"Linear Algebra Notes: Vector Spaces")
#set document(date: datetime(
  year:2024,
  month:7,
  day:7
))
#set page(margin: (
  top: 0cm,
  bottom: 0cm,
  x: 0cm,
))
#set text(size: 16pt)

== Introduction

These notes follow the content and ordering of _Linear Algebra Done Right_; they do not track the book exactly, and mix in some material from _A Concise Course in Linear Algebra_ (Qiu Weisheng).

I am writing them because, after my earlier study of linear algebra, I felt I had only a basic grasp of scattered facts without really understanding the essence of the subject. So this summer I am rereading linear algebra with a different book (LADR), hoping to reach a deeper understanding.

== $RR^n$ and $CC^n$

=== Complex Numbers

Complex numbers arise from taking square roots of negative numbers. We define $i = sqrt(-1)$, and the usual rules of arithmetic carry over.

*Definition*: $CC = {a+b i : a,b in RR}$ is called the field of complex numbers.

Addition rule: $(a + b i) + (c + d i) = (a+c) + (b+d)i$

Multiplication rule: $(a + b i) * (c + d i) = (a c - b d) + (b c + a d)i$

When $b=0$, $a$ is just a real number. Clearly, $CC$ is a *linear space*.

In the rest of these notes we write $FF$ for either $RR$ or $CC$, and call the elements of $FF$ *scalars*.

=== Lists

*Definition* An *ordered* collection of $n$ elements, $n$ a nonnegative integer, is called a list, written $(x_1,x_2,dots,x_n)$. Two lists are equal if and only if their entries agree term by term.

*Notice* The difference between a list and a set: the elements of a list are ordered, those of a set are not; a list may contain repeated elements, a set may not.

*Definition* Addition of two lists is defined by $(a_1,a_2,dots,a_n) + (b_1,b_2,dots,b_n) = (a_1+b_1,a_2+b_2,dots,a_n+b_n)$; it is commutative.

The zero element is $0 = (0,0,dots,0)$

=== Vectors

Placing a list in a coordinate system, the *directed segment* from the origin to that point is called a *vector*.

Vector addition works like list addition and satisfies the corresponding laws, which we omit here.

*Definition* Scalar multiplication: $lambda * (a_1,a_2,dots,a_n) = (lambda a_1,lambda a_2,dots,lambda a_n)$

== Vector Spaces

A vector space requires the following ingredients:

*Definition*
+ Addition: for $alpha,beta in V $, an operation $+$ such that $alpha + beta in V $
+ Scalar multiplication: for $lambda in FF,alpha in V$, an operation $dot$ such that $lambda dot alpha in V$

A structure $S = (VV,FF, +, dot)$ is required to satisfy:
+ addition is commutative: $alpha + beta = beta + alpha$
+ addition is associative: $alpha + beta + gamma = alpha + (beta + gamma)$
+ scalar multiplication is commutative: $lambda mu alpha= mu lambda alpha$
+ scalar multiplication is associative: $lambda mu alpha = lambda (mu alpha)$
+ scalar multiplication is distributive: $(lambda + mu)alpha = lambda alpha + mu alpha$ and $lambda (alpha + beta) = lambda alpha + lambda beta$
+ scalar multiplication has an identity $1$: $1 alpha = alpha$
+ addition has a zero element $0 in VV$: $0 + alpha = alpha$
+ addition has inverses: $alpha + (-alpha) = 0$

Then $S$ is called a vector space, and its elements are called points or vectors.

The shape of a vector space depends strongly on the field used for scalar multiplication. We say $S$ is a vector space over $FF$; over $RR$ it is called a real vector space, and over $CC$ a complex vector space.
So far the $VV$ we discussed was a classical set of vectors $FF^n$ ($n$ may be infinite, giving an infinite-dimensional vector space); we now turn to vector spaces of functions.

*Definition* We write $FF^S$ ($FF = RR "or" CC$, $S$ a set) for the maps $S->FF$

For $f,g in FF^S$, addition is defined by $(f+g) (x) = f (x) + g (x)$, and scalar multiplication by $lambda in FF,(lambda f)(x) = lambda f (x)$

Theorems
+ The additive identity is unique
+ Additive inverses are unique
+ $0$ times any vector is the zero vector
+ Any scalar times the zero vector is the zero vector
+ $-1$ times a vector is its additive inverse

== Subspaces

=== Subspaces

Let $V$ be a linear space. If every element of a linear space $U$ lies in $V$, and the two share the same operations (vector addition and scalar multiplication are required), then $U$ is called a subspace of $V$.

*Conditions for Subspace*
+ Additive identity: $0 in U$
+ Closed under addition: $alpha,beta in U;alpha + beta in U$
+ Closed under scalar multiplication: $alpha in U,lambda in FF;lambda alpha in U$

=== Sums of Subspaces

*Definition* Define the operation $+$ by $U_1+U_2+dots+U_n = {u_1+u_2+dots+u_n : u_1 in U_1,u_2 in U_2,dots,u_n in U_n}$

The sum of subspaces is the smallest subspace containing those subspaces.

=== Direct Sums

*Definition* A sum $U=U_1+U_2+...+U_n$ is called a *direct sum* if every element of $U$ can be written in only one way as $u_1+u_2+dots+u_n$ with $u_i in U_i$. The direct sum is written $plus.circle$

*Conditions for Direct Sum*

$U_1 + U_2 + dots + U_n$ is a direct sum $arrow.l.r.double$ $0$ has only the trivial representation $u_1=u_2=dots=u_n=0$ $arrow.l.r.double$ (in the case of two subspaces $U$ and $V$) $U sect V = {0}$

The end!
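The multiplication rule for complex numbers stated earlier, $(a + b i)(c + d i) = (a c - b d) + (b c + a d)i$, can be sanity-checked against a language's built-in complex arithmetic; a small Python sketch:

```python
# Check the stated multiplication rule against Python's complex type.

def mul_rule(a, b, c, d):
    """(a + bi)(c + di) -> (real part, imaginary part) per the rule."""
    return (a * c - b * d, b * c + a * d)

for (a, b, c, d) in [(1, 2, 3, 4), (0, 1, 0, 1), (-2, 5, 7, -3)]:
    re, im = mul_rule(a, b, c, d)
    z = complex(a, b) * complex(c, d)
    assert (re, im) == (z.real, z.imag)
```

The case `(0, 1, 0, 1)` is exactly $i dot i = -1$, the defining property of $i$.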
https://github.com/Myriad-Dreamin/typst.ts
https://raw.githubusercontent.com/Myriad-Dreamin/typst.ts/main/fuzzers/corpora/layout/clip_00.typ
typst
Apache License 2.0
#import "/contrib/templates/std-tests/preset.typ": * #show: test-page // Test box clipping with a rectangle Hello #box(width: 1em, height: 1em, clip: false)[#rect(width: 3em, height: 3em, fill: red)] world 1 Space Hello #box(width: 1em, height: 1em, clip: true)[#rect(width: 3em, height: 3em, fill: red)] world 2
https://github.com/polarkac/MTG-Stories
https://raw.githubusercontent.com/polarkac/MTG-Stories/master/stories/003%20-%20Gatecrash/002_The%20Fathom%20Edict.typ
typst
#import "@local/mtgstory:0.2.0": conf #show: doc => conf( "The Fathom Edict", set_name: "Gatecrash", story_date: datetime(day: 09, month: 01, year: 2013), author: "<NAME>", doc ) #strong[Document function:] Intelligence#linebreak() #strong[Document category:] Transcript#linebreak() #strong[Document recipient, primary:] Commander Yaszen, Horizon Academy#linebreak() #strong[Documenter:] Wojek Scribe-Sergeant Bogumil Bem#linebreak() #strong[Document context:] Merfolk woman Zegana, calling herself Prime Speaker, announced address at Forum of Azor one month ago. At appointed time she entered Forum of Azor with entourage, took center rostrum, delivered speech to audience of approx 12,000. Speech concluded with decree regarding a new Simic Combine. Within hours, criers and signalers referred to speech as "Fathom Edict." #figure(image("002_The Fathom Edict/02.jpg", width: 100%), caption: [Art by Willian Murai], supplement: none, numbering: none) #strong[Transcription of "Fathom Edict," Prime Speaker Zegana:] Citizens of Ravnica, thank you for gathering at this ancient place under rain-skies#super[i] .We strive to honor surface traditions by taking this sanctioned stage for our address. Many millennia have passed since surface-walkers have seen Ravnica’s oceans. But they persist, under many layers of city. Izzet magewrights reached down to us in ages past to build our world’s water systems. Others delved close to us, seeking darkness and secrecy.#super[ii] All the while, we held fast in our deep home. Two principles guide my race. The first we call the Holdfast. This principle binds us to the sea, as kelp binds to the sea floor so it does not drift away. It roots us, keeps us deep, and safe from swells of ambition, pride, dogma. The Holdfast has made us content to remain in the buried oceans for many epochs. But another principle countervails the Holdfast. 
Our oceans live and breathe in great cycles, and the depleted waters above must be replenished by richer waters from below: the Upwelling. Without the Upwelling, our seas would be a tomb.

#figure(image("002_The Fathom Edict/04.jpg", width: 100%), caption: [Art by <NAME>], supplement: none, numbering: none)

Years ago, my people#super[iii] detected new growth reaching downward toward our waters—a growth not seen for many millennia. Roots! The taproots of great surface-trees,#super[iv] plumbing the depths through stone and steel to find water. An auspicious homecoming.

We did not know yet that this was your doing, surface-walkers. We sought knowledge of your world over the centuries, so we knew of the guilds and their aims. We did not anticipate some of you working together to create more wild places on Ravnica. Not until later did we learn of your efforts. Not until now have we declared to the world our admiration and our desire to support this work.

As the surface reached downward, we felt pulled upward, revivified and gladdened by the new life. The deepsages sought places for our own great Upwelling—uninhabited places that could become lasting tunnels between your world and ours. The first is what you now know as Zonot One.

With our most powerful biomancy, we opened the first zonot and reinforced it with great lattices of sea-plants. We were at first unsure of its stability, despite our careful planning. But it held and became our first portal to the surface, an inverse tower suited to our nature.

We knew we would be a curiosity on the surface; we knew we risked violence against us simply by emerging.#super[v] But we calculated that Ravnicans would be grateful when they gazed on their own oceans at long last, if only in glimpses. And we were correct.

Since then, we have created other zonots, and more will follow.#super[vi] Each zonot will become a community, and each will have a speaker.
These speakers will in turn have a prime speaker, and for now I fulfill that role. As prime speaker, I have conferred with many leaders during the last few months. Through them I learned that the new growth reaching our depths was purposeful, nurtured by The Wilds Initiative.#super[vii] Let this noble effort not be covert: a visionary cadre of Golgari, Gruul, and Selesnya shamans and druids has worked in concert for many years to cultivate wild space across the world. This cross-guild effort, unsanctioned by guildmasters, has the potential to reshape Ravnica, to truly resurrect nature on our world. A grand Upwelling.

#figure(image("002_The Fathom Edict/06.jpg", width: 100%), caption: [Art by <NAME>], supplement: none, numbering: none)

So it is those among you, surface-walkers, who have begun this new cycle of renewal. Your actions brought us up from the depths and reunited you with your vast, covered oceans. But another element must be acknowledged if we are to succeed: the remnants of the Simic Combine.

Learning of Momir Vig’s hubris saddened us, and the havoc wreaked by his creations is not forgivable. The cytoplasts, Experiment Kraj…. The Vigians drifted too far from their purpose. They abandoned their role as keepers of nature and instead saw it as a plaything. They lost the Holdfast.

We have sought out what is left of the Combine. We have brought to these encounters our principles and our goals. Even now, those who believe in the true Simic mission are joining us,#super[viii] knowing that with our help the Wilds will prosper, and that we will never be carried away by the currents of pride or progress as Vig was. Our home is fathoms down, and we will remain tethered to it forever.

We strive for an enduring connection between Ravnica’s highest heights and deepest depths, and we seek your help.
We are the Simic.#super[ix]

#strong[Documenter’s summary:] Ostensible spokesperson for majority of merfolk race outlines causes for merfolk reemergence, plans for more "zonots," and intention to commandeer and reshape the Simic Combine.#linebreak()
#strong[Recommended action:] Question key individuals about where new "zonots" might appear. Maintain patrols around existing ones. Monitor new Simic Combine activity and organization.#linebreak()
#strong[Actions pending:] Awaiting further orders from <NAME> regarding potential preventive action against further Simic reconstruction.

#v(0.35em)
#line(length: 100%, stroke: rgb(90%, 90%, 90%))
#v(0.35em)

#super[i] #strong[Bem:] Rain came on suddenly before Zegana and her entourage arrived. Likely magically induced.#linebreak()
#super[ii] #strong[Bem:] Implication of Dimir activity deeper in Undercity than currently suspected may warrant further investigation.#linebreak()
#super[iii] #strong[Bem:] Boros estimates of total merfolk population underway. Early projections show a significant but far from massive number.#linebreak()
#super[iv] #strong[Bem:] Consider checking substructures of major Boros fortresses for invasive roots.#linebreak()
#super[v] #strong[Bem:] Wojek report no known violence against merfolk on the surface but have dispersed gawking crowds.#linebreak()
#super[vi] #strong[Bem:] Field reports reveal three "zonots" thus far.#linebreak()
#super[vii] #strong[Bem:] Intercepted Azorius memoranda confirm the existence of this program.#linebreak()
#super[viii] #strong[Bem:] Intelligence suggests merfolk race does not act in total unison; Wojek report isolated merfolk individuals acting alone in various districts.#linebreak()
#super[ix] #strong[Bem:] Zegana did not comment after the address but scribes and attendees interviewed later generally agreed that her intention was to declare herself the new guildmaster of the Simic Combine.
https://github.com/lucannez64/Notes
https://raw.githubusercontent.com/lucannez64/Notes/master/Maths_Devoir_Maison_2.typ
typst
#import "template.typ": * // Take a look at the file `template.typ` in the file panel // to customize this template and discover how it works. #show: project.with( title: "Maths Homework 2", authors: ( "<NAME>", ), date: "October 30, 2023", ) #set heading(numbering: "1.1.") == Exercise 1 <exercice-1> === Part A <partie-a> + #figure([#image("DM2_9_10_2023_1.png")], caption: [ Cube ] ) #block[ #set enum(numbering: "1.", start: 2) + The points $M comma P comma F comma G$ are coplanar if and only if $arrow(M P) eq a arrow(M F) plus b arrow(M G)$ with $lr((a semi b)) in bb(R)^2$ \ \ $ arrow(M P) eq arrow(M H) plus arrow(H P) upright("(Chasles)")\ arrow(M P) eq 1 / 2 arrow(E H) plus 1 / 4 arrow(H G) upright("(given)")\ arrow(M F) eq arrow(M E) plus arrow(E F) upright("(Chasles)")\ arrow(M F) eq minus arrow(E M) plus arrow(H G) upright("(since EFGH is a square)")\ arrow(M F) eq minus 1 / 2 arrow(E H) plus arrow(H G) upright("(given)")\ arrow(M G) eq arrow(M H) plus arrow(H G) upright("(Chasles)")\ arrow(M G) eq 1 / 2 arrow(E H) plus arrow(H G) upright("(given)") $ ] \ The problem can be written as a system of equations: \ \ $cases(1 / 2 eq minus 1 / 2 a plus 1 / 2 b, 1 / 4 eq a plus b)$ $arrow.l.r.double$ \ \ $cases(1 / 2 minus 1 / 2 b eq minus 1 / 2 a, 1 / 4 minus a eq b)$ $arrow.l.r.double$ \ \ $cases(1 / 2 minus 1 / 2 lr((1 / 4 minus a)) eq minus 1 / 2 a, b eq 1 / 4 minus a)$ $arrow.l.r.double$ \ \ $cases(a eq minus 3 / 8, b eq 1 / 4 plus 3 / 8 eq 5 / 8)$ \ \ $S eq lr({lr((minus 3 / 8 semi 5 / 8))})$ \ so $arrow(M P) comma arrow(M F) comma arrow(M G)$ are coplanar, hence $M comma P comma F comma G$ are coplanar \ #block[ #set enum(numbering: "a.", start: 2) + $lr((M P))$ and $lr((F G))$ are parallel if and only if $arrow(M P) eq k arrow(F G)$ with $k in bb(R)$ \ \ $arrow(M P) eq 1 / 2 arrow(E H) plus 1 / 4 arrow(H G) upright("(previous question)")$ $arrow(F G) eq arrow(E H) upright("(since EFGH is a square)")$ \ ] We can set up the following system: \ $cases(1 / 2 eq k, 1 / 4 eq 0)$ which is impossible \ so $lr((M P))$ and $lr((F G))$ are not parallel. === Part B <partie-b> + We know that $lr((M P)) subset lr((E H G))$ and $lr((F G)) subset lr((E H G))$, so $lr((M P))$ and $lr((F G))$ are coplanar. But $lr((M P))$ and $lr((F G))$ are not parallel by the previous question, and two coplanar lines are either parallel or intersecting, so $lr((M P))$ and $lr((F G))$ intersect at a point $L$ \ \ $ lr((L N)) subset lr((F G C))\ lr((C G)) subset lr((F G C))\ arrow.r.double lr((L N)) upright(" and ") lr((G C)) upright(" are coplanar ") $ \ But from the figure $lr((L N))$ is not parallel to $lr((G C))$, so $lr((L N))$ and $lr((C G))$ intersect at a point T \ \ #block[ #set enum(numbering: "1.", start: 3) + $lr((L N)) subset lr((F G C))\ lr((B F)) subset lr((F G C)) arrow.r.double lr((L N)) upright(" and ") lr((B F)) upright(" are coplanar ")$ \ \ ] If we consider three lines $lr((d)) comma lr((d^prime)) comma lr((t))$ with $lr((d)) slash.double lr((d^prime))$: if $lr((t))$ intersects $lr((d))$, then $lr((t))$ intersects $lr((d^prime))$. But $lr((L N))$ intersects $lr((C G))$, and $lr((C G)) slash.double lr((B F))$ since $F G C B$ is a square and $lr([G C])$ is the side opposite $lr([B F])$. So $lr((L N))$ intersects (BF) at a point $Q$ \ \ #image("DM2_9_10_2023_2.png") \ \ #block[ #set enum(numbering: "a.", start: 2) + The section of $lr((F G C))$ by $lr((M N P))$ is the line $lr((T Q))$, and by the theorem on parallel planes, the sections of two parallel planes by the same plane are two parallel lines. Since $lr((F G C)) slash.double lr((H E A))$, $lr((Q T))$ is parallel to the line $lr((M Y))$, where $Y$ is the intersection point of $lr((E A))$ with the parallel to $lr((Q T))$ through $M$. $ & P in lr((H D C)) upright(" — the section of ") lr((H G C D)) upright(" by ") lr((M N P)) upright(" is ") lr([T P])\ & T in lr((H D C)) $ \ \ $ & Q in lr((E F B))\ & Y in lr((E F B)) $ so the section of $lr((A E F B))$ by $lr((M N P))$ is $lr([Y Q])$ \ ] We deduce that the section of the cube (EHGFADCB) by (MNP) is (MPTQY) #figure([#image("DM2_9_10_2023_3.png")], caption: [ Section (MNP) of (EHGFADCB) ] ) === Part C <partie-c> + $M lr((0 semi 1 / 2 semi 1)) quad N lr((1 semi 1 / 2 semi 1 / 2)) quad P lr((1 / 4 semi 1 semi 1))$ \ + $arrow(M N) vec(x_N minus x_M, y_N minus y_M, z_N minus z_M) quad arrow(M N) vec(1, 0, minus 1 / 2)$ ~ The length of $lr([M N])$ equals the norm of $arrow(M N)$, defined as: ~ $parallel arrow(M N) parallel eq sqrt(x^2 plus y^2 plus z^2)$ $ eq sqrt(1^2 plus 0^2 plus lr((minus 1 / 2))^2) $ $ eq sqrt(1 plus 1 / 4) eq sqrt(5) / 2 $ \ \ so the length of MN is $sqrt(5) / 2$ \ \ + \ $ & arrow(T P) vec(1 / 4 minus 1, 1 minus 1, 1 minus 5 / 8) quad arrow(T P) vec(minus 3 / 4, 0, 3 / 8)\ \ & arrow(P N) vec(1 minus 1 / 4, 1 / 2 minus 1, 1 / 2 minus 1) quad arrow(P N) vec(3 / 4, minus 1 / 2, minus 1 / 2)\ \ & arrow(T N) vec(1 minus 1, 1 / 2 minus 1, 1 / 2 minus 5 / 8) quad arrow(T N) vec(0, minus 1 / 2, minus 1 / 8)\ & $ \ By the converse of the Pythagorean theorem, a triangle is a right triangle if and only if the square of the length of its longest side equals the sum of the squares of the other two sides, \ i.e. if $parallel arrow(P N) parallel^2 eq parallel arrow(T P) parallel^2 plus parallel arrow(T N) parallel^2$ $ lr((parallel arrow(P N) parallel))^2 & eq lr((3 / 4))^2 plus lr((minus 1 / 2))^2 plus lr((minus 1 / 2))^2\ & eq 17 / 16\ $ $ lr((parallel arrow(T P) parallel))^2 plus lr((parallel arrow(T N) parallel))^2 & eq lr((minus 3 / 4))^2 plus 0^2 plus lr((3 / 8))^2 plus 0^2 plus lr((minus 1 / 2))^2 plus lr((minus 1 / 8))^2\ & eq 31 / 32 $ \ So TPN is not a right triangle #link("Maths_DM.pdf")[Maths DM]
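As an informal cross-check of the computations above (not part of the assignment), the system from Part A and the norms from Part C can be verified with exact rational arithmetic:

```python
from fractions import Fraction as F

# Part A: MP = a*MF + b*MG leads to the system
#   1/2 = -a/2 + b/2   (coefficient of EH)
#   1/4 = a + b        (coefficient of HG)
a, b = F(-3, 8), F(5, 8)
assert -a / 2 + b / 2 == F(1, 2) and a + b == F(1, 4)

# Part C coordinates: M(0,1/2,1), N(1,1/2,1/2), P(1/4,1,1), T(1,1,5/8)
M, N = (F(0), F(1, 2), F(1)), (F(1), F(1, 2), F(1, 2))
P, T = (F(1, 4), F(1), F(1)), (F(1), F(1), F(5, 8))

def norm2(u, v):
    """Squared norm of the vector from u to v."""
    return sum((y - x) ** 2 for x, y in zip(u, v))

mn2 = norm2(M, N)                    # 5/4, so |MN| = sqrt(5)/2
pn2 = norm2(P, N)                    # 17/16
tp2_tn2 = norm2(T, P) + norm2(T, N)  # 31/32, != 17/16
```

Since the two sides of the Pythagorean identity differ (17/16 vs. 31/32), TPN is indeed not a right triangle.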
https://github.com/Myriad-Dreamin/tinymist
https://raw.githubusercontent.com/Myriad-Dreamin/tinymist/main/docs/tinymist/configurations.typ
typst
Apache License 2.0
#import "mod.typ": * #import "@preview/cmarker:0.1.0": render as md #show: book-page.with(title: "Tinymist Configurations") #let packages = json("/editors/vscode/package.json") #let config-type(t) = if "anyOf" in t { let any-of = t.anyOf if type(any-of) == array { any-of.map(config-type).join(" | ") } } else { if type(t.type) == array { t.type.join(" | ") } else { t.type } } #let config_item(key, cfg) = [ + *#raw(key)*: - Type: #raw(config-type(cfg)) #if "anyOf" in cfg { // todo: anyOf } else if cfg.type == "array" [ - Items: #raw(cfg.items.type) - Description: #md(cfg.items.description) ] - Description: #md(cfg.at("markdownDescription", default: cfg.at("description", default: none))) #if cfg.at("enum", default: none) != none [ - Valid values: #for (i, item) in cfg.enum.enumerate() [ - #raw(item): #if "enumDescriptions" in cfg { md(cfg.enumDescriptions.at(i)) } ] ] #let cfg-default = cfg.at("default", default: none) #if type(cfg-default) == str { if cfg-default != "" [ - Default: #raw(cfg-default) ] else [ - Default: `""` ] } else if type(cfg-default) == array [ - Default: [#cfg-default.join(",")] ] else if cfg-default != none [ - Default: #cfg-default ] ] #for (key, cfg) in packages.contributes.configuration.properties { config_item(key, cfg) }
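For quick offline inspection, the `config-type` logic above (flattening `anyOf` alternatives and joining `type` arrays) can be mirrored in Python. The property names and shapes in this sketch are illustrative, not taken from the actual `package.json`:

```python
def config_type(t: dict) -> str:
    """Render a setting's type string, mirroring the typst config-type above."""
    if "anyOf" in t:
        return " | ".join(config_type(x) for x in t["anyOf"])
    ty = t["type"]
    return " | ".join(ty) if isinstance(ty, list) else ty

# Example shaped like entries of `contributes.configuration.properties`
# (hypothetical keys, for illustration only).
props = {
    "tinymist.outputPath": {"type": "string", "default": ""},
    "tinymist.exportPdf": {"anyOf": [{"type": "string"}, {"type": "null"}]},
}
types = {key: config_type(cfg) for key, cfg in props.items()}
```

Running this over a real `package.json` would only require loading the file with `json.load` and indexing into `contributes.configuration.properties`.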
https://github.com/Meisenheimer/Notes
https://raw.githubusercontent.com/Meisenheimer/Notes/main/src/DecisionTree.typ
typst
MIT License
#import "@local/math:1.0.0": * = Decision Tree
https://github.com/Functional-Bus-Description-Language/Specification
https://raw.githubusercontent.com/Functional-Bus-Description-Language/Specification/master/src/functionalities/group.typ
typst
== Group <group> The group functionality represents a group of functionalities. It is used to inform a compiler that particular functionalities might be accessed together, and that their register locations must meet additional constraints. The group functionality has the following properties: *`virtual`*` bool (false) {definitive}` #pad(left: 1em)[ The virtual property defines whether the group is virtual. Virtual groups are used to group functionalities without generating the group interface for the requester code. ] Not all functionalities can be grouped. Functionality grouping is described in detail in @grouping.
https://github.com/HiiGHoVuTi/requin
https://raw.githubusercontent.com/HiiGHoVuTi/requin/main/lang/continuables.typ
typst
#import "../lib.typ": * #show heading: heading_fct Let $Sigma = {a,b}$. A word $w in Sigma^*$ is said to be _primitive_ if there is no word $u in Sigma^*$ and no $p>1$ such that $w=u^p$. A language $L$ is said to be _continuable_ if for every $u in Sigma^*$ there exists $v in Sigma^*$ such that $u v in L$. #question(0)[For each of the following cases, give an example of a language over $Sigma = {a,b}$: - An infinite regular language containing no primitive word - An infinite regular language containing only primitive words - An infinite regular continuable language - An infinite context-free continuable language containing only primitive words ] #correct([ 1. - The language of the regular expression $(a a)^*$ works - The language of the regular expression $b a^*$ works - The language of the regular expression $Sigma^*$ works - The language of the grammar $S -> a S b | b S b | a $ ]) #question(1)[Propose an algorithm that tests whether a word is primitive in $O(|w|^(3/2))$. _It is possible to do it in $O(|w|)$._] #correct([ 2. We can use: #rect([$ &"If " (w = a^(|w|) or w = b^(|w|)) and |w|>1 " return NOT PRIMITIVE"\ &"For " p <-- 2,...,floor(sqrt(|w|)) " do:"\ &#h(20pt) "If " p | |w| " then:"\ &#h(40pt) "If " w = (w[0..p])^(|w|/p) " or " w = (w[0..(|w|/p)])^p " then return NOT PRIMITIVE End if"\ &#h(20pt)"End if"\ &"End for"\ &"Return PRIMITIVE" $]) which runs in $O(|w| + sqrt(|w|) times |w|) = O(|w|^(3/2))$: any proper power $w = u^q$ has either $|u| <= sqrt(|w|)$ or $q <= sqrt(|w|)$, so every divisor $p <= sqrt(|w|)$ must be tried both as a period length and as an exponent, each check costing $O(|w|)$. The $O(|w|)$ solution uses the LPS table. Once the LPS table has been computed in $O(|w|)$, it only remains to check in $O(|w|)$ whether $d = |w| - "LPS"[|w|]$ satisfies $d < |w|$ and $d | |w|$. See the topic "Autour des préfixes" for more information. ]) #question(1)[Given an automaton $A$, propose an algorithm to determine whether the language recognized by $A$ is continuable.] #correct([ 3. Once a complete deterministic automaton has been built, it suffices to check that every state is co-accessible. A simple traversal from every state, testing whether a final state can be reached, suffices (equivalently, test for a sink state). This condition is necessary and sufficient: if every state is co-accessible, then after reading any word we reach a state from which a final state can be reached. Conversely, if $L$ is continuable, then from every state there is a path, labelled by the continuation, leading to a final state. ]) #question(3)[Show that every continuable regular language over $Sigma = {a,b}$ contains infinitely many primitive words. What condition on $Sigma$ is needed for this to hold?] #correct([ 4. Everything comes from one fact: the continuation is bounded in length. Consider the word $a b^(2N)$ for the $N$ of the pumping lemma. There exists $v$ such that $a b^(2N) v in L$, and by the pumping lemma we can shrink this $v$ until its length is smaller than $N$ (the pumping lemma does not state exactly this, but it becomes correct if we consider the automaton whose initial state is the state reached after reading $a b^(2N)$). One shows easily that $a b^(2N) v$ is primitive: if we had $a b^(2N) v = u^p$ with $p>1$, then $u = a b^(|u|-1)$; but the second repetition of $u$ starts inside the run of $b$'s, so $u[0] = b$, a contradiction. This only works because $Sigma = {a,b}$ has cardinality at least 2. If $Sigma = {a}$, then every continuable language generates composite (non-primitive) words (otherwise it would be an infinite language included in ${a^p : p in PP}$, which is not rational). ]) #question(4)[ Does there exist an infinite rational continuable language containing only primitive words?\ _Hint: consider the set of partial maps ${ delta^*(u,\_) : u in Sigma^* }$, and find a suitable group for composition contained in this set.\ $delta^*$ is defined by $delta^*(u a,q) = delta (a,delta^*(u,q))$ and $delta^*(epsilon,q) = q$ for $(a,u,q) in Sigma times Sigma^* times Q$ _] #correct([ _This (very beautiful!) proof is due to #link("https://www.univ-orleans.fr/lifo/Members/Nicolas.Ollinger/")[<NAME>inger]. Another proof, using recognition by finite monoid morphisms and the fact that every maximal $J$-class contains a group (hence a power that keeps the word final), was given by Colin of the LIP at ENS de Lyon._ Let $(Q, Sigma, delta, q_0, F)$ be a complete deterministic finite automaton recognizing $L$. Set $k = min {|delta(Q,w)| : w in Sigma^*}$ and define $K = {w in Sigma^* : |delta(Q,w)| = k}$. Once the language $K$ is reached, we remain captive of it: $K Sigma^* subset.eq K$. Since $L$ is continuable and non-empty, it intersects $K$. Choose $u in K sect L$ and set $Q_u = delta(Q,u)$. By construction $delta(Q_u,u) = Q_u$, since $delta(Q_u,u) = delta(Q,u^2) subset.eq Q_u$ and $|Q_u| = k$ is minimal. The map $q |-> delta(q,u)$ is therefore a permutation of $Q_u$. Since $Q_u$ is finite, there exists $n>0$ such that $q |-> delta(q,u^n)$ is the identity on $Q_u$. We deduce $delta(q_0, u^(n+1)) = delta(q_0, u) in F$, so $u^(n+1) in L$; since $n+1 >= 2$, the word $u^(n+1)$ is not primitive, so no such language exists. ])
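The linear-time primitivity test mentioned in the correction of question 2 (compute the LPS / failure-function table, then check whether the smallest period divides $|w|$) can be sketched in Python; this is one possible implementation, not the original author's:

```python
def is_primitive(w: str) -> bool:
    """Return True iff w is primitive, i.e. not u^p for any p > 1.

    Uses the KMP failure function (LPS array): the smallest period of w is
    d = n - lps[n-1]; w is a proper power iff d < n and d divides n.
    """
    n = len(w)
    if n == 0:
        return False  # the empty word is epsilon^p for every p
    lps = [0] * n  # lps[i] = length of longest proper border of w[:i+1]
    k = 0
    for i in range(1, n):
        while k > 0 and w[i] != w[k]:
            k = lps[k - 1]
        if w[i] == w[k]:
            k += 1
        lps[i] = k
    d = n - lps[n - 1]  # smallest period of w
    return d == n or n % d != 0
```

Both phases (building the table and the final divisibility check) are $O(|w|)$, matching the claimed bound.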
https://github.com/protohaven/printed_materials
https://raw.githubusercontent.com/protohaven/printed_materials/main/common-tools/lathe_wood.typ
typst
#import "/meta-environments/env-features.typ": * = Wood Lathe // Do we want a "use the lathe for..." here? == Notes // Specific to Protohaven === Safety The following equipment is required while working at the lathe: - Hearing protection - Closed-toed shoes - An industrial quality Face Shield A well-fitting dust mask is also required for sanding. Loose sleeves must be rolled up and long hair and scarves must be tied back. Avoid any dangling drawstrings or other articles of clothing that may come in contact with the lathe. - Ensure that your piece is secure and mounted appropriately - Wear a face shield and a well-fitted dust mask, especially while sanding - Immediately stop turning if there is excessive vibration or a piece becomes loose === Common Hazards The most significant safety hazard is getting a body part caught in the lathe and being drawn into the machine while it is rotating. This is a potentially deadly hazard and should be carefully avoided by: - Wearing the proper clothes (nothing loose or dangling that can get wrapped around the workpiece or chuck) - Positioning the tool rest properly (keep a good cutting angle, and keep your weight away from the lathe) - Using break-away materials when sanding and finishing (e.g., applying finish with a paper towel instead of a cloth rag) The most common hazard is material breaking off of the lathe and impacting the user. Stop turning immediately if there is excessive vibration or a piece becomes loose. This hazard can be minimized by: - Wearing appropriate PPE - Properly securing materials - Turning at the proper speed - Carefully turning glue-ups or potentially weakened materials _Pinching_ between the tool rest and the turning tool is also a common hazard, caused when the tool makes contact with the piece before being properly placed on the tool rest. Avoid this by always placing the tool against the tool rest, then feeding it into the turning piece. 
While not a primary safety concern, woodturning _catches_ can be frightening and result in damage to your workpiece, or in extreme circumstances, can cause a piece to break from the lathe. Catches are a part of woodturning and require time and practice to minimize. === Prohibited Materials Do not use the wood lathe to turn: - Rotted or split wood - Improperly glued-up workpieces - Off-center workpieces at high speed - Metal Rotted wood, split wood, and improperly glued-up workpieces have the potential to break apart while turning. Off-center workpieces can cause excessive vibration, and may break loose while turning. Secure off-center workpieces carefully, and turn them at lower speeds. Turning metal is inappropriate on the wood lathe. Use the metal lathe in the machine shop for turning metal workpieces. === Care Stop use if you hear scraping or grinding sounds from the machine. This may indicate the machine requires immediate maintenance. Beware of impacts on the tool rest. The tool rest should be smooth, and free of bumps and dips. == Parts of the Lathe === Full View #figure( image("images/lathe-front_view-annotated.png", width: 100%), caption: [ Annotated front view of the lathe. ], ) === Lathe Controls #figure( image("images/lathe-controls-annotated.png", width: 100%), caption: [ Annotated view of the lathe control panel. ], ) === Accessory Kit #figure( image("images/lathe-accessories-annotated.png", width: 100%), caption: [ Annotated view of the contents of the lathe accessory kit. ], ) === On/Off Switch and Emergency Stops Pull the red power button to turn on the lathe. Push the red power button to turn off the lathe. The lathe has two emergency stop buttons. The red power button on the head stock doubles as an emergency stop. The lathe also has a remote emergency stop attached to the power supply. The remote emergency stop has a magnetic base, and can be attached to the lathe body where convenient. 
=== Banjo The banjo is an adjustable metal bracket that is mounted on the lathe bed. The banjo has a movable arm that can be adjusted to hold tools and accessories at different angles and distances from the workpiece. The banjo is commonly set up to hold a tool rest, used to support a tool while turning. === Bed The bed supports and aligns the various components of the lathe. // === Chip Guard === Calipers Calipers are useful to check whether the workpiece has been turned down to a required size. Calipers are available in the accessory kit. === Chuck and Jaws A device that holds the workpiece on the lathe. Always use an appropriate chuck to safely and securely hold a workpiece for turning. Jaws suitable for various workpiece sizes are available in the accessory kit. === Drive Belts The drive belts can be adjusted to change the range of RPMs at which the lathe can spin the workpiece. === Drive Center A drive center attaches to the workpiece and spins it for turning. Drive centers are used for spindle turning, with the workpiece secured between the drive center and the tailstock. === Faceplate A device that holds the workpiece on the lathe. A faceplate is mechanically connected to the workpiece by clamps, fasteners, or other means. The faceplate is then connected to the drive of the lathe to turn the workpiece. Faceplates are more complicated to set up than drive centers or chucks, but are very flexible, and can be used to mount a wide variety of workpiece shapes and sizes. === Headstock Provides the drive for the workpiece, usually through pulleys connected by a belt to the drive motor of the lathe. === Headstock Hand Wheel // & Indexing Use the headstock hand wheel to rotate the workpiece manually in the lathe. Use the hand wheel to check workpiece clearances, or to slowly rotate the workpiece for visual inspection. === Lights The lathe has several work lights attached to help illuminate the workpiece. More light can be very useful when turning fine details. 
=== Live Center A live center is often used in the tailstock to support the workpiece on both sides for spindle turning. The live center should turn freely in the tailstock, without catching or binding. === Speed Controls Use the speed controls to adjust the revolutions per minute (RPM) of the lathe. === Spindle Lock // I am not happy with this definition. Engage the spindle lock to keep the spindle from turning. This is an important safety feature to make sure the lathe will not spin in between turning operations. === Tailstock The movable assembly opposite the headstock that slides along the lathe bed and supports workpieces. === Tailstock Handwheel Use the tailstock handwheel to advance or retract the spindle on the tailstock. === Tool Rest The tool rest is an adjustable bar that steadies and supports a turning tool while it is engaged with the workpiece. The tool rest should be adjusted so that it is as close to the workpiece as possible. After setting and locking the tool rest into place, manually rotate the workpiece to make sure it turns freely and does not contact the tool rest. === Vacuum System The lathe has a vacuum system to mount delicate, low-clearance workpieces to the drive spindle. == Turning Tools === Turning Tool Materials Turning tools are commonly equipped with steel or carbide cutting tips. Tools with steel cutting tips can be resharpened and reshaped as needed.#footnote([Do not reshape the tools in the accessory kit.]) Carbide tools use a small, replaceable tip of carbide for their cutting edge. Carbide is a very hard material that keeps a good edge for a long time, but can easily chip. === Turning Tool Types For *spindle* turning: _Roughing gouges_ are wide, u-shaped fluted turning chisels used to hog off large amounts of material from the workpiece. They are available in many shapes and sizes. _Spindle gouges_ are thinner, more delicate fluted turning chisels meant to apply fine detailing to the profile of the workpiece. 
They are available in many shapes and sizes. _Skew chisels_ are turning chisels meant for fine finishing and planing of the workpiece. Skew chisels are available with a right- and left-handed skew. _Parting tools_ are used to divide the workpiece, often paring away material at the end of the workpiece prior to removal. For *bowl* turning: _Bowl gouges_ are heavy deeply fluted chisels for turning the inside and outside of a bowl. _Scrapers_ use a burr to scrape the surface smooth. They are useful for scraping fine finishes in the interior of a bowl. == Turning Speeds Turning pieces at the proper speed will help ensure better results with greater safety. In general, use lower speeds for roughing and for long or large diameter work. Turn smaller pieces at higher RPMs and larger pieces at lower RPMs. This is not an exact science, so always begin turning a piece at close to the recommended RPMs (see @lathe-turning-speeds), increasing speed only if necessary. #figure( caption: [Recommended turning speeds.], table( columns: (auto, auto, auto, auto), inset: 10pt, align: (horizon + left, horizon + right, horizon + right, horizon + right,), stroke: none, table.header( [*Workpiece Diameter*], [*Roughing RPM*], [*Cutting RPM*], [*Finishing RPM*] ), table.hline(), [Under 2"],[1500],[3000],[3000], [2"–4"], [600], [1500],[2300], [4"–6"], [450], [1100],[1500], [6"–8"], [450], [600], [1100], [8"–10"], [450], [600], [850], [10"–12"], [450], [600], [850], [12"–14"], [450], [450], [600], ) )<lathe-turning-speeds> == Basic Operation: Spindle Turning === Setting up + Engage both emergency stops. + Set the speed control to zero. + Slide the tailstock and banjo away from the headstock. + Install a drive center, faceplate, or chuck to accept the workpiece. + Secure the workpiece in the lathe with a center in the tailstock. + Set the brakes on the tailstock arm and lathe bed to lock the tailstock in place. 
+ Adjust the tool rest to the appropriate height, and leave a 1/8" gap between the tool rest and the workpiece. + Rotate the workpiece one full rotation by hand to make sure the workpiece does not impact the tool rest. + Set the brakes on the banjo and tool rest. + Place turning tools within easy reach. + Secure loose clothing, tie back long hair, and put on PPE. === Workholding Ensure that the workpiece is firmly secured in the lathe. If using a chuck, make sure the chuck is making good contact with the workpiece, and is properly tightened. If using spindles, make sure the spindles have good contact with the workpiece, and will not wander, or escape the workpiece. === Spindle Turning + Release the emergency stops. + Stand outside of the direct line of rotation for the workpiece. \ _If something goes wrong, and the workpiece is ejected from the lathe, do not stand where the workpiece will go. Make sure you and any others are clear of the area._ + Turn on the lathe, and slowly ramp up RPMs to the proper speed. \ _Watch for vibration, loosening of the piece, or other hazards before beginning turning._ + Round the workpiece with roughing tools. + Shape the workpiece with cutting tools. + If you are sanding the workpiece: + Turn off the lathe. + Remove the tool rest. + Run the lathe at slow speed. + Apply sandpaper with light pressure. \ _Do not put so much pressure on the paper that you might fall into the lathe. If using a strip of paper around the workpiece, do not grip it so tightly that it could pull you into the lathe._ + If you are finishing the workpiece: + Turn off the lathe. + Remove the tool rest. + Run the lathe at slow speed. + Apply finish with a paper towel, using light pressure. \ _Do not put so much pressure on the paper towel that you might fall into the lathe._ + Remove the piece with a parting tool: + Part the piece down to a thin spindle. + Turn off the lathe. + Use a handsaw to cut away the remaining wood to finish the removal. 
=== Cleaning Up Woodturning generates a lot of wood chips and sawdust. Give yourself extra time, especially before the shop closes, to ensure you have enough time to fully clean the lathe area. + Wipe any finish or other waste from the lathe as needed: check the bed, tool rest, center(s), chuck, faceplate, and banjo. + Sweep up chips and place in a trash can. + Sweep and/or vacuum the lathe to remove any chips and dust. + Retract the live center into the tailstock housing if needed. + Remove the drive center with a rod hammer, or remove the chuck, faceplate, or adapter as appropriate. + Return all accessories to the Lathe Kit. + Clean the carbide tools of any chips or sawdust and return them to their case. + Push the tool rest in close to the lathe so it won't catch on anything passing by the lathe. If there are any waste pieces large enough to be recycled, place them in the back recycling bin. == Basic Operation: Bowl Turning === Setting up + Engage both emergency stops. + Set the speed control to zero. + Slide the tailstock and banjo away from the headstock. + Install a drive center, faceplate, or chuck to accept the workpiece. + Secure the workpiece in the lathe with a center in the tailstock. + Move the tailstock to the far end of the lathe bed. + Adjust the tool rest to the appropriate height, and leave a 1/8" gap between the tool rest and the workpiece. + Rotate the workpiece one full rotation by hand to make sure the workpiece does not impact the tool rest. + Set the brakes on the banjo and tool rest. + Secure loose clothing, tie back long hair, and put on PPE. === Workholding Ensure that the workpiece is firmly secured in the lathe. If using a chuck, make sure the chuck is making good contact with the workpiece, and is properly tightened. If using a faceplate, make sure the faceplate is well secured to the workpiece, and is securely held. === Bowl Turning + Release the emergency stops. + Stand outside of the direct line of rotation for the workpiece. 
\ _If something goes wrong, and the workpiece is ejected from the lathe, do not stand where the workpiece will go. Make sure you and any others are clear of the area._ + Turn on the lathe, and slowly ramp up RPMs to the proper speed. \ _Watch for vibration, loosening of the piece, or other hazards before beginning turning._ + Round the workpiece with roughing tools. + Shape the workpiece with cutting tools. + If you are sanding the workpiece: + Turn off the lathe. + Remove the tool rest. + Run the lathe at slow speed. + Apply sandpaper with light pressure. \ _Do not put so much pressure on the paper that you might fall into the lathe._ + If you are finishing the workpiece: + Turn off the lathe. + Remove the tool rest. + Run the lathe at slow speed. + Apply finish with a paper towel, using light pressure. \ _Do not put so much pressure on the paper towel that you might fall into the lathe._ === Cleaning Up Woodturning generates a lot of wood chips and sawdust. Give yourself extra time, especially before the shop closes, to ensure you have enough time to fully clean your area. + Wipe any finish or other waste from the lathe as needed: check the bed, tool rest, center(s), chuck, faceplate, and banjo. + Sweep up chips and place in a trash can. + Sweep and/or vacuum the lathe to remove any chips and dust. + Retract the live center into the tailstock housing if needed. + Remove the drive center with a rod hammer, or remove the chuck, faceplate, or adapter as appropriate. + Return all accessories to the Lathe Kit. + Clean the carbide tools of any chips or sawdust and return them to their case. + Push the tool rest in close to the lathe so it won't catch on anything passing by the lathe. If there are any waste pieces large enough to be recycled, place them in the back recycling bin.
https://github.com/hugo-b-r/insa-template-typst
https://raw.githubusercontent.com/hugo-b-r/insa-template-typst/master/examples/stage.typ
typst
#import "../templates/stage.typ":project #show: project.with( title: "Rapport de stage de découverte de l'entreprise du 29 Mai au 28 Juin 2019", student: "<NAME>", company_referent: "<NAME>", school_referent: "Y neuça", group_number: 7, promo_number: 42, company: "Apple", front_comment: "Comment une entreprise de la taille d'apple arrive-t-elle a exister ?", front_image: "../assets/apple_park.jpg", company_logo: "../assets/apple_logo.jpg" )
https://github.com/noaa-afsc/nmfs-sar-template
https://raw.githubusercontent.com/noaa-afsc/nmfs-sar-template/main/_extensions/nmfs-sar-template/typst-template.typ
typst
// This is an example typst template (based on the default template that ships // with Quarto). It defines a typst function named 'article' which provides // various customization options. This function is called from the // 'typst-show.typ' file (which maps Pandoc metadata function arguments) // // If you are creating or packaging a custom typst template you will likely // want to replace this file and 'typst-show.typ' entirely. You can find // documentation on creating typst templates and some examples here: // - https://typst.app/docs/tutorial/making-a-template/ // - https://github.com/typst/templates #let article( title: none, authors: none, date: none, abstract: none, abstract-title: none, cols: 1, margin: (x: 1.22in, y: 1.2in), paper: "us-letter", lang: "en", region: "US", font: (), fontsize: 11pt, sectionnumbering: none, toc: false, toc_title: none, toc_depth: none, toc_indent: 1.5em, doc, ) = { set page( paper: paper, margin: margin, numbering: "1", header: align(right + horizon)[ #set text( font: "Source Sans 3", fill: rgb("#5EB6D9")) ALASKA FISHERIES SCIENCE CENTER], // Define the background for the first page background: context { if(counter(page).get().at(0)== 1) { align(left + top)[ #image("assets/22Fisheries SEA_T1 CornerTall.png", width: 35%) ]} } ) set par(justify: true) set text(lang: lang, region: region, font: font, size: fontsize, fill: rgb("#323C46")) set heading(numbering: sectionnumbering) if title != none { [#grid(columns: (35%, 1fr))[][ #text(weight: "regular", size: 1.5em, font: "Source Sans 3", fill: rgb("#00559B"))[#title] ]] } line(length: 100%, stroke: rgb("#00559B")) v(2em) grid( columns: (75%,25%), if abstract != none { block(fill: rgb("#F1F2F3"), inset: 1em)[ #text(font: "Source Sans 3")[#abstract] ] }, // Display the authors list. 
for i in range(calc.ceil(authors.len() / 3)) { let end = calc.min((i + 1) * 3, authors.len()) let is-last = authors.len() == end let slice = authors.slice(i * 3, end) set align(right) grid( columns: 1, rows: slice.len(), row-gutter: 1em, ..slice.map(author => align(right, { text(font: "Source Sans 3", weight: "semibold", author.name) if "email" in author [ \ #text(font: "Source Sans 3", size: 0.75em, author.email) ] })) ) if not is-last { v(16pt, weak: true) } } ) v(2em, weak: true) if date != none { align(center)[#block(inset: 1em)[ #date ]] } if toc { let title = if toc_title == none { auto } else { toc_title } block(above: 0em, below: 2em)[ #outline( title: toc_title, depth: toc_depth, indent: toc_indent ); ] } if cols == 1 { doc } else { columns(cols, doc) } } #set table( inset: 6pt, stroke: none )
https://github.com/HellOwhatAs/whiledb-typ
https://raw.githubusercontent.com/HellOwhatAs/whiledb-typ/main/example.typ
typst
#import "@local/whiledb:0.1.0": whiledb_exec

#set heading(numbering: "1.")
#set page(height: auto, width: auto, fill: white)

#let whiledb(src, stdins: ("",)) = {
  assert(type(stdins) == array, message: "stdins must be array")
  let src = if type(src) == str { src } else { src.text }
  raw(src)
  stack(dir: ltr, spacing: 2em, ..{
    stdins.map(stdin => {
      let stdin = if type(stdin) == str { stdin } else { stdin.text }
      let res = whiledb_exec(src, stdin: stdin)
      set block(inset: 6pt, radius: 5pt, spacing: 1pt)
      if stdin.len() > 0 {
        block(fill: rgb("#e5ceec"), raw("stdin> " + stdin))
      }
      block(fill: rgb("#d8dde8"), text(
        fill: if res.err { red } else { rgb("#1d2433") },
        raw(res.result),
      ))
    })
  })
}

= Errors

#whiledb(```
write_int(123 / 0);
```)
#whiledb(```
// parse err, no comment
```)
#whiledb(```
*10
```)
#whiledb(```
*10 = 10
```)
#whiledb(```
1 + 2 = 3
```)
#whiledb(```
func()
```)
#whiledb(```
variable
```)
#whiledb(```
read_int()
```, stdins: (`char`,))

= Basic

#whiledb(```
n = read_int();
m = n + 1;
write_int(m + 2);
write_char(10)
```, stdins: ("7", "10", "13"))

= Branches

#whiledb(```
var x;
x = read_int();
if (x > 0) then {
    while (x > 0) do { x = x - 1 }
} else {
    if (x < 0) then { write_int(0) } else { write_int(1) }
}
```, stdins: ("0", "1", "-1"))

= Prime Judge

#whiledb(```
n = read_int();
i = 2;
flag = 1;
while (flag && i * i <= n) do {
    if (n % i == 0) then { flag = 0 } else { flag = 1 };
    i = i + 1
};
if (flag) then {
    write_char(80); write_char(82); write_char(73);
    write_char(77); write_char(69); write_char(10)
} else {
    write_char(78); write_char(79); write_char(78);
    write_char(80); write_char(82); write_char(73);
    write_char(77); write_char(69); write_char(10)
}
```, stdins: ("5", "6", "7", "8", "13"))

= Loops

#whiledb(```
n = read_int();
i = 0;
s = 0;
while (i < n) do {
    s = s + read_int();
    i = i + 1
};
write_int(s);
write_char(10)
```, stdins: (```
5
1 2 3 4 5
```, ```
10
2 4 6 8 10 12 14 16 18 20
```))

= Abs

#whiledb(```
n = read_int();
if (n >= 0) then { write_int(n) } else { write_int(- n) };
write_char(10)
```, stdins: (`-123`, `125`, `-0`, `0`))

= Linked List

#whiledb(```
var n; var i; var p; var q; var s;
n = read_int();
i = 0;
p = 0;
while (i < n) do {
    q = malloc(2);
    * q = read_int();
    * (q + 1) = p;
    p = q;
    i = i + 1
};
s = 0;
while (p != 0) do {
    s = s + * p;
    p = * (p + 1)
};
write_int(s);
write_char(10)
```, stdins: ("1 1", "2 2 4", "3 3 3 3"))
https://github.com/Myriad-Dreamin/shiroa
https://raw.githubusercontent.com/Myriad-Dreamin/shiroa/main/github-pages/docs/format/build-meta.typ
typst
Apache License 2.0
#import "/github-pages/docs/book.typ": book-page

#show: book-page.with(title: "Build Metadata")

= Build Metadata

#let type-hint(t, required: false) = {
  {
    set text(weight: 400, size: 16pt)
    if required { " (required) " }
  }
  {
    text(fill: red, raw(t))
  }
}

=== dest-dir #type-hint("string")

The directory to put the rendered book in. By default this is `book/` in the book's root directory. This can be *overridden* with the `--dest-dir` CLI option.

```typ
#build-meta(
  dest-dir: "../dist",
)
```

When you set it to `../dist`, `shiroa` will output the rendered book to `parent/to/book.typ/../../dist`, which resolves to `parent/dist`.
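The path arithmetic in the last sentence can be checked with a short sketch (illustrative only, not part of `shiroa`; the paths are the hypothetical ones from the example above):

```python
# Resolve the dest-dir path from the example above.
# posixpath is used so the result is deterministic on every OS.
import posixpath

out = posixpath.normpath("parent/to/book.typ/../../dist")
print(out)  # parent/dist
```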
https://github.com/antonWetzel/Masterarbeit
https://raw.githubusercontent.com/antonWetzel/Masterarbeit/main/arbeit/appendix.typ
typst
#import "setup.typ": *

= Appendix

== Glossar

/ Koordinatensystem: ist eine Menge von Achsen, mit denen eine Position genau beschrieben werden kann. Für kartesische Koordinaten ist die x-Achse nach rechts, die y-Achse nach oben und die z-Achse nach hinten orientiert.
/ Punkt: ist die Kombination von einer dreidimensionalen Position und zusätzlichen Informationen.
/ Punktwolke: ist eine Menge von Punkten. Für alle Punkte sind die gleichen zusätzlichen Informationen vorhanden.
/ Normale: ist ein normalisierter dreidimensionaler Vektor, welcher die Orientierung der Oberfläche von einem Objekt angibt. Der Vektor ist dabei orthogonal zur Oberfläche, kann aber in das Objekt hinein oder aus dem Objekt heraus gerichtet sein.
/ Voxel: ist ein Würfel im dreidimensionalen Raum. Die Position und Größe vom Voxel kann explizit abgespeichert oder relativ zu den umliegenden Voxeln bestimmt werden.
/ Tree: ist eine Datenstruktur, bestehend aus Knoten, welche wiederum Kinderknoten haben können. Die Knoten selbst können weitere Informationen enthalten. Die Knoten sind eindeutig sortiert, wodurch ein Knoten niemals direkter oder indirekter Kinderknoten von sich selbst ist.
/ Octree: ist ein Tree, bei dem ein Knoten acht Kinderknoten haben kann. Mit einem Octree kann ein Voxel aufgeteilt werden. Jeder Knoten gehört dabei zu einem Voxel, welcher gleichmäßig auf die Kinderknoten weiter unterteilt wird.
/ Leaf-Knoten: ist ein Knoten, welcher keine weiteren Kinderknoten hat. Für Punktwolken gehört jeder Punkt zu genau einem Leaf-Knoten.
/ Branch-Knoten: ist ein Knoten, welcher weitere Kinderknoten hat.
/ Root-Knoten: ist der erste Knoten im Tree; alle anderen Knoten sind direkte oder indirekte Kinderknoten vom Root-Knoten.
/ KD-Baum: ist eine Datenstruktur, um im $k$-dimensionalen Raum für eine Position die nächsten Punkte zu bestimmen.
/ Greedy-Algorithmus: ist eine Kategorie von Algorithmen, bei denen das Ergebnis schrittweise berechnet wird.
Bei jedem Schritt wird mit den momentanen Informationen die beste Entscheidung getroffen, wodurch das Ergebnis schnell, aber meist nicht global optimal berechnet wird.

#pagebreak(weak: true)

== KD-Baum <kd_baum>

Für eine Menge von Punkten kann ein KD-Baum berechnet werden. Mit diesem kann effizient bestimmt werden, welche Punkte innerhalb einer Kugel mit beliebiger Position und beliebigem Radius liegen. Ein Beispiel für einen KD-Baum ist in @appendix_kd_baum gegeben. Für jede Unterteilung ist die Trenngerade mit weniger Punkten gezeichnet. Weil der rote Kreis vollständig links der ersten Unterteilung liegt, müssen die Punkte rechts nicht betrachtet werden.

#figure(
  caption: [
    KD-Baum für Punkte in 2D.
  ],
  cetz.canvas(length: 2cm, {
    import "triangulierung.typ": positions
    import cetz.draw: *

    for pos in positions {
      circle(pos, radius: 0.1, fill: black)
    }
    circle((1.7, 0), radius: 0.1, fill: black)

    let x0 = positions.at(3).at(0)
    line((x0, 1.5), (x0, -1), stroke: black + 2pt)

    let y1 = positions.at(0).at(1)
    line((-0.5, y1), (x0, y1), stroke: (dash: "densely-dotted", thickness: 2pt))
    let x2 = positions.at(1).at(0)
    line((x2, 1.5), (x2, y1), stroke: (dash: "dotted", thickness: 2pt))
    let y3 = positions.at(2).at(1)
    line((x2, y3), (x0, y3), stroke: (dash: "loosely-dotted", thickness: 2pt))
    let x2 = positions.at(9).at(0)
    line((x2, y1), (x2, -1), stroke: (dash: "dotted", thickness: 2pt))
    let y3 = positions.at(8).at(1)
    line((x2, y3), (x0, y3), stroke: (dash: "loosely-dotted", thickness: 2pt))

    let y1 = 0
    line((x0, y1), (2.5, y1), stroke: (dash: "densely-dotted", thickness: 2pt))
    let x2 = positions.at(4).at(0)
    line((x2, 1.5), (x2, y1), stroke: (dash: "dotted", thickness: 2pt))
    let y3 = positions.at(5).at(1)
    line((x2, y3), (2.5, y3), stroke: (dash: "loosely-dotted", thickness: 2pt))
    let x2 = positions.at(7).at(0)
    line((x2, y1), (x2, -1), stroke: (dash: "dotted", thickness: 2pt))
    let y3 = positions.at(6).at(1)
    line((x2, y3), (2.5, y3), stroke: (dash: "loosely-dotted", thickness: 2pt))

    circle((0.8, 0.1), radius: 0.4, stroke: red)
  }),
) <appendix_kd_baum>

=== Konstruktion

Für die Konstruktion von einem KD-Baum werden nur die Positionen der Punkte benötigt. Zuerst wird für die Punkte entlang der ersten Dimension der Median bestimmt. Dabei wird der _Quickselect_-Algorithmus @quickselect verwendet. Der Median hat als Index die halbe Anzahl der Punkte. Ist die Anzahl der Punkte ungerade, so kann der Index auf- oder abgerundet werden, solange bei der Suche die gleiche Strategie verwendet wird.

Wie beim _Quicksort_-Algorithmus wird ein beliebiges Pivot-Element ausgewählt, mit dem die Positionen entlang der Dimension unterteilt werden. Die Positionen werden einmal iteriert, dabei werden kleinere Positionen vor das Pivot-Element und größere Positionen hinter das Pivot-Element verschoben. Das Pivot-Element liegt danach an dem Index, an dem es auch in der sortierten Liste stehen würde. Um den Median zu finden, wird nur der Teil der Punkte weiter betrachtet, welcher den Median beinhaltet. Die Unterteilung wird so lange wiederholt, bis der Bereich nur noch einen Punkt enthält und der Median damit bekannt ist.

Durch den _Quickselect_-Algorithmus sind die Positionen nach der Bestimmung vom Median in kleinere und größere Positionen unterteilt. Die Ebene durch den Punkt teilt dabei den Raum, und alle Punkte mit kleinerem Index liegen auf der anderen Seite als die Punkte mit größerem Index. Die beiden Hälften werden auf die gleiche Weise weiter unterteilt. Dabei wird die nächste Dimension verwendet, beziehungsweise nach der letzten Dimension wieder die erste. Der zugehörige Binärbaum muss nicht gespeichert werden, da dieser implizit entsteht. Für jede Unterteilung wird die Position vom Median gespeichert, weil diese für die Suchanfragen benötigt wird.

=== Suche mit festem Radius

Bei dieser Suchanfrage werden alle Punkte gesucht, welche in einer Kugel mit bekanntem Zentrum und Radius liegen. Von dem Root-Knoten aus wird der Baum durchsucht. Bei jeder Unterteilung wird überprüft, wie die Kugel zur teilenden Ebene liegt.
Ist die Kugel vollständig auf einer Seite, so muss nur der zugehörige Teilbaum weiter durchsucht werden. Liegen Teile der Kugel auf beiden Seiten, so müssen beide Teilbäume weiter durchsucht werden. Bei jeder Unterteilung wird außerdem überprüft, ob der zugehörige Punkt in der Kugel liegt; gegebenenfalls wird er zum Ergebnis hinzugefügt.

Mit der gleichen Methode kann effizient bestimmt werden, ob eine Kugel leer ist. Dafür wird beim ersten gefundenen Punkt in der Kugel die Suche abgebrochen.

=== Suche mit fester Anzahl

Bei dieser Suchanfrage werden für eine feste Anzahl $k$ die $k$-nächsten Punkte für ein bestimmtes Zentrum gesucht. Dafür werden die momentan $k$-nächsten Punkte gespeichert und nach Entfernung sortiert. Die Entfernung zum $k$-ten Punkt wird als Maximaldistanz verwendet. Solange noch keine $k$ Punkte gefunden sind, kann unendlich oder ein beliebiger Wert als Maximalabstand verwendet werden.

Es wird wieder vom Root-Knoten aus der Baum durchsucht. Bei jeder Unterteilung wird zuerst in der Hälfte vom Baum weitergesucht, die das Zentrum enthält. Dabei werden die Punkte zu den besten Punkten hinzugefügt, die näher am Zentrum als die Maximaldistanz liegen. Sobald $k$ Punkte gefunden sind, wird dadurch die Maximaldistanz kleiner, weil der Punkt mit der alten Maximaldistanz nicht mehr zu den $k$-nächsten Punkten gehört.

Nachdem ein Teilbaum vollständig durchsucht ist, wird überprüft, ob Punkte aus dem anderen Teilbaum näher am Zentrum liegen können. Dafür wird der Abstand vom Zentrum zur Ebene bestimmt. Ist der Abstand größer als die momentane Maximaldistanz, so kann kein Punkt des anderen Teilbaums näher am Zentrum liegen und der Teilbaum muss nicht weiter betrachtet werden.

=== Verbesserte Suche für kleine Teilbäume

Sobald ein Teilbaum nur noch wenige Punkte beinhaltet, ist es langsamer zu überprüfen, welche Punkte näher sein können, als alle Punkte direkt zu betrachten.
Deshalb werden für Teilbäume mit weniger als #number(32) Punkten die Punkte linear iteriert, wodurch Rekursion vermieden wird.

#pagebreak(weak: true)

== Baum (Datenstruktur)

Ein Baum ermöglicht es, räumlich dünnbesetzte Daten effizient zu speichern. Dafür wird der Raum unterteilt, und nur für Bereiche mit Daten werden weitere Knoten gespeichert.

=== Konstruktion

Zuerst wird die räumliche Ausdehnung der Daten bestimmt. Dieser Bereich wird dem Root-Knoten zugeordnet. Solange noch zu viele Datenwerte im Bereich von einem Knoten liegen, wird dieser weiter unterteilt. Dafür wird der zugehörige Bereich entlang aller Dimensionen halbiert und jeder Teilbereich einem Kinderknoten zugeordnet. Bei einem Quadtree in 2D entstehen dadurch vier Kinderknoten und bei einem Octree in 3D acht Kinderknoten. Die Daten vom Knoten werden auf die Kinderknoten aufgeteilt. Der unterteilte Knoten speichert stattdessen die Kinderknoten. In @quadtree und @octree sind Beispiele in 2D und 3D gegeben.

#figure(
  caption: [Unterschiedliche Stufen von einem Quadtree.],
  cetz.canvas({
    import cetz.draw: *

    rect((0, 0), (2, 2), fill: blue)

    set-origin((5, 0))
    rect((0, 0), (2, 2), stroke: gray)
    rect((0, 0), (1, 1), fill: blue)
    rect((0, 1), (1, 2), fill: blue)
    rect((1, 1), (2, 2), fill: blue)

    set-origin((5, 0))
    rect((0, 0), (2, 2), stroke: gray)
    rect((0, 0), (1, 1), stroke: gray)
    rect((0, 1), (1, 2), stroke: gray)
    rect((1, 1), (2, 2), stroke: gray)
    rect((0.0, 0.0), (0.5, 0.5), fill: blue)
    rect((0.0, 0.5), (0.5, 1.0), fill: blue)
    rect((0.0, 1.0), (0.5, 1.5), fill: blue)
    rect((0.5, 1.0), (1.0, 1.5), fill: blue)
    rect((0.0, 1.5), (0.5, 2.0), fill: blue)
    rect((0.5, 1.5), (1.0, 2.0), fill: blue)
    rect((1.0, 1.5), (1.5, 2.0), fill: blue)
  }),
) <quadtree>

#figure(
  caption: [Unterschiedliche Stufen von einem Octree.],
  cetz.canvas({
    import cetz.draw: *

    let frontal((ax, ay, az), size, fill: black, stroke: black) = {
      line((ax + size, ay, az), (ax + size, ay + size, az), stroke: stroke)
      line((ax, ay + size, az), (ax + size, ay + size, az), stroke: stroke)
      line((ax + size, ay + size, az + size), (ax + size, ay + size, az), stroke: stroke)
    }
    let background((ax, ay, az), size, fill: black, stroke: black) = {
      line(
        (ax, ay, az),
        (ax + size, ay, az),
        (ax + size, ay, az + size),
        (ax + size, ay + size, az + size),
        (ax, ay + size, az + size),
        (ax, ay + size, az),
        close: true,
        fill: fill,
        stroke: stroke,
      )
    }
    let cube((ax, ay, az), size, fill: black, stroke: black) = {
      background((ax, ay, az), size, fill: fill, stroke: stroke)
      frontal((ax, ay, az), size, fill: fill, stroke: stroke)
    }

    cube((0, 0, 0), 2, fill: blue)

    set-origin((5, 0))
    background((0, 0, 0), 2, fill: auto, stroke: gray)
    cube((0, 0, 1), 1, fill: blue)
    cube((0, 0, 0), 1, fill: blue)
    cube((0, 1, 1), 1, fill: blue)
    cube((1, 1, 1), 1, fill: blue)
    frontal((0, 0, 0), 2, fill: auto, stroke: gray)

    set-origin((5, 0))
    background((0, 0, 0), 2, fill: auto, stroke: gray)
    cube((0, 0, 1.5), 0.5, fill: blue)
    cube((0, 0.5, 1.5), 0.5, fill: blue)
    cube((0, 1.0, 1.5), 0.5, fill: blue)
    cube((0.5, 1.0, 1.5), 0.5, fill: blue)
    cube((1.0, 1.0, 1.5), 0.5, fill: blue)
    cube((1.5, 1.0, 1.5), 0.5, fill: blue)
    cube((1.5, 1.0, 1.0), 0.5, fill: blue)
    cube((0, 0, 1.0), 0.5, fill: blue)
    cube((0, 0, 0.5), 0.5, fill: blue)
    cube((0.5, 0, 0.5), 0.5, fill: blue)
    cube((0, 0, 0), 0.5, fill: blue)
    frontal((0, 0, 0), 2, fill: auto, stroke: gray)
  }),
) <octree>

=== Suchanfrage

Bei einer Suchanfrage wird vom Root-Knoten aus der Leaf-Knoten gesucht, welcher die gesuchte Position enthält. Am Anfang wird der Root-Knoten als momentaner Knoten verwendet. Solange der momentane Knoten ein Branch-Knoten ist, wird der Kinderknoten gesucht, welcher die Position enthält, und als neuer momentaner Knoten verwendet. Wenn der momentane Knoten ein Leaf-Knoten ist, wurde der gesuchte Leaf-Knoten gefunden.
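Die oben beschriebene Unterteilung und Leaf-Suche lässt sich in 2D als Quadtree skizzieren (der Octree in 3D ist analog). Die folgende Python-Skizze ist rein illustrativ und stammt nicht aus der Arbeit; Klassennamen und die Kapazitätsgrenze sind Annahmen:

```python
# Illustrative quadtree: a node subdivides into four children once it
# holds more points than its capacity; queries descend to the leaf
# whose region contains the position.

class QuadNode:
    def __init__(self, x, y, size, capacity=4):
        self.x, self.y, self.size = x, y, size  # lower-left corner, edge length
        self.capacity = capacity
        self.points = []                        # used only while this is a leaf
        self.children = None                    # four children once subdivided

    def insert(self, px, py):
        if self.children is None:
            self.points.append((px, py))
            if len(self.points) > self.capacity:
                self._subdivide()
            return
        self._child_for(px, py).insert(px, py)

    def _subdivide(self):
        h = self.size / 2
        self.children = [
            QuadNode(self.x + dx * h, self.y + dy * h, h, self.capacity)
            for dy in (0, 1) for dx in (0, 1)
        ]
        pts, self.points = self.points, []
        for px, py in pts:                      # redistribute points to children
            self._child_for(px, py).insert(px, py)

    def _child_for(self, px, py):
        h = self.size / 2
        ix = 1 if px >= self.x + h else 0
        iy = 1 if py >= self.y + h else 0
        return self.children[iy * 2 + ix]

    def find_leaf(self, px, py):
        node = self
        while node.children is not None:        # descend until a leaf is reached
            node = node._child_for(px, py)
        return node
```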
== Systemeigenschaften <systemeigenschaften>

#figure(
  caption: [Überblick über die Systemeigenschaften],
  table(
    columns: (1fr, 2fr),
    align: (left, right),
    [Betriebssystem], [Windows 11],
    [Prozessor], [Intel(R) Core(TM) i7-9700KF CPU \@ 3.60 GHz],
    [Grafikkarte], [NVIDIA GeForce GTX 1660 SUPER],
    [RAM], [2 x G.Skill F4-3200C16-8GIS],
    [Festplatte], [SanDisk SSD PLUS 2000GB],
  ),
)

#figure(
  caption: [Prozessoreigenschaften],
  table(
    columns: (1fr, 2fr),
    align: (left, right),
    [Physische Kerne], $8$,
    [Logische Kerne], $8$,
    [Maximale Taktrate], [#number(4.6) GHz],
    [Cachegröße (L1, L2, L3)], [$512 "KiB"$, $2 "MiB"$, $12 "MiB"$],
  ),
)

#figure(
  caption: [Grafikkarteneigenschaften],
  table(
    columns: (1fr, 2fr),
    align: (left, right),
    [Basistaktung], $1530 "MHz"$,
    [Boost-Taktung], $1785 "MHz"$,
    [Speicherkonfiguration], $6 "GB GDDR6"$,
    [Speicherschnittstelle], $192 "Bit"$,
  ),
)

#figure(
  caption: [RAM-Eigenschaften],
  table(
    columns: (1fr, 2fr),
    align: (left, right),
    [Größe], $2 times 8 "GiB"$,
    [Taktrate], $2133 "MHz"$,
  ),
)

#figure(
  caption: [Sequenzielle Lese- und Schreibgeschwindigkeit der Festplatte mit $4 "KiB"$ Blöcken für unterschiedliche Dateigrößen.],
  table(
    columns: (1fr, 1fr, 1fr, 1fr),
    align: (left, right, right, right),
    [*Aktion*], [*1 GiB*], [*5 GiB*], [*10 GiB*],
    [Lesen], $776 "MiB/s"$, $384 "MiB/s"$, $218 "MiB/s"$,
    [Schreiben], $1739 "MiB/s"$, $1929 "MiB/s"$, $270 "MiB/s"$,
  ),
)

== Messwerte vom Import <messwerte>

#[
  #set text(size: 0.8em)
  #set par(justify: false)

  #let data = csv("../data/werte.tsv", delimiter: "\t")
  #figure(
    table(
      align: (x, y) => if y == 0 { center } else { (left, left, right, right, right).at(x) },
      columns: (1fr, 2.5fr, 1fr, 1fr, 1.2fr),
      [#box(width: 2.2cm)[*Datensatz*]],
      [*Datei*],
      [*Daten*#linebreak() (Punkte)],
      [*Segmente* (Punkte)],
      [#box[*Detailstufen*]#linebreak() (Punkte)],
      ..data.flatten(),
    ),
    caption: [Messwerte (Verwendete Datei und Punktanzahl).],
  ) <messwerte_1>

  #let data = csv("../data/werte_2.tsv", delimiter: "\t")
  #figure(
    table(
      align: (x, y) => if y == 0 { center } else { (left, right, right, right, right, right).at(x) },
      columns: (1fr, 1fr, 1fr, 1.5fr, 1fr, 1fr),
      [#box(width: 2.2cm)[*Datensatz*]],
      [*Segmente* (Anzahl)],
      [*Laden* (Sekunden)],
      [*Segmentierung* (Sekunden)],
      [*Analyse* (Sekunden)],
      [*Detailstufen* (Sekunden)],
      ..data.flatten(),
    ),
    caption: [Messwerte (Anzahl Segmente und Importgeschwindigkeit).],
  ) <messwerte_2>
]
https://github.com/GuilloteauQ/qcv
https://raw.githubusercontent.com/GuilloteauQ/qcv/main/main.typ
typst
MIT License
#import "qcv.typ": *

#show: qcv.with()

#header(
  "<NAME>",
  email: "<EMAIL>",
  webpage: "https://john.doe.org",
  orcid: "1111-1111-1111-1111",
)

#let max_year = 2024
#let min_year = 2017
#let entry(..args) = {
  raw_entry(min_year, max_year, ..args)
}

= Education

#entry(2020, 2023, title: "Phd Student", details: lorem(30))
#entry(2018, 2020, title: "Master Degree", details: lorem(20))

= Professional Experience

#entry(2019, 2019, title: "Intern", details: lorem(20))

= Publications

#bibliography("references.bib", full: true, style: "ieee", title: none)
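The `entry` wrapper above is plain argument forwarding: it fixes the year range once and passes everything else through to `raw_entry`. The same pattern in Python (illustrative only; this `raw_entry` is a hypothetical stand-in, not qcv's actual function):

```python
# Fix the leading arguments once, forward the rest — the analogue of
# `#let entry(..args) = raw_entry(min_year, max_year, ..args)`.
from functools import partial

def raw_entry(min_year, max_year, start, end, title="", details=""):
    # Hypothetical renderer: qcv's real raw_entry draws a timeline bar.
    return f"[{min_year}-{max_year}] {start}-{end}: {title}"

entry = partial(raw_entry, 2017, 2024)
print(entry(2020, 2023, title="Phd Student"))  # [2017-2024] 2020-2023: Phd Student
```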
https://github.com/Leadlegend/Curriculum-Vitae
https://raw.githubusercontent.com/Leadlegend/Curriculum-Vitae/main/modules/projects.typ
typst
Apache License 2.0
#import "../template.typ": *

#cvSection("Research Experience")

#cvEntry(
  title: [Exploring Few-Shot Learning of Large LMs on Document-level Relation Extraction],
  society: [Supervisor: Associate Prof. <NAME>, Wangxuan Institute of Computer Technology, Peking University],
  date: [2023],
  location: [Beijing, China],
  description: list(
    [Reviewed the few-shot learning (FSL) performance of large language models (LLMs) on mainstream NLP tasks, and investigated key factors contributing to models' generalization ability, especially their pre-training phases such as instruction tuning and prompt learning.],
    [Studied the limitations of document-level relation extraction (DocRE) in supervised learning settings, and explored the challenges and benefits of conducting the DocRE task in an FSL setting.],
    [Explored the influences on LLMs' DocRE generalization ability through FSL ablation experiments on the scientific LLM suite Pythia, especially the number of samples and the model parameter count.],
    [Validated the facilitating effect of a positive correlation between pre-training corpus and inference data on the DocRE task, and conducted experiments to check the correlation saliency for different models.],
  ),
)

#cvEntry(
  title: [Interpretable Neural Network for Drug Response Prediction],
  society: [Supervisor: Assistant Prof. <NAME>, <NAME>en School of CSE, University of Washington],
  date: [2022],
  location: [Seattle, WA],
  description: list(
    [Investigated the interpretability of neural networks, a critical problem in BioNLP, especially the way features and information are encoded among neurons in models and how to comprehend it.],
    [#link("https://github.com/Leadlegend/DrugCell")[Reproduced DrugCell], a canonical interpretable model for drug response prediction on cancer cell lines, and optimized the model's inference efficiency and prediction accuracy.],
    [Designed a new interpretable model architecture for drug response prediction: Readable Neural Networks, which extracted contextual text embeddings of Gene Ontology terms from PubMed literature through distant supervision.],
  ),
)

#cvEntry(
  title: [Development of Commonsense-based Question Generation Models],
  society: [Supervisor: Associate Prof. <NAME>, Institute of Computational Linguistics, PKU],
  date: [2020],
  location: [Beijing, China],
  description: list(
    [Independently designed and implemented a #link("https://github.com/Leadlegend/Commonsense-based-Question-Generation")[seq-to-seq question generation model], leveraging prior knowledge from a knowledge graph to enhance model performance and the quality of generated output.],
    [Reviewed the development of pre-trained NLG methods (BERTsum, BART, ProphetNet, etc.), especially focusing on text summarization, and designed feasible ways to introduce the pretraining paradigm into the question generation task.],
  ),
)
https://github.com/7sDream/fonts-and-layout-zhCN
https://raw.githubusercontent.com/7sDream/fonts-and-layout-zhCN/master/chapters/05-features/shaping/mapping.typ
typst
Other
#import "/template/template.typ": web-page-template
#import "/template/components.typ": note
#import "/template/lang.typ": arabic, devanagari
#import "/lib/glossary.typ": tr

#show: web-page-template

// ### Mapping and reordering

=== 映射和重排序

// The first thing the shaper does is map the Unicode characters in the input into a series of glyph IDs, internal to the font. (I call this resulting series the "glyph stream", but that's not a common expression. Shaper implementers may call it the *buffer*.) For some scripts, this mapping is easy. You simply use the character map (`cmap` table) to turn Unicode characters into glyph IDs internal to the font. Most scripts, however, need a bit of help when moving from the Unicode world of characters to the OpenType world of glyphs. This "help" is the logic provided by complex shapers; there are a bunch of "complex shapers" as part of an OpenType shaping engine, each handling text in a different script or family of scripts.

#tr[shaping]工作的第一步是#tr[shaper]将输入的Unicode#tr[character]映射为在字体内部使用的#tr[glyph]ID序列。我通常会将这个序列称为“#tr[glyph]流”,但这并不是通用术语,#tr[shaper]的某些实现可能就称其为“缓冲区”。对于某些#tr[scripts]来说,这种映射非常简单,只需要直接使用#tr[character]映射表(`cmap` 表)即可。但绝大多数#tr[scripts]在这一步骤中都需要一些额外的帮助。这些“帮助”通常由支持复杂#tr[scripts]的#tr[shaper]提供。基于OpenType技术的#tr[shaping]引擎内部通常会有各种复杂#tr[scripts]#tr[shaper],每个#tr[shaper]用于支持一种(或一个家族的)#tr[script]。

// So, for example, if your text is in Arabic, it will come in as a series of codepoints which don't contain any "topographic" information: the string ججج is made up of the same Unicode code point three times (U+062C ARABIC LETTER JEEM). But it needs to come out as three different glyphs, one for "initial jeem", one for "medial jeem" and one for "final jeem". In this case, there's a part of the shaping engine which specifically knows how to help process Arabic, and it goes through the Unicode input annotating it with what position the glyphs need to be in. It knows how Arabic "works": it knows that if you have جاج (JEEM ALIF JEEM), the first JEEM goes in initial form because it links to the ALIF but the second JEEM stays how it is because the letter ALIF does not join to its left. After it has done this annotation, it will apply the rules you specify for initial form substitutions *only* to those parts of the glyph stream which are marked as being in initial form, and so on for medial, final and isolated forms.

假设输入文本是阿拉伯文,它首先会被转化为不包含任何#tr[typography]信息的一串#tr[codepoint]。举例来说,字符串 #arabic[ججج] 由同一个Unicode#tr[codepoint](`U+062C ARABIC LETTER JEEM`)重复三次组成,但在后续流程中它们需要被映射为三个不同的#tr[glyph]。第一个#tr[codepoint]映射为“首部JEEM”,第二个则是“中部JEEM”,最后的是“尾部JEEM”。在此情境中,#tr[shaping]引擎需要知道如何处理阿拉伯文,它映射出的#tr[glyph]需要附带上其所处位置的信息。这也就是说,#tr[shaping]引擎了解有关阿拉伯文的书写规则。比如它知道在文本 #arabic[جاج](JEEM ALIF JEEM)中,第一个JEEM因为和 ALIF 连接,所以它需要是首部形式。而第二个 JEEM 则不变形,因为 ALIF 不应继续向左连接。在#tr[shaping]引擎进行了类似这样的位置标注之后,代码中的只针对首部样式的#tr[substitution]规则才能只应用于对应位置的#tr[glyph]们。中部、尾部以及独立样式也同理。

// Other scripts require different kinds of help to move from the Unicode world to the OpenType world. The way that Unicode defines the encoding of scripts is sometimes a little bit different from the order that those scripts are written in. As a simple example, the Devanagari sequence कि ("ki") is encoded with the consonant ka (क) first and then the vowel i (ि) second. But visually - when you type or print - the vowel needs to appear first. So the shaping engine has to again "help" the font by reordering the glyph stream: it puts any vowels which need to visually appear first - "pre-base vowels" - before the base consonant. This is just a convenience; it's much easier for us as the engineer to handle the glyphs `iMatra-deva ka-deva` than it would be to be handed `ka-deva iMatra-deva` as a straight Unicode-to-glyph conversion, and then be left having to shuffle the glyphs around in your font's OpenType rules.

其他#tr[scripts]各自需要不同类型的帮助才能顺利地从Unicode世界来到OpenType世界。Unicode定义#tr[encoding]的方式有时会和#tr[script]本身的书写顺序不太一致。一个简单的例子是天城文中的ki,写作 #devanagari[कि] 。它被#tr[encoding]为辅音ka(#devanagari[क])加上元音i(#devanagari[ि])。但当你书写或打印这个#tr[character]时,在视觉上却是元音i先出现。这就又到了#tr[shaping]引擎施以援手的时候了,它会将#tr[glyph]流重新排序,将这种前置元音挪动到对应的基本辅音之前。这会使我们作为工程师的工作更加简单,毕竟比起处理 `ka-deva iMatra-deva` 这种从Unicode直接转换而来的形式,调整过的 `iMatra-deva ka-deva` 就不需要我们自己写OpenType规则来调换它们的位置了。

// Notice also that when I showed you the vowel i on its own like this - ि - it was displayed with a dotted circle. The vowel mark can't normally appear on its own - it needs to be attached to some consonant - so I have typed something that is orthographically impossible. To denote the missing consonant and to try and display something sensible, the shaping engine has inserted the dotted circle; that's another job of the complex shaper. It knows what is a valid syllable and what isn't, and adds dotted circles to tell you when a syllable is broken. (So if you ever see a dotted circle in the printed world, the input text was wrong.)

请注意,当我像 #devanagari[ि] 这样单独展示元音 i 时,右边会有一个虚线描绘的圆。这表示这个元音不应该单独出现,它需要附加到某些辅音上。所以这种展示方式其实是构造了违反正字法的文本。为了让显示的内容有意义,提示这里缺少了辅音,#tr[shaping]引擎会插入一个虚线圆。这就是复杂#tr[scripts]#tr[shaper]的另一个工作内容了。它知道哪些音节是有效的,哪些是不完整的。所以只要你看到了这个虚线圆,就表示输入的文本是有问题的。
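The pre-base vowel reordering described above can be sketched as a toy function (illustrative only — not the actual logic of a real shaper such as HarfBuzz; the glyph names follow the `ka-deva`/`iMatra-deva` example in the text):

```python
# Toy sketch of pre-base-vowel reordering: move each vowel sign that is
# drawn to the LEFT of its base consonant in front of that consonant.

PRE_BASE_VOWELS = {"iMatra-deva"}  # assumed set of pre-base vowel glyphs

def reorder_pre_base_vowels(glyphs):
    """Swap each pre-base vowel sign with the base glyph before it."""
    out = list(glyphs)
    for i in range(1, len(out)):
        if out[i] in PRE_BASE_VOWELS:
            out[i - 1], out[i] = out[i], out[i - 1]  # vowel moves left of base
    return out

# The encoded order ka + i becomes the visual order i + ka:
print(reorder_pre_base_vowels(["ka-deva", "iMatra-deva"]))
# ['iMatra-deva', 'ka-deva']
```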
https://github.com/soul667/typst
https://raw.githubusercontent.com/soul667/typst/main/PPT/MATLAB/touying/docs/docs/style.md
markdown
---
sidebar_position: 4
---

# Code Styles

## show-slides Style

If we only need simplicity, we can use `#show: slides` for a cleaner syntax. However, this approach has corresponding drawbacks: firstly, this method may significantly impact document rendering performance. Secondly, subsequent `#slide(..)` calls cannot be added directly; instead, you need to manually mark `#slides-end`. The most significant drawback is that complex functionalities cannot be achieved.

```typst
#import "@preview/touying:0.2.1": *

#let (init, slide, slides) = utils.methods(s)
#show: init

#show: slides

= Title

== First Slide

Hello, Touying!

#pause

Hello, Typst!

#slides-end

#slide[
  A new slide.
]
```

![image](https://github.com/touying-typ/touying/assets/34951714/db2a1b60-bc56-4fa9-a317-ee9ecc6f3895)

And you can create a new page with the empty header `==`.

## slide-block Style

For better performance and more powerful capabilities, in most cases we still need to use the code style like:

```typst
#slide[
  A new slide.
]
```
https://github.com/mrtz-j/typst-thesis-template
https://raw.githubusercontent.com/mrtz-j/typst-thesis-template/main/template/chapters/figures.typ
typst
MIT License
#import "global.typ": *
#import "../utils/caption.typ": dynamic-caption
#import "../utils/symbols.typ": *
#import "../utils/subfigure.typ": subfigure
#import "../utils/todo.typ": TODO

This chapter will demonstrate how to insert, manipulate and reference figures of various types. The functionality offered by typst to work with figures is powerful and relatively intuitive, especially if you're coming from #LaTeX. However, this template also features a few additional lightweight packages to further simplify working with figures.

== Images <subsec:images>

Typst allows us to render images in formats such as `png`, `jpg` and `svg` out of the box. Using the vector graphic format `svg` is recommended when applicable, for instance when inserting graphs and plots, as it ensures good quality and readability when printed or viewed on a large screen. Be aware that `svg` images may in some cases render differently in different PDF viewers.

#figure(caption: [A plot from python's matplotlib, exported to svg])[
  #image("../figures/plot_serial.svg")
] <fig:blur_plot>

Inserting a figure in typst is very simple, and we can now easily refer to @fig:blur_plot anywhere in the document. We can also easily customize the image, for instance by adjusting its width so that it doesn't take up as much space, like @fig:blur_plot_small. The typst documentation #footnote[see #link("https://typst.app/docs/reference/visualize/image/")] covers images more in depth.

#figure(caption: [A smaller version of the figure])[
  #image("../figures/plot_serial.svg", width: 30%)
] <fig:blur_plot_small>

#pagebreak(weak: true)

== Tables <subsec:tables>

Creating a basic table with typst is quite simple, yet we can also customize it a great deal if we would like to. This thesis template also has some custom default styling for tables, to make the stroke gray and the headers distinct.
#figure(
  table(
    columns: 3,
    table.header(
      [store], [location], [open sundays],
    ),
    [Coop Extra], [Breivika], [No],
    [Joker], [Dramsvegen], [Yes],
    [Rema 1000], [K1], [No],
    [Coop Obs], [Jekta], [No],
  ),
  caption: [Table with default styling],
) <tab:default_styling>

While @tab:default_styling is a very simple table with no extra styling, @tab:rowspan is more advanced, using bold for the headers and letting them span multiple rows/columns. Note that we also set the alignment for the text inside the table cells.

#figure(
  table(
    columns: 7,
    align: center + horizon,
    /* --- header --- */
    table.header(
      // table.cell lets us access properties such as rowspan and colspan to customize the cells
      table.cell([*Classifier*], rowspan: 2),
      table.cell([*Precision*], colspan: 6),
      [1], [2], [3], [1&2], [1&3], [All],
    ),
    /* --- body --- */
    [Perceptron], [0.78], [0.82], [0.24], [0.81], [0.77], [0.83],
    [Decision Tree], [0.65], [0.79], [0.56], [0.75], [0.65], [0.73],
    [One-Class SVM], [0.74], [0.72], [0.50], [0.80], [0.73], [0.85],
    [Isolation Forest], [0.54], [0.51], [0.52], [0.53], [0.54], [0.53],
  ),
  caption: [A slightly more elaborate table],
) <tab:rowspan>

On a page break, a table can also break and continue on the subsequent page. If a table header and/or footer is set, like in @tab:break, these will also repeat on both pages by default.

#figure(
  caption: [A table that breaks with the page],
  table(
    columns: 3,
    fill: (_, y) => if y == 0 { gray.lighten(75%) },
    table.header[Week][Distance (km)][Time (hh:mm:ss)],
    [1], [5], [00:30:00],
    [2], [7], [00:45:00],
    [3], [10], [01:00:00],
    [4], [12], [01:10:00],
    [5], [15], [01:25:00],
    [6], [18], [01:40:00],
    [7], [20], [01:50:00],
    [8], [22], [02:00:00],
    [...], [...], [...],
    table.footer[_Goal_][_42.195_][_02:45:00_],
  ),
) <tab:break>

We can also override the default styling to customize tables. @tab:break sets a custom fill color for the header, and @tab:hlines uses `table.hline()` to enable the border stroke on certain lines only.
The second column in @tab:hlines is also set to fill all the space available to it.

#figure(
  table(
    stroke: none,
    columns: (auto, 1fr),
    table.header(),
    [09:00], [Badge pick up],
    [09:45], [Opening Keynote],
    [10:30], [Talk: Typst's Future],
    [11:15], [Session: Good PRs],
    table.hline(start: 1),
    [Noon], [_Lunch break_],
    table.hline(start: 1),
    [14:00], [Talk: Tracked Layout],
    [15:00], [Talk: Automations],
    [16:00], [Workshop: Tables],
    table.hline(),
    [19:00], [Day 1 Attendee Mixer],
  ),
  caption: [A table with no border stroke],
) <tab:hlines>

There is a lot more customization to be done with tables. Read the official table guide #footnote[see #link("https://typst.app/docs/guides/table-guide/")] to discover how to create a table by reading a `csv` file with typst, how to achieve zebra highlighting, and much more.

#pagebreak(weak: true)

== Listings <subsec:listings>

For code listings, this template uses a third party package called *codly* #footnote[see #link("https://typst.app/universe/package/codly")] in order to provide some out-of-the-box styling and proper syntax highlighting. Unless you want to customize the appearance, you don't need to touch codly at all; simply create a normal code block like you would in markdown.

#figure(caption: [Hello world! in rust])[
  ```rust
  pub fn main() {
      println!("Hello, world!");
  }
  ```
] <raw:rust>

By default, code listings are configured with zebra lines, line numbering and a label displaying the programming language, like the rust snippet in @raw:rust. If we want to, we can disable or customize these features locally using the codly `#local()` function, demonstrated with @raw:fsharp. Note that too many calls to `#local()` may cause issues, so always use `#codly()` where possible.
// FIXME: Empty label is still shown as a tiny circle
// NOTE: As the code snippet is within the #local call here, we need to specify the figure kind to get 'Listing' instead of 'Figure'
#figure(caption: [F\# snippet with no zebras and label], kind: raw)[
  #local(zebra-fill: none, display-name: false)[
    ```fsi
    [<EntryPoint>]
    let main () =
        "Hello, world!" |> printfn
    ```
  ]
] <raw:fsharp>

We can also skip lines in the code snippet. Note that this doesn't actually skip lines in your snippet, but rather changes the line numbers to represent skipped code. This is demonstrated in @raw:c below.

#figure(caption: [C snippet with skipped lines], kind: raw)[
  #codly(skips: ((2, 15),))
  ```c
  int main() {
      printf("Hello, world!");
      return(0);
  }
  ```
] <raw:c>

Codly also allows us to highlight code using line and column positions. @raw:python demonstrates highlighting a line and giving it the tag "assignment".

#figure(caption: [Python snippet with highlights], kind: raw)[
  #codly(highlights: (
    (line: 1, start: 3, end: none, fill: blue, tag: "assignment"),
  ))
  ```python
  if __name__ == "__main__":
      d = {'a': 1}
      print("Hello, world!")
  ```
] <raw:python>

Code snippets can also be imported from files using the `code-block` macro. @raw:fsharp-file shows an F\# snippet imported from a file.

#figure(caption: [F\# snippet imported from file], kind: raw)[
  #code-block("Program.fsx", read("../code-snippets/Program.fsx"))
] <raw:fsharp-file>

// TODO: Make subfigures without subpar or figure out how to fix numbering

== Subfigures <subsec:subfigures>

A lot of the time we want to display figures side by side and be able to reference them separately as well as together. To make this process easy, this thesis template includes the *subpar* #footnote[see #link("https://typst.app/universe/package/subpar")] package. It lets us easily lay out figures in a _grid_ while making all labels available for reference.
#subfigure( figure( image("../figures/philosophers.png"), caption: [Subfigure with dining philosophers], ), <fig:philosophers>, figure( image("../figures/dining_philosophers.png"), caption: [Subfigure with a deadlock], ), <fig:dining_philosophers>, columns: (1fr, 1fr), caption: [A figure composed of two subfigures], label: <fig:with_subfigures>, ) Now we can refer to @fig:philosophers, @fig:dining_philosophers and the parent @fig:with_subfigures separately. To access subpar, we use a custom function `#subfigure()` which is included in this template. It's a simple wrapper that sets up the numbering for us to match the rules of the template. #subfigure( figure(caption: [F\# snippet in a subfigure], kind: raw)[ ```fsi [<EntryPoint>] let main () = "Hello, world!" |> printfn ``` ], <subfig:subfig_fsharp>, figure( table( columns: 3, table.header( [store], [location], [open sundays], ), [Coop Extra], [Breivika], [No], [Joker], [Dramsvegen], [Yes], [Rema 1000], [K1], [No], [Coop Obs], [Jekta], [No], ), caption: [Table in a subfigure], ), <subfig:subfig_table>, figure( image("../figures/philosophers.png"), caption: [Subfigure with philosophers], ), <subfig:philosophers2>, figure( image("../figures/dining_philosophers.png"), caption: [Subfigure with dining philosophers], ), <subfig:dining_philosophers2>, columns: (150pt, 1fr), caption: [A figure with multiple subfigure kinds], label: <fig:mixed_kinds>, ) We can include as many figures as we want in the grid, and even mix and match figure types. @fig:mixed_kinds also has the first column set to a width of `150pt` while the second column is set to take up the remaining space. Note that by default, subfigures do not appear in the List of Figures, and the supplement of referring to for instance @subfig:subfig_fsharp is _not_ "Listing" like we might expect. #figure( caption: dynamic-caption( [A nice picture of UiT, the Arctic University of Norway, under the northern lights. 
The picture is taken from #link("https://www.wur.nl/en/").],
    [UiT under aurora borealis],
  ),
  image("../figures/uit_aurora.jpg"),
) <fig:uit_aurora>

Another handy function available in this template is `#dynamic-caption()`, which takes two arguments: a short and a long version of a caption. The long version is displayed under the figure, like in @fig:uit_aurora, while the short version is used in the List of Figures at the start of the thesis.

Using the custom macro `csv-table`, it is possible to include data dynamically from csv files. @table:csv-table demonstrates this.

#figure(
  csv-table(
    tabledata: csv("../figures/table.csv"),
    header-row: white,
    odd-row: luma(240),
    even-row: white,
    columns: 3,
  ),
  caption: [A table with data from a csv file],
) <table:csv-table>

== Equations <subsec:equations>

Typst has great built-in support for mathematical equations, and this template applies numbering to them by default, so that we can refer to @equ:simple-equation just like we would a figure.

$ sum_(k=1)^n k = (n(n+1)) / 2 $ <equ:simple-equation>

By default, we can use powerful symbols and functions inside equation blocks (`$ ... $`) to typeset quite advanced equations. For instance, `#attach()` grants us fine control over symbol placement, like in @equ:attach.

$ attach(
  Pi, t: alpha, b: beta,
  tl: 1, tr: 2+3, bl: 4+5, br: lambda,
) $ <equ:attach>

Many of the functions have additional parameters to further customize their behavior. For instance, the matrix function allows us to specify the delimiter, see @equ:matrix.

$ mat(
  delim: "[",
  1, 2, ..., 10;
  2, 2, ..., 10;
  dots.v, dots.v, dots.down, dots.v;
  10, 10, ..., 10;
) $ <equ:matrix>

We can also define our own classes to use within equation blocks, and much more. Refer to the Typst reference #footnote()[see #link("https://typst.app/docs/reference/math/")] to see all capabilities.
#let spade = math.class( "normal", sym.suit.spade, ) $ root(3, 5 spade) in RR $ == Physica <subsec:physica> To expand on the already considerable built-in support for math symbols, we've also included the physica #footnote()[see #link("https://typst.app/universe/package/physica")] package. It makes a far greater range of functions available, allowing us to quickly typeset common symbol sequences without having to build them with the vanilla library. For example, big O notation is easily available: $ Order(n log(n)) $ This section covers only a fraction of the available symbols and handy shorthands like expectation value in @equ:expval, and digital timing diagrams in @equ:clock. Refer to the full user manual on github #footnote()[see #link("https://github.com/Leedehai/typst-physics/")] to see the full usage. $ expval(p, psi) $ <equ:expval> $ "clk:" & signals("|1...|0...|1...|0...|1...|0...|1...|0...", step: #0.5em) $ <equ:clock>
https://github.com/fichas/resume-typst
https://raw.githubusercontent.com/fichas/resume-typst/main/main.typ
typst
#import "template.typ": *

// Take a look at the file `template.typ` in the file panel
// to customize this template and discover how it works.
#show: project.with(
  title: "Resume-ng",
  author: (name: "New Student"),
  contacts: (
    "+86 139XXXXXXXX",
    "Age 24",
    link("mailto:<EMAIL>", "<EMAIL>"),
    link("https://github.com", "github.com/fichas"),
  )
)

#resume-section("Education")
#resume-education(
  university: "Jilin University",
  degree: "Bachelor",
  school: "Computer Science and Technology, College of Computer Science",
  start: "2018-09",
  end: "2022-06"
)[
  *GPA: 3.3/4.0*, received Jilin University academic scholarships several times, won the XX award.
]

#resume-section[Work Experience]
#resume-work(
  company: "XXXX",
  city: "Shanghai",
  duty: "Backend",
  start: "2022.01",
  end: "Present",
)[
  - *Participated in requirements analysis and system design for XXX; completed feature development, canary testing, release, and monitoring.*
]
#resume-work(
  company: "Tencent Games",
  city: "Shenzhen",
  duty: "Backend Intern",
  start: "2021.07",
  end: "2021.08",
)[
  - *Participated in requirements analysis and system design for XXX; completed feature development, canary testing, release, and monitoring.*
]

#resume-section[Projects]
#resume-project(
  title: "Test Project A",
  duty: "Project Lead",
)[
  - Following the STAR principle, describe your role and contributions with quantified results
]
#resume-project(
  title: "Test Project B",
  duty: "Project Lead",
)[
  - Following the STAR principle, describe your role and contributions with quantified results
]
#resume-project(
  title: "Multi-Raft Distributed KV Store",
  duty: "Algorithm Design and Implementation / MIT 6.824 Course",
)[
  - Implemented the basic features of the Raft protocol: leader election, log replication, persistence, and log compaction.
  - Built a linearizable KV database on top of the Raft protocol.
  - Adopted a Multi-Raft architecture supporting data sharding, shard migration, shard garbage collection, and read/write optimization during shard migration.
  - Gained a deeper understanding of the design considerations of distributed systems.
]

#resume-section[Technical Skills]
- *Languages*: Not limited to any particular language. Frequently use Rust, Golang, Python, C++; familiar with C, #text(fill: gray, "JavaScript"); some knowledge of Lua, Java, #text(fill: gray, "TypeScript").
- *Workflow*: Linux, Shell, (Neo)Vim, Git, GitHub, GitLab.
- *Other*: Hands-on experience with containerization; familiar with Kubernetes.

#resume-section[Summary]
- Optimistic and outgoing, with excellent academic performance, strong self-motivation, and good communication and teamwork skills.
- Able to communicate in English at work (CET-6 score XXX); regularly read English books and practice speaking.
- Six years of Linux experience; substantial software development experience along with open-source contribution and maintenance experience. Good at technical writing and continuously following developments in internet technology.
https://github.com/satshi/typst-jp-template
https://raw.githubusercontent.com/satshi/typst-jp-template/main/README.md
markdown
# An Example Japanese Template for Typst

## Overview

A template for getting started writing in Japanese with Typst. It aims to resemble the jarticle class of pLaTeX.

## Usage

Place `template.typ` in the same folder and write as follows:

```typst
#import "template.typ": *
#show: jarticle

こんにちは、世界。
```

The following helpers are also defined:

- `年月日`: a date display format of the form ◯年◯月◯日 (year, month, day). For example,
  ```typst
  #datetime.today().display(年月日)
  ```
  displays (with today's date) something like「2024年4月8日」. A `年月` variant that omits the day (「日」) is defined as well.
- `appendix`: similar to `\appendix` in LaTeX. Normally section numbers look like "1.1", but after
  ```typst
  #show: appendix
  ```
  they look like "A.1".
https://github.com/Myriad-Dreamin/typst.ts
https://raw.githubusercontent.com/Myriad-Dreamin/typst.ts/main/fuzzers/corpora/visualize/gradient-text-cjk.typ
typst
Apache License 2.0
#set page(width: 140pt, height: auto, margin: 10pt) #set par(justify: true) #set text(fill: gradient.radial(red, blue), size: 10pt) 锄禾日当午, 汗滴禾下土。 谁知盘中餐, 粒粒皆辛苦。 锄禾日当午, 汗滴禾下土。 谁知盘中餐, 粒粒皆辛苦。
https://github.com/qujihan/toydb-book
https://raw.githubusercontent.com/qujihan/toydb-book/main/src/chapter2/bitcask.typ
typst
#import "../../typst-book-template/book.typ": * #let path-prefix = figure-root-path + "src/pics/" == BitCask 存储引擎 ToyDB使用的可持久化的存储引擎是BitCask(参考@bitcask)的*变种*。简单来说,就是在写入的时候,先写入到只能追加写的log文件中,然后在内存中维护一个索引,索引内容为 (key -> 文件位置以及长度)。 当垃圾量(包含替换以及删除的key)大于一定阈值的时候,将在内存中的key写入到新的log文件中,然后替换老的log文件。替换的过程被称为压缩,会导致写放大问题,但是可以通过控制阈值来减小影响。 通过上面的描述,可以分析出几个语义: - BitCask要求所有的Key必须能存放在内存中。 - BitCask在启动的时候需要扫描Log文件来构建索引。 - 删除文件的时候并不是真正的删除。 - 实际上是写入一个墓碑值(tombstone value),读取到墓碑值就认为是删除了。 下面先看一下ToyDB的宏观架构,再自底向上的看一下实现过程。 #code( "toydb/src/storage/bitcask.rs", "bitcask", )[ ```rust struct Log { /// Path to the log file. /// 日志文件的路径 path: PathBuf, /// The opened file containing the log. /// 包含日志的打开文件 file: std::fs::File, } /// Maps keys to a value position and length in the log file. /// 将key映射到日志文件中的value位置和长度 type KeyDir = std::collections::BTreeMap<Vec<u8>, (u64, u32)>; pub struct BitCask { /// The active append-only log file. /// 当前的只追加写的日志文件 log: Log, /// Maps keys to a value position and length in the log file. /// 将key映射到日志文件中的value位置和长度 keydir: KeyDir, } ``` ]<bitcask_code> 在ToyDB中,`BitCask`中包含一个管理内存中索引文件的数据结构`keydir`以及一个用来写Log的文件`log`。 其中`keydir`是一个BTree,key是一个Vec<u8>,value是一个元组(u64, u32),其中u64是value在Log文件中的位置,u32是value的长度。这个结构是有序的,这样就可以进行范围查询。 再来看`log`,它包含了一个文件路径`path`以及一个文件句柄`file`。这个文件是只追加写的,这样就可以保证写入的顺序是正确的。 下面先看`log`实现部分,再来回看`BitCask`的部分。 === Log的实现 每一个log entry包含四个部分: - Key 的长度,大端u32 - Value 的长度,大端i32,-1 表示墓碑值 - Key 的字节序列(最大 2GB) - Value 的字节序列(最大 2GB) 在Log中,`new`比较简单,是打开一个log文件,当不存在的时候就创建这个文件。并且在使用过程中一直使用的排它锁,这样就可以保证只有一个线程在写入。 #code( "toydb/src/storage/bitcask.rs", "new", )[ ```rust fn new(path: PathBuf) -> Result<Self> { if let Some(dir) = path.parent() { std::fs::create_dir_all(dir)? 
} let file = std::fs::OpenOptions::new() .read(true) .write(true) .create(true) .truncate(false) .open(&path)?; file.try_lock_exclusive()?; Ok(Self { path, file }) } ``` ] `read_value`,`write_value`这两个函数也比较简单,用于读取value以及写入KV对。 #code( "toydb/src/storage/bitcask.rs", "read_value", )[ ```rust /// 从file的value_pos位置读取value_len长度的数据 fn read_value(&mut self, value_pos: u64, value_len: u32) -> Result<Vec<u8>> { let mut value = vec![0; value_len as usize]; self.file.seek(SeekFrom::Start(value_pos))?; self.file.read_exact(&mut value)?; Ok(value) } ``` ] #code( "toydb/src/storage/bitcask.rs", "write_value", )[ ```rust /// 写入key/value对, 返回写入的位置和长度 /// 墓碑值使用 None Value fn write_entry(&mut self, key: &[u8], value: Option<&[u8]>) -> Result<(u64, u32)> { let key_len = key.len() as u32; // map_or 是 Option类型的方法, 用于在 Option 为 Some 以及 None 时执行不同的操作 let value_len = value.map_or(0, |v| v.len() as u32); let value_len_or_tombstone = value.map_or(-1, |v| v.len() as i32); // 这里 4 + 4 就是 key_len(u32) 和 value_len_or_tombstone(u32) 的长度 let len = 4 + 4 + key_len + value_len; let pos = self.file.seek(SeekFrom::End(0))?; // BufWriter 是一个带有缓冲的写操作, 可以减少实际IO操作的次数 let mut w = BufWriter::with_capacity(len as usize, &mut self.file); w.write_all(&key_len.to_be_bytes())?; w.write_all(&value_len_or_tombstone.to_be_bytes())?; w.write_all(key)?; if let Some(value) = value { w.write_all(value)?; } w.flush()?; Ok((pos, len)) } ``` ] `build_keydir`就比较复杂了,用来构建索引(ToyDB只有在重启的时候才会构建)。 #code( "toydb/src/storage/bitcask.rs", "build_keydir", )[ ```rust /// Builds a keydir by scanning the log file. If an incomplete entry is /// encountered, it is assumed to be caused by an incomplete write operation /// and the remainder of the file is truncated. /// 通过扫描log文件来构建一个keydir. 如果遇到不完整的条目, 就会假设是因为不完整的写操作 /// 并且截断文件. 
fn build_keydir(&mut self) -> Result<KeyDir> { let mut len_buf = [0u8; 4]; let mut keydir = KeyDir::new(); let file_len = self.file.metadata()?.len(); let mut r = BufReader::new(&mut self.file); let mut pos = r.seek(SeekFrom::Start(0))?; while pos < file_len { // Read the next entry from the file, returning the key, value // position, and value length or None for tombstones. // 读取一条新的条目, 返回key, value位置, 以及value长度或者墓碑值(None) let result = || -> std::result::Result<(Vec<u8>, u64, Option<u32>), std::io::Error> { // r 在当前文件指针位置读取数据到 len_buf 中 // 读取完成以后文件指针会自动向后移动 len_buf.len() 的大小 r.read_exact(&mut len_buf)?; let key_len = u32::from_be_bytes(len_buf); r.read_exact(&mut len_buf)?; let value_len_or_tombstone = match i32::from_be_bytes(len_buf) { l if l >= 0 => Some(l as u32), _ => None, // -1 for tombstones }; let value_pos = pos + 4 + 4 + key_len as u64; let mut key = vec![0; key_len as usize]; r.read_exact(&mut key)?; if let Some(value_len) = value_len_or_tombstone { if value_pos + value_len as u64 > file_len { // 这里就是遇到了不完整的条目 return Err(std::io::Error::new( std::io::ErrorKind::UnexpectedEof, "value extends beyond end of file", )); } // 在当前文件指针位置移动 value_len 的大小 // 使用 seek_relative 而不是 seek 是为了避免丢弃缓冲区 // // seek 是把文件指针立刻移动到某个位置, 旧的缓冲区的数据可能和新的位置不匹配 // 所以缓冲失效会被丢弃 r.seek_relative(value_len as i64)?; // avoids discarding buffer } Ok((key, value_pos, value_len_or_tombstone)) }(); match result { // Populate the keydir with the entry, or remove it on tombstones. // 填充 keydir, 或者在墓碑值的时候删除 Ok((key, value_pos, Some(value_len))) => { keydir.insert(key, (value_pos, value_len)); pos = value_pos + value_len as u64; } Ok((key, value_pos, None)) => { keydir.remove(&key); pos = value_pos; } // If an incomplete entry was found at the end of the file, assume an // incomplete write and truncate the file. 
// 这里就是遇到了不完整的条目 Err(err) if err.kind() == std::io::ErrorKind::UnexpectedEof => { log::error!("Found incomplete entry at offset {}, truncating file", pos); self.file.set_len(pos)?; break; } Err(err) => return Err(err.into()), } } Ok(keydir) } ``` ] === BitCask的实现 在知道了`log`是如何实现的以后,就可以更好的理解BitCask的实现了。回忆一下,@bitcask_code 中,`BitCask`中包含了一个`log`以及一个`keydir`。`log`用来写入KV对,`keydir`用来维护内存中的索引。 下面先看一下`BitCask`中的一些周边函数,然后再看一下如何实现`Engine`这个trait。 先来看看`BitCask`的构造函数以及析构函数,`new`和`new_compact`。这个两个函数的区别就是`new_compact`会在打开的时候自动压缩。关于析构函数,会在Drop的时候尝试flush文件。 #code("toydb/src/storage/bitcask.rs", "impl Bitcask")[ ```rust /// Opens or creates a BitCask database in the given file. /// 通过 path 打开或者创建一个 BitCask 数据库 pub fn new(path: PathBuf) -> Result<Self> { // 这里非常简单, 就是调用前面实现的 Log::new log::info!("Opening database {}", path.display()); let mut log = Log::new(path.clone())?; let keydir = log.build_keydir()?; log::info!("Indexed {} live keys in {}", keydir.len(), path.display()); Ok(Self { log, keydir }) } /// Opens a BitCask database, and automatically compacts it if the amount /// of garbage exceeds the given ratio and byte size when opened. /// 打开一个 BitCask 数据库, 如果打开的时候垃圾的比例和字节大小超过给定的阈值, 就会自动压缩 pub fn new_compact( path: PathBuf, garbage_min_fraction: f64, garbage_min_bytes: u64, ) -> Result<Self> { let mut s = Self::new(path)?; let status = s.status()?; if Self::should_compact( status.garbage_disk_size, status.total_disk_size, garbage_min_fraction, garbage_min_bytes, ) { log::info!( "Compacting {} to remove {:.0}% garbage ({} MB out of {} MB)", s.log.path.display(), status.garbage_percent(), status.garbage_disk_size / 1024 / 1024, status.total_disk_size / 1024 / 1024 ); s.compact()?; log::info!( "Compacted {} to size {} MB", s.log.path.display(), (status.total_disk_size - status.garbage_disk_size) / 1024 / 1024 ); } Ok(s) } /// Returns true if the log file should be compacted. 
/// 如果日志文件应该被压缩, 就返回 true fn should_compact( garbage_size: u64, total_size: u64, min_fraction: f64, min_bytes: u64, ) -> bool { let garbage_fraction = garbage_size as f64 / total_size as f64; garbage_size > 0 && garbage_size >= min_bytes && garbage_fraction >= min_fraction } /// Attempt to flush the file when the database is closed. /// 在 Drop 的时候尝试 flush 文件 impl Drop for BitCask { fn drop(&mut self) { if let Err(error) = self.flush() { log::error!("failed to flush file: {}", error) } } } ``` ] 上面几个函数比较简单,下面看一下在压缩的时候会使用的函数`compact`以及`write_log`。 #code("toydb/src/storage/bitcask.rs", "compact / write_log")[ ```rust impl BitCask { /// Compacts the current log file by writing out a new log file containing /// only live keys and replacing the current file with it. /// 压缩当前的日志文件, 写出一个新的日志文件, 只包含活跃的 key, 并且用它替换当前的文件 pub fn compact(&mut self) -> Result<()> { let mut tmp_path = self.log.path.clone(); tmp_path.set_extension("new"); let (mut new_log, new_keydir) = self.write_log(tmp_path)?; std::fs::rename(&new_log.path, &self.log.path)?; new_log.path = self.log.path.clone(); self.log = new_log; self.keydir = new_keydir; Ok(()) } /// Writes out a new log file with the live entries of the current log file /// and returns it along with its keydir. Entries are written in key order. /// 写出一个新的日志文件, 包含当前日志文件中的活跃条目, 并且返回它以及它的 keydir. 
fn write_log(&mut self, path: PathBuf) -> Result<(Log, KeyDir)> { let mut new_keydir = KeyDir::new(); let mut new_log = Log::new(path)?; new_log.file.set_len(0)?; // truncate file if it exists for (key, (value_pos, value_len)) in self.keydir.iter() { let value = self.log.read_value(*value_pos, *value_len)?; let (pos, len) = new_log.write_entry(key, Some(&value))?; new_keydir.insert(key.clone(), (pos + len as u64 - *value_len as u64, *value_len)); } Ok((new_log, new_keydir)) } } ``` ] 这里也还是比较简单的,`write_log`函数会遍历keydir,将活跃的key/value对写入到新的log文件中。`compact`函数会调用`write_log`获取新的log文件,最后将新的log文件替换掉老的log文件。 最后我们看一下 Bitcask 对`Engine`这个trait的实现。 #code("", "")[ ```rust impl Engine for BitCask { type ScanIterator<'a> = ScanIterator<'a>; fn delete(&mut self, key: &[u8]) -> Result<()> { self.log.write_entry(key, None)?; self.keydir.remove(key); Ok(()) } fn flush(&mut self) -> Result<()> { // Don't fsync in tests, to speed them up. We disable this here, instead // of setting raft::Log::fsync = false in tests, because we want to // assert that the Raft log flushes to disk even if the flush is a noop. 
#[cfg(not(test))] self.log.file.sync_all()?; Ok(()) } fn get(&mut self, key: &[u8]) -> Result<Option<Vec<u8>>> { if let Some((value_pos, value_len)) = self.keydir.get(key) { Ok(Some(self.log.read_value(*value_pos, *value_len)?)) } else { Ok(None) } } fn scan(&mut self, range: impl std::ops::RangeBounds<Vec<u8>>) -> Self::ScanIterator<'_> { ScanIterator { inner: self.keydir.range(range), log: &mut self.log } } fn scan_dyn( &mut self, range: (std::ops::Bound<Vec<u8>>, std::ops::Bound<Vec<u8>>), ) -> Box<dyn super::ScanIterator + '_> { Box::new(self.scan(range)) } fn set(&mut self, key: &[u8], value: Vec<u8>) -> Result<()> { let (pos, len) = self.log.write_entry(key, Some(&*value))?; let value_len = value.len() as u32; self.keydir.insert(key.to_vec(), (pos + len as u64 - value_len as u64, value_len)); Ok(()) } fn status(&mut self) -> Result<Status> { let keys = self.keydir.len() as u64; let size = self .keydir .iter() .fold(0, |size, (key, (_, value_len))| size + key.len() as u64 + *value_len as u64); let total_disk_size = self.log.file.metadata()?.len(); // 8 * keys: key 是 u64, 所以是 8 * key的数量 let live_disk_size = size + 8 * keys; // account for length prefixes let garbage_disk_size = total_disk_size - live_disk_size; Ok(Status { name: "bitcask".to_string(), keys, size, total_disk_size, live_disk_size, garbage_disk_size, }) } } ``` ] 这里可以看得出来,`BitCask`实现了`delete`,`flush`,`get`,`scan`,`set`,`status`这几个方法。大多数都是调用了`log`的方法。 - `delete`会将key/value对写入到log文件中,并且在keydir中删除这个key - `flush`会将缓冲区的内容写出到存储介质中 - `get`会从keydir中获取value的位置以及长度,然后从log文件中读取value - `scan`会返回一个`ScanIterator`,用于遍历keydir - `set`会将key/value对写入到log文件中,并且在keydir中插入这个key - `status`会返回存储引擎的状态 === ToyDB中BitCask的取舍 在ToyDB中,BitCask的实现做了相当程度的简化: + ToyDB没有使用固定大小的日志文件,而是使用了任意大小的仅追加写的日志文件。这会增加压缩量,因为每次压缩的时候都会重写整个日志文件,并且也可能会超过文件系统的文件大小限制 + 压缩的时候会阻塞所有的读以及写操作,这问题不大,因为ToyDB只会在重启的时候压缩,并且文件应该也比较小 + 没有hint文件,因为ToyDB的value预估都比较小,hint文件的作用不大(其大小与合并的Log文件差不多大) + 每一条记录没有timestamps以及checksums + 
BitCask requires the whole key set to fit in memory, and the log file must be scanned on startup to rebuild the index
+ Unlike an LSM-tree, a single-file BitCask must rewrite the entire dataset during compaction, which causes a significant write-amplification problem
+ ToyDB does not use any compression, e.g. variable-length integers
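The entry layout described in this chapter (big-endian `u32` key length, big-endian `i32` value length with `-1` as the tombstone marker, then the raw key and value bytes) can also be sketched outside of Rust. The following Python snippet is an illustrative re-implementation of just the on-disk encoding, not code from ToyDB:

```python
import struct

def encode_entry(key, value):
    """Encode one log entry: key length (big-endian u32), value length
    (big-endian i32, -1 marks a tombstone), then key and value bytes."""
    value_len = -1 if value is None else len(value)
    header = struct.pack(">Ii", len(key), value_len)
    return header + key + (value if value is not None else b"")

def decode_entry(buf, pos):
    """Decode the entry starting at pos; return (key, value, next_pos).
    value is None for tombstones."""
    key_len, value_len = struct.unpack_from(">Ii", buf, pos)
    pos += 8  # two fixed-size length prefixes
    key = buf[pos:pos + key_len]
    pos += key_len
    if value_len < 0:
        return key, None, pos
    return key, buf[pos:pos + value_len], pos + value_len
```

Scanning a file of such entries front to back, inserting live values into a map and removing keys on tombstones, is exactly what `build_keydir` does above.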
https://github.com/TypstApp-team/typst
https://raw.githubusercontent.com/TypstApp-team/typst/master/tests/typ/meta/bibliography-full.typ
typst
Apache License 2.0
// Test the full bibliography. --- #set page(paper: "a6") #bibliography("/files/works.bib", full: true)
https://github.com/Besthope-Official/backend
https://raw.githubusercontent.com/Besthope-Official/backend/master/24spring/tutorial.md
markdown
# An Informal Introduction

Hi swufer! Welcome to the **backend department** of Singularity Studio (奇点工作室). I'm the little helper guiding you to get started quickly. I hope that after joining the studio you keep an exploring mind and a passion for technology! o(_ ̄ ▽  ̄_)ブ

As you first explore the backend world, you may feel a bit lost. Don't worry, it's 2024: large language models can be the *Watson* accompanying your learning — GPT, [Copilot](https://github.com/features/copilot) (go claim your **verification** as a _noble student_!), ERNIE Bot... (if you don't have one of these yet, go get one)

## Everyday studio matters

- Weekly meeting on Sunday at 7 PM in Tongbo Building B423 (or next door)
  - You can take the elevator up
  - A new venue will be announced separately (the studio's 5th-floor space is being renovated)
- Format: **lectures + project development** combined
  - We teach some **development skills** and recap what was assigned and what everyone learned last week
    - If you feel OK with it, you can start preparing next week's material
  - Then comes the development department (frontend and backend together) project discussion session
    - **New members are welcome to sit in.** There's nothing really arcane; if you feel you can, come *work on our projects* (just kidding)

## Learning

The internet is full of learning materials; you just need to find them and **teach yourself**:

- Everyone loves Bilibili University — great, so why not watch the [summer training camp](https://space.bilibili.com/3494378444163577) we ran last summer (_not that we're tooting our own horn_)
- And the [online documentation](https://singularity-backend.gitbook.io/backend-online-doc/before/readme) we wrote (the early parts are recommended reading; the later Flask and Spring Boot parts are there if you're interested)
  - We've laid out the backend technology stack there, with a bit of chicken soup on the side. :)
- As an aside, if you're studying computer science courses and your English is good, follow the [csdiy wiki](https://csdiy.wiki/)

Of course, if you *only* follow the courses, what you're exposed to stays within what the courses cover (which is exactly what we hope you won't let happen) — pay attention to the **further reading** parts of these guides.

## What should I do this week?

To get everyone involved in our team development as much as possible, here are the technology stacks our backend projects will use **this semester**:

- Python basics and some advanced topics. See the [documentation](https://singularity-backend.gitbook.io/backend-online-doc/week1/advanced-python) we wrote, or check `www.python.org`.
- The Python Django framework. Bilibili University or the official Django [documentation](https://docs.djangoproject.com/zh-hans/5.0/).
- Git. Essential for team collaboration.
- MySQL. Basic database operations, plus how a program interacts with the database.

If you want to learn full-stack development, feel free to chat with the frontend department leads~

## Some not-so-hard requirements

- Ask questions in the group chat promptly! If you ask, we *should* be able to answer.
- Bring up what you want to learn at the meetings.
  - Also tell us promptly where we're doing badly.
- Casual chat, memes, and recruiting people for games (if anyone's up) are all welcome~
- Try not to miss meetings. Yes. We'd be sad.
- Finish tasks on time, don't procrastinate. *We'd also be sad.//*

## Small tasks 2024/3/3 - 2024/3/10

- Install the necessary development software and environments on your machine. See [here](https://singularity-backend.gitbook.io/backend-online-doc/before/qiu-sheng-shou-ce).
- Follow the official Django tutorial. Do a few chapters and let's see everyone's learning pace.
- Too much to learn, can't remember it all?
  - Take some digital notes! For example, like below
  - Learn some Markdown, or [typst](https://typst.app/).

```markdown
# Markdown basic syntax

## Basics

### Headings

# Heading level 1
## Heading level 2
### Heading level 3
#### Heading level 4

Headings are shown hierarchically in the table of contents and outline, and can be jumped to.

...
```
https://github.com/typst/packages
https://raw.githubusercontent.com/typst/packages/main/packages/preview/unofficial-fhict-document-template/1.0.0/template/main.typ
typst
Apache License 2.0
#import "@preview/unofficial-fhict-document-template:1.0.0": * // #import "./terms.typ": term-list #show: fhict-doc.with( title: "", // subtitle: "", // authors-title: "Authors", // authors: ( // ( // name: "", // ), // ), // assessors-title: "Assessors", // assessors: ( // ( // title: "", // name: "", // ), // ), // language: "en", // available-languages: ("en", "nl", "de", "fr", "es"), // version-history: ( // ( // version: "", // date: "", // author: "", // changes: "", // ), // ), // chapter-on-new-page: true, // pre-toc: [#include "./pre-toc.typ"], // appendix: [#include "./appendix.typ"], // bibliography-file: bibliography("my-sources.bib"), // citation-style: "ieee", // table-of-figures: false, // table-of-listings: true, // table-of-tables: true, // glossary-terms: term-list, // glossary-front: false, // toc-depth: 3, // disable-toc: false, // disable-chapter-numbering: false, // watermark: none, // censored: 0, // print-extra-white-page: false, // secondary-organisation-color: none, // secondary-organisation-logo: none, // secondary-organisation-logo-height: 6%, // enable-index: false, // index-columns: 2, ) = Chapter
https://github.com/EunTilofy/NumComputationalMethods
https://raw.githubusercontent.com/EunTilofy/NumComputationalMethods/main/coding/task1/SC-report-2.typ
typst
#import "../../template.typ": *
#show: project.with(
  course: "Computing Method",
  title: "Computing Method - Programming 2",
  date: "2024.4.23",
  authors: "<NAME>, 3210106357",
  has_cover: false
)

= Problem

Use the Romberg method to compute the integral:
$ 2 / (sqrt(pi)) integral_0^1 e^(-x^2) dx. $

= Formulas and Algorithm
\
*The Romberg algorithm proceeds as follows:*
+ Input $a, b, epsilon$.
+ Set $h = (b-a) \/ 2, T_0^((0)) = h(f(a) + f(b)), k = 1, n = 1$.
+ Set $F = 0$. For $i = 1, 2, dots.c, n$, compute $F = F + f(a + (2i - 1)h)$.
+ $T_0^((k)) = T_0^((k-1))\/2 + h F$.
+ For $m = 1, 2, dots.c, k$, compute $ T_m^((k-m)) = (4^m T_(m-1)^((k-m+1))-T_(m-1)^((k-m)))/(4^m - 1) $
+ If $abs(T_m^((0)) - T_(m-1)^((0))) < epsilon$, output $I approx T_m^((0))$ and stop; otherwise set $h/2 arrow.double h, 2n arrow.double n, k + 1 arrow.double k$ and return to step 3.

= Program

```matlab
a = 0; b = 1; e = 1e-6;
syms x;
f(x) = exp(-x^2) * 2 / sqrt(pi);
Romberg(f, a, b, e);

function res = Romberg(f, a, b, eps)
    T = zeros(1, 1);
    n = 1; k = 1;
    h = (b-a)/2;
    T(1, 1) = h * (f(a) + f(b));
    while true
        F = 0;
        for i = 1 : n
            F = F + f(a + (2*i-1)*h);
        end
        T(k+1, k+1) = 0;
        % T(1, k+1) holds T_0^(k); refine from T(1, k) = T_0^(k-1)
        T(1, k+1) = T(1, k) / 2 + h * F;
        for m = 1 : k
            T(m+1, k-m+1) = (4^m * T(m, k-m+2) - T(m, k-m+1)) / (4^m - 1);
        end
        if(abs(T(k+1, 1) - T(k, 1)) < eps)
            break;
        end
        h = h/2; n = 2*n; k = k+1;
    end
    disp('Romberg Table: ')
    for col = 1:k+1
        for row = 1:k+2-col
            fprintf('%.7f ', T(row, col));
        end
        fprintf('\n');
    end
    res = T(k+1, 1);
end
```

= Data and Results
\
With input $a = 0, b = 1, epsilon = 1e-6$, the program output is as follows:
\
#figure(
  image("res.png", width: 90%),
  caption: [
    Result of Romberg Algorithm
  ],
)

In total 15 iterations were performed, and the final answer is $T_(15)^((0)) = 0.8427007$.

= Conclusion

The true value of the integral is approximately $T approx 0.842700792950$.
The error of the computed result is $abs(T - T_(15)^((0))) approx 9e-8 < 1e-7$.
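As a cross-check, the algorithm above can also be sketched in Python (an illustrative translation of the MATLAB routine, not part of the original submission). Since the integrand integrates to erf(1) over [0, 1], the result can be compared directly against `math.erf`:

```python
import math

def romberg(f, a, b, eps=1e-6, max_iter=20):
    """Romberg integration following the algorithm in the report:
    halve the step, refine the composite trapezoid value, then apply
    Richardson extrapolation T_m = (4^m * T'_{m-1} - T_{m-1}) / (4^m - 1)."""
    h = (b - a) / 2
    row = [h * (f(a) + f(b))]            # row[m] holds T_m^(k-m)
    n = 1
    for _ in range(max_iter):
        s = sum(f(a + (2 * i - 1) * h) for i in range(1, n + 1))
        new_row = [row[0] / 2 + h * s]   # refined trapezoid value T_0^(k)
        for m in range(1, len(row) + 1):
            new_row.append((4**m * new_row[m - 1] - row[m - 1]) / (4**m - 1))
        if abs(new_row[-1] - row[-1]) < eps:
            return new_row[-1]           # converged diagonal entry T_k^(0)
        row, h, n = new_row, h / 2, 2 * n
    return row[-1]
```

For a cubic integrand the extrapolation is exact after one step, and for the report's integrand the result agrees with erf(1) to well below the 1e-6 tolerance.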
https://github.com/EunTilofy/NumComputationalMethods
https://raw.githubusercontent.com/EunTilofy/NumComputationalMethods/main/Chapter2/Chapter2-3.typ
typst
#import "../template.typ": *
#show: project.with(
  course: "Computing Method",
  title: "Computing Method - Chapter2&5",
  date: "2024.4.23",
  authors: "<NAME>, 3210106357",
  has_cover: false
)

#HWProb(name: "16")[
  In equation (2.33), let $n=2$ and find $f'(x_0), f'(x_1), f'(x_2)$.
]
#solution[
  Let $x = x_0 + t h, x_k = x_0 + k h (k = 0, 1, 2)$,
  $ phi_2(x) = f_0 + t Delta f_0 + t(t-1)/2 Delta^2 f_0 = f_0 + (x-x_0)/h (f_1 - f_0) + ((x-x_0)(x-x_0-h))/(2h^2) (f_2 - 2 f_1 + f_0) $
  Therefore
  $ phi_2'(x) = (f_1 - f_0)/h + (f_2 - 2f_1 + f_0)/(2 h^2) (2x - 2x_0 - h) $
  Substituting $x_0, x_1, x_2$ gives
  $ cases(
    f'(x_0) = (f_1 - f_0)/h - (f_2 - 2 f_1 + f_0)/(2h) = (4f_1 - f_2 -3f_0)/(2h)\
    f'(x_1) = (f_1 - f_0)/h + (f_2 - 2 f_1 + f_0)/(2h) = (f_2 - f_0) / (2h) \
    f'(x_2) = (f_1 - f_0)/h + (3(f_2 - 2 f_1 + f_0))/(2h) = (-4f_1 + f_0 + 3f_2)/(2h)
  ) $
]

#HWProb(name: "Supplementary problem")[
  Find $x_1, x_2, A_1, A_2$ such that
  $ integral_0^1 e^x f(x) dx = A_1 f(x_1) + A_2 f(x_2) $
  has the highest degree of algebraic precision (accurate to 0.0001).
]
#solution[
  $ cases(
    integral_0^1 e^x dx = e - 1 = A_1 + A_2 \
    integral_0^1 x e^x dx = 1 = A_1 x_1 + A_2 x_2 \
    integral_0^1 x^2 e^x dx = e- 2 = A_1 x_1^2 + A_2 x_2^2 \
    integral_0^1 x^3 e^x dx = 6 - 2e = A_1 x_1^3 + A_2 x_2^3
  ) $
  The Python code used to solve the system is as follows:
  ```python
  from scipy.optimize import fsolve
  import numpy as np

  def equations(vars):
      A1, A2, x1, x2 = vars
      e_minus_1 = np.exp(1) - 1
      integral_x_ex = 1
      integral_x2_ex = np.exp(1) - 2
      integral_x3_ex = 6 - 2 * np.exp(1)
      eq1 = A1 + A2 - e_minus_1
      eq2 = A1 * x1 + A2 * x2 - integral_x_ex
      eq3 = A1 * x1**2 + A2 * x2**2 - integral_x2_ex
      eq4 = A1 * x1**3 + A2 * x2**3 - integral_x3_ex
      return np.array([eq1, eq2, eq3, eq4])

  initial_guesses = [0.5, 0.5, 0.25, 0.75]
  result = fsolve(equations, initial_guesses)
  print(f"Results: A1 = {result[0]}, A2 = {result[1]}, x1 = {result[2]}, x2 = {result[3]}")
  # Results: A1 = 0.7131482910221459, A2 = 1.005133537436899, x1 = 0.24760397941963827, x2 = 0.8192161683583419
  ```
  Therefore,
  $ cases(
    A_1 = 0.7131 \
    A_2 = 1.0051 \
    x_1 = 0.2476 \
    x_2 = 0.8192
  ) $
]
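The claimed algebraic precision can be verified numerically: with the full-precision values printed by `fsolve`, the rule $A_1 f(x_1) + A_2 f(x_2)$ must reproduce the four moments used in the system. This is an illustrative check, not part of the original solution:

```python
import math

# Full-precision values taken from the fsolve output above.
A1, A2 = 0.7131482910221459, 1.005133537436899
x1, x2 = 0.24760397941963827, 0.8192161683583419

# Exact moments integral_0^1 x^k e^x dx for k = 0..3, as used in the system.
moments = [math.e - 1, 1.0, math.e - 2, 6 - 2 * math.e]

# A two-node rule with 4 free parameters fitted to 4 moments has algebraic
# degree 3: the rule should reproduce each moment up to the solver residual.
errors = [abs(A1 * x1**k + A2 * x2**k - m) for k, m in enumerate(moments)]
```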
https://github.com/Vortezz/fiches-mp2i-maths
https://raw.githubusercontent.com/Vortezz/fiches-mp2i-maths/main/chapter_6.typ
typst
#set page(header: box(width: 100%, grid(
  columns: (100%),
  rows: (20pt, 8pt),
  align(right, text("CHAPTER 6. REAL NUMBERS")),
  line(length: 100%),
)), footer: box(width: 100%, grid(
  columns: (50%, 50%),
  rows: (8pt, 20pt),
  line(length: 100%),
  line(length: 100%),
  align(left, text("<NAME> - MP2I")),
  align(right, text("<NAME> - 2023/2024")),
)))

#set heading(numbering: "I.1")

#let titleBox(title) = align(center, block(below: 50pt, box(height: auto, fill: rgb("#eeeeee"), width: auto, inset: 40pt, text(title, size: 20pt, weight: "bold"))))

#titleBox("Real numbers")

= $NN$ and $ZZ$

== Natural numbers

The natural numbers are defined by structural induction: we have $0$, together with the successor relation $S(n) = n+1$.

We have the *induction axiom*: $(P(0) and (forall n in NN, P(n) ==> P(n+1)) ==> (forall n in NN, P(n)))$

*Fundamental property of $NN$*: _Every non-empty subset of $NN$ that is bounded above admits a maximum._

It follows that every non-empty subset of $NN$ admits a minimum.

*The fundamental property of $NN$ is equivalent to the induction axiom.*

Moreover, $NN$ is equipped with $cases(+ "commutative and associative", times "commutative, associative and distributive over " +)$

== Integers

To construct $ZZ$, we symmetrize every $n in NN$. The operations $+$ and $times$ extend to $ZZ$, so $ZZ$ is a *ring* ($+$ associative, commutative, with neutral element $0$ and an opposite $-a$ for each $a$; $times$ associative, distributive over $+$, with neutral element $1$). We also say that $(A, +)$ is an *abelian group*.

= $QQ$

== Construction of $QQ$

The class $overline((a,b))$ of the pair $(a,b)$ is written $a/b$ with $(a,b) in ZZ times NN^*$. The laws defined on $ZZ times NN^*$ by $(a,b)+(c,d)=(a d + b c, b d) "and" (a,b) times (c, d) = (a c, b d)$ pass to the quotient, so we have: $a/b + c/d = (a d + b c)/(b d)$ and $a/b times c/d = (a c)/(b d)$.
Properties and laws of $QQ$:
- $+$ and $times$ are commutative and associative
- $times$ is distributive over $+$
- $0 = 0/1$ is neutral for $+$, and $a/b$ admits an opposite $-a/b$
- $a/b = 0$ if and only if $a=0$
- $1 = 1/1$ is neutral for $times$, and every $a/b != 0$ admits an inverse $b/a$

Thus $QQ$ is a *field*.

If $q=a/b$ and $r=c/d$ (with $(b,d) in (NN^*)^2$), then the sign of $a d - b c$ does not depend on the representatives $(a,b,c,d)$ chosen for $q$ and $r$. Thus $q <= r$ if and only if $a d - b c <= 0$. The relation $<=$ defined in this way is a *total order* on $QQ$.

= $RR$

== Existence of non-rational numbers

Let $(x,y) in (RR^*)^2$. We say that $x$ and $y$ are *incommensurable* if $x/y$ is irrational.

If $n$ is not a perfect square, $sqrt(n)$ is irrational.

== The ordered set $RR$

We obtain $RR$ by "filling" the holes of $QQ$: consider $E = {x in QQ | x^2 <= 2}$; then $E$ is bounded and admits no supremum in $QQ$. In constructing $RR$, we fill these holes by completing $QQ$ with the suprema of all non-empty bounded sets.

*Fundamental property of $RR$*: _Every non-empty subset $E$ of $RR$ that is bounded above admits a supremum in $RR$._

We deduce that _every non-empty subset $E$ of $RR$ that is bounded below admits an infimum in $RR$_.

== Absolute value, positive and negative parts

We write $|x|$ for the *absolute value* of $x$, defined by $|x| = cases(x "if" x >= 0, -x "if" x < 0)$. It is useful in particular for bounding $A$ between $-B$ and $B$: indeed, $-B <= A <= B$ is equivalent to $|A|<=B$.

We write $x^+$ for the *positive part* of $x$, with $x^+ = max(0,x)$, and $x^-$ for the *negative part* of $x$, with $x^- = -min(0,x) = max(0, -x)$.
And we have the following properties:
- $x^+ >= 0$ and $x^- >= 0$
- $x^+ = 0$ or $x^- = 0$
- $x = x^+ - x^-$
- $|x| = x^+ + x^-$

== Reminders on operations and inequalities

The order relation on $RR$ satisfies:
- It is a _total order_ relation
- $forall (x,y) in RR^2, x <= y <==> y-x in RR^+$
- $forall (x,y) in (RR^+)^2, x + y >= 0$, with equality iff $x=y=0$
- $forall (x,y) in (RR^+)^2, x y >= 0$

Let $x, y in RR$. By the *rule of signs*:
- If $x >= 0$ and $y <= 0$, or $x <= 0$ and $y >= 0$, then $x y <= 0$
- If $x <= 0$ and $y <= 0$, then $x y >= 0$

Let $a, b, c, d in RR$. Then:
- If $a <= b$ and $c <= d$, then $a + c <= b + d$, with equality iff $a = b$ and $c = d$
- If $a <= b$, then $-b <= -a$
- If $a <= b$ and $c <= d$, then $a - d <= b - c$
- If $a >= 0$ and $c <= d$, then $a c <= a d$
- If $a <= 0$ and $c <= d$, then $a c >= a d$
- If $0 < a <= b$ and $0 < c <= d$, then $0 < a c <= b d$, with equality if and only if $a = b$ and $c = d$
- Otherwise, for products of inequalities we reduce to reasoning on absolute values, reinstating signs afterwards.

To obtain inequalities one can:
- Move everything to one side
- Proceed by studying a function
- Use a convexity or concavity property
- Use the classical inequalities

Let $a, b in RR$. We have:
- $|a+b| <= |a| + |b|$ (*triangle inequality*)
- $|a+b| >= ||a| - |b||$ (*second triangle inequality*)

It follows that $||a|-|b|| <= |a-b| <= |a|+|b|$ and that $|sum_(i in I) a_i| <= sum_(i in I) |a_i| $

Let $x_1, ..., x_n, y_1, ..., y_n$ be reals. By the *Cauchy-Schwarz inequality*:
$ |sum_(i=1)^n x_i y_i|^2 <= (sum_(i=1)^n x_i^2)(sum_(i=1)^n y_i^2) $
with equality if and only if the vectors $(x_1, ..., x_n)$ and $(y_1, ..., y_n)$ are collinear.

We admit the *arithmetic-geometric mean inequality*: for all $X in (RR^(+*))^n$,
$ 1/n (x_1 + ... + x_n) >= root(n, x_1 ...
x_n) $ == Division euclidienne dans $RR$ D'après la *propriété d'Archimède*, pour tout $x in RR$ et tout $y in RR^(+*)$, il existe $n in NN$ tel que $x < n y$. Elle est reformulable en disant que pour tout $y > 0$, la suite $(n y)_(n in NN)$ tend vers $+ infinity$ lorsque $n$ tend vers $+ infinity$. Pour tout $x > 0$ et tout $y > 0$, il existe un rationnel $r$ tel que $0 < r x < y$. Soit $x, y in RR^(+*)$, il existe un unique $n in NN$ tel que $n y <= x < (n + 1) y$. Et il existe un unique $n' in NN$ tel que $n' y < x <= (n' + 1) y$. Sauf quand $x/y$ est entier, $n=n'$. Le résultat se généralise à $x$ négatif. On a la *division euclidienne*, ainsi : - Soit $x in RR$ et $y in RR^(+*)$, il existe un unique entier $n$ et un unique réel $r in [0, y[$ tel que $x=n y + r$ - Soit $x in RR$ et $y in RR^(*)$, il existe un unique entier $n$ et un unique réel $r in [0, |y|[$ tel que $x=n y + r$ == Densité de $QQ$ et $RR \\ QQ$ dans $RR$ Un sous-ensemble $E$ de $RR$ est *dense* dans $RR$ si, pour tout $(x,y) in RR^2$ tel que $x<y$, il existe $z in E$ tel que $x<z<y$. Les ensembles $QQ$ et $RR \\ QQ$ sont denses dans $RR$. == Partie entière, partie décimale La *partie entière* d'un réel $x$, notée $floor(x)$, est le quotient de la division euclidienne de $x$ par $1$. Le reste de cette division est parfois noté ${x}$, appelé *partie décimale*. Soit $x in RR$, - $floor(x) = max({n in ZZ | n <= x})$ - $floor(x) = min({n in ZZ | n > x}) - 1$ - $floor(x)$ est l'unique entier tel que $floor(x) <= x < floor(x)+1$ - $floor(x)$ est l'unique entier tel que $x - 1 < floor(x) <= x$ On définit parfois aussi la *partie entière par excès*, notée $ceil(x)$, comme étant le plus petit entier supérieur ou égal à $x$ : $ceil(x) = min({n in ZZ | n >= x})$.
Pour tout $x in RR$, - $ceil(x) = cases(floor(x) + 1 "si" x in.not ZZ, floor(x) "si" x in ZZ)$ - $floor(-x) = -ceil(x)$ On a les propriétés suivantes pour la partie entière : - $forall x,y in RR, floor(x) + floor(y) + 1 >= floor(x+y) >= floor(x) + floor(y)$ - $forall x,y in RR^+, floor(x y) >= floor(x) floor(y)$ - $forall x in RR, forall n in ZZ, floor(x + n) = floor(x) + n$ == Représentation décimale On note $DD$ l'ensemble des *nombres décimaux*, c'est-à-dire des réels $x$ tels qu'il existe $n in NN$ tel que $10^n x$ est entier. Étant donné $n in NN$, on note $DD_n$ l'ensemble des décimaux tels que $10^n x in ZZ$. Soit $x in RR, n in NN^*$, il existe un unique élément $y_n in DD_n$ tel que $y_n <= x < y_n + 10^(-n)$. - $y_n$ est appelé *valeur approchée décimale* à la précision $10^(-n)$ *par défaut*. - $y_n + 10^(-n)$ est appelé *valeur approchée décimale* à la précision $10^(-n)$ *par excès*. Pour tout $n in NN^*$, il existe $a_n in [|0, 9|]$ tel que $y_n - y_(n-1) = a_n/10^n$. Soit $x in RR^+$, il existe des entiers $a_n in [|0,9|]$, pour $n in ZZ$, tels que : - Il existe $n_0 in ZZ$ tel que $forall n < n_0, a_n = 0$ - $ x = sum_(n = - infinity)^(+ infinity) a_n 10^(-n) = sum_(n = n_0)^0 a_n 10^(-n) + sum_(n = 1)^(+ infinity) a_n 10^(-n) = sum_(n = n_0)^0 a_n 10^(-n) + lim_(N -> + infinity) sum_(n = 1)^N a_n 10^(-n) $ - Sauf si $a_n = 9$ à partir d'un certain rang, on a alors : $ floor(x) = sum_(n = n_0)^0 a_n 10^(-n) " et" " " y_N = floor(x) + sum_(n = 1)^N a_n 10^(-n) $ Soit $x in RR^*$, alors : - Si $x$ n'est pas décimal, alors $x$ admet un unique développement décimal. - Si $x$ est décimal, $x$ admet deux développements décimaux exactement, l'un terminant uniquement par des $9$, l'autre uniquement par des $0$. On appelle *développement décimal propre* de $x$ l'unique développement de $x$ si $x$ n'est pas décimal, ou l'unique développement de $x$ terminant par des $0$ si $x$ est décimal. Ainsi, tout réel admet un unique développement décimal propre.
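The decimal approximations by default and by excess described above are easy to check numerically. Below is a minimal sketch in Python (the helper name `approx_decimale` is my own choice), using exact `Fraction` arithmetic so that `y_n = floor(10**n * x) / 10**n` is computed without floating-point error:

```python
from fractions import Fraction
import math

def approx_decimale(x: Fraction, n: int) -> tuple[Fraction, Fraction]:
    """Decimal approximations of x at precision 10**-n: (by default, by excess)."""
    scale = 10 ** n
    y_n = Fraction(math.floor(x * scale), scale)  # y_n <= x < y_n + 10**-n
    return y_n, y_n + Fraction(1, scale)

# Example: x = 1/3 at precision 10**-3.
lo, hi = approx_decimale(Fraction(1, 3), 3)
assert lo == Fraction(333, 1000) and hi == Fraction(334, 1000)
assert lo <= Fraction(1, 3) < hi
```

Each digit of the proper expansion can then be recovered as `10**n * (y_n - y_(n-1))`, in line with the characterisation above.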
= Intervalles == Description des intervalles Soit $E$ un sous-ensemble de $RR^n$, on dit que $E$ est *convexe* si et seulement si pour tout couple de points $A$ et $B$ de $E$, le segment $[A B]$ est entièrement inclus dans $E$. Un *intervalle* $I$ de $RR$ est un sous-ensemble convexe $I$ de $RR$, c'est-à-dire tel que : $forall (a,b) in I^2, forall x in RR, a <= x <= b ==> x in I$ Tout intervalle $I$ de $RR$ est d'une des formes suivantes, pour certaines valeurs réelles $a$ et $b$ : #grid( columns: (50%, 50%), { list( $[a, b] = {x in RR, a <= x <= b}, a <= b$, $]a, b[ = {x in RR, a < x < b}, a < b$, $[a, b[ = {x in RR, a <= x < b}, a < b$, $]a, b] = {x in RR, a < x <= b}, a < b$, $[a, + infinity[ = {x in RR, x >= a}$, ) }, { list( $]a, + infinity[ = {x in RR, x > a}$, $]- infinity, b] = {x in RR, x <= b}$, $]- infinity, b[ = {x in RR, x < b}$, $]- infinity, + infinity[ = RR$, $emptyset$, ) } ) On dit qu'un intervalle est : - *ouvert* s'il est de la forme $]a, b[, ]a, + infinity[, ]- infinity, b[, RR "ou" emptyset$ - *fermé* s'il est de la forme $[a, b], [a, + infinity[, ]- infinity, b], RR "ou" emptyset$ - *semi-ouvert* s'il est de la forme $[a, b[ "ou" ]a, b]$ == Intervalles et topologie La notion d'intervalle est liée à celle de topologie. On donnera une brève introduction de ces notions dans $RR^n$, en utilisant la distance euclidienne canonique : $d(X,Y) = sqrt(sum^n_(i=1) (y_i-x_i)^2)$. Si $x, y in RR$, on a $d(x,y) = |x-y|$. Soit $X in RR^n$ et $r in RR^+$ : - La *boule ouverte* de centre $X$ et de rayon $r$ : $B(X,r) = circle(B)(X,r) = {Y in RR^n, d(X,Y) < r}$. - La *boule fermée* de centre $X$ et de rayon $r$ : $overline(B)(X,r) = {Y in RR^n, d(X,Y) <= r}$. Dans $RR$, les boules ouvertes et fermées sont des intervalles : $B(x,r) = ]x-r, x+r[$ et $overline(B)(x,r) = [x-r, x+r]$. De fait, _tout intervalle ouvert borné est une boule ouverte_, et _tout intervalle fermé borné est une boule fermée_.
En effet, $]a,b[ = B((a+b)/2,(b-a)/2)$ et $[a,b] = overline(B)((a+b)/2,(b-a)/2)$. La majoration de certaines _valeurs absolues_ se traduit par l'_appartenance à une boule fermée_, ainsi $|x-a| <= r$ est équivalent à $x in overline(B)(a,r)$, et par l'_appartenance à une boule ouverte_, ainsi $|x-a| < r$ est équivalent à $x in B(a,r)$. Soit $x in RR^n$, un *voisinage* de $x$ est un ensemble $V$ tel qu'il existe une boule ouverte centrée en $x$ et incluse dans $V$ : $ exists epsilon > 0, B(x,epsilon) subset V, "i.e." exists epsilon > 0, forall y in RR^n, d(x,y) < epsilon ==> y in V $ En gros, $V$ est un voisinage de $x$ si $x$ est à _l'intérieur_ de $V$, et non sur le bord. Un *ouvert* $U$ de $RR^n$ est un sous-ensemble $U$ de $RR^n$ tel que pour tout $x in U$, $U$ est un voisinage de $x$. $U subset RR^n$ est ouvert ssi : $forall x in U, exists epsilon > 0, B(x,epsilon) subset U$. Un sous-ensemble $F$ de $RR^n$ est _fermé_ si et seulement si _son complémentaire dans $RR^n$ est ouvert_. On a : - _Toute union quelconque d'ouverts est un ouvert_ - _Toute intersection *finie* d'ouverts est un ouvert_ - _Toute intersection quelconque de fermés est un fermé_ - _Toute union *finie* de fermés est un fermé_ On a les contre-exemples suivants : - Pour une intersection infinie d'ouverts, on a $sect.big_(n=1)^(+ infinity) ]-1/n,1[ = [0,1[$ - Pour une union infinie de fermés, on a $union.big_(n=1)^(+ infinity) [1/n, 1] = ]0,1]$ = Droite achevée $overline(RR)$ La *droite achevée* $overline(RR)$ est l'ensemble $RR union {- infinity, + infinity}$. On peut prolonger l'ordre de $RR$ à $overline(RR)$ en posant $- infinity <= x <= + infinity$ pour tout $x in overline(RR)$.
On peut prolonger partiellement les opérations $+$ et $times$ à $overline(RR)$ en posant : - $- (+ infinity) = - infinity$ - $forall x in overline(RR)\\{- infinity}, x + (+ infinity) = + infinity$ - $forall x in overline(RR)\\{+ infinity}, x + (- infinity) = - infinity$ - $1/(+ infinity) = 1/(- infinity) = 0$ - $forall x in overline(RR)^*_+, x times (+ infinity) = + infinity, x times (- infinity) = - infinity$ - $forall x in overline(RR)^*_-, x times (+ infinity) = - infinity, x times (- infinity) = + infinity$ #emoji.warning *Attention* : Certaines opérations ne sont pas définies ; ce sont les _formes indéterminées_ de la somme et du produit : $- infinity + (+ infinity), 0 times (+infinity) "et" 0 times (- infinity)$ On peut étendre la _propriété fondamentale de $RR$_ à $overline(RR)$ : _tout sous-ensemble de $overline(RR)$ admet une borne supérieure dans $overline(RR)$_. De même, _tout sous-ensemble de $overline(RR)$ admet une borne inférieure dans $overline(RR)$_.
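Incidentally, IEEE-754 floating-point arithmetic implements essentially this partial extension of $+$ and $times$, with the indeterminate forms returning NaN ("not a number"). A quick Python check (illustrative only; floats are of course not the extended real line):

```python
import math

inf = float("inf")

# Defined operations on the extended real line, mirrored by floats:
assert -inf < 0 < inf            # the order extends to -oo and +oo
assert 1.0 + inf == inf          # x + (+oo) = +oo  (for x != -oo)
assert 1.0 / inf == 0.0          # 1/(+oo) = 0
assert 2.0 * inf == inf          # x > 0  =>  x * (+oo) = +oo
assert -3.0 * inf == -inf        # x < 0  =>  x * (+oo) = -oo

# The indeterminate forms are exactly the cases left undefined (NaN):
assert math.isnan(inf + (-inf))  # (+oo) + (-oo)
assert math.isnan(0.0 * inf)     # 0 * (+oo)
```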
https://github.com/DieracDelta/presentations
https://raw.githubusercontent.com/DieracDelta/presentations/master/polylux/book/src/dynamic/pause.typ
typst
#import "../../../polylux.typ": * #set page(paper: "presentation-16-9") #set text(size: 50pt) #polylux-slide[ first #pause second #pause third ]
https://github.com/spherinder/ethz-infk-thesis
https://raw.githubusercontent.com/spherinder/ethz-infk-thesis/master/pages/outline.typ
typst
#let align-helper(state-key, what-to-measure, display) = style(styles => { let max-width = state(state-key, 0pt) let this-width = measure(what-to-measure, styles).width max-width.update(max => calc.max(this-width, max)) locate(loc => { display(max-width.final(loc), this-width) }) }) #let presets = ( // outrageous preset for a Table of Contents outrageous-toc: ( font-weight: ("bold", auto), font-style: (auto,), vspace: (22pt, none), fill: (none, repeat[~.]), fill-right-pad: .4cm, fill-align: true, body-transform: none, page-transform: none, ), ) #let show-entry( entry, font-weight: presets.outrageous-toc.font-weight, font-style: presets.outrageous-toc.font-style, vspace: presets.outrageous-toc.vspace, fill: presets.outrageous-toc.fill, fill-right-pad: presets.outrageous-toc.fill-right-pad, fill-align: presets.outrageous-toc.fill-align, body-transform: presets.outrageous-toc.body-transform, page-transform: presets.outrageous-toc.page-transform, label: <outrageous-modified-entry>, state-key: "outline-page-number-max-width", ) = { fill-right-pad = if fill-right-pad == none { 0pt } else { fill-right-pad } let max-width = state(state-key, 0pt) if entry.at("label", default: none) == label { entry // prevent infinite recursion } else { let font-weight = font-weight.at(calc.min(font-weight.len(), entry.level) - 1) let font-style = font-style.at(calc.min(font-style.len(), entry.level) - 1) let vspace = vspace.at(calc.min(vspace.len(), entry.level) - 1) let fill = fill.at(calc.min(fill.len(), entry.level) - 1) set text(weight: font-weight) if font-weight not in (auto, none) set text(style: font-style) if font-style not in (auto, none) if vspace != none { v(vspace, weak: true) } let fields = entry.fields() if body-transform != none { let new-body = body-transform(entry.level, entry.body) fields.body = if new-body == none { entry.body } else { new-body } } if page-transform != none { let new-page = page-transform(entry.level, entry.page) fields.page = if new-page == none { 
entry.page } else { new-page } } if fill in (none, auto) or not fill-align { if fill != auto { fields.fill = if fill == none { none } else { box(width: 100% - fill-right-pad, fill) } } [#outline.entry(..fields.values()) #label] } else { align-helper( state-key, entry.page, (max-width, this-width) => { let fields = fields fields.fill = box(width: 100% - (max-width - this-width) - fill-right-pad, fill) [#outline.entry(..fields.values()) #label] } ) } } } #let outline_page() = { // TODO Needed, because context creates empty pages with wrong numbering set page( numbering: "i", ) show outline.entry: show-entry outline( depth: 3, indent: true, fill: repeat(text(". ")), ) }
https://github.com/mem-courses/calculus
https://raw.githubusercontent.com/mem-courses/calculus/main/note-2/1.级数.typ
typst
#import "../template.typ": * #import "@preview/xarrow:0.3.0": xarrow #show: project.with( course: "Calculus II", course_fullname: "Calculus (A) II", course_code: "821T0160", semester: "Spring-Summer 2024", title: "Note #1 级数", authors: ( ( name: "<NAME>", email: "<EMAIL>", id: "3230104585", ), ), date: "January 24, 2024", ) #let ss = [$display(attach(inline(sum), t: +oo, b: n=1))$] #let sf(x) = [$display(attach(inline(sum), t: +oo, b: n=#x))$] #let sss = [$display(attach(sum, t: +oo, b: n=1))$] #let ssf(x) = [$display(attach(sum, t: +oo, b: n=#x))$] #let int = math.integral #let dx = [$dif x$] #let dy = [$dif y$] = 数项级数 #definition[ 给定数列 ${a_n}$,将其每一项依次用“$+$”号连接起来的表达式 $a_1 + a_2 + dots.c + a_n + dots.c = ss a_n$ 称为#def[无穷级数]。 ] #definition[ 在级数 $ss a_n$ 中,前 $n$ 项的和 $display(S_n = a_1 + a_2 + dots.c + a_n)$ 称为它的第 $n$ 个#def[部分和]。所得到的数列 ${S_n}$ 称为#def[部分和数列]。 ] == 级数收敛的定义与判别 #definition[ 对于级数 $ss a_n$,若它的部分和数列 ${S_n}$ 收敛,则称该无穷级数#def[收敛]。且称 $S=display(lim_(n->oo) S_n = S)$ 为该级数的#def[和];若数列 ${S_n}$ 发散,则称该无穷级数#def[发散]。 ] #theorem(name: "柯西收敛准则")[ 级数 $ss a_n$ 收敛的充要条件是:$forall eps > 0,sp exists N>0$,当 $n>N$ 时,$forall p in NN_+$,均有 $ |S_(n+p) - S_n| = |a_(n+1) + a_(n+2) + dots.c + a_(n+p)| < eps $ ] #example[ #problem[ 证明级数 $sss display(1/(n^2))$ 收敛。 ] #solution(tag: "证明")[ 当 $k>=2$ 且 $k in NN_+$ 时,有 $display(1/(k^2) < 1/((k-1)k) = 1/(k-1) - 1/k)$,对任意 $p in NN_+$ 有 $ sum_(k=n+1)^(n+p) 1 / (k^2) < sum_(k=n+1)^(n+p) 1 / ((k-1)k) = sum_(k=n+1)^(n+p) ( 1 / (k-1) - 1 / k ) = 1 / n - 1 / (n+p) < 1 / n $ 所以 $forall eps >0,sp exists N = display([1/eps])$ 满足当 $n>N$ 时,$forall p in NN_+$ 均有 $display(sum_(k=n+1)^(n+p) 1/(k^2) < 1/n < eps)$,则根据柯西收敛准则,级数 $display(sum_(n=1)^(+oo) 1/(n^2))$ 收敛。 ] ] == 收敛级数的基本性质 #theorem[ 若级数 $ss a_n, ss b_n$ 均收敛,则对任意 $k_1,k_2 in RR$,$ss (k_1 a_n + k_2 b_n)$ 也收敛,且 $ sss (k_1 a_n + k_2 b_n) = k_1 sss a_n + k_2 sss b_n $ ] #corollary[ 若级数 $ss a_n$ 收敛且级数 $ss b_n$ 发散,则级数 $ss (a_n + b_n)$ 发散。 ] #theorem[ 去掉、添加或修改级数中的有限项,不影响级数的敛散性。 #proof[ 
由柯西收敛准则可知级数的敛散性与其前有限项无关。 ] ] #theorem(name: "收敛级数的结合性")[ 收敛级数任意添加括号后所得级数仍然收敛,且其和不变。 #proof[ 原级数收敛,故其部分和数列 ${S_n}$ 收敛。而添加括号后的级数的部分和数列 ${sigma_n}$ 是 ${S_n}$ 的一个子列,故 ${sigma_n}$ 收敛,且极限与 ${S_n}$ 的极限相同。 ] #caution[ 这一结论对于发散级数并不成立,如 $(1-1)+(1-1)+dots.c$ 收敛但原级数发散。 ] ] #theorem(name: "级数收敛的必要条件")[ 若级数 $ss a_n$ 收敛,则 $display(lim_(n->+oo) a_n) = 0$。 #caution[ 这是必要条件而非充分条件,例: 级数 $sss display(ln(1+1/n))$ 的一般项趋于 $0$ 而级数发散。 ] ] == 正项级数 #definition[ 若级数 $ss a_n$ 中的各项均有 $a_n>0$,则该级数为#def[正项级数]。 ] #theorem(name: "正项级数收敛的充要条件")[ 正项级数收敛的充要条件是它的部分和数列 ${S_n}$ 有上界。 #proof[ 可由单调有界定理导出。 ] ] == 正项级数的敛散性判别 #theorem(name: "比较判别法")[ 对于正项级数 $ss a_n,ss b_n$,若存在 $N$ 使得当 $n>N$ 时 $a_n<=k b_n sp (k>0)$ 恒成立,则: (1) 若 $ss b_n$ 收敛,则 $ss a_n$ 必收敛。 (2) 若 $ss a_n$ 发散,则 $ss b_n$ 必发散。 ] \ 比较判别法也可用极限形式表示。 #theorem(name: "极限判别法")[ 对于正项级数 $ss a_n, ss b_n$,如果 $display(lim_(n->+oo) (a_n)/(b_n)) = l$,那么 (1) 当 $0<l<+oo$ 时,级数 $ss a_n, ss b_n$ 具有相同的敛散性; (2) 当 $l=0$ 时,若 $ss b_n$ 收敛,则 $ss a_n$ 也收敛; (3) 当 $l=+oo$ 时,若 $ss b_n$ 发散,则 $ss a_n$ 也发散。 #proof[ (1) 由极限定义,存在 $N>0$,当 $n>N$ 时有 $display(l/2 b_n < a_n < (3 l)/2 b_n)$,由比较判别法知两级数具有相同的敛散性。 (2) 当 $l=0$ 时,存在 $N>0$,当 $n>N$ 时有 $a_n < b_n$,由比较判别法即得。 (3) 当 $l=+oo$ 时,$display(lim_(n->+oo) (b_n)/(a_n)) = 0$,由 (2) 知若 $ss a_n$ 收敛则 $ss b_n$ 收敛;待证的“若 $ss b_n$ 发散,则 $ss a_n$ 发散”正是它的逆否命题。 ] ] #note[ 这一定理的记忆可以类比无穷小的概念:若 $l=1$,则 $a_n$ 是 $b_n$ 的等价无穷小;若 $l=0$,则 $a_n$ 是 $b_n$ 的高阶无穷小。一般来说,我们找个同阶无穷小来判定敛散性会比较方便。 ] #example[ #problem[ 判断级数 $sss display((1/n - ln(1+1/n)))$ 的敛散性。 ] #solution[ 因为当 $x->0$ 时,$display(x-ln(1+x) = x - (x - x^2/2 + o(x^2)) = x^2/2 + o(x^2))$。所以 $ lim_(x->0) (x-ln(1+x)) / (x^2) = 1 / 2 $ 从而有 $ lim_(n->+oo) n^2 (1 / n - ln(1+1/n)) = 1 / 2 $ 而级数 $sss display(1/(n^2))$ 收敛,因此级数 $sss display((1/n - ln(1+1/n)))$ 也收敛。 ] ] \ #property[ 通常称 $sss display(1/(n^p))$ 为 #def[$p$-级数]。当 $p<=1$ 时,$p$-级数发散;当 $p>1$ 时,$p$-级数收敛。 #proof[ 当 $p<=1$ 时,$display(1/(n^p)>=1/n)$,而调和级数发散,因此当 $p<=1$ 时,级数 $sss display(1/(n^p))$ 发散。 当 $p>1$ 时,对函数 $f(x)=display(1/(x^(p-1)) sp (x>=1))$ 在区间 $[n,n+1]$ 上应用拉格朗日中值定理,$exists theta_n in (0,1)$ 使得 $ 1 / ((n+1)^(p-1)) - 1 / (n^(p-1)) = f(n+1)-f(n) = f'(n+theta_n) = - (p-1) / (n+theta_n)^p $ 则 $ 1 / (p-1) (1 / (n^(p-1)) - 1 /
((n+1)^(p-1))) = 1 / ((n+theta_n)^p) > 1 / ((n+1)^p) $ 又正项级数 $sss display((1/(n^(p-1)) - 1/((n+1)^(p-1))))$ 收敛,根据比较判别法,当 $p>1$ 时,级数 $sss display(1/(n^p))$ 收敛。 ] ] #theorem(name: "比值判别法,达朗贝尔判别法")[ 对于正项级数 $ss a_n$,若 $display(lim_(n->+oo) (a_(n+1))/(a_n)) = l$,则 (1) 当 $0<=l<1$ 时,级数 $ss a_n$ 收敛; (2) 当 $l>1$ 时,级数 $ss a_n$ 发散; (3) 当 $l=1$ 时,比值判别法失效。 // #proof[] ] #theorem(name: "根值判别法,柯西判别法")[ 对于正项级数 $ss a_n$,若 $display(lim_(n->+oo) root(n,a_n)) = l$,则 (1) 当 $0<=l<1$ 时,级数 $ss a_n$ 收敛; (2) 当 $l>1$ 时,级数 $ss a_n$ 发散; (3) 当 $l=1$ 时,根值判别法失效。 // #proof[] ] #theorem(name: "积分判别法")[ 设函数 $f(x)$ 在 $[1, +oo)$ 上非负且单调递减,则级数 $ss f(n)$ 与反常积分 $display(int_1^(+oo) f(x) dx)$ 具有相同的敛散性。 // #proof[] ] == 一般项级数的敛散性判别 === 交错级数与莱布尼茨判别法 #definition[ 若 $a_n>0 sp(n=1,2,dots.c)$,则称 $ss (-1)^(n-1) a_n$ 为#def[交错级数]。 ] #theorem(name: "莱布尼茨判别法")[ 若交错级数 $ss (-1)^(n-1) a_n sp(a_n>0)$ 满足 (1) 数列 ${a_n}$ 单调递减,即 $a_(n+1)<=a_n sp (n=1,2,dots.c)$; (2) $display(lim_(n->+oo) a_n = 0)$。 则交错级数 $ss (-1)^(n-1) a_n$ 收敛且其和 $S<=a_1$。 #proof[ $ S_(2m) = (a_1-a_2)+(a_3-a_4)+dots.c+(a_(2m-1)-a_(2m)) $ 由 ${a_n}$ 单调递减知 ${S_(2m)}$ 单调增。另一方面: $ S_(2m) = a_1 - (a_2-a_3) - (a_4-a_5)-dots.c-(a_(2m-2)-a_(2m-1))-a_(2m)<a_1 $ 根据单调有界定理知 $S_(2m)$ 有上界,故 ${S_(2m)}$ 收敛。记 $dp(lim_(m->+oo) S_(2m) = S<=a_1)$。 而 $S_(2m+1)=S_(2m)+a_(2m+1)$,由条件 $dp(lim_(n->+oo) a_n)=0$ 和极限的四则运算法则得 $dp(lim_(m->+oo) S_(2m+1)=S)$。故原级数收敛且其和为 $S$。 ] ] #example[ #problem[ 判断级数 $display(sum_(n=2)^(+oo) ((-1)^n)/(sqrt(n) + (-1)^n))$ 的敛散性。 ]\ #solution[ 由于 $display(((-1)^n)/(sqrt(n) + (-1)^n) = ((-1)^n (sqrt(n) - (-1)^n))/(n-1) = (-1)^n sqrt(n)/(n-1) - 1/(n-1))$。根据莱布尼茨判别法可知级数 $display(sum_(n=2)^(+oo) (-1)^n sqrt(n)/(n-1))$ 收敛,而调和级数 $display(sum_(n=2)^(+oo) 1/(n-1))$ 发散。故原级数发散。 ] ] === 利用级数绝对收敛的性质 #definition[ 若级数 $ss |a_n|$ 收敛,则称级数 $ss a_n$ #def[绝对收敛],若级数 $ss a_n$ 收敛而级数 $ss |a_n|$ 发散,则称级数 $ss a_n$ #def[条件收敛]。 ] #theorem(name: "级数的绝对收敛准则")[ 若级数 $ss a_n$ 绝对收敛,则级数 $ss a_n$ 必收敛。 #proof[ 【*法一*】注意到 $0<=a_n+|a_n|<=2|a_n|$,因为 $ss |a_n|$ 收敛,所以 $ss 2|a_n|$ 也收敛,由比较判别法得 $ss
(|a_n|+a_n)$ 收敛。而 $a_n=(|a_n|+a_n)-|a_n|$,由级数性质知 $ss a_n$ 收敛。 【*法二*】如果级数 $ss a_n$ 绝对收敛,由柯西收敛准则,$forall eps > 0$,$exists N > 0$,当 $n>N$ 时,$forall p in NN_+$ 有 $ |a_(n+1)| + |a_(n+2)| + dots.c + |a_(n+p)| < eps $ 因此 $ |a_(n+1) + a_(n+2) + dots.c + a_(n+p)| <= |a_(n+1)| + |a_(n+2)| + dots.c + |a_(n+p)| < eps $ 故根据柯西收敛准则,级数 $ss a_n$ 收敛。 ] ] 通过这一性质,我们可将级数 $ss a_n$ 的判别转化为对 $ss |a_n|$ 的判别。 \ 这种方法常常与比值判别法、根值判别法等方法连用,来解决一般级数的敛散性判别问题。 #theorem(name: "绝对值的比值判别法")[ 对于一般级数 $ss a_n$,若 $display(lim_(n->+oo) abs(a_(n+1))/abs(a_n) = l)$ 存在,则: (1) $l < 1$ 时,级数绝对收敛; (2) $l > 1$(包括 $l = +oo$)时,级数发散; (3) $l = 1$ 时,绝对值的比值判别法失效。 #note[ 第 (1), (3) 条都可以由比值判别法和绝对收敛准则导出。第 (2) 条成立主要是因为 $l>1$ 时, $ display(lim_(n->+oo) |a_n| = +oo) != 0 $ 故 $display(lim_(n->+oo) a_n)!=0$ 也成立,级数必不收敛。 ] ] #theorem(name: "绝对值的根值判别法")[ 对于一般级数 $ss a_n$,若 $display(lim_(n->+oo) root(n,abs(a_n)) = l)$,则: (1) $l<1$ 时,级数绝对收敛; (2) $l>1$(包括 $l=+oo$ 时),级数发散; (3) $l=1$ 时,绝对值的根值判别法失效。 ] #example[ #problem[ 判断级数的敛散性:$display(sss ((-1)^(n-1))/n x^n)$。 ] #solution[ $ lim_(n->+oo) abs(u_(n+1)/u_n) = lim_(n->+oo) n / (n+1) abs(x) = abs(x) $ 可见,$|x|<1$ 时级数(绝对)收敛;$|x|>1$ 时级数发散; $x=1$ 时,级数 $display(sss ((-1)^(n-1))/n x^n = sss ((-1)^(n-1))/n)$ 收敛; $x=-1$ 时,级数 $display(sss ((-1)^(n-1))/n x^n = sss (-1)/n)$ 发散。 总之,级数当 $|x|<1$ 时绝对收敛,当 $x=1$ 时条件收敛,$|x|>1$ 或 $x=-1$ 时发散。 ] ] \ 绝对收敛级数具有和普通有限项和数相仿的两个运算性质——交换律和分配律成立,请看下面的两个定理。 #definition[ 把自然数数列 ${1,2,dots.c,n,dots.c}$ 到它自身的一一映射 $f:n->k(n)$ 称为自然数数列的重排。 ] #theorem(name: "绝对收敛的条件下无限个数相加的加法交换律")[ 设级数 $ss a_n$ 绝对收敛,且其和等于 $S$,则任意重排得到的级数 $ss a_n'$ 也绝对收敛,且有相同的和数 $S$。 #note[ 由条件收敛级数重排得到的新级数可能发散,即使收敛也不一定收敛于原来的和数,甚至可以收敛于任何事先指定的数。(黎曼定理) #example[ 如有结论 $ln 2 = display(1-1/2+1/3-1/4+1/5-1/6+dots.c)$。 两边同乘 $display(1/2)$ 得 $dp(1/2 ln 2 = 1/2 - 1/4 + 1/6 - 1/8 + dots.c)$。 两式相加有 $dp(3/2 ln 2 = 1+1/3 - 1/2 + 1/5 + 1/7 - 1/4 + dots.c)$。这是原级数的重排。虽然重排后的级数仍然收敛,但是其和发生了变化。 ] ] ] #theorem(name: "柯西定理")[ 设级数 $ss a_n = A$ 和 $ss b_n = B$ 都绝对收敛,则它们的柯西乘积按任意顺序排列所得到的级数 $ss c_n$ 也绝对收敛,且其和为 $A B$。 #corollary(name: "梅尔腾斯定理*")[
若级数 $ss a_n = A$ 与 $ss b_n = B$ 中有一个绝对收敛,另一个条件收敛,则它们的柯西乘积(按照对角线顺序取)所组成的级数仍收敛,且其和为 $A B$。 ] ] // === 阿贝尔判别法与狄利克雷判别法 // #lemma(name: "阿贝尔变换")[ // 设 ${a_n}$ 与 ${b_n}$ 是两数列,即 $B_n = display(sum_(k=1)^n b_k)$,则 // $ sum_(k=1)^n a_k b_k = a_n B_n + sum_(k=1)^(n-1) (a_k-a_(k+1)) B_k $ // ] // #corollary(name: "阿贝尔引理")[ // 若数列 ${a_n}$ 与级数 $ss b_n$ 的部分和数列 ${B_n}$ 满足: // (1) 数列 ${a_n}$ 单调,且 $exists A>0$,$forall n in NN_+$ 均有 $|a_n| <= A$; // (2) $exists eps>0$,$forall n in NN_+$,均有 $|B_n| < eps$; // 则 $forall n in NN_+$,有 $display(lr(| sum_(k=1)^n a_k b_k <= 3 A eps |))$。 // #proof[ // 根据阿贝尔变换, // $ // lr(|sum_(k=1)^n a_k b_k|) // &= lr(| a_n B_n + sum_(k=1)^(n-1) (a_k - a_(k+1)) B_k |) // &= lr(|a_n B_n|) + sum_(k=1)^(n-1) |a_k - a_(k+1)| dot.c |B_k| // $ // ] // ] == 函数项级数\* #definition[ 设 $u_1(x),u_2(x),dots.c,u_n(x),dots.c$ 是定义在 $I in RR$ 上的函数,则 $ ss u_n(x) = u_1(x)+u_2(x)+dots.c+u_n(x) + dots.c $ 称为定义在区间 $I$ 上的#def[函数项级数]。 ] #definition[ 如果 $x_0 in I$,数项级数 $ss u_n(x_0)$ 收敛,则称 $x_0$ 为级数 $ss u_n(x)$ 的#def[收敛点],否则称为#def[发散点]。 ] #definition[ 函数项级数 $ss u_n(x)$ 的所有收敛点的全体称为#def[收敛域],所有发散点的全体称为#def[发散域]。 ] #definition[ 称 $ S_n = u_1(x) + u_2(x) + dots.c + u_n(x) $ 为函数项级数 $ss u_n(x)$ 的(第 $n$ 个)#def[部分和函数]。若 $dp(lim_(n->+oo) S_n(x) = S(x))$,则称 $S(x)$ 为函数项技术 $ss u(x)$ 的#def[和函数]。 ] #note[ 在判定函数项级数的敛散性时,常常与比值判别法、根值判别法连用。对于 $l=1$ 的特殊情况,此时 $x$ 一般为确定值,再用数项级数的方法进行判断。 ] == 幂级数 #definition[ 称函数项级数 $ a_0+a_1 (x-x_0)+dots.c+a_n (x-x_0)^n +dots.c = ssf(0) a_n (x-x_0)^n $ 为关于 $x-x_0$ 的#def[幂级数],其中 $a_n$ 称为#def[幂级数系数]。 特别地,取 $x_0=0$,称 $ a_0 +a_1 x + dots.c + a_n x^n = ssf(0) a_n x^n $ 为关于 $x$ 的幂级数。 显然,任何幂级数 $sf(0) a_n x^n$ 在 $x=0$ 处收敛。 ] === 幂级数的收敛半径与收敛域 #theorem(name: "阿贝尔定理")[ (1) 如果级数 $sf(0) a_n x^n$ 在 $x=x_1(x_1!=0)$ 处收敛,则它在满足不等式 $|x|<|x_1|$ 的一切 $x$ 处绝对收敛。 (2) 如果级数 $sf(0) a_n x^n$ 在 $x=x_2$ 处发散,则它在满足不等式 $|x|>|x_2|$ 的一切 $x$ 处发散。 #proof[ 下证 (1)。因为 $sf(0) a_n x_1^n$ 收敛,所以 $dp(lim_(n->+oo) a_n x_1^n=0)$,$exists M>0$,使得 $|a_n x_1^n|<=M sp (n=0,1,2,dots.c)$。 由于 $|x|<|x_1|$,有 $dp(|x/x_1|<1)$, $ 
|a_n x^n| = abs(a_n x_1^n dot (x^n)/(x_1^n)) = |a_n x_1^n| dot abs(x/x_1)^n <= M abs(x/x_1)^n $ 而等比级数 $dp(ssf(0) M abs(x/x_1)^n)$ 收敛,故 $sf(0) a_n x^n$ 收敛,因此级数 $ss a_n x^n$ 绝对收敛。 得到 (1) 后通过反证法可证明 (2)。 ] ] 可以发现,幂级数 $ss a_n x^n$ 的收敛域是一个以 $x=0$ 为中心的区间。 #definition[ 设幂级数 $ss a_n x^n$ 的收敛域的长度为 $2R$,则称 $R$ 为幂级数 $ss a_n x^n$ 的#def[收敛半径],称 $(-R,R)$ 为该幂级数的#def[收敛区间]。 ] #theorem[ 设幂级数 $ss a_n x^n$ 的所有系数 $a_n!=0$,且 $dp(lim_(n->+oo) abs(a_(n+1)/a_n)) = rho$ 或 $dp(lim_(n->+oo) root(n,abs(a_n))) = rho$,则幂级数 $ss a_n x^n$ 的收敛半径 $R$ 满足 (1) 当 $0<rho<+oo$ 时,收敛半径 $R=dp(1/rho)$; (2) 当 $p=0$ 时,收敛半径 $R = +oo$; (3) 当 $rho=+oo$ 时,收敛半径 $R=0$。 #note[ 直接计算,就是 $dp(R = lim_(n->+oo) abs(a_n/a_(n+1)))$ 或 $R = dp(lim_(n->+oo) 1/root(n,abs(a_n)))$。 如果不满足所有系数 $a_n!=0$,则通过修改下面的证明过程来完成。该证明过程需要掌握(期中必考)。 ] #proof[ 以 $dp(lim_(n->+oo) abs(a_(n+1)/a_n))=rho$ 的情况举例:如果 $0<rho<+oo$, - 当 $abs(x)<dp(1/rho)$ 时,$rho abs(x)<1$,$ss a_n x^n$ 绝对收敛; - 当 $abs(x)>dp(1/rho)$ 时,$rho abs(x)>1$,$ss abs(a_n x^n)$ 发散,故 $ss a_n x^n$ 发散。 故收敛半径 $R = dp(1/rho)$。类似的,可以处理 $rho=0$ 和 $rho=+oo$ 的情况。 ] ] #tip[ 【求幂级数的收敛域】 - 先根据上面的做法来确定收敛半径 $R$。(童雯雯老师要求写上:所以,收敛半径为 $(-R,R)$) - 检验 $x=-R$ 和 $x=+R$ 时是否收敛,从而确定收敛域。 ] === 幂级数的运算 #theorem[ 设 $sf(0) a_n x^n$ 和 $sf(0) b_n x^n$ 在 $x=0$ 的某邻域内相等,则它们的同幂次项相等,即 $ a_n=b_n sp (n=0,1,2,dots.c) $ ] #theorem[ 设 $sf(0) a_n x^n$ 和 $sf(0) b_n x^n$ 的收敛半径分别为 $R_1$ 和 $R_2$,则令 $R=min{R_1,R_2}$,有以下运算成立: - 加法:$sf(0) a_n x^n + sf(0) b_n x^n = ss (a_n + b_n) x^n quad (|x|<R)$; - 减法:$sf(0) a_n x^n - sf(0) b_n x^n = ss (a_n - b_n) x^n quad (|x|<R)$; - 数乘:$lambda sf(0) a_n x^n = sf(0) lambda a_n x^n, sp |x|<R_1$,其中 $lambda$ 是常数; - 乘法:$(sf(0) a_n x^n) (sf(0) b_n x^n) = (sf(0) c_n x^n), sp |x|<R$,其中 $c_n=dp(sum_(k=0)^n a_k b_(n-k))$。 #note[ 可以理解为,在收敛半径内,幂级数符合多项式的若干性质。 ] ] === 幂级数的分析性质 #theorem[ 设幂级数 $sf(0) a_n x^n$ 的收敛半径为 $R>0$,且 $S(x) = sf(0) a_n x^n, sp |x|<R$,则 1. $S(x)$ 在幂级数的收敛区间 $(-R,R)$ 内连续。即 $ forall x_0 in (-R,R), quad dp(lim_(x->x_0) S(x) = S(x_0)) $ 2. 
幂级数 $sf(0) a_n x^n$ 在 $(-R,R)$ 内可以逐项微分、逐项积分,且经过这些运算后所得的幂级数与原幂级数有相同的收敛半径(但是收敛域可能改变)。即 $ S'(x) = ssf(0) (a_n x^n)' = ssf(0) n a_n x^(n-1), sp |x|<R $ $ int_0^x S(x) dif x = ssf(0) int_0^x a_n x^n dif x = ssf(0) dp(a_n/(n+1) x^(n+1)), sp |x|<R $ #corollary[ 设 $S(x)$ 是幂级数 $sf(0) a_n x^n$ 在收敛区间 $(-R,R)$ 内的和函数,则 $S(x)$ 在 $(-R,R)$ 内任意阶可导,且可逐项求导,收敛半径仍是 $R$。 ] #corollary[ 若幂级数 $sf(0) a_n x^n$ 在收敛区间的端点 $x=R$ 处收敛,则 $S(x) = sf(0) a_n x^n$ 在 $x=R$ 处左方连续,即 $dp(lim_(x->R^-) sf(0) a_n x^n = sf(0) a_n R^n)$,或 $dp(lim_(x->R^-) S(x) = S(R))$。对应到另一侧同理。 ] #example[ #problem[ 求幂级数 $sss dp(x^(2n+1)/(2n))$ 的收敛域。 ] #solution[ 设 $S(x) = sss dp(x^(2n)/(2n))$,则 $S(0)=0$,$sss dp(x^(2n+1)/(2n)) = x S(x)$。注意到 $ S'(x) = (sss (x^(2n)) / (2n))' =sss x^(2n-1) = x / (1-x^2), sp |x|<1\ => S(x) = int_0^x S'(x) dif x = int_0^x x / (1-x^2) dif x = -1 / 2 ln(1-x^2),sp |x|<1 $ 所以收敛区间为 $(-1,1)$。当 $x=pm 1$ 时,此级数为 $pm sss dp(1/(2n))$ 发散,所以收敛域为 $(-1,1)$。 ] ] ] === 将函数展开成幂级数 #theorem[ 如果函数 $f(x)$ 在 $U(x_0, delta)$ 内具有任意阶导数,_且_在 $U(x_0,delta)$ 内能展开成 $(x-x_0)$ 的幂级数,即 $f(x) = sf(0) a_n (x-x_0)^n,space x in (x_0 - delta, x_0 + delta)$,则其系数 $a_n = display(1/(n!) f^((n))(x_0)),space n=0,1,2,dots.c$,且展开式是唯一的。 #tip[ 据此,我们可以通过函数的展开式来求函数的 $n$ 阶导数。 #caution[ 不要忽略阶乘! ] ] ] ==== 函数能展开成泰勒级数的条件 #definition[ 称 $display(f(x) = f(x_0)+f'(x_0)(x-x_0) + (f''(x_0))/(2!) (x-x_0)^2 + dots.c + (f^((n))(x_0))/(n!) (x-x_0)^n + dots.c),space x in (x_0-delta, x_0+delta)$ 为 $f(x)$ 在 $x=x_0$ 处的#def[泰勒级数]。当 $x_0=0$ 时,称这一级数为#def[麦克劳林级数]。 ] #theorem(name: "函数能展开成泰勒级数的充要条件")[ 设函数 $f(x)$ 在点 $x_0$ 的某一邻域内具有各阶导数,则 $f(x)$ 在该邻域内能展开成泰勒级数的充分必要条件是 $f(x)$ 的泰勒公式中的余项 $R_n (x)=display((f^((n+1))(xi))/((n+1)!) (x-x_0)^(n+1))$ 当 $n->+oo$ 时趋向于 $0$。此时有 $ f(x) = ssf(0) (f^((n)) (x_0)) / (n!)
(x-x_0)^n $ #proof[ *必要性*:若 $R_n (x)$ 的极限不为 $0$,则幂级数的和函数不能收敛到定值 $f(x)$。 *充分性*:即证 $display(lim_(n->+oo)) S_n (x) = f(x)$,由泰勒公式知 $f(x) = S_n (x) + R_n (x)$,故 $display(lim_(n->+oo)) S_n (x) = display(lim_(n->+oo)) (f(x) - R(x)) = f(x) - 0 = f(x)$。 ] ] ==== 将函数展开成幂级数的方法 直接求导的方法自不必说。 #example[ #problem[ 将 $f(x)=arctan x$ 展开成 $x$ 的幂级数。 ] #solution[ $ f'(x) = 1 / (1+x^2)=1-x^2+x^4-x^6+dots.c+(-1)^n x^(2n) + dots.c,space abs(x)<1 $ $ => arctan x = arctan 0 + int_0^x 1 / (1+x^2) dif x = x - x^3 / 3+x^5 / 5-x^7 / 7+dots.c+( -1 )^n (x^(2n+1)) / (2n+1) + dots.c,space abs(x)<1 $ ] ] #example[ #problem[ 将 $f(x)=display(x/(1+x-2x^2))$ 展开成 $x$ 的幂级数。 ] #solution[ $ f(x) = x / ((1-x)(1+2x)) = 1 / 3 (1 / (1-x) - 1 / (1+2x)) $ 其中 $display(1/(1-x) = ssf(0) x^n\,space abs(x)<1)$,$display(1/(1+2x) = ssf(0) (-2)^n x^n \,space abs(2x)<1)$,则 $ f(x)=1 / 3 ssf(0) (x^n - (-2)^n x^n) = ssf(0) (1-(-2)^n) / 3 x^n, space abs(x)<1 / 2 $ ] ] ==== 利用幂级数展开求函数积分 利用幂级数展开求解不定积分的情况: #example[ #problem[ 求 $display(int (sin x)/x) dif x$ ] #solution[ $ sin x = ssf(0) (-1)^n (x^(2n+1)) / ((2n+1)!),space x in (-oo,+oo) => (sin x) / x = ssf(0) (-1)^n (x^(2n)) / ((2n+1)!),space x in (-oo,+oo)\ => "原式" = C + ssf(0) (-1)^n (x^(2n+1)) / ((2n+1) dot ((2n+1)!)),space x in (-oo,+oo) $ ] ] 应用此方法,也可以求函数的定积分: #example[ #problem[ 求 $display(int_0^1 e^(-x^2) dif x)$。 ] #solution[ $ e^(-x^2) = ssf(0) ((-1)^n x^(2n)) / (n!) ,space x in (-oo,+oo) \ => "原式" = lr((ssf(0) ((-1)^n x^(2n+1))/((2n+1) dot n!))|)_0^1 = ssf(0) ((-1)^n) / ((2n+1) dot n!) 
$ ] ] == 傅里叶级数 === 三角级数 简谐振动:$y = A sin(omega x + phi) = a cos omega x + b sin omega x$,周期为 $T$。 简谐振动的叠加:$y_k =a_k cos k omega x + b_k sin k omega x$,周期为 $display(T/k)$。则 $ss y_k=ss (a_k cos k omega + b_k sin k omega x)$ 以 $T$ 为周期。 三角级数可以理解为无数个简谐振动的叠加。 #definition[ 将下式称为#def[三角级数]: $ (a_0) / 2 + sss (a_n cos n omega x + b_n sin n omega x) " 或 " (a_0) / 2 + sss (a_n cos (n pi x) / l + b_n sin (n pi x) / l) $ 其中,$a_n,b_n$ 称为#def[三角级数的系数]。将 $ &1,cos omega x,sin omega x,cos 2 omega x,sin 2 omega x, dots.c,cos n omega,sin n omega x,dots.c\ " 或 "& 1,cos (pi x) / l,sin (pi x) / l,cos (2 pi x) / l,sin (2 pi x) / l, dots.c,cos (n pi x) / l,sin (n pi x) / l,dots.c $ 称为#def[三角函数系]。 ] #theorem(name: "三角函数系的正交性")[ 三角函数系中的任意两个不同函数的乘积在 $[-l,l]$ 上的积分等于零。 ] #theorem(name: "三角函数的和函数与系数的关系")[ 设在整个数轴上有 $ f(x) = (a_0) / 2 + sss (a_n cos (n pi x) / l + b_n sin (n pi x) / l) $ 并设上式右端逐项可积,且乘以 $display(cos (n pi x)/l)$ 或 $display(sin (n pi x)/l)$ 后仍逐项可积,则 $ a_n &= 1 / l int_(-l)^l f(x) cos (n pi x) / l dif x, space n=0,1,2,3,dots.c\ b_n &= 1 / l int_(-l)^l f(x) sin (n pi x) / l dif x, space n=1,2,3,dots.c\ $ #proof[ TBD ] ] === 傅里叶级数 #definition(name: "傅里叶级数")[ 设 $f(x)$ 是以 $2l$ 为周期的周期函数,且 $f(x)$ 在 $[-l,l]$ 上可积,称 $ a_n &= 1 / l int_(-l)^l f(x) cos (n pi x) / l dif x, space n=0,1,2,3,dots.c\ b_n &= 1 / l int_(-l)^l f(x) sin (n pi x) / l dif x, space n=1,2,3,dots.c\ $ 为 $f(x)$ 的#def[傅里叶系数],称三角级数 $ (a_0) / 2 + sss (a_n cos (n pi x) / l + b_n sin (n pi x) / l) $ 为 $f(x)$ 的#def[傅里叶级数]。记作 $ f(x) sim (a_0) / 2 + sss (a_n cos (n pi x) / l + b_n sin (n pi x) / l) $ #warning[ 注意两者之间不能简单的划等号,因为我们并不能保证这一函数项级数收敛。 ] ] #definition(name: [逐段光滑])[ 称函数 $f(x)$ 是#def[逐段光滑]的,当且仅当 $f(x)$ 对任意有限区间 $[a,b]$,存在有限个点,将区间 $[a,b]$ 分成有限个子区间,使得函数 $f(x)$ 在每个子区间内连续,且有连续的导数 $f'(x)$,而在这些子区间的端点处 $f(x)$ 及 $f'(x)$ 最坏只能是第一类间断点。 ] #theorem(name: "狄利克雷定理")[ 设 $f(x)$ 是以 $2l$ 为周期的周期函数,且 $f(x)$ 在 $[-l,l]$ 上_逐段光滑_,则 $f(x)$ 的傅里叶级数在每一点 $x$ 上都收敛,且和函数满足 $ S( x ) = cases( f(x)\, quad & x "是 " f(x) "的连续点" , display((f(x-0)+f(x+0))/2)\, quad& 
x "是 " f(x) "的第一类间断点", display((f(-l+0)+f(l-0))/2)\,quad& x=pm l ) $ 其中 $display(f(a-0)=lim_(x->a^-) f(x)\,space f(a+0)=lim_(x->a^+) f(x))$。 ] #definition(name: "正弦傅里叶级数")[ 当 $f(x)$ 为奇函数时,它的傅里叶系数为 $ a_n &= 0, space n=0,1,2,3,dots.c\ b_n &= 2 / l int_0^l f(x) sin (n pi x) / l dif x, space n=1,2,3,dots.c $ 这时 $f(x) sim display(sss b_n sin (n pi x)/l)$ 称为 $f(x)$ 的#def[正弦傅里叶级数],简称#def[正弦级数]。 ] #definition(name: "余弦傅里叶级数")[ 当 $f(x)$ 为偶函数时,它的傅里叶系数为 $ a_n &= 2 / l int_0^l f(x) cos (n pi x) / l dif x, space n=0,1,2,3,dots.c\ b_n &= 0, space n=1,2,3,dots.c $ 这时 $f(x) sim display(a_0/2 + sss a_n cos (n pi x)/l)$ 称为 $f(x)$ 的#def[余弦傅里叶级数],简称#def[余弦级数]。 ] === 周期延拓 #definition[ 对定义在 $[-l,l]$ 上的函数 $f(x)$ 作#def[周期延拓]得 $ F(x) = cases( f(x)\,quad &x in [-l,l), f(x-2k l)\,quad &x in (-oo,-l) union [l,+oo) ) $ 对其进行傅里叶展开得 $ f(x) sim (a_0) / 2 + sss (a_n cos (n pi x) / l + b_n sin (n pi x) / l) $ ] === 奇延拓和偶延拓 求解定义在 $[0,l]$ 上的函数 $f(x)$ 的傅里叶级数时,可以将其延拓到 $(-l,l)$ 上,从而求得傅里叶级数。 $ f(x),space x in [0,l] xarrow("延拓") F(x) = cases( f(x)\,quad& x in [0,l), "任意函数" \,quad& x in (-l,0) ) $ 为了计算方便,我们一般将 $f(x)$ 延拓成奇函数或偶函数,这就是奇延拓与偶延拓。 #definition(name: "奇延拓")[ 对定义在 $[0,l]$ 上的函数 $f(x)$ 作#def[奇延拓]得 $ F(x) = cases( f(x)\,quad &x in (0,l), 0\,quad& x=0, -f(-x)\,quad &x in (-l,0) ) $ 对其进行傅里叶展开得 $ F(x) sim sss b_n sin (n pi x) / l\,space -l<=x<=l $ 从而得 $f(x)$ 的#def[正弦傅里叶级数]。 $ f(x) sim sss b_n sin (n pi x) / l\,space 0<=x<=l $ ] #definition(name: "偶延拓")[ 对定义在 $[0,l]$ 上的函数 $f(x)$ 作#def[偶延拓]得 $ F(x) = cases( f(x)\,quad &x in [0,l), f(-x)\,quad &x in (-l,0) ) $ 对其进行傅里叶展开得 $ F(x) sim a_0 / 2 + sss a_n cos (n pi x) / l\,space -l<=x<=l $ 从而得 $f(x)$ 的#def[余弦傅里叶级数]。 $ f(x) sim a_0 / 2 + sss a_n cos (n pi x) / l\,space 0<=x<=l $ ]
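The Fourier coefficient formulas above can be sanity-checked numerically. The sketch below (in Python; the composite midpoint-rule integrator and the name `fourier_coeffs` are my own choices) approximates `a_n` and `b_n` for `f(x) = x` on `[-pi, pi]` and compares them with the classical values `a_n = 0`, `b_n = 2*(-1)**(n+1)/n`:

```python
import math

def fourier_coeffs(f, l, n, samples=50_000):
    """Approximate the Fourier coefficients (a_n, b_n) of f on [-l, l]
    using the composite midpoint rule."""
    h = 2 * l / samples
    a = b = 0.0
    for k in range(samples):
        x = -l + (k + 0.5) * h  # midpoint of the k-th subinterval
        a += f(x) * math.cos(n * math.pi * x / l) * h
        b += f(x) * math.sin(n * math.pi * x / l) * h
    return a / l, b / l

# f(x) = x on [-pi, pi]: a_n = 0 (odd function) and b_n = 2 * (-1)**(n + 1) / n.
for n in range(1, 4):
    a_n, b_n = fourier_coeffs(lambda x: x, math.pi, n)
    assert abs(a_n) < 1e-6
    assert abs(b_n - 2 * (-1) ** (n + 1) / n) < 1e-6
```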
https://github.com/r8vnhill/keen-manual
https://raw.githubusercontent.com/r8vnhill/keen-manual/main/installation.typ
typst
BSD 2-Clause "Simplified" License
= Installation This section guides you through the process of setting up the Keen framework and the EvolutionPlotter in your Kotlin project using Gradle Kotlin DSL. == Gradle Kotlin DSL Setup === Step 1: Specify Versions in `gradle.properties` First, define the versions of Keen and Compose in your `gradle.properties` file. Make sure to replace these with the latest versions available. ```properties # gradle.properties # Keen framework version. Replace with the latest version. keen.version=1.1.0 # Compose version for the EvolutionPlotter. Replace at your discretion. compose.version=1.5.11 ``` === Step 2: Configure Plugin Management in `settings.gradle.kts` This step is essential only if you plan to use the EvolutionPlotter. Here, you configure the plugin management for the Compose plugin. ```kotlin // settings.gradle.kts pluginManagement { repositories { // Standard Gradle plugin repository. gradlePluginPortal() // Repository for JetBrains Compose. maven("https://maven.pkg.jetbrains.space/public/p/compose/dev") // Google's Maven repository, sometimes needed for dependencies. google() } plugins { // Apply the Compose plugin with the specified version. id("org.jetbrains.compose") version extra["compose.version"] as String } } ``` === Step 3: Configure Project Plugins, Repositories, and Dependencies In your build script, configure the necessary plugins, repositories, and dependencies. ```kotlin // Retrieve the Keen version defined earlier. val keenVersion = extra["keen.version"] as String plugins { /* ... */ // Include this only if using the EvolutionPlotter. id("org.jetbrains.compose") } repositories { // Maven Central repository for most dependencies. mavenCentral() /* ... */ } dependencies { // Keen core library dependency. implementation("cl.ravenhill:keen-core:$keenVersion") // Compose dependency, required for the EvolutionPlotter. implementation(compose.desktop.currentOs) /* ... 
*/ } ``` == Additional Notes: - Ensure that the versions specified in `gradle.properties` are compatible with your project setup. - The `pluginManagement` block in `settings.gradle.kts` is crucial for resolving the Compose plugin, especially if you're using features like the EvolutionPlotter. - Remember to sync your Gradle project after making changes to these files to apply the configurations.
https://github.com/Myriad-Dreamin/typst.ts
https://raw.githubusercontent.com/Myriad-Dreamin/typst.ts/main/fuzzers/corpora/math/accent_01.typ
typst
Apache License 2.0
#import "/contrib/templates/std-tests/preset.typ": * #show: test-page $ x &= p \ dot(x) &= v \ dot.double(x) &= a \ dot.triple(x) &= j \ dot.quad(x) &= s $
https://github.com/typst/packages
https://raw.githubusercontent.com/typst/packages/main/packages/preview/supercharged-dhbw/1.2.0/declaration-of-authorship.typ
typst
Apache License 2.0
#let declaration-of-authorship(authors, title, date, language) = { pagebreak() v(2em) text(size: 20pt, weight: "bold", if (language == "de") { "Selbstständigkeitserklärung" } else { "Declaration of Authorship" }) v(1em) par(justify: true, [ Gemäß Ziffer 1.1.13 der Anlage 1 zu §§ 3, 4 und 5 der Studien- und Prüfungsordnung für die Bachelorstudiengänge im Studienbereich Technik der Dualen Hochschule Baden-Württemberg vom 29.09.2017. Ich versichere hiermit, dass ich meine Arbeit mit dem Thema: ]) v(1em) align(center, text(weight: "bold", title) ) v(1em) par(justify: true, [ selbstständig verfasst und keine anderen als die angegebenen Quellen und Hilfsmittel benutzt habe. Ich versichere zudem, dass die eingereichte elektronische Fassung mit der gedruckten Fassung übereinstimmt. ]) v(3em) text([#if (language == "de") { [#authors.map(author => author.company.city).dedup().join(", ", last: " und "), #date.display( "[day].[month].[year]" )]} else { [#authors.map(author => author.company.city).dedup().join(", ", last: " and "), #date.display( "[day].[month].[year]" )]}]) for author in authors { v(5em) line(length: 40%) author.name } }
https://github.com/Breadp4ck/typst-nstu-template
https://raw.githubusercontent.com/Breadp4ck/typst-nstu-template/main/nstu_template/template.typ
typst
// NSTU report template by <NAME> (https://github.com/Breadp4ck) // with help from Tombleron (https://github.com/Tombleron) #let project( faculty: none, // ФПМИ, АВТф students: (), teachers: (), department: none, discipline: none, task_type: none, task_name: none, variant: none, group: none, team: none, year: none, show_title_page: true, body, ) = { // Set the document's basic properties. set document(author: students, title: task_name) set text(font: ("Raleway", "Fira Code"), lang: "ru", weight: "regular") if show_title_page { page( background: image("images/border.png", width: 90%), align(center + horizon)[ #set text(size:14pt, font: ("Raleway")) // Information about work #align(center + top)[ #image("images/eagle.png", width: 20%) #v(3em, weak: true) Федеральное государственное бюджетное \ образовательное учреждение высшего образования \ #smallcaps(["Новосибирский государственный технический университет"]) \ #v(3em, weak: true) #if {faculty == "ФПМИ"} [#image("images/logo_fami.png")] #if {faculty == "АВТФ"} [#image("images/logo_avtf.png")] #v(3em, weak: true) #department #v(1.5em, weak: true) #if {task_type != none} [ #task_type \ #if {discipline != none } [ по дисциплине "#discipline" ] ] else if {discipline != none } [ Дисциплина "#discipline" ] #v(1.5em, weak: true) #if {task_name != none} [ #text(weight: "semibold", smallcaps(task_name)) ] ] // Information about students #grid( columns: (5cm, 1fr), align(left + top)[ #if {group != none} [ Группа: #group \ ] #if {team != none} [ Бригада: #team \ ] #if {variant != none} [ Вариант: #variant \ ] ], align(left + top)[ #for student in students [ #upper(student) \ ] ], ) // Information about teachers #grid( columns: (5cm, 1fr), align(left + top)[ #if {teachers.len() != 0} [ Преподаватели: \ ] ], align(left + top)[ #for teacher in teachers [#upper(teacher)] ], ) #align(center + bottom)[ #if {year != none} [Новосибирск, #year] else [Новосибирск] ] ])} // = Main body config set page(numbering: "1", 
number-align: center) set heading(numbering: "1.") show heading: set block(above: 1em, below: 1em) // = Paragraph config // Add paragraph indent (TODO: with bug #311 workaround) set par(justify: true, first-line-indent: 1.8em ) show par: set block(spacing: 0.5em) show heading: it => { it par(text(size:0.00em, h(0.0em))) } show figure: it => { it par(text(size:0.00em, h(0.0em))) } show image: it => { it par(text(size:0.00em, h(0.0em))) } // Add table caption to top right show figure.where(kind: table): it => box(width:100%)[ #v(if it.has("gap") {it.gap} else {0.65em}) #set align(right) #set par(hanging-indent: 1cm, justify: true) #pad(x: 0cm)[#it.caption] #align(center)[#it.body] ] // = Math config set math.equation( numbering: "(1)", supplement: none, ) show ref: it => { // provide custom reference for equations if it.element != none and it.element.func() == math.equation { // optional: wrap inside link, so whole label is linked link(it.target)[(#it)] } else { it } } body } // = Code config // Import nice listing library (remove when not needed) #import "@preview/codelst:1.0.0": sourcecode #let sourcecode = sourcecode.with( frame: block.with( fill: white, radius: 1em, inset: (rest: 0.8em), stroke: (paint: gray, thickness: 1pt, dash: "solid") ) ) // Paste code from file // TODO: This approach feels so wrong... #let lst(source, lang) = { // TODO: Workaround for line indent set par(justify: true, first-line-indent: .0em ) set text(size:10pt, font: ("Fira Code")) // Breaks numbering on long strings without spaces sourcecode(lang: lang)[#raw(read("../" + source))] } // = Funny emphasizings #let amazed(term) = box[✨ #term ✨] #let blazed(term) = box[🔥 #term 🔥] #let zapped(term) = box[⚡ #term ⚡] #let exploded(term) = box[💥 #term 💥] #let celebrated(term) = box[🎉 #term 🎉]
https://github.com/benjamineeckh/kul-typst-template
https://raw.githubusercontent.com/benjamineeckh/kul-typst-template/main/src/core/component/abstract.typ
typst
MIT License
#import "../../page-utils.typ" #let insert-abstract(abstract, lang:"en") = context { heading( level: 1, numbering: none, outlined: true, if lang == "en"{ "Abstract" }else{ "Samenvatting" } ) if abstract != none { abstract pagebreak(weak: true) }else{ let t = if lang == "nl"{ "PLAATSHOUDER VOOR SAMENVATTING" }else{ "PLACEHOLDER FOR ABSTRACT" } text(purple, size:3em)[#t] set text(red) lorem(200) pagebreak(weak: true) } }
https://github.com/npikall/typst-templates
https://raw.githubusercontent.com/npikall/typst-templates/main/templates/skript.typ
typst
// Latex Style Template with simple cover page #let conf( language:"en", title: none, doc ) = { // set the Pagelayout set page( paper:"a4", margin: ( left: 2.5cm, right: 2.5cm, top: 2.5cm, bottom: 2.5cm ), footer: context{ if counter(page).get().first() > 1 [ #h(1fr) #counter(page).display( "1", both: false, ) #h(1fr)]}, header: context{ if counter(page).get().first() > 1 and title != none [ #set text(10pt) #h(1fr) #emph(title) #line(length: 100%, stroke: 0.7pt)]} , header-ascent: 30%, ) // Set the Heading Numbering set heading(numbering: "1.") // Set the paragraph style set par(leading: 0.55em, first-line-indent: 1.8em, justify: true) // Set the font style let mainfont = "CMU Serif" let rawfont = "Dejavu Sans Mono" set text( font: mainfont, size: 10pt, ) show raw: set text(font: rawfont) show par: set block(above: 1.4em, below: 1em) // Referencing Figures show figure.where(kind: table): set figure(supplement:[Tab.], numbering: "1") if language == "aut" show figure.where(kind: image): set figure(supplement:[Abb.], numbering: "1",) if language == "aut" // Set Table style set table( stroke: none, gutter: auto, fill: none, inset: (right: 1.5em), ) // Set Table of Contents Style show outline.entry.where( level: 1, ): it => { v(11pt, weak: true) strong(it.body) box(width: 1fr, repeat[]) strong(it.page) } show outline.entry.where( level: 2, ): it => { it.body box(width: 1fr, repeat[.]) it.page} show outline.entry.where( level: 3, ): it => { it.body box(width: 1fr, repeat[.]) it.page} // List indentation set list(indent: 1em) set enum(indent: 1em) // Document doc } #let maketitle( title:[], subtitle:[The important subtitle], author:[], date: datetime.today().display("[day].[month].[year]"), bottom:[Engineering\ Fantasy University], logo:none, ) = { // titlepage with the logo if logo != none [ #image(logo, width: 40%) #align(center,[ #v(2.5cm) #text(20pt)[#title] #v(1em) #text(16pt)[#subtitle] #v(3em) #text(14pt)[#author] #v(1fr) #text(14pt)[ #bottom\ \ #date] ]) 
#pagebreak()] else [ // titlepage without the logo #align(center,[ #v(5cm) #text(20pt)[#title] #v(1em) #text(16pt)[#subtitle] #v(3em) #text(14pt)[#author] #v(1fr) #text(14pt)[ #bottom\ \ #date]]) #pagebreak() ] }
https://github.com/cadojo/correspondence
https://raw.githubusercontent.com/cadojo/correspondence/main/src/rolo/src/affiliations.typ
typst
MIT License
#let affiliation( number: none, name: none, department: none, address: none, city: none, region: none, country: none, postal-code: none, url: none, ) = ( number: number, name: name, department: department, address: address, city: city, region: region, country: country, postal-code: postal-code, url: url )
https://github.com/MattiaOldani/Informatica-Teorica
https://raw.githubusercontent.com/MattiaOldani/Informatica-Teorica/master/capitoli/alias.typ
typst
// Setup #import "@preview/ouset:0.1.1": overset // Alias // Matematica #let dominio(funzione) = $op("Dom")_funzione$ #let immagine(funzione) = $op("Im")_(funzione)$ #let composizione = $circle.stroked.tiny$ // Dati e programmi #let dati = $"DATI"$ #let programmi = $"PROG"$ // Funzione coppia di Cantor #let cantor(..params) = { let add = params.pos().join(", ") $angle.l add angle.r$ } #let cantorsin = $op("sin")$ #let cantordes = $op("des")$ #let listlength = $op("length")$ // Sistemi di calcolo #let ram = $"RAM"$ #let mwhile = $"WHILE"$ // Sistema ram #let inc(reg) = $reg arrow.long.l reg + 1$ #let subsus(reg) = $reg arrow.long.l reg overset(-,.) 1$ #let ifgoto(reg,number) = $"IF" reg = 0 "THEN GOTO" number$ #let istr(index) = $"Istr"_index$ #let istruzioni = $"ISTR"$ #let stati = $"STATI"$ #let iniziale = $S_("iniziale")$ #let inizializzazione = $"in"$ #let cod = $op("cod")$ #let ar = $"Ar"$ // Sistema while #let composto = $"begin" C_1";"dots";"C_n "end"$ #let comandowhile = $"while" x_k eq.not 0 "do" C$ #let wstato(nome) = $underline(nome)$ #let wstati = $W"-STATI"$ #let winizializzazione = $"w-in"$ #let wcomandi = $"W-COM"$ #let wprogrammi = $W"-"programmi$ // Traduttori #let c1programmi = $C_1"-"programmi$ #let c2programmi = $C_2"-"programmi$ #let compilatore = $op("Comp")$ #let colorcode(color,code) = text(fill: color)[$#code$] #let proiezione(index,lista) = $op("Proj")(index,lista)$ #let macroincr(index,lista) = $op("incr")(index,lista)$ #let macrodecr(index,lista) = $op("decr")(index,lista)$ // Definizione formale di calcolabilità #let elem = $"ELEM"$ #let comp = $"COMP"$ #let rp = $"RP"$ #let ricprim = $"RICPRIM"$ #let lfor = $"FOR"$ #let min = $"MIN"$ #let arresto(programma) = $"AR"_programma$ #let ristretto = $overset(P,§)$ // DTM #let blank = "blank" // Classi di complessità #let dtime = $italic("DTIME")$ #let ftime = $italic("FTIME")$ #let dspace = $italic("DSPACE")$ #let fspace = $italic("FSPACE")$ #let fp = $italic("FP")$ #let fl = $italic("FL")$ 
#let exptime = $italic("EXPTIME")$ #let cent = $¢$ // Classi di complessità non deterministiche #let ntime = $italic("NTIME")$ #let NP = $italic("NP")$ #let NPC = $italic("NPC")$ #let PC = $italic("PC")$
https://github.com/GYPpro/DS-Course-Report
https://raw.githubusercontent.com/GYPpro/DS-Course-Report/main/Rep/04.typ
typst
#import "@preview/tablex:0.0.6": tablex, hlinex, vlinex, colspanx, rowspanx #import "@preview/codelst:2.0.1": sourcecode // Display inline code in a small box // that retains the correct baseline. #set text(font:("Times New Roman","Source Han Serif SC")) #show raw.where(block: false): box.with( fill: luma(230), inset: (x: 3pt, y: 0pt), outset: (y: 3pt), radius: 2pt, ) #show raw: set text( font: ("consolas", "Source Han Serif SC") ) #set page( paper: "a4", ) #set text( font:("Times New Roman","Source Han Serif SC"), style:"normal", weight: "regular", size: 13pt, ) #let nxtIdx(name) = box[ #counter(name).step()#counter(name).display()] #set math.equation(numbering: "(1)") #show raw.where(block: true): block.with( fill: luma(240), inset: 10pt, radius: 4pt, ) #set math.equation(numbering: "(1)") #set page( paper:"a4", number-align: right, margin: (x:2.54cm,y:4cm), header: [ #set text( size: 25pt, font: "KaiTi", ) #align( bottom + center, [ #strong[暨南大学本科实验报告专用纸(附页)] ] ) #line(start: (0pt,-5pt),end:(453pt,-5pt)) ] ) /*----*/ = 实现基于循环增长数组的`deque` \ #text( font:"KaiTi", size: 15pt )[ 课程名称#underline[#text(" 数据结构 ")]成绩评定#underline[#text(" ")]\ 实验项目名称#underline[#text(" ") 实现基于循环增长数组的`deque` #text(" ")]指导老师#underline[#text(" 干晓聪 ")]\ 实验项目编号#underline[#text(" 04 ")]实验项目类型#underline[#text(" 设计性 ")]实验地点#underline[#text(" 数学系机房 ")]\ 学生姓名#underline[#text(" 郭彦培 ")]学号#underline[#text(" 2022101149 ")]\ 学院#underline[#text(" 信息科学技术学院 ")]系#underline[#text(" 数学系 ")]专业#underline[#text(" 信息管理与信息系统 ")]\ 实验时间#underline[#text(" 2024年6月13日上午 ")]#text("~")#underline[#text(" 2024年7月13日中午 ")]\ ] #set heading( numbering: "1.1." 
) = 实验目的 实现基于循环增长数组的双向队列,保证在某一段重复添加弹出后实际内存占用规模符合理论占用,不会出现方向性泄漏。 = 实验环境 计算机:PC X64 操作系统:Windows + Ubuntu20.0LTS 编程语言:C++:GCC std20 IDE:Visual Studio Code = 程序原理 在类`deque`中维护了两个指针与两个循环增长数组。如果某一端的长度偏差值大于1,即某侧数据长小于同侧空白区域长度,则触发再分配。 可以证明,在数据规模极大时,再分配的均摊复杂度为$OO(1)$ #pagebreak() = 程序代码 == `deque.h` #sourcecode[```cpp // #define _PRIVATE_DEBUG #ifndef PRVLIBCPP_DEQUE_HPP #define PRVLIBCPP_DEQUE_HPP #include <map> #include <vector> #ifdef _PRIVATE_DEBUG #include <iostream> #endif namespace myDS { template<typename VALUE_TYPE> class deque{ protected: private: using coddinate = std::pair<std::int32_t,std::int32_t>; // < L : 0 , R : 1 > std::vector<std::vector<VALUE_TYPE>> _indexs; std::int32_t _size = 0; std::int32_t _L = 1; std::int32_t _R = -1; VALUE_TYPE & get(coddinate p) { return _indexs[p.first][p.second]; } coddinate index2cod(std::int32_t p) { if(p+_L > 0) return coddinate(1,p+_L-1); else return coddinate(0,-p-_L); } void _reDistribute() { if(_L * _R <= 0) return; if(abs(_L - _R) + 1 < std::min(abs(_L),abs(_R))) { if(_L > 0) { // < --- 0 : 0 --- L -- R --- > std::vector<VALUE_TYPE> N; for(int i = _L-1;i <= _R;i ++) N.push_back(_indexs[1][i]); _indexs[1] = N; _L = 1; _R = N.size() - 1; } else { // < --- L(<0) -- R(<0) --- 0 : 0 --- > std::vector<VALUE_TYPE> N; for(int i = -_R-1;i <= -_L;i ++) N.push_back(_indexs[0][i]); _indexs[0] = N; _L = -N.size()+1; _R = -1; } } else return; } public: deque(){ _indexs.push_back(std::vector<VALUE_TYPE>()); _indexs.push_back(std::vector<VALUE_TYPE>()); } void push_back(VALUE_TYPE t) { _R ++; if(_R >= 0) { _indexs[1].push_back(t); } else { _indexs[0][_R-1] = t; _reDistribute(); } } void push_frount(VALUE_TYPE t) { _L --; if(_L <= 0) { _indexs[0].push_back(t); } else { _indexs[1][_L-1] = t; _reDistribute(); } } VALUE_TYPE pop_back() { if(!this->size()) throw std::out_of_range("Pop from empty deque"); VALUE_TYPE t ; if(_R >= 0) { t = _indexs[1].back(); _indexs[1].pop_back(); _R --; } else { t = _indexs[0][-_R-1]; _R --; _reDistribute(); 
} return t; } VALUE_TYPE pop_frount() { if(!this->size()) throw std::out_of_range("Pop from empty deque"); VALUE_TYPE t; if(_L <= 0) { t = _indexs[0].back(); _indexs[0].pop_back(); _L ++; } else { t = _indexs[1][_L-1]; _L ++; _reDistribute(); } return t; } void clear() { _indexs[0].clear(); _indexs[1].clear(); _L = 1; _R = -1; } std::int32_t size() { return _R - _L + 2; } #ifdef _PRIVATE_DEBUG void innerPrint() { std::cout << "L : " << _L << " R : " << _R << "\n"; std::cout << "L : "; for(auto x:_indexs[0]) std::cout << x << " "; std::cout << "\n"; std::cout << "R : "; for(auto x:_indexs[1]) std::cout << x << " "; std::cout << "\n"; } #endif // myDS::deque<VALUE_TYPE>::_iterator begin() { } // myDS::deque<VALUE_TYPE>::_iterator rbegin() { } // myDS::deque<VALUE_TYPE>::_iterator end() { } // myDS::deque<VALUE_TYPE>::_iterator rend() { } // myDS::deque<VALUE_TYPE>::_iterator get(std::int32_t p) { } VALUE_TYPE & operator[](std::int32_t p) { return get(index2cod(p)); } }; } #endif ```] == `_PRIV_TEST.cpp` #sourcecode[```cpp #define DS_TOBE_TEST deque #define _PRIVATE_DEBUG // #define __DETIL_DEBUG_OUTPUT #include "Dev\04\deque.h" #include <time.h> #include <iostream> #include <math.h> #include <vector> using namespace std; using TBT = myDS::deque<int>; void accuracyTest() {//结构正确性测试 TBT tc = TBT(); for(;;) { string op; cout << ">>>"; cin >> op; if(op == "clr") { //清空 tc.clear(); } else if(op == "q") //退出测试 { return; } else if(op == "pb")//push_back { int c; cin >> c; tc.push_back(c); } else if(op == "pf")//push_frount { int c; cin >> c; tc.push_frount(c); } else if(op == "ob")//pop_back { cout << tc.pop_back() << "\n"; } else if(op == "of")//pop_frount { cout << tc.pop_frount() << "\n"; } else if(op == "at")//随机访问 { int p; cin >> p; cout << tc[p] << "\n"; } else if(op == "at")//随机访问 { int p; cin >> p; cout << tc[p] << "\n"; } else if(op == "of")//pop_frount { } else if(op == "at")//随机访问 { int p; cin >> p; cout << tc[p] << "\n"; // } else if(op == "delEL")//删除所有等于某值元素 
// { // int p; // cin >> p; // cout << tc.erase(p) << "\n"; // } else if(op == "delPS")//删除某位置上的元素 // { // int p; // cin >> p; // cout << tc.erase(tc.get(p)) << "\n"; } else if(op == "iterF") //正序遍历 { tc.innerPrint(); cout << "Iter with index:\n"; for(int i = 0;i < tc.size();i ++) cout << tc[i] << " ";cout << "\n"; // cout << "Iter with begin end\n"; // for(auto x = tc.begin();x != tc.end();x ++) cout << (*x) << " ";cout << "\n"; // cout << "Iter with AUTO&&\n"; // for(auto x:tc) cout << x << " ";cout << "\n"; } else if(op == "iterB") //倒序遍历 { tc.innerPrint(); cout << "Iter with index:\n"; for(int i = 0;i < tc.size();i ++) cout << tc[tc.size()-1-i] << " ";cout << "\n"; // cout << "Iter with begin end\n"; // for(auto x = tc.rbegin();x != tc.rend();x ++) cout << (*x) << " ";cout << "\n"; // cout << "Iter with AUTO&&\n"; } else if(op == "mv")//单点修改 { int p; cin >> p; int tr; cin >> tr; tc[p] = tr; } else if(op == "") { } else { op.clear(); } } } void memLeakTest1() {//内存泄漏测试 TBT tc = TBT(); for(;;){ tc.push_back(1); tc.push_back(1); tc.push_back(1); tc.push_back(1); tc.clear(); } } void memLeakTest2() {//内存泄漏测试 TBT tc = TBT(); for(;;){ tc.push_back(1); tc.pop_frount(); } } void speedTest() { TBT tc = TBT(); int begin = clock(); int N = 1e8; for(int i = 0;i < sqrt(N/2);i ++) { for(int j = 0;j < sqrt(N/2);j ++) { tc.push_back(i); } for(int j = 0;j < sqrt(N/2);j ++) { tc.pop_frount(); } } cout << "myDS::deque push_back then pop_frount sqrt(5000000) elements for sqrt(5000000) times cost:" << clock() - begin << "ms\n"; std::vector<int> tmp; begin = clock(); for(int i = 0;i < N;i ++) { tmp.push_back(i); } cout << "std::vector push_back 10000000 elements cost:" << clock() - begin << "ms\n"; system("pause"); } signed main() { // accuracyTest(); // memLeakTest1(); // memLeakTest2(); speedTest(); } ```] = 测试数据与运行结果 运行上述`_PRIV_TEST.cpp`测试代码中的正确性测试模块,得到以下内容: ``` pb 2 pb 3 pb 4 pf 1 pf 0 iterF pb 5 pb 6 of of of iterF of iterF of iterF >>>pb 2 >>>pb 3 >>>pb 4 >>>pf 1 >>>pf 0 
>>>iterF L : -1 R : 2 L : 1 0 R : 2 3 4 Iter with index: 0 1 2 3 4 >>> pb 5 >>>pb 6 >>>of 0 >>>of 1 >>>of 2 >>>iterF L : 2 R : 4 L : R : 2 3 4 5 6 Iter with index: 3 4 5 6 >>>of 3 >>>iterF L : 1 R : 2 L : R : 4 5 6 Iter with index: 4 5 6 >>>of 4 >>>iterF L : 1 R : 1 L : R : 5 6 Iter with index: 5 6 ``` 可以看出,代码运行结果与预期相符,可以认为代码正确性无误。 运行`_PRIV_TEST.cpp`中的内存测试模块与单向插入测试模块,在保持CPU高占用率运行一段时间后内存变化符合预期,可以认为代码内存安全性良好。 #image("03.png") 运行`_PRIV_TEST.cpp`中的性能测试模块,结果为 ``` myDS::deque push_back then pop_frount sqrt(5000000) elements for sqrt(5000000) times cost:3964ms std::vector push_back 10000000 elements cost:1528ms ``` 可以认为在每轮中单向插入的复杂度符合预期。
https://github.com/matthiasbeyer/ttt
https://raw.githubusercontent.com/matthiasbeyer/ttt/master/ttt.typ
typst
MIT License
// Workaround for the lack of an `std` scope. #let std-bibliography = bibliography #let std-smallcaps = smallcaps #let std-upper = upper // Overwrite the default `smallcaps` and `upper` functions with increased spacing between // characters. Default tracking is 0pt. #let smallcaps(body) = std-smallcaps(text(tracking: 0.6pt, body)) #let upper(body) = std-upper(text(tracking: 0.6pt, body)) // Colors used across the template. #let stroke-color = luma(200) #let fill-color = luma(250) // This function gets your whole document as its `body` and formats it as a simple // non-fiction paper. #let ttt( // The name of the author author: [Author's name], // The name of the class class: [Your class], // The subject of the document. subject: [The subject of today], // The paper size to use. paper-size: "a4", // Date that will be displayed on cover page. // The value needs to be of the 'datetime' type. // More info: https://typst.app/docs/reference/foundations/datetime/ // Example: datetime(year: 2024, month: 03, day: 17) date: none, // Format in which the date will be displayed on cover page. // More info: https://typst.app/docs/reference/foundations/datetime/#format date-format: "[year]-[month]-[day]", // The result of a call to the `bibliography` function or `none`. // Example: bibliography("refs.bib") // More info: https://typst.app/docs/reference/model/bibliography/ bibliography: none, // Whether to display a maroon circle next to external links. external-link-circle: true, // The content of your work. body, ) = { // Set the document's metadata. set document(title: class + [: ] + subject, author: author) // Set the body font. // Default is Linux Libertine at 11pt set text(font: ("Libertinus Serif", "Linux Libertine"), size: 12pt) // Set raw text font. // Default is Fira Mono at 8.8pt show raw: set text(font: ("Iosevka", "Fira Mono"), size: 9pt) // Configure page size and margins. set page( paper: paper-size, margin: (bottom: 1.75cm, top: 2.25cm), ) // Configure paragraph properties. 
// Default leading is 0.65em. set par(leading: 0.7em, justify: true, linebreaks: "optimized") // Default spacing is 1.2em. show par: set block(spacing: 1.35em) // Add vertical space after headings. show heading: it => { it v(3%, weak: true) } // Do not hyphenate headings. show heading: set text(hyphenate: false) // Show a small maroon circle next to external links. show link: it => { it // Workaround for ctheorems package so that its labels keep the default link styling. if external-link-circle and type(it.dest) != label { sym.wj h(1.6pt) sym.wj super(box(height: 3.8pt, circle(radius: 1.2pt, stroke: 0.7pt + rgb("#993333")))) } } // Indent nested entries in the outline. set outline(indent: auto) // Configure heading numbering. set heading(numbering: "1.") // Configure page numbering and footer. set page( footer: context { // Get current page number. let i = counter(page).at(here()).first() align(center)[#h(1.75em) #i] }, ) // Configure equation numbering. set math.equation(numbering: "(1)") // Display inline code in a small box that retains the correct baseline. show raw.where(block: false): { box.with( fill: fill-color.darken(2%), inset: (x: 3pt, y: 0pt), outset: (y: 3pt), radius: 2pt, ) } // Display block code with padding. show raw.where(block: true): block.with( inset: (x: 3pt), ) // Break large tables across pages. show figure.where(kind: table): set block(breakable: true) set table( // Increase the table cell's padding inset: 7pt, // default is 5pt stroke: (0.5pt + stroke-color) ) // Use smallcaps for table header row. 
show table.cell.where(y: 0): smallcaps set page( header-ascent: 0%, header: [ #set rect( inset: (x: 5pt, y: 0pt), outset: (x: 0pt, y: 5pt), fill: rgb("e4e5ea"), width: 100%, ) #grid( columns: (4fr, 5fr, 3fr), rows: (4fr, 5fr, 3fr), column-gutter: 10pt, rect[*Class*], rect[*Subject*], rect[#align(right)[*Date*]], rect[#class], rect[#subject], rect[#align(right)[#text(date.display(date-format))]], ) ] ) // Wrap `body` in curly braces so that it has its own context. This way show/set rules will only apply to body. { show heading.where(level: 1): it => { // If we start a new chapter, do so on a new page // ...except if we are on the first page if counter(page).at(here()).first() != 1 { pagebreak() } it } body } // Display bibliography. if bibliography != none { pagebreak() show std-bibliography: set text(0.85em) // Use default paragraph properties for bibliography. show std-bibliography: set par(leading: 0.65em, justify: false, linebreaks: auto) bibliography } } // This function formats its `body` (content) into a blockquote. #let blockquote(body) = { block( width: 100%, fill: fill-color, inset: 1em, stroke: (y: 0.5pt + stroke-color), body ) }
https://github.com/VisualFP/docs
https://raw.githubusercontent.com/VisualFP/docs/main/SA/project_documentation/content/meeting_minutes/meeting_minutes.typ
typst
#import "../../../style.typ": include_section #pagebreak() = Meeting Minutes The following section lists all "formal" project meetings and their discussed agenda. #include_section("project_documentation/content/meeting_minutes/week_01.typ", heading_increase: 1) #include_section("project_documentation/content/meeting_minutes/week_02.typ", heading_increase: 1) #include_section("project_documentation/content/meeting_minutes/week_03.typ", heading_increase: 1) #include_section("project_documentation/content/meeting_minutes/week_04.typ", heading_increase: 1) #include_section("project_documentation/content/meeting_minutes/week_05.typ", heading_increase: 1) #include_section("project_documentation/content/meeting_minutes/week_06.typ", heading_increase: 1) #include_section("project_documentation/content/meeting_minutes/week_07.typ", heading_increase: 1) #include_section("project_documentation/content/meeting_minutes/week_08.typ", heading_increase: 1) #include_section("project_documentation/content/meeting_minutes/week_10.typ", heading_increase: 1) #include_section("project_documentation/content/meeting_minutes/week_11.typ", heading_increase: 1) #include_section("project_documentation/content/meeting_minutes/week_12.typ", heading_increase: 1) #include_section("project_documentation/content/meeting_minutes/week_14.typ", heading_increase: 1)
https://github.com/benjamineeckh/kul-typst-template
https://raw.githubusercontent.com/benjamineeckh/kul-typst-template/main/src/core/component/bibliography.typ
typst
MIT License
// Bibliography #let insert-bibliography(bib, lang:"en") = { if bib != none { heading( level: 1, numbering: none, if lang == "en" { "Bibliography" } else { "Bibliografie" }, outlined: true ) show bibliography: set text(size: 0.9em) set bibliography(title: none) bib } }
https://github.com/ClazyChen/Table-Tennis-Rankings
https://raw.githubusercontent.com/ClazyChen/Table-Tennis-Rankings/main/history_CN/2015/WS-11.typ
typst
#set text(font: ("Courier New", "NSimSun")) #figure( caption: "Women's Singles (1 - 32)", table( columns: 4, [排名], [运动员], [国家/地区], [积分], [1], [丁宁], [CHN], [3387], [2], [朱雨玲], [MAC], [3345], [3], [刘诗雯], [CHN], [3321], [4], [李晓霞], [CHN], [3307], [5], [陈梦], [CHN], [3178], [6], [武杨], [CHN], [3171], [7], [冯天薇], [SGP], [3070], [8], [福原爱], [JPN], [3066], [9], [韩莹], [GER], [3050], [10], [木子], [CHN], [3038], [11], [李洁], [NED], [3006], [12], [石川佳纯], [JPN], [2988], [13], [冯亚兰], [CHN], [2973], [14], [李明顺], [PRK], [2962], [15], [文佳], [CHN], [2961], [16], [伊藤美诚], [JPN], [2925], [17], [徐孝元], [KOR], [2909], [18], [田志希], [KOR], [2908], [19], [李佼], [NED], [2902], [20], [单晓娜], [GER], [2891], [21], [若宫三纱子], [JPN], [2886], [22], [平野美宇], [JPN], [2868], [23], [姜华珺], [HKG], [2854], [24], [李倩], [POL], [2852], [25], [胡丽梅], [CHN], [2845], [26], [伊丽莎白 萨玛拉], [ROU], [2844], [27], [佩特丽莎 索尔佳], [GER], [2841], [28], [崔孝珠], [KOR], [2836], [29], [李芬], [SWE], [2833], [30], [MIKHAILOVA Polina], [RUS], [2830], [31], [郑怡静], [TPE], [2825], [32], [帖雅娜], [HKG], [2819], ) )#pagebreak() #set text(font: ("Courier New", "NSimSun")) #figure( caption: "Women's Singles (33 - 64)", table( columns: 4, [排名], [运动员], [国家/地区], [积分], [33], [李晓丹], [CHN], [2813], [34], [金景娥], [KOR], [2812], [35], [倪夏莲], [LUX], [2788], [36], [于梦雨], [SGP], [2767], [37], [#text(gray, "文炫晶")], [KOR], [2765], [38], [PESOTSKA Margaryta], [UKR], [2764], [39], [沈燕飞], [ESP], [2762], [40], [车晓曦], [CHN], [2762], [41], [石垣优香], [JPN], [2751], [42], [侯美玲], [TUR], [2749], [43], [李皓晴], [HKG], [2742], [44], [傅玉], [POR], [2741], [45], [杨晓欣], [MON], [2740], [46], [POTA Georgina], [HUN], [2739], [47], [索菲亚 波尔卡诺娃], [AUT], [2730], [48], [<NAME>], [ROU], [2722], [49], [梁夏银], [KOR], [2721], [50], [杜凯琹], [HKG], [2713], [51], [IVANCAN Irene], [GER], [2704], [52], [GU Ruochen], [CHN], [2694], [53], [NG Wing Nam], [HKG], [2689], [54], [金宋依], [PRK], [2685], [55], [吴佳多], [GER], [2678], [56], [GRZYBOWSKA-FRANC Katarzyna], [POL], [2674], [57], [森田美咲], [JPN], [2672], 
[58], [平野早矢香], [JPN], [2668], [59], [邵杰妮], [POR], [2666], [60], [佐藤瞳], [JPN], [2666], [61], [LANG Kristin], [GER], [2662], [62], [刘斐], [CHN], [2659], [63], [WINTER Sabine], [GER], [2657], [64], [维多利亚 帕芙洛维奇], [BLR], [2657], ) )#pagebreak() #set text(font: ("Courier New", "NSimSun")) #figure( caption: "Women's Singles (65 - 96)", table( columns: 4, [排名], [运动员], [国家/地区], [积分], [65], [RI Mi Gyong], [PRK], [2649], [66], [陈思羽], [TPE], [2648], [67], [刘佳], [AUT], [2648], [68], [LI Xue], [FRA], [2644], [69], [LIN Ye], [SGP], [2642], [70], [BILENKO Tetyana], [UKR], [2641], [71], [PARK Youngsook], [KOR], [2641], [72], [李时温], [KOR], [2641], [73], [#text(gray, "LEE Eunhee")], [KOR], [2630], [74], [加藤美优], [JPN], [2626], [75], [张蔷], [CHN], [2621], [76], [KIM Hye Song], [PRK], [2619], [77], [陈幸同], [CHN], [2618], [78], [#text(gray, "YOON Sunae")], [KOR], [2617], [79], [曾尖], [SGP], [2612], [80], [PASKAUSKIENE Ruta], [LTU], [2612], [81], [张安], [USA], [2609], [82], [#text(gray, "JIANG Yue")], [CHN], [2609], [83], [SAWETTABUT Suthasini], [THA], [2608], [84], [王曼昱], [CHN], [2606], [85], [EKHOLM Matilda], [SWE], [2606], [86], [ZHOU Yihan], [SGP], [2605], [87], [LI Chunli], [NZL], [2603], [88], [ABE Megumi], [JPN], [2602], [89], [#text(gray, "KIM Jong")], [PRK], [2594], [90], [LIU Xi], [CHN], [2594], [91], [KOMWONG Nanthana], [THA], [2592], [92], [CHENG Hsien-Tzu], [TPE], [2589], [93], [PROKHOROVA Yulia], [RUS], [2583], [94], [妮娜 米特兰姆], [GER], [2582], [95], [刘高阳], [CHN], [2578], [96], [CHOI Moonyoung], [KOR], [2578], ) )#pagebreak() #set text(font: ("Courier New", "NSimSun")) #figure( caption: "Women's Singles (97 - 128)", table( columns: 4, [排名], [运动员], [国家/地区], [积分], [97], [LEE Yearam], [KOR], [2576], [98], [#text(gray, "ZHU Chaohui")], [CHN], [2576], [99], [TIKHOMIROVA Anna], [RUS], [2571], [100], [森樱], [JPN], [2571], [101], [伯纳黛特 斯佐科斯], [ROU], [2569], [102], [CHA Hyo Sim], [PRK], [2568], [103], [MAEDA Miyu], [JPN], [2567], [104], [ODOROVA Eva], [SVK], [2566], [105], [RAMIREZ Sara], 
[ESP], [2565], [106], [PARTYKA Natalia], [POL], [2563], [107], [早田希娜], [JPN], [2561], [108], [DVORAK Galia], [ESP], [2558], [109], [VACENOVSKA Iveta], [CZE], [2556], [110], [#text(gray, "XIAN Yifang")], [FRA], [2555], [111], [NOSKOVA Yana], [RUS], [2552], [112], [MATSUZAWA Marina], [JPN], [2549], [113], [张墨], [CAN], [2549], [114], [#text(gray, "PARK Seonghye")], [KOR], [2546], [115], [浜本由惟], [JPN], [2545], [116], [HAPONOVA Hanna], [UKR], [2541], [117], [SONG Maeum], [KOR], [2540], [118], [DOLGIKH Maria], [RUS], [2539], [119], [STRBIKOVA Renata], [CZE], [2537], [120], [GRUNDISCH Carole], [FRA], [2535], [121], [HUANG Yi-Hua], [TPE], [2534], [122], [LAY Jian Fang], [AUS], [2525], [123], [LOVAS Petra], [HUN], [2525], [124], [KH<NAME>wan], [THA], [2523], [125], [#text(gray, "<NAME>")], [KOR], [2523], [126], [LEE I-Chen], [TPE], [2516], [127], [<NAME>], [SRB], [2515], [128], [SO Eka], [JPN], [2515], ) )
https://github.com/piepert/philodidaktik-hro-phf-ifp
https://raw.githubusercontent.com/piepert/philodidaktik-hro-phf-ifp/main/src/parts/ephid/ziele_und_aufgaben/ziele_aufgabenstellungen.typ
typst
Other
#import "@preview/cetz:0.2.2"
#import "/src/template.typ": *

== #ix("Goal formulation", "Ziel", "Zielformulierung") and #ix("task formulations", "Aufgabenstellung")

Every #ix("lesson plan", "Unterrichtsplanung") is preceded by a #ix("formulation of goals", "Zielformulierung"). Only goals make it possible to judge, after the lesson, whether the teaching was successful, and to #ix("evaluate", "Unterrichtsauswertung") the lesson. When #ix("setting goals", "Zielsetzung") for a lesson, a distinction is made between *#ix("coarse", "Grobziel")* and *#ix("fine goals", "Feinziel")*: in the #ix("lesson plan", "Unterrichtsplanung"), *#ix("coarse goals", "Grobziel")* describe the goal of an entire lesson, while *#ix("fine goals", "Feinziel")* describe the individual intentions that the tasks set during the lesson are meant to achieve. Fine goals are formulated with the following formula:

#align(center)[
  "The students (SuS)" + *development of a competency* + "by"/"through" + *#ix("EPA operator", "Operator")*
]

To plan a #ix("goal-oriented", "Zielorientierung") lesson, the following cycle is run through:

#align(center, cetz.canvas({
  import cetz.draw: *

  content((0, 0))[
    + Assessment of the *current state*
    + Definition of a *target state* through a *goal*
    + *Planning of measures* to reach the target state
    + *Implementation* of the measures
    + *Verification* of goal achievement
  ]

  let x1 = 1
  let x2 = 6
  let y1 = -1.05
  let y2 = 1
  let p = (x1, y1)
  let q = (x2, y1)
  let r = (x2, y2)
  let s = (x1, y2)

  line(p, q)
  line(q, r)
  line(r, s, mark: (end: ">"))
}))

Fine goals and task formulations are linked: for every task there must also be a #ix("fine goal", "Feinziel") in the lesson plan, and the task must take over the #ix("EPA operator", "Operator") of its fine goal, as in this example for a 10th-grade philosophy class:

#table(columns: 2, stroke: none)[
  #show: align.with(right)
  #show: strong
  Fine goal:
][
  The students acquire methodological competency by #underline[summarizing] Descartes' first meditation in the form of an abstract.
][
  #show: align.with(right)
  #show: strong
  Task:
][
  #underline[Summarize] Descartes' first meditation in the form of an abstract!
]

#task[Fine goals][
  Design three fine goals for an upper-secondary lesson on the topic "The wax example in Descartes' _Meditationes de prima philosophia_".
][
  + The students acquire perception and interpretation competency by retelling Descartes' wax example in their own words.
  + The students acquire presentation competency by creating a philosophical dialogue on Descartes' wax example.
  + The students acquire argumentation and judgment competency by carrying out a philosophical problem reflection on Descartes' wax example.
]

#task(key: "drei-aufgabenstellungen")[Task formulations][
  Design one task for each EPA requirement level (AFB) for a lower-secondary lesson on the topic "The problem of other minds in Descartes' _Meditationes de prima philosophia_".
][
  #grid(columns: 2, column-gutter: 0.5em, row-gutter: 1em,
    [AFB I:], [
      Retell the three stages of Descartes' methodical doubt in your own words!
    ],
    [AFB II:], [
      Examine a problem of knowledge gained from perception that Descartes raises!
    ],
    [AFB III:], [
      Discuss Descartes' position on the problem of other minds!
    ],
  )
]
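The fine-goal formula can also be captured as a small Typst helper. The following is a hypothetical sketch (the `fine-goal` function is not part of `/src/template.typ`): it assembles a fine goal from a competency and an operator phrase, following the pattern above.

```typst
// Hypothetical helper, not part of the template: builds a fine goal
// from its two variable parts, per the formula
// "SuS" + development of a competency + "by" + EPA operator.
#let fine-goal(competency, operator-phrase) = [
  The students acquire #competency by #operator-phrase.
]

// Usage: reproduces the fine goal from the example table above.
#fine-goal(
  [methodological competency],
  [#underline[summarizing] Descartes' first meditation in the form of an abstract],
)
```

Because the matching task must reuse the operator, a second helper taking the same operator phrase could be derived from it in the same way.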