repo | file | language | license | content
---|---|---|---|---
https://github.com/cunhapaulo/typst_styles | https://raw.githubusercontent.com/cunhapaulo/typst_styles/main/toolbox/paper.typ | typst | MIT License | #import "toolbox.typ": *
//====================================================================
// The project function defines how your document looks.
// It takes your content and some metadata and formats it.
//====================================================================
#let project(
university: "", // University's name
sigla: "", // University's acronym (sigla)
centre: "", // Center or Institute association
faculty: "", // Faculty's name
course: "", // Course name
authors: (), // Information about the authors
title: [], // Title of the academic essay, article, report, or paper
subtitle: [], // Optional subtitle
abstract: [], // Short abstract
keywords: [], // Keywords
citystate: "", // City and State
date: none, // Complete date of the academic work
year: "", // Only the year of the work
discipline: "", // Name of the Discipline or Course
professor: "", // Professor's name
body,
) = {
//==========================================================================
// IMPORTANT!!!
// GLOBAL DEFINITIONS
//==========================================================================
let doc_language = "pt"
let page_size = "a4"
let heading_style = "1.1"
let title_fontsize = 1.75em
let abstract_fontsize = 9pt
let body_fontsize = 11pt
let body_firstlineindent = 10mm
let body_interlinespace = 0.6em
let body_paragraphspacing = body_interlinespace
let body_hyphenate = true
let body_fonttype = "Palatino Linotype" // "TeX Gyre Pagella" //"Linux Libertine"
// "STIX Two Text", "TeX Gyre Pagella", "Liberation Serif", "Fira Sans", "Libre Baskerville", "New Computer Modern", "TeX Gyre Termes", "Atkinson Hyperlegible", "Linux Libertine", "Inria Serif", "Source Sans Pro", "Palatino", "Palatino Linotype"
// ---------------------------------------------------------------
// Lists Configurations
// ---------------------------------------------------------------
set enum(indent: 6pt, body-indent: 6pt, numbering: "1.a.i.")
set list( body-indent: 6pt)
//==========================================================================
// Set the document's basic properties.
//==========================================================================
set document(author: authors.map(a => a.name), title: title)
set page(
paper: page_size,
margin: (left: 30mm, right: 30mm, top: 30mm, bottom: 30mm),
numbering: "1",
number-align: center, // or: top + right
)
set heading(numbering: heading_style)
set text(size: body_fontsize, font: body_fonttype, lang: doc_language)
// Defines the separator character used in the captions of all elements.
show figure.where(
// kind: image,
): set figure.caption(position: top, separator: [ -- ])
//=======================
// Title row.
//=======================
align(center)[
#block(text(size: title_fontsize, weight: 600, title + subtitle))
#v(2em, weak: true)
#date
]
//=======================
// Author information.
//=======================
set par(justify: true, leading: 5.1pt)
pad(
top: 0.75em,
bottom: 0.75em,
x: 2em,
grid(
columns: (1fr,) * calc.min(3, authors.len()),
gutter: 1em,
..authors.map(author => align(center)[
*#author.name* \
#author.affiliation\
#text(size: body_fontsize - 1pt, font: "Lucida Sans Typewriter", author.email)
]),
),
)
//=======================
// Abstract
//=======================
block[
#set par(justify: true, leading: 4.0pt)
#set text(size: abstract_fontsize)
#text(weight: 900, "Resumo: ") #abstract
]
//=======================
// Keywords
//=======================
block[
#set par(justify: false, leading: 3.26pt)
#set text(size: abstract_fontsize)
#text(weight: 900, "Palavras-chave: ") #keywords
]
v(1%)
//=======================
// Main body.
//=======================
set par(justify: true, first-line-indent: body_firstlineindent, leading: body_interlinespace, linebreaks: "simple")
set text(size: body_fontsize)
show heading: set block(above: 1.4em, below: 0.8em)
body
} |
https://github.com/Jozott00/typst-LLNCS-template | https://raw.githubusercontent.com/Jozott00/typst-LLNCS-template/main/main.typ | typst | #import "template.typ": *
#let inst_princ = institute("Princeton University",
addr: "Princeton NJ 08544, USA"
)
#let inst_springer = institute("Spr<NAME>",
addr: "Tiergartenstr. 17, 69121 Heidelberg, Germany",
email: "<EMAIL>",
url: "http://www.springer.com/gp/computer-science/lncs"
)
#let inst_abc = institute("ABC Institute",
addr: "Rupert-Karls-University Heidelberg, Heidelberg, Germany",
email: "<EMAIL>"
)
#show: project.with(
title: "Contribution Title",
thanks: "Supported by organization x.",
authors: (
author("First Author",
insts: (inst_princ,),
oicd: "0000-1111-2222-3333",
),
author("Second Author",
insts: (inst_springer, inst_abc),
oicd: "1111-2222-3333-4444",
),
author("Third Author",
insts: (inst_abc,),
oicd: "2222-3333-4444-5555",
),
),
abstract: [
The abstract should briefly summarize the contents of the paper in
15--250 words.
],
keywords: ("First keyword", "Second keyword", "Another keyword"),
bibliography-file: "refs.bib"
)
= First Section
== A Subsection Sample
Please note that the first paragraph of a section or subsection is not indented. The first paragraph that follows a table, figure, equation etc. does not need an indent, either.
Subsequent paragraphs, however, are indented.
=== Sample Heading (Third Level)
Only two levels of headings should be numbered. Lower level headings remain unnumbered; they are formatted as run-in headings.
==== Sample Heading (Fourth Level)
The contribution should contain no more than
four levels of headings. @heading_styles gives a summary of all heading levels.
#figure(caption: [Table Captions should be placed above the tables])[
#table(
columns: 3,
align: left,
[Heading level], [Example], [Font size and style],
[Title (centered)], text(14pt, weight: "bold", "Lecture Notes"), [14 point, bold],
[1st-level heading], text(12pt, weight: "bold")[Introduction], [12 point, bold],
[2nd-level heading], text(10pt, weight: "bold")[Printing Area], [10 point, bold],
[3rd-level heading], [#text(10pt, weight: "bold")[Run-in Heading in Bold.] Text follows.], [10 point, bold],
[4th-level heading], [#text(10pt, style: "italic")[Lowest Level Heading] Text follows.], [10 point, italic],
)
] <heading_styles>
Displayed equations are centered and set on a separate line.
$ x + y = z $
Please try to avoid rasterized images for line-art diagrams and schemas. Whenever possible, use vector graphics instead (see @image_fig).
#figure(caption: [A figure caption is always placed below the illustration. Please note that short
captions are centered, while long ones are justified by the macro package automatically.],
image("fig1.svg")
) <image_fig>
#theorem[This is a sample theorem. The run-in heading is set in bold, while the following text appears in italics. `definition`, `lemma`, `proposition`, and `corollary` are styled the same way.]
#proof[Proofs, examples, and remarks have the initial word in italics, while the following text appears in normal font.]
For citations of references, we prefer the use of square brackets and consecutive numbers. Citations using labels or the author/year convention are also acceptable. The following bibliography provides a sample reference list with entries for journal articles @PAPER:1, a book @BOOK:2, and a homepage @WEBSITE:1. Multiple citations are grouped @BOOK:2@ARTICLE:1@BOOK:1.
|
|
https://github.com/ChHecker/diverential | https://raw.githubusercontent.com/ChHecker/diverential/main/README.md | markdown | MIT License | # Diverential
`diverential` is a [Typst](https://github.com/typst/typst) package simplifying the typesetting of differentials. It is the equivalent of LaTeX's `diffcoeff`, though not as mature.
## Overview
`diverential` allows normal, partial, compact, and separated derivatives with smart degree calculations.
```ts
#import "@preview/diverential:0.2.0": *
$ dv(f, x, deg: 2, eval: 0) $
$ dvp(f, x, y, eval: 0, evalsym: "[") $
$ dvpc(f, x) $
$ dvps(f, #([x], 2), #([y], [n]), #([z], [m]), eval: 0) $
```
<img src="examples/overview.jpg" width="150">
## `dv`
`dv` is an ordinary derivative. It takes the function as its first argument and the variable as its second. A degree can be specified with `deg`. The derivative can be evaluated at a point with `eval`, whose brackets can be changed with `evalsym`. `space` controls the space between the derivative and the evaluation bracket. Unless defined otherwise, no space is set by default, except for `|`, where a small gap is introduced.
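A minimal sketch of these options, assuming the `diverential` API as described above (version `0.2.0` from the overview):

```typ
#import "@preview/diverential:0.2.0": *

// Second derivative of f with respect to x:
$ dv(f, x, deg: 2) $

// Evaluated at x = 0, with square evaluation brackets:
$ dv(f, x, eval: 0, evalsym: "[") $
```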
### `dvs`
Same as `dv`, but separates the function from the derivative.
Example:
$$ \frac{\mathrm{d}}{\mathrm{d}x} f $$
### `dvc`
Same as `dv`, but uses a compact derivative.
Example:
$$ \mathrm{d}_x f $$
## `dvp`
`dvp` is a partial derivative. It takes the function as its first argument and the variables as the rest. For information on `eval`, `evalsym`, and `space`, read the description of `dv`.
The variable can be one of these options:
- plain variable, e.g. `x`
- list of variables, e.g. `x, y`
- list of variables with higher degrees (type `(content, integer)`), e.g. `x, #([y], 2)`
The degree is calculated smartly: if all variable degrees are integers, the total degree is their sum. If some are content, the integer degrees are summed arithmetically and added to the visual sum of the content degrees. Example: `#([x], n), #([y], 2), z` → $\frac{\partial^{n+3}}{\partial x^n\,\partial y^2\,\partial z}$
Specifying `deg` manually is always possible and might be required in more complicated cases.
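As a sketch of the smart degree calculation and a manual override (assuming the argument forms documented above):

```typ
#import "@preview/diverential:0.2.0": *

// Mixed degrees: the content degree n plus the integer degrees
// (2 for y, implicit 1 for z) should yield a total degree of n+3.
$ dvp(f, #([x], [n]), #([y], 2), z) $

// Specifying the total degree manually instead:
$ dvp(f, x, y, deg: [k]) $
```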
### `dvps`
Same as `dvp`, but separates the function from the derivative.
Example:
$$ \frac{\partial}{\partial x} f $$
### `dvpc`
Same as `dvp`, but uses a compact derivative.
Note: If supplying multiple variables, `deg` is ignored.
Example:
$$ \partial_x f $$
|
https://github.com/Kasci/LiturgicalBooks | https://raw.githubusercontent.com/Kasci/LiturgicalBooks/master/CSL_old/oktoich/Hlas7/0_Nedela.typ | typst | #let M = (
"HV": (
("","","Prijidíte, vozrádujemsja Hóspodevi, sokrušívšemu smérti deržávu, i prosvitívšemu čelovíčeskij ród, so bezplótnymi zovúšče: soďíteľu i Spáse náš, sláva tebí."),
("","","Prijidíte, vozrádujemsja Hóspodevi, sokrušívšemu smérti deržávu, i prosvitívšemu čelovíčeskij ród, so bezplótnymi zovúšče: soďíteľu i Spáse náš, sláva tebí."),
("","","Krest preterpíl jesí Spáse, i pohrebénije nás rádi, smértiju že, jáko Bóh smérť umertvíl jesí. Ťímže poklaňájemsja tridnévnomu voskreséniju tvojemú: Hóspodi, sláva tebí."),
("","","Apóstoli víďivše voskresénije soďíteľa, čuďáchusja, pojúšče chvalú ánheľskuju: sijá sláva jésť cérkóvnaja, sijé bohátstvo cárstvija, postradávyj nás rádi, Hóspodi, sláva tebí."),
("Dogmat","","Strášno i neizrečénno voístinnu, jéže o tebí soďíjannoje tájinstvo neskvérnaja: Slóvo bo vsích vinóvnoje, páče viný i slóva, svjatým Dúchom voplóščšejesja rodilá jesí, iz tebé plóť priímša, svojehó jestestvá neprelóžno prebývša. Sošédšymsja bo oboím samobýtňi, po ipostási jedínstvennoj, suhúb jestestvóm proischódit: vés Bóh, i vés čelovík, vo oboích soveršénstvijich, ďijstvíteľnymi svójstvy izjavľája: postradáv bo na kresťí plótiju, bezstrástnyj tójže prebýsť božéstvenňi: jáko čelovík umrýj, páki oživé jáko Bóh tridnéven, deržávu smértnuju nizložív, iz istľínija izbávľ čelovíčestvo. Tohó jáko izbáviteľa i Spása róda nášeho Bohomáti molí, nizposláti nám ščedrót jehó véliju mílosť."),
),
"S": (
("","","Voskrésl jesí iz hróba Spáse míra, i sovozdvíhl jesí čelovíki s plótiju tvojéju: Hóspodi, sláva tebí."),
("","Neradívše o vrémennych","Vostók sólnca mýslennaho Ďívo bylá jesí, na zápaďich jéže po nám bývšaho jestestvá: no jáko imúšči derznovénije, tohó molí Bohoródice vsepítaja, ot bezmírnych prehrišénij svobodíti dúšy náša."),
("","","Žézl iz kórene Jesséjeva Ďívo, jávi prozjablá jesí, nasaždénija prélesti iz kórene potrébľšaho: no jáko imúšči derznovénije, neprestánno molí, iskoreníti vsepítaja, sérdca mojehó strásti, i tohó vsadíti strách, i spastí mja."),
("","","Vratá Bóžija, presvjatája, ot vrát ádovych izbávi mjá, i púť pokajánija pokaží mi, ímže obrjášču vratá k žízni vvoďáščaja: zablúždšich nastávnice, ród vírnych čelovík sobľudáj, i spasáj dúšy náša."),
("Dogmat","","Iz tebé vsesvjatája Bohoródice Ďívo, neizrečénno rodísja Christós Bóh náš, voístinnu sýj Bóh prevíčnyj, i čelovík nóv. Óvo úbo sýj, prisnosúščen: óvo že nás rádi býv: spasájet bo sobóju, kojehóždo jestestvá svójstvo, óvim úbo sijája čudesý, óvim že uvirjája strasťmí, ťímže jedín i tójže, i umirájet jáko čelovík, i jáko Bóh vostajét. Jehóže molí čístaja, neiskusobráčnaja, spastísja dušám nášym."),
),
)
#let V = (
"HV": (
("","","Prijidíte, vozrádujemsja Hóspodevi, sokrušívšemu smérti deržávu, i prosvitívšemu čelovíčeskij ród, so bezplótnymi zovúšče: soďíteľu i Spáse náš, sláva tebí."),
("","","Krest preterpíl jesí Spáse, i pohrebénije nás rádi, smértiju že jáko Bóh smérť umertvíl jesí. Ťímže poklaňájemsja tridnévnomu voskreséniju tvojemú: Hóspodi, sláva tebí."),
("","","Apóstoli víďivše voskresénije soďíteľa, čuďáchusja, pojúšče chvalú ánheľskuju: sijá sláva jésť cerkóvnaja, sijé bohátstvo cárstvija: postradávyj nás rádi, Hóspodi, sláva tebí."),
("","","Ášče i ját býl jesí Christé, ot bezzakónnych mužéj: no tý mi jesí Bóh, i ne postyždúsja: bijén býl jesí po pleščéma, ne otmetájusja: na kresťí prihvoždén býl jesí, i ne tajú: vostánijem tvojím chvaľúsja: smérť bo tvojá živót mój, vsesíľne i čelovikoľúbče Hóspodi, sláva tebí."),
("","","Davídskoje proróčestvo ispolňája Christós, velíčestvije v Sijóňi svojé učenikóm otkrýl jésť, chvalíma pokazúja sebé, i slávima prísno, so Otcém že i Dúchom svjatým: pérvije úbo bezplótna, jáko Slóva, posľidí že nás rádi voploščéna, i umerščvléna jáko čelovíka, i voskrésša so vlástiju, jáko čelovikoľúbca."),
("","","Sošél jesí vo ád Christé, jákože voschoťíl jesí: isprovérhl jesí smérť, jáko Bóh i Vladýka: i voskrésl jesí tridnéven, sovoskrésív Adáma ot ádovych úz i tľínija, zovúšča i hlahóľušča: sláva voskreséniju tvojemú jedíne čelovikoľúbče."),
("","","Vo hróbi položén býl jesí jáko spjá Hóspodi: i voskrésl jesí tridnéven, jáko sílen v kríposti, sovoskrésív Adáma ot tlí smértnyja, jáko vsesílen."),
("","Jehdá ot dréva","Tý jesí ánhelov rádosť, tý jesí čelovíkov sláva, tý vírnych nadéžda, Vladýčice vsepítaja, predstáteľnica náša: i k tebí pribihájem vo vsjákoj núždi, jáko da moľbámi tvojími, ot stríl vrážijich páhuby dušetľínnyja, i vsjákich skorbéj izbávimsja vsí vospivájuščiji ťá Bohonevístnaja."),
("","","Tý mi Bohoródice, upovánije, tý mi predstátelnica i sťiná i pribížišče, Vladýčice prepítaja, jáže Bóha plótiju neiskusomúžno róždši míru Spasíteľa: tý moľbámi tvojími ot oderžáščich nás izbávi iskušénij, i bíd i obstojánij, jedína christiján pribížišče."),
("","","Ukrotí plóti mojejá stremlénija, uhasí plámeň strastéj mojích, otimí ot mené Bohoródice, znój lukávyj želánija mojehó, i nepreklónnyj mój obýčaj vozvratí Ďívo, ot vsjákaho bisóvskaho stremlénija: jáko da v tišiňí sérdca, i v bezstrástiji duší mojejá, pisnoslóvľu ťá vsepétuju."),
("Dogmat","","Máti úbo poználasja jesí, páče jestestvá, Bohoródice, prebylá že jesí Ďíva, páče slóva i rázuma: i čudesé roždestvá tvojehó skazáti jazýk ne móžet. Preslávnu bo súšču začátiju čístaja, nepostížen jésť óbraz roždénija: iďíže bo chóščet Bóh, pobiždájetsja jestestvá čín. ťímže ťá vsí, Máter Bóžiju vídušče, mólim ti sja priľížno, molí spastísja dušám nášym."),
),
"S": (
("","","Voskrésl jesí iz hróba Spáse míra, i sovozdvíhl jesí čelovíki s plótiju tvojéju: Hóspodi sláva tebí."),
("","","Voskrésšemu iz mértvych, i prosvitívšemu vsjá, prijidíte poklonímsja: ot ádova bo mučíteľstva nás svobodíl jésť, svojím tridnévnym voskresénijem živót nám darovávyj, i véliju mílosť."),
("","","Vo ád sošéd Christé, smérť pľiníl jesí, i tridnéven voskrés, nás sovoskresíl jesí, slávjaščich tvojé vsesíľnoje vostánije, Hóspodi čelovikoľúbče."),
("","","Strášen javílsja jesí Hóspodi, vo hróbi ležjá jáko spjá: voskrés že tridnéven jáko sílen, Adáma sovoskresíl jesí zovúšča: sláva voskreséniju tvojemú, jedíne čelovikoľúbče."),
("Dogmat","","Pod króv tvój Vladýčice, vsí zemnoródniji pribihájušče, vopijém tí: Bohoródice upovánije náše, izbávi ný ot bezmírnych prehrišénij, i spasí dúšy náša."),
),
"T": (
("","","Razrušíl jesí krestóm tvojím smérť, otvérzl jesí razbójniku ráj: mironósicam pláč preložíl jesí, i apóstolom propovídati poveľíl jesí: jáko voskrésl jesí Christé Bóže, dárujaj mírovi véliju mílosť."),
("Bohoródičen","","Jáko nášeho voskresénija sokróvišče, na ťá naďíjuščyjasja vsepítaja, ot róva i hlubiný prehrišénij vozvedí: tý bo povínnyja hrichú spaslá jesí, róždši spasénije náše: jáže préžde roždestvá Ďíva, i v roždeství Ďíva, i po roždeství páki prebyváješi Ďíva."),
),
)
#let P = (
"1": (
("","","Sokrušívšemu bráni mýšceju svojéju, i vsádniki tristáty pohruzívšemu, pojím jemú, jáko izbáviteľu nášemu Bóhu: jáko proslávisja."),
("","","Jáko ukrašénije pínijem, i písň Bohopodóbnu blahodarénija tí dáry prinósim: jéže rádujsja nýňi čístaja, jáko ot pečáli rádosť nám podalá jesí."),
("","","Mílosti tvojejá blahodáť, i pokróva kríposť, ne premolčím Ďívo prečístaja: jáko ot bíd ľútych nás spaslá jesí."),
("","","Ot razlíčnych izbávľšesja iskušénij i skorbéj, máternimi tvojími molítvami prečístaja, sohlásno i tépľi blahodarénija tebí vsí hlásy pojím."),
("","","Rjásnami ispeščréna jáko zlatými dobroďítelmi, i Dúcha blahodáťmi prečístaja, jáko nevísta Ótča ukrasívšisja, voístinnu javílasja jesí Máti Bóhu."),
),
"3": (
("","","V načáľi nebesá vsesíľnym slóvom tvojím utverždéj Hóspodi Spáse, i vseďíteľnym i Bóžijim Dúchom, vsjú sílu ích, na nedvížimim mjá kámeni ispovídanija tvojehó utverdí."),
("","","Rádosti božéstvennyja velíčije, jáko ot načála nám veséliju, blahodarénija hlásy prinósim priľížno, počitájušče jú jáko predstáteľnicu nášu."),
("","","Íže tobóju izbávľšesja ot bíd, i tebé rádi rádosť polučívše, Máti beznevístnaja, jáko podáteľnicu blahúju, i zastúpnicu blahochváľnu vsí proslavľájem."),
("","","Izminénije prehrišénij i napástej, božéstvennoju tvojéju molítvoju prijémľušče, Máti Christá Bóha, jáko vinú blahích vospivájem ťá vírno blahodárnymi hlásy."),
("","","Jáže rádosti istóčnik bezsmértnyja, tóčit strují prísno, i spasájet vsích samá: jáko prisnožíznennoje nám blahodajánije, Máti Christá Bóha."),
),
"4": (
("","","Pokrýla jésť nebesá Christé Bóže, smotrénijem tvojím, dobroďítel neizrečénnyja múdrosti tvojejá čelovikoľúbče."),
("","","Dostójnu tí písň rádovanija čístaja Ďívo, veseľáščesja prinósim, izbávľšesja ot bíd tvojími molítvami."),
("","","Blahodárstvenno tebí Ďívo čístaja, dušévnyma rukáma vnósim pínije, božéstvennymi písňmi ihrájušče, ot pečáli izbávľšesja mnóhija."),
("","","Vozdvihóša na ný bidý mnóhi strastníji hrisí: no božéstvennym tvojím pokróvom čístaja, nás izbávi."),
("","","Blažéni voístinnu čtúščiji ťá, Bohoródice prečístaja, jáko tobóju hrichá i pečáli izbávichomsja."),
),
"5": (
("","","Nóšč nesvitlá nevírnym Christé, vírnym že prosviščénije v sládosti slovés tvojích: sehó rádi k tebí útreňuju, i vospiváju tvojé Božestvó."),
("","","Hrichá potrebíteľa Christá Ďívo rodilá jesí, ímže ot napástej i boľíznej spasésja mír: ťímže rádujsja, i mý vopijém tí, izbávľšijisja pečálej."),
("","","Objémšesja razlíčnymi napásťmi čístaja Vladýčice, i pečáliju i skórbiju, i ľútym obstojánijem, otčájavšesja mý vesélija, nadéždu ťá obritóchom."),
("","","Jáko spasénija chranílišče nám rabóm tvojím čístaja, othoňáješi napásti, i sobľudáješi nevredímy. Ťímže pričástnicy mnóhich tvojích bláh, blahodarím ťá písňmi."),
("","","Bláhodarím ťá izbávľšesja tobóju hrichóv mnóhich, i némoščej i boľíznej i nedúh ľútych prečístaja Vladýčice: nadéžda bo jesí tverdá tvojím vírnym rabóm."),
),
"6": (
("","","Vo hlubinú hrichóvnuju vpádsja bláže, jáko Jóna ot kíta vopijú ti: vozvedí ot tlí živót mój, i spasí mja čelovikoľúbče."),
("","","Ánheľstiji jazýcy dostójno tvojá chvalý vospíti ne móhut čístaja: mý že nýňi rábski vzémšesja, rádovanije Havriílovo prinósim tí."),
("","","Vo hlubinú pečáli i obstojánija hrichí nášimi vpádše, izbávichomsja tobóju ot núždy i napástej, Ďívo Bohoródice čístaja."),
("","","Pod dólhom soďílasja vés mír, čístaja, blahodaríti, i chvalíti, i sláviti blahodáť tvojú blahočéstno: tobóju bo bíd i pečálej izbávleni býchom."),
("","","V noščí i vo dní, jávi i vtáj, pod tvój pribihájem pokróv, prečístaja Ďívo, íže ťá vírno slavoslóvjaščiji."),
),
"S": (
("","","Hóspodi, mý jesmý ľúdije tvoí, i óvcy pážiti tvojejá, zablúždšich jáko pástyr obratí nás, tléju rastočénych soberí nás: pomíluj stádo tvojé, umilosérdisja na ľúdi tvojá, molítvami Bohoródicy, jedíne bezhríšnyj."),
),
"7": (
("","","V péšč óhnennuju vvérženi prepodóbniji ótrocy, óhň v rósu preložíša, vospivánijem síce vopijúšče: blahoslovén jesí Hóspodi Bóže otéc nášich."),
("","","Rádovanije blahodárstvennoje tebí prinósim, Máti Bóžija: jáko voístinnu tobóju izbávleni vsjákaho zláho načinánija, i sohlásno tebí pojém: blahoslovénna jesí."),
("","","Vodvoríchomsja v véčer v pláči pečálej, i čájaniji zlých: no tvojím Bohoľípnym króvom Ďívo, obožéni bývše, obritóchom rádosť zaútra: tý bo spaslá jesí nás."),
("","","Jáko božéstvennoje pribížišče, mý pokróv tvój vsí sťažávše k Bóhu, v napástech i honénijich i hrisích, k tebí pribihájem, i tobóju izminénije prijémlem, prečístaja."),
("","","Bláhodáť molítvy tvojejá čístaja, propovídujem ustý i Dúchom, slávnaja: jáko tebé rádi, napásti i búri, i ťážkich pečálej, i hrichá strastéj vsí izbavľájemsja."),
),
"8": (
("","","Strášnaho cheruvímom, i čúdnaho serafímom, i míru tvorcá, svjaščénnicy i rabí, i dúsi právednych, pójte, blahoslovíte i prevoznosíte jehó vo víki."),
("","","Jáko izbávľšesja búri hrichóvnyja i strastéj, i napástej tvojími molítvami Bohoródice blahája, blahodárstvennym hlásom, rádujsja, vopijém tí: jáko tobóju ot pečáli v rádosť prešédše."),
("","","V boľíznech i napástech oderžímyja ne prézri blahája, no moľbú chudúju uslýšavši, svobodí nás ot skorbéj velíkich: da vírno tvojú molítvu čístaja, vospivájem."),
("","","Prehrišénij razrušéniju viná, nýňi vozstávi nás ot pečálej, napástej i strastéj čelovík, i iskušénija nepodóbna: tvojími že Bohoródice božéstvennymi moľbámi, izbávi nás ot ních preslávno."),
("","","Ščedróty tvojá na vsích vsehdá nizposylájutsja, Christé, blahodátiju voístinnu i moľbámi róždšija ťá: ot tebé bo prijémlem christijáne mílosť tvojú, Spáse mílostive."),
),
"9": (
("","","Páče jestestvá Máter, i po jestestvú Ďívu, jedínu v ženách blahoslovénnuju, písňmi vírniji Bohoródicu veličájem."),
("","","Rádovanije po dostojániju, so hlásom vosklicánija tebé Ďívo, nýňi prinósim so ánhelom Havrijílom, izbávľšesja ot razlíčnych iskušénij Bohoródice, molítvami tvojími."),
("","","Rádosť i vesélije, i rádovanije božéstvennoje, Ďívo neiskusomúžnaja, nám umnóžisja Bóžije: sé bo pláčuščijisja ľúťi, rádujemsja molítvami tvojími."),
("","","Žértvu chvalénija požrú, i jazýkom i hlásom, písň blahodárstvennuju vozdajú ti priľížno, Ďívo: jáko da tebí moľásja, v déň pečáli tobóju izbávľusja."),
("","","Srádujemsja prečístaja, božéstvennomu tvojemú roždéstvú blahočéstno: rádosť bo nám istočíla jesí ot napástej i pečálej. Ťímže tí blahodárno i mý písň soveršájem, vírno vospivájušče ťá."),
),
)
#let N = (
"1": (
("","","Mánijem tvojím na zemnýj óbraz preložísja, préžde udoborazlivájemoje vodnóje jestestvó Hóspodi: ťímže nemókrenno pišešéstvovav Izráiľ, pojét tebí písň pobídnuju ."),
("","","Otvérzi mí ustá úmnaja sérdca, i ustňí veščéstvennyja ko chvaľí tvojéj, trisvítloje jedíno Božestvó vsích, píti tebí písň svitoďíteľu, blahodárstvennuju."),
("","","Da preizlíšneje tvojejá bláhosti pokážeši, sozdalá jesí čelovíka Tróice bezmirnosíľnaja: tókmo óbraz brénnyj tvojehó načáľnaho soďíteľu, Hospóďstva."),
("","","Um beznačáľnyj, Slóvo soprisnosúščnoje rodívyj, i Dúcha sobeznačáľnaho prosijávyj, spodóbi jedínomu Bóhu po suščestvú soobráznu nás poklaňátisja triipostásnomu."),
("Bohoródičen","","Javílsja jesí pri kupiňí Moiséju Bóžij Slóve, jáko óhň čistíteľnyj, ne opaľája že otňúd, jéže ot Ďívy proobrazúja tvojé voploščénije, ímže čelovíki voobrazíl jesí."),
),
"3": (
("","","V načáľi nebesá, vsesíľnym slóvom tvojím utverždéj Hóspodi Spáse, i vseďíteľnym i Bóžijim Dúchom vsjú sílu ích: na nedvížimim mjá kámeni ispovídanija tvojehó utverdí."),
("","","Íže jedínstvennaho i trisijánnaho, i vseďíteľnaho ťá Vladýku vospivájušče, hrichóv i iskušénij prósim izbavlénija bezmirnosíľnyj Bóže: da ne úbo prézriši, víroju tvojú bláhosť slávjaščich."),
("","","Ótrasl javísja ot Otcá, jáko ot kórene beznačáľna Bóh Slóvo, i ravnomóščen so sráslenym i božéstvennym Dúchom: i sehó rádi vírniji, Tróičnoje lícy jedíno Hospóďstvo slávim."),
("","","Ravnoslávno i srásleno, triipostásnoje jestestvó nerazďíľňi i razďíľňi, i jedinonačáľnuju Tróicu vsí ťa slavoslóvim vírniji: i poklaňájuščesja prósim sohrišénij proščénija."),
("Bohoródičen","","Neizménno čelovíkom upodóblsja po vsemú Bóžij Slóve, ot čístyja otrokovícy prošél jesí jávi: i vsím pokazál jesí Bohonačálije trisijánnoje, i jedínstvennoje suščestvóm neizménnych ipostásej."),
),
"S1": (
("","Íže mené rádi","Sohrišívšyja pomíluj Tróice svjatája, rabý tvojá: i prijimí kájuščyjasja tebí blahoutróbne, i proščénija spodóbi."),
("Bohoródičen","","Ozlóblennyja dúšy náša hrichmí, ublaží vsečístaja Bohoródice, i izbávi ot prehrišénij tebé pojúščyja Bohonevísto."),
),
"4": (
("","","Ótča ňídra ne ostávľ, i sošéd na zémľu Christé Bóže, tájnu uslýšach smotrénija tvojehó, i proslávich ťá jedíne čelovikoľúbče."),
("","","Soderžíteľnaja trisvítlaja jedínice, Bohonačáľnaja i spasíteľnaja vsích, tvojá pivcý nýňi ohradí, i spasí ot skórbi, i strastéj, i vsjákaho ozloblénija."),
("","","Naznámenateľnych rečénij, tvojehó nepostižímaho trisvítlaho Božestvá, nedoumivájušče vospivájem ťá čelovikoľúbče Hóspodi, i proslavľájem tvojú sílu."),
("","","Na zemlí jáko na nebesích so bezplótnymi líki, jedínice i Tróice, nerazďíľno ťá razďiľájem, i ľubóviju slávim, jáko súščimi vsími vladýčestvujuščuju."),
("Bohoródičen","","Otéčskija slávy ne otstúpľ, k nášej chúdosti snizšél jesí vóleju, voplóščsja presúščestvennyj: i voznésl jesí k božéstvenňij slávi, jáko blahoutróben."),
),
"5": (
("","","Nóšč nesvitlá nevírnym Christé, vírnym že prosviščénije v sládosti slovés tvojích: sehó rádi k tebí útreňuju, i vospiváju tvojé Božestvó."),
("","","Svitonačáľnoje jestestvó Tróice načertáňmi, i jedínice v sovíťi i slávi i čésti, utverdí nás v tvojéj ľubví."),
("","","Svitonačáľnoje jestestvó Tróice načertáňmi, i jedínice v sovíťi i slávi i čésti, utverdí nás v tvojéj ľubví."),
("","","Úm, i Slóvo, i Dúch, jedíno Bohonačáľnoje i trisólnečnoje jestestvó slávjašče, prósim izbávitisja iskušénij, i vsjákich skorbéj."),
("Bohoródičen","","Voobrážsja Bóžij Slóve, v jestestvó čelovíčeskoje, ot svjatýja Ďívy, Tróicu vo jedínice naučíl jesí píti, soobráznu i soprestóľnu."),
),
"6": (
("","","Plávajuščaho v molví žitéjskich popečénij, s korablém potopľájema hrichí, i dušetľínnomu zvírju primetájema, jáko Jóna Christé vopijú ti: iz smertonósnyja hlubiný vozvedí mja."),
("","","Jedinoslávnuju deržávu, i jedinonačáľnoje Bohonačálije, tremí ipostásmi súščeje, nepremínňi s drúh drúhom slávim, razlíčije imúšče tókmo, bytijém kójždo svójstva."),
("","","Jedinoslávnuju deržávu, i jedinonačáľnoje Bohonačálije, tremí ipostásmi súščeje, nepremínňi s drúh drúhom slávim, razlíčije imúšče tókmo, bytijém kójždo svójstva."),
("","","Úmniji ťá čínove ánheľskich krasót chváľat, trisólnečnoje Bohonačálije: s nímiže i mý brénnymi ustý, jáko tvoríteľnu vsích, jedínu pisnoslóvim, i vírno slávim."),
("Bohoródičen","","Nedoumínnym slóvom, Slóvo róždšejesja ot sólnca Otcá, íno sólnce, préžde vík, ot Ďívy naposľídok vozsijá: i jedínstvennaho tremí lícy, nedomýslimaho Bóha propovída."),
),
"S2": (
("","Ohňá svitľíjši","Tróice jedinosúščnaja, jedínice vsích triipostásnaja, pomíluj, jáže sozdalá jesí, bezsmértnaja: popaľájušči zlóby prehrišájuščich, i prosviščájušči serdcá pojúščich blahoutróbije tvojé, Bóže náš, sláva tebí."),
("Bohoródičen","","Ohňá svitľíjši, svíta ďíjstvenňijši mílosť blahodáti tvojejá Vladýčice, popaľájušči hrichí čelovíkov, i orošájušči mýsli chváľaščich velíčija tvojá, Bohoródice preneporóčnaja."),
),
"7": (
("","","V péšč óhnennuju vvérženi prepodóbniji ótrocy, óhň v rósu preložíša, vospivánijem síce vopijúšče: blahoslovén jesí Hóspodi Bóže otéc nášich."),
("","","Svitolúčnymi blistániji prísno sijája, trisvíte Bóže, jedínice nepristúpnaja i presúščnaja spasí, íže v ťá Vladýko, vírujuščyja blahočéstno, i poklaňájuščyjasja tebí."),
("","","Rečénijem božéstvennych prorók povinújuščesja, jedínaho ťá tóčiju Bóha vsích slávim v trijéch načertánijich, síce vopijúšče: blahoslovén jesí Hóspodi Bóže otéc nášich."),
("","","Pérstnymi ustnámi mý s neveščéstvennymi čínmi, písňmi Tróice svjatája, ťá pojém vo jedínstvi suščestvá, vopijúšče: blahoslovén jesí Hóspodi Bóže otéc nášich."),
("Bohoródičen","","Da íže sozdávyj Adáma, vozsozíždet páki, vsečístaja, iz tebé jávi vočelovíčisja, čelovíki obožív síce vopijúščyja: blahoslovén prečístaja, plód tvojehó čréva."),
),
"8": (
("","","Neopáľnaja ohňú v Sináji pričáščšajasja kupiná, Bóha javí medlenojazýčnomu i huhnívomu Moiséovi, i ótroki révnosť Bóžija trí nepreborímyja vo ohní pivcý pokazá: vsjá ďilá Hospódňa Hóspoda pójte, i prevoznosíte vo vsjá víki."),
("","","Sólnca lučámi trisijánnaho ozarítisja svitoďíteľnymi spodóbi, serdcám pivcév tvojích: i nýňi zríti dobrótu tvojú Tróice, jedínice, jáko móščno dáruj vsehdá, vsém víroju podóbnoju, tvojé velíčije pesnoslóvjaščym vo vsjá víki."),
("","","Deržíši vsjáčeskaja, Tróičnoje i jedínstvennoje Hóspodonačálije beznačáľnoje, i upravľáješi nébo i zémľu. Ťímže mjá ľubóviju tvojéju privlačíma prísno, sochraní píti tebí: vsjá ďilá Hospódňa Hóspoda pójte, i prevoznosíte vo vsjá víki."),
("","","Chrám mjá tvojejá trisvítnyja sotvorí zarí, blahoďíteľu čelovikoľúbče: i pričástija i priobščénija nepristúpna vrahóm nevídimym, i plotskím strastém Vladýko, pokaží, jedinonačáľnyj Bóže mój i Hóspodi slávy, pisnoslóviti ťá vo vsjá víki."),
("Bohoródičen","","Svít Bohonačáľnyj ot čréva tvojehó vozsijávyj, Bohomáti prečístaja, vés mír trisólnečnym svítom ozarí, i zémľu jákože druhóje nébo pokazá, pojúšču: vsjá ďilá Hospódňa Hóspoda pójte, i prevoznosíte vo vsjá víki."),
),
"9": (
("","","Máti Bóžija i Ďíva, róždšaja i ďívstvujuščaja páki, ne jestestvá ďílo, no Bóžija snizchoždénija: ťím jáko jedínu Bóžijich čudés spodóbľšujusja, ťá prísno veličájem."),
("","","Vysokoslóviti i píti ťá dostójno, jehóže v výšnich neprestánno Serafími vospivájut, ne vozmohájem brénniji: obáče jáko Vladýku vsích derzájušče, i čelovikoľúbňijšaho Bóha veličájem."),
("","","Ťilésnyja boľízni izbávi, i dušévnaho pristrástija pivcý tvojá, jedínstvennaja Tróice nerazďíľnaja, i sochranítisja nevreždénnym ot vsích žitéjskich iskušénij spodóbi."),
("","","Ravnosíľnaja, Bohonačáľnaja, trisvítnaja, vsesíľnaja deržávo, neizmínnaja dobróto súščestvennyja bláhosti, dážď prehrišénij razrišénije tvojím rabóm, i iskušénij, i strastéj izbávi."),
("Bohoródičen","","Úm i dúšu, i plotskóje smišénije prijém Bohoródice, ot tvojích ložésn prečístych, Bóh Slóvo poístinňi čelovík javísja: i božéstvennaho suščestvá óbščnika čelovíka jávi pokazál jésť."),
),
)
#let U = (
"T": (
("","","Razrušíl jesí krestóm tvojím smérť, otvérzl jesí razbójniku ráj: mironósicam pláč preložíl jesí, i apóstolom propovídati poveľíl jesí: jáko voskrésl jesí Christé Bóže, dárujaj mírovi véliju mílosť."),
("Bohoródičen","","Jáko nášeho voskresénija sokróvišče, na ťá naďíjuščyjasja vsepítaja, ot róva i hlubiný prehrišénij vozvedí: tý bo povínnyja hrichú spaslá jesí, róždši spasénije náše: jáže préžde roždestvá Ďíva, i v roždeství Ďíva, i po roždeství páki prebyváješi Ďíva."),
),
"S1": (
("","","Žízň vo hróbi vozležáše, i pečáť na kámeni nadležáše, jáko carjá spjášča vójini strežáchu Christá: i ánheli slávľachu jáko Bóha bezsmértna, žený že vzyváchu: voskrése Hospóď, podajá mírovi véliju mílosť."),
("","","Tridnévnym pohrebénijem tvojím pľinívyj smérť, i istľívša čelovíka živonósnym vostánijem tvojím voskresívyj Christé Bóže, jáko čelovikoľúbec, sláva tebí."),
("Bohoródičen","","Raspénšahosja nás rádi, i voskrésšaho Christá Bóha nášeho, i nizložívšaho smérti deržávu, neprestánno molí Bohoródice Ďívo: da spasét dúšy náša."),
),
"S2": (
("","","Zapečátanu hróbu, živót ot hróba vozsijál jesí Christé Bóže: i dvérem zakľučénnym, učenikóm predstál jesí vsích voskresénije, Dúch právyj ťími obnovľája nám, po velícij tvojéj mílosti."),
("","","Na hrób tečáchu žený, so slezámi míra nosjáščja: i vójinom strehúščym ťá vsích carjá, hlahólachu k sebí: któ otvalít nám kámeň? Voskrése velíka sovíta ánhel, poprávyj smérť: vsesíľne Hóspodi, sláva tebí."),
("Bohoródičen","","Rádujsja blahodátnaja Bohoródice Ďívo, pristánišče i predstáteľstvo róda čelovíčeskaho, iz tebé bo voplotísja izbáviteľ míra: jedína bo jesí Máti i Ďíva, prísno blahoslovéna i preproslávlena. Molí Christá Bóha mír darováti vséj vselénňij."),
),
"Y": (
("","","Íže náš zrák vosprijémyj, i preterpívyj krest plótski, spasí mja voskresénijem tvojím Christé Bóže, jáko čelovikoľúbec."),
),
"A1": (
("","","Pľín Siióň ot lésti obratív, i mené Spáse, oživí, izimája rabótnyja strásti."),
("","","V júh síjaj skórbi póstnyja so slézami, séj rádostnyja póžnet rukojáti prisnoživopitánija."),
("","","Svjatým Dúchom istóčnik božéstvennych sokróvišč, ot nehóže premúdrosť, rázum, strách: tomú chvalá i sláva, čésť i deržáva."),
("","","Svjatým Dúchom istóčnik božéstvennych sokróvišč, ot nehóže premúdrosť, rázum, strách: tomú chvalá i sláva, čésť i deržáva."),
),
"A2": (
("","","Ášče ne Hospóď sozíždet dóm dušévnyj, vsúje truždájemsja: rázvi bo tohó ni ďijánije, ni slóvo soveršájetsja."),
("","","Plodá črévna, svjatíji dúchodvížno prozjabájut otéčeskaja predánija, sýnopoložénija."),
("","","Svjatým Dúchom, vsjáčeskaja jéže býti ímut: préžde bo vsích Bóh, vsích Hospóďstvo, svít nepristúpen, živót vsích."),
("","","Svjatým Dúchom, vsjáčeskaja jéže býti ímut: préžde bo vsích Bóh, vsích Hospóďstvo, svít nepristúpen, živót vsích."),
),
"A3": (
("","","Bojáščijisja Hóspoda, putí životá obrítše, nýňi i prísno ublažájutsja slávoju netľínnoju."),
("","","Ókrest trapézy tvojejá, jáko stéblije víďa isčádija tvojá, rádujsja i veselísja, privoďá sijá Christóvi pastyrenačálniku."),
("","","Svjatým Dúchom hlubiná darovánij, bohátstvo slávy, sudéb hlubiná vélija: jedinosláven Otcú, i Sýnu, služím bo."),
("","","Svjatým Dúchom hlubiná darovánij, bohátstvo slávy, sudéb hlubiná vélija: jedinosláven Otcú, i Sýnu, služím bo."),
),
"P": (
("","","Voskresní Hóspodi Bóže mój, da voznesétsja ruká tvojá, ne zabúdi ubóhich tvojích do koncá."),
("","","Ispovímsja tebí Hóspodi, vsím sérdcem mojím."),
),
"K": (
"P1": (
"1": (
("","","Mánijem tvojím na zemnýj óbraz preložísja, préžde udoborazlivájemoje vodnóje jestestvó Hóspodi: ťímže nemókrenno pišešéstvovav Izráiľ, pojét tebí písň pobídnuju."),
("","","Osudísja smértnoje mučíteľstvo drévom, neprávednoju smértiju osuždénu tí Hóspodi: otňúduže kňáz ťmý tebí ne odoľív, právedno izhnán býsť."),
("","","Ád tebí priblížisja, i zubý ne vozmóh stérti ťílo tvojé, čeľusťmí sokrušísja. Otňúduže Spáse, boľízni razrúš smértnyja, voskrésl jesí tridnéven."),
("Bohoródičen","","Razrišíšasja boľízni pramátere Jévy: boľízni bo izbižávše, neiskusomúžno rodilá jesí. Otňúduže jávi Bohoródicu, prečístaja, vídušče ťá, vsí slávim."),
),
"2": (
("","","Póntom pokrý faraóna s kolesnícami, sokrušájaj bráni mýšceju vysókoju, pojím jemú, jáko proslávisja."),
("","","Dvá istóčnika nám na kresťí istočí Spás živonósnaja ot svojehó probodénnaho rebrá. Pojím jemú: jáko proslávisja."),
("","","Vo hrób vselívsja, i voskrés tridnéven, netľínije podadé, čájanije Christós smértnym. Pojím jemú: jáko proslávisja."),
("Bohoródičen","","Jedína Ďíva i po roždeství pokazálasja jesí: ziždíteľa bo mírovi voploščénna rodilá jesí: ťímže rádujsja, tebé vsí zovém."),
),
"3": (
("","","Mánijem tvojím na zemnýj óbraz preložísja, préžde udoborazlivájemoje vodnóje jestestvó Hóspodi: ťímže nemókrenno pišešéstvovav Izráiľ, pojét tebí písň pobídnuju."),
("","","Bézdnu róždšaja blahoutróbija Ďívo, dúšu mojú ozarí svitonósnym tvojím sijánijem, jáko da dostójno vospojú tvojích čudés bézdnu."),
("","","Strilóju hrichóvnoju ujázvleny nás Slóvo víďiv, jáko blahoďíteľ uščédri: otňúduže neskazánno sojedinísja vsečístaja, plótiju, íže iz tebé prebožéstvennyj."),
("","","Býsť smértiju játo jestestvó, tľínnoje že i mértvennoje čelovíčeskoje, Vladýčice: tý že žízň začénši, sijé ot istľínija k životú vozvelá jesí."),
),
),
"P3": (
"1": (
("","","V načáľi nebesá vsesíľnym Slóvom tvojím utverždéj Hóspodi Spáse, i vseďíteľnym i Bóžijim Dúchom vsjú sílu ích, na nedvížimim mjá kámeni ispovídanija tvojehó utverdí."),
("","","Tý vozšéd na drévo, o nás boľíznuješi vóleju blahoutróbne Spáse, i terpíši jázvu, primirénija chodátajstvennu, i spasénija vírnym: jéjuže tvojemú mílostive, vsí primiríchomsja Rodíteľu."),
("","","Tý mja očístiv ot jázvy, dušéju ujázvlennaho zmíjevym uhryzénijem Christé, i pokazál jesí svít, vo ťmí drévle mňí sležášču i vo tľíniji: krestóm bo vo ád sošéd, mené sovoskresíl jesí."),
("Bohoródičen","","Neiskusomúžnyja tvojejá Mátere moľbámi mír mírovi podážď, Spáse, i neskazánnyja tvojejá slávy slavoslóvjaščyja ťá spodóbi."),
),
"2": (
("","","Nebesá utverdívyj slóvom, i zemnóje osnovánije ukripívyj na vodách mnóhich, utverdí mój úm, vo vóľu tvojú čelovikoľúbče."),
("","","Íže na kresťí strásti preterpívyj, i razbójniku ráj otvérzyj, jáko blahoďíteľ i Bóh, utverdí mój úm v vóľu tvojú, jedíne čelovikoľúbče."),
("","","Voskresýj tridnéven iz hróba, i živót mírovi vozsijávyj, jáko žiznodávec i Bóh, utverdí mój úm v vóľu tvojú, jedíne čelovikoľúbče."),
("Bohoródičen","","Jáko Bóha bezsímenno začénši, i ot kľátvy Jévu izbávľši, Ďívo Máti Mariám, molí ot tebé voplóščšahosja Bóha, spastí stádo tvojé."),
),
"3": (
("","","V načáľi nebesá vsesíľnym Slóvom tvojím utverždéj Hóspodi Spáse, i vseďíteľnym i Bóžijim Dúchom vsjú sílu ích, na nedvížimim mjá kámeni ispovídanija tvojehó utverdí."),
("","","Zmíj popolzýj iz Jedéma, mené obožénija želánijem preľstív, vérže v zémľu: no íže mílostiv, i jestestvóm blahoutróben, uščédriv Bohosoďíla, vo črévo tvojé vséľsja, i podóben mňí býv Máti Ďívo."),
("","","Blahoslovén plód tvojehó čréva, Ďívo Bohoródice, vsích rádoste, rádosť bo vsemú míru rodilá jesí, i vesélije voístinnu razhoňájuščeje pečáľ hrichóvnuju, Bohonevísto."),
("","","Žízň víčnuju, i svít Bohorodíteľnice Ďívo, i mír rodilá jesí nám, drévnich čelovík bráň jáže ko Otcú i Bóhu, ukroťivájuščij, víroju i ispovídanijem blahodáti."),
),
),
"P4": (
"1": (
("","","Ótča ňídra ne ostávľ, i sošéd na zémľu Christé Bóže, tájnu uslýšach smotrénija tvojehó, i proslávich ťá jedíne čelovikoľúbče."),
("","","Svojá pleščí dáv, íže ot Ďívy voplóščsja, na rány, rabú prehrišívšu, bijén byvájet Vladýka nepovínen, razrišája mojá sohrišénija."),
("","","Predstojáv sudíšču sudíj zakonoprestúpnych, jáko osudímyj povínen byvájet, i zaušájetsja brénnoju rukóju, sozdávyj čelovíka jáko Bóh, i suďáj právedno zemlí."),
("Bohoródičen","","Jáko voístinnu Máti Bóžija, tvorcá tvojehó i Sýna molí, k spasíteľnomu napráviti mjá pristánišču vseneporóčnaja slávnaho jehó choťínija."),
),
"2": (
("","","Smotrjájaj prorók, v posľídňaja tvojé Christé prišéstvije, vopijáše: tvojú uslýšach Hóspodi sílu, jáko vsjá spastí pomázannyja tvojá prišél jesí."),
("","","Ne vídyj hrichá, i jehó rádi býv Hóspodi, jéže ne býl jesí voobražáješisja, prijém čúždeje, da spaséši mír, i ubijéši preľstív mučíteľa."),
("","","Na kresťí vozdvížen býl jesí, i práotca Adáma razrišíl jesí hrích jehóže rádi tvojú uslýšach sílu: jáko vsjá spastí pomázannyja tvojá prišél jesí."),
("Bohoródičen","","Tý roždéjsja ot Ďívy, umiráješi, oživľáješi že Adáma mýsliju zablúždšaho: íbo ubojásja smérť kríposti tvojejá, jáko vsjá spastí rastľívšyjasja prišél jesí."),
),
"3": (
("","","Ótča ňídra ne ostávľ, i sošéd na zémľu Christé Bóže, tájnu uslýšach smotrénija tvojehó, i proslávich ťá jedíne čelovikoľúbče."),
("","","Izbránnaja vsjá i dóbraja, jávľšisja Bóhu préžde sozdánija svítlostiju vsepítaja, svetolítijem tvojím pojúščyja ťá prosvití."),
("","","Bóha čelovíkom rodilá jesí čístaja, voploščénna ot čístych krovéj tvojích, izbavľájušča sohrišénij mnóhich, ľubóviju slávjaščyja i počitájuščyja ťá Máti Ďívo."),
("","","Svjaščennoďíjstvujet jestestvó slovésnoje, vozsijávšemu iz tebé vsepítaja, táinstvu neizrečénnomu roždestvá tvojehó naučívšejesja nýňi, preblažénnaja."),
),
),
"P5": (
"1": (
("","","Nóšč ne svitlá nevírnym Christé, vírnym že prosviščénije v sládosti slovés tvojích: sehó rádi k tebí útreňuju, i vospiváju tvojé Božestvó."),
("","","Za tvojá rabý prodajéšisja Christé, i po laníťi udarénije terpíši, svobóďi chodátajstvenno pojúščym: k tebí útreňuju, i vospiváju tvojé Božestvó."),
("","","Božéstvennoju tvojéju síloju Christé, némoščiju plotskóju krípkaho nizložíl jesí, i pobidíteľa mjá smérti, Spáse, voskresénijem pokazál jesí."),
("Bohoródičen","","Bóha rodilá jesí Máti čístaja, voploščénnaho iz tebé Bohoľípno vsepítaja: ponéže ne poznála jesí múžeska póla, no ot svjatáho raždáješi Dúcha."),
),
"2": (
("","","Hóspodi Bóže mój, ot noščí útrenevav tebí moľúsja: podážď mí ostavlénije prehrišénij mojích, i k svítu tvojích poveľínij púti mojá naprávi, moľúsja."),
("","","Jehdá so bezzakónnymi vminívsja, voznéslsja jesí na lóbňim, svitíla sokryváchusja, i zemľá kolebášesja, i cerkóvnaja svítlosť razdrásja, jevréjskoje javľájušči otpadénije."),
("","","Tebé razrušívšaho mučítelevu vsjú sílu krípostiju nepostižímaho tvojehó božestvá, i mértvyja tvojím voskresénijem vozdvíhšaho písňmi slávim."),
("Bohoródičen","","Máti carjá i Bóha vsepítaja Bohoródice, víroju i ľubóviju ťá písňmi voschvaľájuščym prísno, očiščénije prehrišénij tvojími moľbámi nizposlí."),
),
"3": (
("","","Nóšč ne svitlá nevírnym Christé, vírnym že prosviščénije v sládosti slovés tvojích: sehó rádi k tebí útreňuju, i vospiváju tvojé Božestvó."),
("","","Ľistvicu uzrív Jákov, k vysoťí utverždénu, óbrazu naučísja, neiskusobráčnaja tebé: tobóju bo Bóh čelovíkom priobščísja, vsečístaja Vladýčice."),
("","","Izbavlénije víčnoje tobóju Ďívo, nýňi obrítše usérdno zovém tí: jéže rádujsja Bohonevístnaja: i tvojím svítom vozrádovavšesja vsepítaja, písňmi ťá pojém."),
("","","Jedínu ťá ženích posreďí térnija krín Ďívo, obrít, čistotý blistánijem sviťáščusja, i svítom ďívstva vseneporóčnaja, nevístu vospriját."),
),
),
"P6": (
"1": (
("","","Plávajuščaho v molví žitéjskich popečénij, s korablém potopľájema hrichí, i dušetľínnomu zvírju primetájema, jáko Jóna Christé, vopijú ti: iz smertonósnyja hlubiný vozvedí mja."),
("","","Vospomináchu ťá zakľučénnyja vo áďi dúšy, i ostávľšyjasja právednych, i ot tebé spasénija moľáchusja: jéže krestóm Christé, pódal jesí preispódnim, prišéd jáko blahoutróben."),
("","","Ko oduševlénnomu tvojemú i nerukotvorénnomu chrámu, razrušénu bývšu stradáňmi, vozzríti páki lík apóstoľskij otčájasja: no páče nadéždy poklóňsja, voskrésša povsjúdu propovída."),
("Bohoródičen","","Neizrečénnaho roždestvá tvojehó vseneporóčnaho óbraz, Ďívo Bohonevístnaja, íže nás rádi, któ ot čelovík skazáti vozmóžet? Jáko Bóh neopísanňi, Slóvo sojedinívsja tebí, plóť iz tebé býsť."),
),
"2": (
("","","Jóna iz čréva ádova vopijáše: vozvedí ot tlí živót mój. Mý že tebí vopijém: Vsesíľne Spáse pomíluj nás."),
("","","Na kresťí voznéssja Spáse vóleju, vrážiju pľiníl jesí deržávu, na sím prihvozdív hrichóvnoje bláže, rukopisánije."),
("","","Iz mértvych voskrés Spáse vlástiju, sovozdvíhl jesí čelovíčeskij ród, živót i netľínije darovávyj nám, jáko čelovikoľúbec."),
("Bohoródičen","","Jehóže rodilá jesí Bohoródice neskazánno Bóha nášeho, moľášči ne prestáj, izbávitisja ot bíd pojúščym ťá, čístaja prisnoďívo."),
),
"3": (
("","","Plávajuščaho v molví žitéjskich popečénij, s korablém potopľájema hrichí, i dušetľínnomu zvírju primetájema, jáko Jóna Christé, vopijú ti: iz smertonósnyja hlubiný vozvedí mja."),
("","","Zakónniji ťá óbrazy, i proróčeskaja prorečénija jávi predvozviščáchu, choťáščuju rodíti blahoďíteľa čístaja vsejá tvári, mnohočástňi i mnohoobrázňi blahoďíjstvovavšaho vírno vospivájuščich ťá."),
("","","Ustránšahosja drévle navítom čelovikoubíjcy, Adáma pervozdánnaho rájskija božéstvennyja sládosti, neiskusobráčnaja páki vozvelá jesí, róždši íže ot prestuplénija nás izbávivšaho."),
("","","Íže choťínijem božéstvennym, soďíteľnoju že síloju vsé sostavléj ot ne súščich, iz čréva tvojehó čístaja, proizýde, i súščyja vo ťmí smértňij, Bohonačáľňijšimi mólnijami osijá."),
),
),
"K": (
("","","Ne ktomú deržáva smértnaja vozmóžet deržáti čelovíki: Christós bo sníde sokrušája i razorjája síly jejá. Svjazúem byvájet ád, prorócy sohlásno rádujutsja: predstá, hlahóľušče, spás súščym v véri, izydíte vírniji v voskresénije."),
("","","Vostrepetáša dóľi preispódňaja dnés, ád i smérť jedínaho ot Tróicy: zemľá pokolebásja, vrátnicy že ádovy víďivše ťá užasóšasja: vsjá že tvár so proróki rádujuščisja pojét tebí pobídnuju písň, izbáviteľu Bóhu nášemu, razrušívšemu nýňi smértnuju sílu. Da vosklíknem i vozopiím ko Adámu, i k súščym iz Adáma: drévo sehó páki vvedé, izydíte vírniji vo voskresénije."),
),
"P7": (
"1": (
("","","Péšč ótrocy ohnepáľnu drévle rosotočášču pokazáša, jedínaho Bóha vospivájušče, i hlahóľušče: prevoznosímyj otcév Bóh, i preproslávlen."),
("","","Drévom umerščvľájetsja Adám, vóleju preslušánije soďílav: poslušánijem že Christóvym páki obnovľájem jésť. Mené bo rádi raspinájetsja Sýn Bóžij, preproslávlennyj."),
("","","Tebé voskrésšaho Christé, iz hróba, tvár vsjá vospí: tý bo žízň súščym vo áďi procvíl jesí, mértvym voskresénije, íže vo ťmí, svít preproslávlennyj."),
("Bohoródičen","","Rádujsja dščí Adáma tľínnaho. Rádujsja jedína Bohonevísto. Rádujsja, jéjuže tľá izhnána býsť, jáže Bóha róždši: jehóže molí čístaja, spastísja vsím nám."),
),
"2": (
("","","V péšč óhnennuju vvérženi prepodóbniji ótrocy, óhň v rosú preložíša vospivánijem síce vopijúšče: blahoslovén jesí Hóspodi Bóže otéc nášych."),
("","","Íže na drévi krestňim hrichóvnoje žálo pritupív, i Adámova prestuplénija rukopisánije razrušív kopijém rebrá tvojehó, blahoslovén jesí Hóspodi Bóže otéc nášich."),
("","","Íže v rebró probodén býv, i kropléňmi króve božéstvennyja zémľu očístiv, krovmí idolobísija oskvernénnuju, blahoslovén jesí Hóspodi Bóže otéc nášich."),
("Bohoródičen","","Jéže préžde sólnca prosviščénija, vozsijávši mírovi Bohorodíteľnice Christá, ot ťmý izbávľšaho, i prosviščájušča vsjá Bohovíďinijem: blahoslovén jesí zovúščyja, Bóže otéc nášich."),
),
"3": (
("","","Péšč ótrocy ohnepáľnu drévle rosotočášču pokazáša, jedínaho Bóha vospivájušče, i hlahóľušče: prevoznosímyj otcév Bóh, i preproslávlen."),
("","","Preispeščrénu, pozlaščénu útvar ťá imúšču vozľubí sozdátel tvój Ďívo, i Hospóď: prevoznosímyj otcév Bóh i preproslávlen."),
("","","Očiščájetsja otrokovíce, úhľ drévle Isáia prijém: znamenáteľňi tvojé roždestvó víďiv, prevoznosímaho otcév Bóha, i preproslávlena."),
("","","Óbrazy znamenáteľnaja drévle božéstvennaho tvojehó roždestvá, božéstvenniji prorócy zrjášče, rádostno vospivájušče vzyváchu: prevoznosímyj otcév Bóh, i preproslávlen."),
),
),
"P8": (
"1": (
("","","Neopáľnaja ohňú v sináji pričáščšajasja kupiná, Bóha javí medlenojazýčnomu i huhnívomu Moiséovi, i ótroki révnosť Bóžija trí nepreborímyja vo ohní pivcý pokazá: vsjá ďilá Hospódňa Hóspoda pójte, i prevoznosíte vo vsjá víki."),
("","","Prečístyj áhnec slovésnyj za mír zaklán býv, prestávi jáže po zakónu prinosímaja, očístiv sehó kromí prehrišénij jáko Bóh, prísno zovúšča: vsjá ďilá Hospódňa Hóspoda pójte, i prevoznosíte vo vsjá víki."),
("","","Netľínna ne súšči préžde strásti, vosprijátaja ot sozdáteľa plóť náša, po strásti i voskrésniji neprikosnovénna tľíniju ustróisja, i smértnyja obnovľájet, zovúščyja: vsjá ďilá Hospódňa Hóspoda pójte, i prevoznosíte vo vsjá víki."),
("Bohoródičen","","Tvojé čistótnoje i vseneporóčnoje, Ďívo, skvérnoje i mérzskoje vselénnyja očísti, i bylá jesí nášeho primirénija k Bóhu viná, prečístaja: ťímže ťá Ďívo vsjá ďilá blahoslovím, i prevoznósim vo vsjá víki."),
),
"2": (
("","","Jedínaho beznačáľnaho carjá slávy, jehóže blahoslovját nebésnyja síly, i trepéščut číny ánheľstiji pójte svjaščénnicy, ľúdije prevoznosíte jehó vo víki."),
("","","Preterpívšaho strásti vóleju, i na kresťí prihvoždéna choťínijem, i razrúššaho síly ádovy, pójte svjaščénnicy, ľúdije prevoznosíte vo vsjá víki."),
("","","Uprazdnívšaho smérti deržávu, i ot hróba voskrésšaho so slávoju, i spásšaho čelovíčeskij ród, pójte svjaščénnicy, ľúdije prevoznosíte vo vsjá víki."),
("Bohoródičen","","Jedínaho blahoutróbnaho, prevéčnaho slóva, naposľídok ot Ďívy roždéna, i razríššaho drévňuju kľátvu, pójte svjaščénnicy, ľúdije prevoznosíte vo vsjá víki."),
),
"3": (
("","","Neopáľnaja ohňú v sináji pričáščšajasja kupiná, Bóha javí medlenojazýčnomu i huhnívomu Moiséovi, i ótroki révnosť Bóžija trí nepreborímyja vo ohní pivcý pokazá: vsjá ďilá Hospódňa Hóspoda pójte, i prevoznosíte vo vsjá víki."),
("","","Svítom roždestvá tvojehó, strannoľípno vselénnuju prosvitíla jesí Bohorodíteľnice, súšča bo voístinnu Bóha na objátijach nósiši tvojích, prosviščájušča vírnyja prísno zovúščyja: vsjá ďilá Hospódňa Hóspoda pójte, i prevoznosíte vo vsjá víki."),
("","","Pojém čístaja, blahočéstno tvojé črévo, Bóha vmistívšeje neskazánno voploščájema, dávšaho vsím vírnym Bohorazúmija prosviščénije, prísno zovúščym: vsjá ďilá Hospódňa Hóspoda pójte, i prevoznosíte vo vsjá víki."),
("","","Svíta tvojehó blistáňmi, tebé pojúščyja svitovídny soďílala jesí, svitorodíteľnice Bohoródice čístaja: svíta bo javílasja jesí selénije, ujasňájušči svítom zovúščyja: vsjá ďilá Hospódňa Hóspoda pójte, i prevoznosíte vo vsjá víki."),
),
),
"P9": (
"1": (
("","","Netľínija iskušénijem róždšaja, i vsechitrecú slóvu plóť vzaimodávšaja, Máti neiskusomúžnaja Ďívo Bohoródice: prijátelišče nesterpímaho, seló nevmistímaho ziždíteľa tvojehó, ťá veličájem."),
("","","Íže božestvú strásť prilahájuščiji, zaustítesja vsí čuždemúdrenniji: Hóspoda bo slávy plótiju raspjáta, ne raspjáta že jestestvóm božéstvennym, jáko vo dvojú jestestvú jedínaho veličájem."),
("","","Íže ťilesém vostániju nevírujuščiji, ko Christóvu šédše hróbu, naučítesja: jáko umerščvléna býsť, i voskrése páki plóť žiznodávca, vo uvirénije posľídňaho voskresénija, na néže upovájem."),
("Tróičen","","Ne Božéstv Tróicu, no ipostásej, nižé jedínicu líc, no božestvá čtúšče, ssicájem že sijú ďíľáščich: slivájem že páki, slijánije derzájuščich na sijú, júže veličájem."),
),
"2": (
("","","Máti Bóžija i Ďíva, róždšaja i ďivstvujuščaja páki, ne jestestvá ďílo, no bóžija snizchoždénija. Ťímže jáko jedínu, Bóžijich čudés spodóbľšujusja, ťá prísno veličájem."),
("","","Svít ot svíta, Ótčeje sijánije slávy bezľítno osijávyj, jákože vo ťmí čelovíčeskomu žitijú Christós vozsijá, i hoňáščuju prohná ťmú: jehóže neprestánno vírniji veličájem."),
("","","Strásti plotskíja, i kríposť Božestvá, vo Chrisťí víďašče, múdrstvujuščiji jedíno slóžnoje jestestvó, da posrámjatsja: tój bo jáko čelovík úbo umirájet, jákože vsehó soďíteľ vostajét."),
("","","Míro mértvym, živómu že pínije: slézy umirájuščym, životú že vsích písň žený prinesíte, íže vostánija propovídnik vopijáše, blahovistvúja Christóvo voskresénije."),
("Bohoródičen","","Bóha rázvi tebé inóho ne znáju, cérkov vopijét tí: ot nevírnych mjá jazýk nevístu svojú izbrávyj, dážď úbo Slóve, vírnym spasénije, róždšija ťá molítvami, jáko blahoutróben."),
),
"3": (
("","","Netľínija iskušénijem róždšaja, i vsechitrecú slóvu plóť vzaimodávšaja, Máti neiskusomúžnaja Ďívo Bohoródice: prijátelišče nesterpímaho, seló nevmistímaho ziždíteľa tvojehó, ťá veličájem."),
("","","Rádosti nám víčnyja chodátaica i vesélija javílasja jesí prisnoďívo otrokovíce, izbáviteľa róždši, ístinoju i Dúchom božéstvennym tohó čtúščich, jáko Bóha izbavľájuščaho."),
("","","Pojá Davíd, tvój práotec, prečístaja, ťá kovčéh imenújet svjatýni božéstvennyja, prejestéstvenňi Bóha vmistívšuju, vo otéčeskich siďáščaho ňídrich, jehóže neprestánno vírniji veličájem."),
("","","Jáko voístinnu prevýšši jesí vsejá tvári otrokovíce: ziždíteľa vo vsích ťilésňi nám rodilá jesí. Ťímže jáko Máti jedínaho Vladýki, nósiši protívu vsích načáľňi pobiždénije."),
),
),
),
"CH": (
("","","Voskrése Christós iz mértvych, razrúš smértnyja úzy: blahovistí zemlé, rádosť véliju, pójte nebesá Bóžiju slávu."),
("","","Voskresénije Christóvo víďivše, poklonímsja svjatómu Hóspodu Iisúsu: jedínomu bezhríšnomu."),
("","","Christóvu voskreséniju kláňajuščesja ne prestájem: tój bo spásl jésť nás ot bezzakónij nášich, svjatýj Hospóď Iisús, javléj voskresénije."),
("","","Čtó vozdámy Hóspodevi o vsích, jáže vozdadé nám? Nás rádi Bóh v čelovícich, za istľívšeje jestestvó Slóvo plóť býsť, i vselísja v ný, k neblahodárnym, blahoďíteľ: k pľínnikom, svobodíteľ: ko íže vo ťmí seďáščym, sólnce právdy: na kresťí, bezstrástnyj: vo áďi, svít: v smérti, živót: Voskresénije, pádšich rádi. K nemúže vozopijím: Bóže náš, sláva tebí."),
("","","Vratá ádova sokrušíl jesí Hóspodi, i smértnuju deržávu uprazdníl jesí krípkoju síloju tvojéju, i sovozdvíhl jesí mértvyja, íže ot víka vo ťmí spjáščyja, božéstvennym i slávnym voskresénijem tvojím, jáko cár vsích i Bóh vsesílen."),
("","","Prijidíte vozrádujemsja Hóspodevi, i vozveselímsja o voskreséniji jehó, jáko sovozdvíže mértvyja ot ádovych nerišímych úz: i darová mírovi jáko Bóh žízň víčnuju, i véliju mílosť."),
("","","Blistájajsja ánhel, na kámeni siďáše živoprijémnaho hróba, i ženám mironósicam bláhovistvováše, hlahóľa: voskrése Hospóď, jákože préžde rečé vám, vozvistíte učenikóm jehó, jáko predvarjájet vý v Haliléi: mírovi že podajét žízň víčnuju, i véliju mílosť."),
("","","Počtó nebrehóma sotvoríste kámene kraeuhóľnaho. O prebezzakónniji judéi? Sé tój jésť, jehóže položí Bóh v Sijóňi: íže iz kámene istočívyj v pustýni vódu, i nám istočájaj ot rébr svoích bezsmértije. Séj jésť kámeň, íže ot horý ďivíčeskija otsíksja bez choťínija múžeska: Sýn čelovíč hrjadýj na óblacich nebésnych, k vétchomu déňmi, jákože rečé Danijíl, i víčno jehó cárstvo."),
),
)
#let L = (
"B": (
("","","Krasén bí i dóbr v sňíď, íže mené umertvívyj plód: Christós jésť drévo živótnoje, ot nehóže jadýj ne umiráju, no vopijú s razbójnikom: pomjaní mja Hóspodi, vo cárstviji tvojém."),
("","","Na krest voznéssja ščédre, Adámovo rukopisánije drévňaho hrichá zahládil jesí, i spásl jesí ot prélesti vés ród čelovíčeskij. Ťímže vospivájem ťá blahoďíteľu Hóspodi."),
("","","Prihvozdíl jesí na kresťí ščédre, hrichí náša Christé, i tvojéju smértiju smérť umertvíl jesí vozdvíhnuvyj uméršyja iz mértvych: ťímže poklaňájemsja tvojemú svjatómu voskreséniju."),
("","","Izlijá jád zmíj v slúchi Jéviny inohdá: Christós že na drévi krestňim istočíl jésť mírovi žízni sládosť. Ťímže vzyvájem: pomjaní nás Hóspodi, vo cárstviji tvojém."),
("","","Vo hróbi jáko smérten položén býl jesí, životé vsích Christé: i verejí ádovy slomíl jesí: i voskrés vo slávi tridnéven jáko sílen, vsích prosvitíl jesí: sláva tvojemú vostániju."),
("","","Hospóď voskrés tridnéven iz mértvych, darová mír svój učenikóm, i sích blahoslovív poslá rék: vsjá privedíte vo cárstvije mojé."),
("Tróičen","","Svít otéc, svít Sýn i Slóvo, svít Dúch svjatýj: no jedín svít trí, jedín bo Bóh v trijéch úbo lícich, jedíňim že jesteství i načáľi, nesikóm i nesliján, sýj prevíčnyj."),
("Bohoródičen","","Rodilá jesí Sýna i Slóvo Ótčeje, plótiju nás rádi, jákože vísť Bohoródice, sám. Ťímže Ďívo Máti, obožéni bývše tobóju, rádujsja, tebí zovém, christijánom upovánije."),
),
"TKB": (
("","","Razrušíl jesí krestóm tvojím smérť, otvérzl jesí razbójniku ráj: mironósicam pláč preložíl jesí, i apóstolom propovídati poveľíl jesí: jáko voskrésl jesí Christé Bóže, dárujaj mírovi véliju mílosť."),
("","","Ne ktomú deržáva smértnaja vozmóžet deržáti čelovíki: Christós bo sníde sokrušája i razorjája síly jejá. Svjazúem byvájet ád, prorócy sohlásno rádujutsja: predstá, hlahóľušče, spás súščym v véri, izydíte vírniji v voskresénije."),
("Bohoródičen","","Jáko nášeho voskresénija sokróvišče, na ťá naďíjuščyjasja vsepítaja, ot róva i hlubiný prehrišénij vozvedí: tý bo povínnyja hrichú spaslá jesí, róždši spasénije náše: jáže préžde roždestvá Ďíva, i v roždeství Ďíva, i po roždeství páki prebyváješi Ďíva."),
),
"P": (
("","","Hospóď kríposť ľúdem svojím dást: Hospóď blahoslovít ľúdi svojá mírom:"),
("","","Prinesíte Hóspodevi Sýnove Bóžiji, prinesíte Hóspodevi sýny óvni. "),
("","","Bláho jésť ispovídatisja Hóspodevi, i péti ímeni tvojemú výšnij. "),
("","","Vozveščáti zaútra Mílosť tvojú, i ístinu tvojú na vsjáku nóšč."),
),
)
|
|
https://github.com/elteammate/typst-compiler | https://raw.githubusercontent.com/elteammate/typst-compiler/main/src/A-showcase.typ | typst | #let add(/*int*/x, /*int*/y) = x + y
2 + 2 = #add(2, 2)
|
|
https://github.com/marisbaier/Typst_eineTeXAlternative | https://raw.githubusercontent.com/marisbaier/Typst_eineTeXAlternative/main/HowToTemplate/main.typ | typst | #import "template.typ": *
#show: template.with(
  title: [This is a title]
)
= First Chapter
It continues here
|
|
https://github.com/typst/packages | https://raw.githubusercontent.com/typst/packages/main/packages/preview/numberingx/0.0.1/numberingx_impl.typ | typst | Apache License 2.0 | // SPDX-License-Identifier: CC0-1.0
/// Parse a style object into a descriptor
#let parse-style(s) = {
assert(type(s) == "dictionary", message: "Invalid style: " + repr(s) + ".")
assert("system" in s, message: repr(s) + " is missing a `system` key.")
let sys = s.system
assert(
sys in ("cyclic", "symbolic", "alphabetic", "fixed", "numeric", "additive")
or (type(sys) == "array" and sys.len() == 2
and sys.at(0) == "fixed" and type(sys.at(1)) == "integer"),
message: "Invalid system: " + repr(sys) + "."
)
let desc = (:)
desc.system = if type(sys) == "string" { sys } else { sys.at(0) }
if sys == "fixed" { desc.offset = 1 }
if type(sys) == "array" { desc.offset = int(sys.at(1)) }
desc.fallback = if "fallback" in s {
assert(
type(s.fallback) == "string",
message: "Invalid fallback value: " + repr(s.fallback) + "."
)
s.fallback
} else {
"decimal"
}
let (min, max) = if "range" in s {
assert(
type(s.range) == "array" and s.range.len() == 2
and s.range.all((n) => type(n) == "integer" or n == "inf"),
message: repr(s.range) + " is not a valid range."
)
s.range.map((x) => if x == "inf" { calc.inf } else { x })
} else if desc.system in ("cyclic", "numeric", "fixed") {
(min: -calc.inf, max: calc.inf)
} else if desc.system in ("alphabetic", "symbolic") {
(min: 1, max: calc.inf)
} else if desc.system == "additive" {
(min: 0, max: calc.inf)
}
desc.min = min
desc.max = max
let allow-negative = ("symbolic", "alphabetic", "numeric", "additive")
desc.negative = if desc.system in allow-negative {
if "negative" in s {
assert(
type(s.negative) in ("string", "content"),
message: "Invalid negative marker: " + repr(s.negative) + "."
)
s.negative
} else {
"-"
}
}
desc.symbols = if desc.system == "additive" {
assert(
"additive-symbols" in s,
message: repr(s) + " is missing an `additive-symbols` key."
)
let check-sym(sym) = (type(sym) == "array" and sym.len() == 2
and type(sym.at(0)) == "integer"
and type(sym.at(1)) in ("string", "content"))
assert(
type(s.additive-symbols) == "array" and s.additive-symbols.len() > 0
and s.additive-symbols.all(check-sym),
message: "Invalid symbols: " + repr(s.additive-symbols) + "."
)
s.additive-symbols
} else {
assert("symbols" in s, message: repr(s) + " is missing a `symbols` key.")
assert(
type(s.symbols) == "array" and s.symbols.len() > 0 and
s.symbols.all((sym) => type(sym) in ("string", "content")),
message: "Invalid symbols: " + repr(s.symbols) + "."
)
s.symbols
}
desc
}
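As a sketch of the descriptor shape this function produces, a separate document importing this file as `numberingx_impl.typ` could build a hypothetical custom style (the `bullets` name and its values are illustrative, not part of the package):

```typst
#import "numberingx_impl.typ": parse-style

// Hypothetical bullet style: one symbol, valid for counters 1-3,
// anything outside that range falls back to decimal.
#let bullets = parse-style((
  system: "symbolic",
  symbols: ("•",),
  range: (1, 3),
  fallback: "decimal",
))
// bullets is now (system: "symbolic", fallback: "decimal",
// min: 1, max: 3, negative: "-", symbols: ("•",))
```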
/// Apply function `f` to all values of `dict` without touching the keys
#let map-values(dict, f) = {
let d = (:)
for (k, v) in dict.pairs() {
d.insert(k, f(v))
}
d
}
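A minimal usage sketch of this helper, assuming the file is imported as `numberingx_impl.typ`:

```typst
#import "numberingx_impl.typ": map-values

// Keys survive untouched; only the values pass through the function.
#map-values((a: 1, b: 2), (v) => v * 10) // (a: 10, b: 20)
```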
#let builtin-styles = map-values(yaml("styles.yaml"), parse-style)
#builtin-styles.insert("1", builtin-styles.decimal)
#builtin-styles.insert("a", builtin-styles.lower-alpha)
#builtin-styles.insert("A", builtin-styles.upper-alpha)
#builtin-styles.insert("i", builtin-styles.lower-roman)
#builtin-styles.insert("I", builtin-styles.upper-roman)
#builtin-styles.insert("い", builtin-styles.hiragana-iroha)
#builtin-styles.insert("イ", builtin-styles.katakana-iroha)
#builtin-styles.insert("א", builtin-styles.hebrew)
#builtin-styles.insert("ㄱ", builtin-styles.korean-consonant)
#builtin-styles.insert("가", builtin-styles.korean-syllable)
#builtin-styles.insert("一", builtin-styles.cjk-decimal)
#builtin-styles.insert("*", parse-style((
system: "symbolic",
symbols: ("*", "†", "‡", "§", "¶", "‖"),
)))
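The `*` style above uses the `symbolic` system, which cycles through its symbol set and repeats each symbol once more per full pass. A sketch from a document importing this file as `numberingx_impl.typ`:

```typst
#import "numberingx_impl.typ": format-one, builtin-styles

#format-one(builtin-styles.at("*"), 1) // *
#format-one(builtin-styles.at("*"), 6) // ‖
#format-one(builtin-styles.at("*"), 7) // ** (second pass doubles the symbol)
```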
/// Get a style descriptor given an id.
#let get-descriptor(id, styles) = {
assert(
id in builtin-styles or id in styles,
message: "Unknown style: " + id + "."
)
if id in styles { styles.at(id) } else { builtin-styles.at(id) }
}
/// Parse a format string into a structured descriptor (ids not resolved).
#let parse-format-string(fmt) = {
let pieces = ()
let (acc, lex) = ("", fmt)
while lex.len() > 0 {
let i = lex.position(regex("[{}]"))
if i == none {
acc += lex
lex = ""
    } else if lex.at(i + 1, default: none) == lex.at(i) {
acc += lex.slice(0, i + 1)
lex = lex.slice(i + 2)
} else if lex.at(i) == "{" {
let j = lex.position("}")
assert(
j != none,
message: "Unclosed format rule in format string \"" + fmt + "\"."
)
pieces.push((prefix: acc + lex.slice(0, i), id: lex.slice(i + 1, j)))
acc = ""
lex = lex.slice(j + 1)
} else {
assert(
false,
message: "Unpaired closing brace in format string \"" + fmt + "\"."
)
}
}
(pieces: pieces, suffix: acc)
}
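Concretely, `{id}` marks a numbering rule and doubled braces escape a literal brace; for instance (assuming the file is imported as `numberingx_impl.typ`):

```typst
#import "numberingx_impl.typ": parse-format-string

// Two rules wrapped in literal parentheses:
// (pieces: ((prefix: "(", id: "upper-roman"),
//           (prefix: ".", id: "decimal")),
//  suffix: ")")
#parse-format-string("({upper-roman}.{decimal})")

// "{{" collapses into a literal brace in the suffix.
#parse-format-string("{{") // (pieces: (), suffix: "{")
```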
/// Parse a format string and resolve the style ids into descriptors
#let parse-format-desc(fmt, styles) = {
let (pieces, suffix) = parse-format-string(fmt)
pieces = pieces.map(((prefix, id)) => {
(prefix: prefix, style: get-descriptor(id, styles))
})
(pieces: pieces, suffix: suffix)
}
/// System specific formatting routines
#let format-functions = (
cyclic: (d, n) => {
d.symbols.at(calc.rem(n - 1, d.symbols.len()))
},
fixed: (d, n) => {
let (min, max) = (d.offset, d.offset + d.symbols.len() - 1)
if n < min or n > max { return false }
d.symbols.at(n - min)
},
symbolic: (d, n) => {
let sym = d.symbols.at(calc.rem(n - 1, d.symbols.len()))
let len = calc.ceil(n / d.symbols.len())
sym * len
},
alphabetic: (d, n) => {
let s = ""
let len = d.symbols.len()
while n != 0 {
n = n - 1
s = d.symbols.at(calc.rem(n, len)) + s
n = calc.quo(n, len)
}
s
},
numeric: (d, n) => {
if n == 0 { return d.symbols.at(0) }
let s = ""
let len = d.symbols.len()
while n != 0 {
s = d.symbols.at(calc.rem(n, len)) + s
n = calc.quo(n, len)
}
s
},
additive: (d, n) => {
let tups = d.symbols
if n == 0 {
let sym = tups.find((s) => s.at(0) == 0)
return if sym != none { sym.at(1) } else { false }
}
let s = ""
for (weight, sym) in tups {
if weight >= 1 and weight <= n {
s = s + sym * calc.quo(n, weight)
n = calc.rem(n, weight)
if n == 0 { return s }
}
}
if n == 0 { s } else { return false }
}
)
/// Format a number using a given style descriptor.
#let format-one(desc, n) = {
  assert(type(n) == "integer", message: "Only integers can be formatted.")
  // Resolve the fallback against the builtin styles; `get-descriptor`
  // requires a styles dictionary, which is not available at this point.
  let fallback() = format-one(get-descriptor(desc.fallback, (:)), n)
let m = if desc.negative != none { calc.abs(n) } else { n }
if m < desc.min or m > desc.max { return fallback() }
let s = format-functions.at(desc.system)(desc, m)
if s == false { return fallback() }
if n < 0 { desc.negative + s } else { s }
}
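For example, with the builtin `lower-alpha` descriptor (alphabetic system; the `-` negative marker is the parse-time default, assuming `styles.yaml` does not override it):

```typst
#import "numberingx_impl.typ": format-one, builtin-styles

#format-one(builtin-styles.lower-alpha, 28) // ab (bijective base 26)
#format-one(builtin-styles.lower-alpha, -3) // -c (marker prepended to the absolute value)
```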
#let do-format(desc, ..nums) = {
let nums = nums.pos()
let front = nums.slice(0, calc.min(nums.len(), desc.pieces.len()))
let back = if front.len() < nums.len() {
nums.slice(front.len())
} else { () }
for (i, n) in front.enumerate() {
let (prefix, style) = desc.pieces.at(i)
prefix
format-one(style, n)
}
let (prefix, style) = desc.pieces.last()
for n in back {
if prefix != "" { prefix } else { desc.suffix }
format-one(style, n)
}
desc.suffix
}
#let format(fmt, styles: (:), ..nums) = do-format(
  parse-format-desc(fmt, styles),
  ..nums
)
#let formatter(fmt, styles: (:)) = (..nums) => {
format(fmt, styles: styles, ..nums)
}
|
https://github.com/typst/packages | https://raw.githubusercontent.com/typst/packages/main/packages/preview/unichar/0.1.0/ucd/block-11300.typ | typst | Apache License 2.0 | #let data = (
("GRANTHA SIGN COMBINING ANUSVARA ABOVE", "Mn", 0),
("GRANTHA SIGN CANDRABINDU", "Mn", 0),
("GRANTHA SIGN ANUSVARA", "Mc", 0),
("GRANTHA SIGN VISARGA", "Mc", 0),
(),
("GRANTHA LETTER A", "Lo", 0),
("GRANTHA LETTER AA", "Lo", 0),
("GRANTHA LETTER I", "Lo", 0),
("GRANTHA LETTER II", "Lo", 0),
("GRANTHA LETTER U", "Lo", 0),
("GRANTHA LETTER UU", "Lo", 0),
("GRANTHA LETTER VOCALIC R", "Lo", 0),
("GRANTHA LETTER VOCALIC L", "Lo", 0),
(),
(),
("GRANTHA LETTER EE", "Lo", 0),
("GRANTHA LETTER AI", "Lo", 0),
(),
(),
("GRANTHA LETTER OO", "Lo", 0),
("GRANTHA LETTER AU", "Lo", 0),
("GRANTHA LETTER KA", "Lo", 0),
("GRANTHA LETTER KHA", "Lo", 0),
("GRANTHA LETTER GA", "Lo", 0),
("GRANTHA LETTER GHA", "Lo", 0),
("GRANTHA LETTER NGA", "Lo", 0),
("GRANTHA LETTER CA", "Lo", 0),
("GRANTHA LETTER CHA", "Lo", 0),
("GRANTHA LETTER JA", "Lo", 0),
("GRANTHA LETTER JHA", "Lo", 0),
("GRANTHA LETTER NYA", "Lo", 0),
("GRANTHA LETTER TTA", "Lo", 0),
("GRANTHA LETTER TTHA", "Lo", 0),
("GRANTHA LETTER DDA", "Lo", 0),
("GRANTHA LETTER DDHA", "Lo", 0),
("GRANTHA LETTER NNA", "Lo", 0),
("GRANTHA LETTER TA", "Lo", 0),
("GRANTHA LETTER THA", "Lo", 0),
("GRANTHA LETTER DA", "Lo", 0),
("GRANTHA LETTER DHA", "Lo", 0),
("GRANTHA LETTER NA", "Lo", 0),
(),
("GRANTHA LETTER PA", "Lo", 0),
("GRANTHA LETTER PHA", "Lo", 0),
("GRANTHA LETTER BA", "Lo", 0),
("GRANTHA LETTER BHA", "Lo", 0),
("GRANTHA LETTER MA", "Lo", 0),
("GRANTHA LETTER YA", "Lo", 0),
("GRANTHA LETTER RA", "Lo", 0),
(),
("GRANTHA LETTER LA", "Lo", 0),
("GRANTHA LETTER LLA", "Lo", 0),
(),
("GRANTHA LETTER VA", "Lo", 0),
("GRANTHA LETTER SHA", "Lo", 0),
("GRANTHA LETTER SSA", "Lo", 0),
("GRANTHA LETTER SA", "Lo", 0),
("GRANTHA LETTER HA", "Lo", 0),
(),
("COMBINING BINDU BELOW", "Mn", 7),
("GRANTHA SIGN NUKTA", "Mn", 7),
("GRANTHA SIGN AVAGRAHA", "Lo", 0),
("GRANTHA VOWEL SIGN AA", "Mc", 0),
("GRANTHA VOWEL SIGN I", "Mc", 0),
("GRANTHA VOWEL SIGN II", "Mn", 0),
("GRANTHA VOWEL SIGN U", "Mc", 0),
("GRANTHA VOWEL SIGN UU", "Mc", 0),
("GRANTHA VOWEL SIGN VOCALIC R", "Mc", 0),
("GRANTHA VOWEL SIGN VOCALIC RR", "Mc", 0),
(),
(),
("GRANTHA VOWEL SIGN EE", "Mc", 0),
("GRANTHA VOWEL SIGN AI", "Mc", 0),
(),
(),
("GRANTHA VOWEL SIGN OO", "Mc", 0),
("GRANTHA VOWEL SIGN AU", "Mc", 0),
("GRANTHA SIGN VIRAMA", "Mc", 9),
(),
(),
("GRANTHA OM", "Lo", 0),
(),
(),
(),
(),
(),
(),
("GRANTHA AU LENGTH MARK", "Mc", 0),
(),
(),
(),
(),
(),
("GRANTHA SIGN PLUTA", "Lo", 0),
("GRANTHA LETTER VEDIC ANUSVARA", "Lo", 0),
("GRANTHA LETTER VEDIC DOUBLE ANUSVARA", "Lo", 0),
("GRANTHA LETTER VOCALIC RR", "Lo", 0),
("GRANTHA LETTER VOCALIC LL", "Lo", 0),
("GRANTHA VOWEL SIGN VOCALIC L", "Mc", 0),
("GRANTHA VOWEL SIGN VOCALIC LL", "Mc", 0),
(),
(),
("COMBINING GRANTHA DIGIT ZERO", "Mn", 230),
("COMBINING GRANTHA DIGIT ONE", "Mn", 230),
("COMBINING GRANTHA DIGIT TWO", "Mn", 230),
("COMBINING GRANTHA DIGIT THREE", "Mn", 230),
("COMBINING GRANTHA DIGIT FOUR", "Mn", 230),
("COMBINING GRANTHA DIGIT FIVE", "Mn", 230),
("COMBINING GRANTHA DIGIT SIX", "Mn", 230),
(),
(),
(),
("COMBINING GRANTHA LETTER A", "Mn", 230),
("COMBINING GRANTHA LETTER KA", "Mn", 230),
("COMBINING GRANTHA LETTER NA", "Mn", 230),
("COMBINING GRANTHA LETTER VI", "Mn", 230),
("COMBINING GRANTHA LETTER PA", "Mn", 230),
)
|
https://github.com/7sDream/fonts-and-layout-zhCN | https://raw.githubusercontent.com/7sDream/fonts-and-layout-zhCN/master/chapters/09-layout/bidi-fail.typ | typst | Other | #import "/lib/draw.typ": *
#import "/template/lang.typ": arabic
#let start = (0, 100)
#let end = (1000, 600)
#let graph = with-unit((ux, uy) => {
// mesh(start, end, (100, 100))
let lbPoint = place(left + bottom, dx: -15 * ux, dy: 15 * ux, circle(radius: 15 * ux, fill: theme.main))
let box-stroke-thickness = 8 * ux
let box-stroke = box-stroke-thickness + gray
let eng = body => [
#set text(size: 260 * ux, top-edge: 220 * ux)
#lbPoint #body
]
let eng-text = "textin".codepoints()
txt(grid(
columns: eng-text.len(),
stroke: box-stroke,
column-gutter: (0pt, 0pt, 0pt, 80 * ux, 0pt),
..eng-text.map(eng),
), (50, 300), anchor: "lb")
let arrows = (start, widths, fill: theme.main) => {
let position = start
for width in widths {
let new-pos = (position.at(0) + width, position.at(1))
arrow(position, new-pos, stroke: 10 * ux + fill, head-scale: 1.5)
position = new-pos
}
}
let str-to-widths = s => s.map(it => measure(eng(it)).width / ux)
arrows((50, 200), str-to-widths(eng-text.slice(0, 4)))
arrows((600, 200), str-to-widths(eng-text.slice(4)))
let arabic-text = "العر"
let arabic-color = rgb(82, 148, 114).darken(10%).transparentize(10%)
let arabic-right = (960, 535)
txt(arabic(fill: arabic-color, size: 355 * ux, top-edge: "bounds")[#arabic-text], arabic-right, anchor: "rt", dx: 10)
let arabic-box-color = rgb(94, 166, 133)
let arabic-box-stroke = box-stroke-thickness + arabic-box-color
let widths = (70, 85, 142, 160)
let heights = (235, 235, 235, 315)
let position = arabic-right
for (width, height) in widths.zip(heights) {
rect(position, width: -width, height: height, stroke: arabic-box-stroke)
position = (position.at(0) - width, position.at(1))
}
arrows((arabic-right.at(0), 150), widths.map(it => -it), fill: green.darken(50%))
})
#canvas(end, start: start, width: 100%, graph)
|
https://github.com/DVDTSB/ctheorems | https://raw.githubusercontent.com/DVDTSB/ctheorems/main/0.1.0/assets/manual.typ | typst | #show raw.where(block: true): block.with(
fill: luma(250),
inset: 1em,
radius: 4pt,
width:100%,
stroke: luma(200),
)
= Getting Started
== Introduction
The `ctheorems` package provides `typst` functions that help create numbered theorems, lemmas, corollaries, etc. The package is designed to be easy to use and customize, and it is also very lightweight.
A theorem environment lets you wrap content together with automatically updating numbering information. Environments can:

- have their own counter (e.g. Theorem 1, Example 1, etc.)
- share a counter with other environments (e.g. Theorem 1, Lemma 2, etc.)
- be attached to another environment (e.g. Theorem 1, Corollary 1.1, etc.)
- be attached to headings
- have a fixed numbering depth (for instance, use only top-level heading numbers)
- be referenced elsewhere in the document
== Using `ctheorems`
First you need to install the package. Just download the `.zip` file from the GitHub repository and extract it to your `@local` folder. Then you can import it in your document:
```typst
#import "@local/ctheorems:0.1.0": *
```
Once it is accepted as an official package, you will be able to import it from the `@preview` repository instead:
```typst
#import "@preview/ctheorems:1.0.0": *
```
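
To make the features above concrete, here is a minimal sketch of defining and using an environment. Note that the constructor names `thmrules`, `thmbox`, and `thmplain` are assumptions about this package's API (they follow the upstream `ctheorems` conventions); adjust them to the names documented later in this manual.

```typst
#import "@local/ctheorems:0.1.0": *
#show: thmrules // assumed show rule that activates the environments

// Assumed constructors: a boxed "Theorem" with its own counter, and a
// plain "Corollary" attached to it (Theorem 1, Corollary 1.1, ...).
#let theorem = thmbox("theorem", "Theorem")
#let corollary = thmplain("corollary", "Corollary", base: "theorem")

#theorem("Euclid")[There are infinitely many primes.]
#corollary[There is no largest prime.]
```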
|
|
https://github.com/jamesrswift/ionio-illustrate | https://raw.githubusercontent.com/jamesrswift/ionio-illustrate/main/src/extras/content.typ | typst | MIT License | #import "@preview/cetz:0.1.2"
#let _prepare(self, ctx) = { return self}
#let _stroke(self, ctx) = {
cetz.draw.content(
anchor: self.anchors.at(0),
self.coordinates.at(0),
self.body
)
}
#let content(body, position, anchor: "center") = {
return ((
type: "raw",
body: body,
coordinates: (position,),
anchors: (anchor,),
plot-prepare: _prepare,
plot-stroke: _stroke,
),)
} |
https://github.com/hei-templates/hevs-typsttemplate-thesis | https://raw.githubusercontent.com/hei-templates/hevs-typsttemplate-thesis/main/00-templates/page-reportinfo.typ | typst | MIT License | //
// Description: Report info page for thesis template
// Author : <NAME>
//
#import "../00-templates/helpers.typ": *
#let page-reportinfo(
author: (),
date: none,
signature: none,
) = {
heading(numbering:none, outlined: false)[Information about this report]
v(2em)
[*Contact Information*]
tablex(
columns: (auto, auto),
stroke: none,
align: left + top,
rowspanx(3)[Author:], [#author.name],
[#author.degree Student],
[#author.affiliation],
    [Email:], [#link("mailto:" + author.email)[#author.email]],
)
v(5em)
[
*Declaration of honor*
I, undersigned, #author.name, hereby declare that the work submitted is the result of a personal work. I certify that I have not resorted to plagiarism or other forms of fraud. All sources of information used and the author quotes were clearly mentioned.
]
tablex(
stroke: none,
columns: (auto,auto),
align: left + horizon,
[Place, date:], [#author.place, #date],
[Signature:],
if signature != none {
      [#line(start: (0cm,2cm),length:5cm) #v(-0.4cm) #pad(x: 2.5em, image(signature, width:3cm))]
} else {
[#line(start: (0cm,2cm),length:7cm)]
},
)
} |
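// Example usage (a sketch — the author fields shown are exactly the ones
// this function reads; all values are placeholders):
//
// #import "../00-templates/page-reportinfo.typ": page-reportinfo
// #page-reportinfo(
//   author: (
//     name: "Jane Doe",
//     degree: "BSc",
//     affiliation: "HES-SO Valais-Wallis",
//     email: "jane.doe@example.com",
//     place: "Sion",
//   ),
//   date: "1 May 2024",
// )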
https://github.com/akrantz01/resume | https://raw.githubusercontent.com/akrantz01/resume/main/template/projects.typ | typst | MIT License | #import "common.typ": date-range, icon, section
#let entry(
title,
date,
description: none,
url: none,
details: (),
settings: (:),
) = {
let about = if description != none [ *#title*, #emph(description) ] else [ *#title* ]
let about = if url != none {
let text = if settings.at("full-links", default: false) {
url.href
} else {
url.href.trim("https://", at: start).split("/").slice(1).join("/")
}
[
#about
#box(
pad(
left: 0.5em,
link(url.href)[
#icon(url.type)
#text
],
),
)
]
} else {
about
}
set block(above: 0.7em, below: 0.75em)
box(width: 100%)[
#about
#h(1fr)
#date-range(date)
]
list(..details)
}
#let projects(title: "Projects", settings: (:), omit: (), ..entries) = {
section(title)
entries.pos().filter(((id, ..rest)) => id not in omit).map((
(id, title, date, ..rest),
) => entry(
title,
date,
settings: settings,
..rest,
)).join()
}
|
https://github.com/dyc3/senior-design | https://raw.githubusercontent.com/dyc3/senior-design/main/dev-plan/dev-plan.typ | typst | = Dev Plan
== Roles and Responsibilities
Development Lead: <NAME> \
Architect: <NAME> \
Developers: <NAME>, <NAME> \
Test Lead: <NAME> \
Testers: <NAME>, <NAME> \
Documentation: <NAME>, <NAME> \
System Admin: <NAME> \
User Advocate: <NAME>
== Method
- Software
- Languages: Rust
- Operating Systems: Linux
- Deployment method: Docker
- Hardware
- Deployed on Fly.io
- 1 CPU
- 256MB RAM
- Review Process
- 1 code review per pull request, required before merge
- All builds and tests must pass before merge
- Build Plan
- Builds will be triggered on every pull request, and on every merge to master
- Modification Request Process
- All modification requests must be submitted as a pull request to the spec repo
- All modification requests must be approved by at least one other developer
=== Backup Plan
Fly provides a mechanism to replay network requests to a different instance of the application. If the Load Balancer doesn't work as planned, some of the work can be salvaged by utilizing this feature instead.
== Virtual and Real Workspaces
All sprint planning and issue tracking will be done on GitHub, tracked in this project: https://github.com/users/dyc3/projects/4
The repos we will be working in:
- https://github.com/dyc3/opentogethertube
- https://github.com/dyc3/senior-design
== Communication Plan
- Sprint Planning: On Wednesday mornings, we will meet to discuss what we will be working on for the next sprint.
- Heartbeat Meetings: On Monday mornings, we will touch base and sync up on what we have been working on.
- Issue Meetings: If there are any specific issues that are blocking work, we will meet to discuss them and come up with a solution. These meetings will be held on an as-needed basis.
== Timeline and Milestones
Tentative Timeline:
- End of October: Prototype Test Harness with some test cases
- End of November: Many more test cases, and finding and filing bugs in the Balancer
- End of January: Finalize Test Harness, and start the process of polishing the Balancer
- End of April: Finalize Balancer, complete validation of the Balancer with production deployment
== Testing Plan
Unit tests will be written for all functions and methods where it makes sense, using Rust's integrated testing framework. Integration tests will be created to test the functionality of the load balancer as a whole.
== Risks
- Both the Rust ecosystem and Async Rust are relatively young, and there may be some issues with the libraries we use.
- We could fail to adequately account for all the possible race conditions that could occur in a distributed environment.
== Assumptions
- We assume that the Fly.io platform will be able to handle the load of the project.
== IRB Protocol
Not necessary for this project.
== Required Resources and Budget
#figure(
table(columns: 3,
[Resource], [Cost], [Source],
[Hosting], [Approx. \$2-4 / month], [Fly.io],
)
) |
|
https://github.com/LDemetrios/Typst4k | https://raw.githubusercontent.com/LDemetrios/Typst4k/master/src/test/resources/suite/introspection/query.typ | typst | // Test creating a header with the query function.
--- query-here ---
// Test that `here()` yields the context element's location.
#context test(query(here()).first().func(), (context none).func())
--- query-running-header ---
#set page(
paper: "a8",
margin: (y: 1cm, x: 0.5cm),
header: context {
smallcaps[Typst Academy]
h(1fr)
let after = query(selector(heading).after(here()))
let before = query(selector(heading).before(here()))
let elem = if before.len() != 0 {
before.last()
} else if after.len() != 0 {
after.first()
}
emph(elem.body)
}
)
#outline()
= Introduction
#lines(1)
= Background
#lines(2)
= Approach
--- query-list-of-figures ---
#set page(
paper: "a8",
numbering: "1 / 1",
margin: (bottom: 1cm, rest: 0.5cm),
)
#set figure(numbering: "I")
#show figure: set image(width: 80%)
= List of Figures
#context {
let elements = query(selector(figure).after(here()))
for it in elements [
Figure
#numbering(it.numbering,
..counter(figure).at(it.location())):
#it.caption.body
#box(width: 1fr, repeat[.])
#counter(page).at(it.location()).first() \
]
}
#figure(
image("/assets/images/cylinder.svg", width: 50%),
caption: [Cylinder],
)
#figure(
rect[Just some stand-in text],
kind: image,
supplement: "Figure",
caption: [Stand-in text],
)
#figure(
image("/assets/images/tetrahedron.svg", width: 50%),
caption: [Tetrahedron],
)
--- query-complex ---
= A
== B
#figure([Cat], kind: "cat", supplement: [Other])
#heading(level: 3, outlined: false)[D]
= E <first>
#figure([Frog], kind: "frog", supplement: none)
#figure([Giraffe], kind: "giraffe", supplement: none) <second>
#figure([GiraffeCat], kind: "cat", supplement: [Other]) <second>
= H
#figure([Iguana], kind: "iguana", supplement: none)
== I
#let test-selector(selector, ref) = context {
test(query(selector).map(e => e.body), ref)
}
// Test `or`.
#test-selector(
heading.where(level: 1).or(heading.where(level: 3)),
([A], [D], [E], [H]),
)
#test-selector(
selector(heading).and(heading.where(outlined: false)),
([D],)
)
#test-selector(
heading.where(level: 1).or(
heading.where(level: 3),
figure.where(kind: "frog"),
),
([A], [D], [E], [Frog], [H]),
)
#test-selector(
heading.where(level: 1).or(
heading.where(level: 2),
figure.where(kind: "frog"),
figure.where(kind: "cat"),
),
([A], [B], [Cat], [E], [Frog], [GiraffeCat], [H], [I]),
)
#test-selector(
figure.where(kind: "dog").or(heading.where(level: 3)),
([D],),
)
#test-selector(
figure.where(kind: "dog").or(figure.where(kind: "fish")),
(),
)
// Test `or` duplicates removal.
#test-selector(
heading.where(level: 1).or(heading.where(level: 1)),
([A], [E], [H]),
)
// Test `and`.
#test-selector(
figure.where(kind: "cat").and(figure.where(kind: "frog")),
(),
)
// Test `or` with `before`/`after`
#test-selector(
selector(heading)
.before(<first>)
.or(selector(figure).before(<first>)),
([A], [B], [Cat], [D], [E]),
)
#test-selector(
heading.where(level: 2)
.after(<first>)
.or(selector(figure).after(<first>)),
([Frog], [Giraffe], [GiraffeCat], [Iguana], [I]),
)
// Test `and` with `after`
#test-selector(
figure.where(kind: "cat")
.and(figure.where(supplement: [Other]))
.after(<first>),
([GiraffeCat],),
)
// Test `and` (with nested `or`)
#test-selector(
heading.where(level: 2)
.or(heading.where(level: 3))
.and(heading.where(level: 2).or(heading.where(level: 1))),
([B], [I]),
)
#test-selector(
heading.where(level: 2)
.or(heading.where(level: 3), heading.where(level:1))
.and(
heading.where(level: 2).or(heading.where(level: 1)),
heading.where(level: 3).or(heading.where(level: 1)),
),
([A], [E], [H]),
)
// Test `and` with `or` and `before`/`after`
#test-selector(
heading.where(level: 1).before(<first>)
.or(heading.where(level: 3).before(<first>))
.and(
heading.where(level: 1).before(<first>)
.or(heading.where(level: 2).before(<first>))
),
([A], [E]),
)
#test-selector(
heading.where(level: 1).before(<first>, inclusive: false)
.or(selector(figure).after(<first>))
.and(figure.where(kind: "iguana").or(
figure.where(kind: "frog"),
figure.where(kind: "cat"),
heading.where(level: 1).after(<first>),
)),
([Frog], [GiraffeCat], [Iguana])
)
--- issue-3726-query-show-set ---
// Test that show rules apply to queried elements, i.e. that the content
// returned from `query` isn't yet marked as prepared.
#set heading(numbering: "1.")
#show heading: underline
= Hi
#set heading(numbering: "I.")
#show heading: set text(blue)
#show heading: highlight.with(fill: aqua.lighten(50%))
= Bye
// New show rules apply to this, but its location and the materialized fields
// from the original are retained.
#context query(heading).join()
--- query-quote ---
// Test quoting a query.
#quote[ABC] & #quote[EFG]
#context query(selector(quote).before(here())).first()
#quote(block: true)[HIJ]
#quote(block: true)[KLM]
#context query(selector(quote).before(here())).last()
#quote[NOP] <nop>
#context query(<nop>).first()
|
|
https://github.com/Myriad-Dreamin/typst.ts | https://raw.githubusercontent.com/Myriad-Dreamin/typst.ts/main/fuzzers/corpora/visualize/gradient-math_04.typ | typst | Apache License 2.0 |
#import "/contrib/templates/std-tests/preset.typ": *
#show: test-page
// Test on underover
#show math.equation: set text(fill: gradient.linear(..color.map.rainbow))
#show math.equation: box
$ underline(X^2) $
$ overline("hello, world!") $
|
https://github.com/wychwitch/typst-mla9-template | https://raw.githubusercontent.com/wychwitch/typst-mla9-template/main/template.typ | typst | MIT License | #let mla(
title: "Paper Title",
author: none,
professor: none,
date: none,
course: none,
bibliography-file: none,
font-size: 12pt,
font: "Times New Roman",
body
) = {
  // Set document metadata.
set document(
title: title,
author: author.firstname + " " + author.lastname
)
// Configure the page.
set page(
paper: "us-letter",
header: align(
right + horizon,
[
#v(0.5in)
#author.lastname
#counter(page).display("1")
]
),
margin: 1in
)
// Set paragraph properties.
set par(
first-line-indent: 0.5in,
justify: false,
leading: 2em
)
show par: set block(spacing: 2em)
// Set the body font.
set text(
font: font,
size: font-size,
)
// Configure headings.
set heading(numbering: "1.1.a.")
show heading: set block(spacing: 2em)
show heading: it => {
set text(size: font-size)
set par(first-line-indent: 0in)
// Create the heading numbering.
let number = if it.numbering != none {
counter(heading).display(it.numbering)
h(6pt, weak: true)
}
// Don't number the conclusion
let is-conclusion = it.body in ([conclusion], [Conclusion], [CONCLUSION])
v(2em)
if it.level == 1 and is-conclusion == true {
block[#text(weight: "bold")[#it.body]]
} else if it.level == 1 and is-conclusion == false {
block[#text(weight: "bold")[#number #it.body]]
} else if it.level == 2 {
block[#text(weight: "semibold")[#number #it.body]]
} else if it.level == 3 {
block[#text(weight: "medium")[#number #it.body]]
}
v(2em)
}
// configure block quotes
set quote(block: true)
show quote: set pad(left: 0.5in)
show quote: set block(spacing: 2em)
// configure tables
show figure.where(kind: table): it => {
set block(spacing: 1em)
set par(
first-line-indent: 0in,
leading: 1em
)
set table(stroke: none, align: center, row-gutter: 1em)
strong([Table #it.counter.display(it.numbering) #linebreak()])
it.caption.body
it.body
it.supplement
}
// configure illustrations
show figure.where(kind: image): it => {
set block(spacing: 1em)
set par(
first-line-indent: 0in,
leading: 1em
)
it.body
align(
center,
[Fig. #it.counter.display(it.numbering)\. #it.caption.body]
)
}
// MLA boilerplate
align(left,
stack(
dir:ttb,
spacing: 2em,
[#author.firstname #author.lastname],
professor,
course,
date
)
)
// Display the paper's title.
align(center, title)
// Display the paper's contents.
body
// Display the bibliography, if any is given.
if bibliography-file != none {
pagebreak()
align(center, "Works Cited")
show bibliography: set block(spacing: 2em)
show bibliography: set par(
first-line-indent: 0in,
hanging-indent: 0.5in
)
bibliography(
bibliography-file, title:none, style: "mla"
)
}
}
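// Example usage (a sketch — parameter names follow the signature above;
// the values are placeholders):
//
// #import "template.typ": mla
// #show: mla.with(
//   title: "Paper Title",
//   author: (firstname: "Jane", lastname: "Doe"),
//   professor: "Prof. Smith",
//   course: "ENG 101",
//   date: "1 May 2024",
//   bibliography-file: "works.bib",
// )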
|
https://github.com/Myriad-Dreamin/typst.ts | https://raw.githubusercontent.com/Myriad-Dreamin/typst.ts/main/fuzzers/corpora/text/shift_01.typ | typst | Apache License 2.0 |
#import "/contrib/templates/std-tests/preset.typ": *
#show: test-page
#set super(typographic: false, baseline: -0.25em, size: 0.7em)
n#super[1], n#sub[2], ... n#super[N]
|
https://github.com/as505/HelldiversTTRPG | https://raw.githubusercontent.com/as505/HelldiversTTRPG/main/README.md | markdown | # HelldiversTTRPG
[PDF](/main.pdf)
Available online in the [Typst WebEditor](https://typst.app/project/wNBY2bQBXGE-2hqSFRPrYq)
|
|
https://github.com/frectonz/the-pg-book | https://raw.githubusercontent.com/frectonz/the-pg-book/main/book/066.%20mit.html.typ | typst | mit.html
A Student's Guide to Startups
Want to start a startup? Get funded by
Y Combinator.
October 2006(This essay is derived from a talk at MIT.)Till recently graduating seniors had two choices: get a job or go
to grad school. I think there will increasingly be a third option:
to start your own startup. But how common will that be?

I'm sure the default will always be to get a job, but starting a
startup could well become as popular as grad school. In the late
90s my professor friends used to complain that they couldn't get
grad students, because all the undergrads were going to work for
startups. I wouldn't be surprised if that situation returns, but
with one difference: this time they'll be starting their own
instead of going to work for other people's.

The most ambitious students will at this point be asking: Why wait
till you graduate? Why not start a startup while you're in college?
In fact, why go to college at all? Why not start a startup instead?

A year and a half ago I gave a talk
where I said that the average age of the founders of
Yahoo, Google, and Microsoft was 24, and that if grad students could
start startups, why not undergrads? I'm glad I phrased that as a
question, because now I can pretend it wasn't merely a rhetorical
one. At the time I couldn't imagine why there should be any lower
limit for the age of startup founders. Graduation is a bureaucratic
change, not a biological one. And certainly there are undergrads
as competent technically as most grad students. So why shouldn't
undergrads be able to start startups as well as grad students?

I now realize that something does change at graduation: you lose a
huge excuse for failing. Regardless of how complex your life is,
you'll find that everyone else, including your family and friends,
will discard all the low bits and regard you as having a single
occupation at any given time. If you're in college and have a
summer job writing software, you still read as a student. Whereas
if you graduate and get a job programming, you'll be instantly
regarded by everyone as a programmer.

The problem with starting a startup while you're still in school
is that there's a built-in escape hatch. If you start a startup
in the summer between your junior and senior year, it reads to
everyone as a summer job.
So if it goes nowhere, big deal; you return to school in the
fall with all the other seniors; no one regards you as a failure,
because your occupation is student, and you didn't fail at that.
Whereas if you start a startup just one year later, after you
graduate, as long as you're not accepted to grad school in the fall
the startup reads to everyone as your occupation. You're
now a startup founder, so you have to do well at that.

For nearly everyone, the opinion of one's peers is the most powerful
motivator of all—more powerful even than the nominal goal of most
startup founders, getting rich. [1]

About a month into each funding
cycle we have an event called Prototype Day where each startup
presents to the others what they've got so far. You might think
they wouldn't need any more motivation. They're working on their
cool new idea; they have funding for the immediate future; and
they're playing a game with only two outcomes: wealth or failure.
You'd think that would be motivation enough. And yet the prospect
of a demo pushes most of them into a
rush of activity.

Even if you start a startup explicitly to get rich, the money you
might get seems pretty theoretical most of the time. What drives
you day to day is not wanting to look bad.

You probably can't change that. Even if you could, I don't think
you'd want to; someone who really, truly doesn't care what his peers
think of him is probably a psychopath. So the best you can do is
consider this force like a wind, and set up your boat accordingly.
If you know your peers are going to push you in some direction,
choose good peers, and position yourself so they push you in a
direction you like.

Graduation changes the prevailing winds, and those make a difference.
Starting a startup is so hard
that it's a close call even for the ones that succeed. However
high a startup may be flying now, it probably has a few leaves stuck
in the landing gear from those trees it barely cleared at the end
of the runway. In such a close game, the smallest increase in the
forces against you can be enough to flick you over the edge into
failure.

When we first started Y Combinator
we encouraged people to start
startups while they were still in college. That's partly because
Y Combinator began as a kind of summer program. We've kept the
program shape—all of us having dinner together once a week turns
out to be a good idea—but we've decided now
that the party line should be to tell people to wait till they
graduate.

Does that mean you can't start a startup in college? Not at all.
<NAME>, the co-founder of Loopt,
had just finished his sophomore year when we funded them, and Loopt
is probably the most promising of all the startups we've funded so
far. But <NAME> is a very unusual guy. Within about three
minutes of meeting him, I remember thinking "Ah, so this is what
<NAME> must have been like when he was 19."

If it can work to start a startup during college, why do
we tell people not to? For the same reason that the probably
apocryphal violinist, whenever he was asked to judge someone's
playing, would always say they didn't have enough talent to make
it as a pro. Succeeding as a musician takes determination as well
as talent, so this answer works out to be the right advice for
everyone. The ones who are uncertain believe it and give up, and
the ones who are sufficiently determined think "screw that, I'll
succeed anyway."

So our official policy now is only to fund undergrads we can't talk
out of it. And frankly, if you're not certain, you should wait.
It's not as if all the opportunities to start companies are going
to be gone if you don't do it now. Maybe the window will close on
some idea you're working on, but that won't be the last idea you'll
have. For every idea that times out, new ones become feasible.
Historically the opportunities to start startups have only increased
with time.

In that case, you might ask, why not wait longer? Why not go work
for a while, or go to grad school, and then start a startup? And
indeed, that might be a good idea. If I had to pick the sweet spot
for startup founders, based on who we're most excited to see
applications from, I'd say it's probably the mid-twenties. Why?
What advantages does someone in their mid-twenties have over someone
who's 21? And why isn't it older? What can 25 year olds do that
32 year olds can't? Those turn out to be questions worth examining.

Plus

If you start a startup soon after college, you'll be a young founder
by present standards, so you should know what the relative advantages
of young founders are. They're not what you might think. As a
young founder your strengths are: stamina, poverty, rootlessness,
colleagues, and ignorance.

The importance of stamina shouldn't be surprising. If you've heard
anything about startups you've probably heard about the long hours.
As far as I can tell these are universal. I can't think of any
successful startups whose founders worked 9 to 5. And it's
particularly necessary for younger founders to work long hours
because they're probably not as efficient as they'll be later.

Your second advantage, poverty, might not sound like an advantage,
but it is a huge one. Poverty implies you can live cheaply,
and this is critically important for startups. Nearly every startup
that fails, fails by running out of money. It's a little misleading
to put it this way, because there's usually some other underlying
cause. But regardless of the source of your problems, a low burn
rate gives you more opportunity to recover from them. And since
most startups make all kinds of mistakes at first, room to recover
from mistakes is a valuable thing to have.

Most startups end up doing something different than they planned.
The way the successful ones find something that works is by trying
things that don't. So the worst thing you can do in a startup is
to have a rigid, pre-ordained plan and then start spending a lot
of money to implement it. Better to operate cheaply and give your
ideas time to evolve.

Recent grads can live on practically nothing, and this gives you
an edge over older founders, because the main cost in software
startups is people. The guys with kids and mortgages are at a
real disadvantage. This is one reason I'd bet on the 25 year old
over the 32 year old. The 32 year old probably is a better programmer,
but probably also has a much more expensive life. Whereas a 25
year old has some work experience (more on that later) but can live
as cheaply as an undergrad.

<NAME> and I were 29 and 30 respectively when we started
Viaweb, but fortunately we still lived like 23 year olds. We both had
roughly zero assets. I would have loved to have a mortgage,
since that would have meant I had a house. But in retrospect
having nothing turned out to be convenient. I wasn't tied down and
I was used to living cheaply.

Even more important than living cheaply, though, is thinking cheaply.
One reason the Apple II was so popular was that it was cheap. The
computer itself was cheap, and it used cheap, off-the-shelf peripherals
like a cassette tape recorder for data storage and a TV as a monitor.
And you know why? Because Woz designed this computer for himself,
and he couldn't afford anything more.

We benefitted from the same phenomenon. Our prices were
daringly low for the time. The top level of service was
$300 a month, which was an order of magnitude below the norm. In
retrospect this was a smart move, but we didn't do it because we
were smart. $300 a month seemed like a lot of money to us. Like
Apple, we created something inexpensive, and therefore popular,
simply because we were poor.

A lot of startups have that form: someone comes along and makes
something for a tenth or a hundredth of what it used to cost, and
the existing players can't follow because they don't even want to
think about a world in which that's possible. Traditional long
distance carriers, for example, didn't even want to think about
VoIP. (It was coming, all the same.) Being poor helps in this
game, because your own personal bias points in the same direction
technology evolves in.

The advantages of rootlessness are similar to those of poverty.
When you're young you're more mobile—not just because you don't
have a house or much stuff, but also because you're less likely to
have serious relationships. This turns out to be important, because
a lot of startups involve someone moving.

The founders of Kiko, for example, are now en route to the Bay Area
to start their next startup. It's a better place for what they
want to do. And it was easy for them to decide to go, because
neither as far as I know has a serious girlfriend, and everything
they own will fit in one car—or more precisely, will either fit
in one car or is crappy enough that they don't mind leaving it
behind.

They at least were in Boston. What if they'd been in Nebraska,
like <NAME> was at their age? Someone wrote recently that
the drawback of Y Combinator was that you had to move to participate.
It couldn't be any other way. The kind of conversations we have
with founders, we have to have in person. We fund a dozen startups
at a time, and we can't be in a dozen places at once. But even if
we could somehow magically save people from moving, we wouldn't.
We wouldn't be doing founders a favor by letting them stay in
Nebraska. Places that aren't
startup hubs are toxic to startups.
You can tell that from indirect evidence. You can tell how hard
it must be to start a startup in Houston or Chicago or Miami from
the microscopically small number, per capita, that succeed
there. I don't know exactly what's suppressing all the startups in these
towns—probably a hundred subtle little things—but something
must be.
[2]

Maybe this will change. Maybe the increasing cheapness of startups
will mean they'll be able to survive anywhere, instead of only in
the most hospitable environments. Maybe 37signals is the pattern
for the future. But maybe not. Historically there have always
been certain towns that were centers for certain industries, and
if you weren't in one of them you were at a disadvantage. So my
guess is that 37signals is an anomaly. We're looking at a pattern
much older than "Web 2.0" here.

Perhaps the reason more startups per capita happen in the Bay Area
than Miami is simply that there are more founder-type people there.
Successful startups are almost never started by one person. Usually
they begin with a conversation in which someone mentions that
something would be a good idea for a company, and his friend says,
"Yeah, that is a good idea, let's try it." If you're missing that
second person who says "let's try it," the startup never happens.
And that is another area where undergrads have an edge. They're
surrounded by people willing to say that. At a good college you're
concentrated together with a lot of other ambitious and technically
minded people—probably more concentrated than you'll ever be
again. If your nucleus spits out a neutron, there's a good chance
it will hit another nucleus.

The number one question people ask us at Y Combinator is: Where can
I find a co-founder? That's the biggest problem for someone starting
a startup at 30. When they were in school they knew a lot of good
co-founders, but by 30 they've either lost touch with them or these
people are tied down by jobs they don't want to leave.

Viaweb was an anomaly in this respect too. Though we were comparatively
old, we weren't tied down by impressive jobs. I was trying to be
an artist, which is not very constraining, and Robert, though 29,
was still in grad school due to a little interruption in his academic
career back in 1988. So arguably the Worm made Viaweb possible.
Otherwise Robert would have been a junior professor at that age,
and he wouldn't have had time to work on crazy speculative projects
with me.

Most of the questions people ask Y Combinator we have some kind of
answer for, but not the co-founder question. There is no good
answer. Co-founders really should be people you already know. And
by far the best place to meet them is school. You have a large
sample of smart people; you get to compare how they all perform on
identical tasks; and everyone's life is pretty fluid. A lot of
startups grow out of schools for this reason. Google, Yahoo, and
Microsoft, among others, were all founded by people who met in
school. (In Microsoft's case, it was high school.)

Many students feel they should wait and get a little more experience
before they start a company. All other things being equal, they
should. But all other things are not quite as equal as they look.
Most students don't realize how rich they are in the scarcest
ingredient in startups, co-founders. If you wait too long, you may
find that your friends are now involved in some project they don't
want to abandon. The better they are, the more likely this is to
happen.

One way to mitigate this problem might be to actively plan your
startup while you're getting those n years of experience. Sure,
go off and get jobs or go to grad school or whatever, but get
together regularly to scheme, so the idea of starting a startup
stays alive in everyone's brain. I don't know if this works, but
it can't hurt to try.

It would be helpful just to realize what an advantage you have as
students. Some of your classmates are probably going to be successful
startup founders; at a great technical university, that is a near
certainty. So which ones? If I were you I'd look for the people
who are not just smart, but incurable
builders.
Look
for the people who keep starting projects, and finish at least some
of them. That's what we look for. Above all else, above academic
credentials and even the idea you apply with, we look for people
who build things.

The other place co-founders meet is at work. Fewer do than at
school, but there are things you can do to improve the odds. The
most important, obviously, is to work somewhere that has a lot of
smart, young people. Another is to work for a company located in
a startup hub. It will be easier to talk a co-worker into quitting
with you in a place where startups are happening all around you.

You might also want to look at the employment agreement you sign
when you get hired. Most will say that any ideas you think of while
you're employed by the company belong to them. In practice it's
hard for anyone to prove what ideas you had when, so the line gets
drawn at code. If you're going to start a startup, don't write any
of the code while you're still employed. Or at least discard any
code you wrote while still employed and start over. It's not so
much that your employer will find out and sue you. It won't come
to that; investors or acquirers or (if you're so lucky) underwriters
will nail you first. Between t = 0 and when you buy that yacht,
someone is going to ask if any of your code legally belongs
to anyone else, and you need to be able to say no.
[3]

The most overreaching employee agreement I've seen so far is Amazon's.
In addition to the usual clauses about owning your ideas, you also
can't be a founder of a startup that has another founder who worked
at Amazon—even if you didn't know them or even work there at the
same time. I suspect they'd have a hard time enforcing this, but
it's a bad sign they even try. There are plenty of other places
to work; you may as well choose one that keeps more of your options
open.

Speaking of cool places to work, there is of course Google. But I
notice something slightly frightening about Google: zero startups
come out of there. In that respect it's a black hole. People seem
to like working at Google too much to leave. So if you hope to start
a startup one day, the evidence so far suggests you shouldn't work
there.

I realize this seems odd advice. If they make your life so good
that you don't want to leave, why not work there? Because, in
effect, you're probably getting a local maximum. You need a certain
activation energy to start a startup. So an employer who's fairly
pleasant to work for can lull you into staying indefinitely, even
if it would be a net win for you to leave.
[4]

The best place to work, if you want to start a startup, is probably
a startup. In addition to being the right sort of experience, one
way or another it will be over quickly. You'll either end up rich,
in which case problem solved, or the startup will get bought, in
which case it will start to suck to work there and it will be
easy to leave, or most likely, the thing will blow up and you'll
be free again.

Your final advantage, ignorance, may not sound very useful. I
deliberately used a controversial word for it; you might equally
call it innocence. But it seems to be a powerful force. My Y
Combinator co-founder <NAME> is just about to publish
a book of interviews
with startup founders, and I noticed a remarkable pattern in them.
One after another said that if they'd known how hard it would be,
they would have been too intimidated to start.

Ignorance can be useful when it's a counterweight to other forms
of stupidity. It's useful in starting startups because you're
capable of more than you realize. Starting startups is harder than
you expect, but you're also capable of more than you expect, so
they balance out.

Most people look at a company like Apple and think, how could I
ever make such a thing? Apple is an institution, and I'm just a
person. But every institution was at one point just a handful of
people in a room deciding to start something. Institutions are
made up, and made up by people no different from you.

I'm not saying everyone could start a startup. I'm sure most people
couldn't; I don't know much about the population at large. When
you get to groups I know well, like hackers, I can say more precisely.
At the top schools, I'd guess as many as a quarter of the CS majors
could make it as startup founders if they wanted.

That "if they wanted" is an important qualification—so important
that it's almost cheating to append it like that—because once you
get over a certain threshold of intelligence, which most CS majors
at top schools are past, the deciding factor in whether you succeed
as a founder is how much you want to. You don't have to be that
smart. If you're not a genius, just start a startup in some unsexy
field where you'll have less competition, like software for human
resources departments. I picked that example at random, but I feel
safe in predicting that whatever they have now, it wouldn't take
genius to do better. There are a lot of people out there working
on boring stuff who are desperately in need of better software, so
however short you think you fall of Larry and Sergey, you can ratchet
down the coolness of the idea far enough to compensate.

As well as preventing you from being intimidated, ignorance can
sometimes help you discover new ideas. <NAME>
put this very strongly:
All the best things that I did at Apple came from (a) not having
money and (b) not having done it before, ever. Every single thing
that we came out with that was really great, I'd never once done
that thing in my life.
When you know nothing, you have to reinvent stuff for yourself, and
if you're smart your reinventions may be better than what preceded
them. This is especially true in fields where the rules change.
All our ideas about software were developed in a time when processors
were slow, and memories and disks were tiny. Who knows what obsolete
assumptions are embedded in the conventional wisdom? And the way
these assumptions are going to get fixed is not by explicitly
deallocating them, but by something more akin to garbage collection.
Someone ignorant but smart will come along and reinvent everything,
and in the process simply fail to reproduce certain existing ideas.

Minus

So much for the advantages of young founders. What about the
disadvantages? I'm going to start with what goes wrong and try to
trace it back to the root causes.

What goes wrong with young founders is that they build stuff that
looks like class projects. It was only recently that we figured
this out ourselves. We noticed a lot of similarities between the
startups that seemed to be falling behind, but we couldn't figure
out how to put it into words. Then finally we realized what it
was: they were building class projects.

But what does that really mean? What's wrong with class projects?
What's the difference between a class project and a real startup?
If we could answer that question it would be useful not just to
would-be startup founders but to students in general, because we'd
be a long way toward explaining the mystery of the so-called real
world.

There seem to be two big things missing in class projects: (1) an
iterative definition of a real problem and (2) intensity.

The first is probably unavoidable. Class projects will inevitably
solve fake problems. For one thing, real problems are rare and
valuable. If a professor wanted to have students solve real problems,
he'd face the same paradox as someone trying to give an example of
whatever "paradigm" might succeed the Standard Model of physics.
There may well be something that does, but if you could think of
an example you'd be entitled to the Nobel Prize. Similarly, good
new problems are not to be had for the asking.

In technology the difficulty is compounded by the fact that real
startups tend to discover the problem they're solving by a process
of evolution. Someone has an idea for something; they build it;
and in doing so (and probably only by doing so) they realize
the problem they should be solving is another one. Even if the
professor let you change your project description on the fly, there
isn't time enough to do that in a college class, or a market to
supply evolutionary pressures. So class
projects are mostly about implementation, which is the least
of your problems in a startup.

It's not just that in a startup you work on the idea as well as
implementation. The very implementation is different. Its main
purpose is to refine the idea. Often the only value of most of the
stuff you build in the first six months is that it proves your
initial idea was mistaken. And that's extremely valuable. If
you're free of a misconception that everyone else still shares,
you're in a powerful position. But you're not thinking that way
about a class project. Proving your initial plan was mistaken would
just get you a bad grade. Instead of building stuff to throw away,
you tend to want every line of code to go toward that final goal
of showing you did a lot of work.

That leads to our second difference: the way class projects are
measured. Professors will tend to judge you by the distance between
the starting point and where you are now. If someone has achieved
a lot, they should get a good grade. But customers will judge you
from the other direction: the distance remaining between where you
are now and the features they need. The market doesn't give a shit
how hard you worked. Users just want your software to do what they
need, and you get a zero otherwise. That is one of the most
distinctive differences between school and the real world: there
is no reward for putting in a good effort. In fact, the whole
concept of a "good effort" is a fake idea adults invented to encourage
kids. It is not found in nature.

Such lies seem to be helpful to kids. But unfortunately when you
graduate they don't give you a list of all the lies they told you
during your education. You have to get them beaten out of you by
contact with the real world. And this is why so many jobs want
work experience. I couldn't understand that when I was in college.
I knew how to program. In fact, I could tell I knew how to program
better than most people doing it for a living. So what was this
mysterious "work experience" and why did I need it?

Now I know what it is, and part of the confusion is grammatical.
Describing it as "work experience" implies it's like experience
operating a certain kind of machine, or using a certain programming
language. But really what work experience refers to is not some
specific expertise, but the elimination of certain habits left over
from childhood.

One of the defining qualities of kids is that they flake. When
you're a kid and you face some hard test, you can cry and say "I
can't" and they won't make you do it. Of course, no one can make
you do anything in the grownup world either. What they do instead
is fire you. And when motivated by that
you find you can do a lot more than you realized. So one of the
things employers expect from someone with "work experience" is the
elimination of the flake reflex—the ability to get things done,
with no excuses.

The other thing you get from work experience is an understanding
of what work is, and in particular, how intrinsically horrible it
is. Fundamentally the equation is a brutal one: you have to spend
most of your waking hours doing stuff someone else wants, or starve.
There are a few places where the work is so interesting that this
is concealed, because what other people want done happens to coincide
with what you want to work on. But you only have to imagine what
would happen if they diverged to see the underlying reality.

It's not so much that adults lie to kids about this as never explain
it. They never explain what the deal is with money. You know from
an early age that you'll have some sort of job, because everyone
asks what you're going to "be" when you grow up. What they
don't tell you is that as a kid you're sitting on the shoulders of
someone else who's treading water, and that starting working means
you get thrown into the water on your own, and have to start treading
water yourself or sink. "Being" something is incidental; the
immediate problem is not to drown.

The relationship between work and money tends to dawn on you only
gradually. At least it did for me. One's first thought tends to
be simply "This sucks. I'm in debt. Plus I have to get up on Monday
and go to work." Gradually you realize that these two things are
as tightly connected as only a market can make them.

So the most important advantage 24 year old founders have over 20
year old founders is that they know what they're trying to avoid.
To the average undergrad the idea of getting rich translates into
buying Ferraris, or being admired. To someone who has learned from
experience about the relationship between money and work, it
translates to something way more important: it means you get to opt
out of the brutal equation that governs the lives of 99.9% of people.
Getting rich means you can stop treading water.

Someone who gets this will work much harder at making a startup
succeed—with the proverbial energy of a drowning man, in fact.
But understanding the relationship between money and work also
changes the way you work. You don't get money just for working,
but for doing things other people want. Someone who's figured that
out will automatically focus more on the user. And that cures the
other half of the class-project syndrome. After you've been working
for a while, you yourself tend to measure what you've done the same
way the market does.

Of course, you don't have to spend years working to learn this
stuff. If you're sufficiently perceptive you can grasp these things
while you're still in school. <NAME> did. He must have, because
Loopt is no class project. And as his example suggests, this can
be valuable knowledge. At a minimum, if you get this stuff, you
already have most of what you gain from the "work experience"
employers consider so desirable. But of course if you really get
it, you can use this information in a way that's more valuable to
you than that.

Now

So suppose you think you might start a startup at some point, either
when you graduate or a few years after. What should you do now?
For both jobs and grad school, there are ways to prepare while
you're in college. If you want to get a job when you graduate, you
should get summer jobs at places you'd like to work. If you want
to go to grad school, it will help to work on research projects as
an undergrad. What's the equivalent for startups? How do you keep
your options maximally open?

One thing you can do while you're still in school is to learn how
startups work. Unfortunately that's not easy. Few if any colleges
have classes about startups. There may be business school classes
on entrepreneurship, as they call it over there, but these are
likely to be a waste of time. Business schools like to talk about
startups, but philosophically they're at the opposite end of the
spectrum. Most books on startups also seem to be useless. I've
looked at a few and none get it right. Books in most fields are
written by people who know the subject from experience, but for
startups there's a unique problem: by definition the founders of
successful startups don't need to write books to make money. As a
result most books on the subject end up being written by people who
don't understand it.

So I'd be skeptical of classes and books. The way to learn about
startups is by watching them in action, preferably by working at
one. How do you do that as an undergrad? Probably by sneaking in
through the back door. Just hang around a lot and gradually start
doing things for them. Most startups are (or should be) very
cautious about hiring. Every hire increases the burn rate, and bad
hires early on are hard to recover from. However, startups usually
have a fairly informal atmosphere, and there's always a lot that
needs to be done. If you just start doing stuff for them, many
will be too busy to shoo you away. You can thus gradually work
your way into their confidence, and maybe turn it into an official
job later, or not, whichever you prefer. This won't work for all
startups, but it would work for most I've known.

Number two, make the most of the great advantage of school: the
wealth of co-founders. Look at the people around you and ask
yourself which you'd like to work with. When you apply that test,
you may find you get surprising results. You may find you'd prefer
the quiet guy you've mostly ignored to someone who seems impressive
but has an attitude to match. I'm not suggesting you suck up to
people you don't really like because you think one day they'll be
successful. Exactly the opposite, in fact: you should only start
a startup with someone you like, because a startup will put your
friendship through a stress test. I'm just saying you should think
about who you really admire and hang out with them, instead of
whoever circumstances throw you together with.

Another thing you can do is learn skills that will be useful to you
in a startup. These may be different from the skills you'd learn
to get a job. For example, thinking about getting a job will make
you want to learn programming languages you think employers want,
like Java and C++. Whereas if you start a startup, you get to pick
the language, so you have to think about which will actually let
you get the most done. If you use that test you might end up
learning Ruby or Python instead.
But the most important skill for a startup founder isn't a programming
technique. It's a knack for understanding users and figuring out
how to give them what they want. I know I repeat this, but that's
because it's so important. And it's a skill you can learn, though
perhaps habit might be a better word. Get into the habit of thinking
of software as having users. What do those users want? What would
make them say wow?

This is particularly valuable for undergrads, because the concept
of users is missing from most college programming classes. The way
you get taught programming in college would be like teaching writing
as grammar, without mentioning that its purpose is to communicate
something to an audience. Fortunately an audience for software is
now only an http request away. So in addition to the programming
you do for your classes, why not build some kind of website people
will find useful? At the very least it will teach you how to write
software with users. In the best case, it might not just be
preparation for a startup, but the startup itself, like it was for
Yahoo and Google.

Notes

[1]
Even the desire to protect one's children seems weaker, judging
from things people have historically done to their kids
rather than risk their community's disapproval. (I assume we still
do things that will be regarded in the future as barbaric, but
historical abuses are easier for us to see.)

[2]
Worrying that Y Combinator makes founders move for 3 months
also suggests one underestimates how hard it is to start a startup.
You're going to have to put up with much greater inconveniences than
that.

[3]
Most employee agreements
say that any idea relating to the company's present or potential
future business belongs to them. Often as not the second clause could
include any possible startup, and anyone doing due diligence for an
investor or acquirer will assume the worst.

To be safe either (a) don't use code written while you
were still employed in your previous job, or (b) get your employer to
renounce, in writing, any claim to the code you write for your side
project. Many will consent to (b) rather than
lose a prized employee. The downside is that you'll have to tell them
exactly what your project does.

[4]
Geshke and Warnock only founded Adobe because Xerox ignored
them. If Xerox had used what they built, they would probably
never have left PARC.

Thanks to <NAME> and <NAME> for reading
drafts of this, and to <NAME> and the SIPB for inviting me to
speak.
|
|
https://github.com/mem-courses/calculus | https://raw.githubusercontent.com/mem-courses/calculus/main/homework-2/homework2.typ | typst | #import "../template.typ": *
#show: project.with(
course: "Calculus II",
course_fullname: "Calculus (A) II",
course_code: "821T0160",
title: "Homework #2",
authors: ((
name: "<NAME>",
email: "<EMAIL>",
id: "3230104585"
),),
semester: "Spring-Summer 2024",
date: "January 24, 2024",
)
#let ss = [$display(attach(inline(sum), t: +oo, b: n=1))$]
#let sss = [$display(attach(sum, t: +oo, b: n=1))$]
#let int = math.integral
= Exercises 11-3
== P263 1(1)
#prob[
Determine whether the series converges absolutely, converges conditionally, or diverges:
$ sss (-1)^n/root(n,n) $
]
== P263 1(4)
#prob[
Determine whether the series converges absolutely, converges conditionally, or diverges:
$ sum_(n=2)^(+oo) sin(n pi - 1/(ln n)) $
]
== P263 1(5)
#prob[
Determine whether the series converges absolutely, converges conditionally, or diverges:
$ sss (-1)^n ln(1+1/sqrt(n)) $
]
== P263 2
#prob[
Suppose $a_n>0 sp (n=1,2,dots.c)$ and that $ss a_n$ converges absolutely, and let the constant $lambda in (0,dp(pi/2))$. Prove that the series $dp(sss (-1)^n (n tan lambda/n) a_(2n))$ converges absolutely.
]
== P263 4
#prob[
Prove that if $ss |a_n|$ converges, then $ss a_n^2$ also converges. The converse does not hold; give a counterexample.
]
= Exercises 11-5
== P281 1(1)
#prob[
Find the radius of convergence, the interval of convergence, and the domain of convergence of the power series:
$ sss (x^n)/((n+1)3^n) $
]
== P281 1(4)
#prob[
Find the radius of convergence, the interval of convergence, and the domain of convergence of the power series:
$ sss ((x-1)^n)/((2n-1)2^n) $
]
== P281 1(7)
#prob[
Find the radius of convergence, the interval of convergence, and the domain of convergence of the power series:
$ sss (x^n)/(a^n+b^n) quad (a>0,sp b>0) $
]
== P281 2(1)
#prob[
Find the sum function of the power series:
$ sum_(n=0)^(+oo) (x^(4n+1))/(4n+1) $
]
== P281 2(7)
#prob[
Find the sum function of the power series:
$ sss ((-1)^(n-1) x^(2n))/(n(2n-1)) $
]
== P281 3(1)
#prob[
Using the sum function of a power series, find the sum of the series:
$ sum_(n=0)^(+oo) (-1/2)^(4n+2) 1/(4n+1) $
]
== P281 3(2)
#prob[
Using the sum function of a power series, find the sum of the series:
$ sss n^2 (1/3)^n $
] |
|
https://github.com/karlis-vagalis/typst-cv | https://raw.githubusercontent.com/karlis-vagalis/typst-cv/master/cv.typ | typst | #let generateAddress(
author: (),
) = [
*#author.firstName #author.lastName* \
#author.address.street #author.address.houseNumber \
#author.address.postalCode #author.address.city
]
#let generateDetails(author: ()) = [
//#box(height: 8pt, image("../lib/icons/calendar.svg")) #author.birthday \
#box(height: 8pt, image("./icons/at-sign.svg")) #author.email \
#box(height: 8pt, image("./icons/phone.svg")) #author.phone \
#link(author.socials.website.link)[#box(height: 8pt, image("./icons/globe.svg"))]
#h(1mm)
#link(author.socials.linkedin.link)[#box(height: 8pt, image("./icons/linkedin.svg"))]
#h(1mm)
#link(author.socials.github.link)[#box(height: 8pt, image("./icons/github.svg"))]
]
#let signOff(author: (), signature: none) = {
v(1fr)
[#author.address.city, den #datetime.today().display("[day]. [month repr:long] [year]")]
linebreak()
box(width: 5cm, image(signature))
}
#let timeline(type: none, items: [], accentColor: none) = {
let lineColor = rgb("#d6d6d6")
let firstColumnWidth = 32mm
let columnGutter = 5mm
let lineOverhang = 1mm
let timeItems = ()
show heading.where(level: 2): (it) => {
text(weight: 700, size: 10pt, it.body)
}
show emph : it => {
text(fill: accentColor, style: "italic", it.body)
}
if (type == "job") {
for (index, item) in items.enumerate() {
let timeSpan = align(right)[#item.start - #item.end]
let content = [
#heading(level: 2, item.company) #h(2mm) #box(height: 8pt, image("./icons/map-pin.svg")) #item.location \
_ #item.role _ \
#list(..item.tasks)
]
timeItems.push(timeSpan)
timeItems.push(content)
}
//repr(timeItems)
}
if (type == "education") {
for (index, item) in items.enumerate() {
let timeSpan = align(right)[#item.start - #item.end]
let content = [
#heading(level: 2, item.institution) #h(2mm) #box(height: 8pt, image("./icons/map-pin.svg")) #item.location \
#eval(item.description, mode: "markup")
]
timeItems.push(timeSpan)
timeItems.push(content)
}
//repr(timeItems)
}
if (type == "descriptive") {
for (index, item) in items.enumerate() {
let content = [
#heading(level: 2, item.item) \
#list(marker: "-", ..item.tasks.map(i => {
//panic(i)
eval(i, mode: "markup")
}))
]
timeItems.push([])
timeItems.push(content)
}
//repr(timeItems)
}
let g = grid(
columns: (firstColumnWidth, 1fr),
column-gutter: columnGutter,
row-gutter: 4mm,
..timeItems
)
// Measure size of the grid depending on the layout constraints
layout(size => style(styles => {
let (width, height) = measure(block(width: size.width, g), styles)
g
place(dy: -height -lineOverhang, dx: firstColumnWidth + (0.5 * columnGutter), line(angle: 90deg, stroke: (paint: lineColor, thickness: 0.4pt), length: height + 3 *lineOverhang))
}))
}
#let skills(items: []) = {
let lineColor = rgb("#d6d6d6")
let firstColumnWidth = 32mm
let columnGutter = 5mm
let lineOverhang = 1mm
show heading.where(level: 2): (it) => {
align(right)[#text(size: 10pt, weight: 400, upper(it.body))]
}
let itemList = ()
for (index, item) in items.enumerate() {
let type = [#heading(level: 2, item.type)]
let content = {
item.items.map(i => {
let parts = i.split("(")
let head = parts.at(0)
if parts.len() > 1 {
let foot = parts.slice(1, parts.len()).join("(")
[*#head* (#foot]
} else {
[*#head*]
}
}).join("\n")
}
itemList.push(type)
itemList.push(content)
}
let g = grid(
columns: (firstColumnWidth, 1fr),
column-gutter: columnGutter,
row-gutter: 4mm,
..itemList
)
// Measure size of the grid depending on the layout constraints
layout(size => style(styles => {
let (width, height) = measure(block(width: size.width, g), styles)
g
place(dy: -height -lineOverhang, dx: firstColumnWidth + (0.5 * columnGutter), line(angle: 90deg, stroke: (paint: lineColor, thickness: 0.4pt), length: height + 3 *lineOverhang))
}))
}
#let hobbies(items: []) = {
let lineColor = rgb("#d6d6d6")
let firstColumnWidth = 32mm
let columnGutter = 5mm
let lineOverhang = 1mm
show heading.where(level: 2): (it) => {
text(size: 10pt, weight: 700, it.body)
}
let itemList = ()
for (index, item) in items.enumerate() {
let content = {
[#heading(level: 2, item.type) #item.description]
}
itemList.push([])
itemList.push(content)
}
let g = grid(
columns: (firstColumnWidth, 1fr),
column-gutter: columnGutter,
row-gutter: 4mm,
..itemList
)
// Measure size of the grid depending on the layout constraints
layout(size => style(styles => {
let (width, height) = measure(block(width: size.width, g), styles)
g
place(dy: -height -lineOverhang, dx: firstColumnWidth + (0.5 * columnGutter), line(angle: 90deg, stroke: (paint: lineColor, thickness: 0.4pt), length: height + 3 *lineOverhang))
}))
}
#let cv(
author: (),
picture: none,
font: "Dinish",
accentColor: color.aqua,
doc,
) = {
set page(
paper: "a4",
margin: 2cm,
)
set text(
font: font,
size: 10pt,
hyphenate: true,
)
// Set heading styles
show heading.where(level: 1): (it) => {
let boxWidth = 2.5mm
let boxHeight = 7.4mm
grid(
columns: (boxWidth, 1fr),
rows: (auto),
gutter: boxWidth,
[#align(horizon)[#box(width: boxWidth, height: boxHeight, fill: accentColor)]],
[#align(horizon)[#upper(it.body)]],
)
}
set list(marker: "-")
/* Generate document */
place(
top + right,
box(
height: 4cm,
radius: 2cm,
clip: true,
image(picture),
)
)
grid(
columns: (60mm, 1fr),
generateAddress(author: author),
generateDetails(author: author)
)
doc
}
|
|
https://github.com/noahjutz/CV | https://raw.githubusercontent.com/noahjutz/CV/main/body/timeline.typ | typst | #import "@preview/cetz:0.3.0"
#let _stroke = 1pt
#let _fill = black
#let entry(
prefix,
title,
postfix
) = {
block(
inset: (left: 4pt),
below: 0pt,
block(
stroke: (left: _stroke),
inset: (left: 8pt, bottom: 8pt, top: 8pt),
below: 0pt,
stack(
{
set text(fill: gray, size: 12pt)
prefix
v(6pt)
},
{
set text(weight: "black")
place(
horizon,
dx: -12pt,
circle(radius: 4pt, stroke: none, fill: _fill)
)
title
},
if postfix != none {
v(12pt)
postfix
},
)
)
)
}
#let arrow = cetz.canvas({
import cetz.draw: *
circle((), radius: 0)
line(
(4pt, 0),
(4pt, -16pt),
mark: (end: "straight", scale: 1.5)
)
}) |
|
https://github.com/xdoardo/co-thesis | https://raw.githubusercontent.com/xdoardo/co-thesis/master/thesis/chapters/imp/analysis/fold.typ | typst | #import "/includes.typ":*
#import "@preview/prooftrees:0.1.0"
=== Pure constant folding optimization<subsection-imp-analysis_optimization-fold>
Pure constant folding is the second and last transformation we consider.
Following @concrete-semantics again, pure folding consists of statically
examining the source code of the program in order to move computations, when
possible, from runtime to (pre-)compile time.
The objective of pure constant folding is to find all the places in the
source code where the result of an expression is computable statically:
examples of this situation are `and true true`, `plus 1 1`, `le 0 1` and so
on. This optimization is called _pure_ because we skip the phase of constant
propagation, that is, we do not replace identifiers with their values even
when those values are known at compile time.
==== Pure folding of arithmetic expressions<subsubsection-imp-fold-arith>
Pure folding optimization on arithmetic expressions is straightforward, and we
define it as a function ```hs apfold```. In words: let $a$ be an arithmetic
expression. Then, if $a$ is a constant or an identifier the result of the
optimization is $a$. If $a$ is the sum of two other arithmetic expressions
$a_1$ and $a_2$ ($a eq.triple "plus" space a_1 space a_2$), the optimization is
performed on the two immediate terms $a_1$ and $a_2$, resulting in two
potentially different expressions $a'_1$ and $a'_2$. If both are constants
$v_1$ and $v_2$ the result of the optimization is the constant $v_1 + v_2$;
otherwise, the result of the optimization consists in the same arithmetic
expression $"plus" space a'_1 space a'_2$, that is, optimized immediate
subterms. The Agda code for the function ```hs apfold``` is shown in
@code-apfold.
#mycode(label: <code-apfold>, "https://github.com/ecmma/co-thesis/blob/master/agda/lujon/Imp/Analysis/PureFolding/Arith.agda#L15")[
//typstfmt::off
```hs
apfold : (a : AExp) -> AExp
apfold (const x) = const x
apfold (var id) = var id
apfold (plus a₁ a₂) with (apfold a₁) | (apfold a₂)
... | const v₁ | const v₂ = const (v₁ + v₂)
... | a₁' | a₂' = plus a₁' a₂'
```
//typstfmt::on
]
Of course, what we want to show is that this optimization does not change the
result of the evaluation, as shown in @thm-apfold-safe[Theorem].
#theorem(
name: "Safety of pure folding for arithmetic expressions",
label: <thm-apfold-safe>
)[
Let $a$ be an arithmetic expression and $s$ be a store. Then
#align(center,
$"aeval" a space s eq.triple "aeval" ("apfold" a) space s$)
In Agda:
#mycode(proof: <proof-apfold-safe>, "https://github.com/ecmma/co-thesis/blob/master/agda/lujon/Imp/Analysis/PureFolding/Arith.agda#L31")[
```hs apfold-safe : ∀ a s -> (aeval a s ≡ aeval (apfold a) s) ```
]]
#linebreak()
==== Pure folding of boolean expressions<subsubsection-imp-fold-bool>
//typstfmt::off
Pure folding of boolean expressions, which we define as a function ```hs bpfold```,
//typstfmt::on
follows the same line of reasoning shown in
@subsubsection-imp-fold-arith[Paragraph].
Let $b$ be a boolean expression. If $b$ has no immediate subterms
(i.e. $b eq.triple "const" n$) we leave it untouched. If, instead, $b$ has
immediate subterms, we compute the pure folding of them and build a result
accordingly, as shown in @code-bpfold.
#mycode(label: <code-bpfold>,"https://github.com/ecmma/co-thesis/blob/master/agda/lujon/Imp/Analysis/PureFolding/Bool.agda#L17")[
//typstfmt::off
```hs
bpfold : (b : BExp) -> BExp
bpfold (const b) = const b
bpfold (le a₁ a₂) with (apfold a₁) | (apfold a₂)
... | const n₁ | const n₂ = const (n₁ ≤ᵇ n₂ )
... | a₁ | a₂ = le a₁ a₂
bpfold (not b) with (bpfold b)
... | const n = const (lnot n)
... | b = not b
bpfold (and b₁ b₂) with (bpfold b₁) | (bpfold b₂)
... | const n₁ | const n₂ = const (n₁ ∧ n₂)
... | b₁' | b₂' = and b₁' b₂'
```
//typstfmt::on
]
As before, our objective is to show that evaluating a boolean expression after
the optimization yields the same result as the evaluation without optimization,
as shown in @thm-bpfold-safe.
#theorem(
name: "Safety of pure folding for boolean expressions",
label: <thm-bpfold-safe>
)[
Let $b$ be a boolean expression and $s$ be a store. Then
#align(center, $"beval" b space s eq.triple "beval" ("bpfold" b) space s$)
#mycode(proof: <proof-bpfold-safe>, "https://github.com/ecmma/co-thesis/blob/master/agda/lujon/Imp/Analysis/PureFolding/Bool.agda#L38")[
//typstfmt::off
```hs
bpfold-safe : ∀ b s -> (beval b s ≡ beval (bpfold b) s)
```
//typstfmt::on
]]
==== Pure folding of commands<subsubsection-imp-fold-commands>
Pure folding of commands builds on the definitions of $"apfold"$ and $"bpfold"$
above, combining them as shown in @code-cpfold.
#mycode(label: <code-cpfold>, "https://github.com/ecmma/co-thesis/blob/master/agda/lujon/Imp/Analysis/PureFolding/Command.agda#L23")[
//typstfmt::off
```hs
cpfold : Command -> Command
cpfold skip = skip
cpfold (assign id a) with (apfold a)
... | const n = assign id (const n)
... | _ = assign id a
cpfold (seq c₁ c₂) = seq (cpfold c₁) (cpfold c₂)
cpfold (ifelse b c₁ c₂) with (bpfold b)
... | const false = cpfold c₂
... | const true = cpfold c₁
... | _ = ifelse b (cpfold c₁) (cpfold c₂)
cpfold (while b c) with (bpfold b)
... | const false = skip
... | b = while b c
```
//typstfmt::on
]
And, again, what we want to show is that the pure folding optimization does not
change the semantics of the program, that is, optimized and unoptimized programs
converge to the same value or both diverge, as shown in @thm-cpfold-safe[Theorem].
#theorem(
name: "Safety of pure folding for commands",
label: <thm-cpfold-safe>
)[
Let $c$ be a command and $s$ be a store. Then
  #align(center, $infinity tack ("ceval" c space s) approx ("ceval" ("cpfold" c) space s)$)
#mycode(proof: <proof-cpfold-safe>, "https://github.com/ecmma/co-thesis/blob/master/agda/lujon/Imp/Analysis/PureFolding/Command.agda#L50")[
//typstfmt::off
```hs
cpfold-safe : ∀ (c : Command) (s : Store)
-> ∞ ⊢ (ceval c s) ≈ (ceval (cpfold c) s)
```
//typstfmt::on
]]
Of course, what makes @thm-cpfold-safe[Theorem] different from the other safety
proofs in this chapter is that we cannot use propositional equality and we must
instead use weak bisimilarity. The execution of a program, in terms of chains
of constructors ```hs later``` and ```hs now```, changes for the same term if
the pure folding optimization does indeed change the source. Take, for example,
the case for `c ≡ while (plus 1 1) < 0 do skip`; this program will be optimized to
`skip`, which results in a shorter evaluation.
|
|
https://github.com/ymgyt/techbook | https://raw.githubusercontent.com/ymgyt/techbook/master/programmings/js/typescript/specification/typeof.md | markdown | # typeof
* Takes a value as an argument and returns that value's type
```typescript
if (typeof x === "string") {
  // the compiler now lets us treat x as a string
x.toUpperCase();
}
```
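A small sketch (the `describe` function and `config` value here are hypothetical names) showing the runtime guard together with the type-level `typeof`, which extracts the static type of an existing value:

```typescript
// Runtime narrowing: typeof yields a string tag the compiler understands
function describe(x: string | number): string {
  if (typeof x === "number") {
    return x.toFixed(2); // x narrowed to number
  }
  return x.toUpperCase(); // x narrowed to string
}

// Type-level typeof: extract the static type of an existing value
const config = { retries: 3, verbose: true };
type Config = typeof config; // { retries: number; verbose: boolean }

const copy: Config = { retries: 5, verbose: false };
```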
|
|
https://github.com/typst/packages | https://raw.githubusercontent.com/typst/packages/main/packages/preview/g-exam/0.1.0/g-exam.typ | typst | Apache License 2.0 | #import "@preview/oxifmt:0.2.0": strfmt
#let __question-number = counter("question-number")
#let __question-point = state("question-point", 0)
#let __question-point-position-state = state("question-point-position", left)
#let __localization = state("localization")
#let __default-localization = (
grade-table-queston: "Question",
grade-table-total: "Total",
grade-table-points: "Points",
  grade-table-calification: "Grade",
point: "point",
points: "points",
page: "Page",
page-counter-display: "1 of 1",
family-name: "Surname",
personal-name: "Name",
group: "Group",
date: "Date"
)
#let __student-data(show-line-two: true) = {
locate(loc => {
[#__localization.final(loc).family-name: #box(width: 2fr, repeat[.]) #__localization.final(loc).personal-name: #box(width:1fr, repeat[.])]
if show-line-two {
v(1pt)
align(right, [#__localization.final(loc).group: #box(width:2.5cm, repeat[.]) #__localization.final(loc).date: #box(width:3cm, repeat[.])])
}
}
)
}
#let __grade-table-header(decimal-separator: ".") = {
locate(loc => {
let end-question-locations = query(<end-question-localization>, loc)
let columns-number = range(0, end-question-locations.len() + 1)
let question-row = columns-number.map(n => {
if n == 0 {align(left + horizon)[#text(hyphenate: false,__localization.final(loc).grade-table-queston)]}
else if n == end-question-locations.len() {align(left + horizon)[#text(hyphenate: false,__localization.final(loc).grade-table-total)]}
else [ #n ]
}
)
let total-point = 0
if end-question-locations.len() > 0 {
total-point = end-question-locations.map(ql => __question-point.at(ql.location())).sum()
}
let points = ()
if end-question-locations.len() > 0 {
points = end-question-locations.map(ql => __question-point.at(ql.location()))
}
let point-row = columns-number.map(n => {
if n == 0 {align(left + horizon)[#text(hyphenate: false,__localization.final(loc).grade-table-points)]}
else if n == end-question-locations.len() [
#strfmt("{0:}", calc.round(total-point, digits:2), fmt-decimal-separator: decimal-separator)
]
else {
let point = points.at(n)
[
#strfmt("{0}", calc.round(point, digits: 2), fmt-decimal-separator: decimal-separator)
]
}
}
)
let calification-row = columns-number.map(n =>
{
if n == 0 {
align(left + horizon)[#text(hyphenate: false, __localization.final(loc).grade-table-calification)]
}
}
)
align(center, table(
stroke: 0.8pt + luma(80),
columns: columns-number.map( n =>
{
if n == 0 {auto}
else if n == end-question-locations.len() {auto}
else {30pt}
}),
rows: (auto, auto, 30pt),
..question-row.map(n => n),
..point-row.map(n => n),
..calification-row.map(n => n),
)
)
}
)
}
#let __question-numbering(..args) = {
let nums = args.pos()
if nums.len() == 1 {
numbering("1. ", nums.last())
}
else if nums.len() == 2 {
numbering("(a) ", nums.last())
}
else if nums.len() == 3 {
numbering("(i) ", nums.last())
}
}
#let __paint-tab(point: none, loc: none) = {
if point != none {
let label-point = __localization.final(loc).points
if point == 1 {
label-point = __localization.final(loc).point
}
[(#emph[#strfmt("{0}", calc.round(point, digits: 2), fmt-decimal-separator: ",") #label-point])]
}
}
#let g-question(point: none, body) = {
__question-number.step(level: 1)
[#hide[]<end-question-localization>]
__question-point.update(p =>
{
if point == none { 0 }
else { point }
})
locate(loc => {
let __question-point-position = __question-point-position-state.final(loc)
if __question-point-position == left {
v(0.1em)
{
__question-number.display(__question-numbering)
if(point != none) {
__paint-tab(point:point, loc: loc)
h(0.2em)
}
}
body
}
else if __question-point-position == right {
v(0.1em)
if(point != none) {
place(right,
dx: 12%,
float: false,
__paint-tab(point: point, loc: loc))
}
__question-number.display(__question-numbering)
body
}
else {
v(0.1em)
__question-number.display(__question-numbering)
body
}
})
}
#let g-subquestion(point: none, body) = {
__question-number.step(level: 2)
let subquestion-point = 0
if point != none { subquestion-point = point }
__question-point.update(p => p + subquestion-point )
locate(loc => {
let question-point-position = __question-point-position-state.final(loc)
if question-point-position == left {
v(0.1em)
{
h(0.7em)
__question-number.display(__question-numbering)
if(point != none) {
__paint-tab(point: point, loc:loc)
h(0.2em)
}
}
body
}
else if question-point-position == right {
v(0.1em)
if(point != none) {
place(right,
dx: 12%,
float: false,
__paint-tab(point: point, loc:loc))
}
{
h(0.7em)
__question-number.display(__question-numbering)
}
body
}
else {
v(0.1em)
{
h(0.7em)
__question-number.display(__question-numbering)
}
body
}
}
)
}
#let __show_clarifications = (clarifications: none) => {
if clarifications != none {
let clarifications-content = []
if type(clarifications) == "content" {
clarifications-content = clarifications
}
else if type(clarifications) == "string" {
clarifications-content = clarifications
}
else if type(clarifications) == "array" {
clarifications-content = [
#for clarification in clarifications [
- #clarification
]
]
}
else {
    panic("Clarifications of type '" + type(clarifications) + "' not implemented")
}
rect(
width: 100%,
stroke: luma(120),
inset:8pt,
radius: 4pt,
clarifications-content
)
v(5pt)
}
}
#let g-exam(
author: (
name: "",
email: none,
watermark: none
),
school: (
name: none,
logo: none,
),
exam-info: (
academic-period: none,
academic-level: none,
academic-subject: none,
number: none,
content: none,
model: none
),
localization: (
grade-table-queston: none,
grade-table-total: none,
grade-table-points: none,
grade-table-calification: none,
point: none,
points: none,
page: none,
page-counter-display: none,
family-name: none,
personal-name: none,
group: none,
date: none
),
// date: none auto datetime,
date: none,
keywords: none,
languaje: "en",
clarifications: none,
show-studen-data: "first-page",
show-grade-table: true,
decimal-separator: ".",
question-point-position: left,
body,
) = {
  assert(show-studen-data in (none, "first-page", "odd-pages"),
    message: "Invalid value for show-studen-data")
assert(question-point-position in (none, left, right),
message: "Invalid question point position")
assert(decimal-separator in (".", ","),
message: "Invalid decimal separator")
let __show-watermark = (
author: (
name: "",
email: none,
watermark: none
),
school: (
name: none,
logo: none,
),
exam-info: (
academic-period: none,
academic-level: none,
academic-subject: none,
number: none,
content: none,
model: none
),
) => {
place(
top + right,
float: true,
clearance: 0pt,
dx:72pt,
dy:-115pt,
rotate(270deg,
origin: top + right,
{
if author.at("watermark", default: none) != none {
text(size:7pt, fill:luma(90))[#author.watermark]
h(35pt)
}
if exam-info.at("model", default: none) != none {
text(size:8pt, luma(40))[#exam-info.model]
}
}
)
)
}
let __document-name = (
exam-info: (
academic-period: none,
academic-level: none,
academic-subject: none,
number: none,
content: none,
model: none
)) => {
let document-name = ""
if exam-info.at("name", default: none) != none { document-name += " " + exam-info.name }
if exam-info.at("content", default: none) != none { document-name += " " + exam-info.content }
if exam-info.at("number", default: none) != none { document-name += " " + exam-info.number }
if exam-info.at("model", default: none) != none { document-name += " " + exam-info.model }
return document-name
}
let __read-localization = (
languaje: "en",
localization: (
grade-table-queston: none,
grade-table-total: none,
grade-table-points: none,
grade-table-calification: none,
point: none,
points: none,
page: none,
page-counter-display: none,
family-name: none,
personal-name: none,
group: none,
date: none
)) => {
let __lang_data = toml("lang.toml")
if(__lang_data != none) {
let __read_lang_data = __lang_data.at(languaje, default: localization)
if(__read_lang_data != none) {
let __read-localization_value = (read_lang_data: none, field: "", localization: none) => {
let __parameter_value = localization.at(field)
if(__parameter_value != none) { return __parameter_value }
let value = read_lang_data.at(field, default: __default-localization.at(field))
if(value == none) { value = __default-localization.at(field)}
return value
}
let __grade_table_queston = __read-localization_value(read_lang_data: __read_lang_data, field: "grade-table-queston", localization: localization)
let __grade_table_total = __read-localization_value(read_lang_data: __read_lang_data, field: "grade-table-total", localization: localization)
let __grade_table_points = __read-localization_value(read_lang_data: __read_lang_data, field: "grade-table-points", localization: localization)
let __grade_table_calification = __read-localization_value(read_lang_data: __read_lang_data, field: "grade-table-calification", localization: localization)
let __point = __read-localization_value(read_lang_data: __read_lang_data, field:"point", localization: localization)
let __points = __read-localization_value(read_lang_data: __read_lang_data, field: "points", localization: localization)
let __page = __read-localization_value(read_lang_data: __read_lang_data, field: "page", localization: localization)
let __page-counter-display = __read-localization_value(read_lang_data: __read_lang_data, field: "page-counter-display", localization: localization)
let __family_name = __read-localization_value(read_lang_data: __read_lang_data, field: "family-name", localization: localization)
let __personal_name = __read-localization_value(read_lang_data: __read_lang_data, field: "personal-name", localization: localization)
let __group = __read-localization_value(read_lang_data: __read_lang_data, field: "group", localization: localization)
let __date = __read-localization_value(read_lang_data: __read_lang_data, field: "date", localization: localization)
let __localization_lang_data = (
grade-table-queston: __grade_table_queston,
grade-table-total: __grade_table_total,
grade-table-points: __grade_table_points,
grade-table-calification: __grade_table_calification,
point: __point,
points: __points,
page: __page,
page-counter-display: __page-counter-display,
family-name: __family_name,
personal-name: __personal_name,
group: __group,
date: __date,
)
__localization.update(__localization_lang_data)
}
}
}
set document(
title: __document-name(exam-info: exam-info).trim(" "),
author: author.name
)
let margin-right = 2.5cm
if (question-point-position == right) {
margin-right = 3cm
}
set page(
paper: "a4",
margin: (top: 5cm, right:margin-right),
numbering: "1 / 1",
number-align: right,
header-ascent: 20%,
header:locate(loc => {
let page-number = counter(page).at(loc).first()
if (page-number==1) {
align(right)[#box(
width:108%,
grid(
columns: (auto, auto),
gutter:0.7em,
align(left + top)[
#if(school.logo != none) {
image(school.logo, height:2.5cm, fit: "contain")
}
],
grid(
rows: (auto, auto, auto),
gutter:1em,
grid(
columns: (auto, 1fr, auto),
align(left + top)[
#school.name \
#exam-info.academic-period \
#exam-info.academic-level
],
align(center + top)[
// #exam-info.number #exam-info.content \
],
align(right + top)[
#exam-info.at("academic-subject", default: none) \
#exam-info.number \
#exam-info.content
],
),
line(length: 100%, stroke: 1pt + gray),
if show-studen-data in ("first-page", "odd-pages") {
__student-data()
}
)
)
)]
}
else if calc.rem-euclid(page-number, 2) == 1 {
grid(
columns: (auto, 1fr, auto),
gutter:0.3em,
align(left + top)[
#school.name \
#exam-info.academic-period \
#exam-info.academic-level
],
align(center + top)[
// #exam-info.number #exam-info.content \
],
align(right + top)[
#exam-info.at("academic-subject", default: none) \
#exam-info.number \
#exam-info.content
]
)
line(length: 100%, stroke: 1pt + gray)
if show-studen-data == "odd-pages" {
__student-data(show-line-two: false)
}
}
else {
grid(
columns: (auto, 1fr, auto),
gutter:0.3em,
align(left + top)[
#school.name \
#exam-info.academic-period \
#exam-info.academic-level
],
align(center + top)[
// #exam-info.number #exam-info.content \
],
align(right + top)[
#exam-info.at("academic-subject", default: none) \
#exam-info.number \
#exam-info.content \
]
)
line(length: 100%, stroke: 1pt + gray)
}
}
),
footer: locate(loc => {
line(length: 100%, stroke: 1pt + gray)
align(right)[
#__localization.final(loc).page
#counter(page).display(__localization.final(loc).page-counter-display, both: true,
)
]
// grid(
// columns: (1fr, 1fr, 1fr),
// align(left)[#school.name],
// align(center)[#exam-info.academic-period],
// align(right)[
// Página
// #counter(page).display({
// "1 de 1"},
// both: true,
// )
// ]
// )
__show-watermark(author: author, school: school, exam-info: exam-info)
}
)
)
set par(justify: true)
set text(font: "New Computer Modern")
__read-localization(languaje: languaje, localization: localization)
__question-point-position-state.update(u => question-point-position)
set text(lang:languaje)
if show-grade-table == true {
__grade-table-header(
decimal-separator: decimal-separator,
)
v(10pt)
}
// show heading.where(level: 1): it => {
// set block(above: 1.2em, below: 1em)
// set text(12pt, weight: "semibold")
// question(point: none)[#it.body]
// }
// show heading.where(level: 2): it => {
// set text(12pt, weight: "regular")
// subquestion(point: none)[#it.body]
// }
set par(justify: true)
if clarifications != none {
__show_clarifications(clarifications: clarifications)
}
body
[#hide[]<end-question-localization>]
[#hide[]<end-exam>]
}
#let g-explanation(size:8pt, body) = {
text(size:size)[$(*)$ #body]
} |
https://github.com/matbesancon/modern-cv-academics | https://raw.githubusercontent.com/matbesancon/modern-cv-academics/master/resume.typ | typst | MIT License | #import "@preview/modern-cv:0.4.0": *
#show: resume.with(
author: (
firstname: "Mathieu",
lastname: "Besançon",
email: "<EMAIL>",
phone: "(+33) 111-111-1111",
github: "matbesancon",
linkedin: "corporate",
address: "The Big Lab, 3rd Main Street, UniversityCity, France",
positions: (
"Inria Associate Researcher",
"Mathematical Optimization",
),
),
// date: datetime.today().display(),
language: "en",
colored-headers: true,
)
= Experience
#let expdata = toml("experience.toml")
// only show full time positions
#for entry in expdata.values().filter(e => e.fulltime ) [
#resume-entry(
title: entry.title,
description: entry.institution,
date: entry.date,
location: entry.location,
)
#entry.description
]
= Education
#let educationdata = toml("education.toml")
#for entry in educationdata.values() [
#resume-entry(
title: entry.title,
location: entry.institution,
date: entry.date,
description: entry.program,
)
#if ("description" in entry) {
entry.description
}
]
// This key defines the publication format between:
// A single file highlighted.bib with everything in it included OR
// Subsections from three bib files (submitted, journals, conferences)
#let selectedpubsonly = false
// Used to highlight your name in publications, you may have to adapt it to different citation styles
#show regex("Besançon, M\\.?"): name => text(navy, (strong(name)))
#if selectedpubsonly {
let bibdata = bibliography("publications/highlight.bib", full: true, title: "Selected Publications")
bibdata
} else {
show bibliography: none
[= Publications]
// Define which sections to include
let sections = (
("Technical Reports and Working Papers", "submitted"),
("Journal Articles", "journals"),
("Conference Papers", "conferences"),
)
bibliography(sections.map(p => "publications/" + p.at(1) + ".yaml"), full: false)
for (sectitle, filename) in sections {
[== #sectitle]
linebreak()
let entries = yaml("publications/" + filename + ".yaml").pairs().sorted(key: p => p.at(1).date).rev().map(p => label(p.at(0)))
for key in entries [
#cite(key, form: "full", style: "american-physics-society")
]
linebreak()
}
}
= Supervision
#let supervisiondata = toml("supervision.toml")
== Ph.D. Candidates
#for entry in supervisiondata.phd {
  [- #entry.firstname #entry.lastname, #entry.institution. #(entry.startmonth).#entry.startyear - #if entry.keys().contains("endmonth") [#(entry.endmonth).#entry.endyear] else [today]. #if entry.keys().contains("coadvisor") [With #entry.coadvisor.] else []
]
}
== Recent Master's Theses
#for entry in supervisiondata.masters {
[- #entry.firstname #entry.lastname, #entry.program at #entry.institution, #entry.topic #(entry.startmonth).#entry.startyear - #entry.endmonth.#entry.endyear. #if entry.keys().contains("coadvisor") [With #entry.coadvisor.] else []
]
}
= Recent and Selected Talks
#let parse(month, year) = toml.decode("date = '01-" + str(month) + " " + str(year) + "'").date
// Current logic for the talk filters: only show the ones that are less than a year old or that have the highlighted key
#let talkdata = toml("talks.toml")
#let filteredtalkdata = talkdata.values().filter(
p =>
p.at("highlight", default: false) or
(datetime(day: 1, year: p.year, month: p.month) > datetime.today() - duration(days: 365) )
)
#resume-item[
#for entry in filteredtalkdata [
#let typestring = text(entry.type, weight: "semibold")
#datetime(day: 1, year: entry.year, month: entry.month).display("[month repr:short] [year]") - #typestring #entry.event, #entry.title
]
]
= Research Visits
#resume-item[
- BigUniv, Industrial and Systems Engineering, May 2024
- CoolSchool, Institute of Applied Mathematics, March 2023
]
= Teaching Activities
#let teaching_date = toml("teaching.toml")
#for entry in teaching_date.teaching_experience [
#resume-entry(
title: entry.course,
location: entry.institution,
date: entry.period,
description: [#entry.at("program", default: "")],
)
]
// This section is inspired by the original template, feel free to adapt for your needs
= Projects
#resume-entry(
title: "FrankWolfe.jl",
location: [#github-link("ZIB-IOL/FrankWolfe.jl")],
date: "Feb. 2021 - Present",
description: "Designer/Developer",
)
= Competitive Funding
#resume-item[
- Travel grant from Institution, to attend BigConf, 2024
- Young Investigator Grant, BigLab, funding the Ph.D. of ABC, 2023
]
= Community Service
== Reviewing activities
Mathematical Programming,
Computers and Operations Research.
CPAIOR 24 PC member.
== Conference Organization
BigConf 2025, PC member.
== Other Service
#resume-item[
- Selection committee member, assistant professor position, Big School, 2024
]
|
https://github.com/DashieTM/ost-5semester | https://raw.githubusercontent.com/DashieTM/ost-5semester/main/experiment/weeks/week11.typ | typst | #import "../../utils.typ": *
#section("Approximation Tactics")
Base idea: The problem is that we don't always have the necessary parameters for
all the functions explained previously. Instead, we need to approximate both $mu$ and $sigma$.\
Note, this is a mathematical approach, not random bullshit go, aka it is a
function that tried to *approximate* the result.\
In the end we want 2 things out of the approximation:
- figure out the base quantity
- reduce the error of the approximation or at least measure it
#subsection("Different Tactics")
#subsubsection("Point Approximation")
Here we need the median value $mu$, the variance $sigma^2$ and the probability $p$.
#align(
center,
[
#text(
teal,
)[The curved line above T is a common notation to show an approximation or guess.]
#image("../../Screenshots/2023_11_30_04_50_53.png", width: 100%)
],
)
#align(center, [
#text(teal)[Not relevant for the exam, just for clarification]
#image("../../Screenshots/2023_11_30_04_51_08.png", width: 100%)
])
#align(
center,
[#image("../../Screenshots/2023_11_30_04_51_41.png", width: 100%)],
)
#align(
center,
[#image("../../Screenshots/2023_11_30_04_51_56.png", width: 100%)],
)
#subsubsection("Interval Approximation")
Here we compare the parameters from above over an interval to test the error of
the parameters -> reliability score
#subsubsubsection("Expected Value")
#align(
center,
[#image("../../Screenshots/2023_11_30_04_52_53.png", width: 100%)],
)
#text(
teal,
)[
- the results of the probe are calculated with an approximation function such as
chi-thai pingpang
  - in an experiment, many results are gathered, which again are distributed under a
    function
  - only in the case of an unlimited base quantity N are the probes truly independent
  - However, it can be considered "approx unlimited" when we have enough probes -> n
is big enough
- see below for when to consider N to be unlimited
]
#align(
center,
[#image("../../Screenshots/2023_11_30_04_54_04.png", width: 100%)],
)
#align(
center,
[#image("../../Screenshots/2023_11_30_05_59_35.png", width: 100%)],
)
#align(
center,
[#image("../../Screenshots/2023_11_30_04_54_16.png", width: 100%)],
)
#set text(14pt)
#text(red)[Steps to build the confidence interval]
+ determine the distribution form of $overline(X)$
+ determine the variance of $overline(X)$, if not deterministic -> approximate
  with $s^2$, see approximation of variance above
+ determine the quantile value of z or t from a table or calculator\
  This is usually taken from a table -> read it from the distribution
+ calculate the maximum approximation error
- the product of quantile value and standard deviation of X
+ determine confidence borders
- the top and bottom border are calculated by adding or subtracting the
approximation error to the median $overline(X)$
#set text(11pt)
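The steps above can be sketched numerically. A minimal Python sketch (the 95%
z-quantile 1.96 and the sample figures are illustrative; for small n a
t-quantile would be read from the table instead):

```python
import math

def confidence_interval(mean, s, n, quantile):
    """Confidence borders for a sample mean (steps 2-5 above).

    mean: sample mean, s: sample standard deviation, n: sample size,
    quantile: z or t value read from a table (step 3).
    """
    std_err = s / math.sqrt(n)           # step 2: std deviation of the mean
    error = quantile * std_err           # step 4: maximum approximation error
    return (mean - error, mean + error)  # step 5: confidence borders

# Illustrative figures: mean 124.5 g, s 1.72 g, n 26, 95% z-quantile 1.96
lo, hi = confidence_interval(124.5, 1.72, 26, 1.96)
```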
Example:
#align(
center,
[#image("../../Screenshots/2023_11_30_04_54_32.png", width: 80%)
N -> 600 n -> 26
$overline(X)$ -> 124.5g s -> 1.72g],
)
#align(
center,
[#image("../../Screenshots/2023_11_30_04_54_44.png", width: 80%)],
)
#align(
center,
[#image("../../Screenshots/2023_11_30_04_54_59.png", width: 80%)],
)
#pagebreak()
#set text(14pt)
#text(red)[Steps to build the confidence interval of a fraction -> e.x. 90%]
+ determine the distribution form of $P$.\
  The approximating distribution is normal when $n*P (1-P) > 9$.
+ determine variance of P
+ determine the quantile value of z
+ calculate the maximum approximation error
- the product of quantile value and standard deviation of P
+ determine confidence borders
- the top and bottom border are calculated by adding or subtracting the
approximation error to the median $P$
- #text(red)[Alright P is a percentage -> 30% of consumers know brand x]
#set text(11pt)
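As a sketch, the same steps for a fraction in Python (the 30% share, n = 400,
and the z-quantile 1.96 are illustrative figures):

```python
import math

def proportion_interval(p, n, quantile):
    """Confidence borders for a fraction P (steps above).

    Uses the normal approximation, justified when n * p * (1 - p) > 9.
    """
    assert n * p * (1 - p) > 9, "normal approximation not justified (step 1)"
    std_err = math.sqrt(p * (1 - p) / n)  # step 2: std deviation of P
    error = quantile * std_err            # step 4: max approximation error
    return (p - error, p + error)         # step 5: confidence borders

# Illustrative: 30% of n = 400 asked consumers know the brand, 95% quantile
lo, hi = proportion_interval(0.30, 400, 1.96)
```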
Example:
#align(
center,
[#image("../../Screenshots/2023_11_30_06_20_03.png", width: 80%)],
)
#align(
center,
[#image("../../Screenshots/2023_11_30_06_20_36.png", width: 80%)],
)
#set text(14pt)
#text(
red,
)[This is the reversal, instead of wanting to know the error of something, we
would like to know the value of something given an error:]
#align(center, [
  with replacement aka n > 30 aka N $approx$ unlimited:
#image("../../Screenshots/2023_11_30_06_28_00.png", width: 90%)
])
#align(center, [
#image("../../Screenshots/2023_11_30_06_28_51.png", width: 90%)
])
#align(center, [
  without replacement aka n < 30 aka N $approx$ unlimited:
#image("../../Screenshots/2023_11_30_06_29_59.png", width: 90%)
])
#align(
center,
[#image("../../Screenshots/2023_11_30_06_30_09.png", width: 90%)],
)
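The reversal can be sketched the same way: solving quantile $dot$ s / $sqrt(n) <=$ E
for n gives the minimum sample size (figures illustrative, unlimited-N case):

```python
import math

def min_sample_size(s, error, quantile):
    """Smallest n keeping the approximation error below `error`
    (N ~ unlimited), from quantile * s / sqrt(n) <= error."""
    return math.ceil((quantile * s / error) ** 2)

# Illustrative: s = 1.72 g, tolerated error 0.5 g, 95% z-quantile 1.96
n = min_sample_size(1.72, 0.5, 1.96)
```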
#subsubsubsection("Variance")
#align(
center,
[#image("../../Screenshots/2023_11_30_04_55_35.png", width: 100%)],
)
#align(
center,
[#image("../../Screenshots/2023_11_30_04_55_45.png", width: 100%)],
)
#align(
center,
[#image("../../Screenshots/2023_11_30_04_55_58.png", width: 100%)],
)
|
|
https://github.com/TriefendeNase1384/rezepte | https://raw.githubusercontent.com/TriefendeNase1384/rezepte/main/key_lime_pie.typ | typst | #let primary_colour = rgb("#43978D")
#let text_colour = rgb("#333")
#let recipes(title, doc) = {
set text(10pt, font: "IBM")
set page(
margin: (x: 54pt, y: 52pt),
numbering: "1",
number-align: right,
)
set align(center)
v(240pt)
text(fill: primary_colour, font: "Salzburg-Serial", size: 30pt, weight: 100, title)
set align(left)
show heading.where(
level: 1
): it => [
#pagebreak()
#v(300pt)
#set align(center)
#text(
fill: primary_colour,
font: "IBM Plex Sans",
weight: 300,
size: 20pt,
{it.body},
)
#text(" ")
#pagebreak()
]
doc
}
#let display_method(method, servings) = {
[== Method.]
set enum(tight: true)
columns(
2,
gutter: 11pt,
method
)
emph([Serves #servings.])
}
#let recipe(
title: "",
author: "",
description: "",
image_path: "",
servings: 3,
prep_time: "",
bake_time: "",
difficulty: "normal",
ingredients: (),
method: [],
) = {
show heading.where(
level: 2
): it => text(
fill: primary_colour,
font: "IBM Plex Sans",
weight: 300,
size: 11pt,
grid(
columns: (auto, auto),
column-gutter: 5pt,
[#{upper(it.body)}],
[
#v(5pt)
#line(length: 272pt, stroke: 0.5pt + primary_colour)
]
)
)
{
grid(
columns: (350pt, 130pt),
[
#text(fill: primary_colour, font: "Salzburg-Serial", size: 18pt, weight: 100, upper(title))
#h(3pt)
#text(fill: text_colour, font: "<NAME>", size: 14pt, author)
#v(0pt)
#emph(description)
],
[
#v(2pt)
#set align(right)
#if(prep_time != "") {
[_Preparation: #prep_time _]
}
#if(bake_time != "") {
[\ _Baking Time: #bake_time _]
}
#if(difficulty != "") {
[\ _Difficulty: #difficulty _]
}
],
)
grid(
columns: (145pt, 380pt),
column-gutter: 15pt,
[
#set list(marker: [], body-indent: 0pt)
#set align(right)
#text(fill: primary_colour, font: "IBM Plex Sans", weight: 300, size:11pt, upper([Ingredients.\ ]))
#emph(ingredients)
],
[
#display_method(method, servings)
]
)
v(30pt)
}
}
#set text(lang: "en")
#set text(11pt, font: "IBM Plex Sans")
#set page(
margin: (x: 54pt, y: 52pt),
numbering: "1",
number-align: right,
)
#let author = [by <NAME>]
#let key_lime_pie = recipe(
title: [Key Lime Pie.],
author: author,
description: [
I've searched long and hard for the perfect Key lime pie recipe, and this might be the one. The ratios are spot on and the custard turns out perfectly every time. Make this recipe with lemons, Meyer lemons, grapefruits, sour oranges... any citrus that is super tart. Oh, and it is best enjoyed with some coffee, chef.
],
prep_time: "20 minutes",
bake_time: "15 minutes",
difficulty: "easy",
ingredients: [
- 116 g unsalted butter
- 116 citrus peels
- 115 ml sweetened condensed milk
- 114 graham crackers
- 110 g powdered sugar
- 101 ml key lime juice
- 1 pinch kosher salt
- 1 teaspoon vanilla extract
],
method: [
Clean mixing bowl.
Put unsalted butter into the mixing bowl.
Put key lime juice into the mixing bowl.
Fold kosher salt into the mixing bowl.
Mix the mixing bowl well.
Put sweetened condensed milk into the mixing bowl.
Put unsalted butter into the mixing bowl.
Add vanilla extract.
Put unsalted butter into the mixing bowl.
Put powdered sugar into the mixing bowl.
Put graham crackers into the mixing bowl.
Stir the mixing bowl for 2 minutes.
Put kosher salt into the mixing bowl.
Liquefy contents of the mixing bowl.
Pour contents of the mixing bowl into the baking dish.
#colbreak()
#image("img/key_lime_pie.png", width: 70%)
],
)
#key_lime_pie
|
|
https://github.com/rikhuijzer/phd-thesis | https://raw.githubusercontent.com/rikhuijzer/phd-thesis/main/chapters/5.typ | typst | The Unlicense | #import "../style.typ": citefig
#import "../functions.typ": chapter, textcite, parencite, note
#pagebreak()
#chapter(
[Early Identification of Dropouts During the Special Forces Selection Program],
label: [
<NAME>., <NAME>., <NAME>., <NAME>., & <NAME>, <NAME>. (2024).
Early Identification of Dropouts During the Special Forces Selection Program.
_PsyArXiv_.
#link("https://doi.org/10.31234/osf.io/nbs6j")
],
abstract: [
Special forces selection is a highly demanding process that involves exposure to high levels of psychological and physical stress resulting in dropout rates of up to 80%.
To identify who likely drops out, we assessed a group of 249 recruits, every week of the program, on their experienced psychological and physical stress, recovery, self-efficacy, and motivation.
Using both ordinary least squares regression and state-of-the-art machine learning models, we aimed to find the model that could predict dropout best.
Furthermore, we inspected the best model to identify the most important predictors of dropout and to evaluate the predictive performance in practice.
Via cross-validation, we found that linear regression performed best while remaining interpretable, with an Area Under the Curve (AUC) of 0.69.
We also found that low levels of self-efficacy and motivation were significantly associated with dropout.
Additionally, we found that dropout could often be predicted multiple weeks in advance and that the AUC score may underestimate the real-world predictive performance.
Taken together, these findings offer novel insights in the use of prediction models on repeated measurements of psychological and physical processes, specifically in the context of special forces selection.
This offers opportunities for early intervention and support, which could ultimately improve selection success rates.
]
)
== Introduction
Special forces are often considered the most elite military units, with the potential to significantly impact strategic military outcomes.
They are typically composed of highly trained and motivated individuals who are able to operate in high-stakes environments which are both psychologically and physically demanding.
However, dropout rates during the selection process are close to 80%
(e.g., #parencite(<gayton2015prospective>)).
This is a concern for both the recruits and the military, as it takes a personal toll on the recruits and is costly for the military.
Scientifically, a major challenge is identifying potential dropouts early in the selection period via accurate predictive models.
Such models could allow for early intervention on potential future dropouts by intervening in the relevant psychological and physical processes.
The relatively scarce previous research investigated dropout by comparing test scores from before the selection period with the final dropout or graduation decision.
Psychological tests included, for instance, personality questionnaires and showed that a higher emotional stability and conscientiousness were associated with graduation
(e.g., #parencite(<jackson2012military>); #parencite(<sorlie2020person>); #parencite(<huijzer2022personality>); #parencite(<rolland1998psychometric>); #parencite(<tedeholm2021personality>)).
In other research, psychological hardiness was associated with graduation among 1,138 special forces recruits @bartone2008psychological and 178 Norwegian border patrol soldiers @johnsen2013psychological.
On the other hand, in a study including 73 South African special forces, hardiness and self-efficacy were not associated with graduation @beer2014exploring.
In another study, higher self-efficacy was significantly associated with graduation among 380 special forces recruits @gruber2009effects.
Physical tests typically include fitness, strength, and endurance tests.
For example, in a study among 69 Finnish soldiers, baseline aerobic fitness significantly predicted graduation @vaara2020can.
In a study on 160 Swedish police counterterrorism intervention units including various psychological and physical tests, the authors found that only running capacity was a significant predictor of graduation @tedeholm2023predictors.
A study on 800 special forces recruits showed that both psychological and physical tests were significantly associated with graduation @farina2019physical.
Finally, a follow-up study on 117 special forces soldiers found that physical characteristics of the body, such as a lower percentage body fat and fat mass were predictors for physical performance and graduation @farina2022anthropometrics.
Despite some evidence for the role of psychological and physical factors in predicting dropout, a main issue of previous studies is that they showed limited effects and different predictor combinations.
For instance, when comparing agreeableness between military recruits and a civilian control group, agreeableness was found to be lower after training @jackson2012military, whereas this was not found in two recent studies @huijzer2022personality @tedeholm2023predictors.
Such contradicting results could be due to theoretical and methodological factors.
Theoretically, a commando profile could be composed of different combinations of characteristics that could allow an individual to perform in highly psychologically and physically demanding situations
(e.g., #parencite(<hartigh2016dynamic>)).
Accordingly, and methodology-related, an important factor contributing to dropout is how recruits respond to the stress during the heavy selection program.
This cannot be derived from psychological and physical measures taken at one point during the selection program.
Thus, an important question is: how do recruits actually respond to, and recover from, the stress to which they are exposed?
Such a question can be answered by measuring recruits during the selection period on relevant psychological and physical processes of stress and recovery.
Recent research provided initial evidence that repeated measures can be used to predict dropout.
For instance, one longitudinal study on elite soldiers found that recruits who voluntarily dropped out exhibited an increase in emotional or physical pain and a decrease in self-efficacy up to three days before dropping out @saxon2020continuous.
Similarly, in a study on 46 male and female recruits in the Australian Army basic military training course, higher stress and recovery, as measured via the Short Recovery and Stress Scale @kellmann2019recovery, were associated with a higher risk of delayed completion @tait2022factors.
Similar results have been found in sports.
For example, in a study on 135 adolescent elite athletes, lower recovery and higher stress states as measured by the Acute Recovery and Stress Scale (ARSS) were followed by depressive, burnout, and insomnia symptoms @gerber2022perceived.
In a study on 74 middle and long-distance runners, recovery and exertion were considered some of the most important variables for predicting injuries
(#parencite(<lovdal2021injury>), Figure 4).
These findings are promising as they suggest that dropout, either voluntary or involuntary (e.g., due to injury), can be predicted in advance based on measures taken during selection or training periods.
Building upon first efforts of predicting dropout from military programs and the increasing interest in the psychological and physical stress monitoring during army training, important statistical strides can be made.
Most notably, while previous studies often applied traditional statistical methods, i.e., testing how variables were associated with dropout or graduation, they often did not report predictive performance.
This means that associations between variables could be too small to be useful in practice or they could be wrong due to overfitting @yarkoni2017choosing.
Ideally, a study would report predictive performance for multiple models to avoid overfitting and dependence on one model, and use repeated measures to allow for prediction of dropout in advance.
For a recent example in the context of the marine corps, see #textcite(<dijksma2022development>).
The current study aimed to assess the experienced psychological and physical stress and recovery of recruits during the selection weeks while improving upon the statistical methods used in previous research.
In line with recommendations from previous literature, we specifically focused on the experiences of self-efficacy, motivation, and psychological and physical stress and recovery @hartigh2022resilience.
We compared various classical and state-of-the-art machine learning models via cross-validation.
In addition, we explored the moment at which valid predictions of dropout could be made (e.g., one day, one week, or three weeks in advance).
Such knowledge could lead to a better understanding of the dropout process, and to targeted interventions in practice.
#block([
== Method
=== Participants
The sample for this study consisted of 249 male special forces recruits, ranging in age from 18 to 35 years.
Prior to their involvement in the study, active informed consent was obtained from each recruit.
The information letter informed participants about the study's purpose, procedures, and potential risks, as well as their right to withdraw from the study at any time.
The participants were diverse in terms of their military experience, with some being new recruits while others had prior experience in different branches of the armed forces.
Due to the sensitive nature of the data, more detailed information about the participants could not be made available.
], breakable: false)
#block[
=== Measures
During the selection period that lasted up to 16 weeks, we asked the following self-efficacy and motivation questions, both in Dutch:
"How confident are you that you can complete the course?" (0 = not confident at all, 100 = very confident) and
"How motivated are you to pass the training program?" (0 = not at all motivated, 100 = very motivated).
Furthermore, we used a Dutch version of the Short Recovery and Stress Scale (SRSS), a self-report questionnaire assessing perceived stress and recovery levels @kellmann2019recovery.
The Dutch version underwent a parallel back-translation procedure @vallerand1989vers.
It was subsequently validated in a group of 385 Dutch and Flemish athletes @brauers2023monitoring.
The SRSS consists of 8 items divided into two subscales: Recovery and Stress.
Items were rated on a seven-point Likert scale, with higher scores indicating greater levels of recovery or stress.
The Recovery subscale evaluates an individual's current state in comparison to their best recovery state ever, with items such as "Physical performance capacity" and "Mental performance capacity".
The Stress subscale assesses an individual's current state relative to their highest stress state ever, including items like "Muscle stress" and "Lack of inspiration", see #textcite(<kellmann2019recovery>) for more information.
Over the course of the study, the recruits completed the questionnaire weekly, resulting in a total of 1,652 responses.
On average, we received about 6 responses per person.
The number of responses per participant varied due to individuals dropping out of the selection process before the end of the study.
The data was collected using an electronic questionnaire, which was administered via a web-based platform that we built for this project.
The collection occurred at the start of the training week, which was typically on Monday morning at 0800 hours.
]
=== Analysis
We processed the data to include the following 13 columns: _id_, _week_, _motivation_, _self-efficacy_, 8 SRSS items, and whether the individual drops out in the week after the response.
Here, we truncated the data at 13 weeks, given that the data was only collected for 14 out of 16 weeks.
Next, we analyzed the model in three ways.
We consider none of these ways definitive; instead, we treat each as a tool to evaluate the model
(e.g., #parencite(<mcshane2019abandon>); #parencite(<hofman2021integrating>)).
Firstly, we applied principles and techniques from machine learning to estimate the model's ability to predict future behaviors.
We used 12-fold cross-validation and the area under the receiver operating characteristic curve (AUC) as a performance metric, both via the MLJ.jl software package @blaom2020mlj.
The AUC is a measure of the performance of a binary classifier, where a value of 0.5 indicates random guessing and a value of 1.0 indicates perfect predictions.
We used the AUC because it is a robust metric that is not sensitive to class imbalance and is a common metric in the literature.
Furthermore, we used multiple different models to determine which one performed best in terms of predictive performance.
We fitted a binary logistic model with no intercept as our baseline model.
Next, we fitted two SIRUS models to the training data, as the SIRUS model has been shown to perform well in similar situations with relatively few samples and binary outcomes @benard2021sirus @huijzer2023sirus.
SIRUS is based on random forests and is therefore non-parametric, meaning that it does not make assumptions about the distributions of the data.
Random forest-based models are robust to outliers, do not require scaling of the data, and perform very well generally @biau2016random.
Finally, we fitted a modern gradient boosting model called EvoTrees.jl @desgagne2024evotrees.
Gradient boosting models are not fully interpretable due to the large amounts of trees
(e.g., #parencite(<huijzer2023sirus>)),
but they are known to perform well in many situations
(e.g., #parencite(<chen2016xgboost>); #parencite(<ke2017lightgbm>)).
Secondly, in the context of military selection, we prefer models with an optimal trade-off between predictive accuracy and interpretability.
Therefore, to combine predictive performance and interpretation @hofman2021integrating, we inspected the model that scored best on this trade-off.
Specifically, we fitted the model on the full dataset and inspected the fitted model.
Thirdly, we evaluated the predictive performance in practice.
To do so, we converted the predictions of the model in the range of 0 to 1 back to a binary outcome.
We did this by choosing a threshold and using this threshold to split the outcomes in dropout and graduate groups.
Next, we visualized the predictions of the model for different thresholds.
This helps researchers and practitioners in selecting the right balance between the number of false positives and false negatives, and provides an indication of the predictive performance in practice.
#block([
== Results
The results for the evaluation runs on the cross-validation data are shown in Figure #citefig(<fig:weekly-auc>).
#figure(
[
#image("../images/weekly-auc.png", width: 100%)
#note([
The different lines show the results for all folds in the 12-fold cross-validation.
The average Area Under the Curve (AUC) and 1.96 $*$ standard error scores are shown in the bottom right of each graph.
])
],
caption: "Receiver Operating Characteristic (ROC) Curves"
) <fig:weekly-auc>
], breakable: false)
In these results, the bottom two graphs both have a max tree depth of 2.
This higher depth allows these models to capture more complex interactions between variables.
However, the results show that these models do not perform markedly better than the simpler models, see Figure #citefig(<fig:weekly-auc>).
This is likely caused by more complex models overfitting the data and could likely be solved by using more data.
In general, the logistic regression model performs best since it scores best in the trade-off between predictive performance and interpretability.
The interpretability is very high because the algorithm is very simple compared to the thousands of trees in gradient boosting models, and the performance is very comparable to the gradient boosting model.
Therefore, we inspect the logistic regression model in more detail below.
The coefficients of the logistic model, when fitted on the full dataset, are shown in Table #citefig(<tab:coefficients>).
When interpreting this model, note that there is variation in performance for the different cross-validation folds, see Figure #citefig(<fig:weekly-auc>).
This is why we decided post hoc to set our alpha level conservatively to 0.001 instead of the commonly used 0.05.
This lower alpha level means that we are less likely to find significant results.
Setting this level post hoc seemed reasonable as we use the p-value as just one of the many tools to interpret the model
(e.g., #parencite(<mcshane2019abandon>)).
From Table #citefig(<tab:coefficients>), we can see that the variables "Self-Efficacy" and "Motivation" were significant.
The positive coefficients indicate that recruits who score higher on self-efficacy and motivation are less likely to drop out.
#figure(
{
set text(size: 8.5pt)
table(
columns: (auto, auto, auto, auto, auto, auto),
align: (left, right, right, right, right, right),
table.hline(start: 0),
table.header(
[*Variable*], [*Coefficient*], [*Z-Score*], [*_p_-Value*], [*Lower 95%*], [*Upper 95%*],
),
table.hline(start: 0),
[Self-Efficacy], [1.734], [4.78], [\<0.001], [1.022], [2.446],
[Motivation], [1.205], [3.6], [\<0.001], [0.549], [1.86],
[Muscle Tension], [0.561], [2.18], [0.029], [0.056], [1.066],
[Overall Stress], [-0.701], [-1.95], [0.051], [-1.405], [0.003],
[Lack of Enthusiasm], [0.531], [1.38], [0.168], [-0.221], [1.282],
[Negative Emotional State], [-0.493], [-1.38], [0.168], [-1.194], [0.209],
[Emotional Balance], [0.406], [1.24], [0.215], [-0.233], [1.044],
[Overall Recovery], [0.341], [0.89], [0.373], [-0.413], [1.096],
[Physical Performance], [-0.343], [-0.83], [0.407], [-1.157], [0.47],
[Mental Performance], [-0.245], [-0.6], [0.549], [-1.049], [0.56],
[Recovery], [0.225], [0.55], [0.582], [-0.574], [1.025],
table.hline(start: 0),
)
},
caption: "Fitted Binary Logistic Regression Statistics"
) <tab:coefficients>
Next, the predictions made by the logistic regression model are visualized in Figure #citefig(<fig:weekly-thresholds>).
The figure shows that many of the dropouts were predicted correctly in the last week, which is in line with the AUC score as reported in Figure #citefig(<fig:weekly-auc>).
Furthermore, some dropouts were predicted weeks before the actual dropout event.
This suggests that the reported AUC score underestimates the actual predictive performance, since our data is modeled such that a dropout prediction is only considered correct if it is made in the week before the dropout event.
#figure(
[
#image("../images/weekly-thresholds.png", width: 100%)
#note([
This figure shows the true points of drop out for each participant in the leftmost subfigure.
The second subfigure shows how the data was modeled.
The aim was to train a model that could predict dropout events.
The other three subfigures show the predictions according to the model for different thresholds.
Different thresholds allow practitioners to select the right balance between the number of false positives and false negatives.
This, together with the AUC, provides an indication of the predictive performance in practice.
])
],
caption: "True Dropout Data and Predictions of the Model"
) <fig:weekly-thresholds>
== Discussion
The current study aimed to predict dropout during the special forces selection period.
To that end, we assessed the recruits on psychological and physical factors related to stress and recovery during this period.
We applied simple logistic models as well as more complex models on this data.
Next, we used various tools to analyze the model.
Specifically, we evaluated how well each model performs, we interpreted the best model, and evaluated the predictive performance in practice.
We found that a simple logistic regression model scored best on the trade-off between predictive performance and interpretability because it was interpretable and performed relatively well with an area under the curve (AUC) of 0.69.
The most complex models scored only slightly better on the AUC, which suggested we had insufficient data for more complex models.
The logistic regression model revealed that self-efficacy and motivation were significantly related to dropout.
This provides support for earlier research that found that decreases in self-efficacy were related to dropout in a military context @saxon2020continuous.
More generally, it is in accordance with the perspective that temporal measures of self-efficacy and motivation can provide important information on an individual's resilience.
That is, motivation and self-efficacy are important psychological performance factors that ideally return to normal levels following psychological and physical stress.
When individuals lose resilience, as reflected in their self-efficacy and motivation levels, this could be a warning signal for negative outcomes such as psychological problems or dropout
(for a review, see #parencite(<hartigh2022resilience>)).
Interesting in this regard is that more direct measures of stress and recovery experiences were less predictive of dropout.
One reason for this could be that the individual questions are more sensitive than items containing multiple questions.
Put differently, in items with multiple questions, variations tend to average out, making it less likely that the items become significantly related to dropout.
Another reason for this could be that the SRSS has, so far, only been validated in the sports context.
Despite the parallels between the sport and military context, individuals are typically exposed to more extreme psychological and physical stress during the selection program.
It could be that the experiences of stress and recovery are so high for everyone that they can no longer account for the variance in the outcome.
Finally, we estimated the predictive performance in practice.
We visualized the predictions of the model for different thresholds.
This showed that the model could sometimes predict dropout multiple weeks in advance with few false positives, depending on the chosen threshold.
In practice, this means that the calculated AUC scores may underestimate the predictive performance due to the way the data was modeled.
Note that choosing the right threshold is important as it determines the balance between the number of false positives and false negatives.
We showed multiple thresholds which could be used by practitioners to select the right balance.
Since the cost of missing a dropout is high, we recommend a higher threshold, which would result in more early warnings of dropout.
Future work could improve upon the current study in several ways.
First, the sample size was relatively small for machine learning models.
With a higher sample size, the variation in the cross-validation folds would most likely decrease.
Second, the frequency of measurements could be increased.
More frequent measurements could provide more opportunities for early intervention and support.
Third, this study could be complemented with qualitative measures to gain deeper insights into the personal experiences, coping strategies, and psychological states of recruits.
This could help refine the predictive models and identify potential areas for intervention.
Finally, intervention studies could be conducted based on the predictive models to design and test interventions aimed at reducing dropout rates.
These could include psychological resilience training, targeted physical conditioning programs, or personalized support strategies.
Taken together, our study builds on previous research that has highlighted the importance of psychological and physical factors in predicting dropout in special forces selection.
The longitudinal design of our study adds to this body of knowledge by demonstrating that dropout may be predicted during the selection program, offering more opportunities for early intervention and support.
Even more so, by picking the right threshold, individuals at risk of dropout could sometimes be identified weeks in advance.
This allows for targeted interventions and support, which could subsequently improve success rates and reduce the personal and human resource costs associated with high dropout rates.
|
https://github.com/Devansh0210/report_template_typst | https://raw.githubusercontent.com/Devansh0210/report_template_typst/main/template.typ | typst | #let title_page(
title: "",
authors: "",
body
) = {
align(center)[
#text(title, size: 30pt)
]
align(center)[
#authors
]
body
}
#let project(
title: "",
abstract: [],
authors: (),
logo: none,
body
) = {
// Set the document's basic properties.
set document(author: authors.map(a => a.name), title: title)
set text(font: "New Computer Modern", lang: "en")
show math.equation: set text(weight: 400)
set heading(numbering: "1.1")
set par(justify: true)
// Title page.
v(0.25fr)
align(center)[
#text(2em, weight: 700, title)
]
// Author information.
pad(
top: 0.7em,
grid(
columns: (1fr),
gutter: 1em,
..authors.map(author => align(center)[
*#author.name* \
#author.email \
#author.affiliation \
#author.postal \
#author.phone
]),
),
)
// Logo
if logo != none {
v(0.25fr)
align(center, image(logo, width: 26%))
v(0.50fr)
} else {
v(0.75fr)
}
pagebreak()
// Abstract page.
set page(numbering: "I", number-align: center)
v(1fr)
align(center)[
#heading(
outlined: false,
numbering: none,
text(0.85em, smallcaps[Abstract]),
)
]
abstract
v(1.618fr)
counter(page).update(1)
pagebreak()
// Table of contents.
outline(depth: 3, indent: true)
pagebreak()
// Main body.
set page(numbering: "1", number-align: center)
set par(first-line-indent: 20pt)
// set page(header: getHeader())
counter(page).update(1)
body
} |
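For reference, a hedged usage sketch of this template from a separate document that imports it — the file name `template.typ`, the author fields, and all concrete values below are illustrative assumptions, not part of the template itself:

```typst
// Hypothetical usage sketch; every concrete value here is an assumption.
#import "template.typ": project

#show: project.with(
  title: "Example Report",
  abstract: [A short summary of what the report covers.],
  authors: (
    (
      name: "Jane Doe",
      email: "jane.doe@example.org",
      affiliation: "Example University",
      postal: "123 Example Street, 0000 Example City",
      phone: "+00 000 0000",
    ),
  ),
  logo: none,
)

= Introduction
Body text starts here.
```

With `logo: none`, the title page simply leaves extra vertical space, matching the template's fallback branch.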
|
https://github.com/uhChainsaws/gradie | https://raw.githubusercontent.com/uhChainsaws/gradie/main/README.md | markdown | # gradie
> Gradient Climb + LAB7 for CALC03 Course
Лаба вариант №10
> by <NAME> \
> ISU\[368372\] \
> @uhChainsaws
This repository contains:
* `lab7.typ` -- `typst` source file for generating the report
* `lab7.py` -- `python` script implementing the gradient ascent method, plus a module for generating the video visualization with the `manim` framework
* `lab7.mp4` -- the generated video file demonstrating the method
The script can be run with the command:
```sh
manimgl ./lab7.py
```
Naturally, this requires `manim` to be installed on your machine.
🪒
|
|
https://github.com/angelacorte/presentation-2024-acsos-multiplatform-self-organization | https://raw.githubusercontent.com/angelacorte/presentation-2024-acsos-multiplatform-self-organization/master/acsos-2024-doctoral-symposium.typ | typst | Apache License 2.0 | #import "@preview/polylux:0.3.1": *
#import "@preview/fontawesome:0.1.0": *
#import themes.metropolis: *
#show: metropolis-theme.with(
aspect-ratio: "16-9",
// footer: [Optional Footnote]
)
#set text(font: "Inter", weight: "light", size: 20pt)
#show math.equation: set text(font: "Fira Math")
#set strong(delta: 200)
#set par(justify: true)
#show figure.caption: set text(size: 14pt)
#set raw(tab-size: 4)
#show raw.where(block: true): block.with(
fill: luma(240),
inset: 1em,
radius: 0.7em,
width: 100%,
)
#show table.cell: it => {
if it.body == [*x*] {
set text(red)
it
} else if it.body == [*\u{2713}*] {
set text(green)
it
} else if it.body == [*\u{007E}*] {
set text(yellow.darken(20%))
it
} else {
it
}
}
#let author = box[
#table(inset: 0.5em, stroke: none, columns: (1fr, auto), align: (left, right),
[*<NAME>*], [<EMAIL>],
[Supervisor: Prof. _<NAME>_],[GARR research fellow &\ (soon) PhD student \@UniBo]
)
#place(left, dy:3em)[
#figure(image("figures/qr.svg", width:10%))
]
#place(right, dy:3em)[
#figure(image("images/disi.svg", width:25%))
]
]
#title-slide(
title: "Multiplatform Self-Organizing Systems Through a Kotlin-MP Implementation of Aggregate Computing",
author: author,
)
#new-section-slide("Why?")
#let motivation = box[
#table(inset: 1.1em, stroke: none, columns: (1.5fr, 1.1fr), align: (left, left),
[
In smart-city scenarios, e.g. #underline[crowd management] and #underline[surveillance], drones and sensors can help monitor the environment.
#underline[*Challenges*]: heterogeneity, scalability, and fault tolerance.
Among the existing frameworks, *Aggregate Computing* (AC) can help to address these challenges.
], [
#figure(
image("figures/smartcity.png", width: 120%),
)
]
)
]
#slide(title: "Motivation")[
#motivation
]
#new-section-slide("What?")
#slide(title: "Aggregate Computing")[
#table(inset: 0.3em, stroke: none, columns: (1fr), align: (center),[
#figure(
image("figures/acDevices.png", width: 80%),
)
],[
AC defines the *collective behavior* of devices in self-organizing systems.
It manipulates distributed data structures called "*fields*".
])
]
#slide(title: "State of the Art")[
#figure(
image("figures/SOTA.svg", width: 100%),
)
]
#new-section-slide[What if we change the compiler?]
#slide(title: "Contribution")[
#figure(
image("figures/SOTAtable.svg", width: 100%),
)
A DSL-prototype in Kotlin Multiplatform called Collektive!
]
#new-section-slide("How?")
#slide(title: "Example")[
#figure(
image("figures/code.png", width: 82%),
)
]
#new-section-slide("Potential impact")
#slide(title: "Application")[
#table(inset: 1.2em, stroke: none, columns: (1.5fr, 1.5fr), align: (left, left),
[
#figure(
image("figures/channel.png", width: 79%),
)
], [
The approach is viable: the prototype has been used to reproduce some experiments.
]
)
]
#slide(title: "Future Works")[
#table(inset: 1.2em, stroke: none, columns: (1.5fr, 1fr), align: (left, left),
[
      Current AC systems run a *single complex algorithm*, but dynamic environments call for *flexible management*.
      Programs should be added, removed, or paused *at runtime*.
      Concepts similar to those of modern OSs lead to the idea of *collective operating systems*.
], [
#figure(
image("figures/devs.svg", width: 100%),
)
]
)
]
#let desc = box[
#table(inset: 1em, stroke: none, columns: (1fr, 1fr, 1fr), align: (left, left, left),
[[+, if, then -> nbr(v#sub[1])],\ [+, f -> nbr(v#sub[2])]],
[[+, if, then -> nbr(v#sub[1])],\ [+, h -> nbr(v#sub[3])]],
[[+, if, else -> nbr(v#sub[2])],\ [+, h -> nbr(v#sub[3])]]
)
]
#let alignment = box[
#table(inset: 0.5em, stroke: none, columns: (1fr), align: (center),
[#figure(
image("figures/alignment.svg", width: 88%),
)],
[\ #desc]
)
]
// #slide(title: "Alignment")[
// #alignment
// ]
// #slide(title: "Transparent Alignment")[
// #set table(
// fill: (x, y) =>
// if x == 0 or y == 0 {
// blue.lighten(80%)
// },
// align: right,
// )
// #table(
// columns: (6em, 4.5em, 3em, 2em, 5.4em, 5em, 6.5em),
// inset: 7pt,
// align: (left, center, center, center, center, center, center, center),
// table.header(
// [Language], [DSL Type], [JVM], [JS], [Wearable], [Reified Fields], [Transparent Alignment],
// ),
// [*Proto*],[external],[*x*],[*x*],[*x*],[*\u{2713}*],[*\u{2713}*],
// [*Protelis*],[external],[*\u{2713}*],[*x*],[*x*],[*\u{2713}*],[*\u{2713}*],
// [*ScaFi*],[internal],[*\u{2713}*],[*\u{2713}*],[*\u{007E}*],[*x*],[*\u{007E}*],
// [*FCPP*],[internal c++],[*x*],[*x*],[*\u{007E}*],[*\u{2713}*],[*x*],
// )
//   Devices can communicate with each other without an explicit notion of "sending messages".
//   // devices manage to communicate with each other without, in fact, an explicit notion of 'I send messages'
//   // this ability to communicate with other devices at arbitrary points of the code is called alignment
//   // ideally, whoever writes aggregate software should not have to manage alignment by hand, unless there are specific moments where one wants to go low-level and take control
//   // because I write transparently, manipulating fields, and assume that underneath this is handled automatically; if these were library calls they would be explicit and therefore less transparent, same for macros (FCPP)
//   // or it can be a completely hidden system, invisible, at runtime
//   // you don't have to explain to the compiler what to annotate
// ]
// #slide(title: "Example of non-transparent alignment")[
// ```cpp
// field<double> f = nbr(CALL, 4.2); // nbr with single value
// int x = old(CALL, 0, [&](int a){ // old with initial value and update function
// return a+1;
// });
// int y = nbr(CALL, 0, [&](field<int> a){ // nbr with initial value and update function
// return min_hood(CALL, a);
// });
// ```
// *FCPP* requires the use of macros to align, which results in a lack of transparency.
// ]
// #let programs = box[
// #table(inset: 0.5em, stroke: none, columns: (1fr, 1fr), align: (left, left),
// // [Program \#1],[Program \#2],
// [
// ```scala
// nbr(1) :: nbr(2) :: Nil //list
// ```
// ],
// [
// ```scala
// (nbr(1), nbr(2)) //tuple
// ```
// ]
// )
// ]
// #slide(title: "Complete or Partial Alignment")[
// Two structurally different programs should not align.
// #programs
// In *ScaFi*, however, they align.
//   // complete alignment is when you don't have two structurally different programs that align with each other (example from the thesis)
//   // [structurally different programs must not align]
//   // example with ScaFi and Protelis?
//   // //those programs are different and should not be aligned
//   // (in ScaFi they align)
//   // //they must not align because one is a concat operation and the other is the construction of a tuple, so the results of the programs are different types
//   // I count which nbr we have reached, so the structure abstracts from what is around it
//   // while in Collektive they are two different things
// ]
// #slide(title: "Overview")[
// #set table(
// fill: (x, y) =>
// if x == 0 or y == 0 {
// blue.lighten(80%)
// },
// align: right,
// )
// #table(
// columns: (6em, 4.5em, 3em, 2em, 5.4em, 5em, 6.5em, 6em),
// inset: 7pt,
// align: (left, center, center, center, center, center, center, center),
// table.header(
// [Language], [DSL Type], [JVM], [JS], [Wearable], [Reified Fields], [Transparent Alignment], [Automatic Complete Alignment],
// ),
// [*Proto*],[external],[*x*],[*x*],[*x*],[*\u{2713}*],[*\u{2713}*],[*\u{2713}*],
// [*Protelis*],[external],[*\u{2713}*],[*x*],[*x*],[*\u{2713}*],[*\u{2713}*],[*\u{2713}*],
// [*ScaFi*],[internal],[*\u{2713}*],[*\u{2713}*],[*\u{007E}*],[*x*],[*\u{007E}*],[*x*],
// [*FCPP*],[internal c++],[*x*],[*x*],[*\u{007E}*],[*\u{2713}*],[*x*],[*x*],
// )
// ]
// #new-section-slide("What is Collektive?")
// #let collektive = box[
// #table(inset: 1.5em, stroke: none, columns: (0.4fr, 1fr), align: (left, left),
// [
// #figure(
// image("figures/collektive-logo.svg", width: 70%),
// )
// ]
// , [
// *Collektive* is a minimal DSL for aggregate programming in _Kotlin Multiplatform_, to simplify the development of distributed systems, providing high-level abstractions for collective coordination and communication.
// ]
// )
// ]
// #slide(title: "Collektive")[
// #collektive
// // TODO
// ]
// #slide(title: "Objective")[
// #set table(
// fill: (x, y) =>
// if x == 0 or y == 0 {
// blue.lighten(80%)
// },
// align: right,
// )
// #let check = table.cell(
// fill: green.lighten(80%),
// )[*\u{2713}*]
// #let collektive = table.cell(
// fill: green.lighten(80%),
// )[*Collektive*]
// #let kt = table.cell(
// fill: green.lighten(80%),
// )[internal kt]
// #table(
// columns: (6em, 4.5em, 3em, 2em, 5.4em, 5em, 6.5em, 6em),
// inset: 7pt,
// align: (left, center, center, center, center, center, center, center),
// table.header(
// [Language], [DSL Type], [JVM], [JS], [Wearable], [Reified Fields], [Transparent Alignment], [Automatic Complete Alignment],
// ),
// [*Proto*],[external],[*x*],[*x*],[*x*],[*\u{2713}*],[*\u{2713}*],[*\u{2713}*],
// [*Protelis*],[external],[*\u{2713}*],[*x*],[*x*],[*\u{2713}*],[*\u{2713}*],[*\u{2713}*],
// [*ScaFi*],[internal],[*\u{2713}*],[*\u{2713}*],[*\u{007E}*],[*x*],[*\u{007E}*],[*x*],
// [*FCPP*],[internal c++],[*x*],[*x*],[*\u{007E}*],[*\u{2713}*],[*x*],[*x*],
// collektive, kt, check, check, check, check, check, check,
// //[*Collektive*],[*\u{2713}*],[*\u{2713}*],[*\u{2713}*],[*\u{2713}*],[*\u{2713}*],[*\u{2713}*],[internal kt]
// )
// ]
// #slide(title: "Alignment")[
//   In *Collektive* the alignment is managed by the *compiler plugin*, resulting in transparent, automatic, and complete alignment.
// // The control flow visits the functions, when it finds some that entails communication, the function gets pushed into a stack. Then the control flow exiting the function pops the name from the stack and generates a _path_.
//   // when we manage to make alignment transparent, we do not manage to also make it complete
//   // if I write software and the compiler compiles it, and alignment is complete (i.e. every function call must be aligned), the only option is to call a function to say "align"
//   // ScaFi has a dedicated call named 'aggregate'
//   // the only(?) way to get automatic complete alignment is through a Kotlin compiler plugin (working at compile time): when it encounters the condition that requires alignment, it generates a path to align on in an internal stack, and when there is a function call that entails communication it gets recorded
//   // so we get total and transparent alignment
// ]
// #slide(title: "Heterogeneity")[
//   Different devices can have different sets of sensors and actuators.
// Current approaches search for these capabilities at runtime.
// We are working to capture them in the type system using Kotlin's *context functions*.
// ```kotlin
// interface DistanceSensor { fun distances(): Field<Double> }
// context(DistanceSensor) // capability required to run the program
// fun Aggregate<Int>.gradient(source: Boolean): Double {
// val dist = distances() // the capability is available locally
// ...
// }
// ```
//   // aggregate programs assume that the same program runs on the various devices, otherwise it does not align
//   // at some point we might need to read the value of a sensor IF IT EXISTS; some programs should not be callable (compilable) if a certain capability is not available
//   // this is not trivial to express
//   // we can express this using Kotlin's context functions: a small code snippet with an if depending on whether I have the sensor, while Collektive has context() at the top
// ]
// #let incarnation = box[
// #table(inset: 0.1em, stroke: none, columns: (1fr, 1fr), align: (left, left),
// [
// #figure(
// image("figures/gradient-2.png", width: 75%),
// )
// ]
// , [
// Simulating the behaviour of a distributed system is crucial to verify the correctness of the system and to understand its behaviour in different scenarios.
// ]
// )
// ]
// #slide(title: "Integration with Alchemist")[
// #incarnation
// ]
// #new-section-slide("What's next?")
// #slide(title: "Ongoing")[
// - Usable for experiments;
// - Standard library next to be done;
//   - Currently working on an extension of an approach proposed at ACSOS.
// ]
|
https://github.com/0x1B05/english | https://raw.githubusercontent.com/0x1B05/english/main/kaoyan/content/answers.typ | typst | #import "../template.typ": *
= 真题答案速查
== 2005
#align(center)[
#three-line-table[
| 题目 | 答案 |
| - | - |
|1-10(完形) |CBACB ADADB|
|11-20(完形) |CADCD BCDAB|
|21-25(Text1) |CBACB|
|26-30(Text2) |CDADB|
|31-35(Text3) |ACDDA|
|36-40(Text4) |BDABC|
|41-45(选段) |ECGFB|
]
]
46. 电视是创造和传递感情的手段之一。也许在此之前,就加强不同的民族和国家之间的联系而言,电视还从来没有像在最近的欧洲事件中起过如此大的作用。
47. 多媒体集团在欧洲就像在其他地方一样越来越成功了。这些集团把相互关系密切的电视台、电台、报纸、杂志、出版社整合到了一起。
48. 仅这一点就表明在电视行业不是一个容易生存的领域。这个事实通过统计数字一目了然,统计表明在80家欧洲电视网中1989年出现亏损的不少于50%。
49. 创造一个尊重不同文化和传统的“欧洲统一体”绝非易事,需要战略性选择。正是这些文化和传统组成了连接欧洲大陆的纽带。
50. 在应付一个如此规模的挑战过程中,我们可以毫不夸张地说,“团结,我们就会站起来;分裂,我们就会倒下去。”
#pagebreak()
== 2006
#align(center)[
#three-line-table[
| 题目 | 答案 |
| - | - |
|1-10(完形) |ABDAD CBCAA|
|11-20(完形) |CBDCC ABCAD|
|21-25(Text1) |CACDB|
|26-30(Text2) |ABCDD|
|31-35(Text3) |CACDB|
|36-40(Text4) |DBDBA|
|41-45(选段) |CABFD|
]
]
46. 我将他定义为一个对道德问题进行苏格拉底式思考并将此作为自己人生首要责任和快乐的人。
47. 他的职责与法官相似,必须承担这样的责任:用尽可能明了的方式来展示自己做出决定的推理过程。
48. 我之所以把他(普通科学家)排除在外,是因为尽管他的成果可能会有助于解决道德问题,但他承担的任务只不过是研究这些问题的事实方面。
49. 但是,他的首要任务并不是考虑支配自己行为的道德规范,就如同不能指望商人专注于探索行业规范一样。
50. 他们可以教得很好,而且不仅仅是为了挣薪水,但他们大多数人却很少或没有对需要进行道德判断的、人的问题进行独立思考。
#pagebreak()
== 2007
#align(center)[
#three-line-table[
| 题目 | 答案 |
| - | - |
|1-10(完形) |BDACC DBABC|
|11-20(完形) |ADACB DCABD|
|21-25(Text1) |CBADC|
|26-30(Text2) |DCAAB|
|31-35(Text3) |CBDCB|
|36-40(Text4) |DABAD|
|41-45(选段) |FDBCE|
]
]
46. 长久以来,法律知识在这类学校里一直被视为律师们专有的,而不是一个受教育者的知识素养的必要组成部分。
47. 另一方面,这一学科把这些概念结合到日常生活中,这与新闻记者每天报道和评论新闻的做法是相同的。
48. 新闻记者应比普通公民更加透彻地了解法律,而这种看法是基于他们对新闻媒体业已确立的规约和特殊责任的理解。
49. 事实上,很难设想那些对加拿大宪法的基本要点缺乏清晰了解的新闻记者何以能胜任政治新闻的报道工作。
50. 尽管律师的见解和反应会提高报道的质量,但新闻记者最好凭借他们自己对重要性的理解自行做出判断。
#pagebreak()
== 2008
#align(center)[
#three-line-table[
| 题目 | 答案 |
| - | - |
|1-10(完形) |BDACC ABDBC|
|11-20(完形) |BD(A/B)CD DCABA|
|21-25(Text1) |ADCBD|
|26-30(Text2) |DCAAB|
|31-35(Text3) |ACBDC|
|36-40(Text4) |DBCAB|
|41-45(选段) |DBCAB|
]
]
46. 他认为,或许正是这种(语言表达上的)困难迫使他长时间专注地思考每个句子,从而使他能够在推理和自己的观察中发现错误,结果这种困难反而成就了他的优势。
47. 此外,他还断言自己进行长时间纯抽象思维的能力十分有限,由此他也认定自己在数学方面根本不可能有大的作为。
48. 另一方面,某些批评家指责他虽善观察,但缺乏推理能力,他认为这种说法是缺少根据的。
49. 他谦虚地补充道,或许他“和普通人比起来,更善于注意到那些容易被忽视的东西,并对其进行细致的观察”。
50. 达尔文确信,失去了这些爱好不只是少了乐趣,而且可能会有损于一个人的智力,更有可能有损于一个人的性情。
#pagebreak()
== 2009
#align(center)[
#three-line-table[
| 题目 | 答案 |
| - | - |
|1-10(完形) |BADBC ADCBD|
|11-20(完形) |DBCDA CBAAC|
|21-25(Text1) |CDADA|
|26-30(Text2) |ACDAB|
|31-35(Text3) |DBBCC|
|36-40(Text4) |BBDAC|
|41-45(选段) |CEABG|
]
]
46. 虽然我们可以说衡量任何社会机构价值的标准是其在丰富和完善人生经验方面所起的作用,但这种作用并不是其最初动机的组成部分。
47. 人们只是逐渐才注意到社会机构的这一副产品,而把这种作用视为机构运作的指导性因素的过程则更为缓慢。
48. 虽然与年轻人接触时我们会轻易地忽视自己的行为对他们的性情所产生的影响,却无法像与成年人打交道时那样容易忽视这些影响。
49. 由于我们对年轻人的主要职责是使他们能够融入共同的生活,因此我们不禁要考虑自己是否在形成让他们获得这种本领的能力。
50. 这就引导我们在一直思考的广义的教育过程中区分出一种更为正规的教育形式,即直接指导或学校教育。
#pagebreak()
== 2010
#align(center)[
#three-line-table[
| 题目 | 答案 |
| - | - |
|1-10(完形) |ABCBC BDACD|
|11-20(完形) |CAADB ADCBD|
|21-25(Text1) |BADAB|
|26-30(Text2) |CDCBA|
|31-35(Text3) |BDACC|
|36-40(Text4) |ADCBD|
|41-45(选段) |BFDGA|
]
]
46. 科学家们急忙介入,但提出的证据显然站不住脚,其大意是,如果鸟类不能控制昆虫的数量,昆虫便会吞噬我们人类。
47. 但是我们至少近乎承认,无论鸟类能否带给我们经济价值,它们自有生存下去的权利。
48. 有证据表明:这些生物杀死体弱者来保持种群的健康,或者说它们仅仅捕食“没有价值”的物种。曾经有段时间,生物学家或多或少滥用了这一证据。
49. 在林业生态更为发达的欧洲,没有商业价值的树种被合理地看成是当地森林群落的成员,并得到相应的保护。
50. 这种保护体系往往忽视陆地群落中诸多缺乏商业价值但对其健康运作至关重要的物种,而最终导致它们的灭绝。
#pagebreak()
== 2011
#align(center)[
#three-line-table[
| 题目 | 答案 |
| - | - |
|1-10(完形) |CDBBA BADCA|
|11-20(完形) |BCDCB DADAC|
|21-25(Text1) |CBDBC|
|26-30(Text2) |BDCAC|
|31-35(Text3) |DCBAA|
|36-40(Text4) |CDADB|
|41-45(选段) |BDACF|
]
]
46. 爱伦的贡献在于,他拿出“我们并不是机器人,所以能控制自己思想”这一公认的假设,并揭示了其谬误所在。
47. 尽管我们或许可以仅凭意识来维系“控制”这种错觉,现实中我们还是要不断面对一个问题:“我为什么不能让自己做这个或完成那个?”
48. 这似乎是在为忽视贫困者的行为作辩护,为剥削、为社会上层人群的优越及社会底层人群的卑微找理由。
49. 环境仿佛是为了激发我们的最大潜能而设,如果我们觉得自己遭受了“不公”,就不太可能有意识地去努力摆脱自己的处境。
50. 其正面意义在于,了解了一切都取决于我们自己,即有了诸多可能,此前我们是谙熟各种局限的专家,现在我们成了驾驭各种可能性的权威。
#pagebreak()
== 2012
#align(center)[
#three-line-table[
| 题目 | 答案 |
| - | - |
|1-10(完形) |BABDC BDBAB|
|11-20(完形) |ACCDA CACDD|
|21-25(Text1) |DBACD|
|26-30(Text2) |CDADA|
|31-35(Text3) |ABBDC|
|36-40(Text4) |CDBCA|
|41-45(选段) |CDAFG|
]
]
46. 在物理学领域,一种做法把这种寻求大同理论的冲动推向极端,试图寻找包含一切的理论——一个涵括我们所看到的一切的生成性公式。
47. 这里,达尔文学说似乎做出了证明,因为如果人类有着共同的起源,那么似乎就有理由认为文化的多样性也可以追溯到更为有限的起源。
48. 从共有特征中滤出独有特征,这使我们得以理解复杂的文化行为是如何产生的,并从进化成认知角度理解什么引导了它的走向。
49. 第二次努力——由乔舒亚·格林堡做出——采用更为经验主义的方法来研究语言的普遍性,确定了多种语言(尤其在语法词序方面)的共有特征,这些特征被认为是代表了由认知限制产生的倾向。
50. 乔姆斯基的语法应该显示出语言变化的模式,这些模式并不受语言谱系或贯穿谱系路径的影响,而格林堡式的普遍性则预言了特定的语法词序关系类型之间所存在的紧密互依性。
#pagebreak()
== 2013
#align(center)[
#three-line-table[
| 题目 | 答案 |
| - | - |
|1-10(完形) |ABCDB DAADC|
|11-20(完形) |ACBCB CDDBA|
|21-25(Text1) |ABDCC|
|26-30(Text2) |DACBD|
|31-35(Text3) |BADBA|
|36-40(Text4) |BCACD|
|41-45(选段) |EFDGA|
]
]
46. 然而,只要看看无家可归者创造的花园的照片,你就会意识到尽管样式各异,但这些花园除了表达(人类)装饰和创造的欲望之外,更体现了人类其他根本的强烈愿望。
47. 安宁的圣地(体现的)是人类特有的需要,无论怎样疏于雕琢,它仍与遮风挡雨之所不同,后者(反映的)是动物特有的需要。
48. 这种无家可归者的花园实质上是无定所的花园,它们把“形式”引入城市环境,而城市环境中原本要么没有这种“形式”,要么并没有把它当成“形式”看待。
49. 我们大多数人通常把陷入精神颓丧归咎于某些心理疾病,直到有一天置身花园,才顿觉压抑感神奇地消失了。
50. 虽然有“扩大词义外延”的意味,但正是这种对大自然或隐晦或明晰的参照让用“花园”一词来描述这些人造组合有了充分的根据。
#pagebreak()
== 2014
#align(center)[
#three-line-table[
| 题目 | 答案 |
| - | - |
|1-10(完形) |DABAC DCDBC|
|11-20(完形) |AABDC BABCD|
|21-25(Text1) |ACDAA|
|26-30(Text2) |DDABB|
|31-35(Text3) |BCCBD|
|36-40(Text4) |ADCCB|
|41-45(选段) |CFGDB|
]
]
46. 这也解释了为什么当我们试图用语言去描述音乐的时候,充其量只能说清楚自己的感受,而无法抓住音乐本身。
47. 总而言之,他是一个思想不受束缚的人,而且是一个富有勇气的人。我认为勇气是理解——更是演奏——其作品不可或缺的品格。
48. 贝多芬惯以超凡的力度加大音响,随后突然进入一个意想不到的柔和乐段,这种手法在前人的作品中极为罕见。
49. 尤具深意的是他的自由观,在他看来自由关系到个人的权利和责任:他主张思想和个人表达的自由。
50. 苦难是不可避免的,但与之抗争的勇气正是生活的意义所在,对贝多芬的诸多作品,我们都可做此解读。
#pagebreak()
== 2015
#align(center)[
#three-line-table[
| 题目 | 答案 |
| - | - |
|1-10(完形) |DBCAC ADABD|
|11-20(完形) |BABDC CBDCA|
|21-25(Text1) |DCBAC|
|26-30(Text2) |CADDD|
|31-35(Text3) |BCDCA|
|36-40(Text4) |ABACB|
|41-45(选段) |CEGBA|
]
]
46. 在各种强大动机的推动下,这场迁移从荒野中造就了一个民族,并循其本质塑造了一片全新大陆的特点,决定了它的命运。
47. 合众国是两股主要力量作用的产物——一是具有不同思想、风俗和民族特点的欧洲各民族的迁入,二是一个新国家因改变了这些特性而产生的影响。
48. 但是美洲特有地理条件的作用,不同族群间的相互影响,加上在一片荒蛮新大陆上维持旧大陆方式的巨大困难,所有这一切引起了意义深远的变化。
49. 在十五、十六世纪的北美大陆探险过去一百多年后,首批满载移民驶向今天合众国这片疆土的船只横穿过了大西洋。
50. 郁郁葱葱、树种繁多的原始森林是一座从缅因一直向南绵延到佐治亚的天然宝库。
#pagebreak()
== 2016
#align(center)[
#three-line-table[
| 题目 | 答案 |
| - | - |
|1-10(完形) |ADCAC BDBDA|
|11-20(完形) |DCBBC ABDBA|
|21-25(Text1) |ADDCD|
|26-30(Text2) |ADCAC|
|31-35(Text3) |BCDAB|
|36-40(Text4) |ABBCB|
|41-45(选段) |CGDEG|
]
]
46. 我们的心理健康不是学来的,它是我们生来就有的,正如同我们的身体知道如何治愈伤口、修复断骨一样。
47. 心理健康并没有真的离开,宛如云朵后面的太阳,它虽然可能暂离我们的视线,但完全能在转瞬之间重现。
48. 健康的心理让我们在他人遇到困难时心怀同情,遭受苦难时心怀善意,不论它们是谁(我们)都会心怀无条件的爱。
49. 尽管心理健康是我们生活的万能灵药,但它又再寻常不过,因为你会发现,它一直在那里引领你完成所有艰难的抉择。
50. 你终会明白,懂得心理健康无时不在并懂得信赖它,这能让我们慢下来活在每一刻,快乐地生活着。
#pagebreak()
== 2017
#align(center)[
#three-line-table[
| 题目 | 答案 |
| - | - |
|1-10(完形) |BADCB DBCAD|
|11-20(完形) |ABDBC ACACD|
|21-25(Text1) |ACDDC|
|26-30(Text2) |BABAD|
|31-35(Text3) |DBDCA|
|36-40(Text4) |CCABD|
|41-45(选段) |FEACG|
]
]
46. 尽管讲英语的人数在进一步增加,但有迹象显示这一语言的全球统治地位在可预见的未来可能会衰落。
47. 因此,他的分析会让一些人再也无法自满,这些人或许认为英语的全球地位十分稳固,从而英国的年轻人无需额外的语言能力。
48. 许多国家都在把英语纳入小学课程,但英国的大、中、小学生却似乎没有得到更多的鼓励去熟练掌握其他语言。
49. 大卫·格拉多尔所指出的这些变化都对向其他国家的人提供英语教学的英国个人和组织机构乃至更广泛的教育产业领域构成明确的、重大的挑战。
50. 它为旨在促进英语学习和使用的所有组织机构提供了一个基础,该基础用于规划应对由一种或许会很不一样的运作环境所带来的各种可能。
#pagebreak()
== 2018
#align(center)[
#three-line-table[
| 题目 | 答案 |
| - | - |
|1-10(完形) |CADBD BCDBA|
|11-20(完形) |BBACD ACBAC|
|21-25(Text1) |DCADB|
|26-30(Text2) |DABCA|
|31-35(Text3) |BCDDB|
|36-40(Text4) |BAACD|
|41-45(选段) |EGABD|
]
]
46. 他出生时,欧洲正见证着宗教剧的消亡,以及在古典悲剧和喜剧启发之下新型戏剧形式的诞生。
47. 凡是文法学校的学童就不会不知道,戏剧这种文学形式曾给希腊和罗马带来辉煌,或也将为英国带来荣耀。
48. 但是专业剧团在他们的固定剧场里蓬勃发展,而大学里有文学抱负的人们迅速转向这些剧场,视其为一种谋生手段。
49. 一种本土的文学戏剧已然形成,它与公共剧场的联盟已然建立,它的伟大传统中至少有一部分也已然开启。
50. 要明白当时的戏剧活动是何等繁荣,我们还必须记住,大量剧作都已散失,恐怕没有哪位知名作家的全部作品幸存至今。
#pagebreak()
== 2019
#align(center)[
#three-line-table[
| 题目 | 答案 |
| - | - |
|1-10(完形) |CCBDA BDCAD|
|11-20(完形) |ABDCB DAABC|
|21-25(Text1) |ADBCB|
|26-30(Text2) |DAACB|
|31-35(Text3) |CDBAC|
|36-40(Text4) |CDCBA|
|41-45(选段) |EDGBA|
]
]
46. 医学期刊中充斥着这类无稽之谈,这些东西一经广播和非专业报刊传播,就会引起健康方面的恐慌和对某些饮食的短暂追捧。
47. 今天,任何一个科研岗位的申请者需要发表文章的数量比短短十年前申报同一岗位时整整翻了一番。
48. 为遏制这一倾向已经采取了若干措施,比如在评定岗位申请者的论文时,尝试引入一项兼顾数量和质量的评估指标。
49. 要不是因为科研人员轻易就能在将来论文里自我引用,或找同行引用以回报自己提供的类似好处,这项措施本该是合理的。
50. 如果我们真想保证我们的科研既有意义又可复制,就必须确保我们的体制能激励这样的科研。
#pagebreak()
== 2020
#align(center)[
#three-line-table[
| 题目 | 答案 |
| - | - |
|1-10(完形) |CABDA BDADC|
|11-20(完形) |CABDC BABCD|
|21-25(Text1) |CBDBC|
|26-30(Text2) |DACAD|
|31-35(Text3) |ACDCB|
|36-40(Text4) |CABCB|
|41-45(选段) |CEGAD|
]
]
46. 随着教会的教义和思维方式因文艺复兴而黯然失色,中世纪与近现代社会之间的鸿沟得以弥合,从而出现了新的未被探索的知识领域。
47. 在他们揭示每一个科学真理之前,当时的许多思想家一直沿袭着旧时的思维方式,包括地球是我们宇宙中心的地心说。
48. 尽管教会试图镇压新一代逻辑学家和理性主义者,但有关宇宙如何运转的阐释却层出不穷,其速度之快不容人们再忽视。
49. 当许多人肩负起试图将理性推理和科学信条融入社会的责任时,文艺复兴时代就结束了,而一个崭新的时代——理性时代便开始了。
50. 这种寻求知识和了解已知的行为被拉丁短语“sapere aude”,亦即“敢于求知”,描画得淋漓尽致。
#pagebreak()
== 2021
#align(center)[
#three-line-table[
| 题目 | 答案 |
| - | - |
|1-10(完形) |CDABA ACBAC|
|11-20(完形) |DBCDD BDAAC|
|21-25(Text1) |CBCDD|
|26-30(Text2) |BDCCA|
|31-35(Text3) |ABDAA|
|36-40(Text4) |CBBDA|
|41-45(选段) |GCEBD|
]
]
46. 战争刚刚结束时,这些社会的适龄群体入学率为 3%—5%,这是战前几十年基本恒定未变的水平。
47. 这些社会中,人们对接受高等教育的需求越来越高,甚至蔓延到了战前从未想过上大学的群体和社会阶层。
48. 在许多西欧国家,高校学生人数在 20 世纪 60 年代每 5 年便翻一番,到 70 年代中期,这一人数在 7、8 或 10 年内又翻了一番。
49. 且当新进教员主要是刚从研究生院毕业的青年男女时,他们会在很大程度上决定学术生活的规范。
50. 高增长率增加了学术创新的可能性;同时也弱化了在稳定期或缓慢增长期,教师和学生获准加入某学者团体所需的形式和流程。
#pagebreak()
== 2022
#align(center)[
#three-line-table[
| 题目 | 答案 |
| - | - |
    |1-10(完形) |ACDCD BCBAD|
    |11-20(完形) |CDACB DAADB|
    |21-25(Text1) |ACDDB|
    |26-30(Text2) |CBCDA|
    |31-35(Text3) |BAABC|
    |36-40(Text4) |DADBC|
    |41-45(选段) |FCADG|
]
]
46. 即使是对许多好好研读过这段时期历史的人来说,他们也不知道这也是一场创造密码的人和破解密码的人之间的较量。
47. 它(书中)列出了许多从西班牙的法国军队那里缴获的密码文件,这些文件的秘密已经被英军指挥部的一位军官乔治•斯科维尔揭露了出来。
48. 他(阿曼)无法精细分析这位鲜为人知的军官是否对那场国家间的大规模战争起到作用,也无法确切地告诉我们更多有关他本人的任何事情。
49. 在拿破仑战争期间,可能有很多间谍和情报官员,但通常很难找到他们实际提供或研究过的资料。
50. 正如密码破译在争夺西班牙的斗争中有其更广泛的意义,他在晋升途径上的尝试也提供了很多关于英国社会的信息。
#pagebreak()
== 2023
#align(center)[
#three-line-table[
| 题目 | 答案 |
| - | - |
|1-10(完形) |CABCC ABBAD|
|11-20(完形) |DCCBA BDADA|
|21-25(Text1) |CBACD|
|26-30(Text2) |ADBCD|
|31-35(Text3) |ACBAD|
|36-40(Text4) |BCABD|
|41-45(选段) |BFDCG|
]
]
46. 就顾客的喜好、最爱的名人及潮流服饰方面而言,人工智能也可用于辨别顾客的这些生活选择,由此在通过社交媒体推送的营销信息中提供专属的内容。
47. 有的人认为人工智能会降低创造性、减少工作岗位,从而对营销者的形象造成负面影响,但他们也知道,人工智能是降低成本和催生新信息的一种途径。
48. 用于模拟人类互动行为的算法正在催生许多这方面的担忧,尤其是在大家无法完全确定运用人工智能和顾客互动会产生何种后果时。
49. 若顾客不愿分享数据,人工智能便会缺失关键信息,无法有效运作,也不能运用机器学习来优化其营销内容和交流。
50. 敏锐捕捉目标客户需求,对营销信息进行非侵扰式的传播,这对数字营销者而言是一项重大挑战。
#pagebreak()
== 2024
#align(center)[
#three-line-table[
| 题目 | 答案 |
| - | - |
|1-10(完形) |DCBAB CADAD|
|11-20(完形) |ACCDC BDCBA|
|21-25(Text1) |DDABA|
|26-30(Text2) |ABDCB|
|31-35(Text3) |BCCDB|
|36-40(Text4) |BCCDB|
|41-45(选段) |ECFGB|
]
]
46. 它们有时会跋涉60多英里去寻找食物或者水,并且非常善于找出其它大象在哪里,即使这些大象在视线之外。
47. 研究人员确信,关于它们所需的所有资源,大象总是准确地知道自己在哪里,因此它们可以走捷径,也可以沿着熟悉的路线走。
48. 一种可能性是,它们只是用眼睛来试验发现的植物,但这可能会浪费大量的时间和精力,尤其因为它们的视力实际上不是很好。
49. 植物产生的挥发性化学物质可以散发很远的距离,而且这些物质非常有特点:每一种植物或树木都有自己独特的气味特征。
50. 试验表明,大象可以很好地利用嗅觉来识别适合食用的树木片区,其次评估每个片区内树木的品质。
|
|
https://github.com/jgm/typst-hs | https://raw.githubusercontent.com/jgm/typst-hs/main/test/typ/layout/list-attach-01.typ | typst | Other | // Test that attached list isn't affected by block spacing.
#show list: set block(above: 100pt)
Hello
- A
World
- B
|
https://github.com/typst/packages | https://raw.githubusercontent.com/typst/packages/main/packages/preview/circuiteria/0.1.0/manual.typ | typst | Apache License 2.0 | #import "@preview/tidy:0.3.0"
#import "@preview/cetz:0.2.2": draw, canvas
#import "src/lib.typ"
#import "doc/examples.typ"
#import "src/circuit.typ": circuit
#import "src/element.typ"
#import "src/gates.typ"
#import "src/util.typ"
#import "src/wire.typ"
#set heading(numbering: (..num) => if num.pos().len() < 4 {
numbering("1.1", ..num)
})
#{
outline(indent: true, depth: 3)
}
#show link: set text(blue)
#show heading.where(level: 3): it => context {
let cnt = counter(heading)
let i = cnt.get().at(it.depth) - 1
let color = util.colors.values().at(i)
block(width: 100%)[
#grid(
columns: (auto, 1fr),
column-gutter: 1em,
align: horizon,
it,
{
place(horizon)[
#line(
start: (0%, 0%),
end: (100%, 0%),
stroke: color + 1pt
)
]
place(horizon)[
#circle(radius: 3pt, stroke: none, fill: color)
]
place(horizon+right)[
#circle(radius: 3pt, stroke: none, fill: color)
]
}
)
]
}
#set page(numbering: "1/1", header: align(right)[circuiteria #sym.dash.em v#lib.version])
#set page(
header: locate(loc => {
let txt = [circuiteria #sym.dash.em v#lib.version]
let cnt = counter(heading)
let cnt-val = cnt.get()
if cnt-val.len() < 2 {
align(left, txt)
return
}
let i = cnt-val.at(1) - 1
grid(
columns: (auto, 1fr),
column-gutter: 1em,
align: horizon,
txt,
place(horizon + left)[
#rect(width: 100%, height: .5em, radius: .25em, stroke: none, fill: util.colors.values().at(i))
]
)
}),
footer: locate(loc => {
let cnt = counter(heading)
let cnt-val = cnt.get()
if cnt-val.len() < 2 { return }
let i = cnt-val.at(1) - 1
grid(
columns: (1fr, auto),
column-gutter: 1em,
align: horizon,
place(horizon + left)[
#rect(width: 100%, height: .5em, radius: .25em, stroke: none, fill: util.colors.values().at(i))
],
counter(page).display("1/1", both: true)
)
})
)
#let doc-ref(target, full: false, var: false) = {
let (module, func) = target.split(".")
let label-name = module + func
let display-name = func
if full {
display-name = target
}
if not var {
label-name += "()"
display-name += "()"
}
link(label(label-name))[#display-name]
}
= Introduction
This package provides a way to make beautiful block circuit diagrams using the CeTZ package.
= Usage
Simply import #link("src/lib.typ") and call the `circuit` function:
#pad(left: 1em)[```typ
#import "src/lib.typ"
#lib.circuit({
import lib: *
...
})
```]
= Reference
#let circuit-docs = tidy.parse-module(
read("src/circuit.typ"),
name: "circuit",
require-all-parameters: true
)
#tidy.show-module(circuit-docs)
#pagebreak()
#let util-docs = tidy.parse-module(
read("src/util.typ"),
name: "util",
require-all-parameters: true,
scope: (
util: util,
canvas: canvas,
draw: draw
)
)
#tidy.show-module(util-docs)
#pagebreak()
#let wire-docs = tidy.parse-module(
read("src/wire.typ"),
name: "wire",
require-all-parameters: true,
scope: (
wire: wire,
circuit: circuit,
draw: draw,
examples: examples,
doc-ref: doc-ref
)
)
#tidy.show-module(wire-docs)
#pagebreak()
#let element-docs = tidy.parse-module(
read("src/elements/element.typ") + "\n" +
read("src/elements/alu.typ") + "\n" +
read("src/elements/block.typ") + "\n" +
read("src/elements/extender.typ") + "\n" +
read("src/elements/multiplexer.typ") + "\n" +
read("src/elements/group.typ"),
name: "element",
scope: (
element: element,
circuit: circuit,
draw: draw,
wire: wire,
tidy: tidy,
examples: examples,
doc-ref: doc-ref
)
)
#tidy.show-module(element-docs, sort-functions: false)
#pagebreak()
#let gates-docs = tidy.parse-module(
read("src/elements/logic/gate.typ") + "\n" +
read("src/elements/logic/and.typ") + "\n" +
read("src/elements/logic/buf.typ") + "\n" +
read("src/elements/logic/or.typ") + "\n" +
read("src/elements/logic/xor.typ"),
name: "gates",
scope: (
element: element,
circuit: circuit,
gates: gates,
draw: draw,
wire: wire,
tidy: tidy,
examples: examples,
doc-ref: doc-ref
)
)
#tidy.show-module(gates-docs, sort-functions: false) |
https://github.com/SidneyLYZhang/learnTypst | https://raw.githubusercontent.com/SidneyLYZhang/learnTypst/main/Documentation/Sources/003_advanced-styling.typ | typst | #set text(font:("Consolas", "Source Han Sans SC"))
#set text(lang: "zh")
#show emph: text.with(font: ("Linux Libertine","STKaiti"))
#show link: text.with(fill: color.blue)
= 高级排版样式
在本教程的前两章中,你已学会了如何在 Typst 中撰写文档并更改其格式。你在过去两章中编写的报告获得了满分的好成绩,你的上司甚至想以此为基础,撰写一篇会议论文!当然,这份报告必须符合会议的风格指南。让我们来看看如何达成这一目标。
在开始之前,让我们创建一个团队,邀请你的上司加入,并将他们添加到团队中。你可以通过点击编辑器左上角的四圈图标,返回应用程序的仪表板来完成这一步骤。接着,在左侧工具栏选择加号图标,创建一个新的团队。最后,点击新团队,通过点击团队名称旁的“管理团队”进入团队设置。现在,你就可以通过电子邮件邀请你的上司加入团队了。
#align(center, image("images/3-advanced-team-settings.png", width: 95%))
接下来,将你的项目移入团队:打开项目,通过左侧工具栏的齿轮图标进入设置,然后在所有者下拉菜单中选择你的新团队。记得保存你的更改!
现在,你的上司也可以编辑这个项目,你们两人可以实时看到对方所做的改动。你可以加入我们的#link("https://discord.gg/2uDybryKPe")[Discord服务器],找到其他用户,尝试与他们组建团队协作!
== 会议论文的规范指南
会议的排版指南已在会议网站上公布。让我们来看看这些要求:
- 字体应为`11pt`的衬线字体
- 标题应为`17pt`并加粗
- 论文包含一个单列的摘要和两列的主体文本
- 摘要应居中
- 主体文本应对齐
- 一级标题应为`13pt`、居中,并使用小型大写字母形式
- 二级标题使用行内格式,斜体,并与正文大小相同
- 最后,页面应为美国信纸大小,页码位于页脚的中间,每页的右上角应包含论文标题
我们已经知道如何完成其中的许多事项,但对于某些内容,我们需要学习一些新技巧。
== 制定正确的规则集
我们先写一些文档的设定规则。
#box(height: 200pt,
columns(2, gutter: 11pt)[
```typst #set page(
paper: "us-letter",
header: align(right)[
A fluid dynamic model for
glacier flow
],
numbering: "1",
)
#set par(justify: true)
#set text(
font: "Linux Libertine",
size: 11pt,
)
#lorem(600)```
#align(center, image("images/3-right-yulan-1.png"))
])
你已经很熟悉这里的大部分内容了。我们将文本大小设置为`11pt`,字体设置为`Linux Libertine`。我们还开启了段落两端对齐,并将纸张大小设置为美国信纸。
`header`参数是新的:通过它,我们可以提供内容来填充每页的上边距。在页眉中,我们按照会议风格指南的要求指定了论文标题。我们使用`align`函数将文本右对齐。
最后但同样重要的是编号参数(`numbering`)。在这里,我们可以提供一个#link("https://typst.app/docs/reference/model/numbering/")[编号模式],定义如何对页面进行编号。通过将其设置为#text(fill:orange)[`"1"`],Typst只显示纯文本页码;将其设置为#text(fill:orange)[`"(1/1)"`],则会显示用括号括起来的当前页码和总页数。另外,我们甚至可以提供一个完全自定义的函数来按照我们的喜好格式化页码。
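上文提到 `numbering` 也可以接受一个自定义函数。下面是一个小示例(纯属示意,并非会议指南的要求;页码会作为参数传入,这里用可变参数接收以保持通用):

```typst
#set page(
  // 假设性示例:以"第 n 页"的形式显示页码
  numbering: (..nums) => [第 #nums.pos().first() 页],
)
```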
== 创建标题和摘要
现在让我们来写标题和摘要。从标题开始,我们将其居中对齐,并通过用`*星号*`把它括起来,来增加其字重。
#box(height: 90pt,
columns(2, gutter: 11pt)[
```typst #align(center, text(17pt)[
*A fluid dynamic model
for glacier flow*
])```
#align(center, image("images/3-right-yulan-2.png"))
])
这看起来不错。我们使用了`text`函数来局部覆盖先前设置的文本规则,将该函数内容参数中的字体大小增加到了`17pt`。让我们也添加作者列表:由于我们是与导师一起撰写这篇论文,我们将添加自己的名字和导师的名字。
#box(height: 160pt,
columns(2, gutter: 11pt)[
```typst #grid(
columns: (1fr, 1fr),
align(center)[
Therese Tungsten \
Artos Institute \
#link("mailto:<EMAIL>")
],
align(center)[
Dr. <NAME> \
Artos Institute \
#link("mailto:<EMAIL>")
])```
#align(center, image("images/3-right-yulan-3.png"))
])
两个作者块并排布局。我们使用#link("https://typst.app/docs/reference/layout/grid/")[`grid`]函数来创建这种布局。通过grid网格,我们可以精确控制每列的大小以及哪些内容放入哪个单元格。`columns`参数接受一个#link("https://typst.app/docs/reference/layout/relative/")[相对长度(_relative lengths_)]或#link("https://typst.app/docs/reference/layout/fraction/")[分数(_fractions_)]的数组。在这种情况下,我们传入了两个相等的分数大小,告诉它将可用空间分成两个相等的列。然后我们向`grid`函数传递了两个内容参数。第一个包含我们自己的详细信息,第二个包含我们导师的信息。我们再次使用`align`函数来使内容在列中居中。网格接受任意数量的内容参数来指定单元格。行会自动添加,但也可以通过`rows`参数手动设置大小。
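论文本身不需要手动设置行高,但作为对上一段 `rows` 参数的一个小补充,下面这个与论文无关的示例同时指定了列宽、行高和间距:

```typst
#grid(
  columns: (1fr, 1fr), // 两列平分可用宽度
  rows: (auto, 2em),   // 第一行自动高度,第二行固定为 2em
  gutter: 8pt,         // 单元格之间的间距
  [左上], [右上],
  [左下], [右下],
)
```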
现在,让我们添加摘要。请记住,会议要求摘要居中,且不采用两端对齐。
#box(height: 100pt,
columns(2, gutter: 11pt)[
```typst ...
#align(center)[
#set par(justify: false)
*Abstract* \
#lorem(80)
]```
#align(center, image("images/3-right-yulan-4.png"))
])
很好!需要注意的是,我们在`align`的内容参数中使用了一个设置规则,以关闭摘要的对齐。这不会影响文档的其余部分,即使它是在第一个设置规则之后指定的,因为内容块只在限定的范围内被样式化处理。在内容块中设置的任何内容都只会影响该块内的内容。
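用一个与论文无关的小示例可以直观地看到这种作用域限制:内容块里的设置规则不会影响块外的文字:

```typst
这一行是默认样式。
#[
  #set text(style: "italic")
  只有这个内容块里的文字是斜体。
]
这一行又回到了默认样式。
```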
另一个可能的调整是将论文标题保存在变量中,这样就不必在标题和页眉两处各输入一次。我们可以使用`let`关键字来完成此操作:
#box(height: 300pt,
columns(2, gutter: 11pt)[
```typst #let title = [
A fluid dynamic model
for glacier flow
]
...
#set page(
header: align(
right + horizon,
title
),
...
)
#align(center, text(17pt)[
*#title*
])
...
]```
#align(center, image("images/3-right-yulan-5.png"))
])
在我们将内容绑定到`title`变量之后,我们可以在函数中使用它,也可以在标记中使用它(前缀为`#`,就像函数一样)。这样,如果我们决定使用另一个标题,我们可以很容易地在一个地方进行更改。
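下面用一个假设性的小例子演示这两种用法(变量内容只是示意):

```typst
#let title = [冰川流动模型]
在标记中直接引用:#title \
作为函数参数传递:#emph(title)
```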
== 增加列和标题
有点糟糕的是,上面的论文看起来像一整块密不透风的铅板。为了解决这个问题,让我们添加一些标题,并将我们的论文改为两列布局。#link("https://typst.app/docs/reference/layout/columns/")[`columns`]函数接受一个数字和内容,并将内容布局到指定的列数中。由于我们希望摘要之后的所有内容都在两列中,因此我们需要将`columns`函数应用于整个文档。
我们不需要将整个文档包装在一个巨大的函数调用中,而是可以使用 _“everything”_ `show`规则。要编写这样的`show`规则,请在`show`关键字后面直接放置一个冒号,然后提供一个函数。该函数会将文档的其余部分作为参数接收。我们在这里将这个参数命名为`rest`,但你可以自由选择任何名称。然后,该函数可以对此内容执行任何操作。在我们当前这个例子里,它将其传递给`columns`函数。
#box(height: 300pt,
columns(2, gutter: 11pt)[
```typst ...
#show: rest => columns(2, rest)
= Introduction
#lorem(300)
= Related Work
#lorem(200)```
#align(center, image("images/3-right-yulan-6.png"))
])
现在只剩下一件事要做了:为我们的标题设置样式。我们需要将它们居中对齐,并使用小型大写字母形式。由于`heading`函数没有提供设置这些选项的方法,我们需要编写自己的_heading_ `show`规则。
#box(height: 300pt,
columns(2, gutter: 11pt)[
```typst #show heading: it => [
#set align(center)
#set text(12pt, weight: "regular")
#block(smallcaps(it.body))
]
...```
#align(center, image("images/3-right-yulan-7.png"))
])
这看起来很棒!我们使用了一个适用于所有标题的`show`规则。我们为其提供了一个函数,该函数将标题元素作为参数接收。该参数可以用作内容,但它还有一些字段,如`title`、`numbers`和`level`,我们可以从中组合出自定义的外观。在这里,我们将标题居中对齐,并将字重设置为"regular"(正常),因为默认情况下标题是粗体的;再使用#link("https://typst.app/docs/reference/text/smallcaps/")[`smallcaps`]函数将标题文本呈现为小型大写字母。
最后一个问题是所有标题看起来都一样了。“Motivation”和“Problem Statement”子标题应该以斜体设定标题,但现在它们看起来与上级标题无法区分。我们可以通过在我们设定规则使用`where`选择器来解决这个问题:这是一个我们可以在标题(和其他元素)上调用的方法,它允许我们按照它们的级别过滤它们。我们可以使用它来区分各级标题:
#box(height: 300pt,
columns(2, gutter: 11pt)[
```typst #show heading.where(
level: 1
): it => block(width: 100%)[
#set align(center)
#set text(12pt, weight: "regular")
#smallcaps(it.body)
]
#show heading.where(
level: 2
): it => text(
size: 11pt,
weight: "regular",
style: "italic",
it.body + [.],
)```
#align(center, image("images/3-right-yulan-8.png"))
])
这看起来很棒!我们写了两个`show`规则,每个规则都选择性地应用于第一级和第二级标题。我们使用`where`选择器按照它们的级别过滤标题。然后,我们将子部分标题呈现为*行内标题*(_run-ins_)。我们还自动在下级标题末尾添加一个句号。
让我们回顾一下会议的样式指南:
- 字体应该是`11pt`的衬线字体✓
- 标题应该是`17pt`并加粗✓
- 论文包含一个单列的摘要和双列的主要文本✓
- 摘要应居中✓
- 主要文本应对齐✓
- 第一级部分标题应居中,呈现为小型大写字母并为`13pt`✓
- 第二级标题是行内标题,斜体并与正文大小相同✓
- 最后,页面应为美国信纸大小,页码居中,每页的右上角应包含论文的标题✓
我们现在符合所有这些样式,可以将论文提交给会议!完成的论文如下所示:
#align(center, image("images/3-advanced-paper.png", width: 95%))
== 回顾 Review
你现在已经学会了如何创建页眉和页脚,如何使用函数和作用域来在局部覆盖样式,如何使用#link("https://typst.app/docs/reference/layout/grid/")[`grid`]函数创建更复杂的布局,以及如何为单个函数和整个文档编写`show`规则。你还学会了如何使用#link("https://typst.app/docs/reference/styling/#show-rules")[`where`选择器]根据它们的级别过滤标题。
这篇论文取得了巨大的成功!在会议上,你遇到了许多志同道合的研究人员,并正在计划一个项目,希望在明年的同一场所发表。但是,你需要使用相同的样式指南编写一篇新的论文,因此也许现在你想为你和你的团队创建一个节省时间的模板?
在下一节中,我们将学习如何创建可在多个文档中重复使用的模板。这是一个更高级的主题,所以如果你现在不想深入研究,可以随时回来。 |
|
https://github.com/Ziyzou02/typst_feyndraw | https://raw.githubusercontent.com/Ziyzou02/typst_feyndraw/main/feyndraw.typ | typst | #import "@preview/cetz:0.2.2": canvas, draw,
#import draw: circle, arc, bezier,line,mark,content, fill,
#import draw:rotate,group
// some vector calculation
#let add(a, b) = {
if a.len() != b.len() {
return [Error length]
} else{
a.zip(b).map(
((a,b))=> (a+b)
)
}
}
#let minu(a,b) = add(a,b.map(t=>-t))
#let scal(v, b) = v.map(t=> t*b)
#let norm(a) = {
let s = 0
for x in a{s = s + x*x}
return calc.sqrt(s)
}
#let vec_sum(a) = {
let s = 0
for k in a {s=s+k}
return s
}
#let inner_product(a,b) = vec_sum(a.zip(b).map(((a,b))=> a*b))
#let sgn(a) = if a!=0 {a/calc.abs(a)}else{ 0}
#let unit(a) = scal(a,1/norm(a))
#let rot_vec_2dim(ang, a) = {
(calc.cos(ang)*a.at(0)-
calc.sin(ang)*a.at(1),
calc.sin(ang)*a.at(0) +
calc.cos(ang)*a.at(1))
}
#let arc_angel2crd(position, ang, radius,)={
  ( position.at(0) + radius * calc.cos(ang), position.at(1) + radius * calc.sin(ang)
)
}
#let crd2angel(position, crd )={
let dx = crd.at(0) - position.at(0)
let dy = crd.at(1) - position.at(1)
let k = 0
if dx == 0 {
k = sgn(dy)*calc.inf
} else {
k = dy/dx
}
let a = calc.atan(k)
if dx < 0{
a = a + 180deg
}else{
if dy < 0{
a = a + 360deg
}
}
if a < 0deg {
return a + 360deg
}else{return a}
}
// crd2angel returns the angle of coordinate crd relative to position; the range of values is [0, 360)deg
#let ppa_circle_crd(a,b,ang,re:1)={
// ppa: "point-point-angle"; these three values determine an arc.
// crd: coordinate.
// a, b: coordinates of the two endpoints
// ang: the central angle subtended by points a and b
// re: two points and an angle admit two possible orientations; viewed from the short side,
//     1 is counterclockwise and -1 is clockwise.
// return: radius and coordinates of the circle center, as (radius, (xc, yc))
let Q = re*calc.abs((1 + calc.cos(ang))/calc.sin(ang))
let r = 1/2 *calc.sqrt(calc.pow(norm(minu(a,b)),2)*(1+calc.pow(Q,2)))
let xc = a.at(0) - 1/2 * inner_product(minu(a,b), (1,-Q))
let yc = a.at(1) - 1/2 *inner_product(minu(a,b), (Q,1))
return (r,(xc, yc))
}
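To sanity-check the point-point-angle construction, here is a direct Python port of `ppa_circle_crd` (names mirrored from the Typst code; this is an illustration, not part of the library). Both endpoints must come out at distance `r` from the computed center:

```python
import math

def ppa_circle_crd(a, b, ang, re=1):
    """Center and radius of the circle through a and b whose arc a->b subtends central angle ang."""
    q = re * abs((1 + math.cos(ang)) / math.sin(ang))
    dx, dy = a[0] - b[0], a[1] - b[1]
    r = 0.5 * math.hypot(dx, dy) * math.sqrt(1 + q * q)
    xc = a[0] - 0.5 * (dx * 1 + dy * (-q))
    yc = a[1] - 0.5 * (dx * q + dy * 1)
    return r, (xc, yc)

# 90-degree arc between (0,0) and (1,0): center (0.5, 0.5), radius sqrt(2)/2.
r, (xc, yc) = ppa_circle_crd((0.0, 0.0), (1.0, 0.0), math.pi / 2)
print(round(r, 6), round(math.hypot(xc, yc), 6), round(math.hypot(xc - 1.0, yc), 6))
```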
#let arc_spilt(a, b, angle:calc.inf,re:1,node:8)={
// a, b, angle,re: same as ppa_circle_crd()
// Split the arc into several nodes and return the coordinates of those nodes.
let s = ()
if angle == calc.inf{
let c = unit(minu(b, a))
let l = norm(minu(b, a))/node
let lvec = scal(c, l)
let start_p = a
s = s + (start_p,)
for i in range(node){
start_p = add(start_p, lvec)
s = s + (start_p,)
}
return s
}else{
let rcc = ppa_circle_crd(a, b, angle, re:re)
let r= rcc.at(0)
let cc = rcc.at(1)
let a_angel = crd2angel(cc, a)
let b_angel = crd2angel(cc, b)
if b_angel < a_angel {b_angel = b_angel+360deg}
let start_angel = a_angel
let end_angel = b_angel
angle = end_angel - start_angel
if node < 2 { panic("node must be >= 2") }
let d_angel = angle/(node - 1)
for i in range(0,node){
let ang = start_angel + i * d_angel
let unit_vec = (calc.cos(ang),calc.sin(ang))
s = s + (add(cc, scal(unit_vec, r)),)
}
return s
}}
#let polar_crd(r, angle) = {
(r*calc.cos(angle),r*calc.sin(angle))
}
#let para_grad(t, func, e:0.000000002)={
// func(t) = (x(t), y(t))
// return the grad of point f(t)
if e * t < 0 * t {
e = -e
}
let dp = minu(func(t + e*t) , func(t - e*t))
return crd2angel((0,0),dp)
}
#let tangent(..args)={
return polar_crd(1,para_grad(..args))
}
#let tangent_show(..args, r:0.01, stroke:none, symbol:"stealth", mark_pos:"end",line_show:true)={
let tgt = tangent(..args)
let start = args.pos().at(1)(args.pos().at(0))
let end = add(start, scal(tgt,r))
if line_show != false{
line(start, end, stroke: stroke)}
if mark_pos == "end"{
mark(start, end,symbol:symbol,fill:black)}
else if mark_pos == "mid"{
mark(
start,
add(start, scal(tgt,.7*r)),
symbol:symbol, fill:black
)
}else if mark_pos == "start"{
mark(
start,
add(start, scal(tgt,.1*r)),
symbol:symbol, fill:black
)
}
}
// shape
#let dot(x,y,r:.1, fill:black) = {
circle((x,y),radius: r,fill: fill)
}
#let Dot(position, r:.05,fill:black)=circle(position,radius: r,fill: fill)
#let waveline_group(start:(), end:(), node: 10)={
let q = 1
let re = 1
let long=scal(minu(end,start),1/node)
let Tran= rot_vec_2dim(-90deg, long)
while q <= node {
let a = add(start , long.map(t=>t*(q - 1)))
let b = add(a , long)
Tran = Tran.map(t=>-1*t)
q = q + 1
re = re * (-1)
(
bezier(a, b, add(scal(add(a,b),1/2), Tran.map(t=>t*1.4))),
)
}
}
#let twopoint_wave(a, b, orien:1, amp:1.2)={
let long=scal(minu(b,a),1)
let tran = add(long,scal(rot_vec_2dim(orien*90deg, long),amp))
bezier(a, b,add(a, tran))
}
#let waveline(starts, ends, node:10)={
for i in waveline_group(start:starts,end:ends,node:node) {i}
}
#let ppa_arc(a, b, ang, re:1, center_show: true)={
//
if ang == calc.inf{
line(a, b)
}else{
let r_crd = ppa_circle_crd(a,b,ang)
let r = r_crd.at(0)
let cc = r_crd.at(1)
let a_ang = crd2angel(cc,a)
let b_ang = crd2angel(cc,b)
if b_ang < a_ang{
b_ang = b_ang + 360deg
}
// let start_p(i) = (a, b).at(i)
// let start_ang = calc.min(a_ang, b_ang)
// let stop_ang = calc.max(a_ang,b_ang)
// let i = 0
// if start_ang == b_ang {
// i = 1
// }
arc(a, start: a_ang, stop: b_ang ,radius:r)
if center_show{
circle(cc,radius:.05,fill:gray)
}
}}
#let no_match_cir(a, b,rate:2)={
let long = minu(a,b)
let c = add(a,scal(long,1/rate))
ppa_arc(a, c,180deg,center_show: false)
ppa_arc(c, b, 180deg,center_show: false)
}
#let elem_gluon(a, b,re:1)={
let d = minu(b, a)
let t = rot_vec_2dim(90deg, d)
let r1 = 0.7 // r1 > 0.5
let r2 = r1 - 0.5
let R1 = norm(d) * r1
let R2 = norm(d) * r2
let v1 = add(a,scal(add(d,t),r1))
let v2 = add(a,add(scal(d, 1 - r1),scal(t,r1)))
ppa_arc(a, v1, 90deg,center_show: false)
ppa_arc(v1, v2, 180deg, center_show: false)
ppa_arc(v2, b, 90deg, center_show: false)
// arc(a, start:-90deg * re, stop: 0deg, radius: R1)
// arc(add(a,(R1,re*R1)), start:0deg, stop:180deg*re, radius:R2)
// arc(add(a,(R1 - 2*R2, re*R1)),start:-180deg * re, stop:-90deg * re, radius:R1)
}
#let gluon(a, b, angle: calc.inf,re:1, node:12,vertex_show:false)={
let d = minu(b, a)
let long = unit(d)
let start_point = a
let end_point = b
line(a, start_point)
line(b, end_point)
let point_group = arc_spilt(start_point, end_point, angle: angle, re:re, node:node)
for i in range(point_group.len()-1){
elem_gluon(point_group.at(i), point_group.at(i+1))
}
if vertex_show{
Dot(a,r:.02)
Dot(b,r:.02)
}
}
#let ph(a,b,angle:calc.inf, orien: 1,node:12, vertex_show:false,m:"")={
if angle == 0deg{
angle = 1deg
}
if calc.abs(b.at(0) - a.at(0)) <= 0.1{
orien = -sgn(a.at(1)-b.at(1))
}
if orien == 1{
let t = a
a = b
b = t}
let q = arc_spilt(a,b,angle: angle,node:node)
let f = 1
for i in range(q.len()-1){
twopoint_wave(q.at(i),q.at(i+1),orien: f)
f = f * -1
}
if vertex_show{
Dot(a)
Dot(b)
}
if m != "" {
let tran = rot_vec_2dim(-90deg,scal(minu(b,a),0.2))
let m_position = add(tran,q.at(calc.floor(node/2)))
content(m_position)[#emph(m)]}
}
#let fermion(a, b, ang:calc.inf, re:1,m:"",m_show:true, mark_show: true)={
// the curved fermion line hasn't been implemented yet; you can use ppa_arc() and mark() instead
line(a, b)
let long = scal(minu(b,a),5/8)
let tran = scal(rot_vec_2dim(-90deg,long), re/4)
if mark_show{
mark(a, add(a,long),symbol:">",fill:black)}
if m != "" and m_show{
content(add(add(a,long),tran))[#emph(m)]
}
}
#let dashed_line(a, b, node:6, d:0.08, stroke:none)={
let c = minu(b, a)
let l = (1 - (node - 1)*d)/node
let lvec = scal(c, l)
let dvec = scal(c, d)
let start_p = a
let end_p = add(a, lvec)
for i in range(node){
line(start_p, end_p,stroke:stroke)
start_p = add(end_p, dvec)
end_p = add(start_p, lvec)
}
}
#let scalar(a, b, angle:calc.inf,re:1, form: 1, node:6, vertex_show:false, m: "")={
if form == 1{
ppa_arc(a, b, angle,re:re, center_show: false)
}else{
let point_group = arc_spilt(a, b, angle: angle, node:2*node, re: re)
for i in range(point_group.len()-1, step: 2){
line(point_group.at(i),
point_group.at(i+1))
}
}
if vertex_show{
Dot(a)
Dot(b)
}
if m != ""{
let long = scal(minu(b,a),5/8)
let tran = scal(rot_vec_2dim(-90deg,long), re/2)
content(add(add(a,long),tran))[#emph(m)]
}
}
#let oval(c, long:1, width:1, fill:white)={
scale(y:long, x:width)
circle(c,fill:fill)
scale(y:1/long, x:1/width)
}
#let axis(o, xlim:(-4,4),ylim:(-4,4),ticks:false,xticks:false, yticks:false)={
Dot(o)
line((xlim.at(0),0),
(xlim.at(1),0))
line((0,ylim.at(0)),
(0,ylim.at(1)))
mark(o, (xlim.at(1),0),symbol:">",fill:black)
mark(o, (0,ylim.at(1)),symbol:">",fill:black)
if ticks{
xticks=true
yticks=true
}
if xticks{
for i in range(calc.floor(xlim.at(0))+1,calc.floor(xlim.at(1))){
dot(i,0,r:0.05)
}
}
if yticks{
for i in range(calc.floor(ylim.at(0))+1,calc.floor(ylim.at(1))){
dot(0,i,r:0.05)
}
}
}
// some method for complex diagrams
#let fermion_loop(a, b,vertex_show:false)={
ppa_arc(a, b, 180deg, center_show:false)
ppa_arc(b, a, 180deg, center_show:false)
let long = scal(minu(b, a),0.5)
let tran = rot_vec_2dim( 90deg,long)
let v1 = add(a, tran)
let v2 = add(v1,scal(long,1.2))
mark(v1, v2,symbol:">",fill:black)
v1 = minu(b, tran)
v2 = minu(v1, scal(long, 1.2))
mark(v1, v2,symbol:">",fill:black)
if vertex_show{
Dot(a)
Dot(b)}
}
// Custom Elements
#let ox(position, r,angle:45deg)={
circle(position, radius:r)
let s1 = add(position,polar_crd(r, angle))
let e1 = add(position, polar_crd(r, angle + 180deg))
let s2 = add(position,polar_crd(r, angle + 90deg))
let e2 = add(position,polar_crd(r, angle - 90deg))
line(s1, e1)
line(s2, e2)
}
#let Rot_group(g, angle, origin:(0,0,0))={
rotate(angle, origin: origin)
g
}
// example
(a)
#align(center)[
#canvas(length: 1cm,
{
waveline((1,0),(2,0),node:6)
circle((3,0),fill:gray)
waveline((4,0),(5,0),node:6)
})
]
#let a =(1,2)
#let b = (2,2)
(b)
#align(center)[
#canvas(length: 1cm,
{
dot(0,0)
dot(-1,1)
ppa_arc((0,0),(-1,1),120deg,re:1)
})]
#align(center)[
#canvas(length: 1cm,{
ph((1,2),(3,2),angle:120deg, node:16,vertex_show: true)
line((0,2),(4,2))
ph((2,2),(2,0),vertex_show: true,node: 16)
mark((1,-1),(1.5,-0.5),symbol:">",fill:black)
line((1,-1),(2,0))
line((2,0),(3,-1))
mark((2,0),(2.7,-0.7),symbol:">",fill:black)
}
)
]
(c)
#align(center)[
#canvas(length: 1cm, {
no_match_cir((2,0),(3,0))
}
)
]
#align(center)[
#canvas(length: 1cm, {
gluon((1,1),(1,2),node:8,vertex_show: false)
gluon((1.8,0.2),(1,1),node:8,vertex_show: false)
gluon((0.2,0.2),(1,1),node:8 )
}
)
]
#align(center)[
#canvas(length: 1cm,{
fermion((0,0),(1,1), m_show: true,m:"p",re:-1)
ph((1,1),(3,1),node:12, vertex_show: true)
fermion((4,0),(3,1), m:"k")
fermion((1,1),(0,2), m:$p'$,re:-1)
fermion((3,1),(4,2), m_show: true,m:$k'$)
content((2,0.7))[$gamma$]
content((2,1.5))[$p'+p$]
let c = (((-0.4,0),$e^-$),((-0.4,2),$e^+$),((4.4,0),$mu^+$),((4.4,2),$mu^-$))
for i in range(4){
content(c.at(i).at(0))[#c.at(i).at(1)]}
})
]
(d)
#align(center)[
#canvas(length: 1cm, {
Dot((0,0))
// let gr = arc_spilt((0,0),(1,2),angle:60deg, node: 16, re:1)
// for i in range(gr.len()-1){
// elem_gluon(gr.at(i),gr.at(i+1))
// }
content((0,0))[O]
Dot((3,0))
Dot((3,3))
gluon((3,0), (3,3), angle: 120deg, re: -1, node: 16)
scalar((0,0),(2,0), angle: 180deg, form: 2, node:8, m:"k")
scalar((2,0),(0,0), angle: 180deg, form:2)
fermion_loop((2,4),(3,4), vertex_show: false)
}
)
]
|
|
https://github.com/Enter-tainer/typst-ws | https://raw.githubusercontent.com/Enter-tainer/typst-ws/master/test.typ | typst | MIT License | #set text(font: ("Source Han Serif SC"))
= Welcome to #strong[OI Wiki]!

== Content
#label("内容")
Competitive programming has been developing for many years: the difficulty keeps rising and the content keeps getting more complex, while materials online are mostly scattered. Beginners often do not know how to study the relevant knowledge systematically and have to spend a lot of time finding their own way.

To help people who love competitive programming get started more easily, #strong[OI Wiki]
was migrated to GitHub in July 2018. As the content of #strong[OI Wiki]
keeps improving, more and more people are getting involved.

#strong[OI Wiki]
aims to be a free, open, and continuously updated knowledge base where everyone can find interesting and practical knowledge about
#strong[competitive programming].
We have prepared fundamentals, common problem types, solution ideas, and frequently used tools, to help everyone learn competitive programming faster and more deeply.

At present, the content of #strong[OI Wiki] is still far from perfect: topic coverage is not comprehensive, and some low-quality pages need revision. The #strong[OI
Wiki] team and contributing members are actively improving this content.

For the content awaiting improvement, see #strong[OI Wiki]'s
#link("https://github.com/OI-wiki/OI-wiki/issues")[Issues] and the
#link("https://github.com/OI-wiki/OI-wiki/labels/Iteration%20Plan%20%2F%20%E8%BF%AD%E4%BB%A3%E8%AE%A1%E5%88%92")[iteration plan].

Meanwhile, #strong[OI Wiki] comes from the community and advocates
#strong[freedom of knowledge]. It will never be commercialized and will always remain independent and free.
== Deployment
#label("部署")
This project is currently deployed with #link("https://github.com/mkdocs/mkdocs")[MkDocs] at
#link("https://oi-wiki.org")[oi-wiki.org].

We maintain a list of mirror sites at #link("https://status.oi-wiki.org")[status.oi-wiki.org];
their content is identical to
#link("https://oi-wiki.org")[oi-wiki.org].

Of course, you can also deploy it locally. (#strong[Python3 and Pipenv are required])

#strong[If you run into problems, consult the
#link("https://oi-wiki.org/intro/faq/")[F.A.Q.] for more information.]
```bash
git clone https://github.com/OI-wiki/OI-wiki.git --depth=1
cd OI-wiki
# install mkdocs
pipenv install --pypi-mirror https://pypi.tuna.tsinghua.edu.cn/simple/
# use our custom theme (on Windows, run this with Git Bash)
# installing the theme downloads resources from the network; the download links
# can be controlled via the following configuration items
# .gitmodules:
#   - url
# scripts/pre-build/install-theme-vendor.sh:
#   - MATHJAX_URL
#   - MATERIAL_ICONS_URL
./scripts/pre-build/install-theme.sh
# two options (pick either one):
# 1. run a local server and visit http://127.0.0.1:8000 to preview the result
pipenv run mkdocs serve -v
# 2. generate the static pages in the `site` folder
pipenv run mkdocs build -v
# print the help of the mkdocs command-line tool (explains commands and options)
pipenv run mkdocs --help
```
We now render MathJax on the server side. If you want to achieve a similar effect, see
#link("https://github.com/OI-wiki/OI-wiki/blob/master/.github/workflows/build.yml")[build.yml]. (Node.js
is required)

=== Mirrors
#label("镜像")
```bash
# the mirror repository on Gitee has the same content as the GitHub repository
git clone https://gitee.com/OI-wiki/OI-wiki.git
```

=== Offline version
#label("离线版")
You can use the content of the `gh-pages` branch
```bash
git clone https://gitee.com/OI-wiki/OI-wiki.git -b gh-pages
```
Starting a local HTTP server may be more convenient.

```bash
# for python3
python3 -m http.server
# for python2
python2 -m SimpleHTTPServer
# in some environments there is no executable named python3/python2; try running python instead
```
=== Docker
#label("docker")
See #link("https://oi-wiki.org/intro/docker-deploy/")[Docker deployment] for details

== How to help improve OI Wiki
#label("如何参与完善-oi-wiki")
You are very welcome to write content for #strong[OI Wiki] and share what you have learned with everyone.

The specific ways to contribute are described in #link("https://oi-wiki.org/intro/htc/")[How to contribute].

== Copyright notice
#label("版权声明")
Unless otherwise noted, everything in this project except the code is licensed under the
Creative Commons BY-SA 4.0 (Attribution-ShareAlike 4.0 International) license, plus the additional
#link("https://github.com/zTrix/sata-license")[The Star And Thank Author License].

In other words, you are free to share and adapt the content, but you must give attribution, share alike, and impose no additional restrictions when sharing. You should also star the GitHub repository.

If you want to cite this GitHub repository, you can use the following BibTeX:
```
@misc{oiwiki,
author = {<NAME>},
title = {OI Wiki},
year = {2016},
publisher = {GitHub},
journal = {GitHub Repository},
howpublished = {\url{https://github.com/OI-wiki/OI-wiki}},
}
```
== Acknowledgements
#label("鸣谢")
This project was inspired by #link("https://ctf-wiki.org/")[CTF Wiki]
and drew on many references during writing; we thank them all here.

Many thanks to the
#link("https://github.com/OI-wiki/OI-wiki/graphs/contributors")[contributors]
who improve #strong[OI Wiki] together with us and to the
#link("https://oi-wiki.org/intro/thanks/")[friends] who have donated to #strong[OI Wiki]!

Special thanks to the friends of #link("https://github.com/24OI")[24OI] for their strong support!

Thanks to the Peking University Algorithm Association and Hulu for their support!
|
https://github.com/dashuai009/dashuai009.github.io | https://raw.githubusercontent.com/dashuai009/dashuai009.github.io/main/src/content/blog/021.typ | typst | #let date = datetime(
year: 2022,
month: 3,
day: 14,
)
#metadata((
"title": "康托展开及康托逆展开",
"author": "dashuai009",
description: "康托展开及康托逆展开",
pubDate: "'Jul 08 2022'",
subtitle: [math],
))<frontmatter>
#import "../__template/style.typ": conf
#show: conf
== Problem description
<题目描述>
Little G likes playing with permutations. Right now he has two permutations of $n$. A permutation of
$n$ is made up of the $n$ numbers
$0, 1, 2, dots.c, n - 1$. For a permutation
$p$, $"Order"(p)$ means that $p$ is the $"Order"(p)$-th smallest
permutation in lexicographic order (counting from 0). For a non-negative number $x$ smaller than $n!$, $"Perm"(x)$
denotes the $x$-th smallest permutation in lexicographic order.

Now, little G wants to compute the sum of his two permutations. The sum of two permutations $p$ and $q$ is
$"sum" = "Perm"(("Order"(p) + "Order"(q)) mod n!)$
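This definition can be checked by brute force for small $n$. A throwaway Python sketch (for illustration only, not part of the problem statement) that enumerates lexicographically sorted permutations:

```python
from itertools import permutations
from math import factorial

def perm_sum(p, q):
    """Sum of two permutations: Perm((Order(p) + Order(q)) mod n!)."""
    n = len(p)
    perms = sorted(permutations(range(n)))           # lexicographic order
    order = {perm: i for i, perm in enumerate(perms)}  # 0-based ranks
    return perms[(order[tuple(p)] + order[tuple(q)]) % factorial(n)]

# Matches sample #2 below: (1 2 0) + (2 1 0) = (1 0 2)
print(perm_sum([1, 2, 0], [2, 1, 0]))  # → (1, 0, 2)
```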
== Input and output format
<输入输出格式>
=== Input format:
<输入格式>
The first line of the input file contains a number n, as described above.

The next two lines each contain n space-separated numbers, describing the two permutations in little G's hands.
=== Output format:
<输出格式>
Output one line with n space-separated numbers: the sum of the two permutations
== Sample input and output
<输入输出样例>
=== Sample input \#1:
<输入样例1>
```
2
0 1
1 0
```
=== Sample output \#1:
<输出样例1>
```
1 0
```
=== Sample input \#2:
```
3
1 2 0
2 1 0
```
=== Sample output \#2
```
1 0 2
```
== Notes
<说明>
Test cases 1, 2, 3, 4: $1 lt.eq n lt.eq 10$.

Test cases 5, 6, 7: $1 lt.eq n lt.eq 5000$, and the second permutation is guaranteed to satisfy
$"Order" lt.eq 10^5$.

Test cases 8, 9, 10: $1 lt.eq n lt.eq 5000$.
== Code
```cpp
#include <cstdio>
#include <cstring>
#include <cmath>
#include <cstdlib>
#include <iostream>
#include <algorithm>
using namespace std;
const int max_n = 5e4 + 10;
int cnt[max_n], p1[max_n], p2[max_n], pp1[max_n], pp2[max_n], ans[max_n];
int n;
inline int getnum()
{
int ans = 0; char c; bool flag = false;
while ((c = getchar()) == ' ' || c == '\n' || c == '\r');
if (c == '-') flag = true; else ans = c - '0';
while (isdigit(c = getchar())) ans = ans * 10 + c - '0';
return ans * (flag ? -1 : 1);
}
int main()
{
n = getnum();
for (int i = n; i >= 1; i--) p1[i] = getnum();
for (int i = n; i >= 1; i--) p2[i] = getnum();
for (int i = 1; i <= n; i++)
for (int j = 1; j < i; j++)
if (p1[j] < p1[i]) pp1[i]++;
for (int i = 1; i <= n; i++)
for (int j = 1; j < i; j++)
if (p2[j] < p2[i]) pp2[i]++;
for (int i = 2; i <= n; i++)
{
ans[i] += pp1[i] + pp2[i];
ans[i + 1] += ans[i] / i;
ans[i] %= i;
}
for (int i = n; i >= 1; i--)
{
int _ = -1;
while (ans[i] >= 0)
{
_++;
if (!cnt[_]) ans[i]--;
}
printf("%d ", _);
cnt[_] = 1;
}
}
```
== Cantor expansion:
<康拓展开>
```cpp
const int PermSize = 12;
long long factory[PermSize] = { 1, 1, 2, 6, 24, 120, 720, 5040, 40320, 362880, 3628800, 39916800 };
long long Cantor(string buf) {
int i, j, counted;
long long result = 0;
for (i = 0; i<PermSize; ++i) {
counted = 0;
for (j = i + 1; j<PermSize; ++j){
if (buf[i]>buf[j])++counted;
}
result = result + counted*factory[PermSize - i - 1];
}
return result;
}
```
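The same rank computation in Python, generalized to any length (my own port for illustration; the C++ above fixes `PermSize` at 12):

```python
from math import factorial

def cantor(perm):
    """0-based lexicographic rank of a permutation of distinct comparable items."""
    n = len(perm)
    rank = 0
    for i, x in enumerate(perm):
        # count elements to the right of position i that are smaller than perm[i]
        smaller_to_right = sum(1 for y in perm[i + 1:] if y < x)
        rank += smaller_to_right * factorial(n - i - 1)
    return rank

print(cantor([0, 1, 2]), cantor([2, 1, 0]))  # → 0 5
```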
== Inverse Cantor expansion:
<康托逆展开>
```cpp
/* Returns the m-th (1-indexed, in lexicographic order) permutation formed by the first n numbers [1, n].
   `facts` is assumed to be a precomputed factorial table with facts[i] = i!. */
vector<int> deCantor(int n, int m) {
vector<int> res;
long long buf = 0;
m--;
for(int f = 0; n > 0; n--) {
f = m / facts[n - 1];
m = m % facts[n - 1];
while(buf & (1 << (f + 1)))f++;
res.push_back(f + 1);
buf |= (1 << (f + 1));
}
return res;
}
``` |
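A self-contained Python sketch of the inverse direction (0-based, returning the rank-`m` permutation of `0..n-1`), with a brute-force round-trip check; this is an illustration independent of the C++ snippet above:

```python
from itertools import permutations
from math import factorial

def de_cantor(n, m):
    """Return the permutation of 0..n-1 with 0-based lexicographic rank m."""
    remaining = list(range(n))
    out = []
    for i in range(n, 0, -1):
        f, m = divmod(m, factorial(i - 1))  # index of the next element, then the residual rank
        out.append(remaining.pop(f))
    return out

# Round-trip check against all permutations of length 4.
perms = sorted(permutations(range(4)))
assert all(de_cantor(4, m) == list(p) for m, p in enumerate(perms))
print(de_cantor(3, 3))  # → [1, 2, 0]
```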
|
https://github.com/Myriad-Dreamin/typst.ts | https://raw.githubusercontent.com/Myriad-Dreamin/typst.ts/main/fuzzers/corpora/layout/par-justify_02.typ | typst | Apache License 2.0 |
#import "/contrib/templates/std-tests/preset.typ": *
#show: test-page
// Test forced justification with justified break.
A B C #linebreak(justify: true)
D E F #linebreak(justify: true)
|
https://github.com/magicwenli/keyle | https://raw.githubusercontent.com/magicwenli/keyle/main/src/sym.typ | typst | MIT License | // https://support.apple.com/en-hk/guide/mac-help/cpmh0011/mac
#let mac-key = (
command: "⌘",
shift: "⇧",
option: "⌥",
control: "⌃",
"return": "↩",
delete: "⌫",
forward-delete: "⌦",
escape: "⎋",
left: "←",
right: "→",
up: "↑",
down: "↓",
pageup: "⇞",
pagedown: "⇟",
home: "↖",
end: "↘",
tab-right: "⇥",
tab-left: "⇤",
)
#let biolinum-key = (
Strg: "Strg",
Alt: "Alt",
Ctrl: "Ctrl",
Shift: "Shift",
Tab: "Tab",
Enter: "Enter",
Capslock: "Capslock",
Home: "Home",
Del: "Del",
Ins: "Ins",
End: "End",
Space: "\u{E18C}",
Esc: "Esc",
PageUp: "\u{E19A}",
PageDown: "\u{E19B}",
Back: "Back",
Pad_0: "\u{E1A0}",
Pad_1: "\u{E1A1}",
Pad_2: "\u{E1A2}",
Pad_3: "\u{E1A3}",
Pad_4: "\u{E1A4}",
Pad_5: "\u{E1A5}",
Pad_6: "\u{E1A6}",
Pad_7: "\u{E1A7}",
Pad_8: "\u{E1A8}",
Pad_9: "\u{E1A9}",
Pad_Div: "\u{E1AA}",
Pad_Add: "\u{E1AB}",
Pad_Sub: "\u{E1AC}",
Pad_Mul: "\u{E1AD}",
Pad_Enter: "\u{E1AE}",
Mac_Cmd: "\u{2318}",
Mac_Opt: "\u{2325}",
Mac_FDel: "\u{2326}",
Mac_Del: "\u{232B}",
GNU: "GNU",
Win: "\u{E168}",
Tux: "Tux",
delim_plus: "\u{E1B0}",
delim_minus: "\u{E1B1}",
) |
https://github.com/rijuyuezhu/latex-typst-template | https://raw.githubusercontent.com/rijuyuezhu/latex-typst-template/main/art-typst-chi/main.typ | typst | #import "template.typ": *
#show: project.with(
title: [
],
subtitle: [
],
authors: (
(
name: "",
organization: [
],
),
),
abstract: [
],
keywords: (
),
)
|
|
https://github.com/Myriad-Dreamin/typst.ts | https://raw.githubusercontent.com/Myriad-Dreamin/typst.ts/main/fuzzers/corpora/layout/list_00.typ | typst | Apache License 2.0 |
#import "/contrib/templates/std-tests/preset.typ": *
#show: test-page
_Shopping list_
#list[Apples][Potatoes][Juice]
|
https://github.com/LDemetrios/Typst4k | https://raw.githubusercontent.com/LDemetrios/Typst4k/master/src/test/resources/suite/layout/length.typ | typst | --- length-fields ---
// Test length fields.
#test((1pt).em, 0.0)
#test((1pt).abs, 1pt)
#test((3em).em, 3.0)
#test((3em).abs, 0pt)
#test((2em + 2pt).em, 2.0)
#test((2em + 2pt).abs, 2pt)
--- length-to-unit ---
// Test length unit conversions.
#test((500.934pt).pt(), 500.934)
#test((3.3453cm).cm(), 3.3453)
#test((4.3452mm).mm(), 4.3452)
#test((5.345in).inches(), 5.345)
#test((500.333666999pt).pt(), 500.333666999)
#test((3.5234354cm).cm(), 3.5234354)
#test((4.12345678mm).mm(), 4.12345678)
#test((5.333666999in).inches(), 5.333666999)
#test((4.123456789123456mm).mm(), 4.123456789123456)
#test((254cm).mm(), 2540.0)
#test(calc.round((254cm).inches(), digits: 2), 100.0)
#test((2540mm).cm(), 254.0)
#test(calc.round((2540mm).inches(), digits: 2), 100.0)
#test((100in).pt(), 7200.0)
#test(calc.round((100in).cm(), digits: 2), 254.0)
#test(calc.round((100in).mm(), digits: 2), 2540.0)
#test(5em.abs.cm(), 0.0)
#test((5em + 6in).abs.inches(), 6.0)
--- length-to-absolute ---
// Test length `to-absolute` method.
#set text(size: 12pt)
#context {
test((6pt).to-absolute(), 6pt)
test((6pt + 10em).to-absolute(), 126pt)
test((10em).to-absolute(), 120pt)
}
#set text(size: 64pt)
#context {
test((6pt).to-absolute(), 6pt)
test((6pt + 10em).to-absolute(), 646pt)
test((10em).to-absolute(), 640pt)
}
--- length-unit-hint ---
// Error: 17-19 expected length, found integer
// Hint: 17-19 a length needs a unit - did you mean 12pt?
#set text(size: 12)
--- length-ignore-em-pt-hint ---
// Error: 2-21 cannot convert a length with non-zero em units (`-6pt + 10.5em`) to pt
// Hint: 2-21 use `length.to-absolute()` to resolve its em component (requires context)
// Hint: 2-21 or use `length.abs.pt()` instead to ignore its em component
#(10.5em - 6pt).pt()
--- length-ignore-em-cm-hint ---
// Error: 2-12 cannot convert a length with non-zero em units (`3em`) to cm
// Hint: 2-12 use `length.to-absolute()` to resolve its em component (requires context)
// Hint: 2-12 or use `length.abs.cm()` instead to ignore its em component
#(3em).cm()
--- length-ignore-em-mm-hint ---
// Error: 2-20 cannot convert a length with non-zero em units (`-226.77pt + 93em`) to mm
// Hint: 2-20 use `length.to-absolute()` to resolve its em component (requires context)
// Hint: 2-20 or use `length.abs.mm()` instead to ignore its em component
#(93em - 80mm).mm()
--- length-ignore-em-inches-hint ---
// Error: 2-24 cannot convert a length with non-zero em units (`432pt + 4.5em`) to inches
// Hint: 2-24 use `length.to-absolute()` to resolve its em component (requires context)
// Hint: 2-24 or use `length.abs.inches()` instead to ignore its em component
#(4.5em + 6in).inches()
|
|
https://github.com/xiaoxuan-yu/CUDA-CCL | https://raw.githubusercontent.com/xiaoxuan-yu/CUDA-CCL/main/README.md | markdown | BSD 3-Clause "New" or "Revised" License | # CUDA-CCL
> Code and report for the final project of the course *"Foundations of Parallel Computing II"* at Peking University, 2024 Spring.
This repository contains an implementation of the 8-connected Komura Equivalence algorithm for Connected Component Labeling (CCL) on CUDA. The algorithm is originally described in

<NAME>, <NAME>, <NAME>, and <NAME>, “Optimizing GPU-Based Connected-Components Labeling Algorithms,” in 2018 IEEE International Conference on Image Processing, Applications and Systems (IPAS), 2018, pp. 175–180. doi: 10.1109/IPAS.2018.8708900.
## Contained Contents
- `algorithm/`: The implementation of the algorithm.
- `validation`: The python scripts for validating the correctness of the algorithm.
- `perf`: Original data for performance evaluation.
- `report`: `typst` source code and the compiled PDF report.
## Usage
### Prerequisites
- A CUDA compatible GPU
- CUDA Toolkit 11.0 or later (Tested on CUDA 11.3 and CUDA 12.4)
- Supported GCC/G++ required by the CUDA Toolkit (GCC 9 tested for CUDA 11 and GCC 12 tested for CUDA 12)
- CMake 3.19 or later
- (Optional) `Python`, `numpy`, `ipykernel` and `OpenCV` for validation
### Build
1. clone the repository
```bash
git clone https://github.com/xiaoxuan-yu/CUDA-CCL
```
2. `cd` into the repository
```bash
cd CUDA-CCL
```
3. Configure the project with CMake
```bash
cmake -S algorithm -B build
```
4. Build the project
```bash
cmake --build build
```
### Usage
The executable will be generated in `build/bin` directory. The usage is as follows:
```bash
$ ./CCL <input_file> <output_file>
```
### Validation
Use the `validation.ipynb` to validate the correctness of the algorithm.
## Licence
This project is licensed under the BSD 3-Clause Licence. See [LICENCE](LICENCE) for more details.

Part of the code references [YACCLAB](https://github.com/prittt/YACCLAB), which is also licensed under the BSD 3-Clause Licence.
https://github.com/jneug/typst-nassi | https://raw.githubusercontent.com/jneug/typst-nassi/main/tests/statement/test.typ | typst | MIT License | #import "../../src/nassi.typ"
#nassi.diagram({
nassi.elements.process("Statement 1")
})
#v(1cm)
#nassi.diagram({
nassi.elements.process("Statement 1")
nassi.elements.process("Statement 2")
})
#v(1cm)
#block(
width: 12cm,
nassi.diagram({
nassi.elements.process("Statement 1")
nassi.elements.process("Statement 2")
}),
)
#v(1cm)
#nassi.diagram({
nassi.elements.process("Statement 1", fill: red)
nassi.elements.process("Statement 2", height: 2)
})
#v(1cm)
|
https://github.com/essmehdi/ensias-report-template | https://raw.githubusercontent.com/essmehdi/ensias-report-template/master/README.md | markdown | MIT License | # ENSIAS Report Typst Template
A Typst template to quickly make reports for projects at ENSIAS. This template was created based on our reports that we also made for our projects.
## What does it provide?
For now, it provides a first page style that matches the common reports style used and encouraged at ENSIAS.
It also provides a style for first level headings to act as chapters.
More improvements will come soon.
## Usage
```typ
#import "@preview/red-agora:0.1.1": project
#show: project.with(
title: "Injecting a backdoor in the xz library and taking over NASA and SpaceX spaceship tracking servers (for education purposes only)",
subtitle: "Second year internship report",
authors: (
"<NAME>",
"<NAME>"
),
school-logo: image("images/ENSIAS.svg"), // Replace with [] to remove the school logo
company-logo: image("images/company.svg"),
mentors: (
"<NAME> (Internal)",
"<NAME> (External)"
),
jury: (
"<NAME>",
"<NAME>"
),
branch: "Software Engineering",
academic-year: "2077-2078",
  french: false, // Use French instead of English
  footer-text: "ENSIAS" // Text used on the left side of the footer
)
// Put then your content here
```
## Changelog
**0.1.0 - Initial release**
- First page style
- Level 1 headings chapter style
**0.1.1**
- Fixed major issue where custom school & company logos would throw an error
- Added option to customize footer left side text (thus fixing the issue of it being hardcoded) |
https://github.com/flavio20002/cyrcuits | https://raw.githubusercontent.com/flavio20002/cyrcuits/main/components.typ | typst | Other | #import "@preview/cetz:0.3.1"
#let anchors(anchors) = {
for (k, v) in anchors {
cetz.draw.anchor(k, v)
}
}
#let component-content(start,end, l-modifier, label, angle,pad: 0.5) = {
let center-point = (start,50%,end)
let content-angle = angle
let padding = -1*pad
let anchor = "east"
if (angle == 0deg or angle == 90deg or angle == -90deg){
content-angle = 0deg
}
if (angle == -90deg){
anchor = "west"
if (l-modifier != ""){
padding = pad
anchor = "east"
}
} else if (angle == 0deg){
anchor = "south"
if (l-modifier != ""){
padding = pad
anchor = "north"
}
}else {
anchor = "east"
if (l-modifier != ""){
padding = pad
anchor = "west"
}
}
cetz.draw.content((a: center-point, b: start, number: padding, angle: 90deg), angle:content-angle, text(eval(label)), anchor: anchor)
}
#let component-flow(start,end,angle,flow,flow-config:"") = {
let angle-arrow = 90deg
if ((angle == -90deg or angle == 90deg) and flow-config.contains("_")){
angle-arrow = -90deg
}
let center-point-a-temp1 = (start,50%,end)
let center-point-a-temp = (rel: (angle,- 1.1), to: center-point-a-temp1)
let center-point-a = (rel: (-angle,0.2), to: center-point-a-temp)
let center-point-b = (a: center-point-a, b: center-point-a-temp, number: 0.3, angle: angle-arrow)
let center-point-c = (rel: (angle,0.7), to: center-point-b)
let center-point = (center-point-b,50%, center-point-c)
let content-angle = angle
let (x1,y1,..) = start
let (x2,y2,..) = end
cetz.draw.set-style(mark: (fill: black))
cetz.draw.line(center-point-b, center-point-c, mark: (end: ">"), name: "line")
let anchor = "east"
let distance = 0.2
if (angle == 0deg or angle == 90deg or angle == -90deg){
content-angle = 0deg
}
if (angle == -90deg){
anchor = "east"
} else if (angle == 0deg){
anchor = "north"
if (flow-config.contains("_")){
anchor = "north"
}
else{
anchor = "south"
distance = -0.3
}
} else {
anchor = "west"
}
cetz.draw.content((a: center-point, b: center-point-b, number: distance, angle: 90deg), angle:content-angle, text(eval(flow)), anchor: anchor)
}
#let component-voltage(start,end,angle,voltage,padding:1.5) = {
let center-point-a = (start,50%,end)
let center-point-a1 = (start,20%,end)
let center-point-a1 = (rel: (-angle,0.75), to: center-point-a)
let center-point-a2 = (start,80%,end)
let center-point-a2 = (rel: (angle,0.75), to: center-point-a)
let center-point-b = (a: center-point-a, b: start, number: -1*padding, angle: 90deg)
let center-point-b1 = (a: center-point-a1, b: start, number: -1*(padding - 0.25)/2, angle: 90deg)
let center-point-b2 = (a: center-point-a2, b: start, number: -1*(padding - 0.25)/2, angle: 90deg)
let center-point-c = (rel: (angle,-1), to: center-point-b)
let center-point-d = (rel: (angle,1), to: center-point-b)
let center-point = (center-point-b1,50%, center-point-b2)
let content-angle = angle
let (x1,y1,..) = start
let (x2,y2,..) = end
cetz.draw.set-style(mark: (fill: black))
cetz.draw.bezier(center-point-b2, center-point-b1, center-point-b, mark: (end: ">"), )
let anchor = "east"
if (angle == 0deg or angle == 90deg or angle == -90deg){
content-angle = 0deg
}
if (angle == -90deg){
anchor = "west"
} else {
anchor = "east"
}
cetz.draw.content((a: center-point, b: center-point-b2, number: padding/2, angle: 90deg), angle:content-angle, text(eval(voltage)), anchor: anchor)
}
#let R(start, end, element) = {
let (x1,y1,..) = start
let (x2,y2,..) = end
let angle = calc.atan2(x2 - x1, y2 - y1)
component-content(start,end, element.l-modifier, element.label, angle,pad: 0.4)
if (element.flow != ""){
component-flow(start,end,angle, element.flow, flow-config: element.flow-config)
}
if (element.voltage != ""){
component-voltage(start,end,angle,element.voltage)
}
cetz.draw.group(name: element.name, ctx => {
cetz.draw.rotate(angle, origin: start)
let component-length = 2
let step = 1/6
let height = 5/14
let sgn = -1
let total-length = calc.sqrt(calc.pow(y2 - y1,2) + calc.pow((x2 - x1),2))
cetz.draw.line(
start,
(rel: ((total-length - component-length)/2, 0)),
(rel: (0.5, 0)),
(rel: (step/2, height/2)),
..for _ in range(5) {
((rel: (step, height * sgn)),)
sgn *= -1
},
(rel: (step/2, height/2)),
(rel: (0.5, 0)),
(rel: ((total-length - component-length)/2, 0)),
fill: none
)
})
}
#let battery1(start, end, element) = {
let (x1,y1,..) = start
let (x2,y2,..) = end
let angle = calc.atan2(x2 - x1, y2 - y1)
component-content(start,end, element.l-modifier, element.label, angle,pad:0.75)
if (element.flow != ""){
component-flow(start,end,angle, element.flow, flow-config: element.flow-config)
}
cetz.draw.group(name: element.name, ctx => {
cetz.draw.rotate(angle, origin: start)
let component-length = 0.25
let total-length = calc.sqrt(calc.pow(y2 - y1,2) + calc.pow((x2 - x1),2))
cetz.draw.line(
start,
(rel: ((total-length - component-length)/2, 0)),
fill: none
)
if (element.invert){
cetz.draw.line((rel: (0, -0.6)), (rel: (0, 1.2)),)
cetz.draw.line((rel: (0.25, -0.3)), (rel: (0, -0.6)),)
}
else{
cetz.draw.line((rel: (0, -0.3)), (rel: (0, 0.6)),)
cetz.draw.line((rel: (0.25, 0.3)), (rel: (0, -1.2)),)
}
cetz.draw.line(
(rel: (0.25+(total-length - component-length)/2, 0), to: start),
(rel: ((total-length - component-length)/2, 0)),
fill: none
)
})
}
#let isource(start, end, element) = {
cetz.draw.set-style(mark: (fill: black))
let (x1,y1,..) = start
let (x2,y2,..) = end
let angle = calc.atan2(x2 - x1, y2 - y1)
component-content(start,end, element.l-modifier, element.label, angle,pad:0.75)
if (element.flow != ""){
component-flow(start,end,angle, element.flow, flow-config: element.flow-config)
}
cetz.draw.group(name: element.name, ctx => {
cetz.draw.rotate(angle, origin: start)
let component-length = 1
let total-length = calc.sqrt(calc.pow(y2 - y1,2) + calc.pow((x2 - x1),2))
cetz.draw.line(
start,
(rel: ((total-length - component-length)/2, 0)),
fill: none
)
if (element.invert){
cetz.draw.line((rel: (0.9+(total-length - component-length)/2, 0), to: start),(rel: (-0.8,0)), mark: (end: ">"))
}
else{
cetz.draw.line((rel: (0.1+(total-length - component-length)/2, 0), to: start),(rel: (0.8,0)), mark: (end: ">"))
}
cetz.draw.circle((rel: (0.5+(total-length - component-length)/2, 0), to: start), radius: 0.5,
fill: none
)
cetz.draw.line(
(rel: (component-length + (total-length - component-length)/2, 0), to: start),
(rel: ((total-length - component-length)/2, 0)),
fill: none
)
})
}
#let short(start, end, l-modifier:"", label:"", name: none, ..style) = {
let (x1,y1,..) = start
let (x2,y2,..) = end
let angle = calc.atan2(x2 - x1, y2 - y1)
let center-point = (start,50%,end)
let content-angle = angle
if (angle == 0deg or angle == 90deg){
content-angle = 0deg
}
if (label!=""){
component-content(start,end, l-modifier, label, angle,pad:0.25)
}
cetz.draw.group(name: name, ctx => {
cetz.draw.rotate(angle, origin: start)
let component-length = 0
let total-length = calc.sqrt(calc.pow(y2 - y1,2) + calc.pow((x2 - x1),2))
cetz.draw.line(
start,
(rel: ((total-length - component-length), 0)),
fill: none
)
})
}
#let open(start, end, l-modifier:"", label:"", name: none, ..style) = {
cetz.draw.set-style(mark: (fill: black))
let (x1,y1,..) = start
let (x2,y2,..) = end
let angle = calc.atan2(x2 - x1, y2 - y1)
let center-point = (start,50%,end)
let content-angle = angle
if (angle == 0deg or angle == 90deg){
content-angle = 0deg
}
if (label!=""){
component-content(start,end, l-modifier, label, angle,pad:0.25)
}
cetz.draw.group(name: name, ctx => {
cetz.draw.rotate(angle, origin: start)
let component-length = 0.5
let total-length = calc.sqrt(calc.pow(y2 - y1,2) + calc.pow((x2 - x1),2))
cetz.draw.line(
(rel: (angle:0, radius: 0.25), to: start),
(rel: ((total-length - component-length), 0)),
fill: none,
mark: (end: ">")
)
})
}
#let nos(start, end, l-modifier:"", label:none, name: none,flow: "", ..style) = {
let (x1,y1,..) = start
let (x2,y2,..) = end
let angle = calc.atan2(x2 - x1, y2 - y1)
let center-point = (start,50%,end)
let content-angle = angle
if (angle == 0deg or angle == 90deg){
content-angle = 0deg
}
component-content(start,end, l-modifier, label, angle,pad:0.5)
if (flow != ""){
component-flow(start,end,angle, flow)
}
cetz.draw.group(name: name, ctx => {
cetz.draw.rotate(angle, origin: start)
let component-length = 0.5
let total-length = calc.sqrt(calc.pow(y2 - y1,2) + calc.pow((x2 - x1),2))
cetz.draw.line(
start,
(rel: ((total-length - component-length)/2, 0)),
(rel: (0.5, 0.25)),
fill: none
)
cetz.draw.line((rel: (0.5 + (total-length - component-length)/2, 0), to: start), (rel: ((total-length - component-length)/2, 0)))
})
}
#let ospst(start, end, l-modifier:"", label:none, name: none, ..style) = {
cetz.draw.set-style(mark: (fill: black))
let (x1,y1,..) = start
let (x2,y2,..) = end
let angle = calc.atan2(x2 - x1, y2 - y1)
let center-point = (start,50%,end)
let content-angle = angle
if (angle == 0deg or angle == 90deg){
content-angle = 0deg
}
component-content(start,end, l-modifier, label, angle,pad:0.5)
cetz.draw.group(name: name, ctx => {
cetz.draw.rotate(angle, origin: start)
let component-length = 0.5
let total-length = calc.sqrt(calc.pow(y2 - y1,2) + calc.pow((x2 - x1),2))
cetz.draw.line(
start,
(rel: ((total-length - component-length)/2, 0)),
(rel: (0.5, 0.35)),
fill: none
)
cetz.draw.line((rel: (0.5 + (total-length - component-length)/2, 0), to: start), (rel: ((total-length - component-length)/2, 0)))
cetz.draw.arc((rel: ((total-length - component-length)/2 + component-length/2, 0), to: start), radius: 0.4,start: 0deg, stop: 90deg, mark: (end: ">"))
})
}
#let C(start, end, element) = {
let (x1,y1,..) = start
let (x2,y2,..) = end
let angle = calc.atan2(x2 - x1, y2 - y1)
component-content(start,end, element.l-modifier, element.label, angle,pad:0.7)
if (element.flow != ""){
component-flow(start,end,angle, element.flow, flow-config: element.flow-config)
}
if (element.voltage != ""){
component-voltage(start,end,angle,element.voltage)
}
cetz.draw.group(name: element.name, ctx => {
cetz.draw.rotate(angle, origin: start)
let component-length = 0.4
let total-length = calc.sqrt(calc.pow(y2 - y1,2) + calc.pow((x2 - x1),2))
cetz.draw.line(
start,
(rel: ((total-length - component-length)/2, 0)),
fill: none
)
cetz.draw.line((rel: (0, -0.5)), (rel: (0, 1)),)
cetz.draw.line((rel: (0.4, 0)), (rel: (0, -1)),)
cetz.draw.line(
(rel: (0.4+(total-length - component-length)/2, 0), to: start),
(rel: ((total-length - component-length)/2, 0)),
fill: none
)
})
}
#let L(start, end, element) = {
let (x1,y1,..) = start
let (x2,y2,..) = end
let angle = calc.atan2(x2 - x1, y2 - y1)
component-content(start,end, element.l-modifier, element.label, angle,pad:0.4)
if (element.flow != ""){
component-flow(start,end,angle, element.flow, flow-config: element.flow-config)
}
if (element.voltage != ""){
component-voltage(start,end,angle,element.voltage,padding:1)
}
cetz.draw.group(name: element.name, ctx => {
cetz.draw.rotate(angle, origin: start)
let component-length = 1
let radius = 0.14
let height = 5/14
let start-angle = 230deg
let stop-angle = -50deg
let total-length = calc.sqrt(calc.pow(y2 - y1,2) + calc.pow((x2 - x1),2))
cetz.draw.merge-path({
cetz.draw.line(
start,
(rel: ((total-length - component-length)/2, 0)),
fill: none
)
cetz.draw.arc((), start: 180deg, stop: stop-angle, radius: radius, name: "arc1")
cetz.draw.arc("arc1.end", start: start-angle, stop: stop-angle, radius: radius, name: "arc2")
cetz.draw.arc("arc2.end", start: start-angle, stop: stop-angle, radius: radius, name: "arc3")
cetz.draw.arc("arc3.end", start: start-angle, stop: stop-angle, radius: radius, name: "arc4")
cetz.draw.arc("arc4.end", start: start-angle, stop: 0deg, radius: radius, name: "arc5")
cetz.draw.line(
(rel: (1+(total-length - component-length)/2, 0), to: start),
(rel: ((total-length - component-length)/2, 0)),
fill: none
)
})
})
}
#let node(start) = {
cetz.draw.circle(start, radius: 0.075, stroke: black, fill: black)
}
#let node-empty(start) = {
cetz.draw.on-layer(1, {
cetz.draw.circle(start, radius: 0.075, stroke: black, fill: white)
})
}
#let node-content(start,node,node-anchor) = {
let angle = 90deg
if (node-anchor.contains("north")){
angle = -90deg
}
cetz.draw.content((rel: (angle: angle, radius: 0.25), to: start),text(eval(node)),anchor: node-anchor)
}
#let spdt(start,xscale:none,yscale:none,name:none) = {
let (x1,y1,..) = start
  // spdt is anchored at a single point; it has no `end`, so no rotation is applied
let in-point = (rel: (0.5, 0.25),to: start)
let out-1-point = start
let out-2-point = (rel: (0, 0.5),to: start)
cetz.draw.group(name: name, ctx => {
cetz.draw.line(
start,
in-point,
fill: none
)
node-empty(in-point)
node-empty(out-1-point)
node-empty(out-2-point)
anchors((
"in": in-point,
"out 1": out-1-point,
"out 2": out-2-point,
))
})
}
https://github.com/kotfind/hse-se-2-notes | https://raw.githubusercontent.com/kotfind/hse-se-2-notes/master/algo/lectures/2024-09-03.typ | typst | #import "/utils/math.typ": *
= Introduction

<NAME>

$ "Cumulative" = 0.25 dot "Colloquium" + 0.25 dot "Test" + 0.4 dot "Homework" + 0.1 dot "Seminars" $

$
"Final" = cases(
  round("Cumulative")\, &" if not taking the exam",
  0.5 dot "Cumulative" + 0.5 dot "Exam"\, &" if taking the exam"
)
$

Contests run for 1--2 weeks
= Data Structures

#def[
  #defitem[Abstract data type] --- we specify which operations the structure
  supports, but do not fix a concrete implementation
]

Containers:
- Sequential (e.g., vector)
- Associative (e.g., map)
- Adapters (have no iterators)
= Linear Data Structures

== Stack and Queue

#figure(table(
  columns: 2,
  table.header("Stack", "Queue"),
  "LIFO", "FIFO",
))

Implementations:
- Array
- Linked list
- Deque
- (for a queue) two stacks

== List

Singly linked:
- `begin()` points to the first element
- each element points to the next one
- `end()` points to nothing

Doubly linked:
- each element additionally points to the previous one

A list may contain a cycle

A cyclic list may have an acyclic prefix
== Stack with Minimum

Alongside the main stack we maintain a stack of prefix minimums

#figure(table(
  columns: 2,
  table.header(`st`, `min_st`),
  $2$, $2$,
  $5$, $3$,
  $3$, $3$,
  $6$, $4$,
  $4$, $4$,
))

The minimum of the stack is `min_st.top()`
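The prefix-minimum scheme above translates directly into code. A minimal Python sketch (added for illustration; not code from the lecture):

```python
class MinStack:
    """Stack that also reports its current minimum in O(1).

    Alongside the main stack `st` we keep `min_st`, whose i-th entry
    is the minimum over the first i+1 pushed elements.
    """

    def __init__(self):
        self.st = []
        self.min_st = []

    def push(self, x):
        self.st.append(x)
        # prefix minimum: the new element vs. the previous minimum
        self.min_st.append(x if not self.min_st else min(x, self.min_st[-1]))

    def pop(self):
        self.min_st.pop()
        return self.st.pop()

    def minimum(self):
        return self.min_st[-1]
```

Pushing 4, 6, 3, 5, 2 reproduces the table above, read bottom-up.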
== Queue via Two Stacks

We keep two stacks: `st1` and `st2`

Push:
```
st1.push(x)
```

Pop:
```
if st2 is empty:
    move all of st1 into st2
st2.pop()
```

=== Complexity

Amortized $O(1)$

Each element undergoes at most 3 operations:
+ Pushed onto `st1`
+ Moved from `st1` to `st2`
+ Popped from `st2`
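The push/pop pseudocode above, as runnable Python (an illustrative sketch, not from the lecture):

```python
class TwoStackQueue:
    """FIFO queue built from two LIFO stacks (plain Python lists)."""

    def __init__(self):
        self.st1 = []  # incoming side
        self.st2 = []  # outgoing side, holding elements in reversed order

    def push(self, x):
        self.st1.append(x)

    def pop(self):
        if not self.st2:
            # move all of st1 into st2, reversing the order once
            while self.st1:
                self.st2.append(self.st1.pop())
        return self.st2.pop()
```

Each element is appended to `st1` once, moved once, and popped once, matching the three operations counted above.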
== Vector

+ Initially, memory is allocated for a few elements
+ We can `push` while `v.size() < v.capacity()`
+ When space runs out, the vector allocates twice as much memory and copies
  its elements there
+ `capacity()` does not change on removal

=== Complexity

Amortized $O(1)$

$n$ operations require $n + n/2 + n/4 + ... + 1 -> 2n = O(n)$ copies
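The doubling argument can be checked empirically with a toy dynamic array that counts how many element copies its reallocations perform (an illustration added here, not lecture code):

```python
class ToyVector:
    """Dynamic array that doubles its capacity when full."""

    def __init__(self):
        self.capacity = 1
        self.size = 0
        self.data = [None]
        self.copies = 0  # total element copies done by reallocations

    def push(self, x):
        if self.size == self.capacity:
            # out of space: allocate twice as much and copy everything over
            self.capacity *= 2
            new_data = [None] * self.capacity
            for i in range(self.size):
                new_data[i] = self.data[i]
                self.copies += 1
            self.data = new_data
        self.data[self.size] = x
        self.size += 1
```

After `n` pushes the copy counter stays below `2n`, matching the geometric-series bound above.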
= Potential Method

A method for computing amortized complexity

$ phi_0 -> phi_1 -> ... -> phi_n $

#def[
  #defitem[Potential] --- a function of our data structures
]

#def[
  #defitem[Amortized running time] --- $a_i = t_i + Delta phi$
]
$ sum a_i = sum (t_i + Delta phi) = sum t_i + (phi_n - phi_0) $
$
(sum t_i) / n
= (phi_0 - phi_n) / n + (sum a_i) / n
<= (phi_0 - phi_n) / n + max(a_i)
$
We want to minimize $max(a_i)$ and $(phi_0 - phi_n) / n$

== For the Two-Stack Queue

$ phi_i := 2n_1 $

where $n_1$ is the number of elements in `st1`
#figure(table(
  columns: 2,
  table.header(`push`, `pop`),
  $t_i = 1$, $t_i = 1 " or " 2n_1 + 1$,
  $a_i = 1 + 2 = 3$, $a_i = 1 " or " 2n_1 + 1 + (0 - 2n_1) = 1$,
))
https://github.com/SWATEngineering/Docs | https://raw.githubusercontent.com/SWATEngineering/Docs/main/src/2_RTB/PianoDiProgetto/sections/PianificazioneSprint/QuartoSprint.typ | typst | MIT License | #import "../../functions.typ": glossary
=== Quarto #glossary[sprint]
*Inizio*: Venerdì 15/12/2023
*Fine*: Giovedì 21/12/2023
*Obiettivi dello #glossary[sprint]*:
- Proseguire la stesura delle _Norme di Progetto_, ponendo particolare attenzione sul controllo di versione e la gestione delle #glossary[repository]\;
- Proseguire la stesura del _Piano di Progetto_, con:
- Aggiornamento di pianificazione e preventivo pertinenti allo #glossary[sprint] 4 e l'inserimento del consuntivo pertinente allo #glossary[sprint] 3;
- Creazione di un'automazione per creare le tabelle e i grafici pertinenti ai consuntivi a partire da quanto riportato dal team all'interno dello #glossary[spreadsheet] "Time & Resource Manager".
- Proseguire la stesura del _Piano di Qualifica_:
- Inizio stesura del cruscotto delle metriche con valutazione delle metriche sui processi primari (fornitura) e di supporto (#glossary[documentazione]) rispetto agli #glossary[sprint] 1 e 2 e inclusione di grafici di confronto tra metriche per ricavare informazioni utili sull'andamento effettivo del progetto;
- Inizio revisione in stile #glossary[walkthrough] dell'_Analisi dei Requisiti_. |
https://github.com/jassielof/IT425-P2 | https://raw.githubusercontent.com/jassielof/IT425-P2/main/Documento%20de%20Parcial.typ | typst | #import "template/apa7/lib.typ": *
#import "@preview/charged-ieee:0.1.2": *
#show: apa7.with(
title: [Parcial 2: Aplicación de Lean Six Sigma en la Industria de Micromovilidades],
custom-authors: [<NAME>],
custom-affiliations: [Facultad de Ingeniería, Universidad de Santa Cruz de la Sierra],
course: [IT425: Gestión de la Calidad Total],
instructor: [Mgs.<NAME>],
due-date: [Miércoles, 23 de octubre de 2024],
language: "es",
region: "bo",
toc: true,
running-head: [Industria de Micromovilidades],
paper-size: "a4",
implicit-introduction-heading: false,
keywords: (
"Lean Six Sigma",
"Micromovilidades",
"Calidad",
"Productividad",
"Sustentabilidad",
"Competitividad",
),
)
= Context: The Micromobility Industry

== Personal Rationale

I chose this context because the micromobility industry is transforming how people move around cities, offering a sustainable, accessible alternative to traditional transport. In cities like Santa Cruz de la Sierra, where traffic and pollution are constant challenges, implementing micromobility solutions can make a significant difference.

Worldwide, micromobility not only reduces the carbon footprint but also promotes a healthier lifestyle by encouraging the use of bicycles and electric scooters instead of motor vehicles. The industry is growing rapidly, with a global market expected to reach 300 billion dollars by 2030, which demonstrates its economic potential and its role in the urban mobility of the future.

Moreover, this context is particularly interesting to me because I have a close connection to sustainability and alternative transport through my family, which has always promoted cycling and respect for the environment. This closeness has let me see the benefits of micromobility first-hand and understand the technical and logistical challenges it faces.

Finally, I believe that applying Lean Six Sigma in the micromobility industry could significantly improve operational efficiency and service quality, reducing costs and increasing user satisfaction. This would benefit not only end users but also the sustainability and competitiveness of the industry.

Some examples of micromobility solutions and initiatives are:
=== Citroën Ami

#quote(attribution: [@CitroenAmi])[
  The Citroën Ami celebrates its fourth anniversary on the market, consolidated as an innovative and surprising model in the world of urban mobility.

  Since its launch in 2020, the Ami has won over more than 65,000 customers, becoming a genuine social phenomenon that has transformed mobility in urban and suburban areas. Now it is time to keep moving down this path with a touch-up to its styling.
]

#figure(
  image("assets/2024-10-22-11-52-43.png"),
  caption: [Reference photo from Motor 16],
)
=== Regulation in Argentina

#quote(attribution: [@Diario3DosProyectos])[
  The Commission on Legislation, Interpretation and Agreements is studying two similar (but separate) initiatives to regulate circulation with so-called "urban micromobility" vehicles, such as electric scooters.

  The Unión por la Patria bloc proposed establishing a regulatory framework for their use, while the Juntos bloc once again submitted an initiative similar to the one passed last year for the Municipality to adhere to the provisions of the National Road Safety Agency (ANSV) that govern circulation in these units.

  Among other matters, it cleared an authorization to sign a lease with the Supreme Court of Justice for the property at 25 de Mayo 162 where the Criminal Oral Court operates.
]

#figure(
  image("assets/2024-10-22-11-58-28.png"),
  caption: [Reference photo from Diario 3],
)
=== Micromobility: Challenges and Opportunities

#quote(attribution: [@ComunicarSeMicromovilidad])[
  After the pandemic, micromobility has consolidated as a trend toward sustainable mobility. Technology and automotive companies are advancing the development of scooters, bicycles and compact vehicles to respond to new consumer demands.

  More than half of the world's population lives in cities, and the trend is rising, with a major impact on the mobility sector. The growing number of city dwellers is accompanied by a growing number of vehicles, which also means more pollution. Mobility was also affected after the pandemic: in some cases individual transport remained the preference, generating more traffic, but in parallel micromobility gained ground. This growing trend covers a range of light vehicles such as bicycles, electric scooters and mopeds.

  According to McKinsey's consumer survey, the use of micromobility could increase. The study showed that the number of respondents willing to use micromobility in the new normal will rise by 9 % for private micromobility and 12 % for shared micromobility compared with pre-crisis levels.

  Therefore, these private and shared micromobility solutions will see a full recovery in passenger-kilometers traveled, without a significant drop from pre-crisis levels.
]

#figure(
  caption: [Reference photo from ComunicarSe],
  image("assets/2024-10-22-11-54-07.png"),
)
=== MOBI LATAM

#quote(attribution: [@OpinionMobiLatam])[
  The Bolivian urban eco-mobility and clean energy company MOBI has partnered with the US battery and lithium company Energy Exploration Technologies Inc. (EnergyX). The two companies will work to create a domestic lithium battery supply chain in Bolivia to develop the region's electric mobility market. The South American nation has been working to create the world's first sustainable lithium economy, capable of producing and processing the metal for national and international needs. The partnership between MOBI and EnergyX highlights Bolivia's thriving innovation environment and will take the country one step closer to becoming a green energy superpower.

  MOBI was founded in 2020 by <NAME> and <NAME>, two successful entrepreneurs with experience in the technology and automotive industries, and it is revolutionizing micromobility in Latin America. The company closed the largest seed round in history for a startup in Bolivia, with more than \$ 1.3 million at a \$ 5 million valuation. MOBI is quickly establishing itself as a leader in sustainable mobility while developing a clean transport ecosystem that will make the region a world-class case study in electric transport.
]

#figure(
  image("assets/2024-10-22-11-49-32.png"),
  caption: [Reference photo from Opinión],
)
=== Bolivian Startup: Bolivia Emprende

#quote(attribution: [@BoliviaEmprendeStartup])[
  Fifteen months after starting operations in Santa Cruz de la Sierra, MOBI is ready to take the first leaps of its national and international expansion. The Bolivian sustainable micromobility startup announced that in September it will reach two new cities in the country: Tarija and Cochabamba. But the good news will travel much farther. MOBI will also begin operations in Asunción (Paraguay) and Viña del Mar (Chile), closing the year with five cities and three countries.
]

#figure(
  image("assets/2024-10-22-11-54-55.png"),
  caption: [Reference photo from Bolivia Emprende. Source: <NAME>],
)
== Theoretical Rationale

Micromobility refers to short-distance urban transport solutions such as shared bicycles, electric scooters and kick scooters. These solutions aim to reduce traffic congestion, improve air quality and promote a healthier lifestyle.~@BBVABlog

=== The Importance of Micromobility

==== Reducing Congestion and Pollution

According to a BBVA article, micromobility can significantly reduce congestion and pollution in cities. For example, in Bern, Switzerland, integrating shared bicycles into the transport network has improved urban mobility.

==== Intermodality

Micromobility makes it easier to combine different modes of transport, letting users move more efficiently. A study by Electromov highlights the importance of intermodality and how micromobility can integrate with public transport.~@ElectroMovCL

==== Economic Impact

Micromobility not only benefits the environment but also has a positive economic impact. According to an IE University report, the micromobility market is expected to reach 440 billion dollars by 2030.~@IEUniversity

==== Public Health

Promoting the use of bicycles and electric scooters contributes to a healthier life by reducing dependence on motor vehicles.
=== Implementing Lean Six Sigma in the Micromobility Industry

Implementing Lean Six Sigma in the micromobility industry is grounded in the critical need to optimize production processes, improve quality and increase operational efficiency in a sector experiencing exponential growth. This rationale is supported by research demonstrating the effectiveness and relevance of the methodology in mobility and light-manufacturing industries.
#quote(attribution: [@Ayyildizz2023])[
As an alternate mode of transportation, the use of shared bicycle and scooter systems, which are now being brought to the forefront under the term micromobility, has recently been increasing. Among them, electric scooters called e-scooters have started to be preferred frequently due to their ease of use. With this study, the factors to be considered for the adaptation of e-scooters have been determined and an e-scooter situation analysis has been carried out for Turkiye. For this reason, the factors brought together within the scope of the SWOT analysis have been examined and their priority degrees have been calculated. The factors considered in the SWOT analysis have been weighted with the interval type-2 fuzzy analytic hierarchy process (AHP). As a result, the most important and least important factors have been determined.
]
#quote(attribution: [@SZMELTERJAROSZ2023100263])[
Car culture is very strong in Poland. Urban sprawl, congestion and external effects of transport influence the perceived quality of life in urban and suburban areas. Therefore, it is essential to examine the reasons for choosing transport modes and willingness to change the mobility choices for sustainable ones. Using the Theory of Planned Behaviour within a structural equation modelling (SEM) approach, the goal was to comprehensively explain the perceived quality of life, transport behaviour and the underlying psychological processes, norms and beliefs, and to show the interdependencies between them. The main research method was the survey. Then, SEM models have been estimated to verify which values have the strongest effect on the transport behaviour and which variables significantly mediate this relation. The models were estimated based on a randomly-selected sample of Polish citizens, urban, suburban and rural residents (n = 1700) in October 2020 and October 2021. The results for residents of cities and suburban areas were compared with those for rural dwellers to refine the results significant for creating the urban policies. The research results are focused on explaining the interrelations between the perceived quality of life, mobility choices and residents' priorities and motivations to improve the urban policies.
]
= Company Description

_MicroMove Technologies_ is a company in the urban micromobility sector, founded in 2018 in Santa Cruz de la Sierra. _MicroMove Technologies_ specializes in the design, manufacture and distribution of electric scooters, electric bicycles and their essential components. The company currently has 180 employees distributed across the following key areas and departments:

== Departments and Staff

- *Production and Assembly: 80 employees*
  - Final vehicle assembly
  - Manufacture of specific components
  - Production line control
  - Functional testing
- *Quality Control: 15 employees*
  - Component inspection
  - Safety testing
  - Product certification
  - Quality audits
- *Logistics and Fleet Management: 25 employees*
  - Inventory management
  - Product distribution
  - Shared fleet maintenance
  - Supplier coordination
- *Sales and Marketing: 15 employees*
  - Market development
  - Corporate client management
  - Digital marketing
  - Customer service
- *Administration and Finance: 10 employees*
  - Accounting and finance
  - Strategic planning
  - Budget management
  - Financial analysis
- *Research and Development (R&D): 15 employees*
  - New product design
  - Innovation in batteries and electrical systems
  - Control software development
  - Performance optimization
- *Human Resources: 5 employees*
  - Talent management
  - Training and development
  - Workplace well-being
  - Recruitment
- *Maintenance and Infrastructure: 15 employees*
  - Equipment maintenance
  - Facilities management
  - Technical support
  - Preventive maintenance
== Products and Services

_MicroMove Technologies_ produces:
- Electric scooters for personal use
- Electric bicycles
- Specific electronic components
- Fleet management software
- Shared mobility services

== Current Situation

The company is in a phase of rapid growth, driven by rising demand for sustainable mobility solutions in urban areas. It currently runs its production processes with a semi-automated approach, with significant room for improvement in:
- Energy efficiency
- Process automation
- Quality control
- Supply chain management
- Digital systems integration

Departments operate in a semi-integrated way, with independent management systems interconnected through a basic ERP system. The company is modernizing its facilities and seeks to implement continuous improvement methodologies to optimize its operations and stay competitive in the market.

== Infrastructure

The company has:
- A 5,000 m² production plant
- A research and development center
- A main warehouse
- Three service and maintenance centers
- A network of charging and maintenance stations for its shared fleet
== Strategic Framework

=== Mission

To transform urban mobility by providing innovative, safe and sustainable electric transport solutions that improve people's quality of life and contribute to building cleaner, more efficient cities.

=== Vision

To be the Latin American leader in micromobility solutions by 2030, recognized for our technological innovation, commitment to sustainability and excellence in customer service, driving the transformation toward smarter, greener cities.

=== Corporate Values

==== Innovation

- We foster creativity and disruptive thinking
- We invest continuously in research and development
- We adopt new technologies and methodologies
- We promote continuous improvement across all our processes

==== Sustainability

- We prioritize positive environmental impact
- We develop energy-efficient products
- We implement eco-friendly production practices
- We promote the circular economy in our operations

==== Safety

- We guarantee the highest safety standards in our products
- We prioritize the safety of our employees
- We maintain rigorous quality controls
- We foster a proactive safety culture

==== Excellence

- We pursue the highest quality in all our products and services
- We strive to exceed customer expectations
- We promote continuous professional development
- We maintain high standards in all our processes

==== Collaboration

- We encourage teamwork
- We build solid relationships with stakeholders
- We promote an inclusive work environment
- We value diversity of perspectives

=== Corporate Policies

==== Quality Policy

- Comply with international quality standards (ISO 9001:2015)
- Implement quality controls at every stage of the production process
- Conduct regular quality audits
- Maintain a continuous improvement system
- Continuously train our staff in quality management

==== Environmental Policy

- Compliance with national and international environmental regulations
- Continuous reduction of our carbon footprint
- Responsible waste management and recycling
- Efficient use of energy resources
- Promotion of environmentally responsible practices

==== Occupational Health and Safety Policy

- Compliance with workplace safety regulations
- Risk prevention programs
- Continuous safety training
- Preventive maintenance of equipment and facilities
- Emergency response protocols

==== Human Resources Policy

- Promotion of professional development
- Equal opportunity
- Fair, competitive compensation
- Work-life balance
- A positive, safe work environment

==== Innovation Policy

- Continuous investment in R&D
- Promotion of an innovative culture
- Collaboration with academic institutions
- Protection of intellectual property
- Development of new technologies

==== Customer Service Policy

- Personalized, professional service
- Fast response to questions and complaints
- After-sales follow-up
- Continuous improvement based on feedback
- Satisfaction guarantee
=== Strategic Objectives 2024-2026

==== Growth and Expansion

+ Increase market share by 25%
+ Expand operations to three new major cities
+ Develop two new product lines
+ Grow the distribution network by 40%

==== Innovation and Development

+ Launch a new generation of longer-lasting batteries
+ Deploy an IoT system across the entire fleet
+ Develop an improved fleet management platform
+ Obtain three new technology patents

==== Sustainability

+ Reduce the carbon footprint by 30%
+ Reach 60% renewable energy use
+ Implement a battery recycling program
+ Obtain ISO 14001 certification

==== Operations

+ Reduce operating costs by 15%
+ Improve production efficiency by 20%
+ Automate 40% of processes
+ Reduce maintenance time by 25%

==== Customer Satisfaction

+ Reach 90% customer satisfaction
+ Reduce response time by 40%
+ Implement a loyalty program
+ Improve NPS by 15 points
= Analysis of Problems and Challenges at MicroMove Technologies

== Infrastructure and Operations Challenges

=== Fleet Management and Maintenance

- *Unit Dispersion*: _MicroMove_'s fleet of electric scooters and bicycles is hard to distribute optimally, with units piling up in high-demand zones and running short in other areas.
- *Preventive Maintenance*: 25% of units are out of service due to:
  - Accelerated component wear
  - Vandalism and misuse
  - Delays in preventive maintenance
  - Lack of spare parts on time

=== Support Infrastructure

- *Charging Stations*: A 40% deficit in charging points relative to current demand
- *Storage*: Limited capacity at maintenance centers
- *Monitoring Systems*: Insufficient GPS and tracking coverage

== Quality and Safety Problems

=== User Safety

- *Reported Incidents*:
  - 15% of users report stability problems
  - 20% of complaints relate to brake system failures
  - 10% of reports concern electrical problems
- *Environmental Factors*:
  - Accelerated deterioration from exposure to weather
  - Damage from uneven pavement
  - Performance problems on slopes

=== Quality Control

- *Manufacturing Defects*:
  - A 3.5% defect rate in critical components
  - Variability in battery performance
  - Inconsistencies in final assembly

== Environmental Impact and Sustainability

=== Waste Management

- *Batteries*:
  - Only 60% of batteries are effectively recycled
  - High costs in electronic waste disposal
- *Components*:
  - Accumulation of obsolete parts
  - Difficulty separating materials for recycling

=== Carbon Footprint

- *Operations*:
  - 30% of energy comes from non-renewable sources
  - Significant emissions in maintenance logistics
- *Production*:
  - High energy consumption on the production line
  - Dependence on non-sustainable materials

== Operational and Process Challenges

=== Production Efficiency

- *Production Line*:
  - Cycle time 25% above target
  - Bottlenecks in final assembly
  - Underused installed capacity
- *Inventory Management*:
  - Excess inventory of non-critical components
  - Frequent shortages of essential components

=== Systems Integration

- *Technology*:
  - Non-integrated legacy systems
  - Data duplication across departments
  - Low automation in key processes

== Identified Improvement Opportunities

=== Immediate Priorities

- Implement a predictive maintenance system
- Improve quality control of critical components
- Optimize fleet management and distribution
- Add charging and maintenance points

=== Medium-Term Objectives

- Develop a comprehensive recycling program
- Automate key production processes
- Implement an integrated ERP system
- Establish a centralized control center
= Variables de Medición y Control para Simulación y Mejora
== Variables de Calidad
=== Defectos y Fallas
- *DPMO (Defectos Por Millón de Oportunidades)*
- Defectos en ensamblaje final
- Fallas en componentes eléctricos
- Problemas de software
- Defectos en baterías
- *Tasa de Defectos por Categoría (%)*
- Defectos críticos de seguridad
- Defectos funcionales
- Defectos estéticos
- Defectos de software
- *First Pass Yield (%)*
- Tasa de unidades correctas primera vez
- Yield por estación de trabajo
- Yield por línea de producción
=== Reprocesos
- *Costos de Reproceso (\$)*
- Mano de obra
- Materiales
- Tiempo de máquina
- Energía consumida
- *Tiempo de Reproceso (horas)*
- Por tipo de defecto
- Por componente
- Por línea de producción
== Variables Operativas
=== Tiempos de Proceso
- *Cycle Time (minutos)*
- Tiempo de ensamblaje
- Tiempo de pruebas
- Tiempo de empaque
- Tiempo de carga inicial
- *Lead Time (días)*
- Tiempo total de producción
- Tiempo de aprovisionamiento
- Tiempo de distribución
- *Downtime (%)*
- Paradas programadas
- Paradas no programadas
- Tiempo de setup
- Mantenimiento correctivo
=== Eficiencia
- *OEE (Overall Equipment Effectiveness)*
- Disponibilidad
- Rendimiento
- Calidad
- *Productividad*
- Unidades por hora
- Unidades por operario
- Eficiencia energética
== Variables de Mantenimiento
=== Flota en Servicio
- *Disponibilidad de Vehículos (%)*
- Por zona geográfica
- Por tipo de vehículo
- Por temporada
- *MTBF (Mean Time Between Failures)*
- Por componente crítico
- Por tipo de vehículo
- Por condición de uso
- *MTTR (Mean Time To Repair)*
- Por tipo de falla
- Por componente
- Por ubicación
=== Costos de Mantenimiento
- *Costo por Unidad (\$)*
- Mantenimiento preventivo
- Mantenimiento correctivo
- Repuestos
- Mano de obra
== Variables de Seguridad
=== Incidentes
- *Tasa de Accidentes*
- Por tipo de vehículo
- Por zona geográfica
- Por condición climática
- Por hora del día
- *Severidad de Incidentes*
- Daños menores
- Daños mayores
- Incidentes críticos
=== Fallas de Seguridad
- *Tasa de Fallas Críticas*
- Sistema de frenos
- Sistema eléctrico
- Estructura
- Software
== Variables Ambientales
=== Consumo y Eficiencia
- *Consumo Energético*
- kWh por unidad producida
- kWh por km recorrido
- Eficiencia de carga
- *Huella de Carbono*
- CO2 por unidad producida
- CO2 por km operado
- Emisiones totales
=== Residuos
- *Tasa de Reciclaje (%)*
- Baterías
- Componentes electrónicos
- Materiales estructurales
- *Generación de Residuos*
- Kg por unidad producida
- Residuos peligrosos
- Residuos reciclables
== Variables Financieras
=== Costos Operativos
- *Costo por Unidad (\$)*
- Materiales directos
- Mano de obra directa
- Overhead
- Logística
- *Costos de Calidad (\$)*
- Prevención
- Evaluación
- Fallas internas
- Fallas externas
=== Impacto Financiero
- *ROI de Mejoras*
- Por proyecto
- Por área
- Por tipo de mejora
- *Ahorro Generado (\$)*
- Reducción de defectos
- Eficiencia energética
- Optimización de procesos
- Reducción de desperdicios
== Variables de Satisfacción
=== Cliente Final
- *NPS (Net Promoter Score)*
- *CSAT (Customer Satisfaction)*
- *Tasa de Quejas*
- *Tiempo de Respuesta*
=== Experiencia de Usuario
- *Disponibilidad del Servicio (%)*
- *Tiempo de Espera*
- *Facilidad de Uso*
- *Confiabilidad del Servicio*
= Simulación de Variables
== Cycle Time (minutos): Tiempo de pruebas
- Tiempo promedio de pruebas: 45 minutos
- Producción mensual: 1,500 unidades
- Datos históricos de los últimos 24 meses
- Meses del Año / Tiempo de Pruebas (minutos)
#figure(
image("assets/2024-10-22-14-52-22.png"),
caption: [Gráfico de barras y puntos de Tiempo de Pruebas],
)
== Tasa de Accidentes: Por condición climática
- Condición / Tasa de accidentes (por 1000 viajes)
- Condiciones:
- Lluvia
- Sol
- Nublado
- Noche
- Viento
- Otros
#figure(
image("assets/2024-10-22-14-59-19.png"),
caption: [Gráfico de barras y de pastel],
)
== Consumo Energético: Eficiencia de carga
- Consumo estándar: 0.5 kWh/carga completa
- Semanas del año / Eficiencia de carga (%)
#figure(
image("assets/2024-10-22-15-08-33.png"),
caption: [Gráfico de puntos],
)
= Aplicación de Lean Six Sigma---¿Cómo puede LSS ser aplicado en la industria de micromovilidades para mejorar la calidad, productividad, sustentabilidad y competitividad de la empresa?
Lean Six Sigma puede ser una herramienta bastante útil para mejorar la industria de micromovilidades en términos de calidad, productividad, sustentabilidad y competitividad. Al integrar los principios Lean con las metodologías Six Sigma, las empresas pueden optimizar las operaciones y reducir el desperdicio.
== Mejora de Calidad
=== Reducción de Defectos
LSS emplea herramientas estadísticas para identificar y eliminar defectos en productos y servicios, asegurando estándares de alta calidad.~@ahmed_mousa_2012
=== Satisfacción del cliente
Al enfocarse en las necesidades del cliente y reducir tiempos de espera, las empresas pueden mejorar la experiencia del usuario y lealtad.~@yury_klochkov__2019
== Mejora del Producto
=== Optimización de Procesos
Implementar técnicas Lean como el Value Stream Mapping ayuda a identificar actividades que no agregan valor, mejorando así el flujo de trabajo y la eficiencia.~@sanjay_h__zala__2024
=== Transformación Rápida
La combinación de LSS permite la adaptación rápida a cambios de mercado, esencial en la dinámica del sector de micromovilidades.~@ahmed_mousa_2012
== Enfoque de Sustentabilidad
=== Reducción de Desperdicios
LSS promueve prácticas que minimizan desperdicios y emisiones, alineándose con las metas de sustentabilidad medioambiental.~@jag_mohan__2024
=== Eficiencia de Recursos
Al optimizar el uso de recursos, las empresas pueden reducir los costos de operación mientras mejoran su imagen ambiental.~@abdallah_izzat_mahmoud_barakat_2024
== Ventaja Competitiva
=== Reacción de Mercado
LSS permite a las empresas responder rápidamente a las demandas del mercado, manteniendo la competitividad en una industria en rápida evolución.~@sanjay_h__zala__2024
=== Facilidad de Innovación
La integración de tecnologías recientes con LSS puede llevar a innovaciones que mejoren la posición de mercado de la empresa.~@abdallah_izzat_mahmoud_barakat_2024
Si bien LSS puede ofrecer varios beneficios, algunas empresas pueden enfrentar retos en su implementación debido a la resistencia al cambio o a la falta de entendimiento de sus principios.
Abordar estos impedimentos es crucial para una adopción exitosa en la industria de la micromovilidad.
#pagebreak()
#outline(
target: figure.where(kind: image),
title: [Figuras],
)
#pagebreak()
#bibliography(
"Referencias.yml",
full: true,
) |
|
https://github.com/CreakZ/mirea-algorithms | https://raw.githubusercontent.com/CreakZ/mirea-algorithms/master/reports/work_5/work_5.typ | typst | #import "../title_page_template.typ": title_page
#import "../layouts.typ": head1, head2, un_heading, indent
#set page(margin: 2cm)
#set text(size: 14pt, font: "New Computer Modern", kerning: false)
#set page(paper: "a4")
#set heading(numbering: "1.")
#set par(justify: true)
#set par(leading: 0.65em)
#set figure(supplement: [Рисунок])
#set figure.caption(separator: [ -- ])
#title_page(5, [Строки])
#un_heading([Оглавление])
#outline(
title: none
)
#pagebreak()
#set page(numbering: "1")
#head1([= Условие задачи])
#par(
[#indent Требуется выполнить два задания по обработке текста.]
)
#par(
[#indent В первом задании для представления текста использовать ноль-терминированную строку языка C. Для управления строкой использовать функции файла заголовка (библиотека функций управления ноль-терминированной строкой) #raw("<string.h>"). ]
)
#par(
[#indent Во втором задании использовать для представления текста строку класса #raw("<string>"). Для управления строкой использовать методы класса #raw("string").]
)
#indent Задание моего персонального варианта (№22):
#par(
[#indent Даны два предложения. Вывести слова, общие для этих двух предложений. Если таких слов нет, то вывести сообщение об этом.]
)
#pagebreak()
#head1([= Описание алгоритма решения задачи])
#par(
[#indent Первым делом для решения задачи нужно разбить оба предложения на отдельные слова (строки), а затем попарно их сравнить. Для этого можно воспользоваться алгоритмом перебора с вложенным циклом. Допустим, у нас есть 2 массива со словами (строками) длинами $n$ и $m$ соответственно. Запустим цикл от 0 до $n-1$ для первого массива, а внутри данного цикла запустим еще один от 0 до $m-1$ для второго массива (таким образом мы гарантируем однократное сравнение каждого слова первого массива с каждым словом второго массива). Далее просто сравним элементы каждой пары. Если элементы равны, то выведем любой из этих элементов на экран. Если среди пар не найдётся ни одной с равными элементами, то выведем сообщение об этом. ]
)
#head1([= Реализация задания 1])
#head2([== Описание применяемых в реализации методов строки функций из библиотеки #raw("<string.h>")])
#par(
[
+ #raw("strtok") -- поиск лексем строки по разделителям.
+ #raw("strlen") -- возврат длины строки.
+ #raw("strcpy") -- копирование содержимого строки.
+ #raw("strncpy") -- копирование $n$ первых символов строки.
+ #raw("strcat") -- конкатенация двух строк.
+ #raw("memcpy") -- копирование блока данных из строки.
]
)
#head2([== Код реализации задания 1])
#par([1. Код файла #raw("myString.h")])
#let c_str_h = read("../../src/work_5/null_terminated_string/headers/my_string.h")
#raw(c_str_h, lang: "cpp")
#par([2. Код файла #raw("myString.cpp")])
#let c_str_cpp = read("../../src/work_5/null_terminated_string/source/my_string.cpp")
#raw(c_str_cpp, lang: "cpp")
#par([3. Код файла #raw("main.cpp")])
#let c_str_main = read("../../src/work_5/null_terminated_string/main.cpp")
#raw(c_str_main, lang: "cpp")
#head2([== Время выполнения задачи варианта])
#par([Таблица 1 -- Таблица зависимости времени выполнения программы от объема входных данных])
#figure(
table(
columns: 3,
table.header(
[Номер], [Объем входных данных, символы], [Время выполнения, мс]
),
[1], [10], [2],
[2], [100], [29],
[3], [500], [54],
[4], [1000], [103]
),
)
#head2([== Тестирование])
#head2([=== Таблица с тестами])
#text(
[Таблица 2 -- Таблица тестов для задания 1]
)
#figure(
table(
columns: 3,
align: (center, left, center),
table.header(
[Номер], [Входные данные], [Ожидаемый вывод]
),
[1], [Escape the society \ We live in society], [society],
[2], [The quick brown fox jumps over the lazy dog \ A quick brown dog jumps over a lazy fox], [quick brown jumps over lazy]
)
)
#pagebreak()
#head2([=== Результаты тестирования])
#figure(
image(
"images/test1_1.png", width: 80%
),
caption: [Тест №1]
)
#figure(
image(
"images/test1_2.png", width: 80%
),
caption: [Тест №2]
)
#pagebreak()
#head1([= Реализация задания 2])
#head2([== Описание применяемых в реализации методов строки функций из библиотеки #raw("<string>")])
#par(
[
+ #raw("resize") -- изменение текущего размера строки
+ #raw("substr") -- возврат подстроки
+ #raw("insert") -- вставка подстроки
+ #raw("erase") -- удаление подстроки
+ #raw("find_first_of") -- возврат первого индекса любого из набора символов
+ #raw("empty") -- проверка строки на пустоту
]
)
#head2([== Код реализации задания 2])
#par([1. Код файла #raw("myString.h")])
#let std_str_h = read("../../src/work_5/std_string/headers/my_string.h")
#raw(std_str_h, lang: "cpp")
#par([2. Код файла #raw("myString.cpp")])
#let std_str_cpp = read("../../src/work_5/std_string/source/my_string.cpp")
#raw(std_str_cpp, lang: "cpp")
#par([3. Код файла #raw("main.cpp")])
#let std_str_main = read("../../src/work_5/std_string/main.cpp")
#raw(std_str_main, lang: "cpp")
#head2([== Время выполнения задачи варианта])
#figure(
table(
columns: 3,
table.header(
[Номер], [Объем входных данных, символы], [Время выполнения, мс]
),
[1], [10], [4],
[2], [100], [32],
[3], [500], [57],
[4], [1000], [122]
),
)
#head2([== Тестирование])
#head2([=== Таблица с тестами])
#par([Таблица 3 -- Таблица тестов для задания 2])
#figure(
table(
columns: 3,
align: (center, left, center),
table.header(
[Номер], [Входные данные], [Ожидаемый вывод]
),
[1], [Don't take my snacks, please \ Can you borrow me some cheaps?], [Общих слов нет],
[2], [He has a decent problem to solve \ I am stuck on this problem], [problem]
)
)
#pagebreak()
#head2([=== Результаты тестирования])
#figure(
image(
"images/test2_1.png", width: 80%
),
caption: [Тест №1]
)
#figure(
image(
"images/test2_2.png", width: 80%
),
caption: [Тест №2]
)
#head1([= Выводы])
#par(
[#indent В результате выполнения работы были получены знания по выполнению операций над ноль-терминированной строкой, организации хранения строковых данных класса #raw("string") и использованию средств языка C++ для реализации алгоритмов обработки текстовых данных.]
) |
|
https://github.com/mrcinv/nummat-typst | https://raw.githubusercontent.com/mrcinv/nummat-typst/master/14_povprecna_razdalja.typ | typst | = Povprečna razdalja med dvema točkama na kvadratu
== Naloga
- Izpeljite algoritem, ki izračuna integral na več dimenzionalnem kvadru kot
večkratni integral tako, da za vsako dimenzijo uporabite isto kvadraturno formulo za enkratni integral.
- Pri implementaciji pazite, da ne delate nepotrebnih dodelitev pomnilnika.
- Uporabite algoritem za izračun povprečne razdalje med dvema točkama na enotskem
kvadratu $[0, 1]^2$ in enotski kocki $[0, 1]^3$.
- Za sestavljeno Simpsonovo formulo in Gauss-Legendrove kvadrature ugotovite, kako napaka
pada s številom izračunov funkcije, ki jo integriramo. Primerjajte rezultate s preprosto Monte-Carlo metodo (računanje vzorčnega povprečja za enostaven slučajni vzorec). |
|
https://github.com/typst-community/guidelines | https://raw.githubusercontent.com/typst-community/guidelines/main/README.md | markdown | Creative Commons Attribution Share Alike 4.0 International | # API & Style Guidelines
This repository contains a document which outlines guidelines for developers across the Typst ecosystem to help them create packages and templates with good APIs and consistent styling.
The guidelines are not hard rules, nor are they perfect; contributions and discussions to improve them are welcome.
## State
This document is currently a draft and sections may change as it and the ecosystem evolves.
## Contributing
If you wish to fix grammatical errors or improve the layout or appearance of the document feel free to open a PR directly.
If you wish to propose a change to the guidlines itself, please open an issue to discuss the changes with the community.
To preview, watch, and build the document locally, see the [Justfile]; you can use it anywhere by installing [just] and a sh-compatible shell.
Alternatively, you can run the recipes manually by setting the required environment variables and arguments yourself.
[Justfile]: ./Justfile
[just]: https://just.systems
|
https://github.com/jgm/typst-hs | https://raw.githubusercontent.com/jgm/typst-hs/main/test/typ/text/lang-02.typ | typst | Other | // Error: 17-21 expected string, found none
#set text(lang: none)
|
https://github.com/myst-templates/lapreprint-typst | https://raw.githubusercontent.com/myst-templates/lapreprint-typst/main/template.typ | typst | MIT License | #import "lapreprint.typ": *
#show: template.with(
title: "[-doc.title-]",
abstract: (
(
title: "Abstract",
content: [
[-parts.abstract-]
]
[# if parts.summary #]
), (
title: "Plain Language Summary",
content: [
[-parts.summary-]
]
[# endif #]
),
),
[# if doc.subtitle #]
subtitle: "[-doc.subtitle-]",
[# endif #]
[# if doc.short_title #]
short-title: "[-doc.short_title-]",
[# endif #]
[# if doc.open_access !== undefined #]
open-access: [-doc.open_access-],
[# endif #]
[# if doc.doi #]
doi: "[-doc.doi-]",
[# endif #]
[# if doc.date #]
date: datetime(
year: [-doc.date.year-],
month: [-doc.date.month-],
day: [-doc.date.day-],
),
[# endif #]
[# if doc.keywords #]
keywords: (
[#- for keyword in doc.keywords -#]"[-keyword-]",[#- endfor -#]
),
[# endif #]
[# if doc.bibtex #]
bibliography-file: "[-doc.bibtex-]",
[# endif #]
authors: (
[# for author in doc.authors #]
(
name: "[-author.name-]",
[# if author.orcid #]
orcid: "[-author.orcid-]",
[# endif #]
[# if author.affiliations #]
affiliations: "[#- for aff in author.affiliations -#][-aff.index-][#- if not loop.last -#],[#- endif -#][#- endfor -#]",
[# endif #]
),
[# endfor #]
),
affiliations: (
[# for aff in doc.affiliations #]
(
id: "[-aff.index-]",
name: "[-aff.name-]",
),
[# endfor #]
),
[# if doc.venue.title #]
venue: "[-doc.venue.title-]",
[# endif #]
[# if options.logo #]
logo: "[-options.logo-]",
[# endif #]
[# if options.kind #]
kind: "[-options.kind-]",
[# endif #]
margin: (
[# if parts.acknowledgements #]
(
title: "Acknowledgements",
content: [
[-parts.acknowledgements-]
],
),
[# endif #]
[# if parts.availability #]
(
title: "Data Avaliability",
content: [
[-parts.availability-]
],
),
[# endif #]
),
)
[-IMPORTS-]
[-CONTENT-] |
https://github.com/typst/packages | https://raw.githubusercontent.com/typst/packages/main/packages/preview/unichar/0.1.0/ucd/block-119A0.typ | typst | Apache License 2.0 | #let data = (
("NANDINAGARI LETTER A", "Lo", 0),
("NANDINAGARI LETTER AA", "Lo", 0),
("NANDINAGARI LETTER I", "Lo", 0),
("NANDINAGARI LETTER II", "Lo", 0),
("NANDINAGARI LETTER U", "Lo", 0),
("NANDINAGARI LETTER UU", "Lo", 0),
("NANDINAGARI LETTER VOCALIC R", "Lo", 0),
("NANDINAGARI LETTER VOCALIC RR", "Lo", 0),
(),
(),
("NANDINAGARI LETTER E", "Lo", 0),
("NANDINAGARI LETTER AI", "Lo", 0),
("NANDINAGARI LETTER O", "Lo", 0),
("NANDINAGARI LETTER AU", "Lo", 0),
("NANDINAGARI LETTER KA", "Lo", 0),
("NANDINAGARI LETTER KHA", "Lo", 0),
("NANDINAGARI LETTER GA", "Lo", 0),
("NANDINAGARI LETTER GHA", "Lo", 0),
("NANDINAGARI LETTER NGA", "Lo", 0),
("NANDINAGARI LETTER CA", "Lo", 0),
("NANDINAGARI LETTER CHA", "Lo", 0),
("NANDINAGARI LETTER JA", "Lo", 0),
("NANDINAGARI LETTER JHA", "Lo", 0),
("NANDINAGARI LETTER NYA", "Lo", 0),
("NANDINAGARI LETTER TTA", "Lo", 0),
("NANDINAGARI LETTER TTHA", "Lo", 0),
("NANDINAGARI LETTER DDA", "Lo", 0),
("NANDINAGARI LETTER DDHA", "Lo", 0),
("NANDINAGARI LETTER NNA", "Lo", 0),
("NANDINAGARI LETTER TA", "Lo", 0),
("NANDINAGARI LETTER THA", "Lo", 0),
("NANDINAGARI LETTER DA", "Lo", 0),
("NANDINAGARI LETTER DHA", "Lo", 0),
("NANDINAGARI LETTER NA", "Lo", 0),
("NANDINAGARI LETTER PA", "Lo", 0),
("NANDINAGARI LETTER PHA", "Lo", 0),
("NANDINAGARI LETTER BA", "Lo", 0),
("NANDINAGARI LETTER BHA", "Lo", 0),
("NANDINAGARI LETTER MA", "Lo", 0),
("NANDINAGARI LETTER YA", "Lo", 0),
("NANDINAGARI LETTER RA", "Lo", 0),
("NANDINAGARI LETTER LA", "Lo", 0),
("NANDINAGARI LETTER VA", "Lo", 0),
("NANDINAGARI LETTER SHA", "Lo", 0),
("NANDINAGARI LETTER SSA", "Lo", 0),
("NANDINAGARI LETTER SA", "Lo", 0),
("NANDINAGARI LETTER HA", "Lo", 0),
("NANDINAGARI LETTER LLA", "Lo", 0),
("NANDINAGARI LETTER RRA", "Lo", 0),
("NANDINAGARI VOWEL SIGN AA", "Mc", 0),
("NANDINAGARI VOWEL SIGN I", "Mc", 0),
("NANDINAGARI VOWEL SIGN II", "Mc", 0),
("NANDINAGARI VOWEL SIGN U", "Mn", 0),
("NANDINAGARI VOWEL SIGN UU", "Mn", 0),
("NANDINAGARI VOWEL SIGN VOCALIC R", "Mn", 0),
("NANDINAGARI VOWEL SIGN VOCALIC RR", "Mn", 0),
(),
(),
("NANDINAGARI VOWEL SIGN E", "Mn", 0),
("NANDINAGARI VOWEL SIGN AI", "Mn", 0),
("NANDINAGARI VOWEL SIGN O", "Mc", 0),
("NANDINAGARI VOWEL SIGN AU", "Mc", 0),
("NANDINAGARI SIGN ANUSVARA", "Mc", 0),
("NANDINAGARI SIGN VISARGA", "Mc", 0),
("NANDINAGARI SIGN VIRAMA", "Mn", 9),
("NANDINAGARI SIGN AVAGRAHA", "Lo", 0),
("NANDINAGARI SIGN SIDDHAM", "Po", 0),
("NANDINAGARI HEADSTROKE", "Lo", 0),
("NANDINAGARI VOWEL SIGN PRISHTHAMATRA E", "Mc", 0),
)
|
https://github.com/pedrofp4444/BD | https://raw.githubusercontent.com/pedrofp4444/BD/main/report/templates/index.typ | typst | #let index = {
show outline: it => {
show heading: set text(size: 18pt)
it
}
{
show outline.entry.where(level: 1): it => {
v(5pt)
strong(it)
}
outline(
title: [Índice],
indent: true,
)
}
v(-0.4em)
outline(
title: none,
target: figure.where(kind: "attachment"),
indent: n => 1em,
)
outline(
title: [Lista de Figuras],
target: figure.where(kind: image),
)
outline(
title: [Lista de Tabelas],
target: figure.where(kind: table),
)
}
|
|
https://github.com/jgm/typst-hs | https://raw.githubusercontent.com/jgm/typst-hs/main/test/typ/compiler/import-23.typ | typst | Other | // An item after a star.
// Error: 24 expected semicolon or line break
#import "module.typ": *, a
|
https://github.com/jujimeizuo/ZJSU-typst-template | https://raw.githubusercontent.com/jujimeizuo/ZJSU-typst-template/master/template/body.typ | typst | Apache License 2.0 | #import "font.typ": *
#import "utils.typ": *
#pagebreak()
#counter(page).update(1)
// 章节计数器,记录公式层级
#let counter_chapter = counter("chapter")
#let counter_equation = counter(math.equation)
#let counter_image = counter(figure.where(kind: image))
#let counter_table = counter(figure.where(kind: table))
// 图片和表的格式
#show figure: it => [
#v(6pt)
#set text(font_size.wuhao)
#set align(center)
#if not it.has("kind") {
it
} else if it.kind == image {
it.body
[
#textbf("图")
#locate(loc => {
[#counter_chapter.at(loc).first().#counter_image.at(loc).first()]
})
#it.caption
]
} else if it.kind == table {
[
#textbf("表")
#locate(loc => {
[#counter_chapter.at(loc).first().#counter_table.at(loc).first()]
})
#it.caption
]
it.body
} else {
it.body
}
#v(6pt)
]
// 设置公式格式
#set math.equation(
numbering: (..nums) => locate( loc => {
numbering("(1.1)", counter_chapter.at(loc).first(), ..nums)
})
)
// 设置引用格式
#show ref: it => {
locate(loc => {
let elems = query(it.target, loc)
if elems == () {
it
} else {
let elem = elems.first()
let elem_loc = elem.location()
if numbering != none {
if elem.func() == math.equation {
link(elem_loc, [#textbf("式")
#counter_chapter.at(elem_loc).first().#counter_equation.at(elem_loc).first()
])
} else if elem.func() == figure{
if elem.kind == image {
link(elem_loc, [#textbf("图")
#counter_chapter.at(elem_loc).first().#counter_image.at(elem_loc).first()
])
} else if elem.kind == table {
link(elem_loc, [#textbf("表")
#counter_chapter.at(elem_loc).first().#counter_table.at(elem_loc).first()
])
}
}
} else {
it
}
}
})
}
#set heading(numbering: (..nums) =>
if nums.pos().len() == 1 {
"第"+zhnumbers(nums.pos().first()) +"章"
}
else {
nums.pos().map(str).join(".")
})
#show heading: it => {
if it.level == 1 {
set align(center)
set text(font:heiti, size: font_size.sanhao, weight: "bold")
counter_chapter.step()
counter_equation.update(())
counter_image.update(())
counter_table.update(())
it
v(12pt)
par(leading: 1.5em)[#text(size:0.0em)[#h(0.0em)]]
} else if it.level == 2 {
set text(font:heiti, size: font_size.xiaosan, weight: "bold")
it
v(18pt)
par(leading: 1.5em)[#text(size:0.0em)[#h(0.0em)]]
} else if it.level == 3 {
set text(font:heiti, size: font_size.sihao, weight: "thin")
it
v(18pt)
par(leading: 1.5em)[#text(size:0.0em)[#h(0.0em)]]
} else if it.level == 4 {
set text(font:heiti, size: font_size.sihao, weight: "thin")
it
v(18pt)
par(leading: 1.5em)[#text(size:0.0em)[#h(0.0em)]]
}
}
// 设置正文格式
#set text(font: songti, size: font_size.xiaosi)
#set par(justify: true, leading: 1em, first-line-indent: 2em)
#show par: it => {
it
}
#include "../contents/context.typ" |
https://github.com/Trebor-Huang/HomotopyHistory | https://raw.githubusercontent.com/Trebor-Huang/HomotopyHistory/main/common.typ | typst | #import "@preview/cetz:0.2.0"
#import "@preview/fletcher:0.4.0" as fletcher: node, edge
// #import "@preview/ctheorems:1.1.0": *
// #show: thmrules
// #let proof = thmplain(
// "proof",
// "证明",
// base: "theorem",
// bodyfmt: body => [#body #h(1fr) $square$]
// ).with(numbering: none)
#let slash = math.class("fence", sym.slash)
#let fork = math.class("relation", "⋔")
#let cap = math.class("binary", "⌢")
#let cup = math.class("binary", "⌣")
// Override calligraphic font
#let cal(t) = text(t, font: "KaTeX_Caligraphic")
|
|
https://github.com/fabriceHategekimana/master | https://raw.githubusercontent.com/fabriceHategekimana/master/main/3_Theorie/Matrices.typ | typst | Aujourd'hui les réseaux de neuronnes sont l'outil le plus populaire utilisés jusqu'à présent dans les sciences de données ou le machine learning. J'ai trouvé intéressant de voir ce que notre nouveau système de type est capable de faire pour ce type de cas.
Pour la réalisation de notre projet, il nous faut d'abord définir les éléments nécessaire à l'établissement de ce module, à savoir, les matrices, les vecteurs et les scalaires.
Les matrices sont à la base de l'algèbre linéaire et sont grandement utilisé pour émuler des réseaux de neurones. Dans notre cas, une matrice peut simplement être représenté comme un vecteur de vecteur.
== Tenseurs (Shape compatibility)
En science des données, un tenseur est une généralisation des vecteurs et des matrices. Il permet d'avoir une représentation homogène des données et de simplifier certains calculs. En effet, une matrice est un tenseur de 2 dimensions, un vecteur est un tenseur de 1 dimension et un scalaire est un tenseur de dimension 0. De plus, un hypercube est en fait un tenseur de dimension 3. C'est une façon de représenter les données de manière efficace.
Bien qu'on puisse travailler avec des tenseurs de plusieurs dimensions, on se rend compte en réalité que les data scientists travaillent le plus souvent avec des tenseurs allant jusqu'à la dimension 4. En effet, prendre de plus grandes dimensions rend les données difficiles à interpréter.
## Introduction
type([...]) = int[type(...)]
## Exemple
```r
int # Scalaire
int[1,1] # Vecteur d'une ligne et d'une colonne
int[2, 3] # matrice de deux lignes et 3 colonnes
bool[3,3,3] # tenseur de degré 3 cubique
```
# Tenseurs (Shape compatibility)
## Opérations de base
type(e1) = int[A1], type(e2) = int[A2], A1 = A2 = A -- type(e1 + e2) = type[A]
type(e1) = int[A1], type(e2) = int[A2], A1 = A2 = A -- type(e1 - e2) = type[A]
type(e1) = int[A1], type(e2) = int[A2], A1 = A2 = A -- type(e1 * e2) = type[A]
type(e1) = int[A1], type(e2) = int[A2], A1 = A2 = A -- type(e1 / e2) = type[A]
# Tenseurs (Shape compatibility)
## Produit matriciel
type(e1 %*% e2) = type[]
type[3,3] dot type[3,3] = type[3,3]
type[2,3] dot type[3,2] = type[2,2]
type[3,2] dot type[2,3] = type[3,3]
## Règles
type[N,M] dot type[M,N] = type[N,N]
type[N1,M] dot type[M,N2] = type[N1,N2]
# Concatenation
## Restrictions actuelles
On ne peut concaténer que si les MDA sont de même dimension.
La concaténation crée une "addition" sur les paramètres tout à droite.
## Exemples
concat(1, 1) faux car concat(int, int)
concat([1, 2], 1) faux car concat(<int, 2>, int)
concat([1, 2], [3]) juste car concat(<int, 2>, <int, 1>) -> <int, 3>
concat([1; 2], [3, 4]) faux car concat(<int, 2, 1>, <int 2>)
concat([1; 2], [3 4; 5 6]) vrai car concat(<int, 2, 1>, <int, 2, 2>) -> <int, 2, 3>
# Typing rules
## Default types
$$
\infer{\Gamma \vdash Value : logical}{%
Value \in Logical
}
$$
$$
\infer{\Gamma \vdash Value : integer}{%
Value \in Integer
}
$$
$$
\infer{\Gamma \vdash Value : float}{%
Value \in Float
}
$$
$$
\infer{\Gamma \vdash Value : complex}{%
Value \in Complex
}
$$
$$
\infer{\Gamma \vdash Value : character}{%
Value \in Character
}
$$
# Typing rules
## Typing of a variable
$$
\infer{\Gamma \vdash Variable }{%
Variable : Type \in \Gamma
}
$$
# Typing rules
## Typing of a variable declaration
$$
\infer{\Gamma \vdash Variable : Type -> \Gamma'
}{%
}
$$
# Typing rules
## Typing of an assignation
$$
\infer{\Gamma \vdash Variable : Type = Value -> \Gamma'
}{%
Value \in Type
}
$$
### Constructors
let matrix = func <T, N1, N2>(dim: (N1, N2), value: T) -> [[T, N2], N1]
let zeros = func <N1, N2>(dim: (N1, N2)) -> [[int, N2,] N1]
let ones = func <N1, N2>(dim: (N1, N2)) -> [[int, N2,] N1]
let trues = func <N1, N2>(dim: (N1, N2)) -> [[bool, N2,] N1]
let falses = func <N1, N2>(dim: (N1, N2)) -> [[bool, N2,] N1]
## Matrices
### arithmetic operations
let add = func <T>(a: T, b: T) -> T
let sub = func <T>(a: T, b: T) -> T
let mul = func <T>(a: T, b: T) -> T
let div = func <T>(a: T, b: T) -> T
### Structure operations
let length = func <T, N1, N2>(m: [[T, N2], N1]) -> int
let shape = func <T, N1, N2>(m: [[T, N2], N1]) -> (int, int)
let eltype = func <T, N1, N2>(m: [[T, N2], N1]) -> T
## Matrices
### Miscellanious
let dot = func <P, M, N>(m1: [[int, P], M], m2: [[int, N], P]) -> [[int, N], M]
let fill = func <T, N1, N2>(m: [[T, N2], N1], e: T) -> [[T, N2], N1]
### Hooray !
let linearize = func <T, N1, N2>(m1: [[T, N2], N1]) -> [T, N1*N2]
**Must specify the shape:**
let reshape = func <T1, T2, T3>(m1: [[T1, N2], N1], shape: T2) -> T3
- T1 is any type
- T2 is a tuple
- T3 is an Array
## Neural network layer

## Layer
### Definition
let NNLayer = type <T, N1, N2> -> (m: [[T, N2], N1], b: [T, N1])
let forward = func <T, N, N1, N2>(input: [T, N], layer: ([[T, N2], N1], [T, N1]))
## Neural network layers

## Questions
### How to represent a networks of layers ?
- Array of layers
### How to represent a networks of different layers ?
- How to check the compatibility of layers of multiple sizes
## Dot product
### Matrix MxN
[[T, N], M]
### Dot product
let dot = func <P, M, N>(m1: [[int, P], M], m2: [[int, N], P]) -> [[int, N], M]
## Dot product with vectors


## Observation (numpy)
```python
import numpy as np
col = np.array([1, 2, 3, 4])
lin = np.array([[4, 3, 2, 1]])
res1 = np.dot(lin, col) # 20
res1 = np.dot(col, lin) # error
# ValueError: shapes (4,) and (1,4)
# not aligned: 4 (dim 0) != 1 (dim 0)
```
Show shape error but only at runtime.
## Observation (pytorch)
```python
import torch
col = torch.tensor([1, 2, 3, 4])
lin = torch.tensor([[4, 3, 2, 1]])
res1 = torch.dot(lin, col) # error
res2 = torch.dot(col, lin) # error
# RuntimeError: 1D tensors expected,
# but got 2D and 1D tensors
```
Only indication at runtime.
## Observation (pytorch) 2
```python
import torch
col = torch.tensor([1, 2, 3, 4])
lin = torch.tensor([[4, 3, 2, 1]])
res1 = lin + col # tensor([[5, 5, 5, 5]])
res2 = col + lin # tensor([[5, 5, 5, 5]])
print("res1:", res1)
print("res2:", res2)
```
## Specificity of broadcasting

Broadcasting add lacking dimension to tensors.
## Correspondances
### Broadcast vector
So a **(n,)** vector can be broadcasted to a **(n, 1)** matrix.
We can call it a **column vector** (n lines and 1 column).
We also have a **line vector** (1 line and n columns).
### Vectors
column vector: [[T, 1], N]
line vector: [[T, N], 1]
## Dot product vectors line-column

**(1, n) dot (n, 1) -> (1, 1)**
dot(m1: **[[T, N], 1]**, m2: **[[T, 1], N]**) -> **[[T, 1], 1]**
## Dot product vectors column-line

**(n, 1) dot (1, n) -> (n, n)**
dot(m1: **[[T, 1], N]**, m2: **[[T, N], 1]**) -> **[[T, N], N]**
## Vectors
### Properties
- Are different from arrays
- Are a specific case of matrix
- Are of 2 kinds (line/column vector)
- Are compatible with dot product
- Are compatible with transposition
### Definition
- let ColVec = func<T, N>(a: [T, N]) -> [[T, 1], N]
- let LinVec = func<T, N>(a: [T, N]) -> [[T, N], 1]
## Scalar product
### Question
**Can we also represent a scalar as matrix ?**
### Representation
- **scalar**: 7 : T
- **"matrix" scalar**: [\[7]] : [[T, 1], 1]
- Now compatible with dot product
## Scalar dot product (what works)
### With a scalar
(1, 1) dot (1, 1) -> (1, 1)
### With a line vector (left product)
(1, 1) dot (1, n) -> (1, n)
dot<T, N>(m1: [[T, 1], 1], m2: [[T, N], 1]) -> [[T, N], 1]
### With a column vector (right product)
(n, 1) dot (1, 1) -> (n, 1)
dot<T, N>(m1: [[T, 1], N], m2: [[T, 1], 1]) -> [[T, 1], N]
## Scalar dot product (what works)
### Modification
**We need to extend the scalar to a square matrix**
**In fact [[T, 1], 1] is himself a square matrix**
### With a matrix (left product)
(m, m) dot (m, n) -> (m, n)
dot<T, N>(m1: [[T, M], M], m2: [[T, N], M]) -> [[T, N], M]
### With a matrix (right product)
(m, n) dot (n, n) -> (m, n)
dot<T, M, N>(m1: [[T, N], M], m2: [[T, N], N]) -> [[T, N], M]
## Scalars
### Properties
- Are different from primitive types
- Are a specific case of matrix
- Are compatible with dot product
- Are compatible with transposition (neutral function)
- Can replace multiplication if we define them as scalar multiples of the identity matrix
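The last property can be sanity-checked: representing a scalar c as c·I makes the dot product act exactly like scalar multiplication (a NumPy sketch with arbitrary values):

```python
import numpy as np

c = 7
m = np.array([[1, 2], [3, 4], [5, 6]])  # a (3, 2) matrix

# The left product needs a (3, 3) identity, the right product a (2, 2) identity.
left = (c * np.eye(3)) @ m
right = m @ (c * np.eye(2))

assert np.array_equal(left, c * m)
assert np.array_equal(right, c * m)
```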
## Other cases (from tensorflow)
- add a scalar to a vector (**broadcasting**)
- multiply two tensors (**broadcasting**)
- matrices over for loops (**broadcasting**)
- add vector to a matrix (**broadcasting**)
## Other cases (from tensorflow)
- add a scalar to a vector (**use map**)
- multiply two tensors (**use map**)
- matrices over for loops (**use map**)
- add vector to a matrix (**use map**)
## Goal

## Miscellaneous
### Linearisation
let linearize = func <T, N1, N2>(m1: [[T, N2], N1]) -> [T, N1\*N2]
### Higher order function
let map = func<T, U, N>(f: (T) -> U, a: [T, N]) -> [U, N] {
if N == 0 {
[]
} else {
[f(head(a)), map(f, tail(a))]
}
}
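A direct Python transcription of this recursive definition, with list slicing standing in for `head`/`tail`:

```python
def rec_map(f, a):
    # map(f, a) = [] if a is empty, else [f(head(a))] followed by map(f, tail(a))
    if not a:
        return []
    return [f(a[0])] + rec_map(f, a[1:])

print(rec_map(lambda x: x * x, [1, 2, 3, 4]))  # [1, 4, 9, 16]
```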
## Neural network layer

## Layer
### Definition 1
let NNLayer = func<T, N1, N2>(m: [[T, N2], N1], b: [[T, 1], N2]) -> f([[T, N1], 1]) -> f([[T, 1], N2])
Takes a row vector and returns a column vector
### Definition 2
let NNLayer = func<T, N1, N2>(m: [[T, N2], N1], b: [[T, 1], N2]) -> f([T, N1]) -> f([T, N2])
The array-to-vector (and vice versa) conversion is done inside the function
## Neural network layers

## Layers
let l1 = NNLayer(m1, b1) in
let a = sigmoid in
let l2 = NNLayer(m2, b2) in
a(l1(a(l2([1, 2, 3, 4]))))
https://github.com/Readon/NSFC-application-template-typst | https://raw.githubusercontent.com/Readon/NSFC-application-template-typst/main/README.md | markdown | MIT License

# NSFC-application-template-typst
An unofficial Typst template for NSFC (National Natural Science Foundation of China) grant applications.
Typst is fast, easy to check, and has a simple, efficient syntax.
If you run into problems, first consult the [Typst Chinese User Guide](https://typst-doc-cn.github.io/docs/chinese/).
## Issues
Basic typesetting mostly works. Feel free to open an issue.
https://github.com/kiwiyou/algorithm-lecture | https://raw.githubusercontent.com/kiwiyou/algorithm-lecture/main/basic/02-number-theory.typ | typst

#import "@preview/cetz:0.1.2"
#import "@preview/algorithmic:0.1.0"
#import "../slide.typ"
#show: slide.style
#show link: slide.link
#show footnote.entry: slide.footnote
#let algorithm(..args) = text(font: ("linux libertine", "Pretendard"), size: 17pt)[#algorithmic.algorithm(..args)]
#let func(body) = text(font: ("linux libertine", "Pretendard"))[#smallcaps[#body]]
#align(horizon + center)[
= Algorithm Basics Seminar
02: Number Theory
#text(size: 0.8em)[
연세대학교 전우제#super[kiwiyou] \
2023.11.25.r3
]
]
#slide.slide[합동식][
- $N$개의 정수 $A_1, A_2, dots.c, A_N$이 있을 때, $A_1 + A_2 + dots.c + A_N$을 $3$으로 나눈 나머지를 구하자.
- $A_i$가 최대 $10^18$이고, $N$이 최대 $10^5$
- 합이 너무 커요 ㅠㅠ
- $A_i$를 직접 더하지 않고 나머지만을 구할 수 없을까?
#pagebreak()
- $(A + B) mod 3 = (A mod 3 + B mod 3) mod 3$
- 놀랍게도 이 성질은 $+$뿐만 아니라 $-$, $times$에도 적용되는데...
- 놀랍게도 이 성질은 $3$이 아닌 모든 양의 정수에도 적용되는데...
- 수식으로는 $A mod 3 = B mod 3 <=> A equiv B space (mod 3)$
- 구현 시에는 모든 수, 모든 $+$, $-$, $times$ 시마다 나머지를 취하기
- $mod N$에서 모든 수의 범위는 $0$ 이상 $N$ 미만으로 줄어든다!
]
#slide.slide[과제][
- #slide.problem("9711", "피보나치")
- #slide.problem("4375", "1")
- #slide.problem("14928", "큰 수 (BIG)")
- #slide.problem("27965", "N결수")
]
#slide.slide[소수 판정][
- $1$과 자기 자신만을 양의 약수로 가지는 $2$ 이상의 정수
- 양의 정수 $N$이 소수인지 판단하기
- $1$부터 $N$까지 전부 나누면 $cal(O)(N)$
- 조금 더 빠르게 할 수 없을까?
#pagebreak()
- $42$의 약수 $#text(fill: red)[1], #text(fill: green)[2], #text(fill: blue)[3], #text(fill: purple)[6], #text(fill: purple)[7], #text(fill: blue)[14], #text(fill: green)[21], #text(fill: red)[42]$
- $1 times 42 = 42$
- $2 times 21 = 42$
- $3 times 14 = 42$
- $6 times 7 = 42$
- *앞쪽* 절반만 본다면 $cal(O)(sqrt(N))$
#pagebreak()
#columns(2)[
#set text(size: 20pt)
#algorithm({
import algorithmic: *
Function([Is-Prime], args: ([$N$], ))
If(
cond: $N = 1$,
Return[*false*]
)
For(
cond: [$i = 2$ *upto* #FnI[floor][#FnI[sqrt][$N$]]],
If(
cond: [$N equiv 0 space (mod i)$],
Return[*false*]
),
)
Return[*true*]
})
- `sqrt` and `floor` involve floating-point error and can be slow
#colbreak()
#algorithm({
import algorithmic: *
Function([Is-Prime-2], args: ([$N$], ))
If(
cond: $N = 1$,
Return[*false*]
)
Assign[$i$][$2$]
While(
cond: [$i^2 <= N$],
If(
cond: [$N equiv 0 space (mod i)$],
Return[*false*]
),
Assign[$i$][$i + 1$],
)
Return[*true*]
})
- Integer arithmetic is exact
]
#pagebreak()
- Find divisors from the smallest up, dividing each one out until it no longer divides $N$
#algorithm({
import algorithmic: *
Function([Factorize], args: ([$N$], ))
Assign[$i$][$2$]
While(
cond: [$i^2 <= N$],
While(
cond: [$N equiv 0 space (mod i)$],
State[*print* $i$],
Assign[$N$][$N slash i$],
),
Assign[$i$][$i + 1$],
)
If(
cond: $N eq.not 1$,
State[*print* $N$],
)
})
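The same factorization in Python, as a direct transcription of the pseudocode above:

```python
def factorize(n):
    # Divide out each factor while it divides n; whatever remains (> 1) is prime.
    factors = []
    i = 2
    while i * i <= n:
        while n % i == 0:
            factors.append(i)
            n //= i
        i += 1
    if n != 1:
        factors.append(n)
    return factors

print(factorize(360))  # [2, 2, 2, 3, 3, 5]
```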
]
#slide.slide[Homework][
- #slide.problem("24039", "2021은 무엇이 특별할까?")
- #slide.problem("27065", "2022년이 아름다웠던 이유")
- #slide.problem("28138", "재밌는 나머지 연산")
]
#slide.slide[Sieve of Eratosthenes][
- When we need every prime up to $N$, this approach costs $cal(O)(N sqrt(N))$
- Far too much duplicated work!
- *Counting multiples is faster than counting divisors*
#pagebreak()
- Store in an array whether each integer from $2$ to $N$ is prime
- Initially assume every number is prime
- Each time the smallest remaining prime is found, mark its multiples as non-prime
- The time complexity is $cal(O)(N log log N)$
#pagebreak()
#h(0pt)
#algorithm({
import algorithmic: *
Function([Find-Primes], args: ([$N$], ))
Assign[isPrime$[2..N]$][*true*]
Assign[primeList][${}$]
For(
cond: [$i = 2$ *upto* $N$],
If(
cond: [isPrime$[i]$],
State[*add* $i$ *to* primeList],
Assign[$j$][$2 times i$],
While(
cond: [$j <= N$],
Assign[isPrime$[j]$][*false*],
Assign[$j$][$j + i$]
),
)
)
Return[primeList]
})
#pagebreak()
- What if the sieve stores a prime factor of each number instead of `true`/`false`?
#algorithm({
import algorithmic: *
Function([Find-Prime-Factors], args: ([$N$], ))
Assign[primeFactor$[i]$][$i$]
For(
cond: [$i = 2$ *upto* $N$],
If(
cond: [primeFactor$[i] = i$],
Assign[$j$][$2 times i$],
While(
cond: [$j <= N$],
Assign[primeFactor$[j]$][$i$],
Assign[$j$][$j + i$]
),
)
)
Return[primeFactor]
})
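A Python version of this sieve; unlike the pseudocode above it overwrites each cell only once, so every cell ends up holding the *smallest* prime factor:

```python
def prime_factor_sieve(n):
    # spf[i] holds the smallest prime factor of i (spf[i] == i means i is prime).
    spf = list(range(n + 1))
    for i in range(2, n + 1):
        if spf[i] == i:                      # i is prime
            for j in range(2 * i, n + 1, i):
                if spf[j] == j:              # keep only the smallest factor
                    spf[j] = i
    return spf

spf = prime_factor_sieve(30)
print(spf[12], spf[17], spf[25])  # 2 17 5
```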
]
#slide.slide[Homework][
- #slide.problem("1929", "소수 구하기")
- #slide.problem("2421", "저금통")
- #slide.problem("16563", "어려운 소인수분해")
]
https://github.com/ivaquero/book-control | https://raw.githubusercontent.com/ivaquero/book-control/main/03-状态空间方程.typ | typst

#import "@local/scibook:0.1.0": *
#show: doc => conf(
title: "State-Space Equations",
author: ("ivaquero"),
header-cap: "Modern Control Theory",
footer-cap: "github@ivaquero",
outline-on: false,
doc,
)
= State Space
<状态空间>
== State-Space Equations
For the spring-damper system
$ m dot.double(x) + B dot(x) + k x = f(t) $
choose the state variables
- $z_1 = x$
- $z_2 = dot(x)$
which gives
$ dot(z)_2 = 1 / m u(t) - B / m z_2 - k / m z_1 $
In matrix form,
$
dot(z) = A z + B u\
y = C z + D u
$
Taking the Laplace transform of both sides of the original equation gives
$ m s^2 X(s) + B s X(s) + k X(s) = F(s) $
Combining this with the state-space equations,
$ G(s) = frac(X (s), F(s)) = frac(Y (s), U(s)) = frac(1, m s^2 + B s + k) $
Taking the Laplace transform of both sides of the state-space equations gives
$
Z(s) = (s 𝑰 - 𝑨)^(-1) B U(s)\
Y(s) = C (s 𝑰 - 𝑨)^(-1) B U(s) + D U(s)
$
from which
$ G(s) = frac(Y(s), U(s)) = C (s 𝑰 - 𝑨)^(-1) B + D $
Moreover,
$
s 𝑰 - 𝑨 = mat(delim: "[", s, 0; 0, s) - mat(delim: "[", 0, 1; - k/m, - B/m) = mat(delim: "[", s, - 1; k/m, s + B/m)
$
then
$
(s 𝑰 - 𝑨)^(-1) = (s 𝑰 - 𝑨)^*|s 𝑰 - 𝑨| =
frac(mat(delim: "[", s + B/m, 1; - k/m, s), s (s + B/m)) - (-1) k / m =
frac(mat(delim: "[", s + B/m, 1; - k/m, s), s^2 + B/m s + k/m)
$
Substituting back shows that the state-space equations and the transfer function describe the same system.
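This equivalence can be checked numerically; the sketch below uses arbitrary values for $m$, $B$, $k$ and evaluates both expressions at a single point $s$:

```python
import numpy as np

m, B, k = 2.0, 0.5, 3.0  # arbitrary spring-damper parameters
A = np.array([[0.0, 1.0], [-k / m, -B / m]])
Bm = np.array([[0.0], [1.0 / m]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0]])

s = 1.7  # arbitrary evaluation point
G_ss = C @ np.linalg.inv(s * np.eye(2) - A) @ Bm + D  # C (sI - A)^-1 B + D
G_tf = 1.0 / (m * s**2 + B * s + k)                   # 1 / (m s^2 + B s + k)
assert np.isclose(G_ss[0, 0], G_tf)
```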
#pagebreak()
== Characteristic Determinant
- Poles of $G(s)$: the roots of the denominator of $G(s)$
- They determine the stability of the system
- Numerically, they equal the eigenvalues of $𝑨$, i.e. the roots of $|s 𝑰 - 𝑨|$
= Matrix Exponential
== Derivation
For the decoupled system
$
dot(x)_1(t) &= x_1(t)\
dot(x)_2(t) &= -2 x_2(t)
$
whose solutions are
$
x_1(t) &= x_1 (0) e^t\
x_2(t) &= x_2 (0) e^(-2 t)
$
In matrix form,
$ dv(𝒙, t) = 𝑨 𝒙 $
where $𝒙 = mat(delim: "[", x_1; x_2)$.
For the coupled system
$
dot(x)_1(t) &= x_1(t) + x_2(t)\
dot(x)_2(t) &= 4 x_1(t) - 2 x_2(t)
$
the matrix-form solution is $𝒙(t) = e^(𝑨 t) 𝒙(0)$.
By the Taylor series
$ e^(𝑨 t) = ∑_(i = 0) frac(1, i!)(𝑨 t)^i $
hence
$ dv(e^(𝑨 t), t) = ∑_(i=1) frac(1, (i - 1)!) 𝑨^i t^(i - 1) = 𝑨 e^(𝑨 t) $
The function $e^(𝑨 t)$ is called the matrix exponential of $𝑨$.
== Properties
For the matrix exponential $e^(𝑨 t)$:
- when $𝑨 = 𝟎$, $e^(𝑨 t) = 𝑰$
- when $𝑨 = 𝚲$, $e^(𝑨 t) = dmat(delim: "[", e^(λ_1 t), e^(λ_2 t), ⋱, e^(λ_n t))$
- $e^(𝑨 t) = 𝑷 e^(𝜦 t) 𝑷^(-1)$
For the state-space equation
$ dv(𝒙(t), t) = 𝑨 𝒙(t) + 𝑩 𝒖(t) $
where $𝒖(t)$ is the system input.
Multiplying both sides by $e^(-𝑨 t)$ and rearranging gives
$
e^(-𝑨 t) 𝑩 𝒖(t) = e^(-𝑨 t) dv(𝒙(t), t) - 𝑨 e^(-𝑨 t) 𝒙(t)
= dv(e^(-𝑨 t) 𝒙(t), t)
$
Integrating both sides from $t_0$ to $t$ gives
$ e^(-𝑨 τ) 𝒙(τ) bar.v_(t_0)^t = ∫_(t_0)^t e^(-𝑨 τ) 𝑩 𝒖(τ) dd(τ) $
Rearranging,
$ 𝒙(t) = e^(𝑨 (t - t_0)) 𝒙(t_0) + ∫_(t_0)^t e^(𝑨 (t - τ)) 𝑩 𝒖(τ) dd(τ) $
where
- in the first term, $e^(𝑨 (t - t_0))$ is called the state-transition matrix ($n × n$); its behavior is determined by the eigenvalues of $𝑨$
- the second term is a convolution
= Phase Plane
<相平面>
== Transition Matrix
Let $𝑷 = mat(delim: "[", 𝒗_1, 𝒗_2)$; then
#let mv = (i, j) => $v_(#(i)#(j))$
$
𝑨 𝑷 &= mat(delim: "[", 𝑨 𝒗_1, 𝑨 𝒗_2)
= mat(delim: "[", 𝑨 mat(delim: "[", v_(11); v_(12)), 𝑨 mat(delim: "[", v_(12); v_(22))) = xmat(delim: "[", 2, 2, #mv)
dmat(delim: "[", λ_1, λ_2) = 𝑷
dmat(delim: "[", λ_1, λ_2)
$
== Two-Dimensional Phase Plane
<二维相平面>
#let ma = (i, j) => $a_(#(i)#(j))$
$
dv(, t)
mat(delim: "[", z(t)_1; z(t)_2) = 𝑨
mat(delim: "[", z(t)_1; z(t)_2) =
xmat(delim: "[", 2, 2, #ma)
mat(delim: "[", z(t)_1; z(t)_2)
$
- Real roots
#figure(
table(
columns: 4,
align: center + horizon,
inset: 4pt,
stroke: frame(rgb("000")),
[$λ_1 λ_2$], [$λ_1 + λ_2$], [Point type], [Stability],
[$> 0$], [$< 0$], [node], [✓],
[$> 0$], [$> 0$], [node], [×],
[$< 0$], [], [saddle], [×],
),
caption: [Real roots],
supplement: "Table",
kind: table,
)
- Complex roots
#figure(
table(
columns: 3,
align: center + horizon,
inset: 3pt,
stroke: frame(rgb("000")),
[$λ_i$], [Point type], [Stability],
[$a > 0$], [focus], [×],
[$a < 0$], [focus], [✓],
[$a = 0$], [center], [✓⁻],
),
caption: [Complex roots],
supplement: "Table",
kind: table,
)
#tip[
✓⁻: Lyapunov stable; see the following chapters.
]
https://github.com/Favo02-unimi/statistica-e-analisi-dei-dati | https://raw.githubusercontent.com/Favo02-unimi/statistica-e-analisi-dei-dati/main/Dispensa.typ | typst | #import "@preview/gentle-clues:0.8.0": *
#import "@preview/cetz:0.2.2"
#import "@preview/codly:0.2.0": *
#import "@preview/equate:0.2.0": equate
// pdf metadata
#set document(
title: "Statistica e Analisi dei dati",
author: ("<NAME>", "<NAME>"),
)
// codly setup
#show: codly-init.with()
#codly(
languages: (python: (name: "Python", color: blue, icon: none)),
zebra-color: white,
stroke-width: 1.5pt,
stroke-color: blue,
enable-numbers: false
)
// highlight links
#show link: it => {
if type(it.dest) != str {
// internal links
underline(it, stroke: 1.5pt + blue)
}
else {
// external links
underline(it, stroke: 1.5pt + red)
}
}
// highlight links in the table of contents
#show outline.entry: it => {
underline(it, stroke: 1.5pt + blue)
}
// colored box settings
#show: gentle-clues.with(breakable: true)
// equation settings
#show: equate.with(breakable: true)
// colored boxes
#let nota(body) = { info(title: "Note")[#body] }
#let attenzione(body) = { warning(title: "Warning")[#body] }
#let informalmente(body) = { conclusion(title: "Informally")[#body] }
#let dimostrazione(body) = { memo(title: "Proof")[#body] }
// colored math text
#let mg(body) = text(fill: olive, $#body$)
#let mm(body) = text(fill: maroon, $#body$)
#let mo(body) = text(fill: orange, $#body$)
#let mr(body) = text(fill: red, $#body$)
#let mp(body) = text(fill: purple, $#body$)
#let mb(body) = text(fill: blue, $#body$)
// heading numbering
#set heading(numbering: "1.1.")
// title page
#page(align(left + horizon, block(width: 90%)[
#text(3em)[*Statistica e Analisi dei dati*]
#text(1.5em)[Università degli studi di Milano - Informatica]
#link("https://github.com/Favo02")[
#text(1.5em, "<NAME>")
]
#text(", ")
#link("https://github.com/tsagae")[
#text(1.5em, "<NAME>")
]
#text(", ")
#link("https://github.com/Gallja")[
#text(1.5em, "<NAME>")
]
#text("Last modified:")
#datetime.today().display("[day]/[month]/[year]")
]))
#set par(linebreaks: "optimized")
// page settings
#let numberingH(c)={
return numbering(c.numbering,..counter(heading).at(c.location()))
}
#let currentH(level: 1)={
let elems = query(selector(heading).after(here()))
if elems.len() != 0 and elems.first().location().page() == here().page() {
return [#numberingH(elems.first()) #elems.first().body]
} else {
elems = query(selector(heading).before(here()))
if elems.len() != 0 {
return [#numberingH(elems.last()) #elems.last().body]
}
}
return ""
}
#set page(
numbering: "1",
number-align: bottom + right,
header: [
#set text(8pt)
_Statistica e Analisi dei dati_
#h(1fr)
#context[_ #currentH() _]
],
footer: [
#set text(8pt)
#context[
_<NAME>, <NAME> - #datetime.today().display("[day]/[month]/[year]")_
#h(1fr)
#text(12pt)[#counter(page).display("1")]
]
],
)
#heading(outlined: false, bookmarked: false, numbering: none, "Statistica e Analisi dei dati")
Statistics deals with collecting, analyzing and drawing conclusions from data, through various tools:
- #link(<descrittiva>)[Descriptive statistics]: presentation and *condensation* of the data, trying to limit their uncertainty;
- #link(<probabilità>)[Probability theory]: construction and analysis of models in situations of *uncertainty*;
- #link(<inferenziale>)[Inferential statistics]: *approximation* of the missing outcomes, through probabilistic models;
- _Appendix: #link(<modelli>)[Random variables and models cheatsheet]:_ summary of the formulas and properties of random variables and models;
- _Appendix: #link(<python>)[Python cheatsheet]:_ collection of Python functions/classes useful for the exam _(and beyond)_;
- _Appendix: #link(<matematica>)[Math cheatsheet]:_ tricks to solve/simplify equations, series and integrals;
- _Appendix: #link(<esercizi>)[Exercises]:_ how to solve the exam exercises.
#heading(outlined: false, bookmarked: false, numbering: none, "Authors, Acknowledgements and License")
/ Authors: #link("https://github.com/Favo02")[<NAME>], #link("https://github.com/tsagae")[<NAME>], #link("https://github.com/Gallja")[<NAME>] _(Python appendix)_
/ Acknowledgements: <NAME> _(notes)_, <NAME> _(review)_
/ Source and License: #link("https://github.com/Favo02/statistica-e-analisi-dei-dati")[github.com/Favo02/statistica-e-analisi-dei-dati] (#link("https://creativecommons.org/licenses/by/4.0/")[CC-BY-4.0])
/ Last modified: #datetime.today().display("[day]/[month]/[year]")
// table of contents
#outline(
title: "Contents",
indent: auto
)
#pagebreak()
// TODO: things to fix!!!
/*
what does "the estimator is unbiased" mean? it means that, estimating many times with that estimator, the mean of the obtained estimates is very close to the true value of p
omega: set of the outcomes
algebra: set of events, i.e. a set of subsets of omega
add to Bessel's correction: it corrects the estimate of the sample variance; working through the computation shows why n-1 is needed
proof of the total-probability theorem
proof for independent events
independence of more than two events (it does not extend by transitivity)
independence of random variables
empirical rule (and plot) for the normal distribution
continuous uniform distribution: the single outcomes are not equiprobable (each single point has probability 0)
quantile of a random variable: inverse of the cumulative distribution function
from the descriptive quantile to that of a r.v.: what was a percentage becomes a probability
so F^-1 maps the probability (i.e. the quantile percentage) to the value associated with that probability (x_q)
better clarify the difference between the quantile (x_q) and the level (q)
CDF of a r.v., add property: right-continuous
*/
= Descriptive statistics <descrittiva>
/ Population: the set of elements to be _analyzed_, often too large to be analyzed in full
/ Sample: the _part_ of the population extracted to be analyzed; it must be representative
/ (Simple) random sample: every member of the population has the _same probability_ of being selected
/ Stratified sample: _categories_ are identified and the sample is built so that every category is proportionally represented
== Data classification: qualitative and quantitative <quantitativi>
/ Quantitative \/ Scalar \/ Numerical data: the outcome of the measurement is a numerical quantity
/ Discrete: single values (often integers), for example: _number of children_
/ Continuous: ranges of values, for example: _weight_ or _height_
/ Qualitative \/ Categorical \/ Nominal data: the outcome of the measurement is a label
/ Boolean \/ Binary: two possible values, for example: _sex_
/ Ordinal: orderable values, for example: _satisfaction level_
/ Nominal \/ Unordered: values that are *not* orderable, for example: _name_
#nota[
Some _numerical_ data are often treated as _qualitative_, since it makes no sense to perform algebraic or numerical operations on them. An example is the date of birth
]
== Frequencies <frequenze>
/ Absolute frequency: the _absolute occurrence of a value_, i.e. the number of times a certain value appears in a data sample
/ Relative frequency: the ratio between the _absolute frequency_ of a value and the _sample size_ (hence it is always between $0$ and $1$)
=== Cumulative frequencies and empirical cumulative function
/ Cumulative frequency: the sum of the _absolute frequencies_ of all values _less than or equal_ to a given value; it represents the total number of observations up to that value
#attenzione[
The _cumulative frequency_ can only be computed for _orderable_ data
]
/ Empirical cumulative (or distribution) function: the function $hat(F) : bb(R) -> [0,1]$ that, for each value, gives the proportion of the sample that is less than or equal to that value
$ hat(F)(x) = (\#{x_i <= x})/n = 1/n sum_(i=1)^n I_(-infinity, x] (x_i) $
#nota[
$I_A : bb(R) -> {0,1}$ denotes the #link(<indicatrice>)[indicator function] of the set $A$, i.e. it is $0 space forall a in.not A$ and $1 space forall a in A$
]
#nota[
The empirical cumulative function can be seen as an estimate of the #link(<funzione-ripartizione>)[cumulative distribution function]; indeed it is a good #link(<stimatore>)[estimator], consistent in mean square
]
=== Joint and marginal frequencies
Consider two characteristics of the set of observations:
/ Absolute/relative joint frequency: the number of observations in which the two characteristics take given values _(italic values in the table below)_
/ Marginal frequencies: the sums of the joint frequencies along the rows or columns of a contingency table, representing the total frequency of each variable _(yellow values in the table below)_
#figure(caption: "Contingency table")[
#table(
columns: 5,
inset: 10pt,
align: horizon,
table.cell(colspan: 2, rowspan: 2, []),
table.cell(colspan: 2, fill: silver, [*Hair*]), [],
table.cell(fill: silver, [*Light*]), table.cell(fill: silver, [*Dark*]),
table.cell(fill: yellow, [*Eyes total*]),
table.cell(rowspan: 3, fill: silver, [*Eyes*]),
table.cell(fill: silver, [*Green*]), [_5_], [_3_],
table.cell(fill: yellow, [8]),
table.cell(fill: silver, [*Brown*]), [_2_], [_42_],
table.cell(fill: yellow, [44]),
table.cell(fill: silver, [*Blue*]), [_8_], [_1_],
table.cell(fill: yellow, [9]),
[], table.cell(fill: yellow, [*Hair total*]),
table.cell(fill: yellow, [15]),
table.cell(fill: yellow, [46]),
table.cell(fill: yellow, [61])
)
]
== Measures of central tendency <indice-centralita>
These indices give a rough idea of the _order of magnitude_, i.e. around which value the observed data are located.
=== Sample mean <media>
Denoted by $overline(x)$, it is the *arithmetic mean* of all the observations in the sample.
$ overline(x) = 1 / n sum_(i=1)^(n) x_i $
The mean is a linear operator, so it can be scaled ($dot a$) and/or shifted ($+ b$):
$ forall i space y_i = a x_i + b => overline(y) = a overline(x) + b $
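The linearity property can be checked directly (a NumPy sketch with arbitrary data):

```python
import numpy as np

x = np.array([2.0, 5.0, 7.0, 10.0])
a, b = 3.0, -1.0
y = a * x + b

# mean(a*x + b) == a * mean(x) + b
assert np.isclose(y.mean(), a * x.mean() + b)
```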
#nota[
It can only be computed on #link(<quantitativi>)[quantitative data].
]
#attenzione[
The mean is not a robust index with respect to *outliers* <outlier>, i.e. out-of-scale data, which influence it heavily
]
=== Sample median
It is the value in the *middle position* of a dataset sorted in increasing order, i.e. a value $>=$ and $<=$ at least half of the data.
Given a dataset of size $n$, the median is:
- the element in position $(n+1)/2$ if $n$ is odd
- the arithmetic mean of the elements in positions $n/2$ and $n/2 + 1$ if $n$ is even
#nota[
It is robust with respect to #link(<outlier>)[_outliers_], but it can only be computed on _orderable samples_
]
=== Sample mode
It is the observation that appears with the *highest frequency*. If more than one value appears with the same frequency, all those values are called modal.
== Quantiles
The quantile of order $alpha$ (with $alpha$ a real number in the interval $[0,1]$) is a value $q_alpha$ that *splits* the population into two parts, proportional in number of elements to $alpha$ and ($1-alpha$), containing values respectively *smaller* and *larger* than $q_alpha$.
/ Percentile: quantile expressed as a percentage
/ Decile: population split into 10 parts with the same number of elements
/ Quartile: population split into 4 parts with the same number of elements
#attenzione[
A total ordering of the sample is required
]
/ Range: distance between the minimum and the maximum point
/ Interquartile range (IQR): distance between the first and the third quartile
== Measures of dispersion
These indices measure how far the sample values _deviate_ from a _central value_.
=== Mean absolute deviation
For each observation, the *deviation* is the distance from the mean: $x_i - overline(x)$.
The sum of all the deviations is always $0$:
$ sum_(i=1)^n x_i - overline(x) quad = quad sum_(i=1)^n x_i - sum_(i=1)^n overline(x) quad = quad n overline(x) - n overline(x) quad = 0 $
#informalmente[
This index is obviously _useless_, since it is always $0$; it is introduced only to define the concept of _deviation_, i.e. the distance of an observation from the mean
]
=== Sample variance
A measure of how far the values _deviate_ from the sample mean
$ s^2 = 1/(n-1) sum_(i=1)^n (x_i - overline(x))^2 $
Alternative way to compute the variance:
$ s^2 = 1/(n-1) (sum_(i=1)^n x_i^2 - n overline(x)^2) $
#dimostrazione[
$ sum^n_(i=1) (mb(x_i) - mo(overline(x)))^2 &= sum_(i=1)^n (mb(x^2_i) - 2 mb(x_i) mo(overline(x)) + mo(overline(x)^2)) \
&= sum_(i=1)^n x_i^2 - mp(2 overline(x) underbrace(sum_(i=1)^n x_i, = n overline(x))) + mg(sum_(i=1)^n overline(x)^2) \
&= sum_(i=1)^n x_i^2 - mp(2 overline(x) dot n overline(x)) + mg(n overline(x)^2) \
&= sum_(i=1)^n x_i^2 - mp(cancel(2) n overline(x)^2) + mg(cancel(n overline(x)^2)) \
&= sum_(i=1)^n x_i^2 - n overline(x)^2 $
]
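Both formulas for the sample variance give the same result, and match NumPy's `var` with Bessel's correction (`ddof=1`):

```python
import numpy as np

x = np.array([4.0, 8.0, 6.0, 5.0, 3.0])
n = len(x)

s2_def = ((x - x.mean()) ** 2).sum() / (n - 1)           # definition
s2_alt = (np.sum(x ** 2) - n * x.mean() ** 2) / (n - 1)  # alternative form
assert np.isclose(s2_def, s2_alt)
assert np.isclose(s2_def, x.var(ddof=1))                 # NumPy, Bessel-corrected
```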
#informalmente[
It would feel natural to apply the _absolute value_ to each deviation, but that causes problems. For this reason the difference is _squared_ instead, so that it is always positive
]
The variance is _not_ a linear operator: shifting ($+ b$) has no effect, while scaling ($dot a$) behaves as:
$ forall i space y_i = a x_i + b => s^2_y = a^2 s^2_x $
==== Bessel's correction <bessel>
Why divide by $mr(n-1)$ instead of $n$?
Let us introduce the formula of the *variance* (NOT the sample variance):
$ sigma^2 = 1/mr(n) sum_(i=1)^n (x_i - overline(x))^2 $
If we compute the _variance_ of all possible _samples_ of a population and then take the average, we do *not* obtain the _variance_ of the whole _population_. This does happen, instead, when computing the sample variance:
$ s^2 = sigma^2 n/(n-1) $
$ s^2 = 1/mr(n-1) sum_(i=1)^n (x_i - overline(x))^2 $
#informalmente[
A sample is made of _fewer_ elements than the whole population, so its dispersion around the mean value is _smaller_. For this reason the variance needs to be _corrected_
]
==== Standard sample variance (sample standard deviation) <varianza-standard>
Taking the square root of the sample variance yields the *sample standard deviation*. This is useful to bring the variance back to the same _unit of measure_ as the sample data
$ s = sqrt(s^2) $
#attenzione[
Since the square root is applied only after squaring, the problem of negative values is not reintroduced: $sqrt(a^2) quad != quad (sqrt(a))^2 = a$
]
=== Coefficient of variation (relative standard deviation)
A *dimensionless* value, useful for comparing measurements of phenomena with _different units of measure_.
$ s^* = s / (|overline(x)|) $
#nota[
Both the #link(<varianza-standard>)[sample standard deviation] and the #link(<media>)[sample mean] are dimensional, i.e. they carry units of measure. Dividing one by the other yields a dimensionless value
]
=== Other measures of dispersion
/ Range of variation: the difference between the _largest_ and the _smallest_ value in the sample
/ Interquartile range: the length of the interval containing the _central half of the data_, i.e. the difference between the _25th_ and the _75th_ percentile
== Measures of correlation
/ Bivariate sample: a sample made of pairs ${ (x_1, y_1), ..., (x_n, y_n) }$
/ Relation/Correlation: a relation between two variables such that each value of the first corresponds to a value of the second with some regularity
- *direct*: _small_ values of $x$ correspond to _small_ values of $y$ and vice versa
- *inverse*: _small_ values of $x$ correspond to _large_ values of $y$ and vice versa
=== Sample covariance <covarianza>
A numerical value that measures how much the two variables _vary together_.
Given a bivariate sample, the *sample covariance* is defined as:
$ "Cov"(x, y) = 1/(n-1)sum_(i=1)^n (x_i-overline(x))(y_i-overline(y)) $
#nota[
We divide by $n-1$ for the same reason as the _variance_: #link(<bessel>)[Bessel's correction]
]
Alternative computation:
$ "Cov"(x, y) = 1/(n-1)(sum_(i=1)^n x_i y_i - n overline(x) overline(y)) $
#dimostrazione[
- *direct*: _small with small_ and _large with large_
$ (x_i <= overline(x) and y_i <= overline(y)) or (x_i > overline(x) and y_i > overline(y)) $
from which:
- $ x_i - overline(x) <= 0 and y_i - overline(y) <= 0 quad => quad (x_i - overline(x))(y_i - overline(y)) >= 0 $
- $ x_i - overline(x) > 0 and y_i - overline(y) > 0 quad => quad (x_i - overline(x))(y_i - overline(y)) > 0 $
so in both cases: $ (x_i - overline(x))(y_i - overline(y)) >= 0 $
- *inverse*: _small with large_ and _large with small_
$ (x_i <= overline(x) and y_i >= overline(y)) or (x_i > overline(x) and y_i < overline(y)) $
from which:
- $ x_i - overline(x) <= 0 and y_i - overline(y) >= 0 quad => quad (x_i - overline(x))(y_i - overline(y)) <= 0 $
- $ x_i - overline(x) > 0 and y_i - overline(y) < 0 quad => quad (x_i - overline(x))(y_i - overline(y)) < 0 $
so in both cases: $ (x_i - overline(x))(y_i - overline(y)) <= 0 $
Hence, by looking at _the sign_ it is possible to understand the _relation_ in the sample.
]
- $"Cov"(x, y) > 0$: a direct correlation is likely
- $"Cov"(x, y) tilde.eq 0$: a correlation is unlikely
- $"Cov"(x, y) < 0$: an inverse correlation is likely
#figure(caption: [_Direct_ (left) and _inverse_ (right) linear correlation],
[
#box(cetz.canvas({
import cetz: *
plot.plot(
size: (4,4),
x-tick-step: 2,
y-tick-step: 2,
axis-style: "school-book",
{
plot.add(((-5,-5), (5,5)), style: (stroke: 2pt + red))
plot.annotate(draw.circle((0, 0), fill: gray, radius: 0.2))
plot.annotate(draw.circle((0.37, 0.4), fill: gray, radius: 0.2))
plot.annotate(draw.circle((1, 1.2), fill: gray, radius: 0.2))
plot.annotate(draw.circle((-1, -1.5), fill: gray, radius: 0.2))
plot.annotate(draw.circle((2, 2.4), fill: gray, radius: 0.2))
plot.annotate(draw.circle((-2, -2.2), fill: gray, radius: 0.2))
plot.annotate(draw.circle((3, 3.6), fill: gray, radius: 0.2))
plot.annotate(draw.circle((-3, -3.3), fill: gray, radius: 0.2))
plot.annotate(draw.circle((4, 4.8), fill: gray, radius: 0.2))
plot.annotate(draw.circle((-4, -4.4), fill: gray, radius: 0.2))
plot.annotate(draw.circle((0.5, 0.6), fill: gray, radius: 0.2))
plot.annotate(draw.circle((-0.8, -1.0), fill: gray, radius: 0.2))
plot.annotate(draw.circle((1.5, 1.3), fill: gray, radius: 0.2))
plot.annotate(draw.circle((-1.7, -1.8), fill: gray, radius: 0.2))
plot.annotate(draw.circle((2.5, 2.7), fill: gray, radius: 0.2))
plot.annotate(draw.circle((-2.6, -2.4), fill: gray, radius: 0.2))
plot.annotate(draw.circle((3.3, 3.9), fill: gray, radius: 0.2))
plot.annotate(draw.circle((-3.5, -3.1), fill: gray, radius: 0.2))
plot.annotate(draw.circle((4.2, 4.6), fill: gray, radius: 0.2))
plot.annotate(draw.circle((-4.3, -4.7), fill: gray, radius: 0.2))
})
}))
#box(cetz.canvas({
import cetz: *
plot.plot(
size: (4,4),
x-tick-step: 2,
y-tick-step: 2,
axis-style: "school-book",
{
plot.add(((-5,5), (5,-5)), style: (stroke: 2pt + red))
plot.annotate(draw.circle((0, 0), fill: gray, radius: 0.2))
plot.annotate(draw.circle((0.4, -0.5), fill: gray, radius: 0.2))
plot.annotate(draw.circle((1.2, -1.5), fill: gray, radius: 0.2))
plot.annotate(draw.circle((-1.5, 1.8), fill: gray, radius: 0.2))
plot.annotate(draw.circle((2.3, -2.6), fill: gray, radius: 0.2))
plot.annotate(draw.circle((-2.4, 2.2), fill: gray, radius: 0.2))
plot.annotate(draw.circle((3.6, -3.8), fill: gray, radius: 0.2))
plot.annotate(draw.circle((-3.8, 3.5), fill: gray, radius: 0.2))
plot.annotate(draw.circle((4.7, -4.3), fill: gray, radius: 0.2))
plot.annotate(draw.circle((-4.5, 4.2), fill: gray, radius: 0.2))
plot.annotate(draw.circle((0.2, -0.2), fill: gray, radius: 0.2))
plot.annotate(draw.circle((-0.6, 0.7), fill: gray, radius: 0.2))
plot.annotate(draw.circle((1.3, -1.1), fill: gray, radius: 0.2))
plot.annotate(draw.circle((-1.8, 2.0), fill: gray, radius: 0.2))
plot.annotate(draw.circle((2.7, -2.9), fill: gray, radius: 0.2))
plot.annotate(draw.circle((-2.9, 2.6), fill: gray, radius: 0.2))
plot.annotate(draw.circle((3.8, -4.0), fill: gray, radius: 0.2))
plot.annotate(draw.circle((-4.0, 3.7), fill: gray, radius: 0.2))
plot.annotate(draw.circle((4.3, -4.5), fill: gray, radius: 0.2))
plot.annotate(draw.circle((-4.7, 4.4), fill: gray, radius: 0.2))
})
}))
])
#nota[
A direct/inverse relation is not necessarily _linear_; it can also be _logarithmic_ or follow other shapes
]
=== Pearson correlation coefficient (linear correlation coefficient) <correlazione-lineare>
We use the Pearson correlation coefficient to get a _dimensionless_ value expressing a correlation. It can also be defined as a _normalized measure_ of the covariance, in the interval $[-1, +1]$.
$ rho(x,y) = 1/(n-1)(limits(sum)_(i=1)^n (x_i-overline(x))(y_i-overline(y)))/(s_x s_y) $
Where $s$ is the sample standard deviation.
- $rho tilde.eq +1$: a direct linear correlation is likely
- $rho tilde.eq 0$: a correlation is unlikely
- $rho tilde.eq -1$: an inverse linear correlation is likely
#attenzione[
The #link(<correlazione-lineare>)[linear correlation coefficient] ($rho$) captures *only* _linear_ direct/inverse relations and is insensitive to linear transformations
]
#attenzione[
A #link(<covarianza>)[sample covariance] or #link(<correlazione-lineare>)[linear correlation coefficient] $tilde.eq 0$ does not imply independence of the sample, but the converse holds:
$ "Cov"(x, y) tilde.eq 0 quad arrow.r.double.not quad "Independence" $
$ rho(x, y) tilde.eq 0 quad arrow.r.double.not quad "Independence" $
$ "Independence" quad arrow.r.double quad rho(x, y) tilde.eq "Cov"(x, y) tilde.eq 0 $
]
Alternative formula:
$ rho = (limits(sum)_(i=1)^n x_i y_i - n overline(x) overline(y)) / (sqrt((limits(sum)_(i=1)^n x_i^2 - n overline(x)^2) (limits(sum)_(i=1)^n y_i^2 - n overline(y)^2))) $
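The alternative formula agrees with NumPy's `corrcoef` (a sketch with arbitrary, roughly linear data):

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = 2.5 * x + 1.0 + np.array([0.1, -0.2, 0.0, 0.3, -0.1])

n = len(x)
num = np.sum(x * y) - n * x.mean() * y.mean()
den = np.sqrt((np.sum(x**2) - n * x.mean()**2) * (np.sum(y**2) - n * y.mean()**2))
rho = num / den
assert np.isclose(rho, np.corrcoef(x, y)[0, 1])
```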
== Indici di eterogeneità <eterogeneita>
Non ha senso analizzare concentrazione o dispersione per dati #link(<quantitativi>)[qualitativi], ma possiamo analizzare l'eterogeneità:
/ Massima eterogeneità: il campione è composto da tutti elementi _diversi_
/ Minima eterogeneità: il campione contiene solo elementi _uguali_ (campione _omogeneo_)
=== Indice di Gini (per l'eterogeneità) <gini>
$ I = 1 - sum_(i=1)^n f_i^2 $
Dove $f_i$ è la #link(<frequenze>)[frequenza relativa] di $i$ ed $n$ è il numero di elementi distinti. Quindi $forall i, 0 <= f_i <= 1$. Prendiamo in considerazione i due estremi:
- eterogeneità _minima_ (solo un valore con frequenza relativa 1): $ I = 1 - 1 = 0 $
- eterogeneità _massima_ (tutti i valori hanno la stessa frequenza relativa $1/n$ dove $n$ è la dimensione del campione): $ I = quad 1 - sum_(i=1)^n (1/n)^2 quad = quad 1 - n/n^2 quad = quad (n-1)/n $
Generalizzando, $I$ non raggiungerà mai $1$: $ 0 <= I <= (n-1)/n < 1 $
Dal momento che l'indice di Gini tende a $1$ senza mai arrivarci ($n-1$ è sempre minore di $n$) introduciamo l'*indice di Gini normalizzato*, in modo da arrivare a $1$ nel caso di eterogeneità massima, con dominio $[0, 1]$: $ I' = n/(n-1)I $
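Un possibile sketch in Python (nomi ipotetici) dell'indice di Gini e della sua versione normalizzata, a partire dalle frequenze relative:

```python
from collections import Counter

def gini(campione):
    # I = 1 - somma dei quadrati delle frequenze relative
    n = len(campione)
    freq = [c / n for c in Counter(campione).values()]
    return 1 - sum(f ** 2 for f in freq)

def gini_normalizzato(campione):
    # I' = n/(n-1) * I (qui n è la dimensione del campione, come nel testo)
    n = len(campione)
    return n / (n - 1) * gini(campione)

print(gini(['a', 'a', 'a']))                    # campione omogeneo: I = 0
print(gini_normalizzato(['a', 'b', 'c', 'd']))  # eterogeneità massima
```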
=== Entropia <entropia>
$ H = quad sum_(i=1)^n f_i log(1/f_i) quad = quad - sum_(i=1)^n f_i log(f_i) $
Dove $f_i$ è la #link(<frequenze>)[frequenza relativa] e $n$ è il numero di elementi distinti.
L'entropia assume valori nel range $[0, log(n)]$ quindi utilizziamo l'*entropia normalizzata* per confrontare due misurazioni con diverso numero di elementi distinti $n$.
$ H' = 1/log(n) H $
#nota[
In base alla base del logaritmo utilizzata, l'entropia avrà unità di misura differente:
- $log_2$: bit
- $log_e$: nat
- $log_10$: hartley
]
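Uno sketch in Python (nomi ipotetici) dell'entropia e della versione normalizzata, con base del logaritmo configurabile:

```python
import math
from collections import Counter

def entropia(campione, base=2):
    # H = -sum f_i log(f_i); con base 2 l'unità è il bit
    n = len(campione)
    freq = [c / n for c in Counter(campione).values()]
    return -sum(f * math.log(f, base) for f in freq)

def entropia_normalizzata(campione, base=2):
    # H' = H / log(m), con m numero di elementi distinti (assunto m > 1)
    m = len(set(campione))
    return entropia(campione, base) / math.log(m, base)

print(entropia(['a', 'b', 'a', 'b']))               # 2 valori equiprobabili: 1 bit
print(entropia_normalizzata(['a', 'b', 'c', 'd']))  # eterogeneità massima
```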
#informalmente[
Intuitivamente sia l'#link(<gini>)[indice di Gini] che l'#link(<entropia>)[entropia] sono una _"media pesata"_ tra la frequenza relativa di ogni elemento ed un peso: la _frequenza stessa_ nel caso di Gini e il _logaritmo del reciproco_ nell'entropia. La frequenza relativa è già nel range $[0, 1]$, quindi non c'è bisogno di dividere per il numero di elementi
]
== Indici di concentrazione
Un indice di concentrazione misura in che modo un _bene_ è distribuito nella _popolazione_.
/ Distribuzione del bene: $a_1, a_2, ... a_n$ indica la quantità del bene posseduta dall'individuo $i$, ordinata in modo *non decrescente*
/ Media: $overline(a)$ indica la quantità media posseduta da un individuo
/ Totale: $"TOT" = n overline(a)$ indica il totale del bene posseduto
/ Caso sperequo: concentrazione _massima_, un individuo possiede tutta la quantità $ a_(1..n-1) = 0, quad a_n = n overline(a) $
/ Caso equo: concentrazione _minima_, tutti gli individui possiedono la stessa quantità $ a_(1..n) = overline(a) $
=== Curva di Lorenz <lorenz>
La curva di Lorenz è una rappresentazione *grafica* della _distribuzione_ di un bene nella popolazione.
Dati:
- $F_i = i/n$: posizione percentuale dell'i-esima osservazione nell'insieme
- $Q_i = 1/"TOT" limits(sum)_(k=1)^i a_k$: quantità di bene cumulata fino all'i-esima osservazione
La tupla $(F_i, Q_i)$ indica che il $100 dot F_i%$ degli individui detiene il $100 dot Q_i%$ della quantità totale.
Inoltre: $forall i, space 0 <= Q_i <= F_i <= 1$.
#informalmente[
Possiamo vedere $F_i$ come _"quanta"_ popolazione è stata analizzata fino all'osservazione $i$, espressa nel range $[0, 1]$.
$Q_i$ è invece una #link(<frequenze>)[_"frequenza cumulata"_] della ricchezza, fino all'osservazione $i$
]
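Le coppie $(F_i, Q_i)$ si possono calcolare, ad esempio, con questo sketch Python (nomi ipotetici):

```python
def punti_lorenz(quantita):
    # Restituisce le coppie (F_i, Q_i) a partire dalle quantità possedute
    a = sorted(quantita)              # ordinamento non decrescente
    n, tot = len(a), sum(a)
    punti, cumulata = [], 0
    for i, quota in enumerate(a, start=1):
        cumulata += quota
        punti.append((i / n, cumulata / tot))  # (F_i, Q_i)
    return punti

print(punti_lorenz([5, 5, 5, 5]))    # caso equo: Q_i = F_i
print(punti_lorenz([0, 0, 0, 10]))   # caso sperequo
```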
#figure(caption: [Curva di Lorenz],
cetz.canvas({
import cetz: *
plot.plot(
name: "lorentz",
size: (4,4),
x-tick-step: 1,
y-tick-step: 1,
axis-style: "school-book",
{
plot.add(((0,0), (1,1)), line: "spline", style: (stroke: 2pt + green), label: "Minima")
plot.add(((0,0), (0.6, 0.2), (1,1)), line: "spline", style: (stroke: 2pt + orange), label: "Media")
plot.add(((0,0), (0.98,0)), line: "spline", style: (stroke: 2pt + red), label: "Massima")
plot.add-anchor("11", (1,1))
plot.add-anchor("10", (1,0))
})
draw.circle("lorentz.11", fill: red, stroke: none, radius: .1)
draw.circle("lorentz.10", fill: white, stroke: red, radius: .1)
})
)
=== Indice di Gini (per la concentrazione)
Dato che la #link(<lorenz>)[curva di Lorenz] _non assume mai_ alcun valore nella parte di piano superiore alla _bisettrice_, allora introduciamo l'*indice di Gini*, che trasforma la rappresentazione grafica in un _indice numerico_ che assume valori nel range $[0, 1]$.
Anche esso indica la _concentrazione_ di un bene nella popolazione.
$ G = (limits(sum)_(i=1)^(n-1) (F_i - Q_i)) / (limits(sum)_(i=1)^(n-1) F_i) $
#nota[
È possibile riscrivere il denominatore come:
$ sum_(i=1)^(n-1) F_i quad = quad 1/n sum_(i=1)^(n-1) i quad = quad 1/cancel(n) (cancel(n)(n-1))/2 quad = quad (n-1) / 2 $
]
Ottenendo come formula alternativa:
$ G = 2 / (n-1) sum_(i=1)^(n-1) (F_i - Q_i) $
#informalmente[
Facendo un parallelo con la #link(<lorenz>)[curva di Lorenz], possiamo vedere $F_i - Q_i$ come la distanza tra la bisettrice ($F_i$) e la ricchezza dell'osservazione $i$ ($Q_i$). La somma di queste distanze viene poi _"normalizzata"_, dividendo per $(n-1) / 2$
]
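La formula alternativa si traduce, ad esempio, in questo sketch Python (nome ipotetico):

```python
def gini_concentrazione(quantita):
    # G = 2/(n-1) * somma per i=1..n-1 di (F_i - Q_i)
    a = sorted(quantita)              # ordinamento non decrescente
    n, tot = len(a), sum(a)
    cumulata, somma = 0, 0.0
    for i in range(1, n):             # solo i primi n-1 termini
        cumulata += a[i - 1]
        somma += i / n - cumulata / tot
    return 2 / (n - 1) * somma

print(gini_concentrazione([3, 3, 3]))    # caso equo: G = 0
print(gini_concentrazione([0, 0, 10]))   # caso sperequo: G = 1 (a meno di arrotondamenti)
```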
=== Analisi della varianza (ANOVA)
L'analisi della varianza (*ANOVA* - ANalysis Of VAriance) permette di analizzare un campione suddiviso in due o più _gruppi_ e capire se ci sono sostanziali differenze di un attributo nei vari gruppi.
#informalmente[
Ad esempio, dato un campione di dati sulla _natalità_, si potrebbe analizzare formando gruppi per _regione_ o per _reddito_ e cercare l'influenza di questi fattori sulla natalità stessa
]
/ Numerosità dei gruppi: dato un campione diviso in $G$ gruppi, ognuno ha numerosità $n_1, ..., n_G$
/ Numerosità totale: numero totale delle osservazioni $n_1 + ... + n_G = n$
/ Osservazione: viene definita $x_i^g$ come l'$i$-esima osservazione del $g$-esimo gruppo
Possiamo calcolare alcuni indici e analizzarne i risultati:
/ Media campionaria di tutte le osservazioni: la media del campione $ overline(x) = 1/n sum_(g=1)^G sum_(i=1)^(n_g) x_i^g $
/ Media campionaria di un gruppo: la media dei valori del gruppo $ overline(x)^g = 1/n_g sum_(i=1)^(n_g) x_i^g $
/ Somme degli scarti:
- Somma *totale* degli scarti (tra _ogni elemento_ e la _media di tutto il campione_): $ "SS"_T = sum_(g=1)^G sum_(i=1)^(n_g) (x_i^g - overline(x))^2 $
- Somma degli scarti *entro/within* i gruppi (tra _ogni elemento_ e la _media del proprio gruppo_): $ "SS"_W = sum_(g=1)^G sum_(i=1)^(n_g) (x_i^g - overline(x)^g)^2 $
- Somma degli scarti *tra/between* i gruppi (tra la _media di ogni gruppo_ e _la media del campione_, "pesato" per la _numerosità_ del gruppo): $ "SS"_B = sum_(g=1)^G n_g (overline(x)^g - overline(x))^2 $
Vale la seguente regola: $"SS"_T = "SS"_W + "SS"_B$.
/ Varianza:
- *Total* (la varianza totale del campione): $ s_T^2 = ("SS"_T)/(n-1) $
- *Within* (la varianza di ogni elemento del gruppo): $ s_W^2 = ("SS"_W)/(n-G) $
- *Between* (la varianza tra ogni gruppo e il campione completo): $ s_B^2 = ("SS"_B)/(G-1) $
L'ipotesi alla base è che dati $G$ gruppi, sia possibile scomporre la varianza in due componenti: _Varianza interna ai gruppi_ (varianza *Within*) e _Varianza tra i gruppi_ (varianza *Between*).
#informalmente[
Analizzando diversi gruppi attraverso l'ANOVA, si possono raggiungere due conclusioni:
  - i gruppi risultano significativamente *diversi* tra loro: la _varianza between_ $s_B^2$ contribuisce più significativamente alla varianza totale $s_T^2$ (il fenomeno è legato a caratteristiche proprie di ciascun gruppo)
- i gruppi risultano *omogenei*: la _varianza within_ $s_W^2$ contribuisce più significativamente alla varianza totale $s_T^2$ (il fenomeno è legato a caratteristiche proprie di tutti i gruppi)
]
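Le somme degli scarti e la regola $"SS"_T = "SS"_W + "SS"_B$ si possono verificare numericamente; uno sketch Python con `numpy` (nomi ipotetici):

```python
import numpy as np

def somme_scarti(gruppi):
    # Calcola SS_T, SS_W, SS_B per una lista di gruppi di osservazioni
    tutti = np.concatenate([np.asarray(g, dtype=float) for g in gruppi])
    media = tutti.mean()                 # media di tutto il campione
    ss_t = np.sum((tutti - media) ** 2)  # somma totale degli scarti
    ss_w = sum(np.sum((np.asarray(g, dtype=float) - np.mean(g)) ** 2) for g in gruppi)
    ss_b = sum(len(g) * (np.mean(g) - media) ** 2 for g in gruppi)
    return ss_t, ss_w, ss_b

ss_t, ss_w, ss_b = somme_scarti([[1, 2, 3], [4, 5, 6]])
print(ss_t, ss_w + ss_b)   # verifica della decomposizione SS_T = SS_W + SS_B
```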
== Alberi di decisione
Un albero di decisione è un *classificatore* (ovvero un processo che assegna una classe ad un oggetto) che sfrutta gli indici di #link(<eterogeneita>)[eterogeneità] per costruire la propria struttura efficacemente. È composto da:
- *nodi interni*: domanda sull'osservazione _(anche su più attributi)_ con risposta binaria
- *foglie*: classe a cui l'oggetto viene assegnato
Le domande vengono scelte e valutate in base all'_omogeneità_ dei _sottoinsiemi_ creati da esse: più i gruppi sono omogenei, _migliore_ è la domanda.
== Analisi dei classificatori
Dato un _classificatore binario_ che divide in due classi (positiva e negativa) e un _insieme di oggetti_ di cui è *nota* la classificazione, possiamo valutare la sua _bontà_ tramite il numero di casi classificati in modo _errato_. La classificazione errata può essere:
- *falso negativo*: oggetto _positivo_ classificato come _negativo_
- *falso positivo*: oggetto _negativo_ classificato come _positivo_
#nota[
Il peso di un falso positivo può *non* essere lo stesso di un falso negativo, si pensi al caso di una malattia contagiosa: un _falso negativo_ (ignaro della malattia) sarà molto più pericoloso di un _falso positivo_ (che verrà scoperto con ulteriori analisi)
]
Introduciamo la *matrice di confusione*, che riassume la bontà del classificatore:
#figure(caption: [Matrice di confusione])[
#table(
columns: 5,
inset: 10pt,
align: horizon,
table.cell(colspan: 2, rowspan: 2, stroke: none, []),
table.cell(colspan: 2, fill: silver, [*Valore effettivo*]),
table.cell(rowspan: 2, stroke: none, []),
[Positivo],
[Negativo],
table.cell(rowspan: 2, fill: silver, [*Predizione del classificatore*]),
[Positivo],
[Veri positivi (VP)],
[Falsi positivi (FP)],
[_Totali classificati positivi (TOT CP)_],
[Negativo],
[Falsi negativi (FN)],
[Veri negativi (VN)],
[_Totali classificati negativi (TOT CN)_],
table.cell(colspan: 2, stroke: none, []),
[_Totale positivi (TP)_],
[_Totale negativi (TN)_],
[_Totale casi (TOT casi)_],
)
]
/ Sensibilità: capacità del classificatore di predire bene i positivi $"VP"/"TP"$
/ Specificità: capacità del classificatore di predire bene i negativi $"VN"/"TN"$
È possibile valutare la *bontà* di un classificatore _graficamente_ attraverso il _punto_:
$ (1 - "Specificità", "Sensibilità") quad = quad (1 - "VN"/"TN", "VP"/"TP") quad = quad ("FP"/"TN", "VP"/"TP") $
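A partire dalla matrice di confusione, sensibilità, specificità e punto sul piano si calcolano ad esempio così (sketch Python, nomi ipotetici):

```python
def valuta_classificatore(vp, fp, fn, vn):
    # Sensibilità = VP / TP, Specificità = VN / TN
    sensibilita = vp / (vp + fn)     # TP = VP + FN
    specificita = vn / (vn + fp)     # TN = VN + FP
    punto = (1 - specificita, sensibilita)
    return sensibilita, specificita, punto

print(valuta_classificatore(vp=10, fp=0, fn=0, vn=10))   # classificatore ideale
print(valuta_classificatore(vp=10, fp=10, fn=0, vn=0))   # classificatore positivo
```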
=== Casi particolari
/ Classificatori costanti: associano indiscriminatamente gli oggetti ad una classe (positiva o negativa)
/ Classificatore positivo (CP): tutti i casi sono classificati come positivi
- _Sensibilità_: $1$, _Specificità_: $0$, _Punto_ $(1,1)$ #box(circle(radius: 2.5pt, fill: green, stroke: 1pt + black))
/ Classificatore negativo (CN): tutti i casi sono classificati come negativi
- _Sensibilità_: $0$, _Specificità_: $1$, _Punto_ $(0, 0)$ #box(circle(radius: 2.5pt, fill: red, stroke: 1pt + black))
/ Classificatore ideale (CI): tutti i casi sono classificati correttamente
- _Sensibilità_: $1$, _Specificità_: $1$, _Punto_ $(0,1)$ #box(circle(radius: 2.5pt, fill: blue, stroke: 1pt + black))
/ Classificatore peggiore (CE): tutti i casi sono classificati erroneamente
- _Sensibilità_: $0$, _Specificità_: $0$, _Punto_ $(1, 0)$ #box(circle(radius: 2.5pt, fill: gray, stroke: 1pt + black))
/ Classificatore casuale: ogni caso viene assegnato in modo casuale
- _Sensibilità_: $0.5$, _Specificità_: $0.5$, _Punto_ $(1/2, 1/2)$ #box(circle(radius: 2.5pt, fill: yellow, stroke: 1pt + black))
#figure(caption: [Rappresentazione classificatori],
cetz.canvas({
import cetz: *
plot.plot(
name: "classificatori",
size: (4,4),
x-tick-step: 0.5,
y-tick-step: 0.5,
axis-style: "school-book",
{
plot.add(((0,0), (1,1)), style: (stroke: silver))
plot.add-anchor("00", (0,0))
plot.add-anchor("11", (1,1))
plot.add-anchor("01", (0,1))
plot.add-anchor("10", (1,0))
plot.add-anchor("55", (0.5,0.5))
})
draw.circle((0, 0), fill: red, radius: .1)
draw.circle((4, 4), fill: green, radius: .1)
draw.circle((0, 4), fill: blue, radius: .1)
draw.circle((4, 0), fill: gray, radius: .1)
draw.circle((2, 2), fill: yellow, radius: .1)
draw.content("classificatori.00", [*Negativo*], anchor: "south-west", padding: .2)
draw.content("classificatori.11", [*Positivo*], anchor: "west", padding: .2)
draw.content("classificatori.01", [*Ideale*], anchor: "west", padding: .2)
draw.content("classificatori.10", [*Peggiore*], anchor: "south", padding: .3)
draw.content("classificatori.55", [*Casuale*], anchor: "west", padding: .2)
}))
Un classificatore è tanto più efficace quanto più si avvicina al classificatore ideale.
#attenzione[
Il classificatore peggiore è, in realtà, un classificatore ideale: basta invertirlo
]
=== Classificatori a soglia (Curva ROC)
Un classificatore a soglia discrimina un caso in base ad una *soglia* stabilita a priori, in caso la misurazione sia _superiore_ alla soglia allora verrà classificato _positivamente_, altrimenti _negativamente_.
Per trovare il valore con cui _fissare_ la soglia, possiamo sfruttare questo metodo:
- definiamo $theta$ come una generica soglia, che varia nell'intervallo $[theta_min, theta_max]$
- utilizzando $theta_min$ tutti i casi saranno positivi, ottenendo un classificatore positivo #box(circle(radius: 2.5pt, fill: green, stroke: 1pt + black))
- utilizzando $theta_max$ tutti i casi saranno negativi, ottenendo un classificatore negativo #box(circle(radius: 2.5pt, fill: red, stroke: 1pt + black))
- definiamo $D$ come una discretizzazione di questo intervallo continuo
Per ogni soglia $theta in D$ è possibile calcolare la _sensibilità_ e la _specificità_. Questo classificatore viene quindi _rappresentato_ sul piano cartesiano attraverso il _punto_ $(1 - "Specificità", "Sensibilità")$.
Il risultato è una *curva*, detta *ROC* (Receiver Operating Characteristic) #box(line(length: 10pt, stroke: 2pt + red), inset: (bottom: 3pt)), che ha sempre come estremi $(0,0)$ (caso in cui viene usato $theta_max$) e $(1,1)$ (caso in cui viene usato $theta_min$).
Per misurare la _bontà_ del classificatore (in modo _indipendente_ dalla soglia scelta) viene misurata l'area di piano sotto la curva (*AUC* - Area Under the ROC Curve #box(rect(height: 7pt, width: 15pt, fill: rgb("#FFCDD2")))), più si avvicina a $1$, _migliore_ è il classificatore.
#figure(caption: [Curva ROC],
cetz.canvas({
import cetz: *
plot.plot(
name: "curvaroc",
size: (4,4),
x-tick-step: 0.5,
y-tick-step: 0.5,
axis-style: "school-book",
fill-below: true,
{
plot.add(((0,0), (1,1)), style: (stroke: silver))
plot.add(((0,0), (0.01, 0.05), (0.05, 0.3), (0.1, 0.5), (0.15, 0.6), (0.3, 0.8), (0.4, 0.85), (0.7, 0.92), (0.8, 0.94), (0.90, 0.97), (1,1)), style: (stroke: 2pt + red, fill: rgb("#FFCDD2")), fill: true, fill-type: "axis")
plot.add-anchor("00", (0,0))
plot.add-anchor("11", (1,1))
plot.add-anchor("01", (0,1))
plot.add-anchor("10", (1,0))
plot.add-anchor("55", (0.5,0.5))
})
draw.circle((0, 0), fill: red, radius: .1)
draw.circle((4, 4), fill: green, radius: .1)
draw.circle((0, 4), fill: blue, radius: .1)
draw.circle((4, 0), fill: gray, radius: .1)
draw.circle((2, 2), fill: yellow, radius: .1)
draw.content("curvaroc.00", [*Negativo*], anchor: "south-west", padding: .2)
draw.content("curvaroc.11", [*Positivo*], anchor: "west", padding: .2)
draw.content("curvaroc.01", [*Ideale*], anchor: "west", padding: .2)
draw.content("curvaroc.10", [*Peggiore*], anchor: "south", padding: .3)
draw.content("curvaroc.55", [*Casuale*], anchor: "west", padding: .2)
}))
#attenzione[
La curva è "spezzettata" dato che le soglie provate sono un _numero finito_, una discretizzazione dell'intervallo $[theta_min, theta_max]$
]
#informalmente[
Per trovare la _soglia migliore_, proviamo diversi valori della soglia $theta$, calcoliamo per ognuna il punto sul piano e teniamo la _migliore_.
I diversi tentativi disegnano la _curva ROC_, maggiore è l'_area sotto la curva AUC_, migliore è il _classificatore_ (indipendentemente dalla soglia scelta)
]
#nota[
  Un _caso particolare_ dei classificatori a soglia sono i *classificatori probabilistici*, dove la quantità assegnata agli oggetti (il valore con cui viene confrontata la soglia) è la _probabilità_ di appartenere ad una data _classe_
]
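Uno sketch Python (nomi ipotetici) che costruisce i punti della curva ROC a partire da punteggi e soglie discretizzate, e approssima l'AUC con il metodo dei trapezi:

```python
def curva_roc(punteggi_pos, punteggi_neg, soglie):
    # Per ogni soglia theta: un caso è classificato positivo se il punteggio supera theta
    punti = []
    for theta in sorted(soglie, reverse=True):
        sens = sum(s > theta for s in punteggi_pos) / len(punteggi_pos)   # Sensibilità
        fpr = sum(s > theta for s in punteggi_neg) / len(punteggi_neg)    # 1 - Specificità
        punti.append((fpr, sens))
    return punti

def auc(punti):
    # Area sotto la curva ROC, approssimata con il metodo dei trapezi
    area = 0.0
    for (x0, y0), (x1, y1) in zip(punti, punti[1:]):
        area += (x1 - x0) * (y0 + y1) / 2
    return area

punti = curva_roc([0.8, 0.9], [0.1, 0.2], soglie=[-1, 0.5, 2])
print(punti, auc(punti))
```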
== Trasformazione dei dati
Dato un insieme di dati $X = {x_1, ..., x_n}$, trasformare significa trovare una funzione $g : X -> X'$ che trasforma $X$ in $X' = {x_1', ..., x_n'}$, dove $g(x_i) = x_i'$.
#attenzione[
La funzione $g$ che effettua la trasformazione *deve* essere *iniettiva*, altrimenti più dati distinti potrebbero essere mappati sullo stesso valore, causando una modifica alle frequenze
]
=== Trasformazioni lineari
Date due costanti $a, b in bb(R)$, allora $g(x) = a x + b$, da cui possiamo derivare:
/ Cambiamento di origine (traslazione):
- per $k>0$, viene traslata a sinistra con $x -> x - k$ e a destra con $x -> x + k$
- _media_, _mediana_ e _quantili_ vengono traslati della stessa quantità
- _range_, _distanza interquartile_, _varianza_, _deviazione standard_ rimangono invariati
/ Cambiamento di scala (dilatazione o concentrazione):
- per $h in bb(R)^+$, applichiamo $x -> x/h$:
- se $h > 1$ allora è una concentrazione
- se $h < 1$ allora è una dilatazione
  - se $h < x_min$ allora tutti i valori trasformati sono $> 1$
  - se $h > x_max$ allora tutti i valori trasformati sono $< 1$
- _media_, _mediana_, _quantili_, _range_, _distanza interquartile_ e _deviazione standard_ vengono scalati della stessa quantità $1/h$
- _varianza_ viene scalata di $1/(h^2)$
/ Cambiamento di origine e scala: se i nostri valori sono nel _range_ $(a,b)$ e li vogliamo nell'_intervallo_ $(c, d)$ allora possiamo:
$ x -> c + (d-c)/(b-a) (x-a) $
/ Standardizzazione: caso particolare del cambiamento di origine e scala: si trasla rispetto alla _media_ e si scala rispetto alla _deviazione standard_, in modo da ottenere un insieme con _media_ $0$ e _deviazione_ $1$:
$ x -> (x - overline(x))/(sigma_x) $
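Uno sketch Python con `numpy` (nome ipotetico) della standardizzazione, che produce un campione con media $0$ e deviazione standard $1$:

```python
import numpy as np

def standardizza(x):
    # x -> (x - media) / deviazione standard (qui quella "di popolazione" di numpy)
    x = np.asarray(x, dtype=float)
    return (x - x.mean()) / x.std()

z = standardizza([2, 4, 6, 8])
print(z.mean(), z.std())
```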
#figure(caption: "Effetti delle trasformazioni sugli indici")[
#table(
columns: 4,
inset: 10pt,
align: horizon,
table.cell(colspan: 2, fill: silver, [*Indice*]),
table.cell(fill: silver, [$g(x) = x + k$]),
table.cell(fill: silver, [$g(x) = h x$]),
[Media], [$overline(x)$], [$overline(x) + k$], [$h overline(x)$],
[Mediana], [$m_x$], [$m_x + k$], [$h m_x$],
[Moda], [$M_x$], [$M_x + k$], [$h M_x$],
[Quantile], [$q_x$], [$q_x + k$], [$h q_x$],
[Varianza], [$s^2_x$], [$s^2_x$], [$h^2 s^2_x$],
[Dev. std.], [$s_x$], [$s_x$], [$|h| s_x$],
[Range], [$r_x$], [$r_x$], [$h r_x$],
[IQR], [$"IQR"_x$], [$"IQR"_x$], [$h "IQR"_x$],
)
]
=== Trasformazioni logaritmiche
Quando i valori di un campione sono molto grandi oppure molto distanti tra loro è possibile trasformarli in maniera logaritmica: $x -> log x$
#nota[
  Può risultare utile anche perché eventuali prodotti/quozienti diventano somme/sottrazioni tra i rispettivi logaritmi
]
== Grafici
=== Diagramma a barre
Rappresenta graficamente la _tabella delle frequenze_, è opportuno utilizzarlo in caso di *attributi qualitativi non ordinati*.
L'uso delle _frequenze relative_ permette di confrontare situazioni in cui il numero di osservazioni è variabile, quindi ha senso sovrapporre grafici di campioni diversi.
Si utilizza quando gli indici (asse delle ascisse) sono un attributo qualitativo non ordinato.
```python
publisher_order = ['Hanna-Barbera', 'ABC Studios', 'Dark Horse Comics',
'Image Comics', 'Marvel Comics', 'DC Comics',
'George Lucas', 'Rebellion',
'Star Trek', 'Universal Studios']
publisher_rel_freq.loc[publisher_order,:].plot.bar(legend=False)
plt.show()
```
#nota[
  - Se utilizzo plot.bar su value_counts ottengo un ordinamento non crescente sulle frequenze
  - Se utilizzo plot.bar sul dataframe ottengo un ordinamento in base ai valori e una legenda dell'asse delle ascisse (che si può togliere con legend=False)
]
=== Aerogramma (grafico a torta)
Grafico alternativo per le frequenze dei valori qualitativi
```python
gender_freq.plot.pie(y='Abs. frequence', colors=['pink', 'blue'])
plt.show()
```
=== Istogramma
Per rappresentare graficamente le tabelle delle frequenze di dati quantitativi, usiamo ancora ``` plot.bar```
Tuttavia è molto probabile che il risultato non sia ottimale, in quanto le barre hanno uno spessore che può suggerire un'interpretazione fuorviante secondo cui le frequenze non fanno riferimento ad un anno ma a un intervallo.
```python
first_app_freq = heroes_with_year['First appearance'].value_counts()
plt.bar(first_app_freq.index, first_app_freq.values)
plt.show()
```
Si utilizza in questi casi un grafico a bastoncini in cui ogni punto è evidenziato da un segmento verticale che lo congiunge con l'asse delle ascisse. Lo si genera con ``` plt.vlines(listaIndici, yiniziale, listaValori)```.
Eventualmente si può sovrapporre un ``` plt.plot``` che rappresenta un cerchio in corrispondenza di ogni punto.
```python
plt.vlines(first_app_freq.index, 0, first_app_freq.values)
plt.show()
```
Un ulteriore problema che ricorre spesso nei dati quantitativi è la presenza di numerosi valori distinti che differiscono tra loro per quantità piccolissime (ad esempio 81.01 e 81).
Risulta più sensato calcolare le frequenze di intervalli di possibili valori, aggregando quindi i valori; rappresentarle però come frequenze di un singolo valore non sarebbe corretto, perché darebbe l'impressione che quel valore abbia una frequenza alta.
Alcuni insiemi hanno troppi valori distinti per poter usare il metodo del grafico poligonale/a barre. Suddividiamo i valori in gruppi/classi e poi rappresentiamo con un grafico il numero di valori dei dati appartenenti a ciascuna classe.
Come scegliere il numero di classi:
- con troppo poche classi si perdono molte informazioni sui valori effettivi
- con troppe classi si ottengono frequenze troppo basse per ricavare informazioni dal grafico
Di solito si scelgono tra le 5 e le 10 classi. È prassi scegliere classi della stessa lunghezza.
I valori al bordo di una classe si chiamano estremi della classe.
Convenzione di inclusione a sinistra: la classe include il suo estremo sinistro ma non quello destro.
Utilizziamo quindi un istogramma, generabile con ``` .hist()```.
Come argomento a hist possiamo specificare i bins, cioè i sottointervalli, espressi come numero oppure come elenco di estremi.
``` .hist()``` in modo automatico raggruppa i dati, calcola le frequenze e le rappresenta.
Ad esempio con ``` bins=50```, oppure con ``` np.hstack((np.arange(0, 200, 20)), …)``` posso specificare gli intervalli.
- ``` np.hstack``` permette di giustapporre due o più array numpy
- ``` np.arange``` permette di creare un array con valori che variano tra i primi due argomenti secondo il terzo argomento
```python
heroes['Weight'].hist(bins=50)
plt.show()
heroes['Weight'].hist(bins=np.hstack((np.arange(0, 200, 20),
np.arange(200, 500, 50),
np.arange(500, 1000, 100))))
plt.show()
```
#nota[
Nell'istogramma è l'area di ogni barra a essere legata alla frequenza. Se le barre hanno basi della stessa lunghezza, le aree sono proporzionali all'altezza
]
=== Diagramma di Pareto
// TODO: diagramma di pareto (catu pagina frequenze congiunte e marginali)
Le frequenze assolute/relative e cumulate di una variabile categorica possono essere considerate congiuntamente generando un diagramma di Pareto: si ordinano i dati per frequenza decrescente su uno stesso sistema di riferimento, in cui l'asse delle ascisse fa riferimento ai valori.
In questo sistema si sovrappongono il diagramma a barre delle frequenze assolute/relative e la linea spezzata che collega i valori delle frequenze cumulate
```python
eye_color = heroes['Eye color']
eye_color_freq = eye_color.value_counts(normalize=True)
eye_color_freq[eye_color_freq>.02].cumsum().plot()
eye_color_freq[eye_color_freq>.02].plot.bar()
plt.show()
```
La linea spezzata non arriva sempre all'ordinata 1 (considerando le frequenze relative) in quanto consideriamo solo un sottoinsieme dei dati, dato il possibile gran numero di valori con una frequenza bassa e quindi trascurabile.
Voglio però considerare un sottoinsieme del mio campione: per avere una risposta succinta non considero tutti i valori distinti, il cui numero potrebbe essere molto elevato e con molti valori a frequenza relativa trascurabile.
Per far arrivare il diagramma di Pareto fino all'ordinata unitaria si attua una normalizzazione in particolare dividiamo tutte le frequenze per la sommatoria.
La normalizzazione tipicamente si ottiene:
- dividendo per la somma, in modo che i dati trasformati abbiano somma 1
- dividendo per il massimo, in modo che i dati trasformati siano $<= 1$
```python
norm_eye_color_freq = eye_color_freq[eye_color_freq>.02]/sum(eye_color_freq[eye_color_freq>.02])
norm_eye_color_freq.cumsum().plot()
norm_eye_color_freq.plot.bar()
plt.show()
```
In generale un diagramma di Pareto permette di identificare gli elementi più rilevanti in termini di frequenze all'interno di un insieme di osservazioni.
Tuttavia dobbiamo considerare che non stiamo tenendo conto di tutte le osservazioni: considerandole tutte ne uscirebbe un grafico poco leggibile.
```python
def my_pareto(data, threshold=0.02, renormalize=False):
freq = data.value_counts(normalize=True)
freq = freq[freq > threshold]
if renormalize:
freq = freq / sum(freq)
freq.cumsum().plot()
freq.plot.bar()
my_pareto(heroes['Eye color'], threshold=0)
```
=== Box plot
// TODO: boxplot (catu pagina coefficienti di variazione)
È possibile visualizzare un campione attraverso un *box plot*, partendo dal basso composto da:
- eventuali _outliers_, rappresentati con le `x` prima del baffo
- il _baffo_ "inferiore", che parte dal valore minimo e raggiunge il primo quartile
- il _box_ (scatola), che rappresenta le osservazioni comprese tra il primo e il terzo quartile
- la linea che divide in due il box, che rappresenta la _mediana_
- il _baffo_ "superiore", che parte dal terzo quartile e raggiunge il massimo
- eventuali _outliers_ "superiori", rappresentati con le `x` dopo il baffo
#figure(caption: [Grafico boxplot],
cetz.canvas({
import cetz: *
plot.plot(size: (4,4), x-tick-step: none, y-tick-step: none, {
plot.add-boxwhisker((
x: 1,
outliers: (7, 65, 69),
min: 20, max: 60,
q1: 25,
q2: 33,
q3: 50))
})
})
)
Il box plot permette di evidenziare la centralità delle osservazioni e la loro dispersione, in termini di range interquartile (l'estensione del box) e di intervallo di variazione dei dati (range), definito come la distanza tra massimo e minimo
```python
year.plot.box(whis='range')
plt.show()
```
#nota[
  Di default il box plot analizza ed esclude gli outlier; per evitarlo specifichiamo l'argomento whis='range'. Un valore è un outlier se la sua distanza dalla mediana è maggiore di una costante moltiplicata per il range interquartile. Per visualizzare il box plot in modo orizzontale usiamo vert=False
]
=== Simmetria
=== Q-Q plot
// TODO: diagrammi Q-Q (catu pagina indici di eterogeneità)
Un diagramma Q-Q / Quantile-Quantile è una rappresentazione grafica utilizzata per verificare se due campioni seguano una medesima distribuzione.
Ogni asse viene associato a un campione, segnando i punti come coppie di quantili dello stesso livello. Prendendo come riferimento la bisettrice del primo e del terzo quadrante, se i punti sono vicini ad essa allora posso dire che la popolazione è plausibilmente la stessa.
Questi diagrammi si basano sul fatto che i quantili campionari rappresentano l'approssimazione di quantili teorici che, considerati tutti, individuano univocamente la distribuzione dei dati.
Se due campioni hanno un'uguale distribuzione, allora estraendo da entrambi il quantile di un medesimo livello si dovranno ottenere due numeri molto vicini.
Oltre a mostrare il comportamento relativo di due variabili ed aiutarci nella rappresentazione, è utile per riconoscere i valori anomali/outlier.
```python
import statsmodels.api as sm
sm.qqplot_2samples(marvel_sample, dc_sample, line='45')
plt.show()
```
#nota[
  Normalizzando i dati si possono notare più facilmente valori fuori scala
]
=== Scatter plot
Considerando due serie con un medesimo indice, individuiamo un punto sul piano dato dai valori delle due serie. Oppure considerando una riga fissata, quindi un individuo, voglio analizzare la relazione tra due attributi dell'individuo.
Ripetendo la stessa cosa per tutti gli individui, otteniamo una nuvola di punti: un diagramma di dispersione o scatter plot.
Non sto cercando una relazione deterministica, ma una tendenza.
Se esistesse una qualunque relazione lineare, otterrei esattamente una retta congiungendo i punti.
Generiamo uno scatter plot da un dataframe invocando ``` plot.scatter()``` indicando come argomenti i nomi dei caratteri da considerare.
```python
heroes[heroes['Gender']=='M'].plot.scatter('Height', 'Weight')
plt.show()
```
= Calcolo delle probabilità <probabilità>
== Calcolo combinatorio
Analizzare _come_ e in _quanti_ modi si possono effettuare raggruppamenti di elementi.
/ Principio di enumerazione (principio fondamentale del calcolo combinatorio): se dobbiamo compiere $t$ esperimenti e per ognuno di essi ci possono essere $s_i$ possibili risultati, il numero di risultati totali è $s_1 dot s_2 dot ... dot s_t$
#informalmente[
Vogliamo selezionare $k$ elementi da un insieme $A$ di $n$ elementi:
/ Disposizioni: l'ordine è importante $(a, b) != (b, a)$
/ Combinazioni: l'ordine _non_ è importante $(a, b) = (b, a)$
/ Permutazioni: tutti gli elementi vengono disposti $k = n$
È possibile sia _avere_ che _non avere_ delle *ripetizioni* in tutti i casi
]
=== Disposizioni
Dato un insieme di $n$ oggetti distinti $A = { a_1, ..., a_n }$, vogliamo selezionare $k$ oggetti (con $k <= n$), tenendo in considerazione l'*ordine*.
/ Disposizione senza ripetizioni (semplici): gli oggetti di $A$ possono essere usati una volta sola $ d_(n,k) = n! / (n-k)! $
#dimostrazione[
$ d_(n,k) &= n(n-1)(n-2)...(n-k+1) \
&= mb(n(n-1)(n-2)...(n-k+1)) dot mb((n-k)(n-k-1)...(1))/mo((n-k)(n-k-1)...(1)) \
&= mb(n!) / mo((n-k)!) $
]
/ Disposizione con ripetizione: gli oggetti di $A$ possono essere usati più di una volta $ D_(n,k) = underbrace(n dot n dot ... dot n, k "volte") = n^k $
#attenzione[
In questo caso $k$ può essere $> n$
]
=== Combinazioni
Dato un insieme di $n$ oggetti distinti $A = { a_1, ..., a_n }$, vogliamo selezionare $k$ oggetti (con $k <= n$), *senza* considerare l'ordine.
#nota[
Il numero di combinazioni $c_(n,k)$ è sempre minore del numero di disposizioni $d_(n,k)$, dato che l'ordine non conta
]
/ Combinazione senza ripetizioni (semplici): gli oggetti di $A$ possono essere usati una volta sola $ c_(n,k) = binom(n, k) $
#nota[
  $binom(n, k)$ viene detto *coefficiente binomiale*, ovvero il numero di _sottoinsiemi_ di $k$ elementi che si possono formare da un _insieme_ di $n$ elementi
]
#dimostrazione[
$ c_(n,k) = d_(n,k) / k! = n!/(k! dot (n-k)!) = binom(n, k) $
]
/ Combinazione con ripetizioni: gli oggetti di $A$ possono venir usati più di una volta $ C_(n,k) = (n+k-1)! / (k! dot (n-1)!) = binom(n + k - 1, k) $
#attenzione[
In questo caso $k$ può essere $> n$
]
=== Permutazioni
Dato un insieme di $n$ oggetti $A = { a_1, ..., a_n }$, una *permutazione* è una sequenza _ordinata_ in cui compaiono _tutti_ gli oggetti (quindi selezioniamo $k = n$ elementi).
/ Permutazioni semplici (senza ripetizioni): l'insieme $A$ non contiene elementi duplicati $ P_n = n(n-1)(n-2)...(1) = n! $
/ Permutazioni di oggetti distinguibili a gruppi (con ripetizioni): l'insieme $A$ contiene $k$ gruppi di oggetti indistinguibili, ognuno con numerosità $n_1, ..., n_k$ (con $limits(sum)_(i=1)^k n_i = n)$, allora dobbiamo disporre tutti questi elementi $ P_(n:n_1, ..., n_k) = n! / (n_1 ! dot ... dot n_k !) = binom(n, (n_1, ..., n_k)) $
#nota[
$binom(n, (n_1, ..., n_k))$ viene detto *coefficiente multinomiale*
]
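Le formule di questa sezione corrispondono direttamente alle funzioni del modulo `math` di Python (`perm`, `comb`, `factorial`, disponibili da Python 3.8); un piccolo sketch riassuntivo con valori di esempio:

```python
from math import comb, factorial, perm

n, k = 5, 2
disposizioni_semplici = perm(n, k)         # d_{n,k} = n!/(n-k)!
disposizioni_con_rip = n ** k              # D_{n,k} = n^k
combinazioni_semplici = comb(n, k)         # c_{n,k} = binom(n, k)
combinazioni_con_rip = comb(n + k - 1, k)  # C_{n,k} = binom(n+k-1, k)
permutazioni = factorial(n)                # P_n = n!

# Permutazioni con gruppi indistinguibili: n = 4 elementi in due gruppi da 2
perm_gruppi = factorial(4) // (factorial(2) * factorial(2))
print(disposizioni_semplici, combinazioni_semplici, perm_gruppi)
```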
== Elementi di probabilità
/ Esito $omega in Omega$: risultato effettivo di un esperimento
/ Evento $E subset.eq Omega$: è un qualsiasi insieme formato da tutti, alcuni o nessuno dei possibili esiti di un esperimento
/ Probabilità: quantificazione dell'incertezza di un evento
/ Spazio campionario $Omega$ (insieme degli esiti o insieme universo): è l'insieme di tutti gli esiti possibili. Può essere _finito_ o _infinito_, _continuo_ o _discreto_
#informalmente[
_Esempio_: lanciando un dado, l'_esito_ è il numero risultante, un _evento_ può essere "esce 3 o 6" e la _probabilità_ di questo evento è $2/6$
]
/ Evento certo $E = Omega$: si verifica sempre
/ Evento impossibile $E = emptyset$: non si verifica mai
#nota[
Indichiamo sempre con una _minuscola_ un _esito_, mentre con una _maiuscola_ un _evento_
]
Dati degli eventi, è possibile applicare le operazioni e proprietà degli insiemi su di essi:
/ Unione $E union F$: quando si verifica l'evento $E$ o l'evento $F$
/ Intersezione $E sect F$: quando si verificano entrambi gli eventi $E$ ed $F$
/ Mutualmente esclusivi $E sect F = emptyset$: i due eventi non possono verificarsi contemporaneamente
/ Differenza $E - F$: si verifica l'evento $E$, ma l'evento $F$ non si verifica (l'operazione di sottrazione non è commutativa, $E - F != F - E$)
/ Complemento $Omega - E = E^c = overline(E)$: quando l'evento $E$ non si verifica
/ Sottoinsieme $E subset.eq F = E -> F$: quando si verifica $E$, allora si verifica anche $F$
/ Proprietà per unione e intersezione:
/ Commutatività: $E union F = F union E$
  / Associatività: $(D union E) union F = D union (E union F)$
  / Distributività: $D union (E sect F) = (D union E) sect (D union F)$
/ De Morgan: $overline(E union F) = overline(E) sect overline(F)$: l'evento che si verifica quando non si verifica $E$ o $F$ è lo stesso evento che si verifica quando non si verifica $E$ e non si verifica $F$
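Le operazioni tra eventi si possono sperimentare con gli insiemi di Python; uno sketch (esempio del dado, nomi ipotetici) che illustra complemento e legge di De Morgan:

```python
omega = frozenset(range(1, 7))   # spazio campionario: esiti del lancio di un dado
E = frozenset({3, 6})            # evento "esce 3 o 6"
F = frozenset({1, 2, 3})         # evento "esce al più 3"

def complemento(X):
    # Complemento rispetto allo spazio campionario
    return omega - X

print(complemento(E | F))               # evento: non si verifica né E né F
print(complemento(E) & complemento(F))  # stesso evento, per la legge di De Morgan
```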
È possibile dare diverse _interpretazioni_ alla probabilità:
/ Approccio soggettivista: la probabilità di un esito non è oggettiva: è il livello di _fiducia_ che un soggetto _(lo studioso)_ ripone nel verificarsi di un evento
/ Approccio frequentista: la probabilità di un esito è una _proprietà_ dell'esito stesso: viene calcolata come il rapporto tra il numero di casi _favorevoli_ e il numero di casi _possibili_ ripetendo l'esperimento un numero di volte tendente all'infinito
=== Algebra di eventi
Un'algebra di eventi $A$ è un insieme di eventi ${E_1, E_2, ...}$ a cui sono associate delle operazioni e che soddisfa le proprietà:
- $forall E in A, space E subset.eq Omega$: ogni evento appartenente all'_algebra_ $A$ è un sottoinsieme dello spazio campionario $Omega$
- $Omega in A$: lo spazio campionario $Omega$ appartiene all'_algebra_ $A$
- $forall E in A, space overline(E) in A$: chiusura rispetto al _complemento_
- $forall E, F in A, space E union F in A$: chiusura rispetto all'_unione_
- $forall E, F in A, space E sect F in A$: chiusura rispetto all'_intersezione_
#nota[
La chiusura rispetto all'_intersezione_ non è una vera proprietà, ma deriva dalla chiusura rispetto all'_unione_ a cui viene applicata la _legge di De Morgan_
]
#nota[
Se la chiusura sull'_unione_ vale anche per $|Omega| = infinity$ (unioni numerabili di eventi), allora $A$ viene chiamata $sigma$-algebra
]
#attenzione[
L'algebra degli eventi non è un _vero_ insieme di eventi, ma è un _"dizionario"_ che sfruttiamo per definire quali _operazioni_ e _variabili_ sono ammesse su un $Omega$
]
=== Assiomi di Kolmogorov
Definiamo la funzione *probabilità* $P : A -> [0,1]$, che stabilisce la probabilità che un evento avvenga. $P : A -> [0,1]$ è una funzione di probabilità se e solo se:
1. La probabilità di un evento è sempre _positiva_ $ forall E in A, quad P(E) >= 0 $
#attenzione[
Grazie al secondo assioma, viene imposto il limite superiore di $1$, quindi: $ forall E in A, quad 0 <= P(E) <= 1 $
]
2. Lo spazio degli esiti contiene necessariamente tutti i possibili esiti: $ P(Omega) = 1 $
3. La probabilità che avvengano due eventi disgiunti è la somma delle loro probabilità: $ forall E, F in A, quad (E sect F) = emptyset quad => quad P(E union F) = P(E) + P(F) $
#nota[
Il terzo assioma è generalizzabile: la probabilità che accada almeno uno tra eventi _distinti_ $E_i, E_j$ e _disgiunti_ $E_i sect E_j = emptyset$ è la _somma_ delle loro probabilità:
$ P(limits(union.big)_(i=1)^n E_i) = sum_(i=1)^n P(E_i) $
]
#attenzione[
Formalmente la funzione probabilità è definita $P : A -> bb(R)^+$ (numeri _reali positivi_), applicando gli assiomi il _codominio_ viene ristretto a $[0, 1]$
]
=== Teoremi derivati dagli assiomi
/ Probabilità del complemento: $ forall E in A, space P(overline(E)) = 1 - P(E) $
#dimostrazione[
$ E sect overline(E) = emptyset, quad E union overline(E) &= Omega "(definizione complementare)" \
P(Omega) &= 1 "(per secondo assioma)" \
P(E union overline(E)) = P(E) + P(overline(E)) &= 1 "(per terzo assioma)" \
P(overline(E)) &= 1 - P(E) $
]
/ Probabilità dell'evento impossibile: $ P(emptyset) = 0 $
#dimostrazione[
$ P(Omega) &= 1 "(per secondo assioma)" \
P(emptyset) &= P(overline(Omega)) \
&= 1 - P(Omega) \
&= 1 - 1 = 0 $
]
/ Proprietà di monotonicità: $ forall E, F in A | E subset.eq F => P(E) <= P(F) $
/ Probabilità dell'unione di eventi: $ forall E, F in A, space P(E union F) = P (E) + P (F) − P(E sect F) $
#dimostrazione[
È possibile riscrivere $E union F$ come $E union (overline(E) sect F)$, quindi:
$ mb(P(E union F) &= P(E) + P(overline(E) sect F)) "(per terzo assioma)" $
È possibile riscrivere $F$ come $(E sect F) union (overline(E) sect F)$, quindi:
$ mo(P(F) = P(E sect F) + P(overline(E) sect F)) "(per terzo assioma)" $
Sottraendo membro a membro le due equazioni otteniamo:
$ mb(P(E union F)) - mo(P(F)) = mb(P(E) + cancel(P(overline(E) sect F))) - mo(P(E sect F) - cancel(P(overline(E) sect F))) $
Quindi:
$ P(E union F) = P(E) + P(F) - P(E sect F) $
]
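La formula dell'unione si può verificare numericamente per enumerazione; uno sketch illustrativo su un dado equo (eventi scelti ad hoc), usando `fractions.Fraction` per probabilità esatte:

```python
from fractions import Fraction

Omega = range(1, 7)   # lancio di un dado equo

def P(evento):
    # probabilità per enumerazione: casi favorevoli / casi possibili
    return Fraction(sum(1 for w in Omega if w in evento), 6)

E = {1, 2, 3}
F = {2, 4, 6}

sinistra = P(E | F)
destra = P(E) + P(F) - P(E & F)   # inclusione-esclusione
print(sinistra, destra)           # 5/6 5/6
```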
=== Spazi di probabilità ed Esiti equiprobabili <spazio-probabilita>
Definiamo lo *spazio di probabilità* come la tripla $(Omega, A, P)$ composta dallo spazio di _esiti possibili_ $Omega$, l'_algebra_ $A$ e la _funzione probabilità_ $P$.
/ Spazio equiprobabile: uno spazio è _equiprobabile_ se gli eventi elementari (i singoletti degli esiti di $Omega$, con $N = |Omega|$) hanno tutti la _stessa_ probabilità: $ P(E) = 1/N quad quad P({E_1, ..., E_k}) = k/N $
#dimostrazione[
Si dimostra con il secondo assioma di _Kolmogorov_:
$ P(Omega) = 1 = P({e_1}) + ... + P({e_N}) = sum_(i=1)^N P({e_i}) $
]
#nota[
Uno spazio può essere _equiprobabile_ solo se $Omega$ è un _insieme finito_
]
== Probabilità condizionata
Dati due eventi $E, F$, la probabilità che si verifichi l'evento $E$ _sapendo_ che _si è verificato_ l'evento $F$ è detta *probabilità condizionata*: $ P(E|F) = P(E sect F) / P(F) $
#nota[
- $P(E|F)$ si legge _"probabilità di $E$ dato $F$"_
- $E$ si dice evento _condizionato_
- $F$ si dice evento _condizionante_
]
#attenzione[
In caso $P(F) = 0$ (ad esempio $F = emptyset$), $P(E|F)$ non è definita
]
#informalmente[
Intuitivamente $P(E|F)$ è la probabilità che preso un punto qualsiasi all'interno di $F$, il punto appartenga a $E sect F$, quindi $(E sect F) / F$
]
=== Regola di fattorizzazione
Dati due eventi $E, F in Omega$, la probabilità che accadano _entrambi_ (la loro intersezione) è data dalla regola di _fattorizzazione_:
$ P(E sect F) = P(F) dot P(E|F) $
#informalmente[
A differenza di una probabilità condizionata "semplice", _non sappiamo_ se $F$ si sia già verificato o meno, quindi dobbiamo considerare anche la _sua probabilità_ oltre a quella condizionata di $E$
]
=== Teorema delle probabilità totali
Dato $Omega$ partizionato in $F_1, ..., F_n$ partizioni disgiunte, la probabilità che accada un evento $E in Omega$ è:
$ P(E) = sum_(i=1)^n P(F_i) dot P(E|F_i) $
#nota[
Insieme $A$ partizionato: $limits(union.big)_(i=1)^n F_i = A$ con $forall i, j, space i != j, space F_i sect F_j = emptyset$. L'_unione_ di tutte le partizioni è uguale all'insieme iniziale e tutte le partizioni sono _disgiunte_
]
#figure(caption: [
Probabilità di $E$:
$ P(E) =
mp(P(F_1)) dot P(mr(E)|mp(F_1)) + mb(P(F_2)) dot P(mr(E)|mb(F_2)) + mg(P(F_3)) dot P(mr(E)|mg(F_3))\
=(mp(1/3) dot mr(0)) + (mb(1/3) dot mr(1/6)) + (mg(1/3) dot mr(1/2)) = 2/9\
$
], [
#cetz.canvas({
import cetz.draw: *
rect((0, 0), (2,3), fill: rgb(255, 0, 255, 20))
rect((2, 0), (4,3), fill: rgb(0, 0, 255, 20))
rect((4, 0), (6,3), fill: rgb(0, 255, 0, 20))
circle((4.5,1.5), stroke: red, fill: rgb(255, 0, 0, 20), name: "E")
content((6.3,3.3), $ Omega $)
content("E", $ mr(E) $)
content((1, 0.3), $ mp(F_1) $)
content((3, 0.3), $ mb(F_2) $)
content((5, 0.3), $ mg(F_3) $)
})
])
È possibile esprimere $P(E)$ come:
$ P(E) = mr(P(E sect F)) &+ mb(P(E sect overline(F))) \
= mr(P(E | F) P(F)) &+ mb(P(E | overline(F)) P(mo(overline(F)))) \
= P(E | F) P(F) &+ P(E | overline(F)) dot mo((1-P(F))) $
Altre trasformazioni utili:
$ (E sect F) union (E sect overline(F)) = E sect (F union overline(F)) = E sect Omega = E $
$ (E sect F) sect (E sect overline(F)) = E sect (F sect overline(F)) = E sect emptyset = emptyset $
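Il calcolo numerico della figura sopra ($P(F_i) = 1\/3$ per ogni partizione e $P(E|F_i)$ pari a $0, 1\/6, 1\/2$) si può riprodurre con uno sketch illustrativo in Python:

```python
from fractions import Fraction

P_F = [Fraction(1, 3)] * 3                                # partizione equiprobabile
P_E_dato_F = [Fraction(0), Fraction(1, 6), Fraction(1, 2)]

# teorema delle probabilità totali: P(E) = Σ P(F_i) · P(E|F_i)
P_E = sum(pf * pe for pf, pe in zip(P_F, P_E_dato_F))
print(P_E)   # 2/9
```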
=== Teorema di Bayes
Dato $Omega$ partizionato in $F_1, ..., F_n$ partizioni disgiunte, e un evento $E$, la probabilità che si sia verificata una certa $F_i subset.eq Omega$ sapendo che si è verificato $E$ è:
$ P(F_i | E) &= (P(E | F_i) P(F_i)) / mr(P(E)) \
&= (P(E | F_i) P(F_i)) / mr(limits(sum)_(j=1)^n P(E | F_j) P(F_j)) $
#figure(caption: [
Probabilità di $F_2$ dato $E$:
$ P(F_2|E) =
(mp(P(E|F_2)) dot mb(P(F_2))) / mr(P(E)) \
= (mp(P(E sect F_2) / P(F_2)) mb(P(F_2))) / mr((P(F_1) dot P(E|F_1)) + (P(F_2) dot P(E|F_2)) + (P(F_3) dot P(E|F_3))) \
= (mp((1/6 dot 1/3) / (1/3)) mb(1/3)) / mr(2/9) = 1/4
$
], [
#cetz.canvas({
import cetz.draw: *
rect((0, 0), (2,3))
rect((2, 0), (4,3), fill: rgb(0, 0, 255, 20))
rect((4, 0), (6,3))
circle((4.5,1.5), stroke: red, fill: rgb(255, 0, 0, 20), name: "E")
content((6.3,3.3), $ Omega $)
content("E", $ mr(E) $)
content((1, 0.3), $ F_1 $)
content((3, 0.3), $ mb(F_2) $)
content((5, 0.3), $ F_3 $)
content((3.8, 1.5), $ mp(E|F_2) $)
})
])
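Anche il risultato $1\/4$ della figura si può riprodurre numericamente; uno sketch illustrativo che applica il teorema di Bayes con gli stessi dati:

```python
from fractions import Fraction

P_F = [Fraction(1, 3)] * 3
P_E_dato_F = [Fraction(0), Fraction(1, 6), Fraction(1, 2)]

# denominatore: teorema delle probabilità totali
P_E = sum(pf * pe for pf, pe in zip(P_F, P_E_dato_F))

# teorema di Bayes: P(F_2 | E) = P(E|F_2) · P(F_2) / P(E)
P_F2_dato_E = P_E_dato_F[1] * P_F[1] / P_E
print(P_F2_dato_E)   # 1/4
```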
=== Classificatore naive-Bayes
Possiamo generalizzare il _teorema di Bayes_ per ricavarne un *classificatore*: date delle _caratteristiche_ $X_1, ..., X_n$ che assumono valore $x_1, ..., x_n$, vogliamo assegnare l'oggetto alla _classe_ $y_k$ che massimizza la probabilità:
$ P(Y = y_k | X_1 = x_1, ..., X_n = x_n) $
Applicando il teorema di _Bayes_:
$ = (P(X_1 = x_1, ..., X_n = x_n | Y = y_k) dot P(Y = y_k)) / P(X_1 = x_1, ..., X_n = x_n) $
La formula viene semplificata in modo "ingenuo" _(naive)_, assumendo che le caratteristiche siano #link(<eventi-indipendenti>)[eventi indipendenti] $P(X_1 = x_1 and X_2 = x_2 | Y) = P(X_1 = x_1 | Y) dot P(X_2 = x_2 | Y)$:
$ = (P(Y = y_k) dot limits(product)_(i=1)^n P(X_i = x_i | Y = y_k)) / P(X_1 = x_1, ..., X_n = x_n) $
#informalmente[
Questa _assunzione_ è, appunto, _ingenua_: ad esempio, una persona con i capelli chiari è _più probabile_ che abbia anche gli occhi chiari rispetto ad una persona con i capelli scuri. Le due caratteristiche _non_ sono _indipendenti_.
Come capire _formalmente_ se due (o più) eventi sono _indipendenti_ è descritto nel #link(<eventi-indipendenti>)[paragrafo successivo]
]
Per trovare la classe alla quale _assegnare_ l'oggetto, bisogna calcolare la probabilità per ogni possibile $y_k$ e trovare il massimo $k^*$:
$ k^* = arg max_k P(Y = y_k) dot product_(i=1)^n P(X_i = x_i | Y = y_k) $
#nota[
Dato che ci interessa solo il $k$ che massimizza e il _denominatore_ non dipende da $k$, possiamo _ignorarlo_: non influenzerà la scelta del massimo
]
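Uno sketch illustrativo di classificatore naive-Bayes a conteggi su un dataset giocattolo completamente ipotetico (nomi, valori e classi inventati a scopo didattico):

```python
from collections import Counter, defaultdict

# dataset giocattolo ipotetico: (capelli, occhi) -> classe
dati = [
    (("chiari", "chiari"), "nord"),
    (("chiari", "scuri"),  "nord"),
    (("scuri",  "scuri"),  "sud"),
    (("scuri",  "scuri"),  "sud"),
    (("scuri",  "chiari"), "sud"),
]

classi = Counter(y for _, y in dati)
conteggi = defaultdict(Counter)          # conteggi[(classe, i)][valore]
for x, y in dati:
    for i, v in enumerate(x):
        conteggi[(y, i)][v] += 1

def classifica(x):
    # k* = argmax_k P(Y=y_k) · Π_i P(X_i = x_i | Y = y_k)
    migliore, p_migliore = None, -1.0
    for y, ny in classi.items():
        p = ny / len(dati)               # stima di P(Y = y)
        for i, v in enumerate(x):
            p *= conteggi[(y, i)][v] / ny   # stima di P(X_i = v | Y = y)
        if p > p_migliore:
            migliore, p_migliore = y, p
    return migliore

print(classifica(("chiari", "chiari")))   # nord
```

In pratica si usano stime _smussate_ (es. Laplace) per evitare probabilità nulle quando un valore non compare mai in una classe.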
=== Eventi indipendenti <eventi-indipendenti>
Quando il verificarsi di un evento $F$ _non influenza_ la probabilità del verificarsi di un altro evento $E$, allora gli eventi si dicono *indipendenti*:
$ P(E | F) = P(E) $
$ P(E sect F) = P(E) dot P(F) $
#nota[
Sfruttando le formule viste in precedenza, è possibile verificare che i conti tornino:
$ P(E) = mr(P(E sect F)) / P(F) = mr(P(E) dot cancel(P(F))) / cancel(P(F)) = P(E) $
]
#informalmente[
È molto difficile rappresentare _graficamente_ attraverso _diagrammi di Venn_ eventi indipendenti, meglio non farlo :)
]
/ Proprietà:
- Se $E$ è indipendente da $F$, $F$ è indipendente da $E$
- Se $E$ e $F$ sono indipendenti, allora anche $E$ e $overline(F)$ sono indipendenti
=== Indipendenza a tre o più eventi
Tre eventi $E, F, G$ sono _indipendenti_ se valgono le proprietà:
- $P(E sect F sect G) = P(E) dot P(F) dot P(G)$
- $P(E sect F) = P(E) dot P(F)$
- $P(F sect G) = P(F) dot P(G)$
- $P(E sect G) = P(E) dot P(G)$
È possibile _estendere_ la definizione ad un numero _arbitrario_ di eventi:
Gli eventi $E_1, ..., E_n$ si dicono indipendenti se per ogni loro sottogruppo $E_(a_1), ..., E_(a_r)$ con $1 <= a_1 < ... < a_r <= n$ vale l'equazione:
$ P(sect.big_(i=1)^r E_(a_i)) = product_(i=1)^r P(E_(a_i)) $
== Variabili aleatorie
Una _variabile aleatoria_ o _casuale_ (random variable) è una variabile che può assumere un valore _diverso_, non prevedibile, ad ogni osservazione. Le variabili aleatorie permettono di codificare gli _esiti_ numericamente.
Dato uno #link(<spazio-probabilita>)[spazio di probabilità] $(Omega, A, P)$, una variabile aleatoria è $X : Omega -> bb(R)$ che associa ad ogni _esito_ un _numero reale_.
#attenzione[
Non tutte le funzioni definite come $X : Omega -> bb(R)$ rappresentano una _variabile aleatoria_
]
/ Specificazioni: _valori_ che possono essere assunti da una variabile aleatoria
/ Supporto: l'_insieme_ delle _specificazioni_ per cui la probabilità non è nulla, $P(X = x_i) != 0$
#nota[
È possibile calcolare il _supporto_ di una variabile aleatoria *discreta* calcolando l'_insieme_ di punti in cui la #link(<funzione-massa>)[funzione di massa] non assuma valore nullo
]
==== Funzione indicatrice <indicatrice>
Dati $A, B$ due insiemi tali che $A subset.eq B$, la funzione indicatrice di $A$ rispetto a $B$ è la funzione $I : B -> {0, 1}$ che vale:
$ I_(A) (x) = cases(1 "se" x in A, 0 "se" x in.not A) $
#informalmente[
La _funzione indicatrice_ agisce da _filtro_, "limitando il dominio". Quando viene _moltiplicata_ con una probabilità la _annulla_ o la _lascia inalterata_.
Ad esempio, la funzione di massa di un _dado_ è: $P(X = x) = 1/6 dot mr(I_({1, ..., 6})(x))$, la funzione indicatrice _annulla_ la probabilità di $1/6$ in caso $x$ non sia nel "dominio" del dado ($1 <= x <= 6$)
]
=== Variabili aleatorie discrete <aleatorie-discrete>
Una variabile aleatoria si dice discreta se il suo supporto è un insieme _discreto_, ovvero è composto da _punti isolati_. Dato $[m,n]$ il range di valori che possono _essere assunti_ dalla variabile $X$, vale:
$ 1 = P(Omega) = P(union.big_(i=m)^n {X = i}) = sum_(i=m)^n P(X = i) $
==== Funzione di massa di probabilità <funzione-massa>
Data una variabile aleatoria discreta $X$ con supporto $D$, la sua _funzione di massa di probabilità_ $P_X : bb(R) -> [0,1]$ è la funzione che associa ad ogni valore $x in bb(R)$ la probabilità che l'esito di $X$ ne sia uguale:
$ forall x in bb(R), quad p_(X)(x) = P(X = x) dot I_(D)(x) $
*Proprietà* che la funzione di massa di probabilità deve _rispettare_:
- non può essere *negativa*:
$ forall x in bb(R), space p_(X)(x) >= 0 $
- la *somma* della funzione di massa per tutti i valori che $x$ può assumere deve fare $1$:
$ sum_(x in D) p_(X)(x) = 1 $
#figure(caption: [Grafico funzione di massa di probabilità di un dado],
cetz.canvas({
import cetz: *
plot.plot(
name: "massa",
size: (4,4),
x-tick-step: 1,
y-tick-step: 0.1666,
x-min: 0,
x-max: 6,
y-min: 0,
y-max: 1,
axis-style: "school-book",
{
plot.add-anchor("1", (1, 0.1666))
plot.add-anchor("2", (2, 0.1666))
plot.add-anchor("3", (3, 0.1666))
plot.add-anchor("4", (4, 0.1666))
plot.add-anchor("5", (5, 0.1666))
plot.add-anchor("6", (6, 0.1666))
plot.add-anchor("01", (1, 0))
plot.add-anchor("02", (2, 0))
plot.add-anchor("03", (3, 0))
plot.add-anchor("04", (4, 0))
plot.add-anchor("05", (5, 0))
plot.add-anchor("06", (6, 0))
plot.add-anchor("00", (0, 0))
plot.add-anchor("inf", (7.5, 0))
})
draw.line("massa.00", "massa.06", stroke: red)
draw.line("massa.06", "massa.inf", stroke: (paint: red, dash: "dashed"))
draw.line("massa.01", "massa.1", stroke: (dash: "dashed"))
draw.line("massa.02", "massa.2", stroke: (dash: "dashed"))
draw.line("massa.03", "massa.3", stroke: (dash: "dashed"))
draw.line("massa.04", "massa.4", stroke: (dash: "dashed"))
draw.line("massa.05", "massa.5", stroke: (dash: "dashed"))
draw.line("massa.06", "massa.6", stroke: (dash: "dashed"))
draw.circle("massa.1", fill: red, radius: .07)
draw.circle("massa.2", fill: red, radius: .07)
draw.circle("massa.3", fill: red, radius: .07)
draw.circle("massa.4", fill: red, radius: .07)
draw.circle("massa.5", fill: red, radius: .07)
draw.circle("massa.6", fill: red, radius: .07)
draw.circle("massa.01", fill: white, radius: .07)
draw.circle("massa.02", fill: white, radius: .07)
draw.circle("massa.03", fill: white, radius: .07)
draw.circle("massa.04", fill: white, radius: .07)
draw.circle("massa.05", fill: white, radius: .07)
draw.circle("massa.06", fill: white, radius: .07)
draw.content("massa.1", [$1/6$], anchor: "south", padding: 0.3)
draw.content("massa.2", [$1/6$], anchor: "south", padding: 0.3)
draw.content("massa.3", [$1/6$], anchor: "south", padding: 0.3)
draw.content("massa.4", [$1/6$], anchor: "south", padding: 0.3)
draw.content("massa.5", [$1/6$], anchor: "south", padding: 0.3)
draw.content("massa.6", [$1/6$], anchor: "south", padding: 0.3)
}))
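Le due proprietà della funzione di massa si possono controllare direttamente sul dado della figura; uno sketch illustrativo:

```python
from fractions import Fraction

D = range(1, 7)   # supporto del dado

def p(x):
    # p_X(x) = 1/6 · I_{1,...,6}(x)
    return Fraction(1, 6) if x in D else Fraction(0)

# non negatività (anche fuori dal supporto) e somma unitaria sul supporto
assert all(p(x) >= 0 for x in range(-2, 10))
assert sum(p(x) for x in D) == 1
print(p(3), p(7))   # 1/6 0
```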
==== Funzione di ripartizione <funzione-ripartizione>
Data una variabile aleatoria $X$, la sua _funzione di ripartizione_ o _distribuzione cumulativa_ $F_X : bb(R) -> [0, 1]$ è la funzione che associa ad ogni valore $x in bb(R)$ la probabilità che l'esito di $X$ ne sia minore o uguale:
$ forall x in bb(R), quad F_(X)(x) = P(X <= x) $
*Proprietà* che la funzione di ripartizione deve _rispettare_:
- è una funzione a scalini con punti di discontinuità sul supporto della variabile
- è una funzione monotona crescente:
$ forall x_1 < x_2, space F(x_1) <= F(x_2) $
- probabilità per un _valore maggiore_ di una certa soglia o compreso tra due estremi sfruttando il complementare:
$ P(X>x) = 1 - P(X <= x) = 1 - F(x) $
$ P(a < X <= b) = P(X <= b) - P(X <= a) = F(b) - F(a) $
- può essere riscritta come la somma della _funzione di massa_ di tutti i valori minori o uguali:
$ F(a) = sum_(x <= a) p(x) $
- assume $0$ e $1$ come estremi:
$ lim_(x -> -infinity) F(x) = 0, quad lim_(x -> +infinity) F(x) = 1 $
#figure(caption: [Grafico funzione di ripartizione di un dado],
cetz.canvas({
import cetz: *
plot.plot(
name: "ripartizione",
size: (4,4),
x-tick-step: 1,
y-tick-step: 0.1666,
axis-style: "school-book",
x-min: 0,
x-max: 6,
y-min: 0,
y-max: 1,
{
plot.add-anchor("1", (1, 0.1666))
plot.add-anchor("2", (2, 0.3333))
plot.add-anchor("3", (3, 0.5))
plot.add-anchor("4", (4, 0.6666))
plot.add-anchor("5", (5, 0.8333))
plot.add-anchor("6", (6, 1))
plot.add-anchor("00", (0, 0))
plot.add-anchor("01", (1, 0))
plot.add-anchor("02", (2, 0.1666))
plot.add-anchor("03", (3, 0.3333))
plot.add-anchor("04", (4, 0.5))
plot.add-anchor("05", (5, 0.6666))
plot.add-anchor("06", (6, 0.8333))
plot.add-anchor("66", (6.5, 1))
plot.add-anchor("6inf", (7.2, 1))
})
draw.line("ripartizione.00", "ripartizione.01", stroke: red)
draw.line("ripartizione.1", "ripartizione.02", stroke: red)
draw.line("ripartizione.2", "ripartizione.03", stroke: red)
draw.line("ripartizione.3", "ripartizione.04", stroke: red)
draw.line("ripartizione.4", "ripartizione.05", stroke: red)
draw.line("ripartizione.5", "ripartizione.06", stroke: red)
draw.line("ripartizione.6", "ripartizione.66", stroke: red)
draw.line("ripartizione.66", "ripartizione.6inf", stroke: (paint: red, dash: "dashed"))
draw.circle("ripartizione.1", fill: red, radius: .07)
draw.circle("ripartizione.2", fill: red, radius: .07)
draw.circle("ripartizione.3", fill: red, radius: .07)
draw.circle("ripartizione.4", fill: red, radius: .07)
draw.circle("ripartizione.5", fill: red, radius: .07)
draw.circle("ripartizione.6", fill: red, radius: .07)
draw.content("ripartizione.1", [$1/6$], anchor: "south", padding: 0.3)
draw.content("ripartizione.2", [$2/6$], anchor: "south", padding: 0.3)
draw.content("ripartizione.3", [$3/6$], anchor: "south", padding: 0.3)
draw.content("ripartizione.4", [$4/6$], anchor: "south", padding: 0.3)
draw.content("ripartizione.5", [$5/6$], anchor: "south", padding: 0.3)
draw.content("ripartizione.6", [$1$], anchor: "south", padding: 0.3)
}))
==== Valore atteso
Il _valore atteso_ di una variabile aleatoria $X$ è un _indice dimensionale_ di #link(<indice-centralita>)[centralità] delle _specificazioni_ della variabile aleatoria.
$ E[X] = sum_i x_i dot P(X = x_i) = sum_i x_i dot p(x_i) $
#informalmente[
Il _valore atteso_ è semplicemente la _"media pesata"_ per ogni possibile valore nel _dominio_ (specificazione) di una variabile aleatoria
]
#nota[
- $mu$: media dell'intera popolazione
- $overline(X)$: media di un campione (media campionaria)
- $E[overline(X)] = mu$: il valore atteso della media campionaria è uguale a $mu$
- $E[X] = E[overline(X)] = mu$: il valore atteso della variabile aleatoria coincide con il valore atteso della media campionaria, cioè con $mu$
]
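Per il dado equo la "media pesata" si calcola subito; uno sketch illustrativo che valuta $E[X] = limits(sum)_i x_i dot p(x_i)$:

```python
from fractions import Fraction

D = range(1, 7)
p = lambda x: Fraction(1, 6)      # dado equo

# E[X] = Σ x · p(x)
E = sum(x * p(x) for x in D)
print(E)   # 7/2
```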
*Proprietà* del valore atteso: <proprieta-valore-atteso>
- il _valore atteso_ di una funzione indicatrice è uguale alla _probabilità dell'evento_: $ E[I_A] = P(A) $
#dimostrazione[
$ E[I] &= 1 dot P(I=1) + 0 dot P(I=0) \
&= P(I=1) = P(A) $
]
- il _valore atteso_ di una variabile aleatoria discreta $X$ opera in modo _lineare_: $ Y = a dot X + b quad quad E[Y] = a dot E[X] + b $
#dimostrazione[
$ E[Y] &= sum_i (a x_i +b) P(X = x_i) \
&= a sum_i x_i P(X = x_i) + b sum_i P(X = x_i) \
&= a E[X] + b sum_i p(x_i) \
&= a E[X] + b dot 1 $
]
- data una qualsiasi _funzione_ reale $g$ e una variabile aleatoria $X$ con funzione di massa $p$, allora vale: $ E[g(X)] = sum_i g(x_i) dot p(x_i) $
- data una qualsiasi _funzione_ reale $g$ di due variabili e due variabili aleatorie discrete $X, Y$, allora vale: $ E[g(X, Y)] = sum_x sum_y g(x,y) dot p(x,y) $
==== Varianza
Sia $X$ una variabile aleatoria di media $mu$, la varianza $sigma^2$ di $X$ è:
$ "Var"(X) = sigma_X^2 = E[(X - mu)^2] $
$ "Var"(X) = E[X^2] - E[X]^2 $
#dimostrazione[
$ "Var"(X) &= E[(X - mu)^2] \
&= E[X^2 - 2 mu X + mu^2] \
&= E[X^2] - 2 mu E[X] + mu^2 \
&= E[X^2] - 2mu^2 + mu^2 \
&= E[X^2] - E[X]^2 \
&= (sum_i x_i^2 dot P(X = x_i)) - E[X]^2 $
]
*Proprietà* varianza:
- la varianza della funzione indicatrice è la probabilità dell'_evento_ moltiplicata per la probabilità dell'_evento complementare_ $ "Var"(I) = P(A) dot P(overline(A)) $
#dimostrazione[
$ "Var"(I) &= E[I^2] - E[I]^2 \
&= E[I] - E[I]^2 \
&= E[I](1 - E[I]) \
&= P(A)[1 - P(A)] \
&= P(A) P(overline(A)) $
]
- la varianza non opera in modo _lineare_: $ "Var"(a X + b) = a^2 "Var"(X) $
#dimostrazione[
$ "Var"(a X + b) &= E[(a X +b - mu)^2] \
&= E[(a X + b - E[a X + b])^2] \
&= E[(a X + cancel(b) - a mu - cancel(b))^2] \
&= E[a^2 (X - mu)^2] \
&= a^2 E[(X - mu)^2] \
&= a^2 "Var"(X) $
]
- la varianza della _media campionaria_ vale: $ "Var"(overline(X)) = "Var"(X)/n $
#dimostrazione[
$ "Var"(overline(X)) &= "Var"(1/n sum_(i=1)^n X_i) \
&= 1/n^2 "Var"(sum_(i=1)^n X_i) \
&= 1/n^cancel(2) dot cancel(n) "Var"(X) \
&= "Var"(X)/n $
infatti
$ "Var"(sum_(i=1)^n X_i) &= "Var"(X_1 + ... + X_n) \
&= "Var"(X_1) + ... + "Var"(X_n) \
&= n "Var"(X) != "Var"(n X) $
]
/ Deviazione standard: la deviazione standard $sigma$ è calcolata come la radice quadrata della _varianza_: $ sigma_X = sqrt("Var"(X)) $
Anche la deviazione standard non opera in modo lineare:
$ sigma_(a X + b) = sqrt(a^2 "Var"(X)) = |a| sigma_X $
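Varianza e deviazione standard del dado equo si possono calcolare con la formula $"Var"(X) = E[X^2] - E[X]^2$; uno sketch illustrativo:

```python
from fractions import Fraction
from math import sqrt

D = range(1, 7)
p = lambda x: Fraction(1, 6)

E  = sum(x * p(x) for x in D)         # E[X] = 7/2
E2 = sum(x * x * p(x) for x in D)     # E[X^2] = 91/6
var = E2 - E ** 2                     # Var(X) = E[X^2] - E[X]^2
print(var)         # 35/12
print(sqrt(var))   # deviazione standard ≈ 1.708
```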
=== Variabili aleatorie multivariate
// TODO: rivedere proprietà valore atteso, varianza, covarianza tra più variabili
Oltre alle #link(<aleatorie-discrete>)[variabili aleatorie *univariate*], è possibile utilizzare un _vettore_ di lunghezza arbitraria, ottenendo una variabile aleatoria *multivariata*.
#nota[
Quando il vettore è lungo $2$ elementi, la variabile aleatoria si dice *bivariata*
]
==== Funzione di ripartizione congiunta
Sia $A$ una variabile aleatoria bivariata formata da $X,Y$ variabili aleatorie _univariate discrete_, allora la loro _funzione di ripartizione congiunta_ è:
$ F_"X,Y" (x,y) = P(X <= x, Y <= y) $
#nota[
La virgola dentro la probabilità denota l'intersezione: $ P(X <= x, Y <= y) = P({X <= x} sect {Y <= y}) $
]
È possibile _estendere_ a variabili aleatorie _multivariate_ di dimensione arbitraria:
$ F_(X_1, ..., X_n)(x_1, ..., x_n) = P(X_1 <= x_1, ..., X_n <= x_n) $
#informalmente[
Si dice _funzione di ripartizione (o massa)_ *marginale* quando da una _funzione di ripartizione (o massa)_ *congiunta* estraggo una _funzione di ripartizione (o massa)_ di una variabile *univariata*
]
Possiamo ottenere una $F_(X)(x)$ *funzione di ripartizione marginale* di $X$:
$ lim_(y -> +infinity) F_"X,Y" (x,y) = F_X (x) $
#dimostrazione[
$ lim_(y -> +infinity) F_"X,Y" (x,y) &= mr(lim_(y -> +infinity)) P({X <= x} sect mr({Y <= y})) \
&= P({X <= x} sect mr(Omega)) \
&= P(X <= x) \
&= F_X (x) $
]
==== Funzione di massa di probabilità congiunta
Siano $X,Y$ due variabili aleatorie _univariate discrete_, allora la loro _funzione di massa di probabilità congiunta_ $p: bb(R) times bb(R) -> [0, 1]$ è:
$ p_(X,Y) (x,y) = P(X = x, Y = y) $
È possibile _estendere_ a variabili aleatorie _multivariate_ di dimensione arbitraria:
$ p_(X_1, ..., X_n)(x_1, ..., x_n) = P(X_1 = x_1, ..., X_n = x_n) $
Possiamo ottenere una $p_(X)(x)$ *funzione di massa di probabilità marginale* di $X$ nel seguente modo:
$ sum_(y in D_Y) p_(X,Y) (x,y) = p_X (x) $
#dimostrazione[
$ sum_(y in D_Y) p_(X,Y) (x,y) &= mr(sum_(y in D_Y)) P({X = x} sect mr({Y = y})) \
&= P({X = x} sect space mr(union.big_(y in D_Y) {Y = y})) \
&= P({X = x} sect mr(Omega)) \
&= P({X = x}) \
&= p_X (x) $
]
==== Indipendenza
Due variabili aleatorie $X, Y$ si dicono *indipendenti* se $forall A, B subset.eq RR, space X in A$ e $Y in B$ sono #link(<eventi-indipendenti>)[indipendenti]:
$ P(X in A, Y in B) = P(X in A) dot P(Y in B) $
$ P(X <= a, Y <= b) = P(X <= a) dot P(Y <= b) $
$ p(x,y) = p_(X)(x) dot p_(Y)(y) $
$ F(a,b) = F_(X)(a) dot F_(Y)(b) $
#nota[
Questi risultati sono _dimostrabili_ usando gli _assiomi delle probabilità_
]
È possibile _estendere_ a variabili aleatorie _multivariate_ di dimensione arbitraria:
$ P(X_1 in A_1, ..., X_n in A_n) = product_(i=1)^n P(X_i in A_i) $
==== Valore atteso
Il valore atteso della _somma_ di variabili aleatorie discrete è:
$ E[sum_i X_i] = sum_i E[X_i] $
Il valore atteso del _prodotto_ di variabili aleatorie discrete _indipendenti_ è:
$ E[product_i X_i] = product_i E[X_i] $
==== Covarianza
Siano $X$ e $Y$ due variabili aleatorie di media $mu_X$ e $mu_Y$, la loro _covarianza_ è:
$ "Cov"(X, Y) = E[(X - mu_X)(Y - mu_Y)] $
$ "Cov"(X, Y) = E[X Y] - E[X] E[Y] $
#dimostrazione[
$ "Cov"(X, Y) &= E[X Y - mu_X Y - mu_Y X + mu_X mu_Y] \
&= E[X Y] - mu_X E[Y] - mu_Y E[X] + mu_X mu_Y \
&= E[X Y] - E[X] E[Y] $
]
*Proprietà* della covarianza:
- simmetria: $ "Cov"(X, Y) = "Cov"(Y, X) $
- generalizzazione concetto di varianza: $ "Cov"(X, X) = "Var"(X) $
- non è lineare, è possibile solo scalarla:
- $"Cov"(a X + b, Y) = a "Cov"(X, Y)$
#dimostrazione[
$ "Cov"(a X + b, Y) &= E[(a X + b) Y] - E[a X + b] E[Y] \
&= a E[X Y] + cancel(b E[Y]) - a E[X] E[Y] - cancel(b E[Y]) \
&= a(E[X Y] - E[X] E[Y]) \
&= a "Cov"(X, Y) $
]
- $"Cov"(X + Y, Z) = "Cov"(X, Z) + "Cov"(Y, Z)$
- $"Cov"(limits(sum)_i X_i, limits(sum)_j Y_j) = limits(sum)_i limits(sum)_j "Cov"(X_i, Y_j)$
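Le formule $"Cov"(X, Y) = E[X Y] - E[X] E[Y]$ e $"Cov"(a X + b, Y) = a "Cov"(X, Y)$ si possono verificare su una funzione di massa congiunta giocattolo (valori ipotetici, scelti solo per l'esempio):

```python
from fractions import Fraction

# funzione di massa congiunta ipotetica su {0,1} × {0,1}
p = {(0, 0): Fraction(1, 2), (0, 1): Fraction(1, 10),
     (1, 0): Fraction(1, 10), (1, 1): Fraction(3, 10)}
assert sum(p.values()) == 1

E_X  = sum(x * q for (x, y), q in p.items())
E_Y  = sum(y * q for (x, y), q in p.items())
E_XY = sum(x * y * q for (x, y), q in p.items())

cov = E_XY - E_X * E_Y            # Cov(X,Y) = E[XY] - E[X]E[Y]

# Cov(aX + b, Y) = a · Cov(X, Y): la costante b si cancella
a, b = 3, 2
cov_scalata = (a * E_XY + b * E_Y) - (a * E_X + b) * E_Y
assert cov_scalata == a * cov
print(cov)
```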
==== Varianza <multivariate-varianza>
Siano $X$ e $Y$ due variabili aleatorie, la _varianza_ della loro _somma_ (o _differenza_) è:
$ "Var"(X + Y) = "Var"(X) + "Var"(Y) + 2 "Cov"(X, Y) $
$ "Var"(X - Y) = "Var"(X) + "Var"(Y) - 2 "Cov"(X, Y) $
È possibile _estendere_ a variabili aleatorie _multivariate_ di dimensione arbitraria:
$ "Var"(sum_(i=1)^n X_i) = sum_(i=1)^n "Var"(X_i) + sum_(i=1)^n sum_(j=1, j != i)^n "Cov"(X_i, X_j) $
=== Variabili aleatorie continue <aleatorie-continue>
Una variabile aleatoria si dice *continua* quando ha un _supporto *non* numerabile_.
==== Funzione densità di probabilità
#informalmente[
La #link(<funzione-massa>)[funzione di massa] (come spiegato sotto) non ha più senso per le _variabili aleatorie continue_, quindi lo stesso concetto prende il nome di _funzione densità di probabilità_
]
Siccome $X$ deve per forza assumere un valore in $bb(R)$, allora la _funzione di densità_ $f_"X" : bb(R) -> bb(R)^+$ deve rispettare:
$ P(X in bb(R)) = integral_(-infinity)^(+infinity) f_(X)(x) dif x = 1 $
Per variabili aleatorie _continue_ non ha senso cercare la probabilità assunta da un _singolo valore_, infatti:
$ P(X = a) = integral_a^a f_(X)(x) dif x = 0 $
Per questo motivo si ragiona in termini di _intervalli_ di probabilità:
$ P(a <= X <= b) = integral_a^b f_(X)(x) dif x $
==== Funzione di ripartizione
Data una variabile aleatoria $X$, la sua _funzione di ripartizione_ o _distribuzione cumulativa_ $F_X : bb(R) -> [0, 1]$ è la funzione che associa ad ogni valore $x in bb(R)$ la probabilità che l'esito di $X$ ne sia minore o uguale:
$ forall x in bb(R), quad F_(X)(x) = P(X <= x) $
La funzione di ripartizione $F$ viene calcolata come l'_integrazione_ della funzione di densità $f$ fino ad una soglia:
$ F(a) &= P(X <= a) = integral_(-infinity)^a f_(X)(x) dif x $
Quindi la *funzione di densità* è uguale alla derivata della funzione di ripartizione $F$:
$ f_(X)(a) = F'(a) $
*Proprietà* che la funzione di ripartizione deve _rispettare_:
- è una funzione monotona crescente continua:
$ forall x_1 < x_2, space F(x_1) <= F(x_2) $
#attenzione[
Non è più una funzione a scalini, non ha discontinuità
]
- probabilità per un _valore maggiore_ di una certa soglia o compreso tra due estremi sfruttando il complementare:
$ P(X>x) = 1 - P(X <= x) = 1 - F(x) $
$ P(a < X <= b) = P(X <= b) - P(X <= a) = F(b) - F(a) $
- assume $0$ e $1$ come estremi:
$ lim_(x -> -infinity) F(x) = 0, quad lim_(x -> +infinity) F(x) = 1 $
==== Valore atteso
Il _valore atteso_ di una variabile aleatoria continua vale:
$ E[X] = integral_(-infinity)^(+infinity) x dot f_(X)(x) dif x $
#nota[
Formula alternativa per il valore atteso (valida per $X >= 0$):
$ E[X] = integral_0^(+infinity) (1 - F_(X)(x)) dif x $
]
#nota[
Valgono le stesse proprietà del valore atteso per le #link(<proprieta-valore-atteso>)[variabili aleatorie discrete]:
- opera in modo _lineare_: $ Y = a dot X + b quad quad E[Y] = a dot E[X] + b $
- data una qualsiasi _funzione_ reale $g$ e una variabile aleatoria $X$ con funzione di densità $f$, allora vale: $ E[g(X)] = integral_(-infinity)^(+infinity) g(x) dot f_(X)(x) dif x $
]
==== Varianza
La _varianza_ di una variabile aleatoria continua vale:
$ "Var"(X) = E[(X - mu)^2] = integral_(-infinity)^(+infinity) (x - mu)^2 f_(X)(x) dif x $
==== Disuguaglianza di Markov <disuguaglianza-markov>
#informalmente[
Permette di ottenere un limite superiore alla probabilità dalla sola conoscenza del valore atteso
]
Sia $X$ una _variabile aleatoria_ non negativa ($X >= 0$), allora $forall a in bb(R), a > 0$ vale:
$ P(X >= a) <= E[X] / a $
#nota[
Possiamo trarre che:
$ P(X < a) space = space 1 - P(X >= a) space >= space 1 - E[X] / a $
]
#dimostrazione[
Variabili aleatorie *discrete*:
$ E[X] &= sum_(mr(x>=0)) x dot p(x) \
&= sum_(mr(x < a)) x dot p(x) + sum_(mr(x >= a)) x dot p(x)
quad >= quad sum_(mb(x >= a)) mb(x) dot p(x)
quad &>= quad sum_(mb(x >= a)) mb(a) dot p(x) \
& & = a dot sum_(x >= a) p(x) \
& & = a dot P(X >= a) $
Quindi
$ mr(E[X]) >= mb(mp(a) dot P(X >= a)) quad => quad mb(P(X >= a)) <= mr(E[X]) / mp(a) $
]
#dimostrazione[
Variabili aleatorie *continue*:
$ E[X] &= integral_(mr(-infinity))^(mr(+infinity)) x f_X (x) dif x \
&= integral_(mr(0))^(mr(a)) x f_X (x) dif x + integral_(mr(a))^(mr(+infinity)) x f_X (x) dif x
quad >= quad integral_(mb(a))^(mb(+infinity)) mb(x) f_X (x) dif x
quad &>= quad integral_(mb(a))^(mb(+infinity)) mb(a) f_X (x) dif x \
& &= a integral_(a)^(+infinity) f_X (x) dif x \
& &= a dot P(X >= a) $
Quindi $ mr(E[X]) >= mb(mp(a) dot P(X >= a)) quad => quad mb(P(X >= a)) <= mr(E[X]) / mp(a) $
]
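Il limite superiore di Markov si può controllare esaustivamente sul dado equo (che è non negativo); uno sketch illustrativo:

```python
from fractions import Fraction

D = range(1, 7)
p = lambda x: Fraction(1, 6)   # dado equo, X >= 0

E = sum(x * p(x) for x in D)   # E[X] = 7/2
for a in range(1, 7):
    sinistra = sum(p(x) for x in D if x >= a)   # P(X >= a)
    destra = E / a                              # E[X] / a
    assert sinistra <= destra                   # disuguaglianza di Markov
    print(a, sinistra, destra)
```

Ad esempio per $a = 4$: $P(X >= 4) = 1\/2 <= (7\/2)\/4 = 7\/8$.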
==== Disuguaglianza di Chebyshev <chebyshev>
#informalmente[
Permette di ottenere un limite superiore alla probabilità che il valore di una variabile aleatoria si discosti dal suo valore atteso di una quantità maggiore o uguale a una soglia scelta
]
Sia $X$ una _variabile aleatoria_ di valore atteso $E[X] = mu$ e varianza $"Var"(X) = sigma^2$, allora:
$ forall r > 0, quad P(|X - mu| >= r) <= sigma^2 / r^2 $
#informalmente[
$|X - mu|$ è la distanza tra la variabile aleatoria e il suo valore atteso
]
#nota[
Un'_applicazione_ della _disuguaglianza di Chebyshev_ riguarda la _deviazione standard_: esprime l'andamento della probabilità allontanandosi dal valore atteso di multipli della deviazione standard:
$ P(|X - mu| >= k sigma) <= 1/k^2 $
]
#dimostrazione[
Partendo dalla #link(<disuguaglianza-markov>)[disuguaglianza di Markov]:
$ P(X >= a) <= E[X] / a $
Al posto della costante $mb(a)$, usiamo la costante $mb(r^2)$:
$ P(X >= mb(r^2)) <= E[X] / mb(r^2) $
Al posto della variabile aleatoria $mp(X)$, usiamo $mp(Y = (X - mu)^2)$:
$ P(mp((X - mu)^mg(2)) >= r^mg(2)) &<= E[mp((X - mu)^2)] / r^2 \
P(mg(|X - mu |) >= mg(r)) &<= mr(E[(X - mu)^2]) / r^2 \
P(|X - mu| >= r) &<= mr(sigma^2) / r^2 $
]
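Anche Chebyshev si può controllare esaustivamente sul dado equo ($mu = 7\/2$, $sigma^2 = 35\/12$); uno sketch illustrativo:

```python
from fractions import Fraction

D = range(1, 7)
p = lambda x: Fraction(1, 6)

mu  = sum(x * p(x) for x in D)               # 7/2
var = sum((x - mu) ** 2 * p(x) for x in D)   # 35/12

for r in (1, 2, 3):
    sinistra = sum(p(x) for x in D if abs(x - mu) >= r)   # P(|X-μ| >= r)
    destra = var / r ** 2                                  # σ² / r²
    assert sinistra <= destra                              # Chebyshev
    print(r, sinistra, destra)
```

Ad esempio per $r = 2$: solo gli esiti $1$ e $6$ distano almeno $2$ da $mu$, quindi $1\/3 <= (35\/12)\/4 = 35\/48$.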
=== Modelli di distribuzione discreti
Alcune _distribuzioni/modelli_ di variabili aleatorie sono molto _frequenti_, di conseguenza esistono dei risultati notevoli.
==== Modello di Bernoulli $X tilde B(p)$ <bernoulliano>
La variabile aleatoria può assumere solo due specificazioni: *fallimento* o *successo*, ovvero il suo supporto è $D_X = { 0, 1 }$. Il parametro $p$ indica la probabilità che $X = 1$, con $p in [0, 1]$
$ mr(X tilde B(p)) $
/ Funzione di massa:
$ p_X (x) = P(X = x) &= mr(p^x (1-p)^((1-x)) I_{0,1} (x)) \
&= cases(1-p quad "per " x=0, p quad "per " x=1, 0 quad "altrimenti") $
/ Funzione di ripartizione:
$ F_X (x) = P(X <= x) &= mr((1 - p) I_[0,1)(x) + I_([1, +infinity))(x)) \
&= cases(0 quad "se " x < 0, 1-p quad "se " 0 <= x < 1, 1 quad "se " x >= 1 ) $
/ Valore atteso:
$ E[X] = mr(p) $
#dimostrazione[
$ E[X] &= sum_x x dot P(X = x) \
&= 0 dot P(X = 0) + 1 dot P(X = 1) \
&= 1 dot P(X = 1) = p $
]
/ Varianza:
$ "Var"(X) = mr(p(1-p)) $
#dimostrazione[
$ "Var"(X) &= E[(X - mu)^2] \
&= E[(X - p)^2] \
&= sum_x (x-p)^2 dot P(X=x) \
&= (0-p)^2 dot P(X = 0) + (1-p)^2 dot P(X=1) \
&= p^2(1-p) + p(1-p)^2 \
&= p(1-p) (cancel(p) + 1 cancel(-p)) = p(1-p) $
]
==== Modello binomiale $X tilde B(n, p)$ <binomiale>
Il modello ripete $n$ volte un *esperimento bernoulliano indipendente* di probabilità $p$, dove $n$ e $p$ sono i due parametri del modello. La variabile aleatoria $X$ assume il *numero di successi*, quindi il supporto è $D_X = {0, ..., n}$.
$ mr(X tilde B(n, p)) $
/ Funzione di massa:
$ p_X (x) = P(X = x) = mr(binom(n, x) p^x (1-p)^((n-x)) I_{0, ..., n} (x)) $
#figure(caption: [Funzione di massa modello binomiale], image("imgs/binomiale-massa.png", width: 40%))
#informalmente[
Ci servono $n-x$ insuccessi, quindi $(1-p)^(n-x)$ e ci servono $x$ successi, quindi $p^x$. Ma non ci interessa l'ordine con il quale avvengono _insuccessi_ e _successi_, quindi utilizziamo il _coefficiente binomiale_ $binom(n,x)$
]
/ Funzione di ripartizione:
$ F_X (x) &= P(X <= x) = \
&= sum_(i=0)^floor(x) p_X (i) dot I_[0, n](x) + I_((n, +infinity))(x) \
&= mr(sum_(i=0)^floor(x) binom(n, i) p^i (1-p)^(n-i) dot I_[0,n] (x) + I_((n, +infinity)) (x)) \
&= cases(
limits(sum)_(i=0)^floor(x) p_X (i) quad "per " x <= n,
1 quad "per " x > n
)
$
#attenzione[
Nella formula della funzione di ripartizione, $p_X$ indica la _funzione di massa_, solo $p$ indica la _probabilità di successo_
]
/ Valore atteso:
$ E[X] = mr(n p) $
#dimostrazione[
$ E[X] &= E[sum_(i=1)^n X_i] \
&= sum_(i=1)^n E[X_i] = n dot p $
]
/ Varianza:
$ "Var"(X) = mr(n p (1-p)) $
/ Riproducibilità:
Siano $X_1 tilde B(n, p)$ e $X_2 tilde B(m, p)$ indipendenti, allora:
$ X_1 + X_2 = sum_(i=1) ^n X_(1,i) + sum_(j=1)^m X_(2,j) = sum_(i=1)^(n+m) Y_i = Y $
dove $Y tilde B(n+m, p)$
#nota[
Per la riproduzione, il parametro $n$ può essere diverso per ogni variabile, ma il parametro $p$ deve essere uguale
]
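Una verifica numerica della funzione di massa e dei risultati $E[X] = n p$ e $"Var"(X) = n p (1-p)$, per somma diretta sul supporto (i valori di $n$ e $p$ sono di esempio):

```python
import math

# Funzione di massa del modello binomiale B(n, p).
def pmf_binomiale(x, n, p):
    return math.comb(n, x) * p**x * (1 - p)**(n - x)

n, p = 10, 0.3
massa = [pmf_binomiale(x, n, p) for x in range(n + 1)]
attesa = sum(x * m for x, m in enumerate(massa))
varianza = sum((x - attesa)**2 * m for x, m in enumerate(massa))

assert abs(sum(massa) - 1) < 1e-12          # la massa totale è 1
assert abs(attesa - n * p) < 1e-9           # E[X] = np
assert abs(varianza - n * p * (1 - p)) < 1e-9
```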
==== Modello uniforme discreto $X tilde U(n)$ <uniforme-discreto>
Tutti gli esiti della variabile aleatoria discreta sono *equiprobabili*, dove il parametro $n$ è il numero dei possibili esiti, con $n in bb(N) backslash {0}$. Ogni esito possibile viene mappato su un numero naturale, la variabile assume il numero associato all'*esito dell'esperimento*, quindi il supporto del modello è $D_X = {1, ..., n}$.
$ mr(X tilde U(n)) $
/ Funzione di massa:
$ p_X (x) = P(X = x) = mr(1/n I_{1, ..., n} (x)) $
#figure(caption: [Funzione di massa modello uniforme discreto], image("imgs/uniforme-discreto-massa.png", width: 40%))
/ Funzione di ripartizione:
$ F_X (x) = P(X <= x) = mr(floor(x) / n I_[1, n] (x) + I_((n, +infinity)) (x)) $
#dimostrazione[
$ forall x <= n quad F_X (x) &= P(X <= x) \
&= sum_(i=1)^floor(x) P(X = i) \
&= sum_(i=1)^floor(x) 1/n \
&= floor(x) dot 1/n = floor(x)/n $
]
#figure(caption: [Funzione di ripartizione modello uniforme discreto], image("imgs/uniforme-discreto-ripartizione.png", width: 40%))
/ Valore atteso:
$ E[X] = mr((n+1)/2) $
#dimostrazione[
$ E[X] &= sum_(i=1)^n i dot P(X=i) \
&= sum_(i=1)^n i dot 1/n \
&= 1/n sum_(i=1)^n i \
&= 1/cancel(n) dot (cancel(n)(n+1))/2 = (n+1)/2 $
]
#nota[
$ E[X^2] = ((n+1)(2n+1))/6 $
]
/ Varianza:
$ "Var"(X) = mr((n^2 - 1)/12) $
#dimostrazione[
$ "Var"(X) &= E[X^2] - E[X]^2 \
&= ((n+1)(2n+1))/6 - ((n+1)/2)^2 \
&= (n+1)((2n+1)/6 - (n+1)/4) \
&= (n+1)((4n + 2 - 3n - 3) / 12) \
&= (n+1)((n-1)/12) = (n^2 - 1)/12 $
]
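Le tre identità usate sopra ($E[X]$, $E[X^2]$ e $"Var"(X)$) si possono verificare esattamente per somma diretta con aritmetica razionale ($n$ è un valore di esempio):

```python
from fractions import Fraction

# Modello uniforme discreto U(n): verifica esatta di
# E[X] = (n+1)/2, E[X^2] = (n+1)(2n+1)/6, Var(X) = (n^2 - 1)/12.
n = 12
attesa = sum(Fraction(i, n) for i in range(1, n + 1))
momento2 = sum(Fraction(i * i, n) for i in range(1, n + 1))
varianza = momento2 - attesa ** 2

assert attesa == Fraction(n + 1, 2)
assert momento2 == Fraction((n + 1) * (2 * n + 1), 6)
assert varianza == Fraction(n * n - 1, 12)
```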
==== Modello geometrico $X tilde G(p)$ <geometrico>
La variabile assume il *numero di insuccessi consecutivi* prima che si verifichi un successo in una serie di #link(<bernoulliano>)[esperimenti Bernoulliani] *indipendenti* e *identicamente distribuiti* (i.i.d.) di parametro $p in (0, 1]$. Il supporto del modello è $D_X = {0, ..., +infinity}$.
$ mr(X tilde G(p)) $
/ Funzione di massa:
$ p_X (x) = P(X = x) = mr(p(1-p)^x I_{0, ..., +infinity} (x)) $
#informalmente[
La funzione di massa equivale a calcolare le probabilità che accadano $x$ _insuccessi_, quindi $(1-p)^x$ a cui succede un _successo_ $dot p$, ottenendo $p dot (1-p)^x$
]
#figure(caption: [Funzione di massa modello geometrico], image("imgs/geometrico-massa.png", width: 40%))
/ Funzione di ripartizione:
$ F_X (x) = P(X <= x) = mr((1 - (1-p)^(floor(x)+1)) I_[0, +infinity) (x)) $
#nota[
$ F_X (x) = F_X (floor(x)) $
]
#figure(caption: [Funzione di ripartizione modello geometrico], image("imgs/geometrico-ripartizione.png", width: 40%))
#dimostrazione[
Possiamo calcolare la funzione di ripartizione $P(X <= x)$ come $1 - P(X > x)$:
$ mb(P(X > x)) &= sum_(i = x+1)^(+infinity) p_X (i) \
&= sum_(i = x+1)^(+infinity) p(1-p)^i \
&= p sum_(i = x+1)^(+infinity) (1-p)^i \
&= p sum_(i = x+1)^(+infinity) (1-p)^i dot mp(((1-p)^(x+1))/((1-p)^(x+1))) \
&= p mp((1-p)^(x+1)) sum_(i = x+1)^(+infinity) ((1-p)^i)/mp((1-p)^(x+1)) \
&= p(1-p)^(x+1) sum_(i=mg(x+1))^(+infinity) (1-p)^(i- mp((x+1))) \
&= p(1-p)^(x+1) sum_(i=mg(0))^(+infinity) (1-p)^i \
&= p(1-p)^(x+1) dot 1/(cancel(1)-(cancel(1)-p)) \
&= cancel(p)(1-p)^(x+1) dot 1/cancel(p) = mb((1-p)^(x+1)) $
Quindi: $ P(X <= x) = 1 - mb(P(X > x)) = 1 - mb((1-p)^(x+1)) $
]
/ Valore atteso:
$ E[X] = mr((1-p)/p) $
#dimostrazione[
Usando la definizione:
$ E[X] &= sum_(i=0)^(+infinity) i dot p_X (i) \
&= sum_(i=0)^(+infinity) i p (1-p)^i \
&= p dot mb(sum_(i=0)^(+infinity) i (1-p)^i) \
&= cancel(p) dot mb((1-p)/p^cancel(2)) = (1-p)/p $
#nota[
Nel punto #text(blue)[evidenziato di blu] abbiamo usato il lemma:
$ limits(sum)_(i=0)^(+infinity) i alpha^i = alpha/(1-alpha)^2 quad "per " |alpha| < 1 $
]
]
/ Varianza:
$ "Var"(X) = mr((1-p)/p^2) $
/ Assenza di memoria:
Il numero di _fallimenti consecutivi_ (anche se elevato) durante la ripetizione dell'#link(<bernoulliano>)[esperimento Bernoulliano] non ci fornisce _nessuna informazione_ sugli esperimenti successivi.
Quindi la probabilità di costante insuccesso dell'$i + j$-esimo esperimento non è condizionata dalla probabilità di costante insuccesso dell'$i$-esimo:
$ P(X >= x + y | X >= x) = P(X >= y) $
#dimostrazione[
$ P(X >= x + y | X >= x) &= P(X >= x + y, X >= x) / P(X >= x) \
&= P(X >= x + y) / P(X >= x) \
&= (1-p)^(cancel(floor(x)) + floor(y))/cancel((1-p)^floor(x)) \
&= (1-p)^(floor(y)) = P(X >= y) $
]
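Funzione di ripartizione e assenza di memoria si possono controllare numericamente (il valore di $p$ è di esempio), usando la sopravvivenza $P(X >= x) = (1-p)^x$ per $x$ intero:

```python
# Modello geometrico G(p): verifica di F(x) = 1 - (1-p)^(x+1)
# e dell'assenza di memoria P(X >= x+y | X >= x) = P(X >= y).
p = 0.25

def pmf(x): return p * (1 - p)**x
def sopravvivenza(x): return (1 - p)**x       # P(X >= x), x intero

F5 = sum(pmf(i) for i in range(6))            # P(X <= 5) per somma diretta
assert abs(F5 - (1 - (1 - p)**6)) < 1e-12

x, y = 4, 3
cond = sopravvivenza(x + y) / sopravvivenza(x)  # P(X >= x+y | X >= x)
assert abs(cond - sopravvivenza(y)) < 1e-12
```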
==== Modello di Poisson $X tilde P(lambda)$ <poisson>
La variabile assume il *numero di eventi* che si verificano in un dato intervallo di tempo _(ignoto, non contribuisce al modello)_, sapendo che *mediamente* se ne verificano un numero $lambda in (0, +infinity)$. Tutti gli eventi sono *indipendenti*.
Il modello ha supporto $D_X = {0, ..., +infinity}$.
$ mr(X tilde P(lambda)) $
/ Funzione di massa:
$ p_X (x) = P(X = x)= mr(e^(-lambda) dot (lambda^x)/x! I_{0, ..., +infinity} (x)) $
#figure(caption: [Funzione di massa modello di Poisson], image("imgs/poisson-massa.png", width: 40%))
/ Valore atteso:
$ E[X] = mr(lambda) $
#dimostrazione[
Usando la definizione:
$ E[X] &= sum_(i=0)^(+infinity) i dot p_X (i) \
&= sum_(i=1)^(+infinity) i e^(-lambda) (lambda^i)/i! \
&= e^(-lambda) sum_(i=1)^(+infinity) i (lambda^i)/i! \
&= e^(-lambda) sum_(i=1)^(+infinity) cancel(i) (lambda^i)/(cancel(i)(i-1)!) \
&= e^(-lambda) sum_(i=1)^(+infinity) (lambda (lambda^(i-1)))/(i-1)! \
&= lambda e^(-lambda) sum_(i=1)^(+infinity) (lambda^(i-1))/(i-1)! $
Poniamo $j = i-1$ e applichiamo lo sviluppo in serie di Taylor:
$ &= lambda e^(-lambda) sum_(j=0)^(+infinity) (lambda^j)/(j!) \
&= lambda e^(-lambda) e^lambda = lambda $
#nota[
Dal secondo passaggio in poi l'indice della sommatoria $i$ parte da 1 e non da 0. Questo perché moltiplicando tutto per $i$ se $i = 0$ il contributo alla sommatoria sarà nullo
]
]
/ Varianza:
$ "Var"(X) = mr(lambda) $
#dimostrazione[
$ E[X^2] = sum_(i=1)^(+infinity) i^2 e^(-lambda) (lambda^i)/i! = mb(lambda^2 + lambda) $
$ "Var"(X) = mb(E[X^2]) - E[X]^2 = mb(lambda^2 + lambda) - (lambda)^2 = lambda $
]
/ Approssimazione #link(<binomiale>)[binomiale]:
È possibile usare una variabile aleatoria di Poisson per approssimare una variabile aleatoria #link(<binomiale>)[binomiale] di parametri $(n, p)$, quando $n$ è molto grande e $p$ molto piccolo.
$ X tilde B(n, p) approx X tilde P(n dot p) $
#dimostrazione[
Poniamo $lambda = n p$.
$ P(X = i) &= binom(n, i) p^i (1-p)^(n-i) \
&= binom(n, i)(lambda/n)^i (1- lambda/n)^(n-i) \
&= (n(n-1)...(n-i+1)cancel((n-i)!))/(cancel((n-i)!) dot i!) dot lambda^i / n^i (1-lambda/n)^(n-i) \
&= (n(n-1)...(n-i+1))/n^i dot lambda^i / i! dot (1-lambda/n)^(n-i) \
&= mo(n/n dot (n-1)/n dot ... dot (n-i+1)/n) dot lambda^i / i! dot (1-lambda/n)^(mb(n)-mp(i)) $
Dato che $n -> +infinity$ allora $mb((1-lambda/n)^n) approx e^(-lambda)$, $mp((1-lambda/n)^i) approx 1$, $mo(n/n dot (n-1)/n dot ... dot (n-i+1)/n) approx 1 $
Allora:
$ &= mo(1) dot lambda^i/i! dot mb(e^(-lambda)) dot mp(1) \
&= P(X = i) approx lambda^i/i! e^(-lambda) $
]
/ Riproducibilità:
Date due variabili aleatorie $X_1 tilde P(lambda_1)$ e $X_2 tilde P(lambda_2)$ indipendenti, allora: $ X_1 + X_2 tilde P(lambda_1 + lambda_2) $
Se le due variabili aleatorie sono anche identicamente distribuite allora: $ X_1 + X_2 tilde P(2lambda) $
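L'approssimazione della binomiale con la Poisson si apprezza bene numericamente: con $n$ grande e $p$ piccolo (valori di esempio) le due funzioni di massa quasi coincidono.

```python
import math

# Approssimazione di B(n, p) con P(lambda = n*p), per n grande e p piccolo.
n, p = 1000, 0.003
lam = n * p                                   # lambda = 3

def binom_pmf(i):
    return math.comb(n, i) * p**i * (1 - p)**(n - i)

def poisson_pmf(i):
    return math.exp(-lam) * lam**i / math.factorial(i)

# massimo scarto tra le due masse sui primi valori del supporto
scarto = max(abs(binom_pmf(i) - poisson_pmf(i)) for i in range(20))
assert scarto < 1e-3
```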
==== Modello ipergeometrico $X tilde H(n, M, N)$ <ipergeometrico>
La variabile assume il numero di *oggetti corretti* estratti da un'urna di oggetti binari _(corretti o scorretti)_ durante un'*estrazione senza reimmissione* dopo $n$ estrazioni. Il supporto del modello è ${max(0, n-M), ..., min(n, N)}$ e i parametri sono:
- $n in bb(N)^+$: numero di estrazioni
- $N in bb(N)$: numero di oggetti _"corretti"_ nell'urna
- $M in bb(N)$: numero di oggetti _"errati"_ nell'urna (con $n <= N + M$)
$ mr(X tilde H(n, M, N)) $
#nota[
In caso ci fossero reimmissioni è possibile usare il #link(<binomiale>)[modello binomiale]
]
#informalmente[
Il #text(blue)[minor numero] di oggetti _corretti_ estraibili è: $max(0, "estrazioni" - "errati") = mb(max(0, n - M))$.
Il #text(purple)[massimo numero] di oggetti _corretti_ estraibili è: $min("estrazioni", "corretti") = mp(min(n, N))$.
Di conseguenza, il supporto del modello è: $ {mb(max(0, n-M)), ..., mp(min(n, N))} $
]
#nota[
È possibile indicare il modello anche come $X tilde I(n, M, N)$
]
/ Funzione di massa:
$ p_X (x) = P(X = x) = mr((binom(N, x) binom(M, n-x)) / binom(N+M, n) I_{0, ..., n} (x)) $
#figure(caption: [Funzione di massa modello ipergeometrico], image("imgs/ipergeometrica-massa.png", width: 40%))
#informalmente[
- $binom(N, x)$ rappresenta il numero di modi in cui è possibile estrarre $x$ oggetti _corretti_ su $N$ _corretti_ totali
- $binom(M, n-x)$ rappresenta il numero di modi in cui è possibile estrarre $n-x$ (quindi i rimanenti) oggetti _errati_ su $M$ _errati_ totali
- $binom(N+M, n)$ rappresenta il _totale_ delle possibili estrazioni di $n$ oggetti dal totale $N+M$ degli oggetti
]
#dimostrazione[
Per dimostrare la funzione di massa ci basiamo sul calcolo delle disposizioni semplici di $T$ _oggetti_ su $w$ _posti_.
#nota[
Stiamo usando nuovi nomi per le variabili, che andremo a riconvertire alla fine:
- Oggetti totali: $T = N+M$
- Posti totali (estrazioni): $w = n$
- Oggetti corretti: $C = N$
- Oggetti errati: $T-C = M$
- Posti oggetti corretti: $g = x$
- Posti oggetti errati: $w-g = n-x$
]
Dato che siamo in uno #link(<spazio-probabilita>)[spazio equiprobabile], vale la legge: $ P(E) = (|E|) / mo(|Omega|) $
Dove l'insieme degli eventi possibili è:
$ mo(|Omega|) = T! / (T-w)! $
E tutte le possibili sequenze di $w$ oggetti di cui $g$ corretti:
$ |E| = mb(o_1) dot mp(o_2) dot mg(o_3) $
Dove:
- $o_1$ sono tutti i modi di scegliere le $g$ posizioni in cui compare un oggetto corretto
$ mb(o_1) = binom(w, g) $
- $o_2$ è il numero di scelte possibili per gli oggetti corretti da destinare alle $g$ posizioni senza che siano reimmessi
$ mp(o_2) = C! / (C-g)! $
- $o_3$ è il numero di scelte possibili per gli oggetti errati da destinare alle posizioni rimanenti
$ mg(o_3) = (T-C)! / ((T-C) - (w-g))! $
Quindi:
$ P(E) = (|E|) / mo(|Omega|) &= mb(binom(w, g)) dot mp(C! / (C-g)!) dot mg((T-C)! / ((T-C) - (w-g))!) dot mo((T-w)! / T!) \
&= (binom(C, g) binom(T-C, w-g)) / binom(T, w) = (binom(N, x) binom(M, n-x)) / binom(N+M, n) $
]
/ Valore atteso:
$ E[X] = mr((n dot N)/(N+M)) $
#dimostrazione[
Scomponiamo la variabile in $n$ variabili #link(<bernoulliano>)[aleatorie bernoulliane] _NON indipendenti_ definite:
$ X_i = cases(1 quad "oggetto corretto", 0 quad "oggetto errato") $
Essendo bernoulliane, vale:
$ P(X_i = 1) = N / (N+M) = E[X_i] = p $
Quindi:
$ E[X] &= E[sum_(i=1)^n X_i] \
&= sum_(i=1)^n E[X_i] \
&= n dot p \
&= n N/(N+M) $
]
/ Varianza:
$ "Var"(X) &= mr((n (N + M - n) N M) / ((N + M)^2 (N + M - 1))) $
#nota[
Per $N+M -> infinity$ il modello si semplifica in un #link(<binomiale>)[modello binomiale]
]
#dimostrazione[
Scomponiamo la variabile in $n$ variabili #link(<bernoulliano>)[aleatorie bernoulliane] _NON indipendenti_ come per il valore atteso.
$ "Var"(X_i) &= E[X_i] (1-E[X_i]) \
&= (N/(N+M)) (1 - N / (N+M)) \
&= N / (N+M) dot M / (N+M) \
&= mo((N M) / (N+M)^2) $
Non essendo indipendenti, è necessario usare la #link(<multivariate-varianza>)[formula generale] della somma di variabili aleatorie:
$ mp("Var"(sum_(i=1)^n X_i)) = mg(sum_(i=1)^n) mo("Var"(X_i)) + mg(sum_(i=1)^n sum_(j != i)) mb("Cov"(X_i, X_j)) $
Sappiamo che:
$ mb("Cov"(X_i, X_j)) &= E[X_i dot X_j] - E[X_i] dot E[X_j] \
&= mr(- (N M) / ((N + M - 1)(N+M)^2)) $
Quindi:
$ mp("Var"(X)) &= mg(n) mo((N M)/(N+M)^2) - mg(n(n-1)) dot mr((N M) / ((N+M-1)(N+M)^2)) \
&= (n N M) / (N + M)^2 (1 - (n-1)/(N+M-1)) \
&= n (N/(N+M)) (M/(N+M)) (1- (n-1)/(N+M-1)) \
&= n p (1-p) (1 - (n-1)/(N+M-1)) \
&= (n (N + M - n) N M) / ((N + M)^2 (N + M - 1)) $
]
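La funzione di massa e il valore atteso $E[X] = n N \/ (N+M)$ si verificano per somma diretta sul supporto ${max(0, n-M), ..., min(n, N)}$ (i parametri sono valori di esempio):

```python
import math

# Modello ipergeometrico H(n, M, N): N corretti, M errati, n estrazioni
# senza reimmissione.
N, M, n = 7, 5, 4

def pmf(x):
    return math.comb(N, x) * math.comb(M, n - x) / math.comb(N + M, n)

supporto = range(max(0, n - M), min(n, N) + 1)
massa = {x: pmf(x) for x in supporto}
attesa = sum(x * m for x, m in massa.items())

assert abs(sum(massa.values()) - 1) < 1e-12   # massa totale 1 (Vandermonde)
assert abs(attesa - n * N / (N + M)) < 1e-12  # E[X] = nN/(N+M)
```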
=== Modelli di distribuzione continui
Alcune _distribuzioni/modelli_ di variabili aleatorie sono molto _frequenti_, di conseguenza esistono dei risultati notevoli.
==== Modello uniforme continuo $X tilde U(a,b)$ <uniforme-continuo>
La variabile assume un valore nell'*intervallo* $[a,b]$, di conseguenza il supporto è $D_X = [a, b]$. Tutti gli esiti della variabile aleatoria continua sono *equiprobabili*.
$ mr(X tilde U(a,b)) $
/ Funzione di densità di probabilità:
$ f_X (x) = mr(1/(b-a) I_[a,b] (x)) $
#attenzione[
La funzione di _densità_ *non* è la _probabilità_ nel punto, ma è l'_ordinata_ della funzione in quel punto, che poi deve essere _integrata_ in un intervallo per ottenere la probabilità
$ P(X = x) != f(x), quad P(X = x) = 0 $
$ P(a <= X <= b) = integral_a^b f(x) dif x $
]
#figure(caption: [Funzione di densità modello uniforme continuo], image("imgs/uniforme-continuo-densita.png", width: 40%))
#dimostrazione[
Sappiamo che la $limits(integral)_(-infinity)^(+infinity) f_X (x) dif x$ deve essere uguale a $1$, quindi possiamo ricavare la funzione di densità $f_X$, indicando con l'incognita $alpha$ il suo valore (costante) su $[a,b]$:
$ 1 &= integral_(a)^(b) alpha dif x \
&= alpha integral_(a)^(b) 1 dif x \
&= alpha[x]^b_a \
&= alpha (b-a) ==> alpha = 1/(b-a) $
Possiamo quindi ricavare $f_X$: $ f_X (x) = 1/(b-a) $
]
/ Funzione di ripartizione:
$ F_X (x) = P(X <= x) = mr((x-a)/(b-a) I_[a,b] (x) + I_((b,+infinity)) (x)) $
#figure(caption: [Funzione di ripartizione modello uniforme continuo], image("imgs/uniforme-continuo-ripartizione.png", width: 40%))
#dimostrazione[
$ F_X (x) &= integral_(a)^(x) f_(X)(y) dif y \
&= integral_(a)^(x) 1 / (b-a) dif y \
&= 1 / (b-a) integral_(a)^(x) 1 dif y \
&= 1 / (b-a) [y]^x_a = (x-a)/(b-a) $
]
/ Valore atteso:
$ E[X] = mr((b + a) / 2) $
#dimostrazione[
$ E[X] &= integral_a^b f_X (x) dif x \
&= integral_a^b x dot 1/(b-a) dif x \
&= 1/(b-a) integral_a^b x dif x \
&= 1/(b-a) dot [x^2 / 2]_a^b \
&= 1/(b-a) dot (b^2 - a^2)/2 \
&= 1/cancel(b-a) dot ((a+b)cancel((b-a)))/2 \
&= (a+b) / 2 $
]
#nota[
$ E[X^2] = (a^2 + a b + b^2) / 3 $
]
/ Varianza:
$ "Var"(X) = mr(((b-a)^2)/12) $
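Le formule di $E[X]$ e $"Var"(X)$ si possono controllare numericamente approssimando gli integrali con una somma di Riemann (punto medio); $a$ e $b$ sono valori di esempio:

```python
# Modello uniforme continuo U(a, b): verifica numerica di
# E[X] = (a+b)/2 e Var(X) = (b-a)^2/12 con somma di Riemann.
a, b = 2.0, 5.0
N = 200_000
dx = (b - a) / N
dens = 1.0 / (b - a)              # densita' costante su [a, b]

attesa = m2 = 0.0
for k in range(N):
    x = a + (k + 0.5) * dx        # punto medio dell'intervallino
    attesa += x * dens * dx       # contributo a E[X]
    m2 += x * x * dens * dx       # contributo a E[X^2]
varianza = m2 - attesa ** 2

assert abs(attesa - (a + b) / 2) < 1e-8
assert abs(varianza - (b - a) ** 2 / 12) < 1e-6
```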
==== Modello esponenziale $X tilde E(lambda)$ <esponenziale>
La variabile assume il *tempo di attesa* tra due eventi, che mediamente accadono ogni $lambda in (0, +infinity)$ unità di tempo. La probabilità che questa attesa si _allunghi_ decresce in maniera _esponenziale_. Il supporto è $D_X = [0, +infinity)$.
$ mr(X tilde E(lambda)) $
/ Funzione di densità:
$ f_X (x) = mr(lambda e^(-lambda x) I_[0, +infinity) (x)) $
#attenzione[
La funzione di _densità_ *non* è la _probabilità_ nel punto, ma è l'_ordinata_ della funzione in quel punto, che poi deve essere _integrata_ in un intervallo per ottenere la probabilità
$ P(X = x) != f(x), quad P(X = x) = 0 $
$ P(a <= X <= b) = integral_a^b f(x) dif x $
]
#figure(caption: [Funzione di densità modello esponenziale], image("imgs/esponenziale-densita.png", width: 40%))
/ Funzione di ripartizione:
$ F_X (x) = mr((1 - e^(-lambda x)) I_[0, +infinity) (x)) $
#dimostrazione[
$ F_X (x) &= integral_0^x f_X (u) dif u \
&= integral_0^x lambda e^(-lambda u) dif u $
Risolvo per sostituzione, con $z = lambda u$, quindi $u = z/lambda$ e $dif u = 1/lambda dif z$:
$ &= integral_0^(lambda x) cancel(lambda) e^(-z) 1/cancel(lambda) dif z \
&= [-e^(-z)]_0^(lambda x) \
&= -e^(-lambda x) + e^0 = 1 - e^(-lambda x) $
]
#figure(caption: [Funzione di ripartizione modello esponenziale], image("imgs/esponenziale-ripartizione.png", width: 40%))
/ Valore atteso:
$ E[X] = mr(1/lambda) $
#dimostrazione[
$ E[X] &= integral_0^(+infinity) x f_X (x) dif x $
Risolvo per parti:
$ &= [-x e^(-lambda x)]_0^(+infinity) + integral_0^(+infinity) e^(-lambda x) dif x \
&= integral_0^(+infinity) e^(-lambda x) dif x \
&= 1/lambda dot underbrace(integral_0^(+infinity) lambda e^(-lambda x) dif x, "per definizione di densità" =1) = 1/lambda $
]
#nota[
$ E[X^2] = 2/lambda^2 $
]
/ Varianza:
$ "Var"(X) = mr(1/lambda^2) $
#dimostrazione[
$ "Var"(X) = E[X^2] - E[X]^2 = 2/lambda^2 - 1/lambda^2 = 1/lambda^2 $
]
/ Assenza di memoria:
Unica distribuzione continua con questa proprietà:
$ P(X >= s + t | X >= s) = P(X >= t) $
#dimostrazione[
$ P(X >= s + t | X >= s) &= P(X >= t) \
P(X >= s + t, X >= s)/P(X >= s) &= P(X >= t) \
P(X >= s + t)/P(X >= s) &= P(X >= t) \
P(X >= s + t) &= P(X >= t) dot P(X >= s) $
Ricordiamo che $P(X>= x) = 1- F_X (x) = e^(-lambda x)$
Quindi:
$ e^(-lambda (s+t)) = e^(-lambda t) dot e^(-lambda s) $
Che non è altro che l'identità, come volevasi dimostrare
]
/ Scalatura:
Sia $X tilde E(lambda)$ e $Y = a X$ (scalatura con $a > 0$), allora $ Y tilde E(lambda/a) $
#dimostrazione[
$ F_Y (x) &= P(Y <= x) \
&= P(a X <= x) \
&= P(X <= x/a) $
$ F_X (x/a) &= 1 - e^(-lambda x/a) \
&= 1 - e^(-lambda/a x) $
Sostituiamo $lambda' = lambda/a$, per cui:
$ F_Y (x) = 1-e^(-lambda' x) ==> Y tilde E(lambda') = E(lambda/a) $
]
/ Proprietà su massimo e minimo:
Siano $X_1, ..., X_n$ variabili aleatorie indipendenti e sia $Y$ il massimo di esse $Y = max_i X_i$, allora:
$ F_Y (x) = product_(i=1)^n F_(X_i) (x) $
Nel caso le variabili aleatorie siano _indipendenti identicamente distribuite_, allora:
$ F_Y (x) = F_X (x)^n $
Siano $X_1, ..., X_n$ variabili aleatorie indipendenti e sia $Z$ il minimo di esse $Z = min_i X_i$, allora:
$ F_Z (x) = 1 - product_(i=1)^n (1 - F_(X_i) (x)) $
Nel caso le variabili aleatorie siano _indipendenti identicamente distribuite_, allora:
$ F_Z (x) = 1- (1-F_X (x))^n $
Siano $X_1, ..., X_n$ variabili aleatorie indipendenti e sia $Z = min_i X_i$. Se $forall i, X_i tilde E(lambda_i)$, allora:
$ Z tilde E(sum_(i=1)^n lambda_i) $
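Assenza di memoria e proprietà del minimo si possono controllare, la prima in forma analitica sulla sopravvivenza $e^(-lambda x)$ e la seconda per simulazione (i valori dei $lambda_i$ sono di esempio):

```python
import math, random

# Esponenziale E(lambda): assenza di memoria e minimo di esponenziali.
lam = 0.5
S = lambda x: math.exp(-lam * x)              # P(X >= x)
s, t = 1.2, 0.7
assert abs(S(s + t) / S(s) - S(t)) < 1e-12    # P(X>=s+t | X>=s) = P(X>=t)

random.seed(0)
lambdas = [0.5, 1.0, 1.5]                     # Z = min(X_i) ~ E(3.0)
campione = [min(random.expovariate(l) for l in lambdas)
            for _ in range(200_000)]
media = sum(campione) / len(campione)
assert abs(media - 1 / sum(lambdas)) < 0.01   # E[Z] = 1 / (somma lambda_i)
```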
==== Modello Gaussiano (o normale) $X tilde N(mu, sigma)$ <gaussiano>
Facendo *tante osservazioni* dello *stesso fenomeno*, misureremo molte volte valori vicini alla media, mentre sempre meno volte valori lontani da essa. Graficando le probabilità otteniamo la classica forma a *campana*.
Formalizzando questo concetto in una variabile aleatoria, otteniamo una distribuzione normale o Gaussiana. I parametri $mu$ e $sigma$ descrivono la forma della campana, essendo la _media_ (quindi il centro) e la _deviazione standard_ (quindi la larghezza).
#attenzione[
Questo modello non ha funzione indicatrice, è definito su tutto $bb(R)$
]
$ mr(X tilde N(mu, sigma)) $
/ Funzione di densità:
$ f_X (x) = mr(1/(sigma sqrt(2 pi)) dot e^(-(x-mu)^2 / (2 sigma^2))) $
#attenzione[
La funzione di _densità_ *non* è la _probabilità_ nel punto, ma è l'_ordinata_ della funzione in quel punto, che poi deve essere _integrata_ in un intervallo per ottenere la probabilità
$ P(X = x) != f(x), quad P(X = x) = 0 $
$ P(a <= X <= b) = integral_a^b f(x) dif x $
]
/ Funzione di ripartizione:
#attenzione[
Non è nota una forma analitica
]
$ F_X (x) = P(X <= x) = mr(integral_(-infinity)^x 1 / (sigma sqrt(2 pi)) dot e^((- (u - mu)^2)/(2 sigma^2)) dif u) $
#nota[
Data una qualsiasi variabile aleatoria $X tilde N(mu, sigma)$, è possibile trovare la funzione di ripartizione #link(<normale-standard>)[standard]:
$ F_X (x) = Phi((x-mu)/sigma) $
#dimostrazione[
$ F_X (x) &= P(X <= x) \
&= P((X-mu)/sigma <= (x-mu)/sigma) \
&= P(Z <= (x - mu)/sigma) \
&= F_Z ((x-mu)/sigma) = Phi((x-mu)/sigma) $
]
]
/ Valore atteso:
$ E[X] = mr(mu) $
/ Varianza:
$ "Var"(X) = mr(sigma^2) $
/ Riproducibilità:
Date le variabili aleatorie $X_1, ... X_n$ gaussiane e indipendenti, dove $forall i, X_i tilde N(mu_i, sigma_i)$, allora la loro somma $Y = limits(sum)_(i=1)^n X_i$ è ancora gaussiana:
$ Y tilde N(sum_(i=1)^n mu_i, sqrt(sum_(i=1)^n sigma^2_i)) $
#nota[
Questa proprietà è condivisa dal modello #link(<binomiale>)[binomiale] e dal modello di #link(<poisson>)[Poisson]
]
/ Eventi simmetrici:
Una qualsiasi variabile che segue la distribuzione normale ha una curva _simmetrica_ rispetto al centro ($mu$), di conseguenza:
$ F_X (mu - x) = 1 - F_X (mu + x) $
In particolare, per la normale standard ($mu = 0$): $Phi(-x) = 1 - Phi(x)$
#figure(caption: [Le due parti non evidenziate della curva sono identiche, quindi $F(-1) = 1 - F(1)$], image("imgs/normale-simmetria.png", width: 40%))
===== Distribuzione normale standard <normale-standard>
Siano $X tilde N(mu, sigma)$ e $Y = a X + b$ con $a,b in bb(R), a != 0$, allora:
$ Y tilde N(a mu + b, |a| sigma) $
Data una qualsiasi variabile aleatoria di distribuzione normale, possiamo calcolare la sua *standardizzazione*, indicata con $Z$:
$ Z = (X - mu)/sigma, quad Z tilde N(0,1) $
#dimostrazione[
Valore atteso:
$ E[Z] &= 1/sigma E[X] - mu/sigma \
&= 1/sigma (E[X] - mu) \
&= 1/sigma (mu - mu) = 0 $
Varianza:
$ "Var"(Z) &= 1/sigma^2 "Var"(X) \
&= 1/cancel(sigma^2) cancel("Var"(X)) = 1 $
]
/ Funzione di ripartizione $Phi$:
La *funzione di densità* e la *funzione di ripartizione* di una variabile aleatoria normale standard si indicano con $phi_Z (x)$ e $Phi_Z (x)$:
$ Phi_Z (x) = mr(1/sqrt(2 pi) integral_(-infinity)^x e^(-(u^2)/2) dif u) $
#informalmente[
Portando una curva normale in una curva standard, _perdiamo informazione_ (la curva standard è appunto standard, sempre uguale). Per non perdere informazioni, viene applicata la _stessa trasformazione_ direttamente al parametro della funzione $Phi$ nota della variabile standardizzata:
$ F_X (x) = Phi_Z ((x-mu)/sigma) $
La dimostrazione non è altro che applicare la stessa trasformazione da entrambe le parti della funzione di probabilità, in modo da ricondurci alla funzione nota $F_Z$:
$ P(X <= x) = P((X-mu)/sigma <= (x-mu)/sigma) $
]
#nota[
Si può usare la stessa tecnica per calcolare la probabilità tra due insiemi:
$ P(a <= Z <= b) = Phi(b) - Phi(a) $
]
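In pratica $Phi$ si può calcolare tramite la funzione di errore: $Phi(z) = (1 + "erf"(z\/sqrt(2)))\/2$. Un esempio numerico di standardizzazione ($mu$ e $sigma$ sono valori di esempio):

```python
import math

# Ripartizione della normale standard via funzione di errore.
def Phi(z):
    return (1 + math.erf(z / math.sqrt(2))) / 2

mu, sigma = 10, 2
assert abs(Phi(0) - 0.5) < 1e-12               # Phi(0) = 1/2
assert abs(Phi(-1.5) - (1 - Phi(1.5))) < 1e-12 # Phi(-z) = 1 - Phi(z)

# P(8 <= X <= 12) per X ~ N(10, 2): si standardizza e si usa Phi
p = Phi((12 - mu) / sigma) - Phi((8 - mu) / sigma)
assert abs(p - 0.6827) < 1e-3                  # regola "1 sigma" ~ 68.27%
```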
=== Teorema centrale del limite <teorema-centrale-limite>
Siano $X_1, ... X_n$ variabili aleatorie _indipendenti identicamente distribuite_, ovvero $forall i, E[X_i] = mu, "Var"(X_i) = sigma^2$.
Allora, per $n -> +infinity$ le variabili sono distribuite in modo *approssimativamente normale*:
$ sum_(i=1)^n X_i tilde.dot N(n dot mu, sigma sqrt(n)) $
#attenzione[
In caso volessimo stimare una media campionaria $overline(X)$ <tcl-media-campionaria>, allora otteniamo:
$ overline(X) tilde.dot N(mu, sigma/sqrt(n)) $
#dimostrazione[
Sappiamo che:
$ overline(X) = 1/mb(n) sum_(i=1)^n X_i $
Quindi per linearità:
$ overline(X) &tilde.dot N((cancel(n) mu)/mb(cancel(n)), (sigma sqrt(n))/mb(n)) \
overline(X) &tilde.dot N(mu, sigma / sqrt(n)) $
]
]
#attenzione[
In caso di _scalatura/traslazione_ della somma approssimata $S = limits(sum)_(i=1)^n X_i$, l'_approssimazione_ diventa:
$ S &tilde.dot N(n mu, sigma sqrt(n)) \
mb(a) S + mp(b) &tilde.dot N(mb(a) (n mu) + mp(b), mb(|a|) sigma sqrt(n)) $
]
Questa distribuzione si può standardizzare:
$ ((limits(sum)_(i=1)^n X_i) - n mu) / (sigma sqrt(n)) tilde.dot N(0,1) $
Quindi per $n$ grande e $x$ qualsiasi, vale l'approssimazione:
$ P(((limits(sum)_(i=1)^n X_i) - n mu) / (sigma sqrt(n)) < x) approx Phi(x) $
#informalmente[
Quando abbiamo una *generica* variabile aleatoria $X$ che segue una _distribuzione ignota_, allora possiamo vederla come la *somma di tante osservazioni* $X_1, ..., X_n$, ognuna con $E[X_i] = E[X]$ e $"Var"(X_i) = "Var"(X)$. Ogni osservazione è una variabile aleatoria *indipendente e identicamente distribuita*.
Facendo tante osservazioni, grazie al teorema centrale del limite, possiamo approssimare la somma $sum_i X_i$ come una distribuzione normale
#attenzione[
Non stiamo approssimando la _variabile_ aleatoria $X$, ma la _somma_ di tante sue osservazioni $sum_i X_i$
]
]
#nota[
È possibile approssimare una variabile #link(<binomiale>)[binomiale] $X tilde B(n, p)$ con $n$ grande utilizzando il teorema:
$ X = sum_(i=1)^n X_i tilde.dot N(n p, sqrt(n p (1-p))) \
(x - n p)/sqrt(n p (1-p)) tilde.dot N(0,1) $
]
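Una simulazione illustra il teorema (la scelta di variabili $U(0,1)$ e dei valori di $n$ è di esempio): la somma di $n$ uniformi, con $mu = 1\/2$ e $sigma^2 = 1\/12$, si comporta come una $N(n\/2, sqrt(n\/12))$.

```python
import math, random

# Teorema centrale del limite: somma di n variabili U(0,1) i.i.d.
random.seed(1)
n, prove = 48, 100_000
somme = [sum(random.random() for _ in range(n)) for _ in range(prove)]

mu, sigma = n / 2, math.sqrt(n / 12)           # parametri della normale limite
# frazione empirica delle somme entro una deviazione standard dalla media
dentro = sum(1 for s in somme if abs(s - mu) <= sigma) / prove
assert abs(dentro - 0.6827) < 0.01             # ~68.27% come per la normale
```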
= Statistica inferenziale <inferenziale>
La statistica inferenziale vuole _analizzare_ e _trarre risultati_ da campioni selezionati da _grandi popolazioni_. Viene ipotizzato che i valori numerici del campione seguano dei _modelli_, quindi vengono trattati come *variabili aleatorie* di una distribuzione non conosciuta $F$.
/ Inferenza: processo di induzione con cui si cerca di _formare delle ipotesi_, ovvero a partire da specifiche manifestazioni si cerca di determinare la verità generale
/ Popolazione: grande insieme di oggetti descritti da una variabile aleatoria $X tilde F$
/ Campione: _sottoinsieme_ della popolazione usato per studiare le leggi. Si estraggono i campioni in modo casuale (per questo motivo si assume indipendenza). Viene descritto come una successione di variabili aleatorie $X_1, ... X_n$ _indipendenti identicamente distribuite_, con $n$ grandezza del campione
La statistica inferenziale permette di capire quale distribuzione descriva le osservazioni, attraverso due metodologie:
- *Statistica inferenziale non parametrica* _(non trattata nel corso)_: la distribuzione $F$ è completamente sconosciuta
- *Statistica inferenziale parametrica*: della distribuzione $F$ è conosciuta solo la famiglia, mentre sono sconosciuti i parametri. Indichiamo la distribuzione con $F(theta)$, dove $theta$ è il parametro sconosciuto di cui stimiamo il valore:
- *stima puntuale*: forniamo un numero molto vicino a $theta$
- *stima per intervalli*: forniamo un intervallo di valori in cui ricade $theta$
#nota[
Spesso ci interessa un valore che _dipende_ dal parametro ignoto $theta$, non $theta$ stesso. Questo valore lo indichiamo come $tau(theta)$
]
/ Statistica / Stimatore: funzione $t : D_X^n -> bb(R)$ che dati dei valori, stima il valore che ci interessa: $ t(x_1, ..., x_n) = hat(tau) approx tau(theta) $
#nota[
Dato che il campionamento è casuale, diversi campioni $(x_1, ..., x_n)$ della stessa popolazione danno $hat(tau)$ diversi, per cui $hat(tau) approx tau(theta)$
]
/ Variabile aleatoria per stimatore: indichiamo con $T$ la variabile aleatoria che indica una stima, ricevendo delle variabili aleatorie (non più dei valori):
$ T = t(X_1, ..., X_n) = hat(tau) approx tau(theta) $
== Stimatori non deviati <stimatore>
Uno stimatore $T$ è *non deviato* (*non distorto* o *corretto*) per $tau(theta)$ se e solo se:
$ E[t(X_1, ..., X_n)] = tau(theta) $
#nota[
$E[t(X_1, ..., X_n)]$ descrive la *centralità* della stima, quindi per essere corretta deve posizionarsi attorno a $tau(theta)$
]
#attenzione[
Quando due stimatori sono *non* deviati, allora si possono confrontare attraverso la loro *varianza*
]
=== Media campionaria
Un ottimo stimatore *non deviato* per stimare il *valore atteso*, indipendentemente dalla distribuzione, è quello della media:
$ t(x_1, ..., x_n) = overline(X) = 1/n sum_(i=1)^n x_i quad quad tau(theta) = E[X] $
#dimostrazione[
$ E[T] &= E[1/n sum_(i=1)^n X_i] \
&= 1/n E[sum_(i=1)^n X_i] \
&= 1/n sum_(i=1)^n E[X_i] = 1/cancel(n) dot cancel(n) E[X] = E[X] $
]
Abbiamo dimostrato che lo stimatore non è deviato. Per valutarne la bontà ne calcoliamo la varianza:
$ "Var"(overline(X)) = "Var"(1/n sum_(i=1)^n X_i) = "Var"(X)/n $
Quindi per $n -> +infinity$ la varianza tende a 0: per campioni più grandi il risultato è più preciso.
#informalmente[
La varianza di uno stimatore non distorto descrive le *"oscillazioni"* intorno a $tau(theta)$
]
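Non distorsione e riduzione della varianza si verificano esattamente per enumerazione (il dado equo con campione di taglia $n = 2$ è una scelta di esempio):

```python
from fractions import Fraction
from itertools import product

# Media campionaria su un dado equo: E[Xbar] = E[X] e Var(Xbar) = Var(X)/n.
esiti = range(1, 7)
mu = Fraction(7, 2)                                   # E[X]
var = sum((Fraction(x) - mu) ** 2 for x in esiti) / 6 # Var(X) = 35/12

n = 2
# tutte le 36 coppie equiprobabili e le rispettive medie campionarie
medie = [Fraction(a + b, n) for a, b in product(esiti, repeat=n)]
e_media = sum(medie) / len(medie)
var_media = sum((m - e_media) ** 2 for m in medie) / len(medie)

assert e_media == mu          # stimatore non deviato
assert var_media == var / n   # Var(Xbar) = Var(X)/n
```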
== Bias e Scarto quadratico medio (MSE)
Il *bias* è una misura di quanto uno stimatore tende a _sottostimare_ o _sovrastimare_ $tau(theta)$, è definito come:
$ b_tau(theta) (T) = E[T] - tau(theta) $
#attenzione[
Il bias di uno _stimatore_ *non* distorto è $0$
]
L'errore quadratico medio (MSE) è un'altra misura dell'errore che uno stimatore compie stimando $tau(theta)$. È definito come:
$ "MSE"_tau(theta) (T) = E[(T - tau(theta))^2] $
$ "MSE"_tau(theta) (T) = "Var"(T) + b_tau(theta) (T)^2 $
#dimostrazione[
$ "MSE"_tau(theta) &= E[(T - tau(theta))^2] \
&= E[(mb(T - E[T]) + mp(E[T] - tau(theta)))^2] \
&= E[mb((T-E[T])^2) + 2 mb((T- E[T])) mp((E[T] - tau(theta))) + mp((E[T] - tau(theta))^2)] \
&= mo(E[(T - E[T])^2]) + cancel(2(E[T] - tau(theta))mr(E[T-E[T]])) + mg((E[T] - tau(theta))^2) $
Sappiamo che $mr(E[X - E[X]]) = 0$, allora:
$ mo(E[(T - E[T])^2]) + mg((E[T] - tau(theta))^2) =\
mo("Var"(T)) + mg(underbrace((E[T] - tau(theta))^2, "bias")) $
]
#informalmente[
Lo scarto quadratico medio e il bias sono due metodi per calcolare la _bontà_ di uno _stimatore_.
Se immaginiamo la quantità da stimare come un bersaglio, possiamo vedere:
- il _bias_ come la distanza tra centro del bersaglio e il punto in cui "mira" lo stimatore
- lo _scarto quadratico medio_, la precisione dei "colpi" attorno al punto "mirato"
]
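La decomposizione $"MSE" = "Var"(T) + "bias"^2$ si verifica esattamente per enumerazione; qui usiamo uno stimatore volutamente distorto, $T = 0.9 dot overline(X)$ per $mu$ su un dado equo (scelta di esempio):

```python
from fractions import Fraction
from itertools import product

# MSE = Var(T) + bias^2, con T = (9/10) * Xbar su campioni di taglia 2.
esiti = range(1, 7)
mu = Fraction(7, 2)          # quantita' da stimare
c = Fraction(9, 10)          # fattore che rende T distorto

valori = [c * Fraction(a + b, 2) for a, b in product(esiti, repeat=2)]
e_T = sum(valori) / len(valori)
var_T = sum((t - e_T) ** 2 for t in valori) / len(valori)
bias = e_T - mu                                        # E[T] - tau(theta)
mse = sum((t - mu) ** 2 for t in valori) / len(valori) # E[(T - tau)^2]

assert mse == var_T + bias ** 2   # decomposizione esatta
```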
== Consistenza in media quadratica
La consistenza in media quadratica implica che non solo lo stimatore è *corretto in media*, ma anche che le sue *variazioni* attorno al valore stimato $tau(theta)$ diminuiscono con l'aumentare del campione ${X_1, ..., X_n}$, quindi uno stimatore gode della proprietà di *consistenza in media quadratica* se:
$ lim_(n->infinity) "MSE"_tau(theta) (T_n) = 0 $
#informalmente[
Se all'aumentare delle osservazioni, l'_errore diminuisce_ sempre di più (fino a tendere a $0$), allora lo stimatore è consistente in media quadratica
]
/ Consistenza debole: uno stimatore è debolmente consistente rispetto a $tau(theta)$ se
$ forall epsilon > 0, quad lim_(n -> +infinity) P(tau(theta) - epsilon <= T <= tau(theta) + epsilon) = 1 $
#nota[
La consistenza in media quadratica _implica_ la _consistenza debole_
#dimostrazione[
$ P(-epsilon <= T - tau(theta) <= epsilon) &= P(|T - tau(theta)| <= epsilon) \
&= P((T - tau(theta))^2 <= epsilon^2) \
&= 1 - P((T - tau(theta))^2 > epsilon^2) $
Per la #link(<disuguaglianza-markov>)[disuguaglianza di Markov]:
$ = 1 - P((T - tau(theta))^2 > epsilon^2) &>= 1 - (E[(T - tau(theta))^2])/epsilon^2 \
&>= 1 - mb("MSE"_tau(theta) (T))/epsilon^2 $
Dato che $limits(lim)_(n -> infinity) mb("MSE"_tau(theta) (T)) = 0$, allora:
$ 1 - mb(0) = 1 $
]
]
== Legge dei grandi numeri
#informalmente[
Date $n$ osservazioni $X_1, ..., X_n$ con $n -> infinity$, allora la media campionaria $overline(X)_n$ converge al valore atteso di ogni $X_i$: $ lim_(n->+infinity) overline(X)_n = E[X_i] = mu $
]
=== Legge forte dei grandi numeri:
Data una media campionaria $overline(X)_n$ su $n$ elementi, se $n -> +infinity$ allora la probabilità che essa stimi $E[X] = mu$ vale 1:
$ P(lim_(n->+infinity) overline(X)_n = mu) = 1 $
#nota[
Quindi per campioni di grandezza infinita, la _media campionaria_ non è una _variabile aleatoria_ ma una _costante_
]
=== Legge debole dei grandi numeri:
Data una media campionaria $overline(X)_n$ su $n$ elementi, se $n -> +infinity$ allora la probabilità che essa stimi $E[X] = mu$ con un errore maggiore di un $epsilon$ fissato vale 0:
$ lim_(n->+infinity) P(abs(overline(X)_n - mu) > epsilon) = 0 $
#informalmente[
Questo risultato è semplicemente "l'inverso" della _legge forte dei grandi numeri_, se la stima è sempre corretta $P(...) = 1$, allora l'errore non è mai superiore $P(...) = 0$ ad $epsilon$
]
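A minimal simulation sketch (assuming $X tilde U(0,1)$, so $mu = 0.5$; seed and sample sizes are arbitrary) shows the sample mean settling near $mu$ as $n$ grows:

```python
import random

random.seed(0)

def sample_mean(n):
    # mean of n draws from U(0, 1), whose expected value is 0.5
    return sum(random.random() for _ in range(n)) / n

# the error shrinks (in probability) as n grows
errors = {n: abs(sample_mean(n) - 0.5) for n in (100, 100_000)}
print(errors)
```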
== Minimum sample size
Given a random variable $X$, we want the minimum size $n$ of a sample such that, with very high probability, the value of $overline(X)$ is very close to the expected value $mu$:
$ P(|overline(X)_n - mu| <= epsilon) >= 1 - delta $
=== Via the central limit theorem:
We first compute the probability, then compare it with $1 - delta$:
$ P(abs(overline(X)- mu) <= epsilon) &= P(-epsilon <= overline(X) - mu <= epsilon) $
To compute this _probability_, we want to reduce it to a _known distribution_ and exploit its _cumulative distribution function_. Since the sample mean is a sum, we can use the #link(<teorema-centrale-limite>)[central limit theorem] to obtain a #link(<gaussiano>)[normal distribution], #link(<normale-standard>)[standardize it] and use $Phi$ to compute the probability.
So, as shown #link(<tcl-media-campionaria>)[earlier], we know that:
$ overline(X) tilde.dot N(mo(mu), mp(sigma/sqrt(n))) $
We standardize $overline(X)$ inside the probability:
$ P(-epsilon/(sigma/sqrt(n)) <= mb((overline(X) - mo(mu)) / mp(sigma/sqrt(n))) <= epsilon/(sigma/sqrt(n))) &= P((-epsilon sqrt(n))/sigma <= mb(Z) <= (epsilon sqrt(n))/sigma)\
&= Phi((epsilon sqrt(n))/sigma) - Phi(-(epsilon sqrt(n))/sigma)\
&= Phi((epsilon sqrt(n))/sigma) - (1 - Phi((epsilon sqrt(n))/sigma))\
&= Phi((epsilon sqrt(n))/sigma) - 1 + Phi((epsilon sqrt(n))/sigma)\
&= 2 Phi((epsilon sqrt(n))/sigma) - 1 $
Comparing it with $1 - delta$:
$ 2 Phi((epsilon sqrt(n))/sigma) - 1 >= 1 - delta \
Phi((epsilon sqrt(n))/sigma) >= 1 - delta/2 \
(epsilon sqrt(n))/sigma >= Phi^(-1)(1 - delta/2) \
n >= sigma^2/epsilon^2 dot (Phi^(-1)(1 - delta/2))^2 $
#nota[
  $Phi^(-1)(x)$ is the inverse of the cumulative distribution function of the #link(<normale-standard>)[standard normal], i.e. the quantile of level $x$.
  Here $Phi^(-1)(1- delta/2) = "quantile of level" 1 - delta/2$
]
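The bound above is easy to evaluate numerically; a small sketch using only the standard library (the values $sigma = 2$, $epsilon = 0.5$, $delta = 0.05$ are arbitrary illustrations):

```python
import math
from statistics import NormalDist

def min_sample_size_clt(sigma, eps, delta):
    # n >= (sigma/eps)^2 * (Phi^{-1}(1 - delta/2))^2, rounded up to an integer
    z = NormalDist().inv_cdf(1 - delta / 2)
    return math.ceil((sigma / eps) ** 2 * z ** 2)

print(min_sample_size_clt(2, 0.5, 0.05))
```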
=== Via Chebyshev's inequality:
Given a discrete or continuous random variable $X$ with $E[X] = mu$ and $"Var"(X) = sigma^2$, by the #link(<chebyshev>)[Chebyshev inequality]:
$ forall epsilon > 0, quad P(abs(X - mu) >= epsilon) <= (sigma^2)/(epsilon^2) $
Instead of a generic variable $X$, we use the sample mean $overline(X)$, replacing $"Var"(X) = sigma^2$ with $"Var"(overline(X)) = (sigma^2)/n$:
$ forall epsilon > 0, quad mb(P(abs(overline(X) - mu) >= epsilon)) <= (sigma^2)/(n epsilon^2) $
We can rewrite the #text(red)[probability we are after] to match it:
$ mr(P(abs(overline(X) - mu) < epsilon)) &= 1 - mb(P(abs(overline(X) - mu) >= epsilon)) \
1 - mb(P(abs(overline(X) - mu) >= epsilon)) &>= mg(1 - (sigma^2)/(n epsilon^2)) >= 1 - delta $
#attenzione[
  Taking the complement also flips the inequality sign and takes the #text(olive)[complement of the bound]
]
Hence:
$ 1 - (sigma^2)/(n epsilon^2) >= 1 - delta \
(sigma^2)/(n epsilon^2) <= delta \
n >= sigma^2 / (delta epsilon^2) $
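A sketch of this bound with the same kind of illustrative values ($sigma = 2$, $epsilon = 0.5$, $delta = 0.05$, all arbitrary); being distribution-free, Chebyshev's bound is far more conservative than the CLT-based one:

```python
import math

def min_sample_size_chebyshev(sigma, eps, delta):
    # n >= sigma^2 / (delta * eps^2), rounded up to an integer
    return math.ceil(sigma ** 2 / (delta * eps ** 2))

print(min_sample_size_chebyshev(2, 0.5, 0.05))
```

With these values the Chebyshev bound asks for 320 observations, while the CLT-based bound of the previous section asks for about 62.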
== Plug-in method
The plug-in (or substitution) method lets us "build" an _estimator_ starting from a _known_ one or from an _estimate of one of its parameters_.
#nota[
  We did not cover this method in detail
]
#informalmente[
  Example: population = $X tilde B(m,p)$ where only $m$ is known, and we must find $p$:
  Since $E[overline(X)] = E[X] = m p$
  $ p = E[overline(X)] dot 1/m $
  By linearity of expectation:
  $ p = E[overline(X)] dot 1/m = E[overline(X) dot 1/m] $
  So $T = 1/m dot overline(X)$ is an unbiased estimator of $p$
]
#informalmente[
  Example: we have the estimator $S = overline(X)$ with expected value $E[S] = a/2 + 4$
  We want an unbiased estimator $T$ for $a$, so:
  $ T = 2S - 8 quad E[T] = a $
]
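A quick simulation sketch of the first example (the parameters $m = 10$, $p = 0.3$, the seed, and the sample size are arbitrary) confirms that the plug-in estimator $T = overline(X)\/m$ lands close to $p$:

```python
import random

random.seed(1)
m, p = 10, 0.3    # illustrative parameters for X ~ B(m, p)
N = 20_000        # number of observations in the sample

def binomial_draw():
    # a B(m, p) draw as the sum of m Bernoulli(p) trials
    return sum(random.random() < p for _ in range(m))

xbar = sum(binomial_draw() for _ in range(N)) / N
p_hat = xbar / m  # plug-in estimator T = (1/m) * sample mean
print(p_hat)
```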
== Maximum likelihood method
// TODO: maximum likelihood method
== Poisson process
// TODO: Poisson process
#pagebreak()
// appendix numbering
#set heading(numbering: "A.1.")
#counter(heading).update(0)
= Random variables and models cheatsheet <modelli>
== Random variables
=== Properties of the expected value
- the _expected value_ of an indicator function equals the _probability of the event_:
$ E[I_A] = P(A) $
- the _expected value_ of a discrete random variable $X$ is _linear_:
$ Y = a dot X + b quad quad E[Y] = a dot E[X] + b $
- for any real _function_ $g$ and a random variable $X$ with mass function $p$:
$ E[g(X)] = sum_i g(x_i) dot p(x_i) $
$ E[X^2] = sum_i x_i^2 dot p(x_i) $
$ E[ |X| ] = sum_i |x_i| dot p(x_i) $
- for any real _function_ $g$ of two variables and two discrete random variables $X, Y$: $ E[g(X, Y)] = sum_x sum_y g(x,y) dot p(x,y) $
- the expected value of a _sum_ of discrete random variables is:
$ E[sum_i X_i] = sum_i E[X_i] $
- the expected value of a _product_ of independent discrete random variables is:
$ E[product_i X_i] = product_i E[X_i] $
=== Properties of the variance
- the variance of an indicator function is the probability of the _event_ times the probability of the _complementary event_
$ "Var"(I) = P(A) dot P(overline(A)) $
- the variance is not linear:
$ "Var"(a X + b) = a^2 "Var"(X) $
- the variance of the sum of two random variables $X$ and $Y$ is:
$ "Var"(X + Y) = "Var"(X) + "Var"(Y) + 2 "Cov"(X, Y) $
$ "Var"(X - Y) = "Var"(X) + "Var"(Y) - 2"Cov"(X,Y) $
#attenzione[
  If the variables are _independent_, the covariance is $0$
]
- the variance of the sum of several random variables is:
$ "Var"(sum_i^n X_i) = sum_i^n "Var"(X_i) + sum_i^n sum_(j, j != i)^n "Cov"(X_i, X_j) $
- the variance of the sample mean (for i.i.d. $X_i$) is: $ "Var"(overline(X)) = "Var"(1/n sum_(i=1)^n X_i) = 1/n^2 "Var"(sum_(i=1)^n X_i) = 1/n^2 dot n "Var"(X) = "Var"(X)/n $
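A small simulation sketch (assuming $X tilde U(0,1)$, so $"Var"(X) = 1\/12$; seed, $n$ and repetition count are arbitrary) checks $"Var"(overline(X)) = "Var"(X)\/n$ empirically:

```python
import random
import statistics

random.seed(7)
n, reps = 25, 4_000

# empirical variance of the mean of n U(0, 1) draws, over many repetitions
means = [statistics.fmean(random.random() for _ in range(n)) for _ in range(reps)]
empirical = statistics.pvariance(means)
theoretical = (1 / 12) / n  # Var(U(0, 1)) = 1/12, divided by n
print(empirical, theoretical)
```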
=== Properties of the covariance
- covariance is symmetric:
$ "Cov"(X, Y) = "Cov"(Y, X) $
- the covariance of a random variable with itself equals the variance:
$ "Cov"(X, X) = "Var"(X) $
- it is linear:
$ "Cov"(a X, Y) = "Cov"(X, a Y) = a "Cov"(X, Y) $
$ "Cov"(X + Y, Z) = "Cov"(X, Z) + "Cov"(Y, Z) $
== Models
- #link(<bernoulliano>)[Bernoulli model]: $X tilde B(p)$
  #grid(columns: 2, row-gutter: 15pt, column-gutter: 5pt, align: horizon + left,
    [- Mass:], [$p^x (1-p)^((1-x)) I_{0, 1} (x)$],
    [- CDF:], [$(1-p) I_[0,1)(x) + I_([1, +infinity)) (x)$],
    [- Expected value:], [$p$],
    [- Variance:], [$p(1-p)$]
  )
- #link(<binomiale>)[Binomial model]: $X tilde B(n, p)$
  #grid(columns: 2, row-gutter: 15pt, column-gutter: 5pt, align: horizon + left,
    [- Mass:], [$binom(n, x) p^x (1-p)^((n-x)) I_{0, ..., n} (x)$],
    [- CDF:], [$limits(sum)_(i=0)^floor(x) binom(n, i) p^i (1 - p)^((n-i)) I_[0, n] (x) + I_((n, +infinity)) (x)$],
    [- Expected value:], [$n p$],
    [- Variance:], [$n p(1-p)$],
    [- Properties:], [reproducibility]
  )
- #link(<uniforme-discreto>)[Discrete uniform model]: $X tilde U(n)$
  #grid(columns: 2, row-gutter: 15pt, column-gutter: 5pt, align: horizon + left,
    [- Mass:], [$1/n I_{1, ..., n} (x)$],
    [- CDF:], [$floor(x)/n I_[1, n] (x) + I_((n, +infinity)) (x)$],
    [- Expected value:], [$(n+1)/2$],
    [- Variance:], [$(n^2 - 1)/12$]
  )
- #link(<geometrico>)[Geometric model]: $X tilde G(p)$
  #grid(columns: 2, row-gutter: 15pt, column-gutter: 5pt, align: horizon + left,
    [- Mass:], [$p(1-p)^x I_{0, ..., +infinity} (x)$],
    [- CDF:], [$(1-(1-p) ^ (floor(x) + 1)) I_[0, +infinity) (x)$],
    [- Expected value:], [$(1-p)/p$],
    [- Variance:], [$(1-p)/p^2$],
    [- Properties:], [memorylessness]
  )
- #link(<poisson>)[Poisson model]: $X tilde P(lambda)$
  #grid(columns: 2, row-gutter: 15pt, column-gutter: 5pt, align: horizon + left,
    [- Mass:], [$e^(-lambda) dot (lambda^x)/(x!) I_{0, ..., +infinity} (x)$],
    [- CDF:], [_not covered in the course_],
    [- Expected value:], [$lambda$],
    [- Variance:], [$lambda$],
    [- Properties:], [binomial approximation, reproducibility]
  )
- #link(<ipergeometrico>)[Hypergeometric model]: $X tilde H(n, M, N)$
  #grid(columns: 2, row-gutter: 15pt, column-gutter: 5pt, align: horizon + left,
    [- Mass:], [$(binom(N, x) binom(M, n-x)) / binom(N + M, n) I_{0, ..., n} (x)$],
    [- CDF:], [_not covered in the course_],
    [- Expected value:], [$(n dot N) / (N + M)$],
    [- Variance:], [$(n(N + M - n)N M) / ((N+M)^2 (N + M - 1))$]
  )
- #link(<uniforme-continuo>)[Continuous uniform model]: $X tilde U(a,b)$
  #grid(columns: 2, row-gutter: 15pt, column-gutter: 5pt, align: horizon + left,
    [- Density:], [$1 / (b-a) I_[a,b] (x)$],
    [- CDF:], [$(x-a) / (b-a) I_[a,b] (x) + I_((b, +infinity)) (x)$],
    [- Expected value:], [$(b+a) / 2$],
    [- Variance:], [$(b-a)^2 / 12$]
  )
- #link(<esponenziale>)[Exponential model]: $X tilde E(lambda)$
  #grid(columns: 2, row-gutter: 15pt, column-gutter: 5pt, align: horizon + left,
    [- Density:], [$lambda e^(-lambda x) I_[0, +infinity) (x)$],
    [- CDF:], [$(1 - e^(-lambda x)) I_[0, +infinity) (x)$],
    [- Expected value:], [$1/lambda$],
    [- Variance:], [$1/lambda^2$],
    [- Properties:], [memorylessness, scaling, max/min properties]
  )
- #link(<gaussiano>)[Gaussian model]: $X tilde G(mu, sigma)$
  #grid(columns: 2, row-gutter: 15pt, column-gutter: 5pt, align: horizon + left,
    [- Density:], [$1 / (sigma sqrt(2 pi)) e^(-(x-mu)^2 / (2 sigma^2))$],
    [- CDF:], [$limits(integral)_(-infinity)^(x) 1 / (sigma sqrt(2 pi)) e^(-(x-mu)^2 / (2 sigma^2)) dif x$],
    [- Expected value:], [$mu$],
    [- Variance:], [$sigma^2$],
    [- Properties:], [standardization, reproducibility]
  )
#figure(caption: "Shapes of the model distributions")[
  #image("imgs/forma-modelli.png", width: 70%)
]
= Python cheatsheet <python>
/ Combinatorics\: utilities:
```python
from math import factorial as fact
from scipy.special import binom
# factorial
fact(5) # 120
# binomial coefficient
binom(5, 2) # 10
```
/ Combinatorics\: combinations/arrangements/permutations: count only
```python
# N = size of the set to draw from
# k = number of objects to draw
from scipy.special import comb, perm
# combinations, with or without repetition (order does not matter)
comb(N, k, repetition=False)
# arrangements (or permutations when k = N) WITHOUT repetition (order matters)
perm(N, k)
```
/ Combinatorics\: combinations/arrangements/permutations: actually enumerated
```python
from itertools import product, permutations, combinations, combinations_with_replacement
# Cartesian product of sets
product(set1, set2, ..., repeat=1)
product("ABC", "123", "xy") # ['A1x', 'A1y', 'A2x', 'A2y', 'A3x', 'A3y', 'B1x', 'B1y', 'B2x', 'B2y', 'B3x', 'B3y', 'C1x', 'C1y', 'C2x', 'C2y', 'C3x', 'C3y']
product([0,1], repeat=3) # ['000', '001', '010', '011', '100', '101', '110', '111']
# arrangements (or permutations when n=None or n = len(set1)) WITHOUT repetition (order matters)
permutations(set1, n=None)
permutations("ABC") # ['ABC', 'ACB', 'BAC', 'BCA', 'CAB', 'CBA']
permutations("ABC", 2) # ['AB', 'AC', 'BA', 'BC', 'CA', 'CB']
# if the iterable contains duplicates, this method does NOT return the permutations of distinguishable objects in groups
permutations("ABA") # wrong: ['ABA', 'AAB', 'BAA', 'BAA', 'AAB', 'ABA']
# combinations WITHOUT repetition
combinations(set1, n)
combinations("ABC", 2) # ['AB', 'AC', 'BC']
# combinations WITH repetition
combinations_with_replacement(set1, n)
combinations_with_replacement("ABC", 2) # ['AA', 'AB', 'AC', 'BB', 'BC', 'CC']
```
/ Libraries:
```python
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
import scipy.stats as st
import statsmodels.api
import sklearn
import itertools
```
/ Reading data:
```python
data = pd.read_csv('file_path/fileName.csv', delimiter=',', decimal=',', quotechar='"')
```
To get the number of rows in the _dataset_:
```python
len(data)
```
To list each attribute together with its number of missing values in the _dataset_:
```python
valMancanti = {col : len(data[col]) - len(data[col].dropna()) for col in data.columns}
dataF = pd.DataFrame(index = valMancanti.keys(), data = valMancanti.values(), columns = ['mancanti'])
```
Alternatively, the _*isna().sum()*_ methods achieve the same result:
```python
valMancanti = {col : data[col].isna().sum() for col in data.columns}
```
Absolute frequencies of a *qualitative* attribute in tabular form:
```python
data['attributo'].value_counts()
```
Absolute frequencies of a *qualitative* attribute as a chart:
```python
data['attributo'].value_counts().plot.bar()
```
Relative frequencies:
```python
data['attributo'].value_counts(normalize = True)
```
Absolute frequencies of a *quantitative* attribute in tabular form:
```python
data['attributo'].value_counts(bins = 20)
```
/ Central tendency measures:
Mode of an attribute:
```python
data['attributo'].mode()
```
Sample mean of an attribute:
```python
data['attributo'].mean()
```
Median of an attribute:
```python
data['attributo'].median()
```
/ Dispersion measures:
Sample variance:
```python
data['attributo'].var()
```
Sample standard deviation:
```python
data['attributo'].std()
```
/ Quantiles:
Quantile of a given level:
```python
data['attributo'].quantile(0.25) # here it returns the first quartile
```
/ Heterogeneity indices:
Gini index between 2 distinct values:
```python
def gini_2_val(f):
    return 1 - f**2 - (1-f)**2
x = np.arange(0, 1.01, .01) # start, stop, step
y = list(map(gini_2_val, x)) # materialize the map so pyplot can plot it
plt.plot(x, y)
plt.ylim((0, 0.55))
plt.show()
```
Gini index:
```python
def gini(series):
    return 1 - sum(series.value_counts(normalize=True).map(lambda f: f**2))
# import the libraries
# assign the dataset observations to 'data'
gini(data['attributo'])
```
Entropy between 2 distinct values:
```python
def entropia_2_val(f):
    return 0 if f in (0, 1) else - f * np.log2(f) - (1-f) * np.log2(1-f)
x = np.arange(0, 1.01, .01)
y = list(map(entropia_2_val, x)) # materialize the map so pyplot can plot it
plt.plot(x, y)
plt.ylim((0, 1.1))
plt.show()
```
Entropy:
```python
def entropia(series):
    return sum(series.value_counts(normalize=True).map(lambda f: -f * np.log2(f)))
# import the libraries
# assign the dataset observations to 'data'
entropia(data['attributo'])
```
/ Pareto chart:
```python
from paretochart import pareto
freqAttributo = data['attributo'].value_counts()/len(data)
attributoComune = freqAttributo[freqAttributo > .02].index
attributoComune = (data['attributo'][data['attributo'].isin(attributoComune)])
attributoComune_freq = attributoComune.value_counts()/len(attributoComune)
pareto(attributoComune_freq, labels=attributoComune_freq.index)
plt.show()
```
/ Useful plotting functions:
```python
plt.grid() # show the grid
np.linspace(start, stop, samples) # often worth increasing the number of samples (1000)
plt.axvline(0.5) # vertical line at x=0.5
plt.axhline(0.5) # horizontal line at y=0.5
```
/ Useful parameters for `plt` functions:
- `color="red"` or `colors=["red", "blue"]` (depending on the chart): color the line/fill
- `linestyle="dashed"` or `linestyles="dashed"`: line style
/ Inspecting dataset elements:
Row with the maximum value:
```python
dato = data[data['attributo'] == max(data['attributo'])]
```
Row with the minimum value:
```python
dato = data[data['attributo'] == min(data['attributo'])]
```
All values an attribute can take:
```python
list(data['attributo'].unique())
```
Type and strength of the correlation between 2 attributes:
```python
data['attributo 1'].corr(data['attributo 2'])
```
This line of code returns an index value that supports the following reading:
1. If the value is close to $1$, a *_direct linear correlation_* between the 2 attributes is likely.
2. If the value is close to $0$, a linear correlation between the 2 attributes is *_unlikely_*.
3. If the value is close to $-1$, an *_inverse linear correlation_* between the 2 attributes is likely.
/ Joint tables:
Joint relative frequency table:
```python
pd.crosstab(data['attributo 1'], data['attributo 2'], normalize = True)
```
Joint absolute frequency table:
```python
pd.crosstab(data['attributo 1'], data['attributo 2'])
```
= Math cheatsheet <matematica>
== Summations
Properties of summations:
- only the first element after a summation is its _"body"_:
$ sum_(i=1)^3 a + b = 3a + b $
$ sum_(i=1)^3 (a + b) = 3a + 3b $
- sum of the first $n$ numbers:
$ sum_(i=1)^n i = (n(n+1))/2 $
- associativity and dissociativity:
$ sum_(k=n)^m f(k) + sum_(k=n)^m g(k) = sum_(k=n)^m (f(k) + g(k)) $
- distributivity:
$ a dot sum_(k=n)^m f(k) = sum_(k=n)^m (a dot f(k)) $
- splitting the index range:
$ sum_(k=n)^(m_2) f(k) = sum_(k=n)^(m_1) f(k) + sum_(k=m_1+1)^(m_2) f(k) $
- "bringing in" additions:
$ (sum_(i=1)^n a) + b = sum_(i=1)^n (a + b/n) $
== Derivatives
The derivative is a _linear operator_:
$ (a dot f(x) + b dot g(x))' = a dot f'(x) + b dot g'(x) $
=== Basic derivatives
#align(center)[
#table(
columns: (30%, 30%),
inset: 5pt,
align: center + horizon,
table.cell(fill: silver, [$ f(x) $]),
table.cell(fill: silver, [$ f'(x) $]),
[$ n in bb(R) $], [$ 0 $],
[$ x $], [$ 1 $],
[$ x^a $], [$ a x^(a-1) $],
[$ e^x $], [$ e^x $],
[$ a^x $], [$ a^x ln(a) $],
[$ ln(x) $], [$ 1/x, x > 0 $],
[$ ln|x| $], [$ 1/x, x != 0 $],
[$ log_a (x) $], [$ 1/(x ln(a)), x > 0 $],
[$ |x| $], [$ (|x|) / x, x != 0 $],
[$ sin(x) $], [$ cos(x) $],
[$ cos(x) $], [$ -sin(x) $],
[$ tan(x) $], [$ 1/(cos^2(x)), x != pi/2 + k pi $],
[$ cot(x) $], [$ - 1 / (sin^2(x)), x != k pi $],
[$ arcsin(x) $], [$ 1 / sqrt(1-x^2), x in [-1,1] $],
[$ arccos(x) $], [$ - 1 / sqrt(1-x^2), x in [-1,1] $],
[$ arctan(x) $], [$ 1/(1+x^2) $]
)
]
=== "Advanced" derivatives
#align(center)[
#table(
columns: (30%, 30%),
inset: 5pt,
align: horizon,
table.cell(fill: silver, [$ f(x) $]),
table.cell(fill: silver, [$ f'(x) $]),
[$ f(g(x)) $], [$ f'(g(x)) dot g'(x) $],
[$ f(x) dot g(x) $], [$ f'(x) dot g(x) + f(x) dot g'(x) $],
[$ f(x)/g(x) $], [$ (f'(x) dot g(x) - f(x) dot g'(x))/(g^2(x)) $]
)
]
== Integrals
=== Basic integrals
#align(center)[
#table(
columns: (30%, 30%),
inset: 5pt,
align: horizon,
table.cell(fill: silver, [$ f(x) $]),
table.cell(fill: silver, [$ integral f(x) dif x $]),
[$ integral 1 dif x $], [$ x + c $],
[$ integral a dif x $], [$ a x + c $],
[$ integral x^n dif x $], [$ (x^(n+1))/(n+1) + c $],
[$ integral 1/x dif x $], [$ ln|x| + c $],
[$ integral e^x dif x $], [$ e^x + c $],
[$ integral a^x dif x $], [$ (a^x)/(ln(a)) + c $],
[$ integral sin(x) dif x $], [$ -cos(x) + c $],
[$ integral cos(x) dif x $], [$ sin(x) + c $],
[$ integral 1/(cos^2(x)) dif x $], [$ tan(x) + c $],
[$ integral 1/(1+x^2) dif x $], [$ arctan(x) + c $],
[$ integral 1/(sqrt(1-x^2)) dif x $], [$ arcsin(x) + c $],
)
]
=== "Advanced" integrals
Tricks for solving "composite" integrals (informal term):
- check whether the derivative $f'(x)$ of a function $f(x)$ appears as a factor in the more complex part of the integrand:
$ integral (f'(x)) / (f(x)) dif x = ln(|f(x)|) + c $
$ integral (f'(x))/(1 + f(x)^2) dif x = arctan(f(x)) + c $
- always try to bring additions to the numerator, so the integral can be split
$ integral a + b dif x = integral a dif x + integral b dif x $
$ integral x/(sqrt(x+1)) dif x = integral (t-1)/(sqrt(t)) dif t "(substitution)" $
=== Integration by substitution
- substitute a chosen quantity with $t$
- substitute all remaining occurrences of $x$, computing their new value
- substitute $dif x$, computing its new value by solving the equation $dif t = D(t) dif x$
- compute the integral in $t$
- bring the solution back to $x$ by substituting $t$ back
$ integral mr(sqrt(x)) / (1 + x) mb(dif x) quad quad t = mr(sqrt(x)) quad quad x = t^2 \
dif t = D(sqrt(x)) mb(dif x) quad quad dif t = 1/(2mr(sqrt(x))) mb(dif x) quad quad mb(dif x) = 2mr(sqrt(x)) dif t quad quad mb(dif x) = 2mr(t) dif t \
= integral t / (1 + t^2) mb(2 t dif t) = ... = 2 mr(t) - 2 arctan(mr(t)) + c = 2 mr(sqrt(x)) - 2 arctan(mr(sqrt(x))) + c $
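The result of the substitution example can be checked numerically; a small sketch compares the antiderivative $2 sqrt(x) - 2 arctan(sqrt(x))$ against a midpoint Riemann sum over the arbitrary interval $[1, 4]$:

```python
import math

def F(x):
    # antiderivative found by substitution: 2*sqrt(x) - 2*arctan(sqrt(x))
    return 2 * math.sqrt(x) - 2 * math.atan(math.sqrt(x))

def midpoint_integral(f, a, b, steps=100_000):
    # midpoint Riemann sum with `steps` subintervals
    h = (b - a) / steps
    return sum(f(a + (k + 0.5) * h) for k in range(steps)) * h

a, b = 1.0, 4.0
numeric = midpoint_integral(lambda x: math.sqrt(x) / (1 + x), a, b)
exact = F(b) - F(a)
print(numeric, exact)
```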
=== Integration by parts
- bring the integral into the form:
$ integral f(x) dot g'(x) dif x $
- choose a function $f$ to differentiate and a function $g'$ to integrate
- compute the derivative $f'$ and the antiderivative $g$
- apply the formula:
$ f(x) dot g(x) - integral f'(x) dot g(x) dif x $
= Exercises <esercizi>
- Prove / check whether the function $f(x)$ is a valid mass/density
  - summation / integral = 1
  - the second unknown (the one not integrated over) can be treated as a constant, hence moved outside the summation/integral
  - always nonnegative $>= 0$ (to be proved, not graphically)
- prove that an estimator is biased for $tau(theta)$: $E[T] != tau(theta)$
- compute the bias: $E[T] - tau(theta)$
- compute the MSE: $"Var"(T) + "bias"^2$
  - warning: when the estimator is e.g. the sample mean $overline(X)$, computing the variance as $E[T^2] - E[T]^2 = E[overline(X)^2] - E[overline(X)]^2$ is self-defeating; it is far better to use $"Var"(T) = "Var"(overline(X)) = "Var"(X) / n$ directly, where $n$ is the sample size
- does it satisfy mean-square consistency? $lim_(n -> infinity) "MSE" = 0$
- plug-in method: the estimator is biased and must be "straightened":
  - $E[T] = a/2 + 6 != a$
  - solve for $a$: from $E[T] = a/2 + 6$, the estimator $2T - 12$ is unbiased
- probability of the event that the absolute error made when using $T$ to estimate $a$ is at most $epsilon$:
  - the error is the estimator minus its expected value: $T - E[T]$ (since the estimator is unbiased, the expected value $E[T]$ equals $a$)
  - hence $P(-epsilon < T - E[T] < epsilon)$
  - usually the expected value is rewritten as $mu$ (computed from how the estimator $T$ relates to the sample mean $overline(X)$)
  - the absolute value is removed
  - the normal approximating $T$ is obtained via the central limit theorem
  - the normal is standardized
  - $Phi$ can then be used for the probability
|
|
https://github.com/piepert/grape-suite | https://raw.githubusercontent.com/piepert/grape-suite/main/examples/essay01.typ | typst | MIT License | #import "/src/library.typ": exercise
#import exercise: project, task, subtask, todo
#show: project.with(
title: "Lorem ipsum dolor sit",
university: [University],
institute: [Institute],
seminar: [Seminar],
abstract: lorem(100),
show-outline: true,
author: "<NAME>",
show-solutions: false
)
= Introduction
#lorem(100)
#lorem(100)
= Long heading: #lorem(20)
= Main Part
#lorem(100)
#lorem(100)
== Subheading 1
#lorem(200)
== Subheading 2
#lorem(100)
== #lorem(20)
#lorem(200)
= Conclusion
#lorem(100)
#lorem(100) |
https://github.com/adeshkadambi/ut-thesis-typst | https://raw.githubusercontent.com/adeshkadambi/ut-thesis-typst/main/README.md | markdown | MIT License | 
This is an unofficial Typst template for a University of Toronto graduate thesis.
### Installation Instructions
1. Fork the repository.
2. Clone your forked repository:
```shell
git clone https://github.com/YOUR-USERNAME/ut-thesis-typst
```
3. If you have Typst installed:
```shell
# creates `main.pdf` in the current working dir
typst compile main.typ
# watches source and recompiles on changes (recommended)
typst watch main.typ
```
Read my blog post [here](https://adeshkadambi.github.io/blog/Typst-Template).
|
https://github.com/sesameman/typst-slides-nankai | https://raw.githubusercontent.com/sesameman/typst-slides-nankai/main/themes/nankai.typ | typst | // =========================================
// Nankai University theme for Typst slides.
// Made By SesameMan.
// https://github.com/sesameman/typst-slides-nankai
// =========================================
#import "./slides.typ": *
#let heiti = ("Times New Roman", "Heiti SC", "Heiti TC", "SimHei")
#let songti = ("Times New Roman", "Songti SC", "Songti TC", "SimSun")
#let zhongsong = ("Times New Roman", "STZhongsong", "SimSun")
#let kaiti = ("Times New Roman", "Kaiti SC")
#set text(font: songti)
#let nankai-theme(
mail: "<EMAIL>",
college: "计算机学院",
usage: "毕业答辩",
color: rgb("#711A5F"),
biglogo: "nk-image/nankai.png",
watermark: "nk-image/nankai-10.png",
logo: "nk-image/nankai-white.png",
nkutext: "nk-image/nku-text.png",
) = data => {
let title-slide(slide-info, bodies) = {
place(dx: 50%,
dy: -13%,
image(watermark, height: 510pt)
)
place(dx: 0.4em+80pt+0.4em,
dy: 0.4em,
image(nkutext, height: 80pt)
)
place(dx: 0.4em,
dy: 0.4em,
image(biglogo, height: 80pt)
)
v(82pt)
align(center + horizon)[
#block(
stroke: (y: 1mm + rgb("#711A5F"), x: 1mm + rgb("#711A5F")),
inset: 1em,
breakable: false,
fill: rgb("#E4E5EA"),
radius: 15pt,
[
#box()[#text(1.3em)[*#data.title*] \
#{
if data.subtitle != none {
parbreak()
text(.9em)[#data.subtitle]
}
}
]
]
)
// #h(1fr)
#set text(size: 1em)
#grid(
columns: (1fr,) * calc.min(data.authors.len(), 3),
column-gutter: 1em,
row-gutter: 1em,
..data.authors
)
#block(
stroke: (left: 2mm + rgb("#711A5F")),
inset: 0.4em,
breakable: false,
align(left)[
#if bodies.len() > 1 {
panic("title slide of default theme does not support too many bodies")
} else if bodies.len() == 1 {
let body = bodies.first()
text(size: 1em, body)
}
]
)
#parbreak()
#text(0.8em)[#data.date]
#v(15fr)
]
}
// globe font setting
// Next Pages
let displayed-title(slide-info) = if "title" in slide-info {
text(fill: rgb("#fafafa"), slide-info.title)
} else {
[]
}
let decoration(position, body) = {
let border = color
let strokes = (
header: ( bottom: border ),
footer: ( top: border )
)
block(
stroke: none,
width: 100%,
height: 1em,
fill: color,
outset: 0em, inset: 0em, breakable: false,
align(left + horizon)[#h(0.2em)
#box[
#box(image(logo, width: .8em))]
#text(fill: rgb("#fafafa"), 0.5em,body
)
]
)
}
let default(slide-info, bodies) = {
if bodies.len() != 1 {
panic("default variant of default theme only supports one body per slide")
}
let body = bodies.first()
place(dx: 50%,
dy: -13%,
image(watermark, height: 510pt)
)
// header
decoration("header", section.display() + h(1fr) +displayed-title(slide-info)+ h(1fr)+ data.date +h(1em)+ usage + h(1em))
if "title" in slide-info {
block(
width: 100%, inset: (x: 2em), breakable: false,
outset: 0em,
heading(level: 1, slide-info.title)
)
}
v(1fr)
block(
width: 100%, inset: (x: 2em), breakable: false, outset: 0em,
text(size: 0.8em)[#body]
)
v(2fr)
// footer
decoration("footer")[
#h(1em)
#data.short-authors #h(4em)
#mail #h(4em)
#college #h(1fr)
#text(1.5em)[#logical-slide.display()] #h(1em)
]
}
let wake-up(slide-info, bodies) = {
if bodies.len() != 1 {
panic("wake up variant of default theme only supports one body per slide")
}
let body = bodies.first()
v(0em)
// block(
// width: 100%, inset: (x: 2em), breakable: false, outset: 0em,
// text(size: 1.5em, fill: white, {v(1fr); body; v(1fr);})
// )
block(
width: 100%, height: 100%-1em,inset: 2em, breakable: false, outset: 0em,
fill: color,
text(size: 1.5em, fill: white, {v(1fr); body; v(1fr);})
)
v(1fr)
decoration("footer")[
#h(1fr)#text(1.5em)[#logical-slide.display()] #h(1em)
]
}
(
"title slide": title-slide,
"default": default,
"wake up": wake-up,
)
} |
|
https://github.com/DannySeidel/typst-dhbw-template | https://raw.githubusercontent.com/DannySeidel/typst-dhbw-template/main/examples/example-glossary.typ | typst | MIT License | #let glossary = (
Vulnerability: "A Vulnerability is a flaw in a computer system that weakens the overall security of the system.",
Patch: "A patch is data that is intended to be used to modify an existing software resource such as a program or a file, often to fix bugs and security vulnerabilities.",
Exploit: "An exploit is a method or piece of code that takes advantage of vulnerabilities in software, applications, networks, operating systems, or hardware, typically for malicious purposes.",
) |
https://github.com/gyarab/2023-4e-ruzicka-jako_pavouk | https://raw.githubusercontent.com/gyarab/2023-4e-ruzicka-jako_pavouk/main/src-docs/titulnistrany.typ | typst | #set page(numbering: none)
#[
#set page(margin: 0in)
#set align(center)
#v(7em)
#stack(dir: ltr, spacing: 1em)[
#image("obrazky/logogyarab.png", width: 6em)
][
#v(1.5em)
#set align(left)
#stack(dir: ttb, spacing: 12pt)[
#text(20pt)[*Gymnázium Arabská, Praha 6, Arabská 14*]
][
#text(16pt)[Programming track, supervised by Mgr. <NAME>]
]
]
#v(8em)
#box(inset: (left: 8%),
image("obrazky/pavoukStudent.svg", width: 50%)
)
#v(8em)
#text(20pt)[*<NAME>*]\
#text(15pt)[Psaní všemi deseti]
#v(3em)
#text(18pt)[<NAME>]
#v(1.5em)
April 2024
#v(1fr)
#set page(margin: 1in)
#v(1fr)
#text(hyphenate: false,
  "I declare that I am the sole author of this project, that all citations are properly marked, and that all literature and other sources used are listed in the work. In accordance with Act No. 121/2000 Coll. (the Copyright Act), as amended, I hereby grant the school Gymnázium, Praha 6, Arabská 14, free of charge, the right to reproduce the work (§ 13) and the right to communicate the work to the public (§ 18), for an unlimited time and without territorial restriction."
)
#v(4em)
#set align(right)
In #box(width: 7em, repeat[.]) on #box(width: 8em, repeat[.]) #h(1fr) <NAME> #box(width: 8em, repeat[.])
#v(8em)
]
// ---------------- abstract -----------------
#set align(center)
#text(size: 18pt)[*Abstract*]\
#text(size: 12pt)[Project assignment]
#v(0.7em)
#pad(x: 2em)[
A web application for learning touch typing. It will consist of lessons, each adding 2-3 letters of the keyboard; in this way the user gradually masters the entire keyboard. Every lesson will contain exercises with random letters as well as with words made up only of the letters learned so far.

The application will also include a login system, giving every user a profile that shows their improvement and statistics such as average speed, error rate, percentage of completed lessons, etc. As for technology, I will use the Go programming language for the backend, the Vue.js framework for the frontend, and a PostgreSQL database to store user data and to access a dictionary of standard Czech.
]
|
|
https://github.com/Myriad-Dreamin/typst.ts | https://raw.githubusercontent.com/Myriad-Dreamin/typst.ts/main/fuzzers/corpora/bugs/smartquotes-on-newline_00.typ | typst | Apache License 2.0 |
#import "/contrib/templates/std-tests/preset.typ": *
#show: test-page
"test"#linebreak()"test"
"test"\
"test"
|
https://github.com/typst-doc-cn/tutorial | https://raw.githubusercontent.com/typst-doc-cn/tutorial/main/src/intermediate/stateful/s1.typ | typst | Apache License 2.0 |
#let curr-heading = state("curr-heading", ())
#set text(size: 8pt)
#let first-heading = state("first-heading", (:))
#let last-heading = state("last-heading", (:))
#let find-headings(headings, page-num) = if page-num > 0 {
headings.at(str(page-num), default: find-headings(headings, page-num - 1))
}
#let get-heading-at-page(loc) = {
let first-headings = first-heading.final(loc)
let last-headings = last-heading.at(loc)
let page-num = loc.page()
first-headings.at(str(page-num), default: find-headings(last-headings, page-num))
}
#let update-heading-at-page(h) = locate(loc => {
let k = str(loc.page())
last-heading.update(it => {
it.insert(k, h)
it
})
first-heading.update(it => {
if k not in it {
it.insert(k, h)
}
it
})
})
#let set-heading(content) = {
show heading.where(level: 2): it => {
it
update-heading-at-page(it.body)
}
show heading.where(level: 3): it => {
show regex("[\p{hani}\s]+"): underline
it
}
show heading: it => {
show regex("KiraKira"): box("★", baseline: -20%)
show regex("FuwaFuwa"): box("✎", baseline: -20%)
it
}
set page(header: locate(loc => {
set text(size: 5pt);
emph(get-heading-at-page(loc))
}))
content
}
#let set-text(content) = {
show regex("feat|refactor"): emph
content
}
#show: set-heading
#show: set-text
#set page(width: 120pt, height: 120pt, margin: (top: 12pt, bottom: 10pt, x: 5pt))
== 雨滴书v0.1.2
=== KiraKira 样式改进
feat: 改进了样式。
=== FuwaFuwa 脚本改进
feat: 改进了脚本。
== 雨滴书v0.1.1
refactor: 移除了LaTeX。
feat: 删除了一个多余的文件夹。
== 雨滴书v0.1.0
feat: 新建了两个文件夹。
|
https://github.com/ParaN3xus/numblex | https://raw.githubusercontent.com/ParaN3xus/numblex/main/lib/model.typ | typst | MIT License | // Models
/// Ordinals
/// --------
/// type: "ord"
/// func: `(<positional>n: int, <named>depth: int) => str`
#let ord(f) = {
if type(f) == str {
import "ordinals.typ": ordinal_funcs
// assert(
// f in ordinal_funcs,
// message: "Unknown ordinal: " + f,
// )
if f not in ordinal_funcs {
return (type: "ord", func: (n, ..args) => numbering(f, n))
}
return (type: "ord", func: ordinal_funcs.at(f))
}
return (type: "ord", func: f)
}
#let ord_mod(o, prefix: "", suffix: "") = {
return ord((..args) => prefix + (o.func)(..args) + suffix)
}
/// Constants
/// ---------
/// type: "const"
/// func: `(<named>depth: int) => str`
#let const(f) = {
if type(f) == str {
return (type: "const", func: (..args) => f)
}
return (type: "const", func: f)
}
#let const_from_str(s) = {
return const((..args) => s)
}
// Conditions
// ----------
// `(<positional>n: int, <named>depth: int) => bool`
/// Construct a new ord
///
/// - `cases`: A list of cases. Each case is a tuple of a condition and an ordinal.
#let match_ord(..cases) = {
cases = cases.pos()
return ord((..args) => {
for (cond, ord) in cases {
if cond(..args) {
if ord == none {
return ""
}
return (ord.func)(..args)
}
}
return ""
})
}
/// Construct a new const
///
/// - `cases`: A list of cases. Each case is a tuple of a condition and a constant.
#let match_const(..cases) = {
cases = cases.pos()
return const((..args) => {
for (cond, const) in cases {
if cond(..args) {
if const == none {
return ""
}
return (const.func)(..args)
}
}
return ""
})
}
/// Constructs a numbering function from parsed numbering structure
///
/// - elements: A list of Ordinal | Const
#let to_numbering(..elements, repeat: none) = {
elements = elements.pos()
return (..nums) => {
nums = nums.pos()
let cur_ind = 0
let depth = nums.len()
let res = ""
for e in elements {
if e.type == "ord" {
// Ignore the ordinal if the depth is not enough
if cur_ind >= depth {
continue
}
res += (e.func)(nums.at(cur_ind), depth: depth)
cur_ind += 1
} else if e.type == "const" {
res += (e.func)(depth: depth)
}
}
if repeat != none {
assert(false, message: "Not implemented yet")
}
return res
}
}
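
// A brief usage sketch (hypothetical, not part of the documented API): it
// assumes the pattern "1" is not registered in `ordinal_funcs`, so `ord("1")`
// falls back to Typst's built-in `numbering`.

```typst
// Hypothetical example: assemble a "1.1"-style numbering from the
// building blocks above and call it with counter values directly.
#let dotted = to_numbering(
  ord("1"),   // arabic ordinal for the first level
  const("."), // literal separator
  ord("1"),   // arabic ordinal for the second level
)
#dotted(2, 3) // "2.3"
#dotted(2)    // "2." — the second ordinal is skipped at depth 1
```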
|
https://github.com/RaphGL/ElectronicsFromBasics | https://raw.githubusercontent.com/RaphGL/ElectronicsFromBasics/main/DC/chap7/chap7.typ | typst | Other | == Series-parallel combination circuits
#include "1_whats_a_series_parallel_circuit.typ"
#include "2_analysis_technique.typ"
#include "3_redrawing_complex_schematics.typ"
#include "4_component_failure_analysis.typ"
#include "5_building_series_parallel_resistor_circuits.typ"
|
https://github.com/TypstApp-team/typst | https://raw.githubusercontent.com/TypstApp-team/typst/master/tests/typ/layout/clip.typ | typst | Apache License 2.0 | // Test clipping with the `box` and `block` containers.
---
// Test box clipping with a rectangle
Hello #box(width: 1em, height: 1em, clip: false)[#rect(width: 3em, height: 3em, fill: red)]
world 1
Space
Hello #box(width: 1em, height: 1em, clip: true)[#rect(width: 3em, height: 3em, fill: red)]
world 2
---
// Test clipping text
#block(width: 5em, height: 2em, clip: false, stroke: 1pt + black)[
But, soft! what light through
]
#v(2em)
#block(width: 5em, height: 2em, clip: true, stroke: 1pt + black)[
But, soft! what light through yonder window breaks? It is the east, and Juliet
is the sun.
]
---
// Test clipping svg glyphs
Emoji: #box(height: 0.5em, stroke: 1pt + black)[🐪, 🌋, 🏞]
Emoji: #box(height: 0.5em, clip: true, stroke: 1pt + black)[🐪, 🌋, 🏞]
---
// Test block clipping over multiple pages.
#set page(height: 60pt)
First!
#block(height: 4em, clip: true, stroke: 1pt + black)[
But, soft! what light through yonder window breaks? It is the east, and Juliet
is the sun.
]
---
// Test clipping with `radius`.
#set page(height: 60pt)
#box(
radius: 5pt,
stroke: 2pt + black,
width: 20pt,
height: 20pt,
clip: true,
image("/files/rhino.png", width: 30pt)
)
---
// Test clipping with `radius`, but without `stroke`.
#set page(height: 60pt)
#box(
radius: 5pt,
width: 20pt,
height: 20pt,
clip: true,
image("/files/rhino.png", width: 30pt)
)
|
https://github.com/hargoniX/bachelor | https://raw.githubusercontent.com/hargoniX/bachelor/main/abstract/abstract.typ | typst | #let title = "verix: A Verified Rust ix(4) driver"
#let author = "<NAME>"
#set par(justify: true)
#set document(
title: title,
author: author
)
#set page(numbering : "1")
// Title page.
#align(center)[
#text(2em, weight: 700, title)
#text(1.00em, author)
]
#v(10pt)
// Inspired by: https://grosser.science/howtos/paper-writing
// Introduction. In one sentence, what’s the topic?
As computer systems become increasingly ubiquitous and complex, both the negative impact of bugs and the likelihood of their occurrence are increasing.
Because of this, catching bugs with potentially catastrophic effects before they happen is becoming more and more important.
These bugs usually occur in one of two fashions, either in the implementation of some algorithm or data structure or in the interaction of a computer system with the outside world.
This thesis is mostly concerned with the latter kind.
// State the problem you tackle
Networking represents a key interaction point with the outside world for most contemporary computer systems. It usually involves three key components:
- the NIC driver
- the network stack
- the networking application
While verifying network stacks and applications requires substantial resources due to their complexity, verifying NIC drivers is much more approachable, since a driver has only two jobs:
1. Setting up the NIC
2. Handling the reception and transmission of packets.
// Summarize why nobody else has adequately answered the research question yet.
To our knowledge, no network drivers exist whose interaction with the NIC itself has been formally verified.
Instead, the focus is usually put on the interaction with other parts of the operating system or proving the absence of C-based issues:
- #cite("witowski2007drivers") and #cite("ball2004slam") are mostly concerned with the driver's interactions with the rest of the kernel
- #cite("more2021hol4drivers") develops a verified monitor for the interactions of a real driver with the NIC instead of verifying the driver itself
// Explain, in one sentence, how you tackled the research question.
In this thesis, we are going to show that formally verifying the interaction of a driver with the NIC is possible by implementing a model of the target hardware and using bounded model checking (BMC) to prove that the two cooperate correctly.
// How did you go about doing the research that follows from your big idea?
To show that the concept is viable in practice, we are going to implement a driver for the widely used Intel 82599ES NIC.
This is going to happen on the L4.Fiasco microkernel, so that misbehavior of the driver can barely affect the system as a whole in the first place.
On top of that, we are going to use the Rust programming language, which guarantees additional safety properties out of the box.
The driver and model themselves are going to be developed using a custom Rust eDSL in the spirit of svd2rust to make correct peripheral access easier.
We are then going to show, using the Kani BMC, that the driver correctly cooperates with a model of the 82599ES, where correct cooperation means that:
- The driver doesn't panic
- The driver doesn't put the model into an undefined state
- The driver receives all packets that are received by the model
- The model sends all packets that the driver is told to send
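
The Kani workflow described above can be illustrated with a minimal proof harness (hypothetical, not taken from the verix sources): a nondeterministic input is drawn with `kani::any()`, constrained with `kani::assume()`, and a safety property is asserted over all executions.

```rust
// Minimal Kani harness sketch (hypothetical; not from verix).
// Descriptor rings wrap around, so a ring index must stay in bounds.
fn ring_index(head: usize, len: usize) -> usize {
    head % len
}

#[cfg(kani)]
#[kani::proof]
fn ring_index_in_bounds() {
    let head: usize = kani::any(); // nondeterministic input
    let len: usize = kani::any();
    kani::assume(len > 0); // rule out division by zero
    assert!(ring_index(head, len) < len);
}
```

Under `cargo kani`, the checker explores every execution of the harness; without the `kani` cfg, the harness compiles away and only the plain function remains.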
#bibliography("bib.bib") |
|
https://github.com/Tran-Thu-Le/typst-collection | https://raw.githubusercontent.com/Tran-Thu-Le/typst-collection/main/plot/graph-of-function.typ | typst | #import "@preview/cetz:0.2.2": canvas, plot
#import "@preview/cetz:0.2.2"
#canvas(length: 1cm, {
import cetz.draw: *
// content((2, 2), [Here], padding: .1)
plot.plot(
size: (7, 7),
axis-style: "school-book",
plot-style: (stroke: black),
x-tick-step: 2, y-tick-step: 2,
x-min: 1-5, x-max: 1+5,
y-min: 1-5, y-max: 1+5,
x-grid: true, y-grid: true,
{
plot.add(
style: (stroke: (paint: blue, thickness: 2pt)),
label: $f(x) = (x-3)/(x-1)$,
domain: (-10,0.9),
samples: 100,
x => (x, (x -3)/(x - 1)),
)
plot.add(
style: (stroke: (paint: blue, thickness: 2pt)),
domain: (1.1, 10),
samples: 100,
x => (x, (x -3)/(x - 1)),
)
plot.add(
style: (stroke: (paint: red, dash: "dashed")),
domain: (-10, 10),
samples: 100,
x => (x, 1),
)
plot.add(
style: (stroke: (paint: red, dash: "dashed")),
domain: (-10, 10),
samples: 100,
x => (1, x),
)
plot.annotate({
content((-2, 2), $f(x)$, padding: 0.5em)
})
})
})
|
|
https://github.com/An-314/Note-of-Quantum_Mechanics | https://raw.githubusercontent.com/An-314/Note-of-Quantum_Mechanics/main/cheatsheet.typ | typst | #import "@preview/physica:0.9.2": *
#import "@preview/tablex:0.0.6": tablex, hlinex
#import "@preview/tablem:0.1.0": tablem
#let three-line-table = tablem.with(
render: (columns: auto, ..args) => {
tablex(
columns: columns,
auto-lines: false,
align: center + horizon,
hlinex(y: 0),
hlinex(y: 1),
..args,
hlinex(),
)
}
)
#set page(
paper: "a4",
margin: (x: 0.5cm, y: 0.5cm),
)
#show: rest => columns(3,gutter: 6pt, rest)
#let song = ("Linux Libertine", "SimSun")
#let hei = ("Linux Libertine", "SIMHEI")
#let kai = ("Linux Libertine", "KaiTi",)
#let xbsong = "FZXiaoBiaoSong-B05"
#let code = "Consolas"
#let title-font = hei
#let author-font = kai
#let body-font = song
#let heading-font = hei
#let caption-font = kai
#let header-font = kai
#let strong-font = hei
#let emph-font = kai
#let raw-font = code
#set block(spacing: 4pt)
#show strong: set text(font: strong-font)
#show strong: set block(spacing: 6pt)
#show emph: set text(font: emph-font)
#set text(size: 8pt, font: body-font)
#set par(leading: 4pt)
*量子力学基本公设*
#set text(size: 6pt)
+ 微观体系的状态由波函数描述,波函数满足单值、有限、连续条件
+ 波函数的动力学演化满足薛定鄂方程
+ 力学量用*厄密算符*表示,且有组成*完备集的本征函数系*
+ 任一波函数可以展开为力学量算符本征函数的线性叠加,测得力学量为本征值$lambda_n$的几率为展开式中对应本征函数系数的模方$abs(c_n)^2$
+ 全同粒子相互调换不改变体系状态
#set text(size: 8pt)
*表象变换*
$S$是*幺正变换*,$S^dagger = S^(-1)$
_基底_:$bra(u)^* = bra(u')^* S$,$S_(m n) = braket(u^'_m,u_n)$
即:$ket(u') = S^* ket(u)$, $ket(u) = S^TT ket(u')$
_态矢_:$ket(psi') = S ket(psi)$,$S_(m n) = braket(psi^'_m,psi_n)$
即:$ket(psi') = S ket(psi)$,$ket(psi) = S^dagger ket(psi')$
_算符_:$A' = S A S^dagger$
*Dirac符号*
基底$\{ket(n)\}$_正交归一_:$braket(m,n) = delta_(m n)$
_完备性_:$sum_n ketbra(n) = I$
上面中的一项被称作*投影算符*:$P_n = ket(n) bra(n)$,有性质$P_n^2 = P_n$
- 态矢量在具体表象中的表示
基矢量$ket(k)$,态矢量$ket(psi)$用基矢量展开:
$
ket(psi) = sum_k ket(k) braket(k, psi) = sum_k a_k ket(k)
$
其中$a_k = braket(k, psi)$是态矢量在$ket(k)$表象中的分量。
${a_k} = {braket(k, psi)}$是态矢量$ket(psi)$在$ket(k)$表象中的表示
$
mat(
a_1, a_2, dots;
)^TT=
mat(
braket(1, psi), braket(2, psi), dots;
)^TT
$
- 算符在具体表象中的表示
$
bra(j) hat(L) ket(psi) = sum_k bra(j) hat(L) ket(k) braket(k, psi) = braket(j, phi)
$
即:$sum_k L_(j k) a_k = b_j$
其中
$
L_(j k) = bra(j) hat(L) ket(k)
$
就是算符$hat(L)$在$F$表象中的*矩阵元*。
算符$hat(L)$的狄拉克符号表示为:
$
hat(L) = sum_(j k) L_(j k) ket(j) bra(k) = sum_(j k) ket(j) bra(j) hat(L) ket(k) bra(k)
$
算符$hat(F)$在其自身$F$表象中的表示为:
$
F_(m n) = bra(m) hat(F) ket(n) = bra(m) f_n ket(n) = f_n delta_(m n)\
hat(F) = sum_n f_n ket(n) bra(n)
$
其中$f_n$是$hat(F)$在$ket(n)$表象中的本征值。*任何算符在其自身表象中是对角化的*。
*中心力场中的运动*
中心力场中两体问题的定态Schrödinger方程:
#set text(size: 6pt)
$
(-hbar^2/(2 m_1) nabla^2_1 -hbar^2/(2 m_2) nabla^2_2+ U(abs(arrow(r)_1 - arrow(r)_2))) Psi(arrow(r)_1, arrow(r)_2) = E_"tot" Psi(arrow(r)_1, arrow(r)_2)
$
#set text(size: 8pt)
总能量$E_"tot"$可分为整体*平动*动能和*相对运动*能量(相对动能+势能)两部分。坐标转化:
$
cases(
arrow(R) = (m_1 arrow(r)_1 + m_2 arrow(r)_2)/(m_1 + m_2) "(质心系坐标)"\
arrow(r) = arrow(r)_1 - arrow(r)_2 "(相对坐标)"
), arrow(r)_1 , arrow(r)_2 => arrow(R), arrow(r)
$
#set text(size: 5pt)
$
1/m_1 nabla^2_1 + 1/m_2 nabla^2_2 = 1/M nabla^2_R + 1/mu nabla^2_r , mu = (m_1 m_2) /M
$
#set text(size: 8pt)
分离变量带入:
#set text(size: 5pt)
$
- hbar^2/(2 M) nabla^2_R phi(arrow(R)) = E_"cm" phi(arrow(R)) , (- hbar^2/(2 mu) nabla^2_r + U(r)) psi(arrow(r)) = (E_"tot" - E_"cm") psi(arrow(r))
$
#set text(size: 8pt)
第一个方程的解即自由粒子平面波:
$
phi(arrow(R)) = c_1 e^(i/hbar arrow(P) dot arrow(R)) + c_2 e^(-i/hbar arrow(P) dot arrow(R))
$
其中$P = sqrt(2 M E_"cm")$是质心动量。
第二个方程的解是中心势场问题的解:
#set text(size: 7pt)
$
(- hbar^2/(2 mu) nabla^2_r + U(r)) psi(arrow(r)) = E psi(arrow(r))
$
#set text(size: 8pt)
若$U(r -> oo) > E$,这个方程的能量本征值分立。
#set text(size: 6pt)
$
nabla^2_r = 1/r^2 partial/(partial r) (r^2 partial/(partial r)) + 1/(r^2 sin(theta)) partial/(partial theta) (sin(theta) partial/(partial theta)) + 1/(r^2 sin^2(theta)) partial^2/(partial phi^2)\
- hbar^2/(2 mu) nabla^2 = - hbar^2/(2 mu r^2) partial/(partial r) (r^2 partial/(partial r)) + 1/(2 mu r^2) hat(L)^2
$
#set text(size: 8pt)
其中$hat(L)^2$是角动量平方算符。方程化为
#set text(size: 6pt)
$
(- hbar^2/(2 mu r^2) partial/(partial r) (r^2 partial/(partial r)) + 1/(2 mu r^2) hat(L)^2 + U(r)) psi(arrow(r)) = E psi(arrow(r))
$
#set text(size: 8pt)
在球坐标系中分离变量:
$
psi(arrow(r)) = R(r) Y(theta, phi)\
hat(L)^2 Y_(l m) (theta, phi) = l (l + 1) hbar^2 Y_(l m) (theta, phi)
$
径向方程:
#set text(size: 6pt)
$
(- hbar^2/(2 mu r^2) dd("")/(dd(r)) (r^2 dd("")/(dd(r))) + l (l + 1) hbar^2/(2 mu r^2) + U(r)) R_l (r) = E R_l (r)
$
#set text(size: 8pt)
设$R(r) = u(r)/r$,则有
#set text(size: 5pt)
$
- hbar^2 / (2 mu) dd(""^2u) /dd(r^2) + ((l (l + 1) hbar^2) / (2 mu r^2) + U(r)) u = E u
$
有$2l+1$重简并。边界条件:
$
lim_(r -> 0) r R_l (r) = lim_(r ->0) u(r) = 0
$
当$r→0$时,上式的渐进式是:
$
dd(""^2R)/dd(r^2) + 2/r dd(R)/dd(r) - (l(l+1))/r^2 R= 0
$
在正则奇点$r=0$的邻域内,设$R∝ r^s$,指标方程
$
s(s-1) r^(s-2) + 2 s r^(s-1) - l(l+1) r^(s-2) = 0
$
得到
#set text(size: 8pt)
$
u(r) = r R(r) prop r^(l + 1)
$
#parbreak()
*氢原子*
#set text(size: 5pt)
$
U(r) = - 1/(4 pi epsilon_0) (Z e^2) / r
,
dd(""^2 u)/dd(r^2) + ((2 mu) / hbar^2( E + k_1 (Z e^2) / r )- (l (l + 1)) / (r^2)) u = 0
$
#set text(size: 8pt)
对于束缚态,$E<0$。定义无量纲量
#set text(size: 6pt)
$
rho = alpha r , alpha = sqrt(8 mu abs(E)) / (hbar), beta = (2 mu k_1 Z e^2)/(alpha hbar^2) = (k_1 Z e^2) / hbar sqrt(mu/(2 abs(E)))
$
#set text(size: 8pt)
于是方程化为:
#set text(size: 6pt)
$
dd(""^2 u)/dd(rho^2) + (- 1/4 + beta/rho - (l (l + 1)) / rho^2 ) u = 0
$
渐近分析假设$u(rho) = rho^(l+1) e^(-rho/2) nu(rho)$,得到合流超几何方程:
$
dd(""^2 nu)/dd(rho^2) + (2(l + 1)/rho - 1) dd(nu)/dd(rho) + (beta - (l + 1))/rho nu = 0
$
有解条件$n_r = beta - l - 1 = 0, 1, 2, ...$
#set text(size: 8pt)
得到$n^2$度简并的能级:
$
E_n = - (mu k_1^2 Z^2 e^4)/(2 hbar^2 n^2) = - (k_1 Z e^2)/(2 a/Z n^2) , a = hbar^2/(mu k_1 e^2)
$
对应的波函数是:
#set text(size: 6pt)
$
psi_(n l m) (r, theta, phi) = R_(n l) (r) Y_(l m) (theta, phi)\,R_(n l) (r) = (u_(n l) (r))/r
$
其中$u_(n l) (r)$是缔合Laguerre多项式。
#set text(size: 8pt)
电子在$(r, r+dd(r))$*球壳中的概率*为:
$
|R_(n l) (r)|^2 r^2 dd(r) = |u_(n l) (r)|^2 dd(r)
$
$|u_(n l) (r)|^2$的极大值位置称为*最可几半径*。
在$(theta , phi)$的*立体角$dd(Omega)$中的概率密度*和$phi$无无关:
$
|Y_(l m) (theta, phi)|^2 dd(Omega) prop | P_l^m (cos theta) |^2 sin theta dd(theta) dd(phi)
$
#set text(size: 6pt)
设氢原子中电子$psi_(n l m)$态电流密度:
$
arrow(j) = - e (i hbar)/(2 mu) (psi grad psi^* - psi^* grad psi)
$
而
$
psi_(n l m) tilde R_(n l) (r) P^m_l (cos theta) e^(i m phi)
$
其中$R,P$是实函数,从而$j_r = j_theta =0$。
#set text(size: 5pt)
$
j_phi &= (i e hbar)/(2 mu) 1/(r sin theta) (psi^* partial/(partial phi) psi - psi partial/(partial phi) psi^*)= - (e hbar m)/(mu r sin theta) |R_(n l) (r) P^m_l (cos theta)|^2
$
#set text(size: 6pt)
电流密度$j_phi$对应的磁矩为:
#set text(size: 4pt)
$
arrow(M) = 1/2 integral arrow(r) crossproduct arrow(j) dd(V),
  M_z = 1/2 integral r sin theta j_phi dd(V) = - (e hbar m)/(2 mu) integral abs(psi_(n l m))^2 dd(V) = - (e hbar m)/(2 mu)
$
#set text(size: 6pt)
$
M_z = - mu_B m,mu_B = (e hbar) / (2 mu), g = M_z / L_z = (- m mu_B)/(m hbar) = - e/(2 mu)
$
#set text(size: 8pt)
#parbreak()
*三维各向同性谐振子*
_直角坐标系下_
#set text(size: 5pt)
$
V(r) = 1/2 mu omega^2 r^2 = 1/2 mu omega^2 (x^2 + y^2 + z^2),H = sum_i H_i , H_i = - hbar^2/(2 mu) nabla_i^2 + 1/2 mu omega^2 r_i^2
$
#set text(size: 8pt)
系统波函数分离变量:
$
psi(x, y, z) = psi_(n_x) (x) psi_(n_y) (y) psi_(n_z) (z)
$
其中$psi_n$为*一维谐振子*与量子数$n$的本征函数。
$
E = (n_x + n_y + n_z + 3/2) hbar omega = (N + 3/2) hbar omega
$
#set text(size: 6pt)
Virial定理:$macron(T) = macron(V) = 1/2 macron(E)$;对于给定的$N$,其简并度为
$((N + 1)(N + 2))/2$
#set text(size: 8pt)
_极坐标系下_
#set text(size: 6pt)
$
(dd("")/dd(r) + (2 mu)/hbar^2 (E - 1/2 mu omega^2 r^2) -( l (l + 1) hbar )/(2 mu r^2)) u = 0
$
$
rho = alpha r, alpha = sqrt(mu omega / hbar)\,lambda = 2 E / (hbar omega)
,
R(rho) = e^(-rho^2/2) rho^l v(rho)
$
代入原方程后,再做变量代换$ξ = ρ^2$得到*合流超几何方程*:
$
dd(""^2v)/dd(ξ^2) + ((2l + 3)/(2 ξ) - 1) dd(v)/dd(ξ) + (lambda - 2l -3)/(4 eta) v = 0
$
有解条件为$n_r = (lambda - 2l - 3)/4 = 0, 1, 2, ...$
解得
$
lambda = 2N + 3 ,N = 2n_r + l
,
E_N = (N + 3/2) hbar omega
$
在$N$给定以后,$l$可以取值$l = N , N - 2, ..., 0"或"1$
最后系统径向波函数为
$
R(r) = C L_(n_r)^(l+1/2) (rho^2) rho^l e^(-rho^2/2) (rho = sqrt((mu omega) / hbar) r)
$
其中$L_(n_r)^(l+1/2)$是缔合Laguerre多项式。
#set text(size: 8pt)
_表象变换_ $(hat(H), hat(L)^2, hat(L)_z)$与$(hat(H)_x, hat(H)_y, hat(H)_z)$
#set text(size: 7pt)
$
  Psi_000 = (2 alpha^(3/2))/pi^(1/4) e^(-alpha^2 r^2/2) Y_00 = psi_0 (x) psi_0 (y) psi_0 (z) = Phi_000
$
$
mat(
Psi_011; Psi_010 ; Psi_(01-1)
) = (sqrt(2) alpha^(5/2)) / pi^(3/4) e^(-alpha^2 r^2/2) mat(
-1/sqrt(2) r sin theta e^(i phi); r cos theta; -1/sqrt(2) - r sin theta e^(-i phi)
) prop mat(
    -1/sqrt(2)(x + i y); z; 1/sqrt(2)(x - i y)
)
$
$
mat(
Phi_100; Phi_010; Phi_001
) = (sqrt(2) alpha^(5/2)) / pi^(3/4) e^(-alpha^2 r^2/2) mat(
x; y; z
)
Phi = S^* Psi, S = mat(
- 1/sqrt(2), 0 ,1/sqrt(2); - i/sqrt(2), 0, -i/sqrt(2); 0, 1, 0
)
$
#set text(size: 8pt)
*带电粒子在电磁场中的运动*
带电粒子在外电磁场作用下的Hamilton算符:
$
hat(H) &= 1/(2 mu) (hat(p) - q hat(A))^2 + q Phi(arrow(r)) = 1/(2 mu) (- i hbar grad - q arrow(A))^2 + q Phi(arrow(r))
$
规范(gauge)变换不改变系统物理学性质
$
hat(H)' psi' = i hbar partial/(partial t) psi'
cases(
psi -> psi' = e^(i theta) psi\
hat(A) -> hat(A)' = hat(A) + hbar/q grad theta\
Phi -> Phi' = Phi - hbar/q (partial theta) / (partial t)
)
$
*Zeeman效应(奇):碱金属原子的能级在强磁场中分裂的现象称为正常Zeeman效应。【磁-轨耦合】*
#set text(size: 6pt)
$
1/(2m) (-hbar^2 nabla^2 psi + i hbar q ((grad dot arrow(A)) psi +2 arrow(A) dot grad psi) + q^2 arrow(A)^2 psi) = (E - q Phi) psi\
$
#set text(size: 8pt)
取*Coulomb规范*:$div arrow(A) = 1/2 div (arrow(B) crossproduct arrow(r)) = 0 $则
#set text(size: 6pt)
$
1/(2m) (-hbar^2 nabla^2 psi + 2i hbar q (arrow(A) dot grad psi) + q^2 arrow(A)^2 psi) = (E - q Phi) psi\
1/(2m) (-hbar^2 nabla^2 psi + i hbar q (arrow(B) crossproduct arrow(r)) dot grad psi + 1/4q^2 (arrow(B) crossproduct arrow(r))^2 psi) = (E - q Phi) psi\
$
相对第二项(的变化量)来说,第三项可以忽略不计:
$
  (- hbar^2/(2m) nabla^2 - q/(2m) arrow(B) dot hat(arrow(L)) + q Phi) psi = E psi
$
如果选择$z$轴的方向为$arrow(B)$的方向,电子电荷$q = - e$,所以
$
(- hbar^2/(2m) nabla^2 - (q B)/(2 m) hat(L)_z + q Phi) psi = E psi
$
$
(- hbar^2/(2 mu) nabla^2 + (e B)/(2 mu) hat(L)_z - e Phi) psi = E psi
$
未加磁场$(B=0)$时碱金属原子的能级与波函数
$
E_(n l) , psi_(n l) (r, theta, phi) = R_(n l) (r) Y_(l m) (theta, phi)
$
#set text(size: 8pt)
每一个能级是$(2l+1)$度简并的。那么加上外磁场后(相当于$hat(H)' = hat(H) + (e B)/(2 mu)hat(L)_z$),本征波函数不变,本征值发生改变,简并将被打破:
$
E_(n l m) = E_(n l) + (e B)/(2 mu)hbar m
$
*Landau能级*
*对称规范*约定$arrow(A) = 1/2 arrow(B) crossproduct arrow(r)$,取$B$沿z轴方向。
进行规范变换,新的$arrow(A)$仍满足库伦规范,$arrow(B)$不变:
$
arrow(A) -> arrow(A)' = arrow(A) + grad f , f = 1/2 B x y
$
这时磁矢势变为*Landau规范*
$
arrow(A) = mat(-1/2 B y, 1/2 B x, 0)^TT -> arrow(A)' = mat(0, B x, 0)^TT
$
电子运动限制在$x-y$平面内(二维电子气模型):
$
1/(2 m) (hat(p)_x^2 + (hat(p)_y + e B x)^2) psi = E psi
$
$[hat(p)_y, hat(H)] = 0$,分离:$psi(x, y) = e^(i k_y y) phi(x)$
$
(- hbar^2/(2 m) dd(""^2)/dd(x^2) + 1/2 m omega_c^2 (x + x_0)^2 ) phi(x) = E phi(x)
$
$
omega_c = (e B) / m, x_0 = k_y l_c^2 , l_c = sqrt(hbar /(m omega_c)) = 1/alpha , alpha^2 = (e B) / hbar
$
$ω_c$是回旋角频率,$l_c$是最小回旋半径
$
(m v)/R = e B ,
T = (2 pi R)/v = (2 pi m) / (e B),
omega_c = (2 pi) / T = (e B)/m
$
$
2 pi l_c = lambda = h/p,
e B = p/l_c
=> l_c = sqrt(hbar / (e B)) = sqrt(hbar / (m omega_c))
$
解是坐标平移了$x_0$是一维谐振子方程的解:
$
phi(x) = phi_n (x+x_0),
E_n = (n + 1/2) hbar omega_c "Landau能级"
$
*粒子能量是转动产生的磁矩*与磁场的相互作用能
#set text(size: 6pt)
$
E = (n + 1/2) hbar omega_c = (n + 1/2) (e hbar) / m B = -mu_z B
,
  mu_z = - (2n + 1) (e hbar) / (2 m) = - (2n + 1) mu_B
$
#set text(size: 8pt)
即磁矩方向与磁场方向相反——*Landau抗磁性*。
#set text(size: 6pt)
波函数$e^(i k_y y) phi_n (x+x_0)$,简并度是无穷大的:对于每个能级$E_n$,对应波函数中$k_y$可以任意取值。考虑电子气限于$L_x$宽的长条,有
$
0 < x_0 < L_x => 0 < k_y < L_x alpha^2
$
$y$向周期性边界条件:长条内每$L_y$长度内有一个电子(箱归一化)
$
k_y = (2 pi N)/L_y , N = 0, 1, 2, ...
,
0 < N < (L_x L_y alpha^2)/(2 pi) = (e B L_x L_y) / (h)
$
于是*单位面积内的能级简并度为*
$
g = (e B) / (h)
$
#set text(size: 8pt)
*电子自旋及其描述*
角动量算符的一般定义:$[hat(J)_i, hat(J)_j] = i hbar epsilon_(i j k) hat(J)_k$
$
[hat(J)^2 , hat(J)_i] = 0, hat(J)^2 = hat(J)_x^2 + hat(J)_y^2 + hat(J)_z^2
$
*角动量本征态*是$hat(J)^2$和$hat(J)_z$的共同本征态
$
hat(J)^2 ket(eta"," m) = eta hbar^2 ket(eta"," m),hat(J)_z ket(eta"," m) = m hbar ket(eta"," m)
$
*阶梯算符*:$hat(J)_(plus.minus) = hat(J)_x ± i hat(J)_y$
$
[hat(J)_z , hat(J)_(plus.minus)] = [hat(J)_z, hat(J)_x] ± i [hat(J)_z, hat(J)_y] = ± hbar hat(J)_(plus.minus)
$
#set text(size: 6pt)
$
hat(J)_z hat(J)_(plus.minus) ket(eta"," m) = (m ± 1) hbar hat(J)_(plus.minus) ket(eta"," m)
,
hat(J)_(plus.minus) ket(eta"," m) = c ket(eta'"," m ± 1)
$
#set text(size: 8pt)
$
[hat(J)^2 , hat(J)_(plus.minus)] = [hat(J)^2 , hat(J)_x] ± i [hat(J)^2 , hat(J)_y] = 0
$
#set text(size: 5pt)
$
hat(J)^2 hat(J)_(plus.minus) ket(eta"," m) = hat(J)_(plus.minus) hat(J)^2 ket(eta"," m) = eta hbar^2 hat(J)_(plus.minus) ket(eta"," m)
,
hat(J)^2 c ket(eta'"," m ± 1) = c eta' hbar^2 ket(eta'"," m ± 1) => eta = eta'
$
#set text(size: 8pt)
$m$极大值为$j$,$eta=j(j+1)$,用$ket(j "," m)$表示本征态:
$
hat(J)_(plus.minus) ket(j "," m) = hbar sqrt(j(j+1) - m(m ± 1)) ket(j "," m ± 1)
$
在$(hat(L)^2,hat(L)_z)$表象,基底$ket(11),ket(10),ket(1-1)$,
$
hat(J)_z = hbar mat(
1, 0, 0; 0, 0, 0; 0, 0, -1
), hat(J)_x = (hat(J)_+ + hat(J)_-)/2 = hbar/sqrt(2) mat(
0, 1, 0; 1, 0, 1; 0, 1, 0
)
$
久期方程得到$hat(J)_x$本征态与概率幅:
#set text(size: 6pt)
$
1/2 mat(1;sqrt(2);1) , 1/sqrt(2) mat(1;0;-1) , 1/2 mat(1;-sqrt(2);1); 1/4, 1/2, 1/4
$
#set text(size: 8pt)
*电子自旋角动量*
- 电子自旋任何方向投影只取$plus.minus hbar/2$,$hat(S)_i$本征值$± hbar/2$,
$
hat(S)_x^2 = hat(S)_y^2 = hat(S)_z^2 = (hbar/2)^2,hat(S)^2 = hat(S)_x^2 + hat(S)_y^2 + hat(S)_z^2 = 3/4 hbar^2
$
- 自旋角动量导致电子有*自旋磁矩*,其$z$轴投影为
$
mu_z / S_z = - e/m_e , mu_z = minus.plus mu_B , hat(arrow(mu)) = - e /m_e hat(arrow(S))
$
在$(hat(S)^2, hat(S)_z)$表象下,利用升降算符,得Pauli矩阵
$
sigma_x = mat(
0,1;
1,0
), sigma_y = mat(
0,-i;
i,0
), sigma_z = mat(
1,0;
0,-1
)
$
$
hat(S)_x = hbar/2 sigma_x, hat(S)_y = hbar/2 sigma_y, hat(S)_z = hbar/2 sigma_z\,hat(arrow(S)) = hbar/2 hat(arrow(sigma))
$
#set text(size: 6pt)
*Clifford代数*
$sigma_i sigma_j = delta_(i j) I + i epsilon_(i j k) sigma_k$,
$(arrow(a) dot arrow(sigma)) (arrow(b) dot arrow(sigma)) = arrow(a) dot arrow(b) I + i arrow(a) crossproduct arrow(b) dot arrow(sigma)
$
#set text(size: 8pt)
*二分量波函数*,*二分量旋量*:
$
Psi(arrow(r), t) = Psi_1 (arrow(r), t) v_+ + Psi_2 (arrow(r), t) v_- = mat(Psi_1 (arrow(r), t), Psi_2 (arrow(r), t))^TT
$
$
Psi_1 (arrow(r), t) = Psi(arrow(r) , t, S_z = hbar/2), Psi_2 (arrow(r), t) = Psi(arrow(r) , t, S_z = -hbar/2)
$
粒子轨道和自旋*无耦合*时,直积态:$Psi(arrow(r), t) = psi(arrow(r), t) ket(chi)$,$ket(chi) = c_1 ket(arrow.t) + c_2 ket(arrow.b) = mat(c_1; c_2)$,否则*耦合*。
算符的平均值是:
$
macron(A) = integral Psi^dagger hat(A) Psi dd(arrow(r)) = integral (mat(Psi_1^*, Psi_2^*) hat(A) mat(Psi_1, Psi_2)^TT) dd(arrow(r))
$
算符$hat(A)$和自旋无关则在自旋表象中对角化:$hat(A) = hat(A) I$
*静磁场*只考虑内禀自旋,$hat(H) = - g hat(arrow(S)) dot arrow(B) = - (g hbar B)/2 arrow(sigma) dot arrow(e)_B$
#set text(size: 6pt)
$
arrow(sigma) dot arrow(e)_B = mat(
cos theta, sin theta e^(-i phi);
sin theta e^(i phi), - cos theta
), nu_1 = mat(
cos theta/2 e^(-i phi/2);
sin theta/2 e^(i phi/2)
), nu_(-1) = mat(
sin theta/2 e^(-i phi/2);
- cos theta/2 e^(i phi/2)
)
$
#set text(size: 8pt)
在静磁场中两个相邻本征能级之差为
$
Delta E = abs(g hbar B) = abs(hbar omega_L),omega_L = - g B "Lamor频率"
$
*时间演化与量子跃迁*:
$
ket(chi(t)) &= e^(- i/hbar t hat(H)) ket(chi(0))
= e^(- i (w_L t)/2 arrow(sigma) dot arrow(e)_B) mat(a_0; b_0) sigma_z"表象"\
&= (cos(w_L/2 t) - i sin(w_L/2 t) arrow(sigma) dot arrow(e)_B ) mat(a_0; b_0)\
$
$e^(- i/hbar t hat(H))$有非0非对角矩阵元,则可能自旋“*跃迁*”。
周期性跃迁*振荡*:$sigma_z$和$hat(H)$不对易,$ket(chi)$非定态。
*角动量的合成* $arrow(J) = arrow(J)_1 + arrow(J)_2$也是角动量算符
$hat(arrow(J))_1$$hat(arrow(J))_2$独立:$[hat(J)_i, hat(J)_j] = i hbar epsilon_(i j k) hat(J)_k$,$[hat(J)_(1 i), hat(J)_(2 j)] = 0$;
*未耦合*$(hat(J)_1^2, hat(J)_2^2, hat(J)_1^2, hat(J)_2^2)$
共同本征态是直积,并矢
$
ket(j_1","m_1","j_2","m_2) = ket(j_1","m_1) ket(j_2","m_2) [(2j_1 + 1)(2j_2 + 1)"维"]
$
*耦合*$(hat(J)^2, hat(J)_z, hat(J)_1^2, hat(J)_2^2)$共同本征态记做$ket(j","m";"j_1","j_2)$
#set text(size: 5.5pt)
用$hat(J)_z = hat(J)_(1z) + hat(J)_(2z)$的本征态展开:$(m - m_1 - m_2) C(j ,m; j_1, m_1, j_2, m_2) = 0$
#set text(size: 6pt)
*Clebsch-Gordan系数*
$
ket(j","m";"j_1","j_2) = sum_(m = m_1 + m_2) C(j ,m; j_1, m_1, j_2, m_2) ket(j_1","m_1) ket(j_2","m_2)
$
#set text(size: 8pt)
*最大投影态*:直积与耦合的本征态相同$ket(j_1 j_1) ket(j_2 j_2) = ket(j = j_1 + j_2","m = j_1 + j_2","j_1","j_2)$
$
hat(J)^2 = hat(arrow(J))_1^2 + hat(arrow(J))_2^2 + hat(J)_(1 +) hat(J)_(2 -) + hat(J)_(1 -) hat(J)_(2 +) + 2 hat(J)_(1 z) hat(J)_(2 z)
$
*三角形法则*:$abs(j_1 - j_2) <= j <= j_1 + j_2,m = m_1 + m_2$
- *电子的旋-轨耦合总角动量*
#set text(size: 6pt)
$
hat(arrow(J)) = hat(arrow(L)) + hat(arrow(S))
,
j_1 = l = 0, 1, 2, ...,
j_2 = s = 1/2
,
j = l+1/2 或 l-1/2
$
#set text(size: 8pt)
*CG系数的特点及符号约定*:CG系数都为*实数*,同时在$m=j,m_1=j_1$时,系数为*非负实数*。
#set text(size: 6pt)
$
mat(
ket(l + 1/2","m);
ket(l - 1/2","m)
) = 1/sqrt(2l + 1)
mat(
sqrt(l + 1/2 + m), sqrt(l + 1/2 - m);
    - sqrt(l + 1/2 - m), sqrt(l + 1/2 + m)
  )
)
mat(
ket(l","m - 1/2) ket(1/2","1/2);
ket(l","m + 1/2) ket(1/2","-1/2)
)
$
$
mat(
ket(l","m - 1/2) ket(1/2","1/2);
ket(l","m + 1/2) ket(1/2","-1/2)
) = 1/sqrt(2l + 1)
mat(
    sqrt(l + 1/2 + m), - sqrt(l + 1/2 - m);
    sqrt(l + 1/2 - m), sqrt(l + 1/2 + m)
)
mat(
ket(l + 1/2","m);
ket(l - 1/2","m)
)
$
#set text(size: 8pt)
L-S耦合表象本征态不是$hat(S)_z$的,$hat(S)_z$与$hat(J)_z, hat(L)^2, hat(S)^2$对易
#set text(size: 5pt)
$
[hat(J)^2, hat(S)_z] &= [hat(L)^2 + hat(S)^2 + hat(L)_+ hat(S)_- + hat(L)_- hat(S)_+ + 2 hat(L)_z hat(S)_z, hat(S)_z]
= hat(L)_+ [ hat(S)_-, hat(S)_z] + hat(L)_- [ hat(S)_+, hat(S)_z]\
& =^([hat(J)_plus.minus , hat(J)_z] = minus.plus hbar hat(J)_plus.minus) hbar( hat(L)_+ hat(S)_- - hat(L)_- hat(S)_+)!= 0
$
$
macron(S)_z &= hbar/(2(2l+1)) integral mat(
sqrt(j + m) Y_(l,m-1/2);
sqrt(j - m) Y_(l,m+1/2)
)^dagger sigma_z mat(
sqrt(j + m) Y_(l,m-1/2);
sqrt(j - m) Y_(l,m+1/2)
) dd(tau)= (m hbar)/(2l+1)
$
#set text(size: 6pt)
非耦合表象下$hat(S)_z$的矩阵形式是$
hbar/2 mat(
1, 0;
0, -1
)$,耦合表象下是$
U hbar/2 mat(
1, 0;
0, -1
) U^dagger = hbar/(2(2l+1)) mat(
2m,-sqrt((2l+1)^2 - 4m^2);
-sqrt((2l+1)^2 - 4m^2), -2m
)
$
#set text(size: 7pt)
$hat(arrow(L)) dot hat(arrow(S))= 1/2(hat(J)^2 - hat(L)^2 - hat(S)^2)$的本征态*是L-S耦合表象的基底*。
#set text(size: 8pt)
- *碱金属原子光谱双线结构*【*旋-轨耦合*】
$
hat(H) = hat(p)^2/(2 mu) + V(r) + xi(r) arrow(L) dot arrow(S)
,
  xi(r) = 1/(2 m^2 c^2) 1/r dd(V)/dd(r)
$
#set text(size: 6pt)
$
Delta E &= braket(n","j","m_j","l","1/2 , xi(r) arrow(L) dot arrow(S) , n","j","m_j","l","1/2) \
&= braket(n l, xi(r), n l) braket(j","m_j","l","1/2 , arrow(L) dot arrow(S) , j","m_j","l","1/2)\
&= xi_(n l) hbar^2/2 (j(j+1) - l(l+1) - 3/4) = cases(
    l/2 hbar^2 xi_(n l) 当(j= l+1/2) , - (l + 1)/2 hbar^2 xi_(n l) 当(j= l-1/2)
)
$
#set text(size: 8pt)
- *反常Zeeman效应(偶)*【*旋-轨耦合*和*磁-轨耦合*】
#set text(size: 6pt)
$
hat(H) &= hat(p)^2/(2 mu) + V(r) + xi(r) arrow(L) dot arrow(S) + (e B)/(2 mu) (hat(L)_z + 2 hat(S)_z)\
&= hat(p)^2/(2 mu) + V(r) + xi(r)/2 (hat(J)^2 - hat(L)^2 - hat(S)^2) + (e B)/(2 mu) hat(J)_z + (e B)/(2 mu) hat(S)_z
$
#set text(size: 8pt)
无外磁场,旋-轨耦合表象$E_(n l j) , ket(n "," j "," m_j "," l "," 1/2)$
加入$(e B)/(2 mu) hat(J)_z$项,与原哈密顿算符对易,量子态不变:
$
E_(n l j m_j) = E_(n l j) + (e B)/(2 mu) m_j hbar [$(2j+1)$"重简并被完全消除"]
$
$(e B)/(2 mu) hat(S)_z$*微扰*,修正为
#set text(size: 6pt)
$
Delta E = braket(n "," j "," m_j "," l "," 1/2 , (e B)/(2 mu) hat(S)_z , n "," j "," m_j "," l "," 1/2)
= plus.minus (e B hbar)/(2 mu (2l + 1) ) m_j
$
#set text(size: 8pt)
于是最后修正后的能级为
$
E = cases(
E_(n l j) + (e B)/(2 mu) (1 + 1/(2l + 1)) hbar m_j 当(j = l + 1/2) , E_(n l j) + (e B)/(2 mu) (1 - 1/(2l + 1)) hbar m_j 当(j = l - 1/2)
)
$
- *两个电子自旋的合成*
#set text(size: 6pt)
$
mat(
ket(1","1);
ket(1","-1);
ket(1","0);
ket(0","0)
) = mat(
1,0 ,0 ,0 ;
0,1 ,0 ,0 ;
0 ,0 , 1/sqrt(2), 1/sqrt(2);
0 ,0 , 1/sqrt(2), -1/sqrt(2)
)
mat(
ket(arrow.t "," arrow.t);
ket(arrow.b "," arrow.b);
ket(arrow.t "," arrow.b);
ket(arrow.b "," arrow.t);
)
$
$hat(S)_+ = hat(S)_(1+) + hat(S)_(2+)$在旧表象$mat(
ket(arrow.t "," arrow.t),
ket(arrow.b "," arrow.b),
ket(arrow.t "," arrow.b),
ket(arrow.b "," arrow.t)
)^TT$下的矩阵和新表象$mat(
ket(1","1),
ket(1","-1),
ket(1","0),
ket(0","0)
)^TT
$下矩阵为
$
U mat(
0, 0, hbar, hbar;
0, 0, 0, 0;
0, hbar, 0, 0;
0, hbar, 0, 0
) U^dagger = hbar mat(
0, 0, sqrt(2), 0;
0, 0, 0, 0;
0, sqrt(2), 0, 0;
0, 0, 0, 0
)
$
#set text(size: 8pt)
*定态微扰论*
#set text(size: 6pt)
$
(hat(H)^((0)) + hat(H)^') (psi_n^((0)) + psi_n^((1)) + psi_n^((2)) + ...) = (E_n^((0)) + E_n^((1)) + E_n^((2)) + ...) (psi_n^((0)) + psi_n^((1)) + psi_n^((2)) + ...)\
(hat(H)^((0)) - E_n^((0)) + H' - E_n^((1)) - E_n^((2)) + ...) (psi_n^((0)) + psi_n^((1)) + psi_n^((2)) + ...) = 0
$
#set text(size: 8pt)
*零级方程*就是无微扰时$hat(H)^((0))$的本征方程
$
(hat(H)^((0)) - E_n^((0))) psi_n^((0)) = 0
$
*一级方程*$(hat(H)^((0)) - E_n^((0))) psi_n^((1)) = (E_n^((1)) - hat(H)^') psi_n^((0))$正交性
$
a_(n m )^((1)) (E_m^((0)) - E_n^((0)) )= - braket(psi_m^((0)) , hat(H)^' , psi_n^((0))) + E_n^((1)) delta_(m n)\
E_n^((1)) = braket(psi_n^((0)) , hat(H)^' , psi_n^((0))) = H_(n n)^',psi_n^((1)) = sum_m a_(n m)^((1)) psi_m^((0))
$
$
a_(n m)^((1)) = - braket(psi_m^((0)) , hat(H)^' , psi_n^((0))) / (E_m^((0)) - E_n^((0))),
psi_n^((1)) = sum_(m != n) H_(m n)^' / (E_n^((0)) - E_m^((0))) psi_m^((0))
$
$
  braket(psi_n^((0)), psi_n^((1))) + braket(psi_n^((1)), psi_n^((0))) = 0 => a^((1))_(n n) "为纯虚数" =>^"相位因子"_"不妨" a^((1))_(n n) = 0
$
*二级方程*
#set text(size: 7pt)
$(hat(H)^((0)) - E_n^((0))) psi_n^((2)) = - (hat(H)^' - E_n^((1))) psi_n^((1)) + E_n^((2)) psi_n^((0))$
#set text(size: 6pt)
$
  E_n^((2)) = sum_(m != n) H_(m n)^' / (E_n^((0)) - E_m^((0))) braket(psi_n^((0)) , hat(H)' ,psi_m^((0))) = sum_(m != n) (H_(n m)^' H_(m n)^')/ (E_n^((0)) - E_m^((0))) = sum_(m != n) abs(H_(m n)^')^2 / (E_n^((0)) - E_m^((0)))
$
#set text(size: 8pt)
$
"非简并情形"
cases(
psi_n = psi_n^((0)) + sum_(m != n) H_(m n)^' / (E_n^((0)) - E_m^((0))) psi_m^((0)) + ...\
E_n = E_n^((0)) + H_(n n)^' + sum_(m != n) abs(H_(m n)^')^2 / (E_n^((0)) - E_m^((0))) + ...
)
$
*在静电场中的一维谐振子*
#set text(size: 6.5pt)
$hat(H)^((0)) = hat(p)^2/(2 mu) + 1/2 mu omega^2 hat(x)^2,hat(H)^' = - q E hat(x)$
#set text(size: 7pt)
利用递推关系$hat(x) psi_n^((0)) = sqrt(hbar/(2 mu omega)) (sqrt(n+1) psi_(n+1)^((0)) + sqrt(n) psi_(n-1)^((0)))$
二级修正$E_n = (n + 1/2) hbar omega - (q^2 E^2) / (2 mu omega^2)$
#set text(size: 8pt)
$
"简并情形" (hat(H)^((0)) - E_n^((0))) psi_(n)^((1)) = (E_n^((1)) - hat(H)^') sum_i c_(n i)^((0)) psi_(n i)^((0))\
sum_i c_(n i)^((0)) ( H_(j i)^' - E_n^((1)) delta_(j i) ) = 0 => det(H' - E_n^((1))I) = 0
$
*Stark效应*【静电场】$hat(H)^' = e E z = e E r cos theta$
$H' = braket(psi_(2 l' m')^((0)), hat(H)^', psi_(2 l m)^((0))) $利用球谐递推
#set text(size: 5pt)
$
cos theta Y_(l,m) = sqrt(((l+1)^2 - m^2)/((2l + 1)(2l+3))) Y_(l+1,m) + sqrt((l^2 - m^2)/((2l + 1)(2l-1))) Y_(l-1,m)
$
$
det
mat(
- E_2^((1)) , -3e E a_0, 0 , 0 ;
-3e E a_0 , - E_2^((1)) , 0 ,0;
0 , 0 , - E_2^((1)) , 0 ;
0 , 0, 0 , - E_2^((1))
)
= 0
=>
E_2^((1)) = 0, plus.minus 3 e E a_0
$
#set text(size: 8pt)
*散射理论*
*散射波函数*$lim_(r->oo) psi(arrow(r)) = e^(i arrow(k) dot arrow(r)) + f(theta, phi) e^(i k r)/r$
#set text(size: 6pt)
$
arrow(J)_i = rho arrow(v) = abs(psi_i)^2 (hbar arrow(k)) /m = (hbar arrow(k)) /m
,
arrow(J)_s = (i hbar)/(2 m) (psi_s grad psi_s^* - psi_s^* grad psi_s) = (hbar k)/( r^2 m) |f(theta, phi)|^2 arrow(e)_r
$
#set text(size: 8pt)
*Scattering cross-section* $dd(sigma) = (J_s r^2)/J_i dd(Omega) = abs(f(theta, phi))^2dd(Omega) = sigma(theta, phi) dd(Omega)$
Probability per unit time that the incident wave is scattered: $J_i sigma$ ("scattering rate")
*Lippmann–Schwinger equation* $(hat(H)_0 + hat(V)) ket(psi) = E ket(psi)$
#set text(size: 6pt)
$
cases(
(E - hat(H)_0) ket(psi_0) = 0,
(E - hat(H)_0) ket(psi) = hat(V) ket(psi)
) =>
ket(psi) = ket(psi_0) + (E - hat(H)_0)^(-1) V ket(psi) = ket(psi_0) + hat(G) V ket(psi)
$
#set text(size: 8pt)
*Born approximation*
#set text(size: 5pt)
$ket(psi) approx (1 + hat(G) V) ket(psi_0)$$braket(arrow(r),psi)= braket(arrow(r),psi_0) + integral dd(""^3 r') braket(arrow(r),hat(G), arrow(r')) braket(arrow(r'),V, psi_0)$
$
braket(arrow(r),hat(G), arrow(r')) = 1/(2 pi)^3 integral dd(""^3 k') e^(i arrow(k') dot (arrow(r) - arrow(r'))) / (E - (hbar^2 k'^2) /( 2 m)) = - (m)/(2 pi hbar^2) 1/abs(arrow(r) - arrow(r') )e^(i k abs(arrow(r) - arrow(r'))) = - (m)/(2 pi hbar^2)e^(i k r)/r e^(- i arrow(k') dot arrow(r')), arrow(k') = k arrow(e)_r\
braket(arrow(r'),V(hat(arrow(r))), psi_0) = V(r') braket(arrow(r'), psi_0) = V(r') e^(i arrow(k) dot arrow(r')), E= (hbar^2 k^2)/(2 m)\
f(theta, phi) = - m/(2 pi hbar^2) integral dd(""^3 r) e^(- i arrow(k') dot arrow(r)) V(arrow(r)) e^(i arrow(k) dot arrow(r)),abs(arrow(k)) = abs(arrow(k'))
$
#set text(size: 8pt)
$"Central potential"
sigma(theta ) = (4 m^2)/(hbar^4 q^2) abs(integral_0^oo r V(r) sin(q r) dd(r))^2
$
*Rutherford scattering* $V(r) = (Z Z' e_s^2)/r e^(- r/a), e_s = e/sqrt(4 pi epsilon_0)$
#set text(size: 6pt)
$
sigma(theta) = (4 m^2 Z^2 Z'^2 e_s^4)/(hbar^4 (q^2 + 1/a^2)^2) approx^("neglecting " a) (Z^2 Z'^2 e_s^4)/(4 m^2 v^4 sin^4 (theta/2))
$
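The screened-Coulomb result follows from the central-potential Born integral $integral_0^oo r V(r) sin(q r) dd(r)$, which for $V = A e^(-r\/a)\/r$ evaluates to $A q\/(q^2 + 1\/a^2)$. A quadrature check of that identity (a sketch; the helper names and the sample values $q = 1.3$, $a = 0.8$ are illustrative assumptions):

```python
import math

def simpson(f, lo, hi, n=20000):
    # Composite Simpson rule; n must be even.
    h = (hi - lo) / n
    s = f(lo) + f(hi)
    s += 4 * sum(f(lo + i * h) for i in range(1, n, 2))
    s += 2 * sum(f(lo + i * h) for i in range(2, n, 2))
    return s * h / 3

def born_radial_integral(q, a, r_max=40.0):
    # integral_0^inf sin(q r) * exp(-r/a) dr, truncated at r_max;
    # the factor r in r*V(r) cancels the 1/r of the screened Coulomb potential.
    return simpson(lambda r: math.sin(q * r) * math.exp(-r / a), 0.0, r_max)

def born_closed_form(q, a):
    return q / (q * q + 1.0 / (a * a))
```

The truncated numerical integral and the closed form agree to many digits, since the integrand decays like $e^(-r\/a)$.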
#set text(size: 8pt)
*Angular momentum is conserved during scattering*
*Identical-particle scattering*#set text(size: 7pt)
$psi(arrow(r)) = e^(i arrow(k) dot arrow(r)) plus.minus e^(- i arrow(k) dot arrow(r)) + (f(theta) plus.minus f(pi - theta)) e^(i k r)/r$
#set text(size: 8pt)
#set text(size: 5pt)
#three-line-table[
| Spatial part exchange-symmetric $ket(00)$ | Spatial part exchange-antisymmetric $ket(↑↑)$ | Spins distinguishable $ket(↑↓)$ |
| ---| ---| --- |
| $abs(f(theta)+f(pi - theta))^2$ | $abs(f(theta)-f(pi - theta))^2$ | $abs(f(theta))^2 + abs(f(pi - theta))^2$ |
]
#set text(size: 7pt)
After exchanging the two particles, the sign of the spin wavefunction becomes $(-1)^(j_1 + j_2 - j)$
#set text(size: 8pt)
*Time-dependent perturbation theory*
*Probability amplitude*
#set text(size: 7pt)
$A_(f i) = braket(phi_f, e^(- i/hbar hat(H) t), phi_i) = e^(- i/hbar E_i t)braket(phi_f, phi_i) = e^(- i/hbar E_i t) delta_(f i)$
$
i hbar partial/(partial t) psi = (hat(H)_0 + V(arrow(x), t)) psi , V =H',psi = sum_n a_n (t) phi_n e^(- i/hbar E_n t)
$
$
i hbar dd(a_f)/dd(t) = sum_n a_n (t) integral dd(""^3 x) phi_f^* V phi_n e^(- i/hbar (E_n - E_f) t)
$
$
cases(
a_i (-T/2) = 1,
a_n (-T/2) = 0 "for" n != i
)
=>^"一级近似"
dd(a_f)/dd(t) = - i/hbar integral dd(""^3 x) phi_f^* V phi_i e^(- i/hbar (E_i - E_f) t)
$
$
i T_(f i) = a_f (T/2) = 1/(i hbar) integral_(-T/2)^(T/2) dd(t) e^(- i/hbar (E_i - E_f) t)integral dd(""^3 x) phi_f^* V phi_i "transition amplitude"
$
#set text(size: 8pt)
*Constant perturbation over a finite time*
#set text(size: 6pt)
$i T_(f i) = - 2 pi i (sin ((Delta E_(f i) T)/(2 hbar)))/(pi Delta E_(f i)) V_(f i) -> - 2 pi i delta(E_f - E_i) V_(f i)$
#set text(size: 8pt)
$
W_(f i) = abs(T_(f i))^2/T = (2 pi)/hbar abs(V_(f i))^2 delta(E_f - E_i) "transition rate"
$
Integrating over the density of final states gives *Fermi's golden rule* $W_(f i) = (2 pi)/hbar abs(V_(f i))^2 rho(E_i)$
*Box normalization*: free system $psi = L^(-3/2) e^(i/hbar arrow(p) dot arrow(r))$,
$p_(x y z) = (2 pi n_(x y z) hbar)/L$
#set text(size: 5pt)
$
rho(E) dd(E) = (4 pi p^2 dd(p))/((2 pi hbar)/L)^3 = (L/(2 pi hbar))^3 4 pi m sqrt(2 m E) dd(E),
rho(E, Omega) dd(E) dd(Omega) = (L/(2 pi hbar))^3 m sqrt(2 m E) dd(E) dd(Omega)
$
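The mode counting behind these densities can be sanity-checked by literally counting box modes (a sketch; the lattice radius is an arbitrary illustrative choice): each state corresponds to an integer lattice point $arrow(n)$ via $arrow(p) = 2 pi hbar arrow(n) \/ L$, so the number of modes with $abs(arrow(n)) <= R$ should approach $(4\/3) pi R^3$.

```python
import math

def modes_below(radius):
    # Count integer lattice points (n_x, n_y, n_z) with |n| <= radius;
    # each corresponds to one box-normalized momentum state.
    r, count = int(radius), 0
    for nx in range(-r, r + 1):
        for ny in range(-r, r + 1):
            for nz in range(-r, r + 1):
                if nx * nx + ny * ny + nz * nz <= radius * radius:
                    count += 1
    return count
```

For $R = 30$ the count already matches the continuum volume $(4\/3) pi R^3$ to within a couple of percent, the discrepancy being the surface term.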
#set text(size: 8pt)
*Scattering via time-dependent perturbation* $phi_i =L^(-3/2) e^(i/hbar arrow(p) dot arrow(r)) = L^(-3/2) e^(i arrow(k) dot arrow(r)) $
$
V_(f i) &= integral phi^*_f V(arrow(r)) phi_i dd(""^3 x)= L^(-3) integral e^(- i arrow(q) dot arrow(r)) V(arrow(r)) dd(""^3 x)
$
#set text(size: 6pt)
$
W = (L^3 m)/(4 pi^2 hbar^4) integral abs(V_(f i))^2 sqrt(2 m E_i) dd(Omega), W(θ, φ) dd(Omega) = (L^3 m)/(4 pi^2 hbar^4) abs(V_(f i))^2 p dd(Omega) = j_s r^2 dd(Omega)
$
$
j_("in") = rho v = (1/L^3) (p/m)
,
sigma(θ, φ) &= W(θ, φ)/j_("in")= m^2/(4 pi^2 hbar^4) abs(integral e^(- i arrow(q) dot arrow(r)) V(arrow(r)) dd(""^3 x))^2
$
#set text(size: 8pt)
*Periodic perturbation over a finite time* $hat(H) = hat(H)_0 + hat(H)'(t), hat(H)'(t) = hat(F) sin (omega t)$
$
H'_(m k) (t) = F_(m k) sin (omega t), F_(m k) = integral phi_m^* hat(F) phi_k dd(tau)
$
$
a_(k->m) &= F_(m k) 1/(i hbar) integral_0^t sin(omega t') e^(i omega_(m k) t') dd(t'),
omega_(m k) = (E_m - E_k)/hbar\
&= - F_(m k) 1/(2 i hbar)( (e^(i(omega+omega_(m k))t)-1)/(omega + omega_(m k)) + (e^(-i(omega-omega_(m k))t)-1)/(omega - omega_(m k)))
$
#set text(size: 7pt)
$
P_(k->m) (t) &= abs(F_(m k))^2/(4 hbar^2) abs( (e^(i(omega+omega_(m k))t)-1)/(omega + omega_(m k)) + (e^(-i(omega-omega_(m k))t)-1)/(omega - omega_(m k)))^2\
&-> abs(F_(m k))^2/(2 hbar^2)pi t (delta(omega + omega_(m k)) + delta(omega - omega_(m k)))
$
#set text(size: 5pt)
$E_m=E_k plus.minus hbar omega$ resonant absorption/emission; $1/omega_min$ characteristic time; $W_(k->m) = W_(m->k)$ *principle of detailed balance*
#set text(size: 8pt)
*Electric dipole transition* $hat(F) = e arrow(r) dot arrow(E)_0$,$F_(m k) = e integral phi_m^* arrow(r) dot arrow(E)_0 phi_k dd(""^3 x)$
#set text(size: 7pt)
$
W_(k->m) = (pi e^2 E_0^2)/(2 hbar^2) abs(x_(m k))^2 delta(omega - omega_(m k))
= (4 pi^2 e_s^2)/hbar^2 I abs(x_(m k))^2 delta(omega - omega_(m k))
$
#set text(size: 8pt)
$
W_(k->m) = (4 pi^2 e_s^2)/(3 hbar^2) I(omega_(m k)) abs(arrow(r)_(m k))^2,
Delta l = plus.minus 1, Delta m = 0, plus.minus 1
$
*Einstein spontaneous emission*
#set text(size: 4pt)
$E_k<E_m$; $B_(m k)$ stimulated-emission coefficient, $B_(k m)$ absorption coefficient, $A_(m k)$ spontaneous-emission coefficient
#set text(size: 6pt)
$
N_k B_(k m) I(omega_(m k)) = N_m (A_(m k) + B_(m k) I(omega_(m k)))
=>
I(omega_(m k)) = A_(m k)/(e^((hbar omega_(m k))/(k T)) B_(k m) - B_(m k))
$
$
I(nu_(m k)) = (2pi A_(m k))/(e^((h nu_(m k))/(k T)) B_(k m) - B_(m k))
,
I(nu) = (8 pi h nu^3)/(c^3 (e^((h nu)/(k T)) - 1))
=>
A_(m k) = (hbar omega_(m k)^3)/(pi^2 c^3) B_(k m)\
W_(k->m) = (4 pi^2 e_s^2)/(3 hbar^2) I(omega_(m k)) abs(arrow(r)_(m k))^2 = B_(k m) I(omega_(m k))
,
B_(k m) = B_(m k) = (4 pi^2 e_s^2)/(3 hbar^2) abs(arrow(r)_(m k))^2
$
#set text(size: 5pt)
$A_(m k)$ is the *spontaneous-emission rate* of a single atom; mean lifetime $tau_(m k)= 1/A_(m k)$, total mean lifetime $tau_m = 1/(sum_k A_(m k))$
#set text(size: 6pt)
- Infinite square well: $E_n = p^2/(2m) = (n^2 pi^2 hbar^2) / (8 m a^2), psi_n (x) = sqrt(1/a) sin(n pi (x+a)/(2a))$
- Harmonic oscillator: $E_n = (n + 1/2) hbar omega, psi_n (x) = N_n e^(-xi^2/2) H_n (xi) = N_n H_n (alpha x) e^(-alpha^2 x^2/2)$
- *Virial theorem*: $macron(T) = (1/2 macron(sum_i hat(x)_i partial/(partial x_i) hat(V)))$
- *Heisenberg*:$Delta hat(F) = hat(F) - macron(F) , macron(Delta hat(F)^2) =macron(hat(F)^2) - macron(F)^2, macron(Delta hat(F)^2) macron(Delta hat(G)^2) >= 1/4 |macron(hat(C))|^2$
- $hat(x)$:$x$,$i hbar dd("")/dd(p)$,$delta(x - x')$,$1/sqrt(2 pi hbar) e^(i/hbar p x)$;$hat(p)$:$- i hbar dd("")/dd(x)$,$p$,$1/sqrt(2 pi hbar) e^(-i/hbar p x)$ ,$delta(p - p')$
- Bose: integer spin, symmetric wavefunction, e.g. photons and mesons; Fermi: half-integer spin, antisymmetric wavefunction, e.g. electrons, protons, neutrons
$
psi_+ (arrow(R), arrow(r)) = 1/(2 pi)^3 e^(i arrow(K) dot arrow(R))1/sqrt(2) (e^(i arrow(k) dot arrow(r)) + e^(- i arrow(k) dot arrow(r))) = 1/(2 pi)^3 e^(i arrow(K) dot arrow(R)) sqrt(2) cos(arrow(k) dot arrow(r))
$
https://github.com/benjamineeckh/kul-typst-template | https://raw.githubusercontent.com/benjamineeckh/kul-typst-template/main/src/text-blobs.typ | typst | MIT License | #let declaration-of-originality = (
"en": [
    I declare that I am responsible for the content I am presenting in order to obtain the final degree, that I have not plagiarized, in whole or in part, work produced by others, and that I have cited the original sources in a manner consistent with current plagiarism and copyright regulations. I am also aware that in case of a false declaration I could incur legal penalties and my admission to the final exam could be denied.
],
"nl": [
Hier moet wat Nederlands komen
]
)
#let submission-text(degree, elective) = (
"en": [
    Thesis submitted as part of the degree Master of Engineering Science: #degree, with elective #elective
],
"nl" : [
Thesis voorgedragen tot het behalen van de graad van Master of Science in de ingenieurswetenschappen: #degree, hoofdoptie #elective
]
)
#let copyright = (
"en":[#sym.copyright Copyright <NAME>
#v(1em)
Without written permission of the supervisors and the author it is forbidden to
reproduce or adapt in any form or by any means any part of this publication. Requests
for obtaining the right to reproduce or utilize parts of this publication should be
addressed to the Departement Computerwetenschappen, Celestijnenlaan 200A bus
2402, B-3001 Heverlee, +32-16-327700 or by email
#text(font:"DejaVu Sans Mono", size: 0.9em)[#link("mailto:<EMAIL>")].
A written permission of the supervisors is also required to use the methods, products,
schematics and programmes described in this work for industrial or commercial use,
and for submitting this publication in scientific contests.],
"nl": [Zonder voorafgaande schriftelijke toestemming van zowel de promotoren als de auteur
is overnemen, kopiëren, gebruiken of realiseren van deze uitgave of gedeelten ervan
verboden. Voor aanvragen tot of informatie i.v.m. het overnemen en/of gebruik
en/of realisatie van gedeelten uit deze publicatie, wend u tot het Departement
Computerwetenschappen, Celestijnenlaan 200A bus 2402, B-3001 Heverlee, +32-16-
327700 of via e-mail #text(font:"DejaVu Sans Mono", size: 0.9em)[#link("mailto:<EMAIL>")].
Voorafgaande schriftelijke toestemming van de promotoren is eveneens vereist voor het
aanwenden van de in deze masterproef beschreven (originele) methoden, producten,
schakelingen en programma's voor industrieel of commercieel nut en voor de inzending van deze publicatie ter deelname aan wetenschappelijke prijzen of wedstrijden. ]
)
// Test that square doesn't overflow due to its aspect ratio.
#set page(width: 40pt, height: 25pt, margin: 5pt)
#square(width: 100%)
#square(width: 100%)[Hello there]
// Document settings: global options such as page margins can be set here
#let doc(
  // parameters passed in from the documentclass
info: (:),
  // other parameters
  fallback: true, // use font fallback when a glyph is missing, instead of showing tofu boxes
lang: "zh",
margin: (x: 3cm, y: 3.1cm),
it,
) = {
  // 1. Process the parameters
  // 1.1 If the title is a string, split it into a list at newline characters
if type(info.title) == str {
info.title = info.title.split("\n")
}
  // 2. Basic style settings
set text(fallback: fallback, hyphenate: auto)
set par(justify: true)
set page(margin: margin, header-ascent: 20%, footer-descent: 20%)
  // 3. PDF metadata
set document(
title: (("",) + info.title).sum(),
author: info.author,
)
it
} |
#let cancelup(base, new, dx: 2pt, dy: -1pt) = $cancel(base)^#move(dx:dx, dy:dy, $new$)$
#let canceldown(base, new, dx: 2pt, dy: 1pt) = $cancel(base)_#move(dx:dx, dy:dy, $new$)$
#let point(..coords) = $(#coords.pos().map(v => $#v$).join($thin|thin$))$
#let vect(name) = $accent(name, arrow)$
#let rem(r) = $med upright(sans("R"))#r$
#let punkt(..coords) = $(#coords.pos().map(v => $#v$).join($thin|thin$))$
#let rest(r) = $med upright(sans("R"))#r$
#import "math-fractions.typ" as frac
|
#set page(margin: 1cm)
#set page(numbering: "i")
#set text(lang: "pl")
#set heading(numbering: "1.")
//#set text(font: "New Computer Modern")
#set grid(column-gutter: 16pt)
#show link: underline
#grid(
columns: (1fr, 1fr),
align: horizon,
[
#set par(justify: true)
Te notatki są moją próbą skondensowania całego wykładu dr. Przybyło w kilka stron gęstej "ściągi".
Nie zastepują one wykładu, a uzupełniają go.
Algebra był moim ulubionym przedmiotem w pierwszym semestrze, więc do dzisiaj zależy mi na jakości tych notatek.
Jeśli znajdziesz jakiś błąd lub masz inne uwagi -- pisz!
Ostatnio zmodyfikowane: 2024-04-16. \
#link("https://dzwdz.net/agh-notes")
],
outline()
)
#let Dp = rotate(180deg, $D$)
= Liczby zespolone
$z_1 dot z_2 = (x_1 x_2 - y_1 y_2, space x_1 y_2 + x_2 y_1)$ #h(1fr)
$z^(-1) = (x/(x^2 + y^2), (-y)/(x^2+y^2))$ #h(1fr)
$overline(x + y i) = x - y i$ #h(1fr)
$arg z in [0; 2pi)$ \
*<NAME>'a*: $z^n = |z|^n (cos(n phi) + i sin (n phi)) = |z|^n e^(i n phi)$ #h(1fr)
$root(n, z) = root(n, |z|)(cos((phi + 2k pi)/n) + i sin((phi + 2k pi)/n))$ \
$z overline(z) = |z|^2$
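The root formula above can be checked mechanically (a sketch; `nth_roots` and the sample value $z = -8 + 6i$ are my own illustrative choices, using Python's standard `cmath`):

```python
import cmath
import math

def nth_roots(z, n):
    # All n-th roots: |z|^(1/n) * (cos((phi + 2k*pi)/n) + i*sin((phi + 2k*pi)/n)).
    r, phi = abs(z), cmath.phase(z)
    return [r ** (1.0 / n) * cmath.exp(1j * (phi + 2 * math.pi * k) / n)
            for k in range(n)]
```

Raising each of the $n$ returned roots to the $n$-th power recovers $z$ up to floating-point error.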
*Tw. Bézout*: $W(z_0) = 0 <==> exists P in CC[x] : W(z)=(z-z_0)P(z)$ \
*Tw. o pierw. wymiernych*:
$RR[x] in.rev a_n x^n + ... + a_0 = W(x), wide
W(p/q) = 0 and p,q "coprime" ==> (a_0|p) and (a_n|q)$
*Zasadnicze tw. algebry*: $forall W in CC[x], diff W >= 1 : exists z in CC : W(z) = 0$ \
*Tw. o pierw. zespolonych wielomianu rzecz.*: $W in RR[x], space W(z) = 0 <==> W(overline(z)) = 0$
= Relacje
$R = (X, "gr"R, Y)$ \
*naddziedzina* $= X supset D_f$ \
*zapas* $= Y supset Dp_R$
1. $R$ *zwrotne* $<==> forall x in X: x R x$
2. $R$ *symetryczne* $<==> forall x,y in X: x R y <=> y R x$
3. $R$ *przechodnie* $<==> forall x,y,z in X : (x R y and y R z => x R z)$
4. $R$ *antysymetryczne* $<==> forall x, y in X: (x R y and y R x => x = y)$
5. $R$ *asymetryczne* $<==> forall x, y in X : (x R y => not space y R x)$
6. $R$ *spójne* $<==> forall x, y in X : (x R y or y R x or x = y)$
$R$ *relacją równoważności* $<==> R$ zwrotne(1), symetryczne(2), przechodnie(3) \
*klasa równoważności*: $[x] := {y : x R y}$ \
*zbiór ilorazowy*: $X \/_R := {[x] : x in X}$
$R$ (słabym) *porządkiem* (częściowym) $<==> R$ zwrotne(1), antysymetryczne(4), przechodnie(3) \
$R$ *porządkiem totalnym* / *liniowym* $<==> R$ porządkiem i spójne(6) \
$overline(M)$ (jedynym) *el. największym* $<==> forall x : x lt.curly.eq overline(M)$ #h(1fr) (*el. najmniejszy* analogicznie)\
$M_max$ *el. maksymalnym* $<==> forall x : (M_max lt.curly.eq x => M_max = x)$ #h(1fr) (*el. minimalny* analogicznie)\
$M in X$ *majorantą* $A subset X$ $<==> forall x in A : x lt.eq.curly M$ #h(1fr) (*minoranta* analogicznie) \
*kres górny* $A$ w $X =$ element najmniejszy zbioru majorant $= sup A$ \
*kres dolny* $A$ w $X =$ element największy zbioru minorant $= inf A$
$C$ *łańcuchem* $<==> C subset X and (C, R|_(C times C)) "liniowo uporządkowane"$
#pagebreak()
= Struktury
Dla $(X, circle)$:
0. $circle$ *wewnętrzne* $<==> forall x, y in X : x circle y in X$
1. $circle$ *łączne* $<==> forall x, y, z : (x circle y) circle z = x circle (y circle z)$
2. $circle$ *przemienne* $<==> forall x, y : x circle y = y circle x$
3. $e in X$ *el. neutralnym* $<==> forall x : e circle x = x = x circle e$
4. $x'$ *el. odwrotnym* $x$ $<==> x' circle x = e = x circle x'$
5. $*$ *rozdzielne* względem $plus.circle$ $<==> forall x,y,z : (x plus.circle y) * z = x * z plus.circle y * z$ i w drugą stronę
$(X, plus.circle)$ *grupą* $<==>$ $plus.circle$ wewnętrzne(0), łączne(1), $exists e$ (3), $forall x exists x'$ (4) \
$(X, plus.circle)$ *grupą abelową* $<==>$ $(X, plus.circle)$ grupą $and$ $plus.circle$ przemienne(2) \
W szczególności skoro $exists e$, $X != emptyset$.
$(X, plus.circle, *)$ *pierścieniem* $<==>$ $(X, plus.circle)$ grupą abelową $and (X, *)$ wewnętrzne(0), łączne(1), rozdzielne względem $plus.circle$ \
$(X, plus.circle, *)$ *pierścieniem przemiennym* $<==> (X, *)$ przemienne \
*działanie addytywne*: $plus.circle$, el. neutralny to $bold(0)$ \
*działanie multiplykatywne*: $*$, el. neutralny, _jeśli istnieje_, to $bold(1)$.
Wtedy mamy *pierścień z jedynką*. \
$x, y$ *dzielnikami 0* $<==> x, y != bold(0) and x * y = bold(0)$ \
*pierścień całkowity*: z jedynką, bez dzielników zera \
$(K, plus.circle, *)$ *ciałem* $<==>$ pierścieniem całkowitym z el. odwrotnymi względem mnożenia \
$(K, plus.circle, *)$ *ciałem* $<==>$ $(K, plus.circle)$ grupą abelową $and$ $(K\\{0}, *)$ grupą $and$ $*$ rozdzielne względem $plus.circle$ \
$f$ *homomorfizmem grup*
$<==> forall a, b : f(a) + f(b) = f(a plus.circle b)$
#h(1fr) (nie musi być różnowartościowy!)\
$f$ *homomorfizmem pierścieni*
$<==> forall a, b : f(a) + f(b) = f(a plus.circle b) and f(a) * f(b) = f(a dot.circle b)$
$(V, K, +, *)$ *przestrzenią wektorową*:
- $(K, plus.circle, dot.circle)$ ciałem przemiennym, $(V, +)$ grupą abelową ($V != emptyset$)
- $forall u, v in V, a in K : a * (u + v) = a * u + a * v$
- $forall v in V, a, b in K : (a dot.circle b) * v = a * (b dot.circle v)
and (a plus.circle b) * v = a * v + b * v$
- $forall v in V: bold(1) * v = v$
$U subset V, space U != emptyset$ *podprzestrzenią* $<==> forall u, v in U, k in K: u+v in U and k * v in U$ \
Równoważna charakterystyka: $forall a, b in K, space u, v in U : a * u + b * v in U$ \
Uogólniona równoważna: $forall a_1, ..., a_n in K, v_1, ..., v_n in U : a_1 * v_1 + ... + a_n * v_n in U$ \
$dim U <= dim V, wide dim U = dim V => U=V$ \
$dim(U_1 + U_2) = dim U_1 + dim U_2 - dim U_1 sect dim U_2$ \
$U_1 + U_2 = U_1 plus.circle U_2 <==> U_1 sect U_2 = {overline(0)}$ \
$B$ *bazą* $V$ $<==>$ $B$ maks. zbiorem liniowo niezależnych wektorów $<==>$ $B$ min. zbiorem wektorów rozpinających $V$ \
*reper bazowy* $B = (e_1, e_2, ...)$: baza z ustalona kolejnością \
*współrzędne* wzgledem $B$: $[k_1, k_2, ...]_B = e_1k_1 + e_2k_2 + ...$
#pagebreak()
= Macierze
$A = [a_(i j)]_(m times n) =
mat(delim:"[",
a_11, a_12, ..., a_(1n);
a_21, a_22, ..., a_(2n);
dots.v, dots.v, dots.down, dots.v;
a_(m 1), a_(m 2), ..., a_(m n);
)_(m times n)$
$[a_(i j)]_(m times n) [b_(i j)]_(n times o) = [c_(i j)]_(m times o) wide
"gdzie" c_(i j) = sum_(k=1)^n a_(i k) b_(k j)$
$det A = sum_(sigma in S_n) epsilon(sigma) dot a_(1 sigma(1)) dot a_(2 sigma(2)) dot ... dot a_(n sigma(n)) wide$ gdzie $epsilon(sigma) =$ znak permutacji $sigma$ \
$det A = 0 <==$ wiersze/kolumny nie są liniowo niezależne \
pomnożenie jednego z wierszy przez $c$ mnoży wyznacznik przez $c$ $wide => det (c A) = c^n det A$ dla $A$ stopnia $n$ \
przestawienie wierszy/kolumn zmienia znak \
*Tw. Cauchy'ego*: $det(A B) = det A dot det B$
*Minor* stopnia $k$: wyznacznik dowolnej podmacierzy $k times k$ \
*Minor odpowiadający* $a_(i j)$ (w macierzy $A_(k times k)$) $= M_(i j) =$ wyznacznik po pozbyciu się wiersza/kolumny $a_(i j)$ \
*Dopełnienie algebraiczne* $a_(i j)$ $= A_(i j) = (-1)^(i+j) dot M_(i j)$ \
*Twierdzenie Laplace'a*: $forall j in [1,n] : det A = sum_(i=1)^n a_(i j) A_(i j)$
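Both the Laplace expansion and Cauchy's theorem are easy to exercise on small integer matrices (a sketch; the helper names and test matrices below are illustrative, not from the notes):

```python
def det(a):
    # Laplace expansion along the first row (fine for small matrices).
    n = len(a)
    if n == 1:
        return a[0][0]
    return sum((-1) ** j * a[0][j] * det([row[:j] + row[j + 1:] for row in a[1:]])
               for j in range(n))

def matmul(a, b):
    n, m, p = len(a), len(b), len(b[0])
    return [[sum(a[i][k] * b[k][j] for k in range(m)) for j in range(p)]
            for i in range(n)]
```

For example, with $A$ = [[1,2,3],[4,5,6],[7,8,10]] and $B$ = [[2,0,1],[1,3,0],[0,1,4]] one gets $det(A B) = det A dot det B = (-3) dot 25 = -75$.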
*R<NAME>* $= r(A) =$ maks. liczba liniowo niezależnych wierszy/kolumn $=$ maks. stopień minora niezerowego
*operacje elementarne*: zamiana, dodanie kombinacji liniowej, pomnożenie [wierszy/kolumn]
*macierz odwrotna*:
$A A^(-1) = A^(-1) A = I$,
$wide exists A^(-1) <==> det A != 0 <==> A$ *nieosobliwa* $<==> r(A_(n times n)) = n$ \
$det (A^(-1)) = (det A)^(-1), wide (A^T)^(-1) = (A^(-1))^T, wide (A B)^(-1) = B^(-1) A^(-1)$ \
$A^(-1) = (det A)^(-1) (A^D)^T, wide$ gdzie $A^D = [A_(i j)] =$ *macierz dopełnień algebraicznych*
= Układy równań
$A_(m times n) = $ macierz współczynników/główna #h(1fr)
$B_(m times 1) =$ macierz wyrazów wolnych #h(1fr)
$[A|B] =$ macierz uzupełniona \
$A dot X_(m times 1) = B$ \
*układ jednorodny* $<==> B = overline(0)$ \
*układ oznaczony* $<==>$ dokładnie jedno rozwiązanie \
*układ nieoznaczony* $<==>$ więcej rozwiązań \
*układ sprzeczny* $<==>$ bez rozwiązań \
*układ kwadratowy* $<==> m=n <==> A$ kwadratowe \
$wide$*układ Cramera* $<==> m=n and det A != 0 ==>$
dokładnie jedno rozwiązanie, $x_j = (det A)^(-1) dot D_x_j$ \
$wide det A = 0 and exists j : D_x_j != 0 <==>$ sprzeczny \
$wide det A = 0 and forall j : D_x_j = 0 <==>$ nieoznaczony $or$ sprzeczny \
*<NAME>*: układ ma rozwiązanie $<==> r(A) = r([A|B])$ \
$wide r(A) = r([A|B]) = n <==>$ układ oznaczony
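The Kronecker–Capelli criterion can be tried out directly with exact rational row reduction (a sketch; `rank` and the example systems are illustrative assumptions):

```python
from fractions import Fraction

def rank(rows):
    # Gauss-Jordan elimination over the rationals; rank = number of pivots.
    m = [[Fraction(x) for x in row] for row in rows]
    r = 0
    for col in range(len(m[0])):
        piv = next((i for i in range(r, len(m)) if m[i][col] != 0), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        m[r] = [x / m[r][col] for x in m[r]]
        for i in range(len(m)):
            if i != r and m[i][col] != 0:
                m[i] = [a - m[i][col] * b for a, b in zip(m[i], m[r])]
        r += 1
    return r
```

For $x + 2y = 5$, $2x + 4y = 10$ we get $r(A) = r([A|B]) = 1 < n$ (układ nieoznaczony); replacing $10$ by $11$ gives $r([A|B]) = 2 > r(A)$ (układ sprzeczny).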
#pagebreak()
= Geometria analityczna
$P = (x,y,z), space arrow(v) = [x,y,z]$
*iloczyn skalarny* (dot product): $u dot v := sum u_i v_i = ||u|| dot ||v|| cos angle.spheric(u, v) in RR$ \
$||v|| = sqrt(v dot v), wide forall u, v : |u dot v| <= ||u||dot||v||$
*kąt między wektorami*:
$angle.spheric(u, v) in [0; pi], space cos angle.spheric(u, v) = (u dot v)/(||u|| dot ||v||)$ \
*prostopadłe*: $angle.spheric(u,v) = pi/2 <=> u perp v <=> u dot v = 0$ \
*równoległe*: $angle.spheric(u,v) in {0, pi} <=> u || v <=> u, v "liniowo zależne"$
// na prawo od definicji iloczynu
#place(right,
$ u times v = mat(delim: "|",
arrow(i), arrow(j), arrow(k);
thin u_1, thin u_2, thin u_3;
v_1, v_2, v_3
) $
)
*iloczyn wektorowy*: $times : (arrow(E_3))^2 in.rev (u,v) -> u times v in arrow(E_3)$ \
$||u times v|| = ||u||dot||v|| sin angle.spheric(u, v) wide (therefore u || v => u times v = overline(0))$ \
$||u times v|| =$ *pole równoległoboku* rozpiętego przez $u, v$
// na prawo od definicji iloczynu
#place(right,
$ (u times v) dot w = mat(delim: "|",
u_1, u_2, u_3;
v_1, v_2, v_3;
w_1, w_2, w_3;
) $
)
*iloczyn mieszany*: $(u times v) dot w, wide u,v,w in arrow(E_3)$ \
*objętość równoległościanu* rozpiętego przez $u,v,w$ $= abs((u times v) dot w)$
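The equality between the 3×3 determinant and the mixed product, and the orthogonality of $u times v$ to both factors, are easy to spot-check (a sketch; the vector helpers and sample vectors are illustrative):

```python
def cross(u, v):
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def det3(u, v, w):
    # Determinant with rows u, v, w, expanded along the first row.
    return (u[0] * (v[1] * w[2] - v[2] * w[1])
            - u[1] * (v[0] * w[2] - v[2] * w[0])
            + u[2] * (v[0] * w[1] - v[1] * w[0]))
```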
*płaszczyzna* $pi$, $space P_0 = (x_0, y_0, z_0) in pi, space n = [A, B, C] perp pi, space n != overline(0)$ \
równanie *normalne*: $0 = arrow(P_0 P) dot n = A(x - x_0) + B(y - y_0) + C(z - z_0)$ \
równanie *ogólne*: $A x + B y + C z + D = 0, wide D = -A x_0 - B y_0 - C z_0$ \
równanie *parametryczne*: $(x,y,z)=P_0 + t u + s v wide t,s in RR wide u, v || pi, space not(u || v)$ \
równanie *odcinkowe*: $x/a + y/b + z/c = 1$, gdzie $a, b, c$ to odcinki odcinane przez płaszczyznę na osiach \
// odległość punktu $Q = (x, y, z)$: $abs(A x + B x + C x + D)/sqrt(A^2 + B^2 + C^2)$ #h(1fr) liczymy "błąd" równania ogólnego \
// odległość płaszczyzn $pi_1 || pi_2$: $d(pi_1, pi_2) = d(pi, P_2) = abs(A x_2 + B y_2 + C z_2 + D_1) / sqrt(A^2 + B^2 + C^2) = abs(D_1 - D_2) / sqrt(...)$
*prosta* $l$, $space P_0 = (x_0, y_0, z_0) in l, space v = [a,b,c] || l, space v != overline(0)$ \
równanie *parametryczne*: $P=P_0+ t v wide t in RR$ \
równanie *kierunkowe*: $(x-x_0)/a = (y-y_0)/b = (z-z_0)/c$ \
równanie *krawędziowe*: dwa równania ogólne $pi_1, pi_2$, gdzie $l in pi_1, pi_2 and not (pi_1 || pi_2)$
// odległość punktu $Q$: $(||arrow(P_0P) times v||)/(||v||)$
// odległość prostych $l_1, l_2$: $abs((v_1 times v_2) dot arrow(P_1 P_2))/(|| v_1 times v_2 ||)$
*Nierówność Cauchy'ego, Buniakowskiego, Schwarza*:
$|u dot v| <= ||u|| dot ||v||, wide |u dot v| = ||u|| dot ||v|| <==> u || v$
== Odległości
$
d(pi, (x,y,z)) &= abs(A x + B x + C x + D)/sqrt(A^2 + B^2 + C^2) wide "(liczymy \"błąd\" równania ogólnego)"\
pi_1 || pi_2 => space d(pi_1, pi_2) = d(pi, P_2) &= abs(A x_2 + B y_2 + C z_2 + D_1) / sqrt(A^2 + B^2 + C^2) = abs(D_1 - D_2) / sqrt(...) \
d(P, l) &= (||arrow(P_0P) times v||)/(||v||) \
d(l_1, l_2) &= abs((v_1 times v_2) dot arrow(P_1 P_2))/(|| v_1 times v_2 ||) \
$
#pagebreak()
= Odwzorowania liniowe
Niech $V, W$ będą przestrzeniami wektorowymi nad ciałem $K$. \
$f : V -> W$ *odzworowaniem liniowym* $<==> forall u,v : f(u+v)=f(u)+f(v) and forall v, alpha : f(alpha v) = alpha f(v)$
*jądro* $f = "Ker"f := {v in V : f(v) = 0} wide wide thin thin = f^(-1){overline(0)_W}$ \
*obraz* $f = "Im"f := {w in W: exists v :f(v) = w} wide = f(V)$ \
$dim V = dim "Ker"f + dim "Im"f$ \
*rząd odwzorowania* $= r(f) = dim "Im"f$
$f$ *monomorfizmem* $<==>$ różnowartościowe $<==>$ injektywne \
$f$ *epimorfizmem* $<==>$ surjektywne $<==> "Im"f = W$ \
$f$ *izomorfizmem* $<==>$ bijektywne \
$f$ *endomorfizmem* $<==> V = W$ \
$f$ *automorfizmem* $<==>$ endomorfizmem bijektywnym \
$f$ *formą liniową* $<==>$ W = K #h(1fr) np. $f:RR^3->RR, f(x,y,z)=2x+y$
$V, W$ *izomorficzne* $<==> V thin ~ thin W <==> exists$ izomorfizm $V -> W$ \
$f: V->W$ izomorfizmem $<==> f^(-1) : W->V$ izomorfizmem \
$V thin~thin W <==> space (dim V = dim W < infinity) space and space$ $(V,W$ są nad tym samym ciałem$)$
$f$ endomorfizmem $and M_f (B_1, B_2)$ nieosobliwe $<==>$ $f$ automorfizmem (izomorfizmem)
$P_(B -> B') = M_id_V (B', B)$
*Tw. o zmianie macierzy odwzorowania przy zmianie baz*: \
$wide M_f (B'_V, B'_W) = P_(B'_W -> B_W) dot M_f (B_V, B_W) dot P_(B_V -> B'_V)$
#linebreak()
Niech $A = M_f (B)$, w dowolnej bazie $B$, a $f$ będzie endomorfizmem. \
$lambda in K$ *wartością własną* $f$
$<==>$ $exists v in V \\ {overline(0)}: f(v) = lambda v$
$<==>$ $det(A - lambda I) = 0$ \
*wielomian charakterystyczny* $f$: $Delta(lambda) := det(A - lambda I) = (-lambda)^n + ... + det A$ \
$v = [a_1, ...]_B != overline(0)$ *wektorem własnym* odpowiadającym $lambda$
$<==>$ $f(v) = lambda v$
$<==>$ $(A - lambda I) mat(delim:"[",a_1;...;a_n) = overline(0)$ \
$V_lambda = {v in V : f(v) = lambda v}$ jest podprzestrzenią wektorową.
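A concrete sanity check of these definitions (a sketch; the matrix and vectors are my own illustrative choices): for $A$ = [[2,1],[1,2]] the characteristic polynomial is $lambda^2 - 4 lambda + 3$, with eigenvalues $1, 3$ and eigenvectors $(1,-1)$, $(1,1)$.

```python
def char_poly_2x2(a):
    # det(A - lambda*I) = lambda^2 - tr(A)*lambda + det(A), as coefficients.
    tr = a[0][0] + a[1][1]
    det = a[0][0] * a[1][1] - a[0][1] * a[1][0]
    return (1, -tr, det)

def apply(a, v):
    # Matrix-vector product A*v.
    return tuple(sum(a[i][j] * v[j] for j in range(len(v))) for i in range(len(a)))
```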
#pagebreak()
#set heading(bookmarked: false, outlined: false)
= Etc.
*injekcja*: funkcja różnowartościowa
*surjekcja*: funkcja "na" zbiór Y - przyjmuje wszystkie wartości ze zbioru Y
*izomorfizm*: homomorfizm bijektywny \
$h "monomorfizmem z" A "na" B "i bijekcją" => h "izomorfizmem"$
*automorfizm*: izomorfizm na siebie
*monomorfizm*: homomorfizm iniektywny
=== TODO
bardziej konsekwentne oznaczanie wektorów, potencjalnie funkcja jak na fiz
|
# The `Valkyrie` Package
<div align="center">Version 0.1.0</div>
This package implements type validation, and is targeted mainly at package and template developers. The desired outcome is that it becomes easier for the programmer to quickly put a package together without spending a long time on type safety, and also that usage of those packages by end-users becomes less painful through the generation of useful error messages.
## Example Usage
```typ
#import "@preview/valkyrie:0.1.0" as z
#let my-schema = z.dictionary(
should-be-string: z.string(),
complicated-tuple: z.tuple(
z.email(),
z.ip(),
z.either(
z.string(),
z.number()
)
)
)
#z.parse(
(
should-be-string: "This doesn't error",
complicated-tuple: (
"<EMAIL>",
"NOT AN IP", // Error: Schema validation failed on argument.complicated-tuple.1:
// String must be a valid IP address
1
)
),
my-schema
)
```
#import "../Metodi_defs.typ": *
Siano $K$ un campo e $g(x) in K[x]$ un polinomio fissato. Due
polinomi $f(x), h(x) in K[x]$ si dicono *congrui modulo* $g(x)$
se $g(x) | f(x) − h(x)$. Per indicare che $f(x)$ e $h(x)$ sono
congrui modulo $g(x)$ si usa la notazione $f(x) equiv h(x) mod
g(x)$.
#example[
Si considerino i polinomi a coefficienti sul campo $RR$ nell'incognita
$x$. Valendo:
$ 3x^(5) + 2x^(4) - x^(3) + 3x^(2) - 4x + 7 =
(3x^(3) + 8x^(2) + 12x + 19)(x^(2) - 2x + 1) + (22x - 12) $
É possibile scrivere:
$ 3x^(5) + 2x^(4) - x^(3) + 3x^(2) - 4x + 7
equiv 22x - 12 mod x^(2) - 2x + 1 $
]
#theorem[
Siano $K$ un campo e $g(x) in K[x]$ un polinomio fissato.
La congruenza modulo $g(x)$ é una relazione di equivalenza.
] <Polynomial-modulo-is-congruence>
#proof[
La congruenza modulo $g(x)$ é una relazione di equivalenza se é
riflessiva, simmetrica e transitiva:
- La congruenza modulo $g(x)$ é riflessiva se, per qualsiasi
$f(x) in K[x]$, si ha $f(x) equiv f(x) mod g(x)$.
$f(x) equiv f(x) mod g(x)$ equivale a $g(x) | f(x) - f(x)$, ovvero
esiste un polinomio $q(x) in K[x]$ tale per cui $f(x) - f(x) = q(x)
g(x)$. Si noti peró come $f(x) - f(x)$ sia il polinomio nullo, e
l'espressione $0 = q(x) g(x)$ é sempre verificata ponendo $q(x) = 0$;
- La congruenza modulo $g(x)$ é simmetrica se, presi due polinomi
$f(x), h(x) in K[x]$, se vale $f(x) equiv h(x) mod g(x)$ questo
implica $h(x) equiv f(x) mod g(x)$.
Se vale $f(x) equiv h(x) mod g(x)$, allora esiste un polinomio
$q(x) in K[x]$ tale per cui $f(x) - h(x) = q(x) g(x)$. Moltiplicando
per $-1$ ambo i membri si ha $h(x) - f(x) = -q(x) g(x)$; essendo
$-q(x)$ certamente un polinomio appartenente a $K[x]$, é possibile
scrivere $g(x) | h(x) − f(x)$, ovvero $h(x) equiv f(x) mod g(x)$.
- La congruenza modulo $g(x)$ é transitiva se, presi tre polinomi
$f(x), h(x), t(x) in K[x]$, se vale $f(x) equiv h(x) mod g(x)$ e
$h(x) equiv t(x) mod g(x)$ questo implica $f(x) equiv t(x) mod g(x)$.
Se valgono $f(x) equiv h(x) mod g(x)$ e $h(x) equiv t(x) mod g(x)$
allora esistono $q_(1)(x), q_(2)(x) in K[x]$ tali per cui $f(x) -
h(x) = q_(1)(x) g(x)$ e $h(x) - t(x) = q_(2)(x) g(x)$. Sommando la
seconda nella prima, si ha:
$ f(x) - cancel(h(x)) + cancel(h(x)) - t(x) =
q_(1)(x) g(x) + q_(2)(x) g(x) => f(x) - t(x) =
(q_(1)(x) + q_(2)(x)) g(x) $
Essendo $q_(1)(x) + q_(2)(x) in K[x]$, é possibile scrivere
$g(x) | f(x) - t(x)$, ovvero $f(x) equiv t(x) mod g(x)$.
]
Essendo, per il @Polynomial-modulo-is-congruence, la congruenza modulo
$g(x)$ una relazione di equivalenza, é possibile definire su questa delle
classi di equivalenza ed un insieme quoziente. La classe di equivalenza
per la congruenza modulo $g(x)$ avente rappresentante $f(x) in K[x]$ si
indica con $[f(x)]_(g(x))$, mentre l'insieme quoziente con $K[x] slash
g(x)$. In particolare:
$ [f(x)]_(g(x)) &= {h(x) in K[x] : h(x) equiv f(x) mod g(x)} \
&= {h(x) in K[x] : h(x) − f(x) = g(x) q(x) "per un certo" q(x) in K[x]} \
&= {h(x) in K[x] : h(x) = f(x) + g(x) q(x) "per un certo" q(x) in K[x]} \
&= {f(x) + g(x) q(x) : q(x) in K[x]} $
E naturalmente $K[x] slash g(x) = {[f(x)]_(g(x)) : f(x) in K[x]}$.
Sull'insieme quoziente $K[x] slash g(x)$ é possibile definire delle operazioni
di somma e di prodotto come segue:
#grid(
columns: (0.5fr, 0.5fr),
[$ [f(x)]_(g(x)) + [h(x)]_(g(x)) = [f(x) + h(x)]_(g(x)) $],
[$ [f(x)]_(g(x)) dot [h(x)]_(g(x)) = [f(x) dot h(x)]_(g(x)) $]
)
#lemma[
La struttura algebrica $(K[x] slash g(x), +, dot)$, dove le due
operazioni sono la somma ed il prodotto sopra definite, é un anello
commutativo con unitá $[1]_(g(x))$.
] <Remainders-polynomials-is-ring>
// #proof[
// Dimostrabile, da aggiungere
// ]
#theorem[
Siano $K$ un campo e $g(x) in K[x]$ un polinomio fissato. Se
vale $diff(g(x)) > 0$, ogni elemento $f(x) in K[x] slash g(x)$
puó essere scritto in maniera univoca nella forma $[r(x)]_(g(x))$,
dove $r(x)$ é il resto della divisione fra $f(x)$ e $g(x)$.
] <Polynomial-class-is-unique>
#proof[
Data una generica classe di equivalenza $[f(x)]_(g(x)) in K[x]
slash g(x)$, dividendo $f(x)$ per $g(x)$ si ottiene $f(x) =
q(x) g(x) + r(x)$, con $diff(r(x)) < diff(g(x))$. Spostando
$f(x) - r(x)$ a primo membro si ottiene $f(x) - r(x) = q(x) g(x)$,
ovvero $g(x) | f(x) - r(x)$ cioé $f(x) equiv r(x) mod g(x)$. Questo
significa che $f(x)$ e $r(x)$ appartengono alla medesima classe di
equivalenza per la congruenza modulo $g(x)$, pertanto $[f(x)]_(g(x))
= [r(x)]_(g(x))$. Tale scrittura é univoca perché non soltanto $r(x)$
é univoco per definizione, ma é anche non ulteriormente divisibile
per $g(x)$.
]
La forma presentata in @Polynomial-class-is-unique per le classi di
equivalenza puó essere considerata la "forma standard" per rappresentarle.
#example[
Si considerino i polinomi
#grid(
columns: (0.5fr, 0.5fr),
[$ p(x) = 3x^(5) + 2x^(4) - x^(3) + 3x^(2) - 4x + 7 $],
[$ q(x) = 7x^(3) + 9x^(2) - 17x + 11 $]
)
Per entrambi la divisione per $g(x) = x^(2) - 2x + 1$ da resto
$r(x) = 22x - 12$. Per il @Polynomial-class-is-unique, é allora
possibile scrivere:
$ [3x^(5) + 2x^(4) - x^(3) + 3x^(2) - 4x + 7]_(x^(2) - 2x + 1) =
[7x^(3) + 9x^(2) - 17x + 11]_(x^(2) - 2x + 1) =
[22x - 12]_(x^(2) - 2x + 1) $
]
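The two remainders in the example can be recomputed with a small polynomial long-division routine over the rationals (a sketch; `poly_divmod` and the coefficient encoding, highest degree first, are my own conventions, not part of the notes):

```python
from fractions import Fraction

def poly_divmod(f, g):
    # Long division of polynomials given as coefficient lists,
    # highest-degree coefficient first; returns (quotient, remainder).
    f = [Fraction(c) for c in f]
    g = [Fraction(c) for c in g]
    q, r = [], f[:]
    while len(r) >= len(g):
        coef = r[0] / g[0]
        q.append(coef)
        pad = g + [Fraction(0)] * (len(r) - len(g))
        r = [a - coef * b for a, b in zip(r, pad)][1:]
    return q, r
```

Dividing both $p(x)$ and $q(x)$ from the example by $g(x) = x^2 - 2x + 1$ yields the same remainder $22x - 12$, as the example claims.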
#corollary[
Sia $g(x) in ZZ_(p)[x]$ un polinomio fissato, con $p$ numero primo.
$ZZ_(p)[x] slash g(x)$ ha esattamente $p^(diff(g(x)))$ elementi
distinti, con $diff(g(x)) > 0$.
]
#proof[
Per il @Polynomial-class-is-unique, é possibile scrivere
in maniera univoca ogni elemento di $K[x] slash g(x)$ come
$[r(x)]_(g(x))$, con $diff(r(x)) < diff(g(x))$. Le possibili
scelte per $r(x) = [a_(n - 1)]_(p) x^(n - 1) + ... + [a_(1)]_(p)
x + [a_(0)]_(p)$ dipendono esclusivamente dai valori di $[a_(n - 1)]_(p),
..., [a_(1)]_(p), [a_(0)]_(p)$, dato che le incognite sono le medesime
	per ogni $r(x)$. Per il @Distinct-residue-classes, per ciascun $[a_(i)]_(p)$
	esistono $p$ scelte distinte, pertanto complessivamente esistono $p^(diff(g(x)))$
modi per scegliere i coefficienti di $r(x)$.
]
#example[
    Let $g(x) = [1]_(3) x^(2) - [2]_(3) x + [1]_(3)$ be a polynomial
    with coefficients in $ZZ_(3)$. Since $diff(g(x)) = 2$ and $p = 3$
    (with $3$ prime), the quotient set $ZZ_(3)[x] slash
    [1]_(3) x^(2) - [2]_(3) x + [1]_(3)$ contains exactly
    $3^(2) = 9$ distinct elements. Since they are few, they can be
    enumerated explicitly:
#set math.mat(delim:none, column-gap: 1.25em)
$ mat([0]_(3), [1]_(3), [2]_(3), -[1]_(3) x, -[2]_(3) x,
-[1]_(3) x + [1]_(3), -[1]_(3) x + [2]_(3),
-[2]_(3) x + [1]_(3), -[2]_(3) x + [2]_(3)) $
]
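The count $3^(2) = 9$ can also be confirmed by brute force. A minimal illustrative Python sketch (an addition, not part of the notes) enumerates the canonical representatives of degree less than $2$ over $ZZ_(3)$:

```python
from itertools import product

p, deg_g = 3, 2   # Z_3, with g(x) of degree 2
# each class has exactly one representative a_1 x + a_0 with a_i in {0, 1, 2}
residues = list(product(range(p), repeat=deg_g))
print(len(residues))   # 9
```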
#theorem[
    Let $K$ be a field and $g(x) in K[x]$ a polynomial such that
    $diff(g(x)) > 0$. The ring $(K[x] slash g(x), +, dot)$ is a
    field if and only if $g(x)$ is irreducible in $K[x]$.
] <Polynomial-ring-if-prime>
#proof[
    + Let $g(x)$ be an irreducible polynomial in $K[x]$. By the definition
      of a field, the ring $(K[x] slash g(x), +, dot)$ is a field if it is
      commutative and unitary and if $(K[x] slash g(x) - {[0]_(g(x))}, dot)$
      is a group. By @Remainders-polynomials-is-ring it is already known that
      $(K[x] slash g(x), +, dot)$ is commutative and unitary.
      For $(K[x] slash g(x) - {[0]_(g(x))}, dot)$ to be a group, the
      operation $dot$ must be associative, must admit an identity element,
      and must admit an inverse for every nonzero element of
      $K[x] slash g(x)$. Associativity of $dot$ follows directly from the
      way the operation was defined, while the existence of the identity
      element is an immediate consequence of the ring being unitary. It
      therefore only remains to prove that every nonzero element of
      $K[x] slash g(x)$ admits an inverse.

      Let $[f(x)]_(g(x))$ be a nonzero equivalence class. Without loss of
      generality, it can be assumed that $g(x)$ does not divide $f(x)$;
      otherwise it would suffice to apply @Polynomial-class-is-unique and
      choose as representative of the class $[f(x)]_(g(x))$ the remainder
      of the division of $f(x)$ by $g(x)$. Consider $"gcd"(f(x), g(x))$:
      since $g(x)$ is irreducible by hypothesis, $"gcd"(f(x), g(x))$ must
      equal $1$ or a nonzero scalar multiple of $g(x)$.

      Suppose that $"gcd"(f(x), g(x)) = k g(x)$ with $k in K - {0_(dot)}$.
      Then $g(x) | f(x)$: indeed, by definition $"gcd"(f(x), g(x)) | f(x)$,
      so if $"gcd"(f(x), g(x)) = k g(x)$ then $k g(x) | f(x)$, that is,
      $g(x) | f(x)$. This is impossible, however, because it was assumed
      that $g(x)$ does not divide $f(x)$. It must therefore hold that
      $"gcd"(f(x), g(x)) = 1$. By @MCD-polynomials-exists, there exist
      $t(x), s(x) in K[x]$ such that $s(x) f(x) + t(x) g(x) = 1$,
      that is, $s(x) f(x) = 1 - t(x) g(x)$. Observe that:
$ [f(x)]_(g(x)) dot [s(x)]_(g(x)) &= [f(x) dot s(x)]_(g(x)) =
[1 - t(x) g(x)]_(g(x)) = [1]_(g(x)) - [t(x)]_(g(x)) dot
[g(x)]_(g(x)) = \ &= [1]_(g(x)) - ([t(x)]_(g(x)) dot [0]_(g(x))) =
[1]_(g(x)) - [0]_(g(x)) = [1]_(g(x)) $
      Since $[f(x)]_(g(x)) dot [s(x)]_(g(x)) = [1]_(g(x))$, the class
      $[s(x)]_(g(x))$ is the inverse of $[f(x)]_(g(x))$ with respect to
      $dot$; hence every nonzero equivalence class in $K[x] slash g(x)$
      admits an inverse.
    + Let $(K[x] slash g(x), +, dot)$ be a field. Suppose, by way of
      contradiction, that $g(x)$ is not irreducible: then it admits a
      factorization into polynomials (not necessarily prime):
      $ g(x) = a(x) b(x) space "with" space 0 < diff(a(x)) < diff(g(x))
      " and " space 0 < diff(b(x)) < diff(g(x)) $
      The equivalence classes $[a(x)]_(g(x))$ and $[b(x)]_(g(x))$ are
      both nonzero, since $g(x)$ divides neither $a(x)$ nor $b(x)$,
      their degrees being strictly smaller than $diff(g(x))$. Note,
      however, that:
$ [a(x)]_(g(x)) dot [b(x)]_(g(x)) = [a(x) dot b(x)]_(g(x)) =
[g(x)]_(g(x)) = [0]_(g(x)) $
      This is impossible, because in a field there cannot be two nonzero
      elements whose product is the zero element. One must therefore
      conclude that $g(x)$ is irreducible.
]
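The first half of the proof is constructive: the inverse of a class comes from the Bézout identity. The following Python sketch (illustrative only; polynomials are coefficient lists over $ZZ_(p)$, lowest degree first, with $p$ assumed prime) carries out that construction for $f(x) = x + 1$ modulo the irreducible $g(x) = x^(2) + 1$ over $ZZ_(3)$:

```python
p = 3  # the coefficient field Z_p; p must be prime

def trim(a):
    # drop trailing zero coefficients (highest degrees), keep at least one
    while len(a) > 1 and a[-1] == 0:
        a.pop()
    return a

def psub(a, b):
    # difference of polynomials over Z_p, lowest degree first
    n = max(len(a), len(b))
    a = a + [0] * (n - len(a)); b = b + [0] * (n - len(b))
    return trim([(x - y) % p for x, y in zip(a, b)])

def pmul(a, b):
    # product of polynomials over Z_p
    out = [0] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            out[i + j] = (out[i + j] + x * y) % p
    return trim(out)

def pdivmod(a, b):
    # long division over Z_p; pow(c, -1, p) inverts scalars, p being prime
    a = [c % p for c in a]
    q = [0] * max(len(a) - len(b) + 1, 1)
    inv_lead = pow(b[-1], -1, p)
    while len(a) >= len(b) and any(a):
        d = len(a) - len(b)
        c = (a[-1] * inv_lead) % p
        q[d] = c
        for i, bc in enumerate(b):
            a[i + d] = (a[i + d] - c * bc) % p
        trim(a)
    return q, a

def inverse_mod(f, g):
    # extended Euclid, tracking s with r = s*f + t*g at every step
    r0, r1, s0, s1 = f, g, [1], [0]
    while any(r1):
        q, r = pdivmod(r0, r1)
        r0, r1, s0, s1 = r1, r, s1, psub(s0, pmul(q, s1))
    cinv = pow(r0[0], -1, p)      # the gcd is a nonzero constant
    return pdivmod([(c * cinv) % p for c in s0], g)[1]

f, g = [1, 1], [1, 0, 1]          # f = x + 1, g = x^2 + 1 (irreducible mod 3)
inv = inverse_mod(f, g)
print(inv)                        # [2, 1], i.e. x + 2
```

The printed inverse `[2, 1]` encodes $x + 2$; indeed $(x + 1)(x + 2) = x^(2) + 3x + 2 equiv 1$ modulo $x^(2) + 1$ over $ZZ_(3)$, matching the Bézout construction in the proof.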
#example[
    Let $g(x) = x^(2) + 1$ be a polynomial in $RR[x]$. Since $g(x)$ is
    irreducible in $RR[x]$, by @Polynomial-ring-if-prime the ring
    $RR[x] slash x^(2) + 1$ is a field.
]
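In fact $RR[x] slash x^(2) + 1$ is isomorphic to $CC$: the class of $x$ behaves as the imaginary unit. A quick illustrative Python check, multiplying $a + b x$ by $c + d x$ modulo $x^(2) + 1$ and comparing with complex multiplication:

```python
def mul_mod(a, b, c, d):
    # (a + b x)(c + d x) = ac + (ad + bc) x + bd x^2, and x^2 = -1 mod (x^2 + 1)
    return (a * c - b * d, a * d + b * c)

print(mul_mod(1, 2, 3, 4))    # (-5, 10)
print((1 + 2j) * (3 + 4j))    # (-5+10j): the same product, computed in CC
```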
|
https://github.com/Myriad-Dreamin/typst.ts | https://raw.githubusercontent.com/Myriad-Dreamin/typst.ts/main/fuzzers/corpora/layout/out-of-flow-in-block_02.typ | typst | Apache License 2.0 |
#import "/contrib/templates/std-tests/preset.typ": *
#show: test-page
// Placed item in the first region.
#set page(height: 5cm, margin: 1cm)
Placed item in the first region.
#block(breakable: true, above: 1cm, stroke: 1pt, inset: 0.5cm)[
#place(dx: -0.5cm, dy: -0.75cm, box(width: 200%)[OOF])
#rect(height: 2cm, fill: gray)
]
|
https://github.com/Jollywatt/typst-fletcher | https://raw.githubusercontent.com/Jollywatt/typst-fletcher/master/docs/gallery/ml-architecture.typ | typst | MIT License | #import "@preview/fletcher:0.5.1" as fletcher: diagram, node, edge
#import fletcher.shapes: house, hexagon
#set page(width: auto, height: auto, margin: 5mm, fill: white)
#set text(font: "Fira Sans")
#let blob(pos, label, tint: white, ..args) = node(
pos, align(center, label),
width: 26mm,
fill: tint.lighten(60%),
stroke: 1pt + tint.darken(20%),
corner-radius: 5pt,
..args,
)
#diagram(
spacing: 8pt,
cell-size: (8mm, 10mm),
edge-stroke: 1pt,
edge-corner-radius: 5pt,
mark-scale: 70%,
blob((0,1), [Add & Norm], tint: yellow, shape: hexagon),
edge(),
blob((0,2), [Multi-Head\ Attention], tint: orange),
blob((0,4), [Input], shape: house.with(angle: 30deg),
width: auto, tint: red),
for x in (-.3, -.1, +.1, +.3) {
edge((0,2.8), (x,2.8), (x,2), "-|>")
},
edge((0,2.8), (0,4)),
edge((0,3), "l,uu,r", "--|>"),
edge((0,1), (0, 0.35), "r", (1,3), "r,u", "-|>"),
edge((1,2), "d,rr,uu,l", "--|>"),
blob((2,0), [Softmax], tint: green),
edge("<|-"),
blob((2,1), [Add & Norm], tint: yellow, shape: hexagon),
edge(),
blob((2,2), [Feed\ Forward], tint: blue),
)
|
https://github.com/liuguangxi/erdos | https://raw.githubusercontent.com/liuguangxi/erdos/master/Problems/typstdoc/figures/p70.typ | typst | #import "@preview/cetz:0.2.1"
#cetz.canvas(length: 1.5pt, {
import cetz.draw: *
let r1 = 64
let r2 = 32
let r3 = 16
set-style(fill: luma(200), stroke: 1.5pt)
circle((0, 0), radius: r1)
circle((95.99992087, -0.12325953), radius: r2)
circle((40*calc.sqrt(3), -40), radius: r3)
content((0, 0), text(size: 18pt)[*R#sub[1]*])
content((95.99992087, -0.12325953), text(size: 18pt)[*R#sub[2]*])
content((40*calc.sqrt(3), -40), text(size: 18pt)[*R#sub[3]*])
let k1 = 1/r1
let k2 = 1/r2
let k3 = 1/r3
let k = k1 + k2 + k3 + 2*calc.sqrt(k1*k2 + k2*k3 + k3*k1)
let r = 1/k
set-style(fill: yellow)
circle((65.44740555, -19.94442188), radius: r)
})
|
|
https://github.com/ohmycloud/computer-science-notes | https://raw.githubusercontent.com/ohmycloud/computer-science-notes/main/Misc/songs.typ | typst | #import "@preview/cetz:0.1.1"
#import "@preview/splash:0.3.0": *
#outline(title: "目录")
They say,
if you listen to a song a thousand times over,
you can never again forget the scenes and people tied to it.
#pagebreak()
#set page(columns: 2, margin: .5cm)
#let code(body) = {
set text(weight: "regular")
show: box.with(
fill: luma(249),
inset: 0.4em,
radius: 3pt,
baseline: 0.4em,
width: 60%,
)
[#body]
}
#show "你": it => {
text(red)[#it]
}
#show heading.where(level: 1): it => {
align(left)[#code(text(tol-light.light-blue)[ *#it*])]
v(0.25em)
}
= 谁(live 版)
在你眼中我是谁
你想我代替谁
彼此交换悲喜
爱的多的人总先掉眼泪
= 彗星
就像日落燃烧最后的执着
再微弱的爱也感到炽热
我就像彗星千里外划破夜空
你却爱着升空的烟火
= 海堤上
不温暖的心有两颗
它们都曾遇到太多
风浪挨不过
不如握紧我 虽然我的手和你一样 冰凉
可两个灵魂紧紧相握终会 温热
= 在日月的璀璨间闪耀
残存的热血挣扎于
疲惫却不肯睡去的夜
头也不回的世界啊
我们要拥有这一切呀
= 不寻常夏日
我想再看一眼盛夏烟火
看回忆从晚空浪漫坠落
蝉鸣声的时间都被定格
剩鼓噪的心跳还在唱着
夏天是道苦茶格外青涩
树叶渗透小雨解暑消渴
也行初恋适合暮色
也许离别映衬不舍
你的夏天还好吗
天边橙色 深不可测
#colbreak(weak: true)
= 你
我知道
只有不断出发
才能够紧随
你纵情的步伐
= 你在终点等我
只想你找回让你像你的热情
然后就拖着自己到山城隐居
你却在终点等我住进你心里
没有你的地方都是他乡
没有你的旅行都是流浪
= 晴空日记
莫不是你忽然回了头
我怎么会心里乱了踪
莫不是那春风入了梦
风吹过起花红
人群中你回眸
在盛开的时候
年少的人啊
眼里是晴空
= 天使亲吻过的声音
呼唤着 你姓名 我就有远方
只要能 凝望着
你笑容 就拥有向往
只要我 失去梦 失去自我
流浪无尽黑夜
被天使 亲吻过的声音 引领我飞翔
只要你 呼喊着 我姓名 我就有力量
= 是你
海平面的烟火不断闪烁
照亮你的每一个角落
向山矗立远方 我越是前往
#place(top+left, [
#cetz.canvas(length: 2.5cm, {
import cetz.draw: *
set-style(
line: (
fill: red,
stroke: (dash: "dashed")),
)
line((0, 0), (0, 5.2), close: false, stroke: (paint: red))
line((0, 0), (0, -5.8), close: false, stroke: (paint: blue))
})], dx: -45pt, dy: -8pt)
|
|
https://github.com/MrToWy/Bachelorarbeit | https://raw.githubusercontent.com/MrToWy/Bachelorarbeit/master/Code/endpointAfter.typ | typst | ```ts
async update(moduleDto: ModuleDto) {
const {
id,
responsibleId,
responsible,
requirementsHardId,
requirementsSoftId,
requirementsHard: requirementsHardNew,
requirementsSoft: requirementsSoftNew,
degreeProgramId,
groupId,
group,
translations,
subModules,
...moduleData
} = moduleDto;
await this.prisma.$transaction(async (prisma) => {
await prisma.module.update({
where: { id },
data: moduleData
});
await this.updateRequirements(prisma, moduleDto);
await this.upsertModuleTranslations(prisma, moduleDto);
await this.connectResponsible(prisma, moduleDto);
await this.connectSubModules(prisma, moduleDto);
await this.connectGroup(prisma, moduleDto);
});
}
async connectResponsible(prisma: any, moduleDto: ModuleDto) {
const { responsibleId, id } = moduleDto;
if (!responsibleId) return;
await prisma.module.update({
where: { id },
data: {
responsible: responsibleId ? {
connect: { id: responsibleId }
} : undefined
}
});
}
``` |
|
https://github.com/31core/prescription | https://raw.githubusercontent.com/31core/prescription/main/prescription.typ | typst | #set page("a5", margin: (x: 5%, y: 5%))
#import "@preview/tiaoma:0.2.1"
#import "config.typ": *
#include "config.typ"
#place(top + left)[
#tiaoma.dbar-exp("[01]98898765432106", width: 25%)
]
#place(
top + right,
text(size: 11pt)[
#rect(stroke: 0.5pt)[普通]
]
)
#set text(top-edge: 5pt)
#set text(font: "Noto Sans CJK SC")
#align(center)[#text(size: 15pt)[#title]]
#align(center)[#text(size: 18pt)[处 方 笺]]
#set text(font: "SimSun", size: 10pt)
#grid(
columns: 4,
gutter: 1fr,
[姓 名:#underline[#name]],
[性 别:#underline[#gender 性]],
[年 龄:#underline[#age 岁]],
[科 室:#underline[#department 科]],
)
#grid(
columns: 3,
gutter: 1fr,
[门诊号:#underline[98898765432106]],
[费 别:#underline[自 费]],
[日 期:#underline[#datetime.today().year()]年#underline[#datetime.today().month()]月#underline[#datetime.today().day()]日],
)
临床诊断及证型:#underline[#diagnosis]
#line(stroke: 1pt, length: 100%)
#text(size: 20pt)[*R.*]
#context for i in medicines.get() {
[#i
]
}
#align(bottom)[
#stack(
spacing: 1em,
line(stroke: 1pt, length: 100%),
)]
#align(bottom)[
#grid(
columns: 2,
gutter: 1fr,
[医 师: #underline[#doctor]],
[金 额: #underline[#amount]],
)]
#align(bottom)[
#grid(
columns: 2,
gutter: 1fr,
[药师(审核、校对、发药): #underline[]],
[药师/士(调配): #underline[]],
)]
#align(bottom)[
#grid(
columns: 2,
gutter: 1fr,
[#text(size: 7pt, font: "Noto Sans CJK SC")[
温馨提示:
+ 本处方当天有效,过期作废;
+ 取药时请仔细核对清单,点齐药品;
+ 依《药品管理法》,药品一经发出,一律不得退换;
]],
[
#tiaoma.qrcode("https://github.com/31core/prescription", width: 50pt)
])]
|
|
https://github.com/typst/packages | https://raw.githubusercontent.com/typst/packages/main/packages/preview/pinit/0.1.0/pinit-core.typ | typst | Apache License 2.0 | // -----------------------------------------------
// Code from https://github.com/ntjess/typst-drafting
// -----------------------------------------------
#let _run-func-on-first-loc(func, label-name: "loc-tracker") = {
// Some placements are determined by locations relative to a fixed point. However, typst
// will automatically re-evaluate that computation several times, since the usage
// of that computation will change where an element is placed (and therefore update its
// location, and so on). Get around this with a state that only checks for the first
// update, then ignores all subsequent updates
let lbl = label(label-name)
[#metadata(label-name)#lbl]
locate(loc => {
let use-loc = query(selector(lbl).before(loc), loc).last().location()
func(use-loc)
})
}
/// Place content at a specific location on the page relative to the top left corner
/// of the page, regardless of margins, current container, etc.
/// -> content
#let absolute-place(dx: 0em, dy: 0em, content) = {
_run-func-on-first-loc(loc => {
let pos = loc.position()
place(dx: -pos.x + dx, dy: -pos.y + dy, content)
})
}
// -----------------------------------------------
// Core
// -----------------------------------------------
#let _pin-label(loc, name) = label("page-" + str(loc.position().page) + ":" + repr(name))
#let pin(name) = {
_run-func-on-first-loc((loc) => {
[#metadata("pin-tracker")#_pin-label(loc, name)]
})
}
#let pinit(pins, func) = {
let is-single-arg = false
if type(pins) != array {
is-single-arg = true
pins = (pins,)
}
_run-func-on-first-loc((loc) => {
let positions = ()
for pin-name in pins {
let elems = query(
selector(_pin-label(loc, pin-name)),
loc,
)
if elems == () {
return
}
positions.push(elems.at(0).location().position())
}
if (is-single-arg) {
func(positions.at(0))
} else {
func(positions)
}
})
}
#let pinit-place(
dx: 0pt,
dy: 0pt,
pin-name,
body,
) = {
pinit(pin-name, pos => {
absolute-place(dx: pos.x + dx, dy: pos.y + dy, body)
})
}
|
https://github.com/YabusameHoulen/ncu_slide | https://raw.githubusercontent.com/YabusameHoulen/ncu_slide/master/demo.typ | typst | #import "@preview/polylux:0.3.1": *
#import "@preview/cuti:0.2.1": fakeitalic
#import "themes/ncu.typ": *
// #show: show-cn-fakebold
#set text(font: ("Times New Roman", "寒蟬錦書宋"))
#show: ncu-theme.with(footer: [Nanchang University 肖君宇], short-title: [GRB Data Processing])
#title-slide(title: [GRB Data Processing], subtitle: [#text(20pt)[
  An #text(red)[*Oversimplified*] Introduction to Bayesian Methods
]], authors: ([肖君宇]), date: [May 19, 2024])
#slide(title: [#h(12em)Introduction])[
  Bayesian data analysis based on MCMC is applied in almost every scientific discipline to analyse and interpret data.
  Although the methods of Bayesian inference were proposed long ago, Bayesian-style computation is lengthy and difficult; these methods only came into widespread use after MCMC appeared in the 1980s.
][
#set rect(inset: 0pt, fill: rgb("e4e5ea"))
#grid(
columns: (auto, auto),
rows: (auto, auto),
gutter: 0.2pt,
rect[#image(height: 80%, "graph/bayes_self_figure.png")],
rect[#image(height: 80%, "graph/laplace_self_figure.jpg")],
)
]
#slide(
)[
#place(top + center, [#image(width: 82%, "graph/bayesian_workflow.png")])
][
  In the frequentist school, "uncertainty" relies on the premise of imagined repeated sampling of the data. Parameters and models are fixed and carry no probability distribution; only the results of many repeated measurements exhibit one \
  \
  The Bayesian approach treats randomness as a property of information, which may better match the way we subjectively perceive the world to work @van_de_schoot_bayesian_2021
]
#new-section-slide("A Bit of Statistics")
#slide(
  title: [Bayes' Theorem],
)[
#set text(22pt)
$ P(theta|"data") = (P("data"|theta) dot P(theta))/P("data") $
\
#set text(20pt)
#list(
indent: 4em,
    [$P(theta)$ --- prior probability],
    [$P("data"|theta)$ --- likelihood],
    [$P("data")$ --- evidence],
    [$P(theta|"data")$ --- posterior probability],
)
#h(4em)$P(theta|"data") prop P("data"|theta)dot P(theta)$
][
#set text(22pt)
  MLE viewed as a (formal) special case of MAP, assuming a flat prior:
$ theta_"MAP" & = "arg"attach(max, b: theta) product_i P(x_i|theta) dot P(theta) \
& = "arg"attach(max, b: theta) sum_i log P(x_i|theta) + log P(theta) \
& = "arg"attach(max, b: theta) sum_i log P(x_i|theta) \
& = theta_"MLE" $
]
#slide[
  The computational difficulty lies mainly in high-dimensional integrals
  If the parameter dimension is small, grid approximation, quadratic approximation, ... can be used directly
  When there are many parameters, probabilistic methods such as MCMC are used for the sake of computational efficiency:
#list(indent: 2em, [Metropolis-Hastings], [Nested Sampling], [......])
][
  The Bayesian estimate of a parameter:
$ hat(theta) = (integral_theta theta pi(theta) p(x|theta)d theta)/(integral_theta pi(theta) p(x|theta)d theta) $
  and for a function of the parameter @2020天文学中的概率统计 :
$ f(hat(theta)) = (integral_theta f(theta) pi(theta) p(x|theta)d theta)/(integral_theta pi(theta) p(x|theta)d theta) $
]
#new-section-slide("Sampling Methods in Brief")
#slide(title: [Rejection Sampling])[
  Sampling a target distribution via an easy-to-sample distribution
#only(1)[
#enum(
indent: 2em,
      [Envelop the target distribution with a scaled, easy-to-sample distribution $G$ (Gaussian)],
      [Draw a sample $x_0$ from $G$, and evaluate the target distribution there: $t = T(x_0)$],
)
]
#only(2)[
#enum(
indent: 2em,
      [Envelop the target distribution with a scaled, easy-to-sample distribution $G$ (Gaussian)],
      [Draw a sample $x_0$ from $G$, and evaluate the target distribution there: $t = T(x_0)$],
      [Draw $g$ from the uniform distribution on $[0, G(x_0)]$],
      [If $t >= g$, output $x_0$],
)
]
][
\
#image(width: 85%, height: 50%, "graph/rejection_sampling.png")
  #h(1em) Samples heavy-tailed distributions poorly $times$ @bishop2006pattern
]
#slide(title: [Markov Chain])[
  Assume there exists a single stationary posterior distribution\
  and a Markov chain satisfying ergodicity......
  When ergodicity holds, the Markov chain converges to the unique target distribution
  *Detailed balance* $ S(i) T(j|i) = S(j) T(i|j) $
][
#align(center + top, image(width: 50%, "graph/MC_State.png"))
$ T = mat(p_11, p_12;p_21, p_22) $
  #align(center)[Transition matrix $T(i,j)$]
]
#slide(
title: [Metropolis-Hastings],
)[
#set text(23pt)
$
S(i) G(j|i) alpha(j|i) = S(j) G(i|j) alpha(i|j)
$
  #only(1)[where $alpha(dot, dot):$ a correction determining whether a proposal from $G$ is accepted ]
  (analogous to a transition matrix) choose a distribution $G$ (Gaussian) and set an initial sampling state $x_0$
#only(
2,
)[
    Draw $x^star$ from the conditional distribution $G(x|x_t)$, $(t = 0,1,2...)$, and draw $u$ from Uniform $[0, 1]$

    If $u <= min((p(x^star)cancel(G(x_t|x^star)))/(p(x_t)cancel(G(x^star|x_t))), 1) = alpha (x^star|x_t)$, move to $x^star$; otherwise stay at the current position $x_t$
]
][
  #align(center + top)[#image(height: 70%, "graph/MH_example.png")An MH sampling example]
]
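Condensed into code, the MH update above is only a few lines. A minimal Python sketch (an illustrative addition; the symmetric Gaussian proposal makes the $G$ terms cancel in the acceptance ratio), here targeting a standard normal:

```python
import math, random

def metropolis_hastings(log_target, x0, n_steps, step=1.0, seed=0):
    rng = random.Random(seed)
    x, lp = x0, log_target(x0)
    samples = []
    for _ in range(n_steps):
        x_star = x + rng.gauss(0.0, step)      # propose from G(. | x_t)
        lp_star = log_target(x_star)
        # accept with prob min(1, p(x*) / p(x_t)); G cancels by symmetry
        if rng.random() < math.exp(min(0.0, lp_star - lp)):
            x, lp = x_star, lp_star
        samples.append(x)                       # rejected moves repeat x_t
    return samples

chain = metropolis_hastings(lambda x: -0.5 * x * x, 3.0, 20_000)
burned = chain[2_000:]                          # discard burn-in
```

After burn-in, the sample mean and variance should be close to the target's $0$ and $1$.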
#slide(title: [Markov Chain])[
#set rect(inset: 1pt, fill: rgb("e4e5ea"), width: 100%)
#grid(
columns: (8cm, 8cm),
rows: (auto, auto),
gutter: 0.2pt,
rect[#image("graph/MCMC_trap_in.png")],
rect[#image(width: 100%, height: 65.8%, "graph/curse_of_dim.png")],
)
][
#align(center)[
    Commonly encountered problems:
    #enum(indent: 1em, [Getting trapped at a local extremum], [Sampling results fail to converge], [No clear stopping criterion], [A possible "curse of dimensionality"])
]
]
#slide(
)[
*Nested Sampling*\
#set text(19pt)
  Can resolve the above problems of Markov chains @ashton_nested_2022
$
integral_theta limits(P("data"|theta))^L(theta) limits(P(theta))^pi(theta) d theta = lim_(|Delta theta arrow 0 |) sum L(theta) pi(theta) Delta theta
$
  Volume elements are weighted by the prior probability, $Z = sum L(X)Delta X$
#image("graph/nested_sample_graph.png")
][
#align(center, image("graph/Nested_Sampling.png"))
]
#slide()[
#set text(18.7pt)
  You're right, but Nested Sampling is a sampling method proposed by Skilling in 2004 within the Bayesian framework
  Sampling takes place under a constrained high-dimensional prior, in which a fixed number of livepoints are gradually moved "inward" to guide the computation
  You will need a non-negative likelihood function; at each step of the stratified sampling the likelihood of every point is computed, and the point with the smallest value is removed
  While sampling the posterior distribution, the size of each volume element is estimated (e.g. $(X_(i-1) - X_(i+1))/2$), and accumulating step by step yields the estimated Evidence
  $ "Evidence" = sum_i^n_"iter" (X_(i-1) - X_(i+1))/2 L^star_i $
][
#align(center, image("graph/Rad_NS.png"))
]
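As an illustration of the bookkeeping above, a toy Python implementation of nested sampling (a sketch only: uniform prior on $[-5, 5]$, a Gaussian likelihood, and replacement points drawn by naive rejection from the constrained prior, which is affordable only in this 1-D demo):

```python
import math, random

def nested_sampling(loglike, n_live=100, n_iter=600, seed=1):
    rng = random.Random(seed)
    live = [rng.uniform(-5.0, 5.0) for _ in range(n_live)]  # prior draws
    logl = [loglike(x) for x in live]
    Z, X_prev = 0.0, 1.0
    for i in range(1, n_iter + 1):
        worst = min(range(n_live), key=lambda k: logl[k])
        l_min = logl[worst]
        X_i = math.exp(-i / n_live)       # expected prior-volume shrinkage
        Z += (X_prev - X_i) * math.exp(l_min)
        X_prev = X_i
        while True:                       # rejection from the constrained prior
            x = rng.uniform(-5.0, 5.0)
            if loglike(x) > l_min:
                live[worst], logl[worst] = x, loglike(x)
                break
    Z += X_prev * sum(math.exp(l) for l in logl) / n_live   # leftover live points
    return Z

gauss_loglike = lambda x: -0.5 * x * x - 0.5 * math.log(2 * math.pi)
Z = nested_sampling(gauss_loglike)
print(Z)   # close to 0.1: prior density 1/10 times a Gaussian that integrates to ~1
```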
#new-section-slide("Example: GRB Spectral Fitting")
#slide()[
#only((1, 2))[
    #text(green, "G")#text(red, "R")#text(blue, "B")
    is the class of electromagnetic pulse phenomena with the highest energies since the "Big Bang", generally believed to originate from the collapse of massive stars or the merger of compact objects
    #list(indent: 2em, [Stellar scales, cosmic origins], [No GRB has been observed inside the Milky Way], [Roughly one or two are observed each day])
    e.g. some of the fitting results for GRB190611B
]
][
#only(1)[
#align(center)[#image("graph/GRB.png")
    The fireball model of GRBs]
]
  #only(2)[Why fit GRB spectra $arrow$
  My advisor:
  - Obtain precious physical information under extreme conditions
  - Spectral components help in understanding the emission mechanism
  - Observed parameters constrain important physical processes
  - ......
  Me: doing statistics on the detected photon data ;p...]
]
#slide()[
#set text(20pt)
#align(top + center)[
#image(width: 100%, height: 60%, "graph/ncu_plot/nihe.png")
#image(width: 80%, height: 30%, "graph/ncu_plot/traceplot.png")
]
][
#align(top + center)[
#image(height: 40%, "graph/ncu_plot/converge_test.png")
#image(height: 49%, "graph/ncu_plot/flux.png")
]
][
#set text(20pt)
#image("graph/ncu_plot/output_more.png")
]
#slide(
)[
#set text(22pt)
  #bibliography(title: [References], style: "gb-7714-2015-numeric", "ref.bib")
]
#end-slide(background_color: aqua)[
#let ferris = image(width: 45%, height: 47%, "graph/Ferris.png")
#place(
top + left,
dx: 27% - ferris .at("width", default: 12cm) / 2,
dy: 24.7%,
[ #h(1.5em)
    #text(blue, 45pt)[Thank You for Listening]
#ferris ],
)
]
|
|
https://github.com/cyp0633/hnu-bachelor-thesis-typst-template | https://raw.githubusercontent.com/cyp0633/hnu-bachelor-thesis-typst-template/master/outlines.typ | typst | // 各种目录
#import "@preview/i-figured:0.2.4"
#let toc = [
#set par(first-line-indent: 0em)
#show outline.entry: it => [
#text(size: 12pt)[#it]
]
#show outline.entry.where(level: 1): it => [
#text(font: ("Times New Roman", "Source Han Sans"), size: 12pt)[
#it
]
]
#align(center)[
#text(font: "Source Han Sans", size: 16pt)[
目#h(3.5em)录
]
]
#v(-0.5cm)
#outline(title: [], indent: auto)
]
#let figures = [
#text(size: 0em)[
#heading(level: 1)[插图索引]
]
#set par(first-line-indent: 0em)
#show outline.entry: it => [
#text(size: 12pt)[#it]
]
#align(center)[
#text(font: "Source Han Sans", size: 16pt, weight: "regular")[
插图索引
]
]
#v(-0.5cm)
#outline(title: [], target: figure.where(kind: i-figured._prefix + repr(image)))
]
#let tables = [
#text(size: 0em)[
#heading(level: 1)[附表索引]
]
#set par(first-line-indent: 0em)
#show outline.entry: it => [
#text(size: 12pt)[#it]
]
#align(center)[
#text(font: "Source Han Sans", size: 16pt)[
附表索引
]
]
#v(-0.5cm)
#outline(title: [], target: figure.where(kind: i-figured._prefix + repr(table)))
] |
|
https://github.com/manforowicz/resume | https://raw.githubusercontent.com/manforowicz/resume/main/README.md | markdown | ## Marcin's Resume
I wrote my [resume](/resume.pdf) in [Typst](https://typst.app/docs), a modern alternative to LaTex.
Also check out [my website](https://manforowicz.github.io/) and my [LinkedIn](https://www.linkedin.com/in/m-anforowicz/).
Don't hesitate to reach out to me at <<EMAIL>>.
Happy coding!
|
|
https://github.com/ammar-ahmed22/typst-resume | https://raw.githubusercontent.com/ammar-ahmed22/typst-resume/main/src/utils/icons.typ | typst | #let icon(name, size: 11pt) = {
text(
font: "Font Awesome 6 Free Solid",
size: size,
box[#name]
)
}
#let iconUnicode(unicode, size: 11pt, color: black) = {
text(
font: "Font Awesome 6 Free Solid",
size: size,
fill: color,
box[
#symbol(unicode)
]
)
}
// Font Awesome Icons
#let github = symbol("\u{f09b}")
#let linkedin = symbol("\u{f08c}")
#let link = symbol("\u{f0c1}")
#let envelope = symbol("\u{f0e0}")
#let phone = symbol("\u{f095}") |
|
https://github.com/OkazakiYumemi/nju-ps-typst-template | https://raw.githubusercontent.com/OkazakiYumemi/nju-ps-typst-template/master/hw-preamble.typ | typst | // Basic layout
#let ind2 = h(2em)
// Environment for beautiful code blocks
#import "@preview/codly:1.0.0": *
#let codly_icon() = {
box(
height: 0.8em,
baseline: 0.05em,
)
h(0.1em)
}
// Environment for algorithmic pseudocode
#import "@preview/lovelace:0.3.0": *
#let pseudocode = pseudocode.with(indentation-guide-stroke: 0.5pt)
#let comment-color = gray
#let comment(body) = {
h(1fr)
text(size: .85em, fill: comment-color, sym.triangle.stroked.r + sym.space + body)
}
// Mitex for latex math equations
#import "@preview/mitex:0.2.4": *
// CeTZ for drawing
#import "@preview/cetz:0.2.2"
// Fletcher for drawing
#import "@preview/fletcher:0.5.1"
// Environment for sections, problems, solutions, etc
#let problem_counter = counter("problem")
#let section_counter = counter("section")
#let solsection_counter = counter("solsection")
#let default_problem_beginning = "Problem"
#let default_solution_beginning = "Solution:"
#let section(title: none) = {
if title != none {
align(center, text(20pt)[
== #section_counter.step() #section_counter.display() #h(0.5em) #title
])
} else {
align(center, text(20pt)[
== #section_counter.step() #section_counter.display()
])
}
problem_counter.update(0)
}
#let problem(
beginning: default_problem_beginning,
title: none,
body
) = {
if title != none {
if beginning != none {
text(14pt)[=== #beginning #problem_counter.step() #problem_counter.display() (#title)]
} else {
text(14pt)[=== #problem_counter.step() #title]
}
} else {
text(14pt)[=== #beginning #problem_counter.step() #problem_counter.display()]
}
set par(first-line-indent: 2em)
body
}
#let solution(
beginning: default_solution_beginning,
body
) = {
if beginning != none {
v(.3em)
text(12pt)[*#beginning*]
// text(12pt)[=== #beginning]
}
solsection_counter.update(0)
set par(first-line-indent: 2em)
v(-.5em)
block(
width: 100%,
inset: 8pt,
radius: 4pt,
stroke: gray,
body)
line(length: 100%, stroke: black)
}
#let solsection(
title: none
) = {
if title != none {
text(12pt)[*#solsection_counter.step() #solsection_counter.display() #h(0.5em) #title*]
} else {
text(12pt)[*#solsection_counter.step() #solsection_counter.display()*]
}
}
// Some math operators
// Writing math operators in Typst is a bit tricky.
// if with difficulty, you can refer to the following links:
// https://typst.app/docs/reference/math/
// https://typst.app/docs/reference/symbols/sym/
#let argmin = [#math.arg]+[#math.min]
#let argmax = [#math.arg]+[#math.max]
// Initialize the document title, author...
#let assignment_class(title, author, student_number, due_time, body) = {
set text(font: ("Noto Serif", "IBM Plex Serif"), lang: "zh", region: "cn")
set document(title: title, author: author)
// Basic page settings
set page(
paper: "a4",
header: locate(
loc => if (
counter(page).at(loc).first()==1) { none }
else {
let page_number = counter(page).at(loc).first()
let total_pages = counter(page).final(loc).last()
align(right,
[*#author* | *#title* | *Page #page_number of #total_pages*]
)
}
),
// footer: locate(loc => {
// let page_number = counter(page).at(loc).first()
// let total_pages = counter(page).final(loc).last()
// align(center)[Page #page_number of #total_pages]
// })
)
block(height: 25%, fill: none)
// Title and Infomation
align(center, text(24pt)[*#title*])
block(
// height: 40pt,
// columns(2, gutter: 14pt)[
// #set par(justify: true)
// #set text(14pt)
// #h(50%) 姓名:#underline(evade: false, offset: 2pt)[#author]
// #h(50%) 评分:#underline[]
// #colbreak()
// 学号:#underline(evade: false, offset: 2pt)[#student_number]
// 评阅:#underline[]
// ]
height: 50pt,
[
#set par(justify: true)
#set text(14pt)
#grid(
columns: (20%, auto, 0.85fr, auto, 1fr, 20%),
rows: (50%, 50%),
column-gutter: (0pt, 6pt, 14pt, 6pt, 0pt),
[],
[姓名:],
[#align(center)[#author] #v(2pt, weak: true) #line(length: 100%, stroke: .7pt)],
[学号:],
[#align(center)[#student_number] #v(2pt, weak: true) #line(length: 100%, stroke: .7pt)],
[],
[],
[评阅:],
[#hide([评阅人]) #v(2pt, weak: true) #line(length: 100%, stroke: .7pt)],
[评分:],
[#hide([分数]) #v(2pt, weak: true) #line(length: 100%, stroke: .7pt)],
[],
)
]
)
align(center, text(14pt)[#due_time])
// Alerts or Announcements
align(center)[
#block(
inset: 8pt,
stroke: black,
radius: 3pt
)[
      This is a #link("https://github.com/OkazakiYumemi/nju-ps-typst-template")[Typst template] for Problem Solving homework,
      // and can also be used for other kinds of assignments.
      and can also be used for other kinds of assignments, reports, and so on.
      The template is still being tested,
      *so please use it with caution.*
      // This is a #link("https://github.com/OkazakiYumemi/nju-ps-typst-template")[Typst template] for Problem Solving homework;
      // after a period of testing,
      // it is now largely stable.
      // It can also be used for writing lab reports and the like.
]
]
pagebreak(weak: false)
// Enable the codly environment
show: codly-init.with()
show raw: it => box(
text(font: ("FiraCode Nerd Font Mono", "Noto Sans CJK SC"), size: 10pt, it)
)
codly(
display-icon: false,
stroke: 1pt + rgb("666666")
)
// // Enable the lovelace environment
// show: setup-lovelace
// Setting link style
show link: it => text(blue, underline(it, evade: false, offset: 2pt))
body
// locate(loc => {
// let i = counter(page).at(loc).first()
// if i == 1 { return }
// set text(size: script-size)
// grid(
// columns: (6em, 1fr, 6em),
// if calc.even(i) [#i],
// align(center, upper(
// if calc.odd(i) { title } else { author-string }
// )),
// if calc.odd(i) { align(right)[#i] }
// )
// })
// if student_number != none {
// align(top + left)[Student number: #student_number]
// }
// align(center)[= #title]
}
|
|
https://github.com/FkHiroki/ex-B2_typst | https://raw.githubusercontent.com/FkHiroki/ex-B2_typst/main/sections/section3.typ | typst | MIT No Attribution | = 5. Discussion (sources of error in the measured intensity patterns)
In this experiment, we used single and double slits to observe how the diffraction intensity pattern changes with the slit width and slit spacing, and compared the observations with theoretical values. As a result, all relative errors between the measured and theoretical values were confirmed to fall within $10%$, although in some cases the error was noticeably larger than in others. Here, we discuss the causes of the errors in the measured values.
First, a likely cause of the errors is the uncertainty in measuring the slit width, the slit spacing, and the intensity pattern. As an example, we compute the effect on the error when the period measurement of the SS1 intensity pattern, or the SS1 slit-width measurement, is off by one pixel (one pixel larger). Using the ruler and scale values obtained in @tab:measure, together with @tab:slit_with and @tab:pattern, the relative errors between the theoretical and measured values for SS1 under a one-pixel measurement error are given in @tab:error.
#figure(
  caption: [Effect of a one-pixel error on the SS1 period and slit-width measurements],
  table(
    columns: 3,
    stroke: (none),
    table.hline(),
    table.header(
      [],
      [Intensity-pattern period shifted], [Slit width shifted]
    ),
    table.hline(),
    [Relative error /%], [10.1], [11.8],
    table.hline(),
  ),
) <tab:error>
From @tab:error, a one-pixel shift in the measurement increases the error by about $1%$ to $2%$, in both the shifted-pattern and shifted-slit-width cases. According to @tab:pattern, most of the relative errors of the sinc function are below $3%$. It is therefore reasonable to attribute the relative errors of the sinc-function pattern to measurement errors in the slit width and in the period of the intensity pattern. For SS1, however, the relative error of the sinc-function pattern is more than $5%$ larger than in the other cases, which may point to a different source of error.
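As a rough cross-check of this kind of estimate, the propagation of a one-pixel error in the slit-width reading into the pattern period can be computed directly. The following Python sketch uses hypothetical numbers throughout; the wavelength, distance, slit width, and pixel scale below are assumptions, since the measured values are in tables outside this section:

```python
# All numbers below are assumed for illustration; the report's measured
# values are in the referenced tables.
wavelength = 632.8e-9   # He-Ne laser wavelength [m] (assumed)
distance = 1.0          # slit-to-screen distance [m] (assumed)
width = 100e-6          # single-slit width [m] (assumed)
pixel = 5e-6            # one pixel at the slit plane [m] (assumed)

period = wavelength * distance / width                # spacing of the sinc minima
period_off = wavelength * distance / (width + pixel)  # width read one pixel too large
rel_err = abs(period_off - period) / period
print(f"{100 * rel_err:.1f}%")   # 4.8% for these assumed numbers
```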
A plausible cause of the SS1 error is a drift in the aim of the laser beam. First, SS1 had the smallest slit width among the single slits. If the beam alignment drifted so that part of the light struck the wall, the effective slit width for that light would become smaller. As seen in @tab:slit_with and @tab:pattern, the smaller the slit width, the larger the period of the intensity pattern, so the measured period would be expected to come out larger than the theoretical value. For these reasons, a drift in the laser alignment is a plausible explanation for SS1's error being larger than in the other cases.
= 6. Conclusion
In this experiment, we used single and double slits to observe the diffraction intensity patterns while varying the slit width and slit spacing, and compared them with theoretical values. As a result, all relative errors between the measured and theoretical values were confirmed to fall within $10%$. The measured error for SS1 was larger than for the other cases, plausibly because the aim of the laser beam had drifted. In summary, by comparing theoretical and measured values, this experiment made it possible to understand how the period of the diffraction intensity pattern is affected by the slit width and slit spacing. Moreover, simulations in Octave confirmed that the phenomena seen in the experiment are also reproduced in simulation.
|