repo | file | language | license | content
---|---|---|---|---
https://github.com/MultisampledNight/flow | https://raw.githubusercontent.com/MultisampledNight/flow/main/src/palette.typ | typst | MIT License | #import "cfg.typ"
#let _key-on-theme(section) = {
for (name, value) in section {
if type(value) == dictionary {
value = value.at(cfg.theme)
}
section.at(name) = value
}
section
}
#let duality = (
fg: rgb("#B6B3B4"),
bg: rgb("#212224"),
pink: rgb("#FA86CE"),
violet: rgb("#AAA9FF"),
blue: rgb("#00C7F7"),
orange: rgb("#FF9365"),
yellow: rgb("#C7B700"),
green: rgb("#11D396"),
)
#let print = (
fg: luma(0%),
bg: luma(100%),
)
#let fg = (duality: duality.fg, bow: print.fg, wob: print.bg).at(cfg.theme)
#let bg = (duality: duality.bg, bow: print.bg, wob: print.fg).at(cfg.theme)
#let gamut = gradient.linear(bg, fg, space: oklch)
#let dim(body) = text(fill: gamut.sample(60%), body)
#let status = _key-on-theme((
empty: gamut.sample(75%),
urgent: (
duality: duality.orange,
bow: orange,
wob: orange,
),
progress: (
duality: duality.violet,
bow: purple,
wob: purple,
),
pause: (
duality: duality.green,
bow: green,
wob: green,
),
block: (
duality: duality.blue,
bow: blue,
wob: blue,
),
complete: fg,
cancel: gamut.sample(40%),
unknown: (
duality: duality.yellow,
bow: yellow,
wob: yellow,
),
remark: (
duality: duality.green,
bow: green,
wob: green,
),
hint: (
duality: duality.violet,
bow: purple,
wob: purple,
)
))
#let reference = _key-on-theme((
external: (
duality: duality.blue,
bow: blue,
wob: blue,
),
other-file: (
duality: duality.violet,
bow: purple,
wob: purple,
),
same-file: (
duality: duality.green,
bow: green,
wob: green,
),
))
|
https://github.com/fuchs-fabian/typst-template-aio-studi-and-thesis | https://raw.githubusercontent.com/fuchs-fabian/typst-template-aio-studi-and-thesis/main/template/literature_and_bibliography.typ | typst | MIT License | #let literature-and-bibliography() = [
#lorem(20)
] |
https://github.com/mrknorman/evolving_attention_thesis | https://raw.githubusercontent.com/mrknorman/evolving_attention_thesis/main/04_application/04_application.typ | typst | #set page(numbering: "1", number-align: center)
#set math.equation(numbering: it => {[4.#it]})
#counter(math.equation).update(0)
#import "../notation.typ": vectorn, uvectorn, dvectorn, udvectorn, matrixn
= The Application of Machine Learning to Gravitational-Wave Data Analysis <application-sec>
The following chapter will explore the intersection between gravitational waves and machine learning. We will review the possible areas for detection applications, and describe how we have constructed the example datasets used for training and validation throughout the thesis. We will then explore the results of repeating the dense network experiments from previous chapters on gravitational-wave data. We will introduce Convolutional Neural Networks (CNNs), and review previous work from the literature that has been performed using CNNs. We will conclude the chapter by recreating some key results from the literature. This chapter is not intended to be presented as original work, but as an exploration of existing methods and a demonstration of the novel data pipeline that will be used throughout the rest of the thesis. By the end of this chapter, we will have accumulated a large number of possible network, training, and data configurations. These form the set of hyperparameters that we must, through some approach, narrow down; we will explore how we can do this in @dragonn-sec.
We have demonstrated that simple artificial neural networks can be used to classify input data drawn from a restricted distribution into a number of classes, $N$, with a high ($>99.9%$) degree of accuracy. We didn't design the network with any particular consideration for the dataset (besides the dimensionality of its elements), therefore, we can infer that artificial neural networks should be general enough to classify data drawn from other distributions that contain discrete differentiable classes. It is not clear, however, which other distributions can be classified and what network complexity is required to achieve a similar degree of accuracy. It is easy to imagine distributions that are considerably simpler than the MNIST dataset @mnist and, conversely, ones that are much more complex. There may be a mathematical approach to determine the link between the distribution and required model complexity.
One possible metric that touches upon this relation is the empirical Rademacher complexity, $accent(cal(R), hat)_M$, given by
$ accent(cal(R), hat)_M (H) = EE_(vectorn(epsilon)) [sup_(h in H) 1/M sum_(i=1)^M epsilon_i h [vectorn(x)_bold(i)]] , $
where $M$ is the number of data points in the dataset, $X = [vectorn(x)_bold(1), ..., vectorn(x)_bold(i), ..., vectorn(x)_bold(M)]$, where each point is a vector, $vectorn(x)$, in our case one of the input vectors of our training dataset, and $vectorn(epsilon) = [epsilon_1, ..., epsilon_i, ..., epsilon_M]$ is a vector of $M$ random variables drawn uniformly from ${-1, +1}$. $H$ represents the hypothesis space, $H = {h_1, h_2, ...}$, the set of all possible functions, ${h_i}$, that our neural network architecture can learn. In simpler terms, $H$ includes every pattern or rule the neural network might use to make predictions based on the input data. The diversity and complexity of functions in $H$ are directly influenced by the network's architecture --- more layers and neurons mean a larger hypothesis space. Each Rademacher variable, $epsilon_i$, is associated with a corresponding data point, $vectorn(x)_bold(i)$, in the dataset. These random values inject randomness into the calculation of the Rademacher complexity: for each random labelling, the supremum selects the hypothesis in $H$ that correlates best with it, so the expectation measures how well the hypothesis space, $H$, can fit or adapt to completely random outcomes.
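Since the supremum over an infinite hypothesis space is rarely computable directly, a toy Monte Carlo estimate for a finite hypothesis set may make the definition concrete. The sketch below is purely illustrative (all names are ours, not drawn from any library); each hypothesis is represented by its vector of predictions on the $M$ data points:

```py
import numpy as np

def rademacher_complexity(
    predictions: np.ndarray, # Shape: (num_hypotheses, M).
    num_trials: int = 1000
) -> float:
    # Each row of `predictions` holds one hypothesis's outputs, h(x_i),
    # on the M data points; the supremum over H becomes a max over rows.
    _, num_points = predictions.shape
    rng = np.random.default_rng()
    total = 0.0
    for _ in range(num_trials):
        # Draw M Rademacher variables uniformly from {-1, +1}:
        epsilon = rng.choice([-1.0, 1.0], size=num_points)
        # Correlation of each hypothesis with the random labelling:
        correlations = predictions @ epsilon / num_points
        total += np.max(correlations)
    # Averaging over noise realisations approximates the expectation:
    return total / num_trials
```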
The Rademacher complexity is a measure of how well functions in $H$ can fit random noise in the data. A higher Rademacher complexity indicates that the function class can fit the noise better, which implies a higher capacity to overfit to the data. So, one approach to optimising the model would be to attempt to minimise the Rademacher complexity whilst maximising model performance. More details about this metric and its use in defining the relationship between data samples and model complexity can be found in @model_complexity. Despite the existence of this metric, however, it would appear that there has not been substantial research into the link between dataset complexity and required model size @model_complexity_2, though it is possible that such a paper has been missed.
One method that we can use to explore this question is empirical investigation. As we move from the MNIST dataset @mnist to distributions within gravitational-wave data science, the natural starting point is to repeat the previous experiments with gravitational-wave data, both as a comparison and as a baseline as we move forward.
== Gravitational-Wave Classifiers
The scope of gravitational-wave data problems to which we can apply artificial neural network models is large @ml_in_gw_review. However, we shall limit our investigation to perhaps the simplest type of problem --- classification, the same type of problem that we have previously demonstrated with our classification of the MNIST dataset @mnist into discrete classes. Though classification is arguably the most straightforward problem available to us, it remains one of the most crucial --- before any other type of transient signal analysis can be performed, transients must first be identified.
There are several problems in gravitational-wave data analysis which can be approached through the use of classification methods. These can broadly be separated into two classes --- detection and differentiation. *Detection* problems are self-explanatory; these kinds of problems require the identification of the presence of features within a noisy background. Examples include Compact Binary Coalescence (CBC) @gabbard_messenger_cnn @pycbc, burst @cWB @MLy, and glitch detection @gravity_spy @idq; see @data_features for a representation of the different features present in gravitational-wave data. *Differentiation* problems, usually known simply as classification problems, involve the separation of detected features into multiple classes, although this is often done in tandem with detection. An example of this kind of problem is glitch classification, in which glitches are classified into classes of known glitch types, and the classifier must separate input data into these classes @gravity_spy.
#figure(
image("data_features.png", width: 100%),
caption: [ A non-exhaustive hierarchical depiction of some of the features, and proposed features, of gravitational-wave interferometer data. The first fork splits the features into two branches, representing the duration of the features. Here, *continuous* features are defined as features for which it is extremely unlikely for us to witness their start or end within the lifespan of the current gravitational-wave interferometer network and probably the current scientific community @continious_gravitational_waves. These features have durations anywhere from thousands to billions of years. *Transient* features have comparatively short durations @first_detction, from fractions of seconds in the case of stellar-mass Binary Black Hole (BBH) mergers @first_detction to years in the case of supermassive BBH mergers @supermassive_mergers. It should be noted that the detectable period of supermassive binaries could be much longer; although the mergers themselves are transient events, there is no hard cut-off between the long inspiral and the merger event. Nevertheless, the mergers are probably frequent enough that some will end within the lifetime of the proposed LISA space constellation, so in some cases, they can be considered transients @supermassive_mergers. The next fork splits features by origin. Features of *astrophysical* origin originate from beyond Earth. This distinction is practically synonymous with the distinction between gravitational waves and signals from other sources since no other astrophysical phenomena are known to have a similar effect in interferometers @first_detction. Features of *terrestrial* origin, unsurprisingly, originate from Earth. These primarily consist of detector glitches caused by seismic activity or experimental artifacts @det_char. Astrophysical transients have a further practical division into CBCs and bursts. The category of *bursts* contains all astrophysical transients that are not CBCs @bursts_O1. The primary reason for this distinction is that CBCs have been detected and have confirmed waveform morphologies @first_detction @first_bns. As of the writing of this thesis, no gravitational-wave burst events have been detected @bursts_O1 @bursts_O2 @bursts_O3. Bursts often require different detection techniques @cWB @x-pipeline; of the proposed sources, many are theorised to have waveforms with a much larger number of free parameters than CBCs, as well as being harder to simulate as the physics are less well-understood @supernovae_waveforms_2 @starquake_detection. These two facts compound to make generating large template banks for such signals extremely difficult. This means that coherence detection techniques that look for coherent patterns across multiple detectors are often used over matched filtering @x-pipeline @oLIB @cWB @BayesWave @MLy. The astrophysical leaves of the diagram represent possible and detected gravitational-wave sources; the text's colourings represent their current status. Green items have been detected using gravitational-wave interferometers, namely the merger of pairs of Binary Black Holes (BBHs) @first_detction, Binary Neutron Stars (BNSs) @first_bns, or one of each (BHNSs) @first_nsbh; see @GWTC-1 @GWTC-2 @GWTC-3 for full catalogues of detections. Yellow items have been detected via gravitational waves but using Pulsar Timing Arrays (PTAs) rather than interferometers @PTA. 
Blue items represent objects and systems that are theorised to generate gravitational waves and have been detected by electromagnetic observatories but not yet with any form of gravitational wave detection. This includes white dwarf binaries @white_dwarf_binary_em @white_dwarf_lisa_detection, the cosmological background @cosmological_background_em @cosmological_background_gw, starquakes @starquake_em @starquake_gw, and core-collapse supernovae (CCSN) @supernovae_em @supernovae_gw. This is because they are too weak and/or too uncommon for our current gravitational-wave detector network to have had a chance to detect them. Finally, red items are possible, theorised sources of gravitational waves that have not yet been detected by any means. These are, evidently, the most contentious items presented, and it is very possible that none of these items will ever be detected or exist at all. It should be noted that the number of proposed sources in this final category is extensive, and this is far from an exhaustive list. The presented proposed continuous sources are neutron star asymmetries @neutron_star_gw_review, and the presented transient sources are extraterrestrial intelligence @et_gw, cosmic string kinks and cusps @cosmic_string_cusps, accretion disk instabilities @accrection_disk_instability, domain walls @domain_walls, and nonlinear memory effects @non_linear_memory_effects.]
) <data_features>
@data_features shows that several possible transients with terrestrial and astrophysical origins could be targeted for detection. For our baseline experiments and throughout this thesis, we will select two targets.
Firstly, *Binary Black Holes (BBHs)*. We have the most numerous detections of BBH signals @GWTC-1 @GWTC-2 @GWTC-3, and whilst this might make them seem both less interesting and as a solved problem, they have several benefits. As test cases to compare different machine learning techniques against traditional methods, they have the most material for comparison because of their frequency; they would also see the greatest benefits from any computational and speed efficiency savings that may be wrought by the improvement of their detection methods @computational_cost. These factors may become especially relevant when the 3#super[rd] generation detectors, such as the Einstein Telescope @einstein_telescope and Cosmic Explorer @cosmic_explorer, come online. During their observing periods, they expect detection rates on the order of between $10^4$ and $10^5$ detections per year @overlapping_search, which would stretch computing power and cost if current methods remain the only options. In the shorter term, if detection speeds can be improved, faster alerts could be issued to the greater astronomical community, allowing increased opportunity for multimessenger analysis @multimessnger_review. Only one multimessenger event has thus far been detected --- a Binary Neutron Star (BNS) event @first_bns, but it is probable, due to the relative similarity in their morphologies, that methods to detect BBHs could be adapted for BNS detection.
Secondly, we will investigate the detection of unmodeled *burst* signals using a machine learning-based coherent detection technique. Bursts are exciting sources whose detection could herald immense opportunities for scientific gain @bursts_O3. Possible burst sources include core-collapse supernovae @supernovae_gw, starquakes @starquake_gw, accretion disk instabilities @accrection_disk_instability, nonlinear memory effects @non_linear_memory_effects, domain walls @domain_walls, and cosmic string cusps @cosmic_string_cusps, as well as a plethora of other proposed sources. It should be noted that whilst many bursts have unknown waveform morphologies, some, such as cosmic string cusps, are relatively easy to model and are grouped with bursts primarily due to their as-yet undetected status @cosmic_string_cusps.
Our current models of the physics of supernovae are limited both by a lack of understanding and computational intractability; detecting the gravitational-wave signal of a supernova could lead to new insights into the supranuclear matter density equation of state as well as other macrophysical phenomena present in such events, such as neutron transport and hydrodynamics @neutron_star_equation_of_state_1 @neutron_star_equation_of_state_2 @neutron_star_equation_of_state_3. We may also detect proposed events, such as accretion disk instabilities @accrection_disk_instability, which may be missed by standard searches. We can search for the gravitational-wave signals of electromagnetic events that currently have unknown sources, such as fast radio bursts @targeted_frb_search, magnetar flares @targeted_magnetar_search, soft gamma-ray repeaters @targeted_grb_search, and long gamma-ray bursts @targeted_grb_search. Although it is possible that some of these events produce simple, modelable waveforms, this is not currently known, and a general search may one day help to reveal their existence. Some of the more hypothetical proposed sources could fundamentally alter our understanding of the universe, such as evidence for dark matter @domain_wall_dark_matter and/or cosmic strings @cosmic_string_cusps; and if we fail to find them, the searches could still help to draw limits on the theory search space.
It is unknown whether current burst search methods have sufficient detection capability to detect all theoretically detectable sources, or whether current methods are maximally efficient at gaining information on sources that they do detect. Currently, the LIGO-Virgo-KAGRA collaboration has a number of active burst detection pipelines: X-Pipeline @x-pipeline, oLIB @oLIB, Coherent Wave Burst (cWB) @cWB, and BayesWave @BayesWave. These include both offline and online searches, including targeted searches wherein a known electromagnetic event is used to limit the search space @targeted_frb_search @targeted_magnetar_search @targeted_grb_search. It could be that the current detection software is adequate and, indeed, the search is hardware- rather than software-limited. Even if this is the case, there are probably computational improvements that are possible. It seems unlikely that we have reached the limit of coherent search efficiency.
Traditional coherence techniques require the different detector channels to be aligned for successful detection; therefore, because we don't know a priori the direction of the gravitational-wave sources (unless we are performing a targeted offline search), coherent search pipelines such as X-Pipeline @x-pipeline and cWB @cWB must search over a grid covering all possible incidence directions. In the case of all-sky searches, this grid will necessarily cover the entire celestial sphere. In targeted searches, the grid can be significantly smaller and cover only the uncertainty region of the source that has already been localised by an EM detection @targeted_grb_search @targeted_frb_search. Higher resolution grids will result in a superior search sensitivity; however, they will simultaneously increase computing time. Covering the entire sky with a grid fine enough to achieve the desired sensitivity can be computationally expensive. It is possible to circumvent the need to search over a grid using artificial neural networks, shifting much of the computational expense to the training procedure. This has been demonstrated by the MLy pipeline @MLy --- the only fully machine-learning-based pipeline currently in review for hopeful deployment before the end of the fourth observing run (O4). Improvements in the models used for this task could be used to improve the effectiveness of the MLy pipeline. Indeed, some of the work discussed in this thesis was used at an early stage in the pipeline's development to help design the architecture of the models; see @deployment-in-mly. It is hoped that in the future, more aspects of the work shown here can find use in the pipeline's development.
We will focus on the binary detection problem rather than multi-class classification, as there is only one discrete class of BBH (unless one wants to draw borders within the BBH parameter space or attempt to discern certain interesting features, such as eccentricity), and, in the unmodeled burst case, coherent detection techniques are not usually tuned to particular waveforms, which, in any case, are not widely available for many types of burst. In the next subsection, we will discuss how we can create example datasets to train artificial neural networks for this task.
== Dataset Design and Preparation
In the case of CBCs, we have only a very limited number ($<200$) of example interferometer detections @GWTC-1 @GWTC-2 @GWTC-3, and in the burst case, we have no confirmed examples @bursts_O1 @bursts_O2 @bursts_O3. This means that to successfully train artificial neural network models, which typically require datasets with thousands to millions of examples @dataset_size, we must generate a large number of artificial examples.
In order to facilitate the real-time generation of training datasets, a custom Python package named GravyFlow was created @gwflow_ref. GravyFlow handles the generation of fake noise and waveforms (with the use of the custom cuPhenom GPU waveform generator @cuphenom_ref), as well as the acquisition and processing of real interferometer data, and the injection, projection, and scaling of waveforms. It packages this functionality into a configurable TensorFlow dataset. Since the majority of the processing, except the acquisition of real noise, is performed on the GPU, the dataset can be adjusted and training can commence without the need to pre-generate the entire dataset. This allows for much quicker iteration through dataset hyperparameters.
The following subsections describe how GravyFlow handles the creation of these examples, including the acquisition of noise, the generation and scaling of simulated waveforms, and data conditioning.
=== The Power Spectral Density (PSD) <psd-sec>
The Power Spectral Density (PSD) is an important statistical property that is used by several elements of dataset design @psd_ref. Since a custom function was written for this thesis in order to speed up the calculation of the PSD, and since it is helpful to have an understanding of the PSD in order to understand many of the processes described in subsequent sections, a brief explanation is presented.
The PSD is a time-averaged description of the distribution of a time series's power across the frequency spectrum @psd_ref. Unlike a Fourier transform, which provides a one-time snapshot, the PSD conveys an averaged view, accounting for both persistent and transient features; see @psd_eq for a mathematical description. The PSD is used during data conditioning in the whitening transform, wherein the raw interferometer data is processed so that the noise has roughly equal power across the frequency domain; see @feature-eng-sec. For some types of artificial noise generation, the PSD can be used to colour white noise in order to generate more physically realistic artificial noise; see @noise_acquisition_sec. The PSD is also used to calculate the optimal Signal-to-Noise Ratio (SNR), $rho_"opt"$, which acts as a metric for the detectability of an obfuscated feature and thus can be used to scale the amplitude of a waveform to a desired detection difficulty.
Imagine a time series composed of a stationary #box("20" + h(1.5pt) + "Hz") sine wave. In the PSD, this would materialise as a distinct peak at #box("20" + h(1.5pt) + "Hz"), effectively capturing the concentrated power at this specific frequency: the frequency is constant, and the energy is localised. If at some time, $t$, we remove the original wave and introduce a new wave at a different frequency, #box("40" + h(1.5pt) + "Hz"), the original peak at #box("20" + h(1.5pt) + "Hz") would attenuate but not vanish, as its power is averaged over the entire time-series duration. Concurrently, a new peak at #box("40" + h(1.5pt) + "Hz") would appear. The power contained in each of the waves, and hence the heights of their respective peaks in the PSD, is determined by the integrated amplitude of their respective oscillations; see @psd-example for a depiction of this example. When applied to a more complicated time series, like interferometer noise, this can be used to generate an easy-to-visualise mapping of the distribution of a time series's power across frequency space.
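Since whitening recurs throughout the following sections, a conceptual frequency-domain sketch may help fix ideas. GravyFlow's actual whitening uses a Finite Impulse Response (FIR) filter built from the off-source PSD (see @feature-eng-sec); the division below is the idealised operation that the filter approximates:

```py
import tensorflow as tf

def whiten_conceptual(strain: tf.Tensor, asd: tf.Tensor) -> tf.Tensor:
    # Transform the strain into the frequency domain:
    spectrum = tf.signal.rfft(strain)
    # Dividing by the ASD equalises the noise power across frequencies;
    # `asd` is assumed to be sampled on the matching RFFT grid:
    whitened = spectrum / tf.cast(asd, tf.complex64)
    # Return to the time domain:
    return tf.signal.irfft(whitened)
```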
#show figure: set block(breakable: true)
#figure(
image("example_psd.png", width: 100%),
caption: [Examples of Power Spectral Density (PSD) transforms. _Left:_ Two time-domain series. The red series is a #box("20" + h(1.5pt) + "Hz") wave with a duration of #box("0.7" + h(1.5pt) + "s"), and the blue series is this same time series concatenated with a #box("40" + h(1.5pt) + "Hz") wave from $t = 0.7#h(1.5pt)s$ onwards. _Right:_ The PSDs of the two time series displayed in the left panel. The red PSD was computed across only the #box("0.7" + h(1.5pt) + "s") of the red wave's duration, whereas the blue PSD was taken over the full #box("2.0" + h(1.5pt) + "s") duration. As can be seen, the blue PSD has two peaks, representing the two frequencies of the two waves combined to make the blue time series --- each peak is lower than the red peak, as they are averaged across the full duration, and their respective heights are proportional to their durations as both waves have the same amplitude and vary only in duration.]
) <psd-example>
The PSD can be calculated using Welch's method, which uses a periodogram to calculate the average power in each frequency bin over time @welchs_method. More specifically, the following steps are enacted:
+ First, the time series is split up into $K$ segments of length $L$ samples, with some number of overlapping samples $D$; if $D = 0$, this method is equivalent to Bartlett's method.
+ Each segment is then windowed with a user-chosen window function, $w(n)$. This is done in order to avoid spectral leakage, avoid discontinuities in the data, smoothly transition between segments, and control several other factors about the method, which allow for fine-tuning to specific requirements.
+ For each windowed segment, $i$, we then estimate the power of the segment, $I_i (f_k)$, at each frequency, $f_k$, by computing the periodogram with
$ I_i (f_k) = 1/L|X_i (k)|^2 $ <periodogram>
where $I_i (f_k)$ is the result of the periodogram, $X_i (k)$ is the FFT of the windowed segment, and $f_k$ is the frequency corresponding to the $k^op("th")$ FFT sample.
4. Finally, we average the periodograms from each segment to get the time-average PSD:
$ S(f_k) = 1/K sum_(i=1)^K I_i (f_k) $ <average_periodograms>
where $S(f_k)$ is the PSD. Combining @periodogram and @average_periodograms gives
$ S(f_k) = 1/K sum_(i=1)^K 1/L|X_i (k)|^2 $ <psd_eq>
To compute the PSD with enough computational speed to perform rapid whitening and SNR, $rho_"opt"$, calculation during model training and inference, an existing Welch method from the SciPy scientific Python library @scipy was adapted and added to the GravyFlow pipeline @gwflow_ref, converting its use of the NumPy vectorised CPU library @numpy to the TensorFlow GPU library @tensorflow; this converted code is seen in @psd_calculation.
#figure(
```py
import tensorflow as tf
import tensorflow_probability as tfp

# Note: `detrend` and `fftfreq` are TensorFlow ports of the equivalent
# SciPy functions, defined elsewhere in GravyFlow.

@tf.function
def calculate_psd(
        signal: tf.Tensor,
        nperseg: int,
        noverlap: int = None,
        sample_rate_hertz: float = 1.0,
        mode: str = "mean"
    ) -> (tf.Tensor, tf.Tensor):

    if noverlap is None:
        noverlap = nperseg // 2

    # Remove the mean of the signal before segmentation:
    signal = detrend(signal, axis=-1, type='constant')

    # Step 1: Split the signal into overlapping segments:
    step = nperseg - noverlap
    frames = tf.signal.frame(signal, frame_length=nperseg, frame_step=step)

    # Step 2: Apply a window function to each segment. A Hann window is
    # used here, but other windows can be applied as well:
    window = tf.signal.hann_window(nperseg, dtype=tf.float32)
    windowed_frames = frames * window

    # Step 3: Compute the periodogram (scaled, absolute value of the FFT)
    # for each segment:
    periodograms = \
        tf.abs(tf.signal.rfft(windowed_frames))**2 / tf.reduce_sum(window**2)

    # Step 4: Average the periodograms, using the median or mean depending
    # on the requested mode:
    if mode == "median":
        pxx = tfp.stats.percentile(periodograms, 50.0, axis=-2)
    elif mode == "mean":
        pxx = tf.reduce_mean(periodograms, axis=-2)
    else:
        raise ValueError(f"Mode {mode} not supported!")

    # Step 5: Compute the frequencies corresponding to the power spectrum
    # values:
    freqs = fftfreq(nperseg, d=1.0/sample_rate_hertz)

    # Create a mask that doubles every bin except the zero-frequency and
    # Nyquist bins, to account for the power in the discarded negative
    # frequencies:
    num_bins = pxx.shape[-1]
    mask = tf.concat(
        [
            tf.constant([1.]),
            tf.ones([num_bins - 2], dtype=tf.float32) * 2.0,
            tf.constant([1.])
        ],
        axis=0
    )

    return freqs, (mask * pxx / sample_rate_hertz)
```,
caption : [_Python @python ._ TensorFlow @tensorflow graph function used by GravyFlow @gwflow_ref to calculate the PSD of a signal. `signal` is the input time series as a TensorFlow tensor, `nperseg` is the number of samples per segment, $L$, and `noverlap` is the number of overlapping samples, $D$. TensorFlow has been used in order to utilise GPU parallelisation, which offers a significant performance boost over a similar function written in NumPy @numpy.]
) <psd_calculation>
A closely related property, the Amplitude Spectral Density (ASD), is given by the element-wise square root of the Power Spectral Density (PSD)
$ A(f_k) = S(f_k)^(compose 1/2). $ <asd-func>
Here $vectorn(a)^(compose 1/2)$ is the element-wise square root, i.e. $vectorn(a)^(compose 1/2) = [sqrt(a_1), ..., sqrt(a_i), ..., sqrt(a_N)] .$
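One further usage sketch ties these quantities together: the optimal SNR, $rho_"opt"$, mentioned above can be computed directly from a PSD such as the one returned by the function in @psd_calculation. This is a hedged illustration rather than GravyFlow's exact implementation; it assumes `psd` is sampled on the RFFT frequency grid of `waveform`:

```py
import tensorflow as tf

def optimal_snr(
    waveform: tf.Tensor, # Time-domain strain, shape: (num_samples,).
    psd: tf.Tensor,      # Noise PSD sampled on the matching RFFT grid.
    sample_rate_hertz: float
) -> tf.Tensor:
    num_samples = tf.cast(tf.shape(waveform)[-1], tf.float32)
    # Approximate the continuous Fourier transform from the discrete one:
    h_f = tf.signal.rfft(waveform) / tf.cast(sample_rate_hertz, tf.complex64)
    # Frequency resolution of the RFFT:
    df = sample_rate_hertz / num_samples
    # rho_opt^2 = 4 * sum_k |h(f_k)|^2 / S(f_k) * df:
    return tf.sqrt(4.0 * tf.reduce_sum(tf.abs(h_f)**2 / psd) * df)
```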
=== Noise Generation and Acquisition <noise_acquisition_sec>
There are two possible avenues for acquiring background noise to obfuscate our injections. We can either create artificial noise or use real segments extracted from previous observing runs. As was discussed in @interferometer_noise_sec, real interferometer noise is neither Gaussian nor stationary, and many of the noise sources which compose this background are not accounted for or modelled @det_char. This means that any artificial noise will only be an approximation of the real noise --- it is not clear, intuitively, how well this approximation will be suited to training an artificial neural network.
One perspective argues that using more approximate noise could enhance the network's generalisation capabilities because it prevents overfitting to the specific characteristics of any given noise distribution; this is the approach adopted by the MLy pipeline @MLy. Conversely, another perspective suggests that in order to properly deal with the multitude of complex features present in real noise, we should make our training examples simulate real noise as closely as possible @dataset_mismatch_problems @dataset_mismatch_problems_2 @dataset_mismatch_problems_3, even suggesting that models should be periodically retrained within the same observing run in order to deal with variations in the noise distribution. These are not discrete philosophies, and the optimal training method could lie somewhere between these two paradigms.
Evidently, in either case, we will want our validation and testing datasets to approximate the desired domain of operation as closely as possible; if they do not, we would have no evidence, other than assumption, that the model would have any practical use in real data analysis @dataset_mismatch_problems_3. The following subsection will outline the possible types of noise that could be used to create artificial training examples. Throughout the thesis, for all validation purposes, we have used real noise drawn from GPS times that are not used at any point during the training of models, even when the training has been done on real noise.
*White Gaussian:* The most simplistic and general approach, and therefore probably the most unlike real noise, is to use a white Gaussian background. This is as simplistic as it sounds; we generate $N$ random variables, where $N$ is the number of samples in our noise segment. Each sample is drawn from a normal distribution with a mean of zero and some variance according to the input scaling; often, in the case of machine learning input vectors, this would be unity; see the two uppermost plots in @noise_comparison.
*Coloured Gaussian:* This noise approximation increases the authenticity of the noise distribution by colouring it with a noise spectrum; typically, we use an ASD drawn from the interferometer we are trying to imitate in order to do this; see @psd-sec. By multiplying the frequency-domain transform of Gaussian white noise by a given ASD, we can colour that noise so that its PSD matches the corresponding spectrum. The procedure to do this is as follows (a minimal code sketch is given after the list):
+ Generate white Gaussian noise.
+ Transform the Gaussian noise into the frequency domain using a Real Fast Fourier Transform (RFFT).
+ Multiply the noise frequency spectrum by the selected ASD in order to colour it.
+ Return the newly coloured noise to the time domain by performing an Inverse RFFT (IRFFT).
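The sketch below is a minimal TensorFlow rendering of these four steps. It assumes an `asd` tensor sampled on the RFFT grid of the output; normalisation by sample rate and duration is omitted for clarity:

```py
import tensorflow as tf

def colour_noise(num_samples: int, asd: tf.Tensor) -> tf.Tensor:
    # 1. Generate white Gaussian noise:
    white = tf.random.normal([num_samples])
    # 2. Transform it into the frequency domain with an RFFT:
    spectrum = tf.signal.rfft(white)
    # 3. Multiply the spectrum by the ASD to colour it:
    coloured = spectrum * tf.cast(asd, tf.complex64)
    # 4. Return the coloured noise to the time domain with an IRFFT:
    return tf.signal.irfft(coloured)
```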
There are at least two choices of PSD we could use for this process. We could use the PSD of the detector design specification. It represents the optimal PSD given perfect conditions, no unexpected noise sources, and ideal experimental function. This would give a more general, idealistic shape of the PSD across a given observing run. Alternatively, we could use the PSD of a real segment of the background recorded during an observing run; this would contain more anomalies and be a closer approximation to the specific noise during the period for which the PSD was taken. Since the PSD is time-averaged, longer segments will result in more general noise. The MLy pipeline @MLy refers to this latter kind of noise as *pseudo-real* noise; see examples of these noise realisations in the four middle plots of @noise_comparison.
*Real:* Finally, the most authentic type of noise that can be gathered is real interferometer noise. This is noise that has been sampled directly from a detector. Even assuming that you have already decided on which detector you are simulating, which is required for all but white noise generation, there are some extra parameters, shared with the pseudo-real case, that need to be decided: the source of the detector data, the time period from which you are sampling, and whether to veto any features that may be present in the segment --- e.g. segments that contain events, candidate events, and known glitches.
To acquire the real data, we utilise the GWPy Python Library's @gwpy data acquisition functionality --- since there are multiple formats in which we could retrieve the data, we must specify some parameters, namely, the frame, the channel, and the state flag. Interferometer output data is stored in a custom file format called a frame file @frame-file; thus, the choice of frame determines the file to be read. Within each frame file lie multiple channels --- each of which contains data from a single output stream. These output streams can be raw data, e.g. raw data from the interferometer photodetector itself; various raw auxiliary data streams, such as from a seismometer; conditioned data, e.g., the primary interferometer output with lines removed; or the state flag channel, which contains information about the status of the detector at every time increment --- the state flag will indicate whether the detector is currently in observing mode or otherwise, so it is important to filter the data for the desired detector state. For the real noise used in this thesis, we use the frames, channels, and state flags shown in @detector_data_table. We have excluded all events and candidate events listed in the LIGO-Virgo-KAGRA (LVK) collaboration event catalogues @GWTC-1 @GWTC-2 @GWTC-3 but included detector glitches unless otherwise stated.
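As an illustration, acquiring a single segment of LIGO Hanford data with GWPy @gwpy might look as follows. This is a hedged sketch: the GPS times are placeholders, and the exact frametype string is our assumption of how the HOFT_C01 frame is requested:

```py
from gwpy.timeseries import TimeSeries
from gwpy.segments import DataQualityFlag

# Placeholder GPS interval within O3:
start, end = 1238166018, 1238170018

# Query the state flag so that only analysis-ready data is used:
ready = DataQualityFlag.query("H1:DCS-ANALYSIS_READY_C01:1", start, end)

# Fetch the strain channel from the chosen frame for each ready segment:
for segment in ready.active:
    strain = TimeSeries.get(
        "H1:DCS-CALIB_STRAIN_CLEAN_C01",
        start=segment[0],
        end=segment[1],
        frametype="H1_HOFT_C01",
    )
```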
#figure(
table(
columns: (auto, auto, auto, auto),
inset: 10pt,
align: horizon,
[*Detector*], [*Frame*], [*Channel*], [*State Flag*],
[LIGO Hanford (H1)], [HOFT_C01], [H1:DCS-CALIB_STRAIN_CLEAN_C01], [DCS-ANALYSIS_READY_C01:1],
[LIGO Livingston (L1)], [HOFT_C01], [L1:DCS-CALIB_STRAIN_CLEAN_C01], [DCS-ANALYSIS_READY_C01:1],
[VIRGO (V1)], [V1Online], [V1:Hrec_hoft_16384Hz], [ITF_SCIENCE:1],
),
caption: [The frame, channel, and state flags used when obtaining data from the respective detectors during the 3#super("rd") observing run (O3). This data was used as obfuscating noise when generating artificial examples to train and validate artificial neural network models throughout this thesis. It should be noted that although the clean channels were produced offline in previous observing runs, the current observing run, O4, produces cleaned channels in its online run, so using the cleaned channels during model development ensures that the training, testing, and validation data is closer to what would be the normal operating mode for future detection methods.]
) <detector_data_table>
#figure(
image("noise_comparison.png", width: 100%),
caption: [One-second examples of the four possible types of simulated and real noise considered by this thesis. Where real noise is used, it is taken from the LIGO Livingston detector during the third observing run at the GPS times listed. In order, from top to bottom, these are examples of white Gaussian noise, coloured Gaussian noise, pseudo-real noise, and real noise. A description of these noise types and their generation can be found in @noise_acquisition_sec. The left column shows the unaltered values of the noise. Note that the noise has been scaled in all cases except for the pure white noise, which is generated at the correct scale initially. This scaling is used to reduce precision errors and integrate more effectively with the machine learning pipeline, as most loss and activation functions are designed around signal values near unity; see @loss_functions_sec and @activation_functions_sec. The right column shows the same noise realisations after they have been run through a whitening filter. In each case, the PSD of a #box("16.0" + h(1.5pt) + "s") off-source noise segment (not displayed) is used to generate a Finite Impulse Response (FIR) filter, which is then convolved with the on-source data; see @feature-eng-sec. For the simulated and pseudo-real noise cases, the off-source data is generated using the same method as the on-source data but with a longer duration. In the real noise case, the off-source data consists of real interferometer data drawn from #box("16.5" + h(1.5pt) + "s") before the start of the on-source segment to #box("0.5" + h(1.5pt) + "s") before the start of the on-source segment. This #box("0.5" + h(1.5pt) + "s") gap is introduced because #box("0.5" + h(1.5pt) + "s") must be cropped from the data following the whitening procedure in order to remove edge effects induced via windowing, as well as acting as a buffer to reduce contamination of the off-source data with any features present in the on-source data. Note that the whitened noise plots look very similar for the three simulated noise cases --- a close examination of the data reveals that there is some small variation between the exact values. This similarity occurs because the off-source and on-source noise segments for these examples are generated with identical random seeds and thus have identical underlying noise realisations (which can be seen exactly in the unwhitened white noise plot). Since the PSDs of the on-source and off-source data are nearly identical for the simulated cases, the whitening procedure reverts the noise almost perfectly to its white state. If anything, this similarity boosts confidence that our custom whitening procedure is operating as expected.]
) <noise_comparison>
For our baseline training dataset used in this section, we will employ a real-noise background. An argument can be made that it is an obvious choice --- because it is real noise, it contains the full spectrum of noise features that might be present in a real observing run, even if it does not contain the particular peculiarities of any given future observing run in which we may wish to deploy developed models.
In each case, we will acquire two seconds of data at a sample rate of #box("2048.0" + h(1.5pt) + "Hz"); this includes #box("0.5" + h(1.5pt) + "s") of data on either side of the central second, which will be cropped after whitening. The whitening is performed similarly in all cases in order to ensure symmetry when comparing obfuscation methods. A power-of-two sample rate is used as it simplifies many of the mathematical operations that need to be performed during signal and injection processing, which may, in some cases, improve performance, as well as help to avoid edge cases that may arise from odd numbers. This frequency was selected as its Nyquist frequency of #box("1024.0" + h(1.5pt) + "Hz") will encompass nearly the entirety of the frequency content of BBH signals; it also covers a large portion of the search space of proposed transient burst sources. The duration of #box("1.0" + h(1.5pt) + "s") is a relatively arbitrary choice; however, it is often the choice for similar examples found in the literature @gabbard_messenger_cnn @george_huerta_cnn, which makes comparison easier. It also encompasses the majority of the signal power of BBH waves @first_detction, as well as the theoretically detectable length of many burst sources @bursts_O1. For each on-source noise example gathered or generated, #box("16.0" + h(1.5pt) + "s") of off-source background noise is also acquired to use for the whitening procedure; see @feature-eng-sec.
In the case where multiple detectors are being used simultaneously during training or inference, such as coherence detection, noise is generated independently for each interferometer using the same methods, with the restriction that noise acquired from real interferometer data is sampled from each detector within a common time window of #box("2048.0" + h(1.5pt) + "s") so that the noise all originates from a consistent time and date. This is done as there are periodic non-stationary noise features that repeat in daily, weekly, and yearly cycles due to weather, environmental conditions, and human activity @det_char. When validating methods, we want to make our validation data as close as possible to reality whilst maintaining the ability to generate large datasets. As we are only ever training our method to operate in real noise conditions (which our validation data attempts to mimic), there is no need to deviate from this method of acquiring noise for our training datasets.
=== Waveform Generation <injection-gen-sec>
Once the background noise has been acquired or generated, the next step is to introduce some differentiation between our two classes, i.e. we need to add a transient signal into some of our noise examples so that our model can find purpose in its existence. When we add a transient into background noise that was not there naturally, we call this an *injection*, since we are artificially injecting a signal into the noise. This injection can be a transient of any type.
Typically, this injection is artificially simulated both due to the limited @GWTC-1 @GWTC-2 @GWTC-3 (or non-existent @bursts_O1 @bursts_O2 @bursts_O3) number of real examples in many cases and because we will only be able to obtain the real signal through the lens of an interferometer, meaning it will be masked by existing real detector noise. If we were to inject a real signal into some other noise realisation, we would either have to perform a denoising operation (which, even when possible, would add distortion to the true signal) or inject the signal plus its existing real noise into the new noise, effectively doubling the present noise and making injection scaling a difficult task. Using artificial examples also gives us granular control of the parameters of each waveform, which can be useful when designing training datasets and when evaluating our model in different areas of parameter space. Thus, we will be using simulated injections to generate our training, testing, and validation datasets. This is not unprecedented; most other gravitational-wave detection and parameter estimation methods rely on simulated signals for their operation, including matched filtering @pycbc.
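As a sketch of the injection step itself (illustrative names; GravyFlow's actual scaling machinery is more involved), the waveform can be rescaled to a target optimal SNR, using an `optimal_snr` helper like the one sketched in @psd-sec, and added to the noise:

```py
import tensorflow as tf

def inject(
    noise: tf.Tensor,    # Background noise, shape: (num_samples,).
    waveform: tf.Tensor, # Projected waveform, same shape as `noise`.
    psd: tf.Tensor,      # Noise PSD on the matching RFFT grid.
    target_snr: float,
    sample_rate_hertz: float
) -> tf.Tensor:
    # Rescale the waveform so that its optimal SNR matches the target:
    scale = target_snr / optimal_snr(waveform, psd, sample_rate_hertz)
    return noise + scale * waveform
```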
Luckily, there is a well-developed field of research into modelling gravitational-wave waveforms. These models are known as "approximants", so named because they only approximate real gravitational-wave signals @imrphenom_d. As well as providing valuable insights into general relativity and the behaviour of CBCs in and of themselves @aproximant_usage, approximants are used in both detection and parameter estimation pipelines @pycbc. In our case, approximants can be injected into simulated or real noise to generate artificial examples of interferometer data that contain gravitational-wave signals. Depending on the complexity and accuracy of the chosen approximant and the source parameter range you are investigating, there will be some level of mismatch between any approximant and the real waveform it is attempting to simulate, even when using state-of-the-art approximants @imrphenom_future.
To simulate BBH waveforms, we will be using a version of the IMRPhenomD approximant @imrphenom_d, which has been adapted from LALSimulation's @LALSimulation implementation to run on GPUs using NVIDIA's CUDA GPU library. We name this adapted waveform library cuPhenom @cuphenom_ref for consistency with other CUDA libraries such as cuFFT @cufft. IMRPhenomD has adjustable parameters that can be altered to generate BBHs across a considerable parameter space, although it should be noted that it does not simulate eccentricity, non-aligned spins, or higher modes. IMRPhenomD was calibrated for systems up to a mass ratio of 1:18 and spins up to $a/m$ ∼ 0.85, so we must be careful not to leave this parameter space when generating examples for our training dataset. Although IMRPhenomD was first published in 2015 @imrphenom_d, and newer approximants now exist, it remains accurate enough for most detection and parameter estimation tasks and will only suffer from significant mismatch at the edges of its parameter space or potentially if encountering systems that have any of the aforementioned features that IMRPhenomD does not attempt to model. This is still an ongoing area of research, and the exact effect of these features on detection and parameter estimation pipelines is still being investigated @higher_modes @eccentricity @precession. Since we have yet to detect any of these effects with a high degree of certainty, it is thought they are either rare and/or minimal enough not to affect most current searches given contemporary detector sensitivity. That being said, the search for the presence of these features is an exciting area of research that could, in time, reveal promising insights both into the sources themselves and the astrophysical conditions that lead to their creation.
The IMRPhenomD @imrphenom_d approximant generates a waveform by simulating the Inspiral, Merger, and Ringdown regions of the waveform, hence the IMR in the approximant name. The waveform is generated in the frequency domain; since we are working in the time domain, we must transform the signal into the time domain before we inject it into our noise segments. The inspiral is generated using post-Newtonian expressions, and the merger-ringdown is generated with a phenomenological ansatz; both parts of the model were empirically tuned using a small bank of numerical relativity waveforms. Detailed investigation of approximant generation is out of the scope of this thesis and will not be covered. See @example_injections for examples of waveforms generated using cuPhenom @cuphenom_ref.
The increased performance of cuPhenom is significant and speeds up the training and iteration process of models considerably @cuphenom_ref. cuPhenom's ability to generate injections on the fly during the training process without significant slowdown allows for very quick alteration of dataset parameters between training runs. It was felt that this advantage outweighed any gains that would be achieved by using newer waveform models that had not yet been adapted to the GPU, as it seems unlikely, especially in the detection case, that the newer waveform models would make for a significantly harder problem for the model to solve. This statement is, however, only an assumption, and it would be recommended that an investigation be carried out to compare the differences between approximants before any of the methods are used in a real application. A final retraining with these more accurate models would be recommended, in any case.
In the case of unmodelled burst detection, the accuracy of the signal shape is not as fundamental, as the ground truth shapes are not known and, for some proposed events, cover a very large shape space @supernovae-review. In order to cover the entire search space, GravyFlow uses White Noise Bursts (WNBs), generated on the GPU via a simple custom Python @python function utilising TensorFlow @tensorflow. The procedure for generating WNBs with randomised duration and frequency content is as follows.
+ A maximum waveform duration is decided; typically, this would be less than or equal to the duration of the example noise that you are injecting the waveform into, with some room for cropping.
+ Arrays of durations, minimum frequencies, and maximum frequencies are generated, each with a number of elements, $N$, equal to the number of waveforms that we wish to generate. These arrays can be pulled from any distribution as long as they obey the following rules: durations cannot be larger than our maximum requested duration or less than zero, and the frequency bounds cannot be less than zero or greater than the Nyquist frequency.
+ It is enforced that the maximum frequency is greater than the minimum frequency for any waveform by swapping values where this is not the case.
+ Gaussian white noise is generated with as many samples as, given the selected sample rate, will produce a time series with the same duration as our requested maximum waveform duration.
+ A number of samples at the end of each waveform are zeroed so that each waveform has a number of samples equivalent to the randomised duration assigned to that signal.
+ Each waveform is transformed into the frequency domain by an RFFT.
+ Samples are zeroed at each end of each frequency-domain signal in order to perform a bandpass and limit the waveform between the assigned frequency constraints for each waveform.
+ The remaining signal is windowed using a Hann window to reduce the effects of the discontinuities generated by the bandpass operation.
+ The frequency domain signal is then returned to the time domain via an IRFFT.
+ Finally, the time-domain waveform is enveloped by a sigmoid window.
+ Assuming the plus polarisation component of the waveform strain was generated first, repeat with the same parameters but different initial noise distributions for the cross polarisation component.
Because we have used random noise across a range of frequency spaces, our distribution will, in theory, cover all possible signals within the specified parameter range. These WNBs can generate waveforms that look qualitatively similar to many proposed burst sources, including current supernovae simulations; see @supernovae_example. See @example_injections for examples of our WNBs and @wnb_calculation for the code used to generate these waveforms.
#figure(
```py
import tensorflow as tf

# Note: `generate_envelopes` is assumed to be the GravyFlow helper that
# produces the sigmoid envelopes described in step 11 above.

@tf.function
def generate_white_noise_burst(
        num_waveforms: int,
        sample_rate_hertz: float,
        max_duration_seconds: float,
        duration_seconds: tf.Tensor,
        min_frequency_hertz: tf.Tensor,
        max_frequency_hertz: tf.Tensor
    ) -> tf.Tensor:

    # Casting:
    min_frequency_hertz = tf.cast(min_frequency_hertz, tf.float32)
    max_frequency_hertz = tf.cast(max_frequency_hertz, tf.float32)

    # Convert durations to numbers of samples:
    num_samples_array = tf.cast(sample_rate_hertz * duration_seconds, tf.int32)
    max_num_samples = tf.cast(max_duration_seconds * sample_rate_hertz, tf.int32)

    # Generate Gaussian noise, one time series per polarisation:
    gaussian_noise = tf.random.normal([num_waveforms, 2, max_num_samples])

    # Create a time mask that zeroes the samples outside each waveform's
    # requested duration:
    mask = tf.sequence_mask(num_samples_array, max_num_samples, dtype=tf.float32)
    mask = tf.reverse(mask, axis=[-1])
    mask = tf.expand_dims(mask, axis=1)

    # Mask the noise:
    white_noise_burst = gaussian_noise * mask

    # Window function:
    window = tf.signal.hann_window(max_num_samples)
    windowed_noise = white_noise_burst * window

    # Fourier transform:
    noise_freq_domain = tf.signal.rfft(windowed_noise)

    # Frequency index limits:
    max_num_samples_f = tf.cast(max_num_samples, tf.float32)
    num_bins = max_num_samples_f // 2 + 1
    nyquist_freq = sample_rate_hertz / 2.0
    min_freq_idx = tf.cast(
        tf.round(min_frequency_hertz * num_bins / nyquist_freq), tf.int32)
    max_freq_idx = tf.cast(
        tf.round(max_frequency_hertz * num_bins / nyquist_freq), tf.int32)

    # Create frequency masks using vectorised operations:
    total_freq_bins = max_num_samples // 2 + 1
    freq_indices = tf.range(total_freq_bins, dtype=tf.int32)
    freq_indices = tf.expand_dims(freq_indices, 0)
    min_freq_idx = tf.expand_dims(min_freq_idx, -1)
    max_freq_idx = tf.expand_dims(max_freq_idx, -1)
    lower_mask = freq_indices >= min_freq_idx
    upper_mask = freq_indices <= max_freq_idx
    combined_mask = tf.cast(lower_mask & upper_mask, dtype=tf.complex64)
    combined_mask = tf.expand_dims(combined_mask, axis=1)

    # Filter out undesired frequencies to perform the bandpass:
    filtered_noise_freq = noise_freq_domain * combined_mask

    # Inverse Fourier transform:
    filtered_noise = tf.signal.irfft(filtered_noise_freq)

    # Envelope each waveform with a sigmoid window:
    envelopes = generate_envelopes(num_samples_array, max_num_samples)
    envelopes = tf.expand_dims(envelopes, axis=1)
    filtered_noise = filtered_noise * envelopes

    return filtered_noise
```,
caption : [_Python @python ._ TensorFlow @tensorflow graph function to generate the plus and cross polarisations of WNB waveforms; see @injection-gen-sec for a description of the generation method. `num_waveforms` takes an integer value of the number of WNBs we wish to generate. `sample_rate_hertz` defines the sample rate of the data we are working with. `max_duration_seconds` defines the maximum possible duration of any signals within our output data. `duration_seconds`, `min_frequency_hertz`, and `max_frequency_hertz` all accept arrays, or in this case TensorFlow tensors, of values with a number of elements equal to `num_waveforms`. Both polarisations of each WNB are generated with parameters determined by the values of these three arrays at the equivalent index. This method is implemented by the GravyFlow pipeline @gwflow_ref.]
) <wnb_calculation>
#figure(
image("example_injections.png", width: 100%),
caption: [Eight simulated waveforms that could be used for injection into noise to form an obfuscated training, testing, or validation example for an artificial neural network. Note that only the plus polarisation component of the strain, $h_plus$, has been plotted in order to increase visual clarity. The leftmost four injections are IMRPhenomD waveforms generated using cuPhenom @cuphenom_ref, with parameters (shown in the adjacent grey information boxes) drawn from uniform distributions between #box("5.0" + h(1.5pt) + $M_dot.circle$) and #box("95.0" + h(1.5pt) + $M_dot.circle$) for the mass of both companions and between -0.5 and 0.5 for the dimensionless spin component. Note that during injection generation, the two companions are always reordered so that the mass of companion one is greater and that the IMRPhenomD waveform ignores the x and y spin components. The latter are included only for completeness of the code interface. The rightmost four injections consist of WNB waveforms generated via the method described in @injection-gen-sec. Their parameters are again drawn from uniform distributions and are shown in the grey box to their right. The durations are limited between #box("0.1"+ h(1.5pt) + "s") and #box("1.0" + h(1.5pt) + "s"), and the frequencies are limited to between #box("20.0" + h(1.5pt) + "Hz") and #box("500.0" + h(1.5pt) + "Hz"), with the minimum and maximum frequencies automatically swapped where necessary.]
) <example_injections>
#figure(
image("supernova_example.png", width: 80%),
caption: [The plus polarisation component of the gravitational-wave strain of a simulated core-collapse supernova at a distance of #box("10" + h(1.5pt) + "kpc"); this data was taken from @supernovae_waveforms. Although some structures can clearly be observed, it is possible to imagine that a method trained to detect WNB signals, such as those presented in @example_injections, might be able to detect the presence of such a signal. ]
) <supernovae_example>
=== Waveform Projection <projection-sec>
As has been discussed, gravitational waves have two polarisation states plus, $plus$, and cross, $times$, which each have their own associated strain values $h_plus$ and $h_times$ @gravitational_waves_ref @gravitational_wave_interfereometers. Since these strain polarisation states can have different morphologies and since the polarisation angle of an incoming signal paired with a given interferometer's response will alter the proportion of each polarisation that is perceptible by the detector, our approximant signals are also generated with two polarisation components. Before being injected into any data, the waveforms must be projected onto each detector in our network in order to simulate what that signal would look like when observed with that detector. This projection will account for the full antenna response of each detector @gravitational_wave_interfereometers. Since a given interferometer has different sensitivities depending on both the direction of the source and the polarisation angle of the incoming wave, some waves will be entirely undetectable in a given detector.
If we want accurate data when simulating multi-interferometer examples, we must account for both the polarisation angle and direction of the source so that the relative strain amplitudes and morphologies in each detector are physically realistic @gravitational_wave_interfereometers.
Since the detectors have a spatial separation, there will usually, depending on the source direction, also be a difference in the arrival time of the waves at the different detectors @gravitational_wave_interfereometers. This discrepancy is especially important for localising sources, as it provides the possibility of source triangulation, which, along with the antenna responses of each detector, can be used to generate a probability map displaying the probability that a wave originated from a given region of the sky. In coherence detection methods, it also allows for the exclusion of multi-interferometer detections if the detections arise with an arrival time difference greater than that which is physically possible given the spatial separation of the detectors.
None of this is essential when dealing with single detector examples --- in those cases, we could choose to forgo projection entirely and inject one of the strain polarisation components directly into the obfuscating noise as there are no time separations to model accurately and signal proportionality between detectors is also irrelevant.
The projection from both the antenna response parameters and the arrival time delay are dependent on the source direction @gravitational_wave_interfereometers. The plane of the wavefront and the direction of travel of the wave are dependent on the direction of the source. Since the sources are all extremely distant, the wavefront is considered a flat plane. Waves have some time duration, so both the time delay and antenna response parameters will change over the course of the incoming wave's duration as the Earth and the detectors move in space. As we are dealing with relatively short transients ($< 1.0 space s$), the change in these factors will be considered negligible and is not included in projection calculations.
Assuming that we ignore the Earth’s motion, the final waveform present in a detector is given by
$ h(t) = F_plus h_plus (t + Delta t) + F_times h_times (t + Delta t) $ <projection_equ>
where $h(t)$ is the resultant waveform present in the detector output at time $t$; $F_plus$ and $F_times$ are the detector antenna response parameters in the plus and cross polarisations for a given source direction, polarisation angle, and detector; $h_plus$ and $h_times$ are the plus and cross polarisations of the gravitational-wave strain of simulated or real gravitational waves; and $Delta t$ is the arrival time delay taken from a common reference point, often another detector or the Earth’s centre.
We can also calculate the relative times that the signals will arrive at a given detector,
$ Delta t = frac( (vectorn(x)_bold(0) - vectorn(x)_bold(d)) dot.op uvectorn(m), c) $ <time-delay_eq>
where $Delta t$ is the time difference between the wave's arrival at location $vectorn(x)_bold(d)$ and $vectorn(x)_bold(0)$, $c$ is the speed of light, $vectorn(x)_bold(0)$ is some reference location, often taken as the Earth’s centre, $vectorn(x)_bold(d)$ is the location for which you are calculating the time delay, in our case, one of our interferometers, and $uvectorn(m)$ is the direction of the gravitational-wave source. If we work in Earth-centred coordinates and take the Earth's centre as the reference position, so that $vectorn(x)_bold(0) = [0.0, 0.0, 0.0]$, we can simplify @time-delay_eq to
$ Delta t = - frac(vectorn(x) dot.op uvectorn(m), c) . $ <time-delay_eq_sim>
Finally, combining @projection_equ and @time-delay_eq_sim, we arrive at
$ h(t) = F_plus h_plus (t - frac(vectorn(x) dot.c uvectorn(m), c) ) + F_times h_times (t - frac(vectorn(x) dot.c uvectorn(m) , c)) . $ <final_response_equation>
In practice, for our case of discretely sampled data, we first calculate the effect of the antenna response in each detector and then apply a heterodyne shift to each projection to account for the arrival time differences. When multiple detector outputs are required for training, testing, or validation examples, GravyFlow performs these calculations using a GPU-converted version of the PyCBC @pycbc project_wave function; see @projection_examples for example projections.
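As a minimal illustration of @final_response_equation and @time-delay_eq_sim, the following sketch computes the arrival time delay and combines the two polarisations. The antenna response factors $F_plus$ and $F_times$ are assumed to be precomputed, and the sub-sample heterodyne shift used in practice is approximated here by a whole-sample roll:

```py
import numpy as np

C_METERS_PER_SECOND = 299792458.0

def time_delay_from_earth_center(detector_position_meters, source_direction):
    # delta_t = -(x . m) / c, with the Earth's centre as the reference point;
    # source_direction is a unit vector pointing towards the source.
    return -np.dot(detector_position_meters, source_direction) / C_METERS_PER_SECOND

def project_waveform(h_plus, h_cross, f_plus, f_cross, delta_t_seconds,
                     sample_rate_hertz):
    # Combine the polarisations using the antenna response factors, then apply
    # the arrival time delay. A whole-sample roll is used here for simplicity;
    # GravyFlow applies a heterodyne shift for sub-sample accuracy.
    h = f_plus * h_plus + f_cross * h_cross
    shift_samples = int(round(delta_t_seconds * sample_rate_hertz))
    return np.roll(h, shift_samples)
```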
#figure(
image("projection_examples.png", width: 80%),
caption: [Example projection of two artificial gravitational-wave waveforms. The blue waveforms have been projected into the LIGO Livingston interferometer, the red waveforms have been projected into the LIGO Hanford interferometer, and the green waveforms have been projected into the Virgo interferometer. The left column displays different projections of an IMRPhenomD waveform generated with the cuPhenom GPU library @cuphenom_ref. The right column displays different projections of a WNB waveform generated with the method described in @injection-gen-sec. The projections are performed using a GPU adaptation of the PyCBC Python library's @pycbc project_wave function. Both waveforms are projected from different source locations; the projection and time displacement are different in each case. ]
) <projection_examples>
=== Waveform Scaling
Once waveforms have been projected with the correct proportionality, we must have some method to inject them into obfuscating noise with a useful scaling. If using physically scaled approximants, such as the IMRPhenomD waveform, we could forgo scaling entirely by calculating the resultant waveform that would be generated by a CBC at a specified distance from Earth, then injecting this into correctly scaled noise (or simply raw real noise). However, since we are also using non-physical waveforms such as WNBs, and because we would like a more convenient method of adjusting the detectability of our waveforms, we will use a method to scale the waveforms to a desired proportionality with the noise.
Evidently, if we injected waveforms that have been scaled to values near unity into real unscaled interferometer noise (which is typically on the order of $10^(-21)$), even a very simple model would not have much of a problem identifying the presence of a feature. Equally, if the reverse were true, no model could see any difference between interferometer data with or without an injection. Thus, we must acquire a method to scale our injections so that their amplitudes have a proportionality with the background noise that is similar to what might be expected from real interferometer data.
Real data holds a distribution of feature amplitudes, with quieter events appearing in the noise more commonly than louder ones @gravitational_wave_population @network_snr --- this is because gravitational-wave amplitude scales inversely with distance @network_snr @gravitation, whereas the volume of searchable space, and thus matter and, generally, the number of systems which can produce gravitational waves, scale cubically with distance from Earth.
Features with quieter amplitudes will, in general, be harder for a given detection method to identify than features with louder amplitudes. We must design a training dataset that contains a curriculum that maximises model efficacy across our desired regime, with examples that are difficult but never impossible to classify and perhaps some easier cases that can carve channels through the model parameters, which can be used to direct the training of more difficult examples.
In any given noise distribution, there will, for any desired false alarm rate, be a minimum detectable amplitude below which it becomes statistically impossible to make any meaningful detections @gw_gaussian_case. This minimum amplitude occurs because even white Gaussian noise will occasionally produce data that looks indistinguishable from a certain amplitude of waveform.
We can use matched filtering statistics to prove this point, as we know that given an exactly known waveform morphology and perfect Gaussian noise, matched filtering is the optimal detection statistic @gw_gaussian_case. The probability that a matched filtering search of one template produces a false alarm depends only on the chosen detection threshold, which in turn determines the rate at which true positives are missed. We can use a threshold on the $cal(F)$-statistic, $cal(F)_0$, to adjust this rate. Assuming that the noise is purely Gaussian and we are only searching for one specific template, the probability of false detections of this exact waveform, i.e. $P(cal(F) > cal(F)_0)$, can be expressed as
$ P_F (cal(F)_0) = integral_(cal(F)_0)^infinity p_0(cal(F))d cal(F) = exp(-cal(F_0)) sum_(k=0)^(n/2-1) frac(cal(F)_0^k, k!) $ <false_alarm_rate_eq>
where $n$ is the number of degrees of freedom of the $chi^2$ distribution, and $p_0$ is the probability density function of $cal(F)$ when a known signal is not present. We can see from @false_alarm_rate_eq that the False Alarm Rate (FAR) in this simple matched filtering search is dependent only on the arbitrary choice of $cal(F)_0$. However, in practice, the choice of $cal(F)_0$ will be determined by the minimum amplitude waveform you wish to detect, because the probability of detection given the presence of a waveform, $P_D$, is dependent on the optimal SNR of that waveform, denoted $rho$ in the following, which has a loose relationship to the amplitude of the waveform. The probability of detection is given by
$ P_D (rho, cal(F)_0) = integral_(cal(F)_0)^infinity p_1(rho, cal(F))d cal(F) = integral_(cal(F)_0)^infinity frac((2 cal(F))^((n/2 - 1)/2), rho^(n/2 - 1)) I_(n/2-1) (rho sqrt(2 cal(F))) exp(-cal(F) - 1/2 rho^2) d cal(F) $
where $I_(n/2-1)$ is the modified Bessel function of the first kind and order $n/2 - 1$, and $p_1$ is the probability density function of $cal(F)$ when a known signal is present. For more information, please refer to @gw_gaussian_case.
More complex types of noise, however, like real LIGO interferometer noise, could potentially produce waveform simulacra more often than artificially generated white noise @det_char.
Louder false alarms are less likely than quieter ones, and at a certain amplitude, a given detection method will start producing a greater number of false alarms than the desired false alarm rate. If our training dataset includes waveforms with an amplitude that would trigger detections with a false alarm rate near or less than our desired rate, this could significantly reduce the performance of our network @feature_noise, so we must select a minimum amplitude that maximises our detection efficiency at a given false alarm rate.
Our minimum possible detection amplitude is limited by the combination of the noise and the false alarm rate we desire. There is no meaningful maximum signal amplitude, other than an impractically loose upper bound set by the closest possible gravitational-wave-producing systems to Earth (a nearby supernova or CBC, for example), but such events are so astronomically rare as not to be worth considering. Events will, however, follow a distribution of amplitudes @network_snr @gravitational_wave_population. As is often the case, we can try to generate our training data using a distribution that is as close as possible to the observed data, with the exception of a lower amplitude cutoff @dataset_mismatch_problems_3. Alternatively, we can use a non-realistic distribution, uniform or perhaps Gaussian across some amplitude regime that contains the majority of real signals, making the assumption that any detection methods we train using this dataset will generalise to higher amplitudes, or failing that, that the missed signals will be so loud that they would not benefit greatly from improved detection methods.
Thus far in this subsection, we have been talking rather nebulously about waveform "amplitude", as if that is an easy thing to define in a signal composed of many continuous frequency components. There are at least three properties we might desire from this metric. Firstly, magnitude, some measure of the energy contained by the gravitational wave as it passes through Earth --- this measure contains a lot of physical information about the gravitational wave source. Secondly, significance, given the circumstances surrounding the signal, we may want to measure how likely the signal is to have been astrophysical rather than terrestrial, and finally, closely related to the significance and perhaps most importantly when designing a dataset for artificial neural network training, the detectability, given a chosen detection method this would act as a measure of how easy it is for that method to detect the signal.
Naively, one might assume that simply using the maximum amplitude of the strain, $h_op("peak")$, would be a good measure, and indeed, this would act as a very approximate measure of the ease of detection --- but it is not a complete one. Consider, for a moment, a sine-Gaussian with an extremely short duration on the order of tens of milliseconds but a maximum amplitude that is only slightly louder than a multi-second long BNS signal @first_bns. You can imagine from this example that the BNS would be considerably easier to detect, but if you were going by $h_op("peak")$ alone, then you would have no idea.
Within gravitational-wave data science, there are nominally two methods for measuring the detectability of a signal: the Root-Sum-Squared strain amplitude @gravitational_wave_interfereometers @hrss_ref, $h_op("rss")$, and the optimal matched-filter Signal-to-Noise Ratio, $rho_"opt"$ @snr_ref @gravitational_wave_interfereometers. What follows is a brief description of these metrics.
==== The Root-Sum-Squared strain amplitude, $h_op("rss")$ <hrss-sec>
The Root-Sum-Squared strain amplitude, $h_op("rss")$, is a fairly simple measure of detectability @hrss_ref. Unlike $rho_"opt"$, it is exclusive to gravitational-wave science. It accounts for the power contained across the whole signal by integrating the square of the strain across its duration, essentially finding the area contained by the waveform. It is given by
$ h_op("rss") = sqrt(integral (h_plus (t)^2 + h_times (t)^2 )d t) $
or written in its discrete form, which is more relevant for digital data analysis
$ h_op("rss") = sqrt(sum_(i=1)^(N) (h_plus [t_i]^2 + h_times [t_i]^2)) $
where $h_op("rss")$ is the root-sum-squared strain amplitude, $h_plus (t)$ and $h_times (t)$ are the plus and cross polarisations of the continuous strain, $h_plus [t_i]$ and $h_times [t_i]$ are the plus and cross polarisations of the discrete strain at the i#super("th") data sample, and $N$ is the number of samples in the waveform.
It should be noted that with any measure that utilises the strain, such as $h_op("peak")$ and $h_op("rss")$, there is some ambiguity concerning where exactly to measure strain. You could, for example, measure the raw strains $h_plus$ and $h_times$ before they have been transformed by the appropriate detector antenna response functions, or you could take the strain $h$ after it has been projected onto a given detector. The advantage of the former is that you can fairly compare the magnitude of different gravitational waves independent of information about the interferometer in which it was detected. This is the commonly accepted definition of the $h_op("rss")$.
The $h_op("rss")$ is most often used during burst analysis as a measure of the detectability, magnitude, and significance of burst transients. Within CBC detection, the SNR, $rho_"opt"$, is often preferred. Whilst $h_op("rss")$ is a simple and convenient measure, it ignores the noise, so it cannot by itself tell us whether a signal is detectable.
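A minimal TensorFlow sketch of the discrete form above (note that, by convention, it operates on the raw polarisations, before projection onto any detector):

```py
import tensorflow as tf

def calculate_hrss(h_plus: tf.Tensor, h_cross: tf.Tensor) -> tf.Tensor:
    # Root-sum-squared strain amplitude: the square root of the summed
    # squares of both polarisations across all samples, following the
    # discrete form given above.
    return tf.math.sqrt(tf.reduce_sum(h_plus**2 + h_cross**2, axis=-1))
```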
==== Optimal Signal-to-Noise Ratio (SNR) ($rho_"opt"$) <snr-sec>
The optimal Signal-to-Noise Ratio (SNR), $rho_"opt"$, solves this problem by acting as a measure of detectability, magnitude, and significance in comparison to the background noise. Consequently, because it is relative to the noise, the magnitude of a given waveform can only be compared to the optimal SNR of a waveform that was obfuscated by a similar noise distribution. If a real gravitational-wave signal were detected in a single LIGO detector, say LIGO Hanford, then its optimal SNR would be significantly larger than that of the same signal detected only in Virgo, even if the signal was aligned in each case to originate from the optimally detectable sky location. This is because the sensitivity of the Virgo detector is substantially lower than that of the two LIGO detectors @detector_sensitivity, so the noise is proportionally louder compared to the waveforms.
The optimal SNR is, however, a good measure of detectability: detection methods do not care about the absolute magnitude of a signal when attempting to analyse it; the only relevant factors are the raw data output, consisting of the portion of the gravitational-wave strain perceptible given the detector's antenna response function, see @final_response_equation, and the interferometer noise at that time.
The SNR can also sometimes be an ambiguous measurement, as there are multiple different metrics that are sometimes referred to by this name, most prominently, a ratio between the expected value of the signal and the expected value of the noise, or sometimes the ratio between the root mean square of the signal and noise. Within gravitational-wave data science, though there is sometimes confusion over the matter, the commonly used definition for SNR is the matched filter SNR, $rho_"opt"$ @snr_ref. Since matched filtering is the optimal method for detecting a known signal in stationary Gaussian noise @snr_ref, we can use the result of a matched filter of our known signal with that signal plus noise as a measure of the detectability of the signal in a given noise distribution.
The optimal SNR, $rho_"opt"$, is given by
$ rho_"opt" = sqrt(4 integral_0^infinity (|accent(h, tilde.op)(f)|^2)/ S(f) d f) $
where $rho_"opt"$ is the optimal SNR, $S(f)$ is the one-sided PSD, and
$ accent(h, tilde.op)(f) = integral_(-infinity)^infinity h(t) e^(-i 2 pi f t) d t $
is the Fourier transform of $h(t)$. The coefficient of 4 arises as follows: in order to use only the one-sided transform, we assume that $S(f) = S(-f)$, which is valid because the input time series is entirely real; this contributes a factor of two, and since we integrate between 0 and $infinity$ rather than $-infinity$ and $infinity$, we apply a further factor of two.
Because, again, for data analysis purposes, the discrete calculation is more useful, the $rho_"opt"$ of discrete data is given by
$ rho_"opt" = sqrt(4 Delta f sum_(k=1)^(N/2) (|accent(h, tilde.op)[f_k]|^2)/ S(f_k)) $ <snr-equation>

where $N$ is the number of samples, $Delta f = 1/T$ is the frequency resolution of a time series of duration $T$, and, in this case, the discrete Fourier transform $accent(h, tilde)[f]$ is given by

$ accent(h, tilde)[f_k] = sum_(n=0)^(N - 1) h[t_n] e^(-i (2 pi)/N k n) . $ <fourier-transform-eq>
For the work during this thesis, we have added a TensorFlow @tensorflow implementation for calculating the $rho_"opt"$ to GravyFlow @gwflow_ref. This implementation is shown in @snr_calculation.
#figure(
```py
import tensorflow as tf
import tensorflow_probability as tfp

# Note: calculate_psd and find_closest are helper functions assumed to be
# available from elsewhere in the GravyFlow pipeline.

@tf.function
def calculate_snr(
injection: tf.Tensor,
background: tf.Tensor,
sample_rate_hertz: float,
fft_duration_seconds: float = 4.0,
overlap_duration_seconds: float = 2.0,
lower_frequency_cutoff: float = 20.0,
) -> tf.Tensor:
injection_num_samples = injection.shape[-1]
injection_duration_seconds = injection_num_samples / sample_rate_hertz
# Check if input is 1D or 2D
is_1d = len(injection.shape) == 1
if is_1d:
# If 1D, add an extra dimension
injection = tf.expand_dims(injection, axis=0)
background = tf.expand_dims(background, axis=0)
overlap_num_samples = int(sample_rate_hertz*overlap_duration_seconds)
fft_num_samples = int(sample_rate_hertz*fft_duration_seconds)
# Set the frequency integration limits
upper_frequency_cutoff = int(sample_rate_hertz / 2.0)
# Calculate and normalize the Fourier transform of the signal
inj_fft = tf.signal.rfft(injection) / sample_rate_hertz
df = 1.0 / injection_duration_seconds
fsamples = \
tf.range(0, (injection_num_samples // 2 + 1), dtype=tf.float32) * df
# Get rid of DC
inj_fft_no_dc = inj_fft[:,1:]
fsamples_no_dc = fsamples[1:]
# Calculate PSD of the background noise
freqs, psd = \
calculate_psd(
background,
sample_rate_hertz = sample_rate_hertz,
nperseg = fft_num_samples,
noverlap = overlap_num_samples,
mode="mean"
)
# Interpolate ASD to match the length of the original signal
freqs = tf.cast(freqs, tf.float32)
psd_interp = \
tfp.math.interp_regular_1d_grid(
fsamples_no_dc, freqs[0], freqs[-1], psd, axis=-1
)
# Compute the frequency window for SNR calculation
start_freq_num_samples = \
find_closest(fsamples_no_dc, lower_frequency_cutoff)
end_freq_num_samples = \
find_closest(fsamples_no_dc, upper_frequency_cutoff)
    # Compute the SNR numerator in the frequency window; slicing the last
    # (frequency) axis handles both 2D and 3D inputs:
    inj_fft_squared = tf.abs(inj_fft_no_dc*tf.math.conj(inj_fft_no_dc))
    snr_numerator = \
        inj_fft_squared[..., start_freq_num_samples:end_freq_num_samples]
    # Use the interpolated PSD in the same frequency window:
    snr_denominator = \
        psd_interp[..., start_freq_num_samples:end_freq_num_samples]
# Calculate the SNR
SNR = tf.math.sqrt(
(4.0 / injection_duration_seconds)
* tf.reduce_sum(snr_numerator / snr_denominator, axis = -1)
)
SNR = tf.where(tf.math.is_inf(SNR), 0.0, SNR)
# If input was 1D, return 1D
if is_1d:
SNR = SNR[0]
return SNR
```,
caption : [_ Python @python. _ The GravyFlow TensorFlow @tensorflow graph function to calculate the optimal SNR, $rho_"opt"$, of a signal. `injection` is the input signal as a TensorFlow tensor, `background` is the noise into which the waveform is being injected, `sample_rate_hertz` is the sample rate of both the signal and the background, `fft_duration_seconds` is the duration of the FFT window used in the PSD calculation, `overlap_duration_seconds` is the duration of the overlap of the FFT window in the PSD calculation, and `lower_frequency_cutoff` is the low-frequency cutoff, below which frequency components are excluded from the calculation.]
) <snr_calculation>
Once the optimal SNR or $h_op("rss")$ of an injection has been calculated, it is trivial to scale that injection to any desired optimal SNR or $h_op("rss")$ value. Since both metrics scale linearly when the same coefficient scales each sample in the injection,
$ h_op("scaled") = h_op("unscaled") M_op("desired") / M_op("current") $ <scaling-equation>
where $h_op("scaled")$ is the injection strain after scaling, $h_op("unscaled")$ is the injection strain before scaling, $M_op("desired")$ is the desired metric value, e.g. $h_op("rss")$ or $rho_"opt"$, and $M_op("current")$ is the current metric value, again either $h_op("rss")$ or $rho_"opt"$. Note that since $h_op("rss")$ and $rho_"opt"$ are calculated using different representations of the strain, $h_op("rss")$ before projection into a detector and $rho_"opt"$ after, the order of operations will be different depending on the scaling metric of choice, i.e. for $h_op("rss")$: scale $arrow$ project, and for $rho_"opt"$: project $arrow$ scale.
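Since the scaling is a single multiplication, a minimal sketch is enough to illustrate it; the projection step below is named only for illustration, while `calculate_snr` is the function shown in @snr_calculation:

```py
def scale_to_metric(injection, current_metric_value, desired_metric_value):
    # Both h_rss and the optimal SNR scale linearly with the strain, so a
    # single multiplicative coefficient suffices; see the equation above.
    return injection * (desired_metric_value / current_metric_value)

# For SNR scaling the order of operations is project then scale, e.g.:
# projected = project_waveform(...)  # illustrative projection step
# current_snr = calculate_snr(projected, background, sample_rate_hertz)
# scaled = scale_to_metric(projected, current_snr, desired_snr)
```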
#figure(
image("scaling_comparison.png", width: 100%),
caption: [Eight examples of artificial injections scaled to a particular scaling metric and added to a real noise background to show variance between different scaling methods. The blue line demonstrates the whitened background noise plus injection; the red line represents the injection after being run through the same whitening transform as the noise plus injection, and the green line represents the injection after scaling to the desired metric. The leftmost column contains an IMRPhenomD waveform, generated using @cuphenom_ref, injected into a selection of various background noise segments and scaled using SNR; see @snr-sec. From upper to lower, the SNR values are 4, 8, 12, and 16, respectively. The rightmost column displays a WNB injected into various noise distributions, this time scaled using $h_op("rss")$; see @hrss-sec. From upper to lower, the $h_op("rss")$ values are as follows: $8.52 times 10^(-22)$, $1.70 times 10^(-21)$, $2.55 times 10^(-21)$, and $3.41 times 10^(-21)$. As can be seen, though both sequences are increasing in linear steps with a uniform spacing of their respective metrics, they do not keep in step with each other, meaning that if we double the optimal SNR of a signal, the $h_op("rss")$ does not necessarily also double.]
) <scaling_comparison>
For the experiments performed later in this section, we will use SNR as our scaling metric drawn from a uniform distribution with a lower cutoff of 8 and an upper cutoff of 20. These values are rough estimates of a desirable distribution given the SNR values of previous CBC detections.
If we wish to utilise multiple detectors simultaneously as our model input, we can scale the injections using either the network SNR or the $h_op("rss")$ before projection into the detectors. In the case of $h_op("rss")$, the scaling method is identical, performed before projection and injection. Network SNR is computed by summing individual detector SNRs in quadrature @network_snr, as shown by
$ rho_op("network") = sqrt(sum_(i=1)^(N) rho_i^2) $ <network-snr>
where $rho_op("network")$ is the network SNR, $N$ is the total number of detectors included in the input, and $rho_i$ is the detector SNR of the i#super("th") detector, given in each case by @snr-equation. To scale to a target network SNR, @scaling-equation can still be used, with the network SNR of @network-snr as the scaling metric, by multiplying the resultant projected injection in each detector by the scaling coefficient.
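A minimal sketch of this quadrature sum (the input layout, with detectors along the last axis, is an assumption for illustration):

```py
import tensorflow as tf

def calculate_network_snr(detector_snrs: tf.Tensor) -> tf.Tensor:
    # Sum the individual detector SNRs in quadrature, following the network
    # SNR equation above; detector_snrs has shape [..., num_detectors].
    return tf.math.sqrt(tf.reduce_sum(detector_snrs**2, axis=-1))
```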
=== Data Dimensionality and Layout <dim_sec>
Interferometer output data is reasonably different from the example MNIST data @mnist we have been using to train models thus far, the primary difference being that it is one-dimensional rather than two, being more similar to audio than image data. In fact, most of the features we are looking for within the data have a frequency that, when converted to sound, would be audible to the human ear @human_hearing, so it is often useful to think of the problem in terms of audio classification. In many ways, this reduced dimensionality is a simplification of the image case. In pure dense networks, for example, we no longer have to flatten the data before feeding it into the model; see @flatten-sec.
There are, however, multiple interferometers across the world. During an observing run, at any given time, there are anywhere from zero to five operational detectors online: LIGO Livingston (L1), LIGO Hanford (H1), Virgo (V1), KAGRA (K1), and GEO600 (G1) @open_data (although as of this thesis, there has yet to be a time when all five detectors were online simultaneously). GEO600 is not considered sensitive enough to detect any signals other than ones that would have to be so local as to be rare enough to dismiss the probability, so it is usually not considered for such analysis @geo_sensitivity. It should also be noted that during O4, both Virgo and KAGRA are operating with a sensitivity and up-time frequency that make it unlikely they will be of much assistance for detection @current_status. It is hoped that the situation at these detectors will improve for future observing runs. Even with just the two LIGO interferometers, it is possible to include multiple detectors within our model input, and in fact, such a thing is necessary for coherence detection to be possible @x-pipeline @cWB.
This multiplicity brings some complications in the construction of the input examples. Currently, we have only seen models that ignore the input dimensionality; however, with other network architectures, such as Convolutional Neural Networks (CNNs), this is not always the case @deep_learning_review. Therefore, we must consider the data layout. In the simplest cases, where we are not modifying the shape of the data before injection, we can imagine three ways to arrange the arrays; see @layout_options for a visual representation.
- *Lengthwise*: wherein the multiple detectors are concatenated end to end, increasing the length of the input array by a factor equal to the number of detectors. This would evidently still be a 1D problem, just an extended one. While this is perhaps the simplest treatment, we can imagine that it might be the hardest for the model to interpret, as we are mostly discarding the dimensionality, although no information is technically lost.
- *Depthwise*: Here, the detectors are stacked in the depth dimension, an extra dimension that is not counted toward the dimensionality of the problem, as it is a required axis for the implementation of CNNs, in which each slice represents a different feature map; see @cnn-sec. Often, this is how colour images are ingested by CNNs, with the red, green, and blue channels each taking up a feature map. This would seem an appropriate arrangement for the detectors. However, there is one significant difference between the case of the three-colour image and the stacked detectors, that being the difference in signal arrival time between detectors; this means that the signal will be offset in each channel. It is not intuitively clear how this will affect model performance, so this will have to be empirically compared to the other two layouts.
- *Heightwise*: The last layout we might envision increases the problem from 1D to 2D by concatenating the arrays along their height dimension, turning the stacked 1D arrays into a single 2D array.
#figure(
image("data_layout.png", width: 100%),
caption: [Possible data layouts for multi-detector examples. Here, $d$ is the number of included detectors, and $N$ is the number of input elements per time series. There are three possible ways to align interferometer time-series data from multiple detectors. These layouts are discussed in more detail in @dim_sec. ]
) <layout_options>
For pattern-matching methods, like that which is possible in the CBC case, there are also advantages to treating each detector independently. If we do this, we can use the results from each model as independent statistics, which can then be combined to create a result with a far superior False Alarm Rate (FAR) @false_alarm_rate_ref. We could combine the scores from both models and calculate a false alarm rate empirically using this combined score, or use each detector output as a boolean statistic indicating the presence or absence of a detection, and combine the FARs using @comb_far_eq.
For the first case treating the two models as one, the combined score is calculated by
$ op("S")_(op("comb")) = product_(i=1)^N op("S")_i $
where $op("S")_(op("comb"))$ is the combined classification score, which can be treated approximately as a probability if the output layer uses a softmax, or single sigmoid, activation function, see @softmax-sec, $op("S")_i$ is the output score of the $i^op("th")$ classifier input with the data from the $i^op("th")$ detector, and $N$ is the number of included detectors. Note that one could employ a uniquely trained and/or designed model for each detector or use the same model for each detector.
In the second case, treating each model as an independent boolean statistic and assuming that the output of the detectors is entirely independent except for any potential signal, the equation for combining FARs is
$ op("FAR")_(op("comb")) = (w - o) product_(i=1)^N op("FAR")_i $ <comb_far_eq>
where $op("FAR")_(op("comb"))$ is the combined FAR, $N$ is the number of included detectors, $w$ is the duration of the input vector in unit time, and $o$ is the overlap between windows @false_alarm_rate_ref, also in unit time. This equation works in the case when a detection method tells you a feature has been detected within a certain time window, $w$, but not the specific time during that window, meaning that $w_"start" < t_"central" < w_"end"$, where $t_"central"$ is the signal central time, $w_"start"$ is the input vector start time, and $w_"end"$ is the input vector end time.
If a detection method can be used to ascertain a more constrained time for a feature, with a timing window shorter than the inter-detector light travel time, then you can use the light travel time between the two detectors to calculate a FAR @false_alarm_rate_ref. For two detectors, combining the FARs in this way can be achieved by
$ op("FAR")_(1, 2) = 2 op("FAR")_1 op("FAR")_2 w_(1,2) $
where $op("FAR")_(1, 2)$ is the combined FAR, and $w_(1,2)$ is the light travel time between detectors 1 and 2, as this is the largest physically possible signal arrival time separation between detectors; gravitational waves travel at the speed of light, and the detector arrival time difference is maximised if the direction of travel of the wave is parallel to the straight-line path between the two detectors.
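Minimal sketches of both combinations described above (FARs in hertz, durations in seconds; the names are illustrative):

```py
import math

def combined_far_windowed(fars_hertz, window_seconds, overlap_seconds):
    # N-detector combination when only the analysis window is known;
    # see the combined FAR equation above.
    return (window_seconds - overlap_seconds) * math.prod(fars_hertz)

def combined_far_two_detectors(far_1_hertz, far_2_hertz, light_travel_seconds):
    # Two-detector combination using the maximum physical arrival time
    # separation, i.e. the light travel time between the detectors.
    return 2.0 * far_1_hertz * far_2_hertz * light_travel_seconds
```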
In the case where we are using $t_"central"$ and coincidence times to calculate our combined FAR, if we use overlapping data segments to feed our model, we must first group detections that appear in multiple inferences and find one central time for the detection. We can use an empirical method to determine how best to perform this grouping and identify if and how model sensitivity varies across the input window.
=== Feature Engineering and Data Conditioning <feature-eng-sec>
Invariably, there are data transforms that could be performed prior to ingestion by the model. If there are operations that we imagine might make the task at hand easier for the model, we can perform these transforms to improve network performance. Because we are attempting to present the data to the model in a form that makes the features easier to extract, this method of prior data conditioning is known as *feature engineering* @feature_engineering_ref. It should be noted that feature engineering does not necessarily add any extra information to the data. In fact, in many cases, it can reduce the overall information content whilst simultaneously simplifying the function that the model is required to approximate in order to operate as intended @feature_engineering_ref; see, for example, the whitening procedure described in @whitening-sec. As we have said before, although a dense neural network with a CAP above two is, at its limit, a universal function approximator @universal_aproximators, there are practical limitations to finding the right architecture and parameters for a given function, so sometimes simplifying the task can be beneficial. This can reduce the model size and training time, as well as improve achievable model performance when the time available for model and training optimisation is limited @feature_engineering_performance.
==== Raw Data
When designing the package of information that will be presented to the network at each inference, the simplest approach would be to feed the raw interferometer data directly into the model. There are certainly some methodologies that consider it optimal to present a model with as much unaltered information as possible @deep_learning_review. By performing little to no data conditioning, you allow the network to find the optimal path to its solution; if all the information is present and an adequate model architecture is instantiated, then a model should be able to approximate the majority of possible conditioning transforms during training. Not only this, but it may be able to find more optimal solutions that you have not thought of, perhaps ones customised to the specific problem at hand, rather than the more general solutions that a human architect is likely to employ. This methodology, however, assumes that you can find this adequate model architecture and have an adequate training procedure and dataset to reach the same endpoint that could be achieved by conditioning the data. This could be a more difficult task than achieving a result that is almost as good with the use of feature engineering.
==== Whitened Data <whitening-sec>
One type of data conditioning that we will employ is time-series whitening @whitening_ref. As we have seen in @interferometer_noise_sec, as well as containing transient glitches, the interferometer background is composed of many different continuous quasi-stationary sources of noise, the frequency distributions of which form a background that is unevenly distributed across our frequency search space @det_char. This leaves us with 1D time series whose noise frequency components have much greater power than any interesting features hidden within the data. This could potentially make detections using most methods, including artificial neural networks, much more difficult, especially when working in the time domain; see @whitnening_examples for an example of the PSD of unwhitened noise.
#figure(
image("whitening_examples.png", width: 100%),
caption: [An example of a segment of interferometer data before and after whitening. The two leftmost plots in blue show the PSD, _upper_, and raw data, _lower_, output from the LIGO Hanford detector before any whitening procedure was performed. The two rightmost plots show the same data after the whitening procedure described in @whitening-sec has been implemented. The data was whitened using the ASD of a #box("16.0" + h(1.5pt) + "s") off-source window from #box("16.5" + h(1.5pt) + "s") before the start of the on-source window to #box("0.5" + h(1.5pt) +"s") before. The #box("0.5" + h(1.5pt) +"s") gap is introduced as some data must be cropped after whitening due to edge effects caused by windowing. This also acts to ensure that it is less likely that any features in the on-source data contaminate the off-source data, which helps reduce the chance that we inadvertently whiten any interesting features out of the data.]
) <whitnening_examples>
Fortunately, there exists a method to flatten the noise spectrum of a given time series whilst minimising the loss of any transient features that do not exist in the noise spectrum @whitening_ref. This requires an estimate of the noise spectrum of the time series in question, one which does not contain the hidden feature. In this case, this noise spectrum will take the form of an ASD; see @asd-func.
Since the noise spectrum of the interferometer varies with time, a period of noise close to, but not overlapping with, the section of detector data selected for analysis must be chosen; we call this time series the *off-source* period. The period being analysed, the *on-source* period, is not included in the off-source period so that any potential hidden features being searched for, e.g. a CBC signal, do not contribute significant frequency components to the ASD, which may otherwise end up dampening the signal along with the noise during the whitening procedure. It should be noted, then, that whitening via this process uses additional information from the off-source period that is not present in the on-source data. During this thesis, we have elected to use an off-source window duration of #box("16.0" + h(1.5pt) +"s"), as this was found to be an optimal duration by experiments performed during the development of MLy @MLy, although it should be noted that we take the on-source and crop regions after the off-source window, as opposed to the initial MLy experiments, wherein the on-source region was taken at the centre of the off-source window. See @onsource_offsource_regions for a depiction of the relative locations of the on-source and off-source segments.
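A minimal sketch of this segmentation is given below; the durations follow the text, though the exact GravyFlow bookkeeping may differ:

```py
import numpy as np

def split_on_off_source(strain, sample_rate_hertz):
    # Slice a contiguous strain segment into a 16.0 s off-source window
    # followed by a 2.0 s region (0.5 s crop + 1.0 s on-source + 0.5 s crop);
    # the crops are discarded after whitening to remove edge effects.
    off_num = int(16.0 * sample_rate_hertz)
    pad_num = int(0.5 * sample_rate_hertz)
    on_num = int(1.0 * sample_rate_hertz)
    offsource = strain[:off_num]
    onsource_padded = strain[off_num : off_num + pad_num + on_num + pad_num]
    return offsource, onsource_padded
```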
#figure(
image("onsource_offsource_regions.png", width: 100%),
caption: [Demonstration of the on-source and off-source regions used to calculate the ASD used during the whitening operations throughout this thesis wherever real noise is utilised. Where artificial noise is used, the off-source and on-source segments are generated independently but with durations equivalent to what is displayed above. The blue region shows the #box("16.0" + h(1.5pt) + "s") off-source period, the green region shows the #box("1.0" + h(1.5pt) + "s") on-source period, and the two red regions represent the #box("0.5" + h(1.5pt) + "s") crop periods, which are removed after whitening. During an online search, the on-source region would advance in second-long steps, or, if some overlap was implemented, steps of less than a second, meaning all data would eventually be searched. The leading #box("0.5" + h(1.5pt) + "s") crop region will introduce an extra #box("0.5" + h(1.5pt) + "s") of latency to any search pipeline. It may be possible to avoid this latency with alternative whitening methods, but these are not discussed here. ]
) <onsource_offsource_regions>
We can whiten the data by convolving it with a suitably designed Finite Impulse Response (FIR) filter. This procedure is described by the following steps:
+ Calculate the ASD using @asd-func; this will act as the transfer function, $G(f)$, for generating the FIR filter. This transfer function is a measure of the frequency response of the noise in our system, and during the whitening process, we will essentially normalise the on-source data by this off-source noise in order to flatten its PSD. We generate a filter with a #box("1" + h(1.5pt) + "s") duration.
+ Next, we zero out the low and high-frequency edges of the transfer function with
$ G_"trunc" (f) = cases(
0 "if" f <= f_"corner",
G(f) "if" f_"corner" < f < f_"Nyquist" - f_"corner",
0 "if" >= f_"Nyquist" - f_"corner"
). $
This stage discards frequency components that we no longer care about, both because these frequencies are outside of the band we are most interested in and because discarding them can improve filter stability and performance whilst reducing artefacts.
3. Optionally, we can apply a Planck-taper window to smooth the discontinuities generated by step 2; we will apply this window in all cases. The Planck-taper window has a flat centre with smoothly tapering edges; thus, the window removes the discontinuities whilst affecting the central region as little as possible.
$ G_"smoothed" (f) = G_"trunc" (f) dot W(f). $
4. Next we compute the inverse Fourier transform of $G_"smoothed" (f)$ to get the FIR filter, $g(t)$, with
$ g(t) = integral_(-infinity)^infinity G_"smoothed" (f) e^(i 2 pi f t) d f. $
This creates a time-domain representation of our noise characteristics, which can then be used as a filter to remove similar noise from another time-domain signal. In practice, we utilise an inverse RFFT function to perform this operation on discrete data. As opposed to a full FFT, this transform exploits the symmetries inherent in transforming between real and complex data in order to halve the computational and memory requirements.
5. Finally, we convolve our FIR filter, $g(t)$, with the data we wish to whiten, $x(t)$,
$ x_"whitened" (t) = x(t) ast.op g(t) $
where $x_"whitened" (t)$ is the resultant whitened time series, $x(t)$ is the original unwhitened data, and $g(t)$ is the FIR filter generated from the off-source ASD. This convolution effectively divides the amplitude of the data at each frequency by the corresponding value of the off-source ASD. This flattens the PSD, making the noise uniform across frequencies; see @whitnening_examples for an example of this transform being applied to real interferometer data.
This method was adapted from the GWPy Python library @gwpy and converted from using NumPy functions @numpy to TensorFlow GPU operations @tensorflow in order to work in tandem with the rest of the GravyFlow @gwflow_ref pipeline and allow for rapid whitening during the training process.
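The following NumPy/SciPy sketch illustrates the procedure; it is a simplification rather than the GravyFlow implementation, and since SciPy provides no Planck-taper window, a Tukey window stands in for the smoothing step:

```py
import numpy as np
from scipy.signal import fftconvolve
from scipy.signal.windows import tukey

def whiten(onsource, offsource_asd, sample_rate_hertz, f_corner_hertz=20.0):
    # `offsource_asd` is the one-sided ASD of the off-source segment,
    # sampled from 0 Hz up to the Nyquist frequency.
    transfer = 1.0 / offsource_asd  # normalising by the noise amplitude flattens the PSD
    num_freqs = transfer.shape[0]
    df = (sample_rate_hertz / 2.0) / (num_freqs - 1)
    corner = int(f_corner_hertz / df)
    transfer[:corner] = 0.0
    if corner > 0:
        transfer[-corner:] = 0.0  # zero both frequency edges
    # Inverse transform to obtain the time-domain FIR filter, centre it,
    # and taper its edges to suppress wrap-around artefacts:
    fir = np.fft.irfft(transfer)
    fir = np.roll(fir, fir.shape[0] // 2) * tukey(fir.shape[0], alpha=0.5)
    # Convolve with the data to whiten; edge-corrupted samples must be
    # cropped afterwards, as described in the text.
    return fftconvolve(onsource, fir, mode="same")
```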
==== Pearson Correlation
A method of feature engineering that is employed prominently by the MLy pipeline @MLy involves extracting cross-detector correlation using the Pearson correlation @pearson_ref. The Pearson correlation is given by
$ r = frac( N (sum_(i=1)^N x_i y_i) - (sum_(i=1)^N x_i) (sum_(i=1)^N y_i ) , sqrt( [N sum_(i=1)^N x_i^2 - (sum_(i=1)^N x_i)^2] times [N sum_(i=1)^N y_i^2 - (sum_(i=1)^N y_i)^2] ) ) $
where $r$ is the Pearson correlation coefficient, $N$ is the number of data points in each input array, and $x_i$ and $y_i$ are the i#super("th") elements of the $vectorn(x)$ and $vectorn(y)$ arrays respectively @pearson_ref.
Nominally, this produces one scalar output value given two input vectors, $vectorn(x)$ and $vectorn(y)$, of equal length, $N$. A value of $r = 1$ indicates perfect correlation between the two vectors, whereas a value of $r = -1$ indicates perfect anti-correlation. Finally, a value of $r = 0$ indicates no correlation between the vectors. Note that if one of the vectors is entirely uniform, then the result is undefined.
This calculation assumes that the two vectors are aligned such that the value in $x_i$ corresponds to the value in $y_i$. If this is not the case, as will happen for interferometer data whenever there is an arrival time difference (which there will be for most sky locations), then this will be an imperfect measure of correlation, even discarding the obfuscation of the noise. Because, as was discussed previously in @projection-sec, we do not know the direction of the source a priori, MLy @MLy calculates the correlation for all possible arrival times given the light travel time between the two detectors in question. It uses minimum increments of the sample duration so that no heterodyning is necessary. This is done with the assumption that any difference in arrival time less than the sample duration will have a negligible effect on the correlation. It should be noted that this method is still hampered by the different polarisation projections, dependent on the source polarisation, and by the obfuscating noise. See @pearson_example for examples of the rolling Pearson correlation calculated for LIGO Hanford and LIGO Livingston interferometer data.
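A minimal TensorFlow sketch of the rolling calculation for a single detector pair is given below; `max_shift_samples` would be set from the inter-detector light travel time (around 40 samples for the LIGO pair at #box("2048.0" + h(1.5pt) + "Hz")). Note that `tf.roll` wraps cyclically, a simplification of a true time shift:

```py
import tensorflow as tf

def rolling_pearson(x: tf.Tensor, y: tf.Tensor, max_shift_samples: int) -> tf.Tensor:
    # Pearson correlation between two equal-length detector outputs, evaluated
    # at every whole-sample relative shift in [-max_shift, +max_shift].
    correlations = []
    for shift in range(-max_shift_samples, max_shift_samples + 1):
        y_shifted = tf.roll(y, shift=shift, axis=-1)
        x_centred = x - tf.reduce_mean(x)
        y_centred = y_shifted - tf.reduce_mean(y_shifted)
        r = tf.reduce_sum(x_centred * y_centred) / (
            tf.norm(x_centred) * tf.norm(y_centred)
        )
        correlations.append(r)
    return tf.stack(correlations)
```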
#figure(
image("pearson_example.png", width: 100%),
caption: [Example whitened on-source and correlation plots of real interferometer noise from a pair of detectors, in this case, LIGO Livingston and LIGO Hanford, with either coherent, incoherent, or no injections added. The leftmost plots adjacent to the info panels are grouped into pairs. In each case, LIGO Livingston is at the top, and LIGO Hanford is underneath. Identical on-source and off-source noise segments are used for each example of the same detector, and noise for each detector was gathered with a time difference of no more than #box("2048.0" + h(1.5pt) + "s"). In the leftmost plots, the green series is the unwhitened but projected waveform to be injected into the real noise from that detector. The red series is that same injection but subject to the same whitening procedure that will also be applied to the on-source plus injections, and the blue series is the whitened on-source plus injections. The rightmost plots each correspond to a pair of detectors and display the rolling Pearson correlation values between those two whitened on-source plus injection series. Since there is approximately a max arrival time difference of #box("0.01" + h(1.5pt) + "s") between LIGO Livingston and LIGO Hanford, the number of correlation calculations performed corresponds to the rounded number of samples required to represent #box("0.02" + h(1.5pt) + "s") of data at #box("2048.0" + h(1.5pt) + "Hz"). This number is two times the maximum arrival time difference because the difference could be positive or negative. In this case, that difference comes to 40 samples. All injections have been scaled to an optimal network SNR of 30 using the method described in @snr-sec. The upper pair of detectors has no injection. As would be expected, the correlation is low regardless of the assumed arrival time difference. The second pair from the top has been injected with a coherent white noise burst (WNB), see @injection-gen-sec, which has been projected onto the two detectors using a physically realistic mechanism previously described in @projection-sec. Here, the correlation is much stronger. We can see it rise and fall as the waveforms come in and out of coherence. The third from the top, the central plot, shows an injection of two incoherent WNBs. They are processed identically to the coherent case, but the initial waveforms are generated independently, including their durations. The Pearson correlation looks very similar to the pure noise case in the uppermost plot, as might be expected. The second from the lowest pair has been injected with a coherent IMRPhenomD waveform, which again has been correctly projected. We can observe that a small correlation is observed at an arrival time difference of around #box("0.005" + h(1.5pt) + "s"), suggesting that the two waveforms arrived at the detectors #box("0.005" + h(1.5pt) + "s") apart. Finally, the lowest plot depicts two incoherent IMRPhenomD waveforms projected into the noise. Though these are generated with different parameters, the shared similarities in morphology between all CBC waveforms cause correlation to be registered. By maximum amplitude alone, it may even appear as though there is more correlation happening here than in the correlated case. This highlights one potential weakness of using the Pearson correlation, which can sometimes show some degree of correlation even if the two waveforms are not produced using the same physically simulated mechanism.]
) <pearson_example>
As with the other mathematical operations described in this chapter, we have created a new GPU-based function for the calculation of the Pearson correlation in Python @python, using the TensorFlow GPU library @tensorflow for computational speed and easy integration with the rest of the GravyFlow pipeline @gwflow_ref.
==== Fourier Transform
So far, we have looked at data conditioning that produces results in the time domain. As we know, and as has been demonstrated by the previous discussion, many aspects of time-series processing are performed in the frequency domain. Often, features that are hard to distinguish in the time domain are relatively easy to spot in the frequency domain, even with the human eye. Many have characteristic morphologies, such as the distinct lines produced by powerline harmonics and violin modes. If we assume that what is easier for a human might also be easier for a machine learning method, then we should certainly examine feature engineering methods that take us into the frequency domain. The most obvious way to do this is with a simple Fourier transform @fourier_transform_ref, which takes us directly from a time-domain series to a frequency-domain one. The discrete form of the Fourier transform is given above in @fourier-transform-eq.
==== Power Spectral Density (PSD) and Amplitude Spectral Density (ASD)
As discussed in @psd-sec @psd_ref, the PSD is used in many calculations and transforms in gravitational-wave data analysis, so it makes sense that, along with the closely related ASD, it may also be useful information to provide to a model. Since the PSD has already been discussed in detail in @psd-sec, we will not linger on it here.
==== Spectrograms
The final feature engineering method that we will discuss allows us to represent data in both the time and frequency domains simultaneously. Spectrograms are visualisations of the Short-Time Fourier Transform (STFT) of a time series @spectrograms_ref. The STFT is computed by dividing a time series into many smaller periods, much like in the calculation of a PSD; however, instead of averaging the segments, the 2D output is used directly as an image in its own right, which displays how the frequency components of a time series fluctuate over its duration. This retains some information from the time domain. The 2D STFT of a continuous time series, $x(t)$, is given by
$ op("STFT")(x)(t, f) = integral_(-infinity)^infinity x(tau) w(t - tau) e^(-i 2 pi f tau) d tau $
where $op("STFT")(x)(t, f)$ is the value of the STFT of $x(t)$ at a given time, $t$, and frequency, $f$, $w(t)$ is a configurable window function that helps to minimise boundary effects, and $tau$ is a dummy integration variable used to navigate through the time domain. The STFT gains resolution in the time domain at the expense of losing some information from the frequency domain, making the spectrogram, like whitening, a lossy transform. In its discrete form, this becomes
$ op("STFT")(x)[n, k] = sum_(m = 0)^(N-1) x[m] w[n - m] e^((-i 2 pi k m) / N) $ <stft-eq>
where $op("STFT")(x)[n, k]$ is the value of the discrete STFT of a discrete time series, $x[m]$, at a given time index, $n$, and frequency index, $k$, $w[n]$ is a discrete window function, and $N$ is the number of samples in our discrete time series. It should be noted that there are two time indices present, $n$ and $m$, because a reduction in dimensionality along the time axis usually occurs, since the step between adjacent FFT segments is commonly greater than one.
When creating a spectrogram, the values are typically squared,
$ S[k, n] = (op("STFT")(x)[n, k])^2 $ <stft_sq>
to represent the power of the frequency components, similar to the process of calculating the PSD. Alternatively, the magnitude can be taken with
$ S[k, n] = |op("STFT")(x)[n, k]|. $
Before plotting, the data is often converted into decibels to better visualize the dynamic range,
$ op("DATA") = 10 times log (S[k, n]). $ <dec-eq>
We have created a custom Python TensorFlow function @tensorflow to perform these calculations on the GPU; see @spectrogram_examples for illustrations of this in use on real noise with injected waveform approximants. As is the case with multiple 1D time series, the question also remains of how to combine multiple spectrograms in the case of multiple detector outputs, see @dim_sec.
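A minimal sketch of such a function using TensorFlow's built-in STFT is shown below; the frame parameters are illustrative rather than GravyFlow defaults:

```py
import tensorflow as tf

def spectrogram_decibels(
    strain: tf.Tensor,
    frame_length: int = 256,
    frame_step: int = 32
) -> tf.Tensor:
    # Power spectrogram in decibels via the discrete STFT; see the equations
    # above. The small offset avoids taking the logarithm of zero.
    stft = tf.signal.stft(strain, frame_length=frame_length, frame_step=frame_step)
    power = tf.math.square(tf.math.abs(stft))
    return 10.0 * tf.math.log(power + 1e-30) / tf.math.log(10.0)
```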
#figure(
image("spectrogram_examples.png", width: 85%),
caption: [Six example noise segments and their corresponding spectrograms. In all cases, the noise is real interferometer data acquired from the LIGO Hanford detector during the 3#super("rd") observing run. It is whitened using the procedure described in @whitening-sec. For the time series plots, the green series represents the original, unwhitened waveform before injection, the red series is the waveform with the same whitening transform applied to it as was applied to the on-source background plus injection, and the blue series is the whitened on-source background plus injection, except for the first two time series plots, which contain no injection. The spectrograms are generated using the STFT described by @stft-eq, converted into power with @stft_sq, and finally transformed into a decibel logarithmic scale for plotting using @dec-eq. The two uppermost plots and their respective spectrograms have no injections. The two middle plots and their respective spectrograms have IMRPhenomD @imrphenom_d approximants created with cuPhenom injected into the noise @cuphenom_ref, and the two lower plots and their respective spectrograms have White Noise Burst (WNB) waveforms, generated using the method described in @injection-gen-sec, injected into the noise. In all cases, the injections are scaled to an optimal SNR randomly selected between 15 and 30; these are quite high values chosen to emphasise the features in the spectrograms. As can be seen, the whitened noise that contains injected features has spectrograms with highlighted frequency bins that have a magnitude much larger than the surrounding background noise; the different signal morphologies also create very different shapes in the spectrograms. This allows us to see the frequency components of the signal more easily, observe the presence of interesting features, and differentiate between the WNB and the CBC case. ]
) <spectrogram_examples>
==== Summary
There are multiple possibilities for how to condition the data before it is fed into any potential machine learning model, of which we have covered only some; see @feature-enginering-types. Most methods come at the cost of removing at least some information from the original data. It remains to be seen, however, whether this cost is worthwhile to ensure adequate model performance and feasible training durations.
#figure(
table(
columns: (auto, auto, auto),
inset: 10pt,
align: horizon,
[*Possible Model Inputs*], [*Dimensionality of Output*], [*Output Domain*],
[Raw Onsource + Injection], [1], [Time],
[Whitened Onsource + Injection], [1], [Time],
  [Pearson's Correlation], [1], [Time],
  [Fourier Transform (FFT)], [1], [Frequency],
  [Power Spectral Density (PSD)], [1], [Frequency],
  [Spectrogram], [2], [Time and Frequency]
),
caption: [A non-exhaustive table of possible data conditioning modes. Feature engineering is often used in order to simplify a problem before it is presented to a machine learning model. There are many ways we could do this with gravitational-wave data. Presented are some of the most common. Each is described in more detail in @feature-eng-sec.]
) <feature-enginering-types>
=== Transient Glitch Simulation <glitch-sec>
As has previously been noted, as well as a quasi-stationary coloured Gaussian background, interferometer noise also contains transient detector glitches caused by a plethora of sources, both known and unknown. These glitches have a prominent effect on the upper-sensitivity bound of most types of search, so it may be important to represent features of this type in our training pipeline. Previous experiments performed during the development of the MLy pipeline @MLy had shown that networks can often have greatly increased FARs when performing inference on data segments that contain transient glitches, even when those glitches were only present in the off-source segment used to generate the PSD used for data whitening. As such, a method to add glitches to the training distribution should be considered so that methods to deal with features of this type can hopefully be incorporated into the model's learned parameters during training.
There have been multiple attempts to classify and document the many transient glitches found in real interferometer data @noise_clasification @dict_glitch_classifer @gravity_spy, both through automated and manual means @online_glitch_classification_review. During operation within a standard observing run, there are both intensive manual procedures @O2_O3_DQ to characterise the detector state and automated pipelines such as the iDQ pipeline @idq. There is also a large amount of work done offline to characterise the noise in a non-live environment @O2_O3_DQ. These methods utilise correlation with auxiliary channels, frequency of triggers, and other information about the detector state to ascertain the likelihood that a given feature is a glitch or of astrophysical origin.
One of the most prominent attempts to classify transient glitches is the Gravity Spy project @gravity_spy, which combines machine learning and citizen science to try and classify the many morphologies of transient glitches into distinct classes. Successful classification methods are highly useful: if a similar morphology appears again in the data, it can be discounted as a probable glitch. Gravity Spy differentiates glitches into 19 classes, plus one extra "no_glitch" class for proposed segments that do not contain a glitch. The other 19 classes are as follows: air_compressor, blip, chirp, extremely_loud, helix, koi_fish, light_modulation, low_frequency_burst, low_frequency_lines, none_of_the_above, paired_doves, power_line, repeating_blips, scattered_light, scratchy, tomte, violin_mode, wandering_line, and whistle. Some types, such as blips, are much more common than others.
There are two options we could use as example data in our dataset in order to familiarise the model with glitch cases. We could either use real glitches extracted from the interferometer data using the timestamps provided by the Gravity Spy catalog @gravity_spy, or simulated glitches we generate ourselves. The form of each would vary depending on whether it is a multi- or single-detector example and whether we are attempting to detect CBCs or bursts.
*Real Glitches:* The addition of real glitches to the training dataset is a fairly intuitive process, though there are still some parameters that have to be decided upon. By using timestamps from the Gravity Spy catalog @gravity_spy, we can extract time segments of equal length to our example segments, which contain instances of different classes of glitches. We should process these identically to our regular examples, with the same whitening procedure and off-source segments; a sketch of how such segments might be acquired is shown below. Real glitches have the distinct advantage that any model will be able to use commonalities in their morphology to exclude future instances; this is also, however, their disadvantage. If a model is trained on specific morphologies, then the introduction of new glitch types in future observing runs, which is quite possible given the constant upgrades and changes to detector technology, may leave it less capable of rejecting previously unseen glitch types @gravity_spy. However, it is still possible that these glitches will help the model to reject anything other than the true type of feature it has been trained to recognise, by weaning it off simple excess power detection.
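To make this concrete, real glitch segments could be fetched with GWpy as sketched below. The GPS times here are illustrative placeholders rather than real Gravity Spy catalog entries, and the segment lengths simply follow the example and off-source durations used throughout this chapter.

```python
from gwpy.timeseries import TimeSeries

# Hypothetical glitch centre times; in practice these would be read from
# the Gravity Spy catalog metadata.
glitch_gps_times = [1238303737.2, 1238309161.8]

ON_SOURCE = 1.0    # seconds, the example duration
OFF_SOURCE = 16.0  # seconds of background used for whitening

glitch_segments = []
for gps in glitch_gps_times:
    # Fetch enough open data around the glitch for both the on-source
    # example and the preceding off-source whitening segment.
    data = TimeSeries.fetch_open_data(
        "L1", gps - OFF_SOURCE - ON_SOURCE, gps + ON_SOURCE
    )
    # Downsample to the 2048 Hz sample rate used for the examples.
    glitch_segments.append(data.resample(2048.0))
```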
*Simulated Glitches:* The other option is to use simulated glitches. The form of these glitches depends highly on the nature of the search, primarily because you wish to avoid confusion between the morphology of the feature you want the method to identify and simulated glitches. For example, in a CBC search, you could use WNBs as simulated glitches, as their morphologies are entirely distinct, and there is no possibility of confusion. However, if we are using coherent WNBs across multiple detectors to train a model to look for coherence, then we must be careful that our glitch cases do not look indistinguishable from true positive cases, as this would poison the training pool by essentially mislabeling some examples. We could, in this case, use incoherent WNBs as simulated glitches as, ideally, we want our coherent search to disregard incoherent coincidences. This is the approach taken by the MLy pipeline @MLy, as a method to train the models to reject counterexamples of coherent features.
Other than the question of whether to use simulated or real glitches or maybe even both, a few questions remain: what is the ideal ratio between examples of glitches and non-glitched noise examples? Should the glitched background also be injected with waveforms at some rate? A real search would occasionally see glitches overlapping real signals, though this would occur in a relatively low number of cases, and including these types of signal-glitch overlaps could perhaps interfere with the training process whilst not adding a great deal of improvement to the true positive rate. Should glitches form their own class so that the model instead has to classify between signal, noise, or glitch rather than just signal or noise? These questions must be answered empirically.
For the multi-detector case, and thus also the burst detection case, we must decide how to align glitches across detectors. It seems safe to assume that adding coherent glitches across multiple detectors would be a bad idea in a purely coherence-based search pipeline --- although perhaps if the model can learn to disregard certain morphologies based on prior experience, this would be a nice extension. For some simple glitch types, coincident and fairly coherent instances across detectors are not extremely unlikely. For example, in the case of the most common glitch class identified by Gravity Spy @gravity_spy, blips, we often see coincident glitches in multiple detectors with a physically plausible arrival time difference, and because they are only glitches, their morphologies can often be similar.
We could also include cases of incoherent glitches across detectors but of the same class, incoherent glitches across detectors but of different classes, and any combination of glitches found in less than the full complement of detectors. Perhaps it would be the case that a good mix of all of these cases would better inoculate our model against glitches.
== Perceptron Results <perceptron-results>
Now that we have assembled all the pieces required to generate training, testing, and validation datasets that can acquire training examples using simulated and/or real data, we can finally repeat the experiments we performed on the MNIST data in @mnist-test-sec, with both single and multi-layer perceptrons. The model architectures are similar, though the input vectors are now the size of our simulated interferometer output examples: `(NUM_EXAMPLES_PER_BATCH, NUM_SAMPLES)` in the case of the single detector CBC search and `(NUM_EXAMPLES_PER_BATCH, NUM_DETECTORS, NUM_SAMPLES)` in the multi-detector coherent burst search. We will use 32 training examples per batch, `NUM_EXAMPLES_PER_BATCH = 32`, as this is a standard power-of-two value used commonly across the artificial neural network literature, and, in the multi-detector case, we will use only LIGO Hanford and LIGO Livingston, for now excluding the Virgo detector, `NUM_DETECTORS = 2`. We have chosen to use only the two LIGO detectors as, in many ways, this is the simplest possible multi-detector network case; signals projected onto these two detectors will have a greater similarity than signals projected onto either of these two detectors and the Virgo detector, due to differences in both sensitivity and orientation and position. We have chosen to use a sample rate of #box("2048.0" + h(1.5pt) + "Hz") and an on-source duration of #box("1.0" + h(1.5pt) + "s"), allowing an additional crop region of #box("0.5" + h(1.5pt) + "s") on either side of the on-source segment to remove edge effects created when whitening with #box("16.0" + h(1.5pt) + "s") of off-source background. The reasoning for these choices has been described previously in this chapter. This means we will have 2048 samples per detector, `NUM_SAMPLES = 2048`, after it has been passed through the whitening layer. A flattening layer, see @flatten-sec, will only be required in the multi-detector case; in the single-detector case, the input is already one-dimensional. The batch dimension is not a dimension of the input data and simply allows for parallel processing and batched gradient descent; see @gradient-descent-sec.
The obfuscating noise consists of real data taken from LIGO Hanford and LIGO Livingston @open_data for each respective detector. Locations of confirmed and candidate events are excluded from the data, but known glitch times have been included in the training, testing, and validation datasets.
For the single-detector CBC case, cuPhenom @cuphenom_ref waveforms with masses drawn from uniform distributions between #box("5.0" + h(1.5pt) + $M_dot.circle$) and #box("95.0" + h(1.5pt) + $M_dot.circle$) for the mass of both companions and between -0.5 and 0.5 for the dimensionless aligned-spin component are injected into the noise and scaled with optimal SNR values drawn from a uniform distribution between 8.0 and 15.0, unless explicitly stated otherwise.
For the multi-detector Burst case, coherent WNBs are injected with durations between #box("0.1"+ h(1.5pt) + "s") and #box("1.0" + h(1.5pt) + "s"), and with frequency content limited to between #box("20.0" + h(1.5pt) + "Hz") and #box("500.0" + h(1.5pt) + "Hz"). The injected bursts are projected onto the detectors using a physically realistic projection. They are injected using the same scaling type and distribution as the CBC case, although notably, the network SNR is used rather than a single-detector SNR.
During network training, the gradients are computed over batches of 32 examples, an industry-standard batch size, with a learning rate of 1.0 $times$ 10#super("-4"), using the Adam optimiser @adam_optimiser, again a common standard across the industry @improved_adam. During training epochs, $10^5$ examples are used before the model is evaluated against $10^4$ examples of previously unseen test data. It should be noted that, due to the nature of the generators used for training, and unlike standard model training practice, no training examples are repeated across epochs, but the test dataset is kept the same for each epoch. After each epoch, if the validation loss for that epoch is the lowest yet recorded, the model is saved, replacing the existing lowest model. If no improvement in validation loss is seen within ten epochs (the patience), training is halted, and the best model is saved for further validation tests. @perceptron-training-parameters shows a large number of the training and dataset hyperparameters.
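In Keras terms, this training configuration corresponds roughly to the following sketch; `model`, `training_dataset`, and `validation_dataset` are assumed to already exist, and the checkpoint filename is an arbitrary choice rather than anything used by the actual pipeline.

```python
import tensorflow as tf

model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),
    loss="binary_crossentropy",
    metrics=["accuracy"],
)

callbacks = [
    # Halt training after ten consecutive epochs without improvement
    # in validation loss.
    tf.keras.callbacks.EarlyStopping(monitor="val_loss", patience=10),
    # Keep only the model with the lowest validation loss seen so far.
    tf.keras.callbacks.ModelCheckpoint(
        "best_model.keras", monitor="val_loss", save_best_only=True
    ),
]

model.fit(
    training_dataset,                    # 10^5 epoch-unique examples
    validation_data=validation_dataset,  # 10^4 epoch-consistent examples
    epochs=1000,                         # bounded in practice by early stopping
    callbacks=callbacks,
)
```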
#figure(
table(
columns: (auto, auto),
inset: 10pt,
align: horizon,
[*Hyperparameter*], [*Value*],
[Batch Size], [32],
[Learning Rate], [10#super("-4")],
[Optimiser], [ Adam ],
[Scaling Method], [SNR],
[Minimum SNR], [8.0],
[Maximum SNR], [15.0],
[SNR Distribution], [Uniform],
[Data Acquisition Batch Duration], [ #box("2048.0" + h(1.5pt) + "s") ],
[Sample Rate], [ #box("2048.0" + h(1.5pt) + "Hz")],
[On-source Duration], [ #box("1.0" + h(1.5pt) + "s")],
[Off-source Duration], [ #box("16.0" + h(1.5pt) + "s")],
[Scale Factor], [10#super("21") ],
),
  caption: [The common training and dataset hyperparameters shared by the CBC and Burst perceptron experiments. Note that the scale factor here refers to the factor used during the upscaling of the CBC waveforms and real interferometer noise from their extremely small natural values to a range suitable for artificial neurons. This is done both to ensure that the input values work well with the network activation functions and learning rates, which are tuned around values near one, and to reduce precision errors in areas of the code that use 32-bit precision, employed to reduce memory overhead, computational cost, and duration. Data acquisition batch duration is a parameter of the GravyFlow data acquisition module @gwflow_ref. For speed, the GravyFlow data acquisition system downloads data in larger segments than is required for each training batch, then randomly samples examples from this larger segment to assemble each training batch. The data acquisition batch duration determines how long this larger batch is. Smaller values will result in a more evenly mixed training dataset and a lower overall GPU memory overhead but will be more time-consuming during the training process. ]
) <perceptron-training-parameters>
==== Architectures
We used architectures with four different layer counts: zero, one, two, and three hidden layers; see @perceptron-cbc-architectures. All models have a custom-implemented whitening layer, which takes in two vectors, the on-source and off-source segments, and performs a whitening operation as described in @whitening-sec. They also all have a capping dense layer with a single output value that represents either the presence of a feature or the absence of one. The capping layer uses the Sigmoid activation function; see @softmax, and the other hidden layers use ReLU activation functions, see @relu.
Layers are built with a number of neurons selected from this list $[64, 128, 256, 512]$, though fewer combinations are tested in architectures with a greater number of model layers. Models tested have these 14 configurations of neuron numbers per layer, specified as [num_hidden_layers:num_neurons_in_layer_1, ..., num_neurons_in_layer_n]: ([0], [1:64], [1:128], [1:256], [1:512], [2:64,64], [2:128,64], [2:128,128], [2:256,64], [2:256,128], [2:256,256], [3:64,64,64], [3:128,128,128], [3:256,256,256]). These combinations were chosen to give reasonable coverage of this section of the parameter space, though it is notably not an exhaustive hyperparameter search. Given the performance demonstrated in this search compared to other network architectures, further investigation was not deemed worthwhile.
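A sketch of how such architectures can be assembled in Keras is shown below. The GravyFlow whitening layer, which also ingests the off-source segment, is omitted for brevity, so this version assumes pre-whitened input; the builder function name and defaults are our own.

```python
import tensorflow as tf

def build_perceptron(num_samples=2048, num_detectors=None, hidden_layers=(128, 64)):
    # (num_samples,) for the single-detector CBC search, or
    # (num_detectors, num_samples) for the multi-detector burst search.
    shape = (num_samples,) if num_detectors is None else (num_detectors, num_samples)
    onsource = tf.keras.Input(shape=shape, name="onsource")

    # Flattening is only strictly necessary in the multi-detector case,
    # but is harmless for one-dimensional input.
    x = tf.keras.layers.Flatten()(onsource)
    for neurons in hidden_layers:
        x = tf.keras.layers.Dense(neurons, activation="relu")(x)
    # Capping layer: a single sigmoid output scoring feature presence.
    output = tf.keras.layers.Dense(1, activation="sigmoid")(x)
    return tf.keras.Model(onsource, output)

model = build_perceptron(hidden_layers=(256, 128))  # the [2:256,128] configuration
```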
#figure(
image("perceptron_diagrams.png", width: 90%),
caption: [Perceptron diagrams. The four different architectures used to test the use of purely dense models for both the single-detector CBC detection case and the multi-detector burst detection problem. The only differences are that the input vector sizes are different between the cases: `(NUM_EXAMPLES_PER_BATCH, NUM_SAMPLES)` in the case of the single detector CBC search and `(NUM_EXAMPLES_PER_BATCH, NUM_DETECTORS, NUM_SAMPLES)` in the multi-detector coherent burst search. All models take in two input vectors into a custom-designed GravyFlow whitening layer, the off-source and the on-source vectors; see @whitening-sec for more information about the whitening procedure, and all models are capped with a dense layer with a single output neuron that is used to feed the binary loss function, with a sigmoid activation function. Each hidden layer has been tested with 64, 128, and 256 neurons, and one hidden layer was tested with 512 as a sample with higher neuron counts: _Top:_ Zero-hidden layer model. _Second to top:_ Two-hidden layer model. _Second to bottom:_ Three-hidden layer model. _Bottom:_ One hidden layer model. ]
) <perceptron-cbc-architectures>
=== CBC Detection Dense Results
==== Training
First, we can examine the results of applying dense-layer perceptrons to the CBC single-detector morphology detection problem. Even during the training process, it is clear that, at least amongst the selected hyperparameters, these models will not be useful; see @perceptron_single_accuracy and @perceptron_single_loss. None reach an accuracy above 75% with a training patience of ten epochs, meaning training is halted if no improvement in validation loss is seen within ten epochs. Examining the plots, it seems possible that some of the perceptrons are on a very slow training trajectory and could have seen some marginal improvement had the training patience been increased. It is also possible that other, larger perceptron architectures might achieve greater success, as this was far from an exhaustive or even guided search of the perceptron hyperparameter space. However, as can be seen in @perceptron_single_accuracy, the models take a significant number of epochs to reach the values they do, which is what we would expect from entirely dense models. As will be seen in later sections, see @cnn-literature, other architectures can achieve much better results in fewer epochs. These results act as an example of the difficulties of training dense networks for complex recognition tasks. For comparison with other methods, a more sophisticated analysis follows the training history plots; see @single-perceptron-validation-sec.
#figure(
grid(
columns: 1,
rows: 2,
gutter: 1em,
[ #image("perceptron_single/perceptron_single_training_accuracy.png", width: 100%) ],
[ #image("perceptron_single/perceptron_single_validation_accuracy.png", width: 100%) ],
),
  caption: [The accuracy history of perceptron models trained to detect IMRPhenomD waveforms generated using cuPhenom @cuphenom_ref that have been obfuscated by real interferometer noise sampled from the LIGO Livingston detector during the 3#super("rd") observing run. Visit #link("https://tinyurl.com/ypu3d97m") for interactive plots. The optimal SNR of waveforms injected into the training and validation sets was uniformly distributed between 8 and 15. Input was from a single detector only. A rough search was performed over a relatively arbitrary selection of model architectures, which varied the number of layers and the number of perceptrons in each layer. The architectures of each model can be seen in the figure legends as a list of numbers where each digit is the number of artificial neurons in that layer. All are trained with the same training hyperparameters, details of which can be found in @perceptron-training-parameters. Each epoch consisted of $10^5$ training examples, and it should be noted that, unlike the regular training pipelines, each training epoch consisted of newly generated waveforms injected into unseen noise segments, though the validation examples are consistent. Training of each model was halted after ten consecutive epochs with no improvement to validation loss, the values of which are shown in @perceptron_single_loss. Validation noise was drawn from a separate pool of data segments inaccessible to the training data loader. We can see that the maximum accuracy achieved by any perceptron model only approaches 75%. Although these validations are performed with a pool containing mixed waveform SNRs and at an unrestrained False Alarm Rate (FAR) (this accuracy uses a score threshold of 0.5 regardless of FAR), it is clear that this is insufficient to be useful. _Upper:_ Plot of model accuracies when measured with training data ($10^5$ epoch-unique examples). _Lower:_ Plot of model accuracies when measured with validation data ($10^4$ epoch-consistent examples).]
) <perceptron_single_accuracy>
#figure(
grid(
columns: 1,
rows: 2,
gutter: 1em,
[ #image("perceptron_single/perceptron_single_training_loss.png", width: 100%) ],
[ #image("perceptron_single/perceptron_single_validation_loss.png", width: 100%) ],
),
  caption: [Training loss history of perceptron models trained to detect IMRPhenomD waveforms generated using cuPhenom @cuphenom_ref, obfuscated by real interferometer noise from the LIGO Livingston detector from the 3#super("rd") observing run. The loss is computed using the binary cross-entropy loss function and is used by the gradient descent algorithm, in this case the Adam optimiser, as a minimisation target. It also acts as the monitor by which the pipeline knows to stop the training process early. If the pipeline detects that the validation model loss has not decreased in more than ten epochs, training is halted.
  Visit #link("https://tinyurl.com/ypu3d97m") for interactive plots. See @perceptron_single_accuracy for a more detailed description of the training data. _Upper:_ Plot of model loss when measured with training data ($10^5$ epoch-unique examples). _Lower:_ Plot of model loss when measured with validation data ($10^4$ epoch-consistent examples).]
) <perceptron_single_loss>
==== Validation <single-perceptron-validation-sec>
Although the perceptron training performance was low, and probably sufficient to tell us that at least these configurations of perceptrons are not capable enough for CBC detection, a more complete validation was nonetheless performed on the trained models using the third, as-yet-unseen validation dataset. This was done both for comparison with later methods and to ensure that our initial assessment of the results was correct. Although it is easy to draw quick conclusions from the training results, they do not give an accurate profile of model performance, as the training validation results draw from a mixed pool of SNR values, do not consider the classes independently, and, in the case of the accuracy result, use an uncalibrated detection threshold of 0.5. This means that if a model outputs a score over 0.5, it is considered a detection, and a score lower than 0.5 is considered noise. By tuning this threshold, we can arrive at the desired False Alarm Rate (FAR), though this will have an inverse effect on the sensitivity (the true positive rate) of the model.
Before we can apply this tuning, we must evaluate our model's performance on a dataset consisting exclusively of noise examples. A perfect classifier would output zero for every example in such a dataset. We are not dealing with perfect classifiers, so the model will output a non-zero score for each pure-noise example. If our classifier performs well, most of these scores will be low, preferably near zero, but some will inevitably rise above whatever detection threshold we set, depending, of course, on the size of our validation dataset: the larger the dataset, the larger the expected value of our largest noise score. The size of the dataset required for threshold calibration will depend on the desired FAR, with smaller FARs requiring larger datasets. We will require a dataset in which the combined example durations sum to at least the duration of time wherein, given our desired FAR, we would expect one false alarm. However, since this is a statistical result, having only the exact duration required for our FAR would result in a great deal of error on that value. The larger the validation dataset, the more confident we can be in our calculation of the required FAR threshold. We will attempt to use a validation dataset around ten times larger than the minimum required, so we would, on average, expect ten false alarms in total from running the model on the dataset with the given threshold.
Of course, there is only so far we can tune the threshold value within the precision available to us with 32-bit floats, and if the model gives scores to pure noise examples of exactly one, there is no way to differentiate them from true positive classifications. This means any model will have a maximum possible threshold, and therefore minimum FAR, beyond which it cannot distinguish positive results from negative ones.
In order to determine the score threshold of a model for a given FAR, we can run that model over a sufficiently large pure-noise dataset, sort the resulting scores from smallest to largest, and then assign each score an equivalent FAR. If we set the threshold below the smallest score, every noise example would score above the threshold, so the model would produce a false alarm on every example it sees, a FAR of $1.0 / d_op("example") "Hz"$, where $d_op("example")$ is the length of the input example, in our case #box("1"+ h(1.5pt) + "s"). If instead the threshold lies above the first sorted score but below the second, all but one of the examples would be false alarms, so we can estimate the FAR to be $(d_op("total") - d_op("example")) / d_op("total") times 1.0 / d_op("example") "Hz"$, where $d_op("total")$ is the total duration of examples in the validation set. This gives a general formula for the y-axis,
$ y = (d_op("total") - i times d_op("example")) / d_op("total") times 1.0 / d_op("example") "Hz" , $ <FAR_index_calc>
where $i$ is the x-axis index. The FAR is plotted against the required model threshold to achieve that FAR in @perceptron_single_far.
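In practice, this bookkeeping reduces to a few lines of NumPy, as in the sketch below; the helper names are our own, and the score array is assumed to come from running a trained model over the pure-noise dataset.

```python
import numpy as np

def far_curve(noise_scores, example_duration=1.0):
    # Sort pure-noise scores ascending; treating the i-th score as a
    # threshold, every higher-scoring noise example is a false alarm.
    thresholds = np.sort(noise_scores)
    total_duration = len(thresholds) * example_duration
    i = np.arange(len(thresholds))
    # FAR in Hz, as in the equation above.
    far = (
        (total_duration - i * example_duration)
        / total_duration
        / example_duration
    )
    return thresholds, far

def threshold_at_far(noise_scores, target_far, example_duration=1.0):
    # Pick the threshold whose estimated FAR is closest to the target.
    thresholds, far = far_curve(noise_scores, example_duration)
    return thresholds[np.argmin(np.abs(far - target_far))]
```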
#figure(
image("perceptron_single/perceptron_single_far_curves.png", width: 100%),
  caption: [Perceptron False Alarm Rate (FAR) curves. This plot was created by running each of our 14 models over a pure noise validation dataset of $10^5$ noise examples. A relatively small number of noise examples are used due to the observed inaccuracy of the models during training, which suggested that they would not be able to reach low FAR scores and thus would not necessitate a larger validation dataset. The output scores of the model from each inference over the pure noise validation dataset are sorted and plotted on this graph. The x-axis is the output score of the model inference on that example of noise. The y-axis is calculated by using @FAR_index_calc and provides the estimated number of false alarms that the model would output per second of pure noise data given the threshold score displayed on the x-axis. We can use this graph to calculate positive result thresholds for our classifier at different false alarm rates. Once again, the models are listed with the number of artificial neurons in each hidden layer. Visit #link("https://tinyurl.com/2wkaarkh") to view an interactive plot. ]
) <perceptron_single_far>
Using @perceptron_single_far, we can select the score index closest to our desired FAR and find the threshold that will generate a FAR of approximately this value. With a method to calculate threshold values in hand, we can create efficiency curves at specific FARs. Efficiency curves allow us to examine the sensitivity of the model to signals at different optimal SNR values. This time we utilise datasets containing true examples at set SNR values. We can run the models over these datasets and extract model scores for each true example. From those scores, we can calculate the sensitivity at different FARs. The sensitivity is given by
$ "sensitivity" = (|| "scores" > "score_threshold" ||) / (|| "scores" ||) $ <specificity>
where $|| "scores" > "score_threshold" ||$ is the number of scores above the score threshold, and $|| "scores" ||$ is the total number of examples tested. In @perceptron_efficiency_curves_single, we present the efficiency curves at three different values of FAR, #box("0.1"+ h(1.5pt) + "Hz"), #box("0.01"+ h(1.5pt) + "Hz"), and #box("0.001"+ h(1.5pt) + "Hz"), which are not particularly low FARs, but as can be seen from the plots, below these values we would encounter only negligible accuracies in the SNR ranges considered. As can be seen from the curves, the models do not perform well even with very generous FAR constraints.
#figure(
grid(
image("perceptron_single/perceptron_single_efficiency_curve_0_1.png", width: 100%),
image("perceptron_single/perceptron_single_efficiency_curve_0_01.png", width: 100%),
image("perceptron_single/perceptron_single_efficiency_curve_0_001.png", width: 100%),
),
  caption: [Perceptron efficiency curves. For each of the 14 perceptron models trained, 31 efficiency tests are performed at evenly spaced optimal SNR values between 0 and 15. For each test, 8192 examples with signals of the relevant SNR are examined by the model, and the percentage of those that scored above the threshold is plotted, see @specificity, for three different False Alarm Rate (FAR) thresholds: #box("0.1"+ h(1.5pt) + "Hz"), #box("0.01"+ h(1.5pt) + "Hz"), and #box("0.001"+ h(1.5pt) + "Hz"). The efficiency curve for each FAR threshold is presented on a unique plot. Some models, shaded grey in the legends, have been excluded because they are incapable of performing any classification at the chosen FAR thresholds. Visit #link("https://tinyurl.com/2wkaarkh") to view an interactive plot. _Upper:_ Efficiency curves at a FAR of #box("0.1" + h(1.5pt) + "Hz"). _Middle:_ Efficiency curves at a FAR of #box("0.01" + h(1.5pt) + "Hz"). _Lower:_ Efficiency curves at a FAR of #box("0.001" + h(1.5pt) + "Hz").]
) <perceptron_efficiency_curves_single>
Finally, we can examine the model performance from a different perspective by fixing the SNR of the validation dataset and plotting the True Positive Rate (TPR), i.e. the sensitivity, against the False Alarm Rate (FAR). This gives us a Receiver Operating Characteristic (ROC) curve; see @perceptron_roc_curve. We can use the area under the curve to compare the relative performance of each model, although in this case, all the models perform very similarly at the chosen optimal SNR of eight. Eight was chosen as it is often considered a good detectability threshold for CBCs; in the catalogue of events from the first half of the third joint observing run, all confident detections had an SNR above nine, and candidate signals had SNRs above eight @GWTC-2.
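The ROC curve at a fixed SNR can be sketched from the same ingredients, by sweeping the threshold over the sorted noise scores and recording the true positive rate at each point. The helper names below are our own, and with one-second examples the per-example false alarm fraction coincides numerically with the FAR in Hz.

```python
import numpy as np

def roc_points(noise_scores, signal_scores):
    noise_scores = np.asarray(noise_scores)
    signal_scores = np.asarray(signal_scores)
    # Each sorted noise score acts as one candidate threshold, yielding
    # one (false alarm rate, true positive rate) point on the ROC curve.
    thresholds = np.sort(noise_scores)
    fars = np.array([(noise_scores > t).mean() for t in thresholds])
    tprs = np.array([(signal_scores > t).mean() for t in thresholds])
    return fars, tprs

# Area under the curve, for comparing models at a fixed optimal SNR:
# fars, tprs = roc_points(noise_scores, snr_eight_scores)
# order = np.argsort(fars)
# auc = np.trapz(tprs[order], fars[order])
```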
#figure(
image("perceptron_single/perceptron_single_roc.png", width: 100%),
  caption: [Receiver Operating Characteristic (ROC) curve at $rho_"opt" = 8$. To create this plot, a validation dataset was generated in which all waveforms have an optimal SNR of eight. The ability of the model to detect these waveforms was then measured at different FARs. All models show very similar, poor performance. Visit #link("https://tinyurl.com/2wkaarkh") to view an interactive plot. ]
) <perceptron_roc_curve>
From these results, we can summarise that things are as anticipated from the results of the training. None of these models would have any useful application in gravitational-wave data science, as they all fall well below the performance of matched filtering, and they are unable to perform at acceptable FARs. In order to offer a competitive approach, we must turn to other network architectures.
=== Burst Detection Dense Results
==== Training
Although it may seem unlikely that we will have better results with what is arguably a more complex problem, we present the application of dense neural networks to multi-detector arbitrary waveform detection. Note that no incoherent or single-detector counter-examples are added to either the training or validation data, so, in order to function, a model would only have to identify the presence of excess power. The training and validation SNR ranges were also increased, from between 8 and 15 to between 12 and 30, since initial testing at the SNR range used for CBC detection yielded low accuracies across all FARs. From the training results, it was clear that this was going to be a more complex problem than CBC detection; see @perceptron_multi_accuracy. Again, there is the possibility that less constrained training or larger models could lead to better performance, but even if a solution were found outside the considered hyperparameter range, training time and computational requirements would soon become prohibitive. If other, less general networks can offer far superior results, they will be preferred.
#figure(
grid(
columns: 1,
rows: 2,
gutter: 1em,
[ #image("perceptron_multi/perceptron_multi_training_accuracy.png", width: 100%) ],
[ #image("perceptron_multi/perceptron_multi_validation_accuracy.png", width: 100%) ],
),
  caption: [The accuracy history of perceptron models trained to detect multi-detector WNBs generated using GravyFlow and obfuscated by real interferometer noise sampled from the LIGO Livingston and LIGO Hanford detectors during the 3#super("rd") observing run. The optimal SNR of waveforms injected into the training and validation sets was uniformly distributed between 12 and 30. The input was generated using real noise from LIGO Hanford and LIGO Livingston. The training procedure was identical to the single-detector case, except for the SNR range increase and the multiple-detector data supply. We can see in these training plots that, despite the increased SNR range, training and validation accuracy barely creep above 50% (which can be achieved by random selection). This indicates that dense networks are even less suited to the more complex coherence detection problem. Further validation will be performed for completeness. Visit #link("https://tinyurl.com/4jj3t5fj") to view an interactive plot. _Upper:_ Plot of model accuracies when measured with training data ($10^5$ epoch-unique examples). _Lower:_ Plot of model accuracies when tested with validation data ($10^4$ epoch-consistent examples).]
) <perceptron_multi_accuracy>
#figure(
grid(
columns: 1,
rows: 2,
gutter: 1em,
[ #image("perceptron_multi/perceptron_multi_training_loss.png", width: 100%) ],
[ #image("perceptron_multi/perceptron_multi_validation_loss.png", width: 100%) ],
),
  caption: [The loss history of perceptron models trained to detect multi-detector WNBs generated using GravyFlow and obfuscated by real interferometer noise sampled from the LIGO Livingston and LIGO Hanford detectors during the 3#super("rd") observing run. The optimal SNR of waveforms injected into the training and validation sets was uniformly distributed between 12 and 30. The input was generated using real noise from LIGO Hanford and LIGO Livingston. The losses show a similar picture to the accuracy plots; although we see a gradual decline, it is very shallow, and early stopping is triggered before the models have any chance to gain significant performance, assuming that is even possible. Patience could be increased, but as we will see with later architectures, this approach is not competitive. _Upper:_ Plot of model losses when measured with training data ($10^5$ epoch-unique examples). _Lower:_ Plot of model losses when tested with validation data ($10^4$ epoch-consistent examples). Visit #link("https://tinyurl.com/4jj3t5fj") to view an interactive plot.]
) <perceptron_multi_loss>
==== Validation
As with the CBC case, we first present the FAR curve that will be used to determine model FAR thresholds in @perceptron_multi_far. Then we show the efficiency curves at two FARs, #box("0.1" + h(1.5pt) + "Hz"), and #box("0.01" + h(1.5pt) + "Hz"); see @perceptron_efficiency_curves_multi. Only two FAR thresholds are presented here as lower FARs resulted in negligible accuracies. Finally, we show the ROC curves for these models, which are unsurprisingly also poor; see @perceptron_roc_curve_multi.
#figure(
image("perceptron_multi/perceptron_multi_far_curves.png", width: 100%),
  caption: [Perceptron False Alarm Rate (FAR) curves. This plot was created by running each of our 14 models over a pure noise validation dataset of $10^5$ noise examples. Performance is low across the board, demonstrating that dense-layer perceptrons are unsuitable for this kind of WNB detection, at least within the hyperparameter range tested. Visit #link("https://tinyurl.com/bdz9axpf") to view an interactive plot.]
) <perceptron_multi_far>
#figure(
grid(
image("perceptron_multi/perceptron_multi_efficiency_curve_0_1.png", width: 100%),
image("perceptron_multi/perceptron_multi_efficiency_curve_0_01.png", width: 100%),
),
caption: [Perceptron efficiency curves for the multi-detector WNB detection model. For each of the 14 perceptron models trained, 31 efficiency tests are performed at evenly spaced optimal SNR values between 0 and 30. For each test, 8192 examples with signals of the relevant SNR are examined by the model. The percentage of those that scored above the threshold is plotted, see @specificity, for two different False Alarm Rate (FAR) thresholds: #box("0.1"+ h(1.5pt) + "Hz") and #box("0.01"+ h(1.5pt) + "Hz"), lower FARs are excluded due to small accuracies. _Upper:_ Efficiency curves at a FAR of #box("0.1" + h(1.5pt) + "Hz"). _Lower:_ Efficiency curves at a FAR of #box("0.01" + h(1.5pt) + "Hz"). Visit #link("https://tinyurl.com/bdz9axpf") to view an interactive plot.]
) <perceptron_efficiency_curves_multi>
#figure(
image("perceptron_multi/perceptron_multi_roc.png", width: 100%),
  caption: [Receiver Operating Characteristic (ROC) curve at an optimal SNR of eight. To create this plot, a validation dataset was generated in which all waveforms have an optimal SNR of eight. The ability of the model to detect these waveforms was then measured at different FARs. Again, all models show very similar, poor performance. Visit #link("https://tinyurl.com/bdz9axpf") to view an interactive plot.]
) <perceptron_roc_curve_multi>
From these validation results, we can determine that dense-layer networks alone are unsuitable for the task of coherence detection. Once again, these results are not surprising and are presented as a reference. In the next section, we will describe another deep-learning architecture that has seen much more promising results in the literature.
== Introducing Convolutional Neural Networks (CNNs) <cnn-sec>
As we have seen, simple dense-layer perceptrons cannot adequately perform detection tasks on gravitational-wave data. This was anticipated, given the complexity of the distribution. Perceptrons have not been at the forefront of artificial neural network science for some time, and we must turn toward other architectures. Although, in some ways, specialising the network will limit the capacity of our model to act as a universal function approximator @universal_aproximators, in practice, this is not a concern, as we have at least some idea of the process that will be involved in completing the task at hand: in this case image recognition, or, more precisely, time-series recognition.
The Convolutional Neural Network (CNN) is currently one of the most commonly used model archetypes @deep_learning_review @conv_review. In many ways, the development of this architecture kickstarted the current era of artificial neural network development. In December 2012, the AlexNet CNN @image_classification achieved performance in the ImageNet multi-class image recognition competition far superior to that of any of its competitors. This success showed the world the enormous potential of artificial neural networks for achieving success in previously difficult domains.
CNNs are named for their similarity in operation to the mathematical convolution @deep_learning_review @conv_review, although it is more closely analogous to a discrete cross-correlation wherein two series are compared to each other by taking the dot product at different displacements. Unless you are intuitively familiar with mathematical correlations, it is not a useful point of reference for understanding CNNs. So, we will not continue to refer to convolutions in the mathematical sense.
CNNs are primarily employed for the task of image and time-series recognition @deep_learning_review @conv_review. Their fundamental structure is similar to dense-layer networks on a small scale @cnn_review. They are composed of artificial neurons that take in several inputs and output a singular output value after processing their inputs in conjunction with that neuron's learned parameters; see @artificial_neuron_sec. Typical CNNs ingest an input vector, have a single output layer that returns the network results, and contain a variable number of hidden layers. However, the structure and inter-neural connections inside and between the layers of a CNN are fundamentally different.
Unlike perceptrons, layers inside CNNs are, by definition, not all dense, fully-connected layers @deep_learning_review @conv_review. CNNs introduce the concept of different types of sparsely-connected computational layers. The classical CNN comprises a variable number, $C$, of convolutional layers stacked upon the input vector, followed by a tail of $D$ dense layers, which output the result of the network. This gives a total of $N = C + D$ layers, omitting any infrastructure layers that may also be present, such as a flattening layer (which is often employed between the last convolutional layer and the first dense layer; convolutional layers inherently have multidimensional outputs and dense layers do not). Purely convolutional networks, which consist only of convolutional layers, are possible @gebhard_conv_only_cnn, but these are a more unusual configuration, especially for classification tasks. Purely convolutional networks appear more often as autoencoders @autoencoder_ref and in situations where you want to lessen the black-box effects of dense layers. Convolutional layers are often more interpretable than pure dense layers as they produce feature maps that retain the input vector's dimensionality @cnn_interpretability.
Convolutional layers can and often do appear as layers in more complex model architectures, which are not necessarily always feed-forward models @deep_learning_review @conv_review. They can appear in autoencoders @autoencoder_ref, Generative Adversarial Networks (GANs) @gan_ref, Recurrent Neural Networks (RNNs) @conv_rnn, and as part of attention-based architectures such as transformers @conv_transformer and generative diffusion models @conv_diffusion. We will, for now, consider only the classical design: several convolutional layers capped by several dense ones.
As discussed, CNNs have a more specialised architecture than dense networks @deep_learning_review @conv_review. This architecture is designed to help the network perform in a specific domain of tasks by using _a priori_ knowledge of the task to constrain information flow inside the network. This can help reduce overfitting in some cases, as it means a smaller network with fewer parameters can achieve the same task as a more extensive dense network. Fewer parameters mean less total information can be stored in the network, so it is less likely that a model can memorise specific information about the noise present in training examples. A CNN encodes information about the dimensionality of the input image; the location of features within the input image is conserved as it moves through layers. It also utilises the fact that within some forms of data, the same feature is likely to appear at different locations within the input vector; therefore, parameters trained to recognise features can be reused across neurons. For example, if detecting images of cats, cats' ears are not always going to be in the same location within the image. However, the same pattern of parameters would be equally helpful for detecting an ear wherever it appears in the image.
The following subsections describe different aspects of CNNs, including a description of pooling layers, which are companion layers often employed within convolutional networks.
=== Convolutional Layers
CNNs take inspiration from the biological visual cortex @bio_inspired_conv. In animal vision systems, each cortical neuron is not connected to every photoreceptor in the eye; instead, they are connected to a subset of receptors clustered near each other on the 2D surface of the retina @receptive_field_bio. This connection area is known as the *receptive field*, a piece of terminology often borrowed when discussing CNNs @bio_inspired_conv.
*Convolutional Layers* behave similarly. Instead of each neuron in every layer being connected to every neuron in the previous layer, they are only connected to a subset, and the parameters of each neuron are repeated across the image, significantly reducing the number of model parameters and allowing for translation-equivariant feature detection @deep_learning_review @conv_review. It is a common misconception that convolutional layers are translation invariant @lack_of_invariance; this is untrue, as features can and usually do move by values that are not whole pixel widths, meaning that even if the filters are the same, the pixel values can be different and give different results. One common problem with CNNs is that very small changes in input pixel values can lead to wildly different results, so this effect should be mitigated if possible. If they do not involve subsampling, however, CNNs can be equivariant. This means that, ignoring edge effects, if the input is shifted by a whole number of pixels, the output map shifts by the same amount, independent of starting location --- this is true for some configurations of CNN but is broken by most common architectures.
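This distinction is easy to demonstrate numerically. The short sketch below shows that a stride-one cross-correlation (the core convolutional-layer operation, before any subsampling) is equivariant under whole-pixel shifts: the feature map is reproduced exactly, displaced by the same amount.

```python
import numpy as np

kernel = np.array([1.0, -2.0, 1.0])
signal = np.zeros(16)
signal[4:7] = [1.0, 2.0, 1.0]    # a small feature at position four
shifted = np.roll(signal, 3)     # the same feature, three samples later

response = np.correlate(signal, kernel, mode="valid")
shifted_response = np.correlate(shifted, kernel, mode="valid")

# Identical responses, displaced by exactly three samples (edges excepted);
# sub-pixel shifts or any subsampling would break this exact relationship.
assert np.allclose(response[:-3], shifted_response[3:])
```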
This input element subset is nominally clustered spatially, usually into squares of input pixels @deep_learning_review @conv_review. This means that unlike with dense input layers, wherein 2D and greater images must first be flattened before being ingested, the dimensionality of the input is inherently present in the layer output. In a dense layer, each input is equally important to each neuron. There is no distinguishing between inputs far away from that neuron and inputs closer to that neuron (other than distinctions that the network may learn during the training process). This is not the case inside convolutional layers, as a neuron on a subsequent layer only sees inputs inside its receptive field.
As the proximity of inputs to a neuron can be described in multiple dimensions equal to that of the input dimensionality, the network, therefore, has inherent dimensionality baked into its architecture --- which is one example of how the CNN is specialised for image recognition @deep_learning_review @conv_review. In the case of a 2D image classification problem, we now treat the input vector as 2D, with the receptive field of each neuron occupying some shape, most simply a square or other rectangle, on the 2D vector's surface.
The term receptive field is usually reserved to describe how much of the input image can influence the output of a particular neuron in the network @deep_learning_review @conv_review. The set of tunable parameters that define the computation of a neuron in a convolutional layer when fed with a subset of neuron outputs or input vector values from the previous layer is called a *kernel*. Each kernel looks at a subset of the previous layers' output and produces an output value dependent on the learned kernel parameters. A kernel with parameters tuned by model training is sometimes called a *filter*, as, in theory, it filters the input for a specific translation-invariant feature (although, as we have said, this is only partially true). The filter produces a strong output if it detects that feature and a weak output in its absence. Identical copies of this kernel will be tiled across the previous layer to create a new image with the same dimensionality as the input vector, i.e. kernels in a time-series classifier will each produce their own 1D time-series feature map, and kernels fed a 2D image will each produce a 2D image feature map. In this way, each kernel produces its own feature map where highly scoring pixels indicate the presence of whatever feature they have been trained to identify, and low-scoring ones indicate a lack thereof. Because the network only needs to learn parameters for this single kernel, which can be much smaller than the whole image and only the size of the feature it recognises, the number of trainable parameters required can be significantly reduced, decreasing training time, memory consumption, and overfitting risk. For a single kernel with no stride or dilation, see @stride-sec, applied to an input vector with no depth dimension, the number of trainable parameters is given by
$ op("len")(theta_"kernel") = (product_i^N S_i) + 1 $
where $op("len")(theta_"kernel")$ is the number of trainable parameters in the kernel, N is the number of dimensions in the input vector, and $S_i$ is the configurable hyperparameter, kernel size in the i#super("th") dimension. The extra plus one results from the bias of the convolutional kernel.
For example, a 1D kernel of size 3, would have $3 + 1 = 4$ total parameters, independent of the size of the input vector, and a 2D kernel of size $3 times 3$ would have $3 times 3 + 1 = 10$ total parameters, again independent of the size of the 2D input vector in either dimension. See @kernel_example for an illustration of the structure of a convolutional kernel.
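These worked examples can be checked with a trivial helper; the function name here is our own.

```python
import math

def kernel_parameter_count(kernel_size):
    # Product of the kernel size in each dimension, plus one bias term.
    return math.prod(kernel_size) + 1

print(kernel_parameter_count((3,)))    # 1D kernel of size 3 -> 4
print(kernel_parameter_count((3, 3)))  # 2D kernel of size 3x3 -> 10
```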
#figure(
image("convolutional_kernel.png", width: 40%),
  caption: [Diagram of a single kernel, $""_1^1k$, in a single convolutional layer. In this example, a 1D vector is being input; therefore, the single kernel's output is also 1D. This kernel has a kernel size of three, meaning that each neuron receives three input values from the layer's input vector, $vectorn(x)$, which in this case is length five. This means there is room for three repeats of the kernel. Its parameters are identical for each iteration of $""_1^1k$ at a different position. This means that if a pattern of inputs recognised by the kernel at position 1, $""_1^1k_1$, is translated two elements down the input vector, it will be recognised similarly by the kernel at $""_1^1k_3$. Although this translational invariance is only strict if the translation is a whole pixel multiple and no subsampling (pooling, stride, or dilation) is used in the network, this pseudo-translational invariance can be useful, as often, in images and time series data, similar features can appear at different spatial or temporal locations within the data. For example, in a speech classification model, a word said at the start of the time series can be recognised just as easily by the same pattern of parameters if that word is said at the end of the time series (supposing it lies on the same whole-pixel multiple). Thus, the same kernel parameters and the same filter can be repeated across the time series, reducing the number of parameters needed to train the model. This particular kernel would have $3 + 1 = 4$ total parameters, as it is applied to a 1D input vector and has a kernel size of three, with an additional parameter for the neuron bias. With only a single kernel, only one feature can be learned, which would not be useful in all but the most simple cases. Thus, multiple kernels are often used, each of which can learn its own filter. ]
) <kernel_example>
When first reading about convolutional layers, it can be confusing to understand how they each "choose" which features to recognise. What should be understood is that this is not a manual process; there is no user input on which kernels filter which features; instead, this is all tuned by the chosen optimiser during the training process @deep_learning_review @conv_review. Even the idea that each kernel will cleanly learn one feature type is an idealised simplification of what can happen during training. Gradient descent has no elegant ideas of how it should and should not use the architectures presented to it and will invariably follow the path of least resistance, which can sometimes result in strange and unorthodox uses of neural structures. The more complex and non-linear the recognition task, the more often this will occur.
Although we do not specify exactly which features each kernel should learn, there are several hyperparameters that we must fix for each convolutional layer before the start of training @deep_learning_review @conv_review. We must set a kernel (or filter) size for each dimension of the input vector. For a 1D input vector, we will set one kernel size per kernel; for a 2D input vector, we must set two, and so on. These kernel dimensions dictate the number of input values read by each kernel in the layer and are nominally consistent across all kernels in that layer; see @kernel-size for an illustration of how different kernel sizes tile across a 2D input.
#figure(
image("kernel_sizes.png", width: 50%),
  caption: [Illustration of how different values of kernel size would be laid out on a $4 times 4$ input image. In each case, unused input image values are shown as empty black squares on the grid, and input values read by the kernel are filled in red. The grids show the input combinations that a single kernel would ingest if it has a given size, assuming a stride value of one and no dilation. The kernel sizes are as follows: _Upper left:_ $2 times 2$. _Upper right:_ $3 times 2$. _Lower left:_ $2 times 3$. _Lower right:_ $3 times 3$. One pixel in the output map is produced for each kernel position. As can be seen, the size of the output map produced by the kernel depends both on the input size and the kernel size; smaller kernels produce a larger output vector.]
) <kernel-size>
The other hyperparameters that must be set are the number of different kernels and the choice of activation function used by the kernel's neurons @deep_learning_review @conv_review. These hyperparameters can sometimes be manually tuned using information about the dataset, i.e. the average size of the features for kernel size and the number of features for the number of kernels, but these can also be optimised by hyperparameter optimisation methods, which might be preferable as it is often difficult to gauge which values will work optimally for a particular problem @cnn_hyperparameters.
A single convolutional layer can contain an arbitrarily large number of kernels @deep_learning_review @conv_review. The intuition behind this multitude is simply that input data can contain multiple different types of features, each of which can need a different filter to recognise; each kernel produces its own feature map as it is tiled across its input, and these feature maps are concatenated along an extra *depth* dimension on top of the dimensionality of the input vector. A 1D input vector will have 2D convolutional layer outputs, and a 2D input vector will result in 3D convolutional outputs. The original dimensions of the input vector remain intact, whilst the extra discrete depth dimension represents different features of the image; see @multi_kernel_example.
In the case of a colour picture, this depth dimension could be the red, green, and blue channels, meaning this dimension is already present in the input vector. The number of trainable parameters of a single convolutional layer is given by
$ op("len")(theta_"conv_layer") = K times ((D times product_i^N S_i) + 1) $ <conv-layer-size>
where $op("len")(theta_"conv_layer")$ is the total number of parameters in a convolutional layer, $K$ is the number of convolutional kernels in that layer, a tunable hyperparameter, and $D$ is the additional feature depth dimension of the layer input vector, which is determined either by the number of pre-existing feature channels in the input vector, i.e. the colour channels in a full-colour image or, if the layer input is a previous convolutional layer, the number of feature maps output by that previous layer, which is equivalent to the number of kernels in the previous layer. For example, a 1D convolutional layer with three kernels, each with size three, ingesting a 1D input with only a singleton depth dimension would have $3 times ((1 times (3)) + 1) = 12$ total trainable parameters, whereas a 2D convolutional layer with three kernels of size $3 times 3$ looking at a colour RGB input image would have $3 times (3 times ( 3 times 3 ) + 1) = 84$ total trainable parameters.
#figure(
grid(
columns: 1,
rows: 2,
gutter: 1em,
[ #image("multiple_convolutional_kernel.png", width: 45%) ],
[ #image("single_conv_abstraction.png", width: 100%) ],
),
caption: [_Upper:_ Diagram of three convolutional kernels, $[""_1^1k, ""_2^1k, ""_3^1k]$, in a single convolutional layer. Each kernel is coloured differently, in red, green, and blue. Artificial neurons of the same colour will share the same learned parameters. Again, a 1D vector is being input; therefore, the output of each of the kernels is 1D, and the output of the kernels stack to form a 2D output vector, with one spatial dimension retained from the input vector and an extra discrete depth dimension representing the different features learned by each of the kernels. Again, each kernel has a kernel size of three. Multiple kernels allow the layer to learn multiple features, each of which can be translated across the input vector, as with the single kernel. Using @conv-layer-size, this layer would have $3 times ((1 times 3) + 1) = 12$ trainable parameters. It should be noted that this is a very small example simplified for visual clarity; real convolutional networks can have inputs many hundreds or thousands of elements long and thus will have many more iterations of each kernel, as well as many more kernels sometimes of a much larger size. _Lower:_ Abstracted diagram of the same layer with included hyperparameter information. ]
) <multi_kernel_example>
As with dense layers, multiple convolutional layers can be stacked to increase the possible range of computation available @deep_learning_review @conv_review; see @multi_cnn_layer_example. The first convolutional layer in a network will ingest the input vector, but subsequent layers can ingest the output of previous convolutional layers, with kernels slicing through and ingesting the entirety of the depth dimension. In theory, this stacking allows the convolutional layers to combine multiple simpler features in order to recognise more complex, higher-level features of the input data --- although, as usual, things are not always quite so straightforward in practice. When calculating the number of trainable parameters in multiple convolutional layers, we can use @conv-layer-size for each layer and sum the result.
#figure(
grid(
columns: 1,
rows: 2,
gutter: 1em,
[ #image("two_convolutional_layers.png", width: 60%) ],
[ #image("multi_conv_abstraction.png", width: 100%) ],
),
caption: [_Upper:_ Diagram of two convolutional layers, each with independent kernels. The first layer has three kernels, each with a size of three. The second layer has two kernels, both with a size of two. Again, this is a much-simplified example that would probably not have much practical use. Different kernels are coloured differently, in red, green, and blue. It should be noted that similar colours across layers do not indicate any relationship between kernels in different layers; each kernel is tuned independently and subject to the whims of the gradient descent process. This example shows how the kernels in the second layer take inputs across the entire depth of the first layer but behave similarly along the original dimension of the input vector. In theory, the deeper layer can learn to recognise composite features made from combinations of features previously recognised by the layers below and visible in the output feature maps of the different kernels. This multi-layer network slice would have $(3 times ((1 times 3) + 1)) + (2 times ((3 times 2) + 1)) = 26$ total trainable parameters. This was calculated by applying @conv-layer-size to each layer. _Lower:_ Abstracted diagram of the same layers with included hyperparameter information. ]
) <multi_cnn_layer_example>
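As a quick check on the arithmetic above, the stacked pair of layers from @multi_cnn_layer_example can be written out directly; the five-element, single-channel input shape is taken from the figure, and everything else is illustrative.

```python
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(5, 1)),
    tf.keras.layers.Conv1D(filters=3, kernel_size=3),  # 3 * ((1 * 3) + 1) = 12
    tf.keras.layers.Conv1D(filters=2, kernel_size=2),  # 2 * ((3 * 2) + 1) = 14
])
print(model.count_params())  # 26
```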
The result of using one or more convolutional layers on an input vector is an output vector with an extra discrete depth dimension, with each slice along that dimension representing a feature map @deep_learning_review @conv_review. Whilst often considerably more interpretable than maps of the parameters in dense layers, these feature maps are usually not very useful alone @cnn_interpretability. However, a flattened version of this vector is, hopefully, much easier for dense layers to classify than the original image; see @flatten-sec. As such, CNNs used for classification are almost always capped by one or more dense layers in order to produce the final classification result; see @cnn_diagram for a toy example of a CNN used for binary classification.
#figure(
grid(
columns: 1,
rows: 2,
gutter: 1em,
[ #image("cnn_diagram.png", width: 100%) ],
[ #image("cnn_abstracted.png", width: 100%) ],
),
caption: [_Upper:_ Diagram of a very simple convolutional neural network binary classifier consisting of four layers with tunable parameters plus one infrastructure layer without parameters. Two consecutive convolutional layers ingest the five-element input vector, $vectorn(x)$. The 2D output of the latter of the two layers is flattened into a 1D vector by a flattening layer. This flattened vector is then ingested by two dense layers, the latter of which outputs the final classification score. The first convolutional layer has three convolutional kernels, each with a size of three, and the second convolutional layer has two kernels, both with a size of two. The first dense layer has three artificial neurons, and the final output dense layer has a number of neurons dictated by the required size of the output vector. In the case of binary classification, this is either one or two. Different kernels within a layer are differentiated by colour, in this case, red, green, or blue, but a similar colour between layers does not indicate any relationship. Dimensionless neurons are shown in black; it should be noted that after flattening, dimensional information is no longer necessarily maintained by the network structure. Of course, no information is necessarily lost either, as the neuron index itself contains information about where it originated, so, during training, this information can still be used by the dense layers; it is just not necessarily maintained as it is in convolutional layers. Assuming two output neurons, this network will have a total of $26 + ((4 times 3) + 3) + ((3 times 2) + 2) = 49$ trainable parameters: 26 from the convolutional layers, 15 from the first dense layer (three neurons, each with four inputs and one bias), and 8 from the output layer. This network is very simple and would probably not have much practical use in real-world problems beyond straightforward tasks that would not necessitate the use of neural networks. _Lower:_ Abstracted diagram of the same model with included hyperparameter information.
]
) <cnn_diagram>
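The toy classifier in @cnn_diagram can likewise be sketched in a few lines; the ReLU activations for the hidden layers and the softmax over two output neurons are assumptions made for illustration, as the figure does not fix them.

```python
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(5, 1)),
    tf.keras.layers.Conv1D(3, kernel_size=3, activation="relu"),
    tf.keras.layers.Conv1D(2, kernel_size=2, activation="relu"),
    tf.keras.layers.Flatten(),                       # 2 positions x 2 maps -> 4 values
    tf.keras.layers.Dense(3, activation="relu"),
    tf.keras.layers.Dense(2, activation="softmax"),  # binary classification scores
])
print(model.count_params())  # 26 + 15 + 8 = 49
```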
=== Stride, Dilation, and Padding <stride-sec>
*Stride* is a user-defined hyperparameter of convolutional layers that must be defined before training @deep_learning_review @conv_review @cnn_hyperparameters. Like kernel size, it is a multidimensional parameter with a value for each input vector dimension. A convolutional layer's stride describes the distance the kernel moves between instances. A stride of one is the most common choice; in this case, a kernel is tiled with a separation of one input value from its last location. Stride, $S$, is always greater than zero, $S > 0$. The kernels will overlap in the i#super("th") dimension if $S_i < k_i$. If $S_i = k_i$, there will be no overlap and no missed input vector values. If $S_i > k_i$, some input vector values will be skipped; this is not usually used. Along with kernel size, stride determines the output size of the layer. A larger stride will result in fewer kernel positions and, thus, a smaller output size; see @stride below for an illustration of different kernel strides.
#figure(
image("stride.png", width: 70%),
caption: [Illustration of how different values of kernel stride would be laid out on a $4 times 4$ input image. In each case, unused input image values are shown as empty black squares on the grid, and input values read by the kernel are filled in red. Similar to kernel size, different values of stride result in a different output vector size. The strides shown are as follows: _Upper left:_ $1, 1$. _Upper right:_ $2, 1$. _Lower left:_ $1, 2$. _Lower right:_ $2, 2$.]
) <stride>
Introducing kernel stride primarily serves to reduce the overall size of the network by reducing the output vector without adding additional parameters; in fact, the number of parameters is independent of stride @deep_learning_review @conv_review. Reducing the size of the network might be a desirable outcome as it can help reduce computational time and memory overhead. It can also help to increase the receptive field of neurons in subsequent layers, as it condenses the distance between spatially separated points, so when adjusting the resolution of feature maps in a model to balance the identification of smaller- and larger-scale features, it could potentially be a useful dial to tune. In most cases, however, it is left at its default value of one, with the job of reducing the network size falling to pooling layers; see @pool-sec.
One interesting and potentially unwanted effect of introducing stride into our network is that, because it subsamples, it removes the complete translation equivariance of the layer; instead, translations are only mapped equivariantly if they are multiples of the stride, i.e. if a kernel has a stride of two, a feature's map moves consistently only if the feature moves by exactly two input values (or a multiple thereof), which is not a common occurrence.
*Dilation* is a further hyperparameter that can be adjusted prior to network training @deep_learning_review @conv_review @cnn_hyperparameters. Dilation introduces a spacing inside the kernel so that each input value examined by the kernel is no longer directly adjacent to another kernel input value, but instead, there is a gap wherein the kernel ignores that element; see @dilation. By default, this value would be set to zero, and no dilation would be present. Dilation directly increases the receptive field of the kernel without introducing additional parameters, which can help the filters take more global features into account. It can also be used to combat scale differences in features; if multiple kernels with different dilations are used in parallel on different model branches, the model can learn to recognise the same feature at multiple different scales.
#figure(
image("dilation.png", width: 40%),
caption: [Diagram illustrating how different values of kernel dilation affect the arrangement of the kernel input pixels. In this example, the receptive field of a single $3 times 3$ kernel at three different dilation levels is displayed; differing colours represent the input elements at each dilation level. The shaded red kernel illustrates dilation level zero; the shaded blue region is a kernel with dilation of one, and the green kernel has a kernel dilation of two.]
) <dilation>
Particular stride, dilation, and size combinations will sometimes produce kernel positions that extend beyond the boundaries of the input vector. These kernel positions can be ignored, or the input vector can be padded with zeros or repeats of the nearest input value; see @padding.
#figure(
image("padding.png", width: 40%),
caption: [Diagram illustrating how padding can be added to the edge of an input vector in order to allow for otherwise impossible combinations of kernel, stride, size, and dilation. In each case, unused input image values are shown as empty black squares on the grid, input values read by the kernel are shaded red, and empty blue squares are unused values added to the original input vector, containing either zeros or repeats of the closest data values. In this example, the kernel size is $3 times 3$, and the kernel stride is $2, 2$.]
) <padding>
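Taken together, the input length, kernel size, stride, dilation, and padding fix the output length of a convolutional layer along each dimension. The helper below sketches the standard calculation; note that it follows the convention used in this section, where a dilation of zero means no gaps (many libraries instead use a dilation *rate*, for which a value of one means no gaps).

```python
def conv_output_length(input_length, kernel_size, stride=1, dilation=0, padding=0):
    """Output length of a convolutional layer along one dimension."""
    # Effective extent of the kernel once dilation gaps are included.
    effective_kernel = kernel_size + (kernel_size - 1) * dilation
    return (input_length + 2 * padding - effective_kernel) // stride + 1

print(conv_output_length(5, 3))            # 3, as in the figures above
print(conv_output_length(4, 2, stride=2))  # 2
```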
=== Pooling <pool-sec>
Pooling layers, or simply pooling, are a method used to restrict the number of data channels flowing through the network @deep_learning_review @conv_review. They see widespread application across the literature and have multiple valuable properties. They can reduce the size of the network, and thus the computation and memory overhead, and they can also make the network more robust to small translational, scale, and rotational differences in the input features. Convolutional layers record the position of the feature they recognise but can sometimes be overly sensitive to tiny shifts in the values of input pixels. Small changes in a feature's scale, rotation, or position within the input can lead to a very different output, which is evidently not often desirable behaviour.
Pooling layers do not have any trainable parameters, and their operation is dictated entirely by the user-selected hyperparameters chosen before the commencement of model training. Instead, they act to group pixels via subsampling, throwing away excess information by combining their values into a single output. In this way, they are similar to convolutional kernels; however, instead of operating with trained parameters, they use simple fixed operations. The two most common types of pooling layers are *max pooling* and *average pooling*. Max pooling keeps only the maximum value within each of its input bins, discarding the other values; intuitively, we can think of this as finding the strongest evidence for the presence of the feature within the pooling bin and discarding the rest. Average pooling averages the values of the elements inside each pooling bin, which has the advantage that it uses some information from all the elements.
As can be imagined, the size of CNNs can increase rapidly as more layers and larger numbers of kernels are used, with each kernel producing a feature map nearly as large as its input vector. Although weight sharing keeps the number of parameters low, the number of operations still increases with increasing input size. Pooling layers are helpful for reducing redundant information, drastically reducing network size whilst also making the network more robust to small changes in the input values.
Along with the choice of operational mode, i.e. average or maximum, pooling layers share some of the same hyperparameters as convolutional kernels: size and stride. Unlike convolutional layers, the pooling stride is usually set to the same value as the pooling size, meaning that there will be no overlap between pooling bins, but also no gaps. This is due to the purpose of pooling layers, which attempt to reduce redundant information; if the stride were set to smaller values, there would be little reduction and little point to the layer.
Because pooling with stride is a form of subsampling, it does not maintain strict translational equivariance unless the pool stride is one, which, as stated, is uncommon. Thus, as most CNN models use pooling, most CNNs are neither strictly translationally invariant nor equivariant @lack_of_invariance.
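A minimal illustration of the two pooling modes on a short 1D signal is given below, with the pool stride left at its default, equal to the pool size, as described above.

```python
import tensorflow as tf

x = tf.constant([[[1.0], [4.0], [2.0], [3.0]]])            # shape: (batch, length, depth)

max_pool = tf.keras.layers.MaxPooling1D(pool_size=2)       # stride defaults to pool_size
avg_pool = tf.keras.layers.AveragePooling1D(pool_size=2)

print(max_pool(x))  # [[[4.], [3.]]] -- strongest evidence in each bin
print(avg_pool(x))  # [[[2.5], [2.5]]] -- every element contributes
```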
== Results from the Literature <cnn-literature>
Both gravitational-wave astrophysics and deep learning methods have undergone rapid advancement over the past decade, so it is perhaps unsurprising that a significant intersection between the two fields has developed @gw_machine_learning_review. Multiple artificial network architectures, CNNs @gabbard_messenger_cnn @george_huerta_cnn, autoencoders @source_agnostic_lstm, generative adversarial networks @burst_detection_gans, recurrent neural networks @bidirectional_lstm, and attention-based networks like transformers @detection_conv_transformer, have been applied to numerous gravitational-wave data analysis problems. This review will focus on efforts to apply CNN classifiers to detect features hidden within interferometer data. First, we will look at attempts to detect Compact Binary Coalescences (CBCs), followed by attempts to detect unmodelled bursts. More complex network architectures will be reviewed later when we examine attention layers in closer detail; see @skywarp-sec. This is not intended to be an exhaustive catalogue, although efforts have been made to be as complete as possible.
=== Convolutional Neural Networks (CNNs) for the detection of Compact Binary Coalescences (CBCs)
The earliest attempts at CBC classification using artificial neural networks were a pair of papers by George _et al._ @george_huerta_cnn, which was followed up by Gabbard _et al._ @gabbard_messenger_cnn. George _et al._ @george_huerta_cnn applied CNNs to the binary classification problem and basic parameter estimation. They used CNNs with two outputs to extract parameter estimates for the two companion masses of the binary system. They used the whitened outputs of single interferometers as inputs and utilised CNNs of a standard form consisting of convolutional, dense, and pooling layers; see @gabbard_diagram and @george_diagram. They evaluated two models, one smaller and one larger. In their first paper, they used only simulated noise, but they produced a follow-up paper showing the result of the model's application to real interferometer noise @george_huerta_followup. Gabbard _et al._ @gabbard_messenger_cnn used an alternate CNN design with a different combination of layers. They only used a single network architecture, and no attempt at parameter estimation was made. A differentiating feature of their paper was the training of individual network instances to recognise different SNRs. Both George _et al._ @george_huerta_cnn and Gabbard _et al._ @gabbard_messenger_cnn achieved efficiency curves that closely resembled those of matched filtering; of note, however, both were validated at considerably higher FARs ($tilde 10^(-3)$ Hz and above) than are useful for a gravitational-wave search. This will be a consistent theme throughout the literature and is one of the greatest blockers to using CNNs in gravitational-wave transient detection.
There have been many papers that follow up on these two initial attempts. Several papers with mixed results are difficult to compare due to the variety of conventions used to characterise signal amplitude and assess model performance. Luo _et al._ @luo_cnn attempted to improve the model described by Gabbard _et al._, presenting their results using a non-standard "Gaussian noise amplitude parameter". Whilst, within their own comparisons, they seem to have improved network operation over the original design, at least at higher FARs, it is difficult to make a comparison against other papers because of the unorthodox presentation. Schmitt _et al._ @schmitt_cnn attempted to compare the performance of one of the models presented in George _et al._ @george_huerta_cnn with three different model architectures: Temporal Convolutional Networks (TCNs), Gated Recurrent Units (GRUs), and Long Short-Term Memory networks (LSTMs). They seem to show that the other model architectures can achieve higher performance than CNNs, but they have used an unfamiliar waveform scaling method, so it is hard to compare to other results.
A more interesting follow-up by Fan _et al._ @multi_detector_fan_cnn took the smaller of the two models introduced in George _et al._ @george_huerta_cnn and extended it to use multiple detectors as inputs, rather than the single detectors considered by the previously mentioned studies. They do this for both detection and parameter estimation and appear to show improved accuracy results over the original paper @george_huerta_cnn, although they do not address the confounding factor of having to deal with real noise. Krastev _et al._ @krastev_bnn_cnn tested the use of the other, larger model introduced by George _et al._ @george_huerta_cnn. They tested its use on Binary Neutron Star (BNS) signals, as well as reaffirming its ability to detect BBH signals. They used significantly longer input windows to account for the longer detectable duration of BNS signals. They found BNS detection to be possible, although it proved a significantly harder problem.
Using a different style of architecture, Gebhard _et al._ @gebhard_conv_only_cnn argued that convolution-only structures are more robust and less prone to error, as they remove much of the black-box effect produced by dense layers and allow for multiple independently operating (though with overlapping input regions) networks, creating an ensemble which generates a predictive score for the presence of a signal at multiple time positions. This results in a time-series output rather than a single value, which allows the model to be agnostic to signal length. Their determination of the presence of a signal can thus rely on the overall output time series rather than just a single classification score. Similarly to Fan _et al._ @multi_detector_fan_cnn, they used multiple detector inputs.
There have been at least two papers that utilise ensemble approaches to the problem. Ensembles consist of multiple independently trained models, in the hope that the strengths of one model will counteract the weaknesses of another, under the assumption that it is unlikely for all of them to be weak in the same area. A joint decision is then taken through some mechanism that takes the result of all models into consideration, often weighting certain models' votes under certain criteria. Huerta _et al._ @huerta_fusion_cnn used an approach consisting of four independently trained models, each of which has two separate CNN branches for the LIGO Hanford and LIGO Livingston detectors, which are then merged by two further CNN layers. Still, they have efficiency results down to a lower FAR than any paper reviewed so far, at $5 times 10^(-5)$ Hz (see @literature-results), which is impressive, although the efficiency scores at these FARs are low ($<1%$). Overall, the paper is more focused on the software infrastructure for deploying neural network models. Ma _et al._ @ma_ensemble_cnn used an ensemble network that employs one of the architectures described by Gabbard _et al._ @gabbard_messenger_cnn. They utilise two "subensembles" in an arrangement in which each detector has its own ensemble composed of networks that vote on a false/positive determination; the results of the two subensembles are then combined for a final output score. They do not give efficiency scores at set SNRs, so again, it is difficult to compare against other results.
There have also been some interesting studies that use feature engineering to extract features from the input data before those features are fed into the CNN models; see @feature-eng-sec. Wang _et al._ @wang_cnn used a sparse matched filter search, wherein template banks containing only tens of templates, rather than the usual hundreds of thousands or millions, were used. The output of this sparse matched filter was then ingested by a small CNN, which attempted to classify the inputs. Notably, they used real noise from the 1#super("st") LIGO observing run @GWTC-1 and multi-detector inputs. Though an interesting method, their results appear uncompetitive with other approaches. Reza _et al._ @matched_filtering_combination used a similar approach but split the input into patches before applying the matched filter. However, their results are not presented in an easily comparable fashion. Bresten _et al._ @bresten_cnn_topology adapted one of the architectures from George _et al._ @george_huerta_cnn but applied a feature extraction step that uses a topological method known as persistent homology before the data is ingested by the network. It is an interesting approach, but their results are unconvincing. They limited their validation data to 1500 waveforms at only 100 specific SNR values in what they term their "hardest case". They showed poor results compared to other methods, suggesting their method is undeveloped and heavily SNR-tailored.
There have been at least three spectrogram-based attempts to solve the CBC detection problem. Yu _et al._ @spectrogram_cnn_2 used single-detector spectrograms, which are first analysed in strips using multiple 1D CNNs before being fed into a 2D CNN for final classification; they achieve middle-of-the-range efficiency results. Aveiro _et al._ @bns_object_detection_spectogram focused on BNS detection and used an out-of-the-box object detection network to try to detect patterns in spectrograms. They do not state efficiencies for SNRs less than ten. Finally, there was also a search paper @o2_search_cnn, which searched through the second observing run using spectrogram-based CNNs; they detected nothing of significance.
There has also been an attempt to use wavelet decomposition for the problem. Lin _et al._ @lin_wavelet_bns focused on the detection of BNS signals via wavelet decomposition, with some very promising results that appear to outperform matched filtering; a subsequent follow-up paper @li_wavelet_cnn showed that the same method could be applied to BBH signals with equal promise. They achieve an efficiency of 94% when detecting waveforms with an SNR of 2 at a FAR of $1 times 10^(-3)$ Hz, which exceeds the performance of all other methods reviewed here by a considerable margin. Their method is certainly worth investigating, but it was unfortunately missed until this thesis was in the latter stages of construction, so no wavelet decomposition methods have been attempted.
There have also been a number of papers utilising CNNs for specialised detection cases, such as mass-asymmetric CBCs @mass_asymetric_cbcs by Andrés-Carcasona _et al._, who employ spectrogram-based CNNs to run a search over O3, and eccentric CBCs by Wei _et al._ @wei_cnn. The latter also focuses on early detection, along with a few other papers @early_alert_bbh @early_alert_bns @early_detection that attempt to find CBC inspirals before they are detectable by standard methods. There have also been a number of papers that discuss the use of CNNs for the analysis of data from future space-based detectors @space_detection @space_detection_2. For brevity, and as they are less relevant to our problems, these special cases will not be discussed here.
As can be seen, it is very difficult to compare the performance of many of the architectures and methods presented in the literature. The results are presented at wildly different FARs and SNR ranges, often using different, incomparable metrics and with varying levels of rigour. There is a tendency to apply new tools and ideas to the problem without careful thought about how the results can be standardised. @literature-results displays results from some of the papers that were found to have at least somewhat comparable metrics.
#set page(
flipped: true
)
#figure(
table(
columns: (auto, 100pt, auto, auto, auto, auto, auto, auto, auto, auto, auto),
inset: 10pt,
align: horizon,
[*Name*], [*Model*], [*Real Noise?*], [*Detectors*], [*Target*], [*Feature*], [*SNR Tailored*], [*FAR*], [*Acc 8*], [*Acc 6*], [*Acc 4*],
[George _et al._ @george_huerta_cnn], [ Smaller Novel CNN ], [No], [Single], [BBH], [No], [No], [$5 times 10^(-2)$], [0.98], [0.70], [0.16],
[-], [Larger Novel CNN ], [-], [-], [-], [-], [-], [-], [0.99], [0.80], [0.21],
[George _et al._ @george_huerta_followup], [-], [ #text(red)[*Yes*] ], [-], [-], [-], [-], [-], [0.98], [0.77], [0.18],
[Gabbard _et al._ @gabbard_messenger_cnn], [Novel CNN], [No], [Single], [BBH], [No], [#text(red)[*Yes*]], [$1 times 10^(-1)$], [1.0], [0.88], [0.44],
[-], [-], [-], [-], [-], [-], [-], [$1 times 10^(-2)$], [0.99], [0.69], [0.10],
[-], [-], [-], [-], [-], [-], [-], [$1 times 10^(-3)$], [0.98], [0.49], [0.02],
[Fan _et al._ @multi_detector_fan_cnn], [ Based on George et al. Small], [No], [#text(red)[*Three*]], [BBH], [No], [No], [$4 times 10^(-2)$], [0.99], [0.84], [0.32],
[Krastev _et al._ @krastev_bnn_cnn ], [Based on George et al. Large], [No], [Single], [#text(red)[*BNS*]], [No], [No], [$1 times 10^(-1)$], [0.71], [0.42], [0.20],
[-], [-], [-], [-], [-], [-], [-], [$1 times 10^(-2)$], [0.32], [0.10], [0.02],
[-], [-], [-], [-], [-], [-], [-], [$1 times 10^(-3)$], [0.11], [0.00], [0.00],
[Gebhard _et al._ @gebhard_conv_only_cnn], [#text(red)[*Novel Conv-Only Model*]], [#text(red)[*Yes*]], [#text(red)[*Two*]], [BBH], [No], [No], [$1.25 times 10^(-3)$], [0.83], [0.35], [Not Given],
[Wang _et al._ @wang_cnn], [-], [#text(red)[*Yes*]], [#text(red)[*Two*]], [BBH], [#text(red)[*Matched Filter*]], [No], [$1 times 10^(-1)$], [0.60], [0.24], [0.12],
[-], [-], [-], [-], [-], [-], [-], [$1 times 10^(-2)$], [0.30], [0.05], [0.00],
[-], [-], [-], [-], [-], [-], [-], [$1 times 10^(-3)$], [0.08], [0.00], [0.00],
[Huerta _et al._ @huerta_fusion_cnn], [#text(red)[*Novel Ensemble*]], [Yes], [Two], [BBH], [No], [No], [#text(red)[*$5 times 10^(-4)$*]], [0.20], [0.15], [Not Given],
[-], [-], [-], [-], [-], [-], [-], [#text(red)[*$5 times 10^(-5)$*]], [0.01], [0.001], [Not Given],
[Yu _et al._ @spectrogram_cnn_2], [#text(red)[*Novel Multi-Branched CNN*]], [Yes], [Single], [BBH], [#text(red)[*Spectrogram*]], [No], [$6 times 10^(-2)$], [0.89], [0.67], [0.20]
),
caption: [A comparison of results from the literature; red values indicate the distinguishing feature of each study. Note: some accuracy values were extracted from plots by eye, so substantial error may have been introduced. Some results were not included as they did not state comparable performance metrics. ]
) <literature-results>
#set page(
flipped: false
)
=== Convolutional Neural Networks (CNNs) for the detection of Gravitational-Wave Bursts
The literature surrounding burst detections with CNNs is considerably more limited than for CBCs. In all of the previously mentioned deep-learning studies, the training of the network has relied on accurate models of CBC waveforms. As has been noted, the availability of reliable waveforms for other potential gravitational-wave sources, i.e. bursts, is considerably narrower due to unknown physical processes, large numbers of free parameters, and computational intractability, making it nearly impossible to have a sufficiently sampled template bank.
Despite this, there have been some attempts, most notably using simulated supernova waveforms, as these are the most likely candidates for initial burst detection. There have been at least five attempts to classify supernovae with this method. Iess _et al._ @supernovae_cnn_1 used a CNN model with two separate inputs; a 1D time series and a 2D spectrogram were fed into different input branches of the model. They used supernova signals taken from simulation catalogues, along with a simple phenomenological model for two transient glitch classes, in order to train the CNN to distinguish between the glitches and supernovae, in the hope that if a supernova signal were to appear in future interferometer data, it could be identified as such rather than being ruled out as a glitch. Perhaps unsurprisingly, due to the complexity of the signal compared to CBCs, they require a significantly higher SNR in order to achieve accuracy results similar to the CBC case, although they still achieve some efficiency at lower SNRs. Chan _et al._ @supernovae_cnn_2 trained a CNN using simulated core-collapse supernova signals drawn from several catalogues covering both magnetorotational-driven and neutrino-driven supernovae. They measured the ability to detect the signal and correctly classify which of the two types it fell into. They used moderately deep CNNs and emphasised the importance of multi-detector inputs for the task. They found it possible to detect magnetorotational-driven events at considerably greater distances than neutrino-driven supernovae.
Lopez _et al._ @supernovae_cnn_3 @supernovae_cnn_4 forgo the use of simulated template banks in their training in favour of a phenomenological approach, in an attempt to avoid the problem of small template banks. They used an intricate model architecture composed of mini-inception-resnets to detect supernova signals in time-frequency images of LIGO-Virgo data. Mini-inception resnets consist of multiple network branches of different lengths, which run in parallel before combining to produce a final classification score. Having some paths through the network that are shorter than others can be beneficial to avoid the vanishing gradient problem, wherein gradients fall off to zero within the network; having shortcuts allows the network to maintain a clearer view of the inputs even when other paths have become deep @skip_connections. Blocks of layers within networks that have *skip connections* periodically like this are known as residual blocks @res_net_intro and allow much deeper architectures than would otherwise be possible. Networks that employ skip connections are known as *residual networks* or *resnets* @res_net_intro. Inception designs have multiple different network branches, all consisting of residual blocks, so there are many paths through the network from input vector to output.
Sasaoka _et al._ @supernovae_spectrogram @sasaoka_resnet use gradient-weighted feature maps to train CNNs to recognise supernova spectrograms. They utilised core-collapse supernova waveforms from a number of catalogues. However, they only achieved good classification performance at #box("1" + h(1.5pt) + "kpc"). They attributed some of their difficulties to features lost to the lower resolution of their time-frequency maps and recommended trying a different algorithm for their generation.
There have also been a few attempts to apply CNNs to the problem of unmodelled signal detection, looking for generic signals using methods that do not require a precisely tailored training set. As has been discussed, we do not yet know how well our simulations will align with the gravitational-wave emission of real supernovae, and it is hard to tell whether the differences between our training datasets and the real signals will significantly hinder our models' ability to detect real signals. Such difficulty is certainly a possibility; deep learning models can be very sensitive to changes in their input distribution and can lose significant efficacy when applied to out-of-distribution examples. If a sufficiently sensitive generic model could be trained, this problem would be alleviated.
Marianer _et al._ @semi-supervised attempt a generic detection method via anomaly recognition. This is not a novel idea in the field of machine learning; however, its application to a generic burst search is intriguing. They apply their model to spectrograms of known transient noise glitches and use a mini-inception resnet to classify the input spectrograms into glitch classes. By examining the feature space of the classifier as it processes data, i.e. the feature maps of the neurons in internal convolutional layers, they utilise two anomaly detection methods to identify when a particular feature space does not look like it belongs to any of the known classes. This means they do not rely directly on the model to output a "novel" class. The latter poses a difficult problem, as it is unclear how to ensure the training set is well sampled over every possible counterexample.
The other work on unmodelled burst detection, and the most relevant to this thesis, is MLy @MLy. MLy is a deep learning pipeline that relies on CNN models, which are trained to directly identify coherence between multiple detectors rather than using any pattern recognition or anomaly rejection techniques. This makes it somewhat unique amongst the methods presented. Rather than using a dataset consisting of particular morphologies of signal, MLy utilises distributions of generic White Noise Burst (WNB) signals that, in their entirety, will cover all possible burst morphologies within a certain frequency range and duration. One would note that these distributions would also cover all possible glitch morphologies within that parameter space. Therefore, MLy is trained to notice not only the presence of a signal but also the coherence of that signal across detectors. In that sense, it is similar to the operation of many of the preexisting burst pipelines, though it is the only purely machine-learning pipeline to attempt to do this.
MLy achieves this goal by utilising two independent CNN models: the coincidence model, which looks simply for excess power across detectors, and the coherence model, which attempts to determine coherence between detections. The second model is fed feature-engineered data in the form of the rolling Pearson correlation between pairs of detectors, computed for a number of integer-sample time shifts spanning the maximum physical arrival-time difference between the two detectors in question. It does this for the two LIGO detectors and the Virgo detector. It is trained on four types of example: pure noise, noise with a simulated transient (a WNB) in one detector only, noise with a simulated transient in all three detectors but with enforced incoherence, and coherent WNBs projected into the three detectors in a physically realistic manner. Using this method, the coherence network can learn to differentiate between coincident glitches and coherent signals.
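As a rough illustration of the coherence features just described, the sketch below computes the Pearson correlation between two detector strains over a range of relative time shifts; the edge handling and shift range here are simplifying assumptions for illustration, not the exact MLy implementation.

```python
import numpy as np

def shifted_pearson(strain_a, strain_b, max_shift_samples):
    """Pearson correlation between two strains at each relative time shift."""
    correlations = []
    for shift in range(-max_shift_samples, max_shift_samples + 1):
        # Shift one strain relative to the other; a real implementation would
        # treat the wrapped samples at the array edges more carefully.
        correlations.append(np.corrcoef(strain_a, np.roll(strain_b, shift))[0, 1])
    return np.array(correlations)
```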
As baseline models to compare with the results of our improvement attempts, in the next section we will train five model architectures using the GravyFlow data acquisition and training pipeline @gwflow_ref. Since they are the basis of many of the subsequent attempts at CBC detection, we will train both models presented in George _et al._ @george_huerta_cnn and the model presented in Gabbard _et al._ @gabbard_messenger_cnn, along with, for the coherence case, the two models of the MLy pipeline @MLy.
=== CBC Detection Recreation
We present an attempt to recreate the models and training procedures presented in George _et al._ @george_huerta_cnn and Gabbard _et al._ @gabbard_messenger_cnn; see @george_diagram and @gabbard_diagram, respectively. The model architectures themselves were recreated as closely as possible to how they are presented in the literature, except for the addition of a GravyFlow whitening layer as the first layer of each, in order to replicate the data conditioning performed in both studies. These models will act as performance baselines as we move forward and try to improve their operation. Rather than trying to recreate the exact training procedures and training distributions from the literature, however, which could end up being a difficult task, we have standardised the training procedure to achieve parity with the previously conducted perceptron experiments. See @perceptron-training-parameters for details of the parameters used in the training procedure.
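In outline, each recreation simply prepends the whitening step to the published architecture. The sketch below shows the idea only; the `gravyflow` import and the `Whiten` layer name are assumed identifiers for illustration and should not be read as the library's actual API.

```python
import tensorflow as tf
import gravyflow as gf  # assumed import name, for illustration only

def build_recreation(published_layers, input_length):
    # Prepend a whitening layer so the model ingests whitened strain, as in
    # the original studies, then append the architecture as published.
    return tf.keras.Sequential([
        tf.keras.layers.Input(shape=(input_length, 1)),
        gf.Whiten(),  # assumed layer name standing in for GravyFlow whitening
        *published_layers,
    ])
```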
#figure(
grid(
columns: 1,
rows: 2,
gutter: 1em,
[ #image("george_small_diagram.png", width: 98%) ],
[ #image("george_large_diagram.png", width: 100%) ],
),
caption: [The two CNN architectures presented in George _et al._ @george_huerta_cnn. _Upper:_ Smaller model. _Lower:_ Larger model.]
) <george_diagram>
#figure(
image("gabbard_diagram.png", width: 100%),
caption: [CNN architecture from Gabbard _et al._ @gabbard_messenger_cnn.]
) <gabbard_diagram>
==== Training
Looking at the training plots, shown in @cnn_single_accuracy, it is immediately obvious that these models drastically outperform any of the perceptrons. This is perhaps unsurprising, as the CNN architecture was specifically designed for pattern recognition in images and time series. The training parameters are identical to those used for the single-detector perceptron; see @perceptron-training-parameters. The models also train more rapidly, saturating their validation loss in fewer epochs than was required for the perceptrons, which reached only a lower accuracy even with a longer training time; see @cnn_single_loss. This is also as expected; more of the function that the networks are attempting to approximate has been predefined in the CNN's architecture. A convolutional kernel learning to recognise a feature can train on instances of that feature wherever they appear in the input. If a similar kernel-like structure were to develop in a dense layer, each instance of that structure would have to learn to recognise the feature from its appearances at a single location alone, which will occur far less often than appearances of the feature as a whole. Similar to the perceptron experiments, further validation results are presented in @cnn-validation-sec.
#figure(
grid(
columns: 1,
rows: 2,
gutter: 1em,
[ #image("cnn_single/training_accuracy_cnn.png", width: 100%) ],
[ #image("cnn_single/validation_accuracy_cnn.png", width: 100%) ],
),
caption: [The accuracy history of attempts to retrain Convolutional Neural Networks (CNNs) with architectures adapted from the literature using the GravyFlow pipeline. A custom GravyFlow whitening layer has been added to the start of each model in order to reproduce the whitening data conditioning step applied in the original studies. The structure of the models is otherwise identical. Differences in the training and validation procedures, however, may lead to slightly different results than in the original studies. Rather than exactly attempting to mimic the datasets and training process used in each of these studies, it has been kept consistent with the other results throughout the thesis, in order to facilitate comparison. The models presented are the two models from George _et al._ @george_huerta_cnn, labelled "George Small" and "George Large" to differentiate them in terms of parameter count, and the single model from Gabbard _et al._ @gabbard_messenger_cnn. The network structure of these models can be seen in @george_diagram and @gabbard_diagram, respectively. The training and validation datasets were maintained from the perceptron single-detector training experiment. The dataset contains IMRPhenomD waveforms generated using cuPhenom @cuphenom_ref injected into real interferometer noise sampled from the LIGO Livingston detector during the 3#super("rd") joint observing run @O2_O3_DQ. The optimal SNR of waveforms injected into the training and validation sets was uniformly distributed between 8 and 15. Input was from a single detector only. Each epoch consisted of $10^5$ training examples, and it should be noted that, unlike regular training pipelines, each training epoch consisted of newly generated waveforms injected into unseen noise segments, though the validation examples are consistent. Training of each model was halted after ten consecutive epochs with no improvement to validation loss, the values of which are shown in @cnn_single_loss. Validation noise was drawn from a separate pool of data segments inaccessible to the training data loader. It is immediately clear that this is a huge improvement over the perceptron models, and it makes it evident why we abandoned the idea of perceptrons so quickly. Both the training and validation accuracies jump to above 90% almost immediately, and in the case of the model from Gabbard _et al._ and the larger of the models from George _et al._, they plateau at approximately 98% accuracy, with only marginal improvements from there. The smaller model from George _et al._ plateaus closer to 96% accuracy. Considering that waveforms in both the training and validation datasets are injected with optimal SNRs drawn uniformly between 8 and 15, this demonstrates good performance. The fact that two of the models plateau at statistically similar accuracies with quite different architectures suggests that they are approaching the detectability limit in both cases. An interesting examination will be to compare their performance with FAR-calibrated detection thresholds. _Upper:_ Plot of model accuracies when measured with training data ($10^5$ epoch-unique examples). Visit #link("https://tinyurl.com/mwxfvp33") for interactive plots. _Lower:_ Plot of model accuracies when measured with validation data ($10^4$ epoch-consistent examples).]
) <cnn_single_accuracy>
#figure(
grid(
columns: 1,
rows: 2,
gutter: 1em,
[ #image("cnn_single/training_loss_cnn.png", width: 100%) ],
[ #image("cnn_single/validation_loss_cnn.png", width: 100%) ],
),
caption: [The loss history of attempts to retrain Convolutional Neural Networks (CNNs) from the literature using the GravyFlow pipeline. These loss values correspond to the accuracies displayed in @cnn_single_accuracy. The models presented are the two models from George _et al._ @george_huerta_cnn, labelled "George Small" and "George Large" to differentiate them in terms of parameter count, and the single model from Gabbard _et al._ @gabbard_messenger_cnn. The network structure of these models can be seen in @george_diagram and @gabbard_diagram, respectively. The training and validation datasets were maintained from the perceptron single-detector training experiment. The dataset contains IMRPhenomD waveforms generated using cuPhenom @cuphenom_ref and real interferometer noise sampled from the LIGO Livingston detector during the 3#super("rd") joint observing run @O2_O3_DQ. The optimal SNR of waveforms injected into the training and validation sets was uniformly distributed between 8 and 15. Input was from a single detector only. Each epoch consisted of $10^5$ training examples, and it should be noted that, unlike regular training pipelines, each training epoch consisted of newly generated waveforms injected into unseen noise segments, though the validation examples are consistent. The loss is the metric used to determine when training is halted; this is done after ten epochs have passed with no improvement. Again, we can see that this is a vast improvement over the perceptron case, see @perceptron_single_loss, at least in the time frame that is monitored, with loss values quickly falling to a region with a much smaller reduction gradient and then gradually improving from there with diminishing returns. It is these diminishing returns that can have a great impact on the ability of the model to sustain high accuracies with low FAR thresholds. Visit #link("https://tinyurl.com/mwxfvp33") for interactive plots. _Upper:_ Plot of model losses when measured with training data ($10^5$ epoch-unique examples). _Lower:_ Plot of model losses when measured with validation data ($10^4$ epoch-consistent examples).]
) <cnn_single_loss>
==== Validation <cnn-validation-sec>
The validation results portray a similar picture --- vastly improved performance over the perceptron results. We can see in the False Alarm Rate (FAR) curves, @cnn_far_single, that we can utilise significantly lower FARs without dramatically increasing the required score threshold. In most cases, we expect this to allow higher efficiencies at lower FARs if the model has gained adequate detection ability. The benefit of this is displayed in the efficiency plots, @cnn_efficiency_curves_single, and the ROC plots, @roc_curves_single. We can very clearly see that these models are dramatically improved over the perceptron case, whose efficiency curves can be seen in @perceptron_efficiency_curves_single and whose ROC curves can be seen in @perceptron_roc_curve.
#figure(
image("cnn_single/far_plot.png", width: 100%),
caption: [False Alarm Rate (FAR) plotted against the score threshold required to achieve that FAR, for three recreations of models from the literature. Two models are adapted from George _et al._ @george_huerta_cnn, labelled "George Small" and "George Large" to differentiate them in terms of model parameter count, and the single model from Gabbard _et al._ @gabbard_messenger_cnn was also adapted. The network structure of these models can be seen in @george_diagram and @gabbard_diagram, respectively. The presented FAR curves sit significantly lower than those achieved by the perceptrons in the single-detector case; see @perceptron_single_far. This means that we will be able to achieve lower FARs with lower score thresholds, which typically, though not necessarily, leads to higher efficiencies at those FARs. We explore the efficiency results in @cnn_efficiency_curves_single. Visit #link("https://tinyurl.com/2s3dtd8a") for interactive plots.]
) <cnn_far_single>
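For context, the threshold calibration behind these curves can be sketched simply: score a large pool of noise-only examples and choose the score that would be exceeded at the desired rate. The function below is an illustrative assumption about how such a calibration might look, not the GravyFlow implementation.

```python
import numpy as np

def far_threshold(noise_scores, target_far_hz, seconds_per_example):
    """Score threshold whose false alarm rate on noise is target_far_hz."""
    total_duration = len(noise_scores) * seconds_per_example
    allowed_false_alarms = int(target_far_hz * total_duration)
    ordered = np.sort(noise_scores)[::-1]  # highest scores first
    # The threshold sits at the score exceeded by the allowed number of
    # noise triggers over the pool's total duration.
    return ordered[allowed_false_alarms]
```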
#figure(
grid(
image("cnn_single/efficiency_curve_0_1.png", width: 100%),
image("cnn_single/efficiency_curve_0_01.png", width: 100%),
image("cnn_single/efficiency_curve_0_001.png", width: 100%),
image("cnn_single/efficiency_curve_0_0001.png", width: 100%),
),
caption: [Model efficiency curves for three models adapted from the literature. Two models are adapted from George _et al._ @george_huerta_cnn, labelled "George Small" and "George Large" to differentiate them in terms of model parameter count, and the single model from Gabbard _et al._ @gabbard_messenger_cnn was also adapted. The network structure of these models can be seen in @george_diagram and @gabbard_diagram, respectively. These models verify that CNNs can achieve much higher accuracies within the training regime utilised, even when using threshold scores that are calibrated to specific False Alarm Rates (FARs). The perceptron efficiency curves for the single-detector CBC detection case can be seen in @perceptron_efficiency_curves_single. The CNNs achieve higher accuracies almost across the board at the highest FARs depicted, #box("0.1" + h(1.5pt) + "Hz") and #box("0.01" + h(1.5pt) + "Hz"), except at SNRs where detection becomes virtually impossible ($<2$), in which case they perform similarly. They are also able to achieve results at lower FARs, #box("0.001" + h(1.5pt) + "Hz") and #box("0.0001" + h(1.5pt) + "Hz"); at these FARs, the perceptron models had negligible performance and were not depicted, so this is a significant improvement. Visit #link("https://tinyurl.com/2s3dtd8a") for interactive plots. _First:_ Efficiency curves at a FAR of #box("0.1" + h(1.5pt) + "Hz"). _Second:_ Efficiency curves at a FAR of #box("0.01" + h(1.5pt) + "Hz"). _Third:_ Efficiency curves at a FAR of #box("0.001" + h(1.5pt) + "Hz"). _Fourth:_ Efficiency curves at a FAR of #box("0.0001" + h(1.5pt) + "Hz").]
) <cnn_efficiency_curves_single>
#figure(
image("cnn_single/roc_8.png", width: 100%),
caption: [Receiver Operating Characteristic (ROC) curves for three models adapted from the literature. Two models are adapted from George _et al._ @george_huerta_cnn, labelled "George Small" and "George Large" to differentiate them in terms of model parameter count, and the single model from Gabbard _et al._ @gabbard_messenger_cnn was also adapted. The network structure of these models can be seen in @george_diagram and @gabbard_diagram, respectively. In comparison with the ROC curves achieved by the perceptron models, see @perceptron_roc_curve, which at an optimal SNR of 8 appear to be almost randomly guessing, this is a significant improvement. The curves shown illustrate the models operating on a pool of injected signals at an optimal SNR of 8. Visit #link("https://tinyurl.com/2s3dtd8a") for interactive plots.]
) <roc_curves_single>
It would appear from our investigation that CNNs offer a far superior solution to the single-detector CBC detection problem than perceptrons do. Several questions remain, however. Are we nearing the limit of the CNN's capacity to solve this problem, or could further hyperparameter tuning squeeze additional performance out of the architecture, especially at low False Alarm Rates? There have been many attempts to improve on these models throughout the literature @spectrogram_cnn_2 @krastev_bnn_cnn, but there is no systematic algorithm to search across all possible solutions (or at least a large number of possible solutions) to find the optimal detection architecture and training procedure. We investigate this in @dragonn-sec, where we attempt to use genetic algorithms to search the large hyperparameter space presented for more optimal solutions. There are also several more recently developed architectures, including attention-based models @attention_is_all_you_need, which offer alternative and possibly superior replacements for convolutional layers; we explore the use of attention layers for CBC classification in @skywarp-sec. Finally, there are other, potentially more challenging problems facing gravitational-wave data science, including parameter estimation. In @crosswave-sec, we tackle a special case wherein two overlapping signals are present in our input data and examine whether we can separate the cases of single and overlapping signals. We then test whether we can extract parameters from each signal, both in aid of alternative parameter estimation methods and potentially as a precursor to a full machine-learning-based parameter estimator.
|
https://github.com/TypstApp-team/typst | https://raw.githubusercontent.com/TypstApp-team/typst/master/tests/typ/visualize/shape-fill-stroke.typ | typst | Apache License 2.0 | // Test shape fill & stroke.
---
#let variant = rect.with(width: 20pt, height: 10pt)
#let items = for (i, item) in (
variant(stroke: none),
variant(),
variant(fill: none),
variant(stroke: 2pt),
variant(stroke: eastern),
variant(stroke: eastern + 2pt),
variant(fill: eastern),
variant(fill: eastern, stroke: none),
variant(fill: forest, stroke: none),
variant(fill: forest, stroke: conifer),
variant(fill: forest, stroke: black + 2pt),
variant(fill: forest, stroke: conifer + 2pt),
).enumerate() {
(align(horizon)[#(i + 1).], item, [])
}
#grid(
columns: (auto, auto, 1fr, auto, auto, 0fr),
gutter: 5pt,
..items,
)
---
// Test stroke folding.
#let sq(..args) = box(square(size: 10pt, ..args))
#set square(stroke: none)
#sq()
#set square(stroke: auto)
#sq()
#sq(fill: teal)
#sq(stroke: 2pt)
#sq(stroke: blue)
#sq(fill: teal, stroke: blue)
#sq(fill: teal, stroke: 2pt + blue)
---
// Test stroke composition.
#set square(stroke: 4pt)
#set text(font: "Roboto")
#stack(
dir: ltr,
square(
stroke: (left: red, top: yellow, right: green, bottom: blue),
radius: 50%, align(center+horizon)[*G*],
inset: 8pt
),
h(0.5cm),
square(
stroke: (left: red, top: yellow + 8pt, right: green, bottom: blue + 2pt),
radius: 50%, align(center+horizon)[*G*],
inset: 8pt
),
h(0.5cm),
square(
stroke: (left: red, top: yellow, right: green, bottom: blue),
radius: 100%, align(center+horizon)[*G*],
inset: 8pt
),
)
// Join between different solid strokes
#set square(size: 20pt, stroke: 2pt)
#set square(stroke: (left: green + 4pt, top: black + 2pt, right: blue, bottom: black + 2pt))
#stack(
dir: ltr,
square(),
h(0.2cm),
square(radius: (top-left: 0pt, rest: 1pt)),
h(0.2cm),
square(radius: (top-left: 0pt, rest: 8pt)),
h(0.2cm),
square(radius: (top-left: 0pt, rest: 100pt)),
)
// Join between solid and dotted strokes
#set square(stroke: (left: green + 4pt, top: black + 2pt, right: (paint: blue, dash: "dotted"), bottom: (paint: black, dash: "dotted")))
#stack(
dir: ltr,
square(),
h(0.2cm),
square(radius: (top-left: 0pt, rest: 1pt)),
h(0.2cm),
square(radius: (top-left: 0pt, rest: 8pt)),
h(0.2cm),
square(radius: (top-left: 0pt, rest: 100pt)),
)
|
https://github.com/tgeorg-ethz/Tanzquotient-Playlist-Printer | https://raw.githubusercontent.com/tgeorg-ethz/Tanzquotient-Playlist-Printer/main/README.md | markdown | MIT License | # Tanzquotient-Playlist-Printer
Typst template and Python script to generate a playlist printout for TQ Open Dancing events
Made up of
- a Python script to get a Spotify playlist, parse it, and associate dances with different songs
- a Typst document that takes the generated CSV to create a PDF to print out
|
https://github.com/fenjalien/metro | https://raw.githubusercontent.com/fenjalien/metro/main/tests/complex/complex-root-position/test.typ | typst | Apache License 2.0 | #import "/src/lib.typ": *
#set page(width: auto, height: auto, margin: 1cm)
#complex(67, -0.9)
#complex(67, -0.9, complex-root-position: "before-number")
#complex(67, -0.9, complex-root-position: "after-number") |
https://github.com/NOOBDY/formal-language | https://raw.githubusercontent.com/NOOBDY/formal-language/main/q4.typ | typst | The Unlicense | #let q4 = [
4. (Fun with regex) Give regular expressions describing the following languages. In all cases, the alphabet is ${0, 1}$.
+ ${w | w "contains with at least three 1s"}$.
$Sigma^ast (1 Sigma^ast)^3$
+ ${w | w "contains exactly two 0s and at least two 1s"}$.
= TODO
+ ${w | "every odd position in" w "is 1"}$.
$(10 union 11)^ast (1 union epsilon.alt)$
]
|
https://github.com/jemus42/typst-slides-bips | https://raw.githubusercontent.com/jemus42/typst-slides-bips/main/bips.typ | typst | MIT License | #import "@preview/polylux:0.2.0": *
#let BIPS_en = [Leibniz Institute for Prevention Research and Epidemiology -- BIPS]
#let BIPS_de = [Leibniz-Institut für Präventionsforschung und Epidemiologie -- BIPS]
#let bips-colors = (
white: rgb(253, 253, 253),
blue: rgb(23, 99, 170),
gray: rgb(66, 66, 66),
orange: rgb(250, 133, 55),
green: rgb(49, 210, 57)
)
#let base-size = 22pt
#let title-size = 42pt
#let header-size = 28pt
//#let base-size = 2em
//#let title-size = 1em
//#let header-size = 1.05em
#let bips-author-main = state("bips-author-main", none)
#let bips-author-main-email = state("bips-author-main-email", none)
#let bips-lang = state("bips-lang", "english")
#let bips-institute-name = state("bips-institute-name", none)
#let bips-institute-web = state("bips-institute-web", none)
#let bips-contact-header = state("bips-contact-header", none)
#let bips-logo = state("bips-logo", none)
// #let divider = line.with(length: 90%, stroke: rgb("e4e5ea"))
#let gradient(height: 2pt) = {
box(width: 90%,
for x in range(150, 250, steps: 1) {
box(rect(width: 1%, height: height, fill: luma(x)))
}
)
}
#let title-case(string) = {
string.replace(
regex("[A-Za-z]+('[A-Za-z]+)?"),
word => upper(word.text.first()) + lower(word.text.slice(1)),
)
}
// Pre-set block of logo in top right, with or without numbering (for title slide)
// This feels pretty weird syntactically.
#let logoblock(number: true) = {
locate(loc => {
let logo_img = image(bips-logo.at(loc), height: 15%, width: auto)
// Workaround until polylux is updated to export logic module
// https://github.com/andreasKroepelin/polylux/issues/61#issuecomment-1654348478
let slide_num = themes.simple.logic.logical-slide.display()
set align(top + right)
set text(size: 20pt)
if (number) {
pad(rest: 30pt)[
#logo_img
#pad(right: 35pt, top: -14pt, slide_num)
]
} else {
pad(rest: 30pt, logo_img)
}
})
}
// This feels poorly thought out. It is.
#let logo(height: 100%, width: auto) = {
locate(loc => {
image(bips-logo.at(loc), height: height, width: width)
})
//image(logo, height: height, width: width)
}
#let bips-theme(
aspect-ratio: "16-9",
author_corresponding: (name: none, email: none),
logo: "logo.png",
lang: "english",
body) = {
set page(
paper: "presentation-" + aspect-ratio,
fill: bips-colors.white,
margin: (x: 10%, top: 0%, bottom: 5%)
)
// Not sure if necessary to adapt text
// language and region settings
let text-lang = (lang: none, region: none)
if (lang == "english") {
text-lang = (lang: "en", region: "US")
} else if (lang == "german") {
text-lang = (lang: "de", region: "DE")
}
// Default text options for slide contents
set text(
fill: bips-colors.gray,
size: base-size,
font: "Fira Sans",
weight: "light",
lang: text-lang.lang,
region: text-lang.region
)
// Widen spacing in lists
// Tight lists use par(leading: ) which is annoying as it also changes line spacing within
// list items. Ideally I'd like to convert all tight lists to non-tight lists.
show list.where(tight: true): it => [
#set list(marker: "\u{25CF}")
#set par(leading: 1.1em)
#it
]
show list.where(tight: false): set list(marker: "\u{25CF}", spacing: 1.5em)
// Override heading defaults just so I can use semantic elements rather than showing text around
show heading.where(level: 1): it => [
#set align(center)
#set text(size: title-size, fill: bips-colors.blue, weight: "regular")
#block(it.body)
]
show heading.where(level: 2): it => [
#set align(center)
#set text(size: header-size, fill: bips-colors.blue, weight: "regular")
#block(it.body)
]
show raw.where(block: true): it => {
set text(font: "Fira Mono", size: base-size - 7pt)
it
}
bips-author-main.update(author_corresponding.name)
bips-author-main-email.update(author_corresponding.email)
bips-lang.update(lang)
bips-logo.update(logo)
body
}
#let title-slide(
title: [],
subtitle: [],
author: none,
institute: none,
date: datetime.today().display(),
occasion: none
) = {
polylux-slide({
// Setting this inside here messes up layout by introducing empty space,
// setting it outside polylux-slide() also causes issues down the line.
// set page(background: logoblock(number: false))
// This is scuffed.
pad(top: 30pt, right: -50pt, align(right + top, logo(height: 15%)))
if (author == none) {
author = bips-author-main.display()
}
if (institute == none) {
locate(loc => {
if (bips-lang.at(loc) == "english") {
bips-institute-name.update(BIPS_en)
} else if (bips-lang.at(loc) == "german") {
bips-institute-name.update(BIPS_de)
}
})
} else {
bips-institute-name.update(institute)
}
set align(center + horizon)
heading(level: 1, title-case(title))
set text(fill: bips-colors.blue)
text(subtitle, weight: "regular")
v(5%)
set text(size: 20pt)
author
parbreak()
set text(fill: bips-colors.gray)
bips-institute-name.display()
parbreak()
v(2cm)
date
parbreak()
occasion
})
}
// Cell with outline to debug grid arrangements
#let cell = rect.with(
inset: 5pt,
// fill: rgb("efefef"),
width: 100%, height: 100%,
radius: 0pt
)
#let slide(title: [], body) = {
set page(background: logoblock())
polylux-slide({
grid(
columns: (100%),
rows: (20%, 1%, 79%),
gutter: 0pt,
//cell()
{
set align(horizon)
pad(right: 8%, heading(level: 2, title))
},
align(horizon, gradient()),
//cell()
{
set align(left + horizon)
text(body, fill: bips-colors.gray)
},
)
})
}
#let thanks(thankstext: [], body) = {
polylux-slide({
locate(loc => {
if (bips-lang.at(loc) == "english") {
bips-contact-header.update("Contact")
bips-institute-web.update(link("https://leibniz-bips.de/en")[www.leibniz-bips.de/en])
} else if (bips-lang.at(loc) == "german") {
bips-contact-header.update("Kontakt")
bips-institute-web.update(link("https://leibniz-bips.de")[www.leibniz-bips.de])
}
})
// One large area above and two columns below, resulting in 3 cells
// But need two grids I guess. Probably possible to do with just one 2 column
// situation below a normal area with some fixed vertical spacing.
grid(
columns: 1,
rows: (10%, 40%, 10%, 40%),
gutter: 0em,
[], // Should have maybe set a top margin instead
//cell()
[
#set align(center + horizon)
#set text(fill: bips-colors.blue)
#text(weight: "regular", size: 25pt)[#thankstext]
#body
],
//cell()
[
#set align(center + top)
#set text(size: 1em)
#bips-institute-web.display()
],
//cell()
[
#grid(
columns: (55%, 45%), rows: 100%, gutter: 0em,
//cell()
[
#set align(right + horizon)
#set text(size: 0.85em)
#set par(leading: 0.75em) // Default 0.7em I think
#text(weight: "regular")[#bips-contact-header.display()] \
#text(fill: bips-colors.blue)[#bips-author-main.display()] \
#bips-institute-name.display() \
Achterstraße 30 \
D-28359 Bremen \
#bips-author-main-email.display()
],
//cell()
[
#set align(right + horizon)
#logo(height: 80%)
]
)
]
)
})
}
#let slide-references(
title: none,
bib: "references.bib",
style: "apa",
text_size: base-size - 10pt,
body) = {
set page(background: logoblock(number: false))
polylux-slide({
let bib = bibliography(bib, title: none, style: style)
grid(
columns: (100%),
rows: (20%, 1%, 79%),
gutter: 0pt,
[
#set align(horizon)
#set pad(right: 8%)
#heading(level: 2, title)
],
align(horizon, gradient()),
//cell()
[
#set align(left + horizon)
#set text(fill: bips-colors.gray, size: text_size)
#bib
],
)
})
}
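// Hypothetical usage sketch (the file name, author name, and email are
// assumptions, not part of the original file):
//
//   #import "bips.typ": *
//   #show: bips-theme.with(
//     author_corresponding: (name: "Jane Doe", email: "jane.doe@example.org"),
//   )
//   #title-slide(title: "my talk", occasion: "Some seminar")
//   #slide(title: "A slide")[Hello!]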
|
https://github.com/frectonz/the-pg-book | https://raw.githubusercontent.com/frectonz/the-pg-book/main/book/174.%20pgh.html.typ | typst | pgh.html
How to Make Pittsburgh a Startup Hub
April 2016
(This is a talk I gave at an event called Opt412 in Pittsburgh.
Much of it will apply to other towns. But not all, because
as I say in the talk, Pittsburgh has some important advantages over
most would-be startup hubs.)
What would it take to make Pittsburgh into a startup hub, like
Silicon Valley? I understand Pittsburgh pretty well,
because I grew up here, in Monroeville. And I understand Silicon
Valley pretty well because that's where I live now. Could you get
that kind of startup ecosystem going here?
When I agreed to speak here, I didn't think I'd be able to give a
very optimistic talk. I thought I'd be talking about what Pittsburgh
could do to become a startup hub, very much in the subjunctive.
Instead I'm going to talk about what Pittsburgh can do.
What changed my mind was an article I read in, of all places, the New
York Times food section. The title was "Pittsburgh's Youth-Driven
Food Boom." To most people that might not even sound interesting,
let alone something related to startups. But it was electrifying
to me to read that title. I don't think I could pick a more promising
one if I tried. And when I read the article I got even more excited.
It said "people ages 25 to 29 now make up 7.6 percent of all
residents, up from 7 percent about a decade ago." Wow, I thought,
Pittsburgh could be the next Portland. It could become the cool
place all the people in their twenties want to go live.
When I got here a couple days ago, I could feel the difference. I
lived here from 1968 to 1984. I didn't realize it at the time, but
during that whole period the city was in free fall. On top of the
flight to the suburbs that happened everywhere, the steel and nuclear
businesses were both dying. Boy are things different now. It's not
just that downtown seems a lot more prosperous. There is an energy
here that was not here when I was a kid.
When I was a kid, this was a place young people left. Now it's a
place that attracts them.
What does that have to do with startups? Startups are made
of people, and the average age of the people in a typical startup
is right in that 25 to 29 bracket.
I've seen how powerful it is for a city to have those people. Five
years ago they shifted the center of gravity of Silicon Valley from
the peninsula to San Francisco. Google and Facebook are on the
peninsula, but the next generation of big winners are all in SF.
The reason the center of gravity shifted was the talent war, for
programmers especially. Most 25 to 29 year olds want to live in
the city, not down in the boring suburbs. So whether they like it
or not, founders know they have to be in the city. I know multiple
founders who would have preferred to live down in the Valley proper,
but who made themselves move to SF because they knew otherwise
they'd lose the talent war.
So being a magnet for people in their twenties is a very promising
thing to be. It's hard to imagine a place becoming a startup hub
without also being that. When I read that statistic about the
increasing percentage of 25 to 29 year olds, I had exactly the same
feeling of excitement I get when I see a startup's graphs start to
creep upward off the x axis.
Nationally the percentage of 25 to 29 year olds is 6.8%. That means
you're .8% ahead. The population is 306,000, so we're talking about
a surplus of about 2500 people. That's the population of a small
town, and that's just the surplus. So you have a toehold. Now you
just have to expand it.
And though "youth-driven food boom" may sound frivolous, it is
anything but. Restaurants and cafes are a big part of the personality
of a city. Imagine walking down a street in Paris. What are you
walking past? Little restaurants and cafes. Imagine driving through
some depressing random exurb. What are you driving past? Starbucks
and McDonalds and Pizza Hut. As <NAME> said, there is no
there there. You could be anywhere.
These independent restaurants and cafes are not just feeding people.
They're making there be a there here.
So here is my first concrete recommendation for turning Pittsburgh
into the next Silicon Valley: do everything you can to encourage
this youth-driven food boom. What could the city do? Treat the
people starting these little restaurants and cafes as your users,
and go ask them what they want. I can guess at least one thing
they might want: a fast permit process. San Francisco has left you
a huge amount of room to beat them in that department.
I know restaurants aren't the prime mover though. The prime mover,
as the Times article said, is cheap housing. That's a big advantage.
But that phrase "cheap housing" is a bit misleading. There are
plenty of places that are cheaper. What's special about Pittsburgh
is not that it's cheap, but that it's a cheap place you'd actually
want to live.
Part of that is the buildings themselves. I realized a long time
ago, back when I was a poor twenty-something myself, that the best
deals were places that had once been rich, and then became poor.
If a place has always been rich, it's nice but too expensive. If
a place has always been poor, it's cheap but grim. But if a place
was once rich and then got poor, you can find palaces for cheap.
And that's what's bringing people here. When Pittsburgh was rich,
a hundred years ago, the people who lived here built big solid
buildings. Not always in the best taste, but definitely solid. So
here is another piece of advice for becoming a startup hub: don't
destroy the buildings that are bringing people here. When cities
are on the way back up, like Pittsburgh is now, developers race to
tear down the old buildings. Don't let that happen. Focus on
historic preservation. Big real estate development projects are
not what's bringing the twenty-somethings here. They're the opposite
of the new restaurants and cafes; they subtract personality from
the city.
The empirical evidence suggests you cannot be too strict about
historic preservation. The tougher cities are about it, the better
they seem to do.
But the appeal of Pittsburgh is not just the buildings themselves.
It's the neighborhoods they're in. Like San Francisco and New York,
Pittsburgh is fortunate in being a pre-car city. It's not too
spread out. Because those 25 to 29 year olds do not like driving.
They prefer walking, or bicycling, or taking public transport. If
you've been to San Francisco recently you can't help noticing the
huge number of bicyclists. And this is not just a fad that the
twenty-somethings have adopted. In this respect they have discovered
a better way to live. The beards will go, but not the bikes. Cities
where you can get around without driving are just better period.
So I would suggest you do everything you can to capitalize on this.
As with historic preservation, it seems impossible to go too far.
Why not make Pittsburgh the most bicycle and pedestrian friendly
city in the country? See if you can go so far that you make San
Francisco seem backward by comparison. If you do, it's very unlikely
you'll regret it. The city will seem like a paradise to the young
people you want to attract. If they do leave to get jobs elsewhere,
it will be with regret at leaving behind such a place. And what's
the downside? Can you imagine a headline "City ruined by becoming
too bicycle-friendly?" It just doesn't happen.So suppose cool old neighborhoods and cool little restaurants make
this the next Portland. Will that be enough? It will put you in
a way better position than Portland itself, because Pittsburgh has
something Portland lacks: a first-rate research university. CMU
plus little cafes means you have more than hipsters drinking lattes.
It means you have hipsters drinking lattes while talking about
distributed systems. Now you're getting really close to San
Francisco.
In fact you're better off than San Francisco in one way, because
CMU is downtown, but Stanford and Berkeley are out in the suburbs.
What can CMU do to help Pittsburgh become a startup hub? Be an
even better research university. CMU is one of the best universities
in the world, but imagine what things would be like if it were the
very best, and everyone knew it. There are a lot of ambitious
people who must go to the best place, wherever it is. If CMU were it, they would all come here. There would be
kids in Kazakhstan dreaming of one day living in Pittsburgh.
Being that kind of talent magnet is the most important contribution
universities can make toward making their city a startup hub. In
fact it is practically the only contribution they can make.
But wait, shouldn't universities be setting up programs with words
like "innovation" and "entrepreneurship" in their names? No, they
should not. These kinds of things almost always turn out to be
disappointments. They're pursuing the wrong targets. The way to
get innovation is not to aim for innovation but to aim for something
more specific, like better batteries or better 3D printing. And
the way to learn about entrepreneurship is to do it, which you
can't in school.
I know it may disappoint some administrators to hear that the best
thing a university can do to encourage startups is to be a great
university. It's like telling people who want to lose weight that
the way to do it is to eat less.
But if you want to know where startups come from, look at the
empirical evidence. Look at the histories of the most successful
startups, and you'll find they grow organically out of a couple of
founders building something that starts as an interesting side
project. Universities are great at bringing together founders, but
beyond that the best thing they can do is get out of the way. For
example, by not claiming ownership of "intellectual property" that
students and faculty develop, and by having liberal rules about
deferred admission and leaves of absence.
In fact, one of the most effective things a university could do to
encourage startups is an elaborate form of getting out of the way
invented by Harvard. Harvard used to have exams for the fall
semester after Christmas. At the beginning of January they had
something called "Reading Period" when you were supposed to be
studying for exams. And Microsoft and Facebook have something in
common that few people realize: they were both started during Reading
Period. It's the perfect situation for producing the sort of side
projects that turn into startups. The students are all on campus,
but they don't have to do anything because they're supposed to be
studying for exams.
Harvard may have closed this window, because a few years ago they
moved exams before Christmas and shortened reading period from 11
days to 7. But if a university really wanted to help its students
start startups, the empirical evidence, weighted by market cap,
suggests the best thing they can do is literally nothing.
The culture of Pittsburgh is another of its strengths. It seems
like a city has to be socially liberal to be a startup hub,
and it's pretty clear why. A city has to tolerate strangeness to
be a home for startups, because startups are so strange. And you
can't choose to allow just the forms of strangeness that will turn
into big startups, because they're all intermingled. You have to
tolerate all strangeness.
That immediately rules out big chunks of the US. I'm optimistic
it doesn't rule out Pittsburgh. One of the things I remember from
growing up here, though I didn't realize at the time that there was
anything unusual about it, is how well people got along. I'm still
not sure why. Maybe one reason was that everyone felt like an
immigrant. When I was a kid in Monroeville, people didn't call
themselves American. They called themselves Italian or Serbian or
Ukrainian. Just imagine what it must have been like here a hundred
years ago, when people were pouring in from twenty different
countries. Tolerance was the only option.
What I remember about the culture of Pittsburgh is that it was
both tolerant and pragmatic. That's how I'd describe the culture
of Silicon Valley too. And it's not a coincidence, because Pittsburgh
was the Silicon Valley of its time. This was a city where people
built new things. And while the things people build have changed,
the spirit you need to do that kind of work is the same.
So although an influx of latte-swilling hipsters may be annoying
in some ways, I would go out of my way to encourage them. And more
generally to tolerate strangeness, even unto the degree wacko
Californians do. For Pittsburgh that is a conservative choice:
it's a return to the city's roots.
Unfortunately I saved the toughest part for last. There is one more
thing you need to be a startup hub, and Pittsburgh hasn't got it:
investors. Silicon Valley has a big investor community because
it's had 50 years to grow one. New York has a big investor community
because it's full of people who like money a lot and are quick to
notice new ways to get it. But Pittsburgh has neither of these.
And the cheap housing that draws other people here has no effect
on investors.
If an investor community grows up here, it will happen the same way
it did in Silicon Valley: slowly and organically. So I would not
bet on having a big investor community in the short term. But
fortunately there are three trends that make that less necessary
than it used to be. One is that startups are increasingly cheap
to start, so you just don't need as much outside money as you used
to. The second is that thanks to things like Kickstarter, a startup
can get to revenue faster. You can put something on Kickstarter
from anywhere. The third is programs like Y Combinator. A startup
from anywhere in the world can go to YC for 3 months, pick up
funding, and then return home if they want.
My advice is to make Pittsburgh a great place for startups, and
gradually more of them will stick. Some of those will succeed;
some of their founders will become investors; and still more startups
will stick.
This is not a fast path to becoming a startup hub. But it is at
least a path, which is something few other cities have. And it's
not as if you have to make painful sacrifices in the meantime.
Think about what I've suggested you should do. Encourage local
restaurants, save old buildings, take advantage of density, make
CMU the best, promote tolerance. These are the things that make
Pittsburgh good to live in now. All I'm saying is that you should
do even more of them.
And that's an encouraging thought. If Pittsburgh's path to becoming
a startup hub is to be even more itself, then it has a good chance
of succeeding. In fact it probably has the best chance of any city
its size. It will take some effort, and a lot of time, but if any
city can do it, Pittsburgh can.
Thanks to <NAME> and <NAME> for reading
drafts of this, and to <NAME> for organizing Opt412 and inviting
me to speak.
|
|
https://github.com/johanvx/typst-undergradmath | https://raw.githubusercontent.com/johanvx/typst-undergradmath/main/CHANGELOG.md | markdown | Creative Commons Attribution Share Alike 4.0 International | # Changelog
All notable changes to this project will be documented in this file.
## [1.4.0](https://github.com/johanvx/typst-undergradmath/compare/v1.3.0..1.4.0) - 2024-05-19
### Features
- Use `thin` for the space between values and units (#43) - ([ca9cc03](https://github.com/johanvx/typst-undergradmath/commit/ca9cc039dd1cc284ff841498d430526134bfb7e9))
- Update spacing description (#42) - ([74a826a](https://github.com/johanvx/typst-undergradmath/commit/74a826a4d0e4655fc60591115b19d065fc88dfd9))
## [1.3.0](https://github.com/johanvx/typst-undergradmath/compare/v1.2.0..v1.3.0) - 2024-05-14
### Bug Fixes
- Stop underlining the title - ([2272ede](https://github.com/johanvx/typst-undergradmath/commit/2272edeec374255cbd8b80377895573d248e68a7))
### Documentation
- Add instructions for obtaining script letters - ([e98a15a](https://github.com/johanvx/typst-undergradmath/commit/e98a15a1dc9537905b0aa04a571cd5b0de2fecc2))
### Features
- Provide a simpler approach to get `\varnothing` in LaTeX - ([605b7b9](https://github.com/johanvx/typst-undergradmath/commit/605b7b94f8fcf5ff753df3e840da456a11c93b4a))
- Add script letters - ([2334574](https://github.com/johanvx/typst-undergradmath/commit/2334574f5e51cb531ef660e71975b831d49dd01f))
### Miscellaneous
- Remove `@unavailable` figure - ([d314d33](https://github.com/johanvx/typst-undergradmath/commit/d314d33c9145859de72e43037673560fbaabc909))
### Refactor
- Make the footer an acutal footer - ([d3b5eb7](https://github.com/johanvx/typst-undergradmath/commit/d3b5eb7ba515af7e7b107050b1a841fdbde5242f))
## [1.2.0](https://github.com/johanvx/typst-undergradmath/compare/v1.1.0..v1.2.0) - 2023-12-08
### Documentation
- *(README)* Mention some packages dedicated to typesetting unit - ([b1b62a7](https://github.com/johanvx/typst-undergradmath/commit/b1b62a706d94957671f52c3862050f1a363889fd))
- Add the contributing guide section - ([4b59481](https://github.com/johanvx/typst-undergradmath/commit/4b5948171bd7e6e791291cec6f0b60b6d08e08a1))
### Features
- Mention the compiler version in the introduction - ([6ee483d](https://github.com/johanvx/typst-undergradmath/commit/6ee483d010377e0e4e9dcd1fd886e15a6b0d42fa))
- Use newly introduced `sys.version` - ([9c532b4](https://github.com/johanvx/typst-undergradmath/commit/9c532b49252297e0b6dc721f2fe59680b8c75ffe))
- Use newly introduced `wide` - ([54d8723](https://github.com/johanvx/typst-undergradmath/commit/54d87237fec250a942f6fd849403d763298d8f1a))
### Miscellaneous
- Checkout with secrets.PAT - ([e2837c8](https://github.com/johanvx/typst-undergradmath/commit/e2837c8f5a45e130f1309dc16b5cc30280637517))
- Set author to github-actions[bot] - ([08301b2](https://github.com/johanvx/typst-undergradmath/commit/08301b27e53aa968e9deb0db6d5a4c3614fa674a))
- Set author and email explicitly - ([cbcbdb4](https://github.com/johanvx/typst-undergradmath/commit/cbcbdb40abf5582672c264dbd40dab7ef980f73b))
- Try to pass the protection rule - ([8f1a6f3](https://github.com/johanvx/typst-undergradmath/commit/8f1a6f39fd15014ebfef1eb7276d6eca654ccfe8))
- Fix the repository URL in cliff.toml - ([4eea288](https://github.com/johanvx/typst-undergradmath/commit/4eea288d139f8f60a4f9b4df28ef15424d49393b))
- Add cliff.toml - ([49352ce](https://github.com/johanvx/typst-undergradmath/commit/49352cec0ac2fc472bcb36168cbc5ba67838e79a))
- Support deleting tags and associated releases - ([0604a5a](https://github.com/johanvx/typst-undergradmath/commit/0604a5a39043d736ab1d9a9446cb6b498c8ea676))
- Support changelog and manual tagging - ([3726d22](https://github.com/johanvx/typst-undergradmath/commit/3726d2250082c75540ac9aaccbdb3157d7a81ae5))
- Remove unused environment variables - ([c024293](https://github.com/johanvx/typst-undergradmath/commit/c024293251fca478588f038582d253aec162c4d3))
## [1.1.0](https://github.com/johanvx/typst-undergradmath/compare/v1.0.0..v1.1.0) - 2023-09-16
### Features
- Add custom maths operator section - ([992d543](https://github.com/johanvx/typst-undergradmath/commit/992d5437cc0626d81ec77f7bbc50b1cdfd85198b))
## [1.0.0](https://github.com/johanvx/typst-undergradmath/compare/v0.1.1..v1.0.0) - 2023-08-08
### Features
- [**breaking**] Use renamed symbols - ([47a06f4](https://github.com/johanvx/typst-undergradmath/commit/47a06f41fcfaa3a8d822b2f8f231504cc4347637))
## [0.1.1](https://github.com/johanvx/typst-undergradmath/compare/v0.1.0..v0.1.1) - 2023-07-20
### Bug Fixes
- Correct a typo in the link to the doc of `array` - ([57e3564](https://github.com/johanvx/typst-undergradmath/commit/57e3564c7799a031933e5031c3e2b7271091bbd8))
## [0.1.0](https://github.com/johanvx/typst-undergradmath/compare/v0.0.2..v0.1.0) - 2023-06-15
### Bug Fixes
- Add missing raw text `kappa` - ([3dc79c1](https://github.com/johanvx/typst-undergradmath/commit/3dc79c15d6184ed46651c40cd9ff23f8270f7ca6))
### Features
- Adjust the alignment and gutters of math-code listings - ([7832fd3](https://github.com/johanvx/typst-undergradmath/commit/7832fd3a35d2d4f88a974eb441235ba71d3aa62e))
- Use a shorter name for Ring Operator `\u{2218}` - ([7ae19a0](https://github.com/johanvx/typst-undergradmath/commit/7ae19a056469ffaf16e362390daa69dd08167e3b))
- Remove the noidea annotation - ([a04b098](https://github.com/johanvx/typst-undergradmath/commit/a04b098aebda950b52ee77ce58609fa25790ab5a))
- Use dotless i and j - ([399419e](https://github.com/johanvx/typst-undergradmath/commit/399419ecd5dfc6c1cc42a915be20cc4905421ce8))
- Use `datetime` - ([e4768ea](https://github.com/johanvx/typst-undergradmath/commit/e4768ea3db27403755c397fe0f693f9a284213c5))
- Use `sigma.alt` - ([44c80e2](https://github.com/johanvx/typst-undergradmath/commit/44c80e2151dfb3c3911c5c6af21bb6e135833c53))
### Miscellaneous
- Add gitignore file - ([1540948](https://github.com/johanvx/typst-undergradmath/commit/1540948f26240f5ad649f5d2eac2e7b362ec321d))
## [0.0.2](https://github.com/johanvx/typst-undergradmath/compare/v0.0.1..v0.0.2) - 2023-05-21
### Documentation
- *(README)* Add new badges and clean up a bit - ([4224021](https://github.com/johanvx/typst-undergradmath/commit/42240217292c4025fcda3043ed8de4b3d622324b))
- *(README)* Update the link to the lastest release - ([2f6816e](https://github.com/johanvx/typst-undergradmath/commit/2f6816e3a3caf875234ad1ed8786114fbf5f6b1e))
### Miscellaneous
- Update date - ([f4e7768](https://github.com/johanvx/typst-undergradmath/commit/f4e77683777f4f25e07fbbfa84b7c97ca1aa6959))
### Refactor
- Remove unnecessary capitalization - ([c5b0afa](https://github.com/johanvx/typst-undergradmath/commit/c5b0afa0117c879d7c90d915ebfe863f88c2b822))
- Simplify some symbols - ([bf58fae](https://github.com/johanvx/typst-undergradmath/commit/bf58faeb1c6e167b3c0f5f64506d828923820a12))
## [0.0.1] - 2023-04-18
### Bug Fixes
- Add a hashtag for the h function example - ([0341034](https://github.com/johanvx/typst-undergradmath/commit/03410342f4a33efd51791deb095113100a5a71fc))
### Documentation
- *(README)* Update comments on `\varnothing` - ([7c68c65](https://github.com/johanvx/typst-undergradmath/commit/7c68c65cf685c43c44be6851c1d95c6520a5be52))
- *(README)* Update comments on doteq - ([65faf7c](https://github.com/johanvx/typst-undergradmath/commit/65faf7ceaa479ce0f1981008a7f51284eafdd58b))
- *(README)* Add a comment on arrays - ([537b126](https://github.com/johanvx/typst-undergradmath/commit/537b1264c1f526617ce39818cde154a19a98697a))
- Add a link to Typst - ([5c148c4](https://github.com/johanvx/typst-undergradmath/commit/5c148c4234ec640b6215d68a6e0f177e3e931bec))
- Add limitations section - ([b026d4b](https://github.com/johanvx/typst-undergradmath/commit/b026d4be113373a091321e0beb7cc63ea034f17c))
- Add README - ([3416ef3](https://github.com/johanvx/typst-undergradmath/commit/3416ef33d226bab3a1b02b642244aef08d21d1e4))
### Features
- *(arrays, matrices)* [**breaking**] Add array example - ([ef7c246](https://github.com/johanvx/typst-undergradmath/commit/ef7c246bcba1c16c730b14e3d99f4c94ca7afc3d))
- Update comment to `\varnothing` - ([ee80553](https://github.com/johanvx/typst-undergradmath/commit/ee8055306eebeb072f32a6ca3129277e17dd8798))
- Update code example for `\varnothing` - ([878ea8d](https://github.com/johanvx/typst-undergradmath/commit/878ea8dd2c75412b047a38c7020f57d0931ec5ed))
- Add an alternative for not equal - ([430b79b](https://github.com/johanvx/typst-undergradmath/commit/430b79bcaf78709873e7b21aedeb2ce86d71c541))
- Add tricky doteq symbol - ([7001164](https://github.com/johanvx/typst-undergradmath/commit/7001164e4fdc8f7f9b4fe17fa4389e30e7f98ed3))
- Update notes on determinant - ([49fab7c](https://github.com/johanvx/typst-undergradmath/commit/49fab7cafbbd0b6919f13d137b609b42a76f45a7))
- Add some comment on auto-scaling of fences - ([1146cd3](https://github.com/johanvx/typst-undergradmath/commit/1146cd3f6685233c88709cc5dcc8038970665528))
- Add a value-unit example - ([d4084ec](https://github.com/johanvx/typst-undergradmath/commit/d4084ec042fdd1b1ae6c1f2a9d04d799b36bf8c6))
- Add widehat example - ([317c168](https://github.com/johanvx/typst-undergradmath/commit/317c16823b6ce6e097b97d5f3fa28e3648b5dfe8))
- Add main contents - ([06e8f21](https://github.com/johanvx/typst-undergradmath/commit/06e8f217c78d7f9cf8a56f572cf8373dbf2efdee))
### Miscellaneous
- Support auto-release after pushing new tags - ([764a9e1](https://github.com/johanvx/typst-undergradmath/commit/764a9e1dcebe48af5e05aa3aad095c0fd8c19d72))
- Support tagging - ([16a59c9](https://github.com/johanvx/typst-undergradmath/commit/16a59c925018ad4b0af0f503561169efd214638c))
- Upload PDF as artifact on push to non main branch - ([d72f758](https://github.com/johanvx/typst-undergradmath/commit/d72f758ddfaa5d187cda1d18c41e4ea95018068a))
- Remove the PDF file - ([9f18cc0](https://github.com/johanvx/typst-undergradmath/commit/9f18cc03ac70942a1c7c62d28efc6f4d16ccae39))
- Update date - ([51f56a8](https://github.com/johanvx/typst-undergradmath/commit/51f56a8efed7b441b23d255cb0d809197df7652c))
- Add LICENSE - ([6ba2286](https://github.com/johanvx/typst-undergradmath/commit/6ba2286c786f9bb7c4a05da176dae77c0f20e73b))
<!-- generated by git-cliff -->
|
https://github.com/ren-ben/typst-notes | https://raw.githubusercontent.com/ren-ben/typst-notes/master/ds/data_manipulation_and_visualization.typ | typst | #import "@preview/sourcerer:0.2.1": code
#align(center, text(24pt)[
*Data Manipulation and Visualization in R*
])
#align(center)[
<NAME> \
Technologisches Gewerbemuseum \
#link("mailto:<EMAIL>")
]
#set heading(numbering: "1.")
#show par: set block(spacing: 0.65em)
#set par(
first-line-indent: 1em,
justify: true,
)
#pagebreak()
#outline()
#pagebreak()
= Introduction
We continue from the "Introduction to the R-Programming Language" and look at ways of optimizing the syntax to perform already familiar data operations and discover ways of visualizing the data.
= Data Manipulation
This chapter covers ways to simplify the already discussed methods of manipulating data, using tools like *Dplyr* (manipulation) and *Tidyr* (cleaning).
== Dplyr
First, an installation using `install.packages('dplyr')` and an activation using `library(dplyr)` are required.
To allow for an easy way of showcasing the power of dplyr, the `nycflights13` package needs to be installed as well using the same commands.
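Once both are loaded, the `flights` data frame from `nycflights13` is available; it is the data set used in the examples below:
#code(
lang: "",
```r
library(dplyr)
library(nycflights13)
head(flights) # the flight data used in the examples below
```
)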
=== filter() & slice()
Allows for subset row selection. It's easier than the built-in subset function we've been using. To filter, we just pass in the data as the first argument and then any number of arguments that specify each column condition:
#code(
lang: "",
```r
filter(flights, month==11, day==3, carrier=='AA')
```
)
The slice function allows for positional selection:
#code(
lang: "",
```r
slice(flights, 1:10) # outputs the first 10 rows
```
)
=== arrange()
Allows for ordering and has a very similar structure to the `filter()` function.
#code(
lang: "",
```r
arrange(flights, year, month, desc(day)) # first order by year, then month and then by day but in descending order.
```
)
=== select() & rename()
Selects a defined set of columns:
#code(
lang: "",
```r
select(flights, carrier, day) # selects the carrier and the day columns.
```
)
And `rename()` quickly renames the columns (newname = oldname)
#code(
lang: "",
```r
rename(flights, newdayname = day)
```
)
=== distinct()
Selects distinct row values. Powerful to use in combination with other functions such as `select()`
#code(
lang: "",
```r
distinct(select(flights, carrier)) # selects the distinct carriers.
```
)
=== mutate() & transmutate()
Creates new columns as functions of existing columns
#code(
lang: "",
```r
mutate(flights, new_col = arr_delay-dep_delay)
```
)
If you only want the newly created columns, and not the entire data frame, use `transmute()`
#code(
lang: "",
```r
transmute(flights, new_col = arr_delay-dep_delay)
```
)
=== summarise()
Summarizes a column using an aggregate function
#code(
lang: "",
```r
summarise(flights, avg_air_time=mean(air_time, na.rm = TRUE ))
```
)
=== sample_n() & sample_frac()
`sample_n()` returns a random sample of n rows from a data frame
#code(
lang: "",
```r
sample_n(flights, 10) # 10 random sample rows
```
)
`sample_frac()` returns a random fraction of the data. The range goes from 0 (0%) to 1 (100%)
#code(
lang: "",
```r
sample_frac(flights, 0.3) # returns 30% of data.
```
)
== The Pipe Operator
It allows better readability. An alternative to the pipe operator is nesting, which is barely readable beyond a certain point. You can also try to create multiple variables and assign them one by one, but then you're wasting memory space. The pipe operator allows you to chain an output of a function to the input of another function in a readable and clear way.
#code(
lang: "",
```r
# nesting
result <- arrange(sample_n(filter(df, mpg>20),size=5),desc(mpg))
# multiple assignments
a <- filter(df, mpg>20)
b <- sample_n(a, size=5)
result <- arrange(b, desc(mpg))
# pipe operator
result <- df %>% filter(mpg>20) %>% sample_n(size=5) %>% arrange(desc(mpg))
```
)
== Tidyr
First perform the installation & setup using `install.packages('tidyr')` and `library(tidyr)`. There's also a complementary package that needs to be installed called `data.table` using the same commands. This package is very similar to the built-in data frames, although it increases computation speed substantially.
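As a quick sketch of what that looks like (the `flights.csv` file path is a made-up example):
#code(
lang: "",
```r
library(data.table)
dt <- fread("flights.csv") # fread() is a much faster replacement for read.csv()
class(dt) # a data.table is also a data.frame
```
)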
=== gather() & spread()
A prerequisite to the `gather()` and `spread()` functions is the knowledge of the wide and long formats. In the long format the values of the identifying first column repeat (one row per observation), while in the wide format each value appears only once in the first column and the observations are spread across separate columns.
The gather function converts a table into a long format and the spread function converts a table into a wide format. That's it.
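As a small illustration with made-up quarterly data:
#code(
lang: "",
```r
# wide format: one row per product, one column per quarter
wide <- data.frame(product = c("A", "B"), Qtr1 = c(10, 20), Qtr2 = c(12, 18))
# long format: quarters repeat in a key column, values sit in a value column
long <- gather(wide, quarter, sales, Qtr1:Qtr2)
```
)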
`gather()` collapses a slice of columns into key-value pairs (the key is the column name and the value is the row data value):
#code(
lang: "",
```r
gather(df, key, value, Qtr1:Qtr4)
```
)
`spread()` widens the key-value pairs passed into it back out into individual columns. The unique keys become columns and the values become the row values.
#code(
lang: "",
```r
spread(df, key, value)
```
)
=== separate()
Separates a single column into multiple columns. As an example, suppose a column has data like "a-x" and we need to separate "a" and "x" into their own respective columns:
#code(
lang: "",
```r
separate(data=df, col=col.name, into=c("abc", "xyz"), sep='-')
```
)
=== unite()
Pastes multiple columns into one.
#code(
lang: "",
```r
unite(separated.df, new.joined.col, abc, xyz)
unite(separated.df, new.joined.col, abc, xyz, sep = "---")
```
)
= Data Visualization
This chapter focuses on visualizing data using the *ggplot2* library.
== Overview of ggplot2
The library is built on layers, namely the "*Data*" (the raw data), the "*Aesthetics*" (specifying the columns and features you want to display) and the "*Geometries*" (the type of plot) as the main layers.
Then there are other layers, such as "*Facets*" (multiple plots), "*Statistics*", "*Coordinates*" and "*Themes*"
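A minimal sketch of the three main layers in action, using the built-in `mtcars` data:
#code(
lang: "",
```r
library(ggplot2)
pl <- ggplot(mtcars, aes(x=wt, y=mpg)) # DATA & AESTHETICS
pl <- pl + geom_point() # GEOMETRY
print(pl)
```
)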
#show link: underline
There exists a cheat sheet for `ggplot2` that explains the major aspects in around 2 pages. It can be found #link("https://www.maths.usyd.edu.au/u/UG/SM/STAT3022/r/current/Misc/data-visualization-2.1.pdf")[here].
== Histograms
This section requires both the 'ggplot2' and 'ggplot2movies' libraries to be installed and activated.
#code(
lang: "",
```r
# DATA & AESTHETICS
pl <- ggplot(movies, aes(x=rating)) # rating is a column in the movies data frame
# GEOMETRY
pl2 <- pl + geom_histogram(binwidth = 0.2, color='red', fill='pink', alpha=0.8)
print(pl2) # this actually displays the plot
pl3 <- pl2 + xlab('Movie Rating') + ylab("Count") # x & y labels
print(pl3 + ggtitle("MY TITLE"))
# CUSTOM COLOR GRADIENT
pl2 <- pl + geom_histogram(binwidth = 0.2, aes(fill=..count..)) # the higher the count the more blue it is.
```
)
== Scatterplots
#code(
lang: "",
```r
df <- mtcars
# DATA & AESTHETICS
pl <- ggplot(df, aes(x=wt,y=mpg))
# GEOMETRY
print(pl + geom_point(aes(size=hp))) # size is based on the 'hp' column
# FACTOR
pl + geom_point(aes(size=factor(cyl))) # factor labels the 'cyl' column as categorical (not continuous) because it can only have either 4, 6 or 8 cylinders, not 5 or 7.
# SHAPE
pl + geom_point(aes(shape=factor(cyl))) # this assigns a different shape for different cylinder sizes, like a square or a triangle.
# CUSTOM COLOR GRADIENT
pl <- pl + geom_point(aes(color=hp), size=5)
pl + scale_color_gradient(low='blue', high='red')
# ATTENTION: You can only use column values to define shapes, colors, etc. inside of the aes() block.
```
)
== Barplots
#code(
lang: "",
```r
df <- mpg
pl <- ggplot(df, aes(x=class))
print(pl + geom_bar(aes(fill=drv), position="dodge")) # The custom fill automatically creates a stacked bar plot. The position argument set to dodge makes the stacked bars appear next to eachother. There's also "fill" that makes it a normalized area graph.
```
)
== Boxplots
#code(
lang: "",
```r
df <- mtcars
pl <- ggplot(df, aes(x=factor(cyl), y=mpg))
print(pl + geom_boxplot(aes(fill=factor(cyl))) + coord_flip()) # the coord_flip() flips the coordinate axes. It essentially rotates the plot -90°.
```
)
== 2 Variable Plotting
#code(
lang: "",
```r
pl <- ggplot(movies, aes(x=year, y=rating))
print(pl + geom_bin2d() + scale_fill_gradient(high="red", low="green")) # creates a 2d-bin-chart. The bins change color based on their count. (Essentially a heatmap)
```
)
There's also the hexbin plot, which creates a heatmap where each bin is a hexagon. For that, install the `hexbin` package.
#code(
lang: "",
```r
# HEXBIN
pl <- ggplot(movies, aes(x=year, y=rating))
pl2 <- pl + geom_hex()
print(pl2)
# DENSITY PLOT
pl2 <- pl + geom_density2d()
print(pl2)
```
)
== Coordinates Faceting
`ggplot2` allows for the changing of coordinate systems and the adjustment of their parameters like x-lims and y-lims:
#code(
lang: "",
```r
pl <- ggplot(mpg, aes(x=displ, y=hwy)) + geom_point()
# STANDARD COORDINATES
pl2 <- pl + coord_cartesian(xlim = c(1,4), ylim = c(15,30))
# FIXED RATIO COORDINATES
pl2 <- pl + coord_fixed(ratio = 1/3) # Default is 1:1.
# Consult the cheat sheet for more coordinate types.
print(pl2)
```
)
Faceting allows for the placement of multiple plots next to each other.
#code(
lang: "",
```r
pl <- ggplot(mpg, aes(x=displ, y=hwy)) + geom_point()
print(pl + facet_grid(. ~ cyl)) # we separate the mpg plot (on the x-axis) into 3 subplots, each separated by the cylinder column. So there's one plot for cars with 4 cylinders, one for cars with 6 cylinders, etc.
print(pl + facet_grid(drv ~ .)) # this separates the plot along the y-axis.
# The general syntax is "what you want to facet by on the y-axis, tilde symbol, and then what you want to facet by on the x-axis." A dot means "everything else".
# So the x and y separations can be mixed:
print(pl + facet_grid(drv ~ cyl)) # this separates the plot along the y-axis.
```
)
== Theming
You can set a theme by either setting it globally using the `theme_set()` function or by just adding it when printing:
#code(
lang: "",
```r
theme_set(theme_minimal()) # Setting the theme globally
print(pl + theme_dark()) # Assigning the theme individually
```
)
To get even more themes, install the `ggthemes` library.
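For instance (assuming `pl` is an existing ggplot object, as in the snippets above):
#code(
lang: "",
```r
library(ggthemes)
print(pl + theme_economist()) # one of the extra themes provided by ggthemes
```
)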
#pagebreak()
== Data Visualization Assignment Solution
#code(
lang: "",
```r
df <- fread("/home/ren/coding/r/R-Course-HTML-Notes/R-Course-HTML-Notes/R-for-Data-Science-and-Machine-Learning/Training Exercises/Capstone and Data Viz Projects/Data Visualization Project/Economist_Assignment_Data.csv", drop=1)
head(df)
pl <- ggplot(df, aes(x=CPI, y=HDI)) + geom_point(shape=1, size = 3, aes(color=Region))
pl2 <- pl + geom_smooth(aes(group=1), method = lm, formula = y ~ log(x), se = FALSE, color="red")
pointsToLabel <- c("Russia", "Venezuela", "Iraq", "Myanmar", "Sudan",
"Afghanistan", "Congo", "Greece", "Argentina", "Brazil",
"India", "Italy", "China", "South Africa", "Spane",
"Botswana", "Cape Verde", "Bhutan", "Rwanda", "France",
"United States", "Germany", "Britain", "Barbados", "Norway", "Japan",
"New Zealand", "Singapore")
pl3 <- pl2 + geom_text(aes(label = Country), color = "gray20",
data = subset(df, Country %in% pointsToLabel),check_overlap = TRUE)
print(pl3 + theme_bw() + scale_x_continuous(name = "Corruption Perception Index (CPI)", limits = c(1, 10), breaks = 1:10 ) + scale_y_continuous(name = "Human Development Index, 2011 (1=Best)", limits = c(0, 1), breaks = seq(0, 1, 0.1)) + ggtitle("Corruption and Human development"))
```
)
== Interactive Visualization with Plotly
Plotly allows for the interactive visualization of data in multiple languages like R, Python or Matlab. It's entirely open source, free and self-hosted.
A recent addition to their package is the ability to convert `ggplot2` plots into interactive plotly plots directly inside of the R environment.
To get started, install `plotly` and activate it.
#code(
lang: "",
```r
pl <- ggplot(mtcars, aes(mpg, wt)) + geom_point()
gpl <- ggplotly(pl) # now you can zoom in and get more information within the plot.
```
)
Their ggplot2 plotly documentation has a lot of great tutorials on how to create various types of plots that can be accessed #link("https://plot.ly/ggplot2/")[here].
#pagebreak()
== Moneyball Assignment Solution
#code(
lang: "",
```r
batting <- read.csv('/home/ren/coding/r/R-Course-HTML-Notes/R-Course-HTML-Notes/R-for-Data-Science-and-Machine-Learning/Training Exercises/Capstone and Data Viz Projects/Capstone Project/Batting.csv')
head(batting)
str(batting)
batting$BA <- batting$H / batting$AB
tail(batting$BA, 5)
batting$OBP <- (batting$H + batting$BB + batting$HBP) / (batting$AB + batting$BB + batting$HBP + batting$SF)
batting$SLG <- ((batting$H - batting$X2B - batting$X3B - batting$HR) + (2 * batting$X2B) + (3 * batting$X3B) + (4 * batting$HR)) / batting$AB
str(batting)
sal <- read.csv("/home/ren/coding/r/R-Course-HTML-Notes/R-Course-HTML-Notes/R-for-Data-Science-and-Machine-Learning/Training Exercises/Capstone and Data Viz Projects/Capstone Project/Salaries.csv")
summary(batting)
batting <- subset(batting, yearID >= 1985)
summary(batting)
merged <- merge(batting, sal, by = c("playerID", "yearID"))
summary(merged)
# option 1
lost_players <- filter(merged, playerID == "giambja01" | playerID == "damonjo01" | playerID == "saenzol01")
# option 2
lost_players <- subset(merged, playerID %in% c("giambja01", "damonjo01", "saenzol01"))
distinct(lost_players, playerID)
count(lost_players, playerID)
lost_players <- subset(lost_players, yearID == 2001)
lost_players <- select(lost_players, H, X2B, X3B, HR, OBP, SLG, BA, AB)
print(lost_players)
sum(lost_players$AB) # 1469
mean(lost_players$OBP) # 0.363
replacement_players <- filter(merged, yearID == 2001, salary <= 5000000)
replacement_players
pl <- ggplot(replacement_players, aes(x = AB, y = OBP, name = playerID, salary = salary))
pl <- pl + geom_point()
ply <- ggplotly(pl, tooltip = c("x", "y", "name", "salary"))
ply
# I'd choose "berkmla01", "heltoto01" and "gonzalu01" as replacements for the lost players
```
)
|
|
https://github.com/polarkac/MTG-Stories | https://raw.githubusercontent.com/polarkac/MTG-Stories/master/stories/004%20-%20Dragon's%20Maze/003_Barrin's%20Tall%20Tale.typ | typst | #import "@local/mtgstory:0.2.0": conf
#show: doc => conf(
"Barrin's Tall Tale",
set_name: "Dragon's Maze",
story_date: datetime(day: 17, month: 04, year: 2013),
author: "<NAME>",
doc
)
#emph[There he was again.]
<NAME> wondered why that strange man in the hooded cloak had taken such an interest in the cornerstone with the odd etching that marked the edge of his shop and the intersecting alley off of Tin Street.
Barrin remembered seeing the strange symbols carved into that stone as a young child, when he accompanied his grandfather to the market to sell their floor coverings, brasswork, and craft-goods. The Greviks had been market people for generations and had run the same spot on Tin Street for as long as the family could remember.
The cornerstone had always been there, but it was often covered up with rolls of carpeting or stacks of boxes containing trinkets and sundries for sale. Occasionally, the odd passerby remarked on it or an outsider wondered at its significance or meaning, but no one really knew. Barrin's mother said it was commemorating the opening of Tin Street—which would have made the stone frightfully old—and that explanation suited Barrin just fine. Barrin was not interested in the ancient history of Ravnica or the weird, magical rituals of the guilds; he just wanted to make a tidy profit, feel superior to others, eat well, and ignore the general plight of the less fortunate.
That's why there was something about this stranger that chafed his well-groomed hide and gave him the unsettling feeling—this hooded man reeked of a certain kind of trouble that threatened the structured bliss of his shop.
"Can I help you?" Barrin felt a slight bit of irritation in his voice and decided he liked it. He was, after all, irritated.
As soon as the man looked up at Barrin, he felt less irritated and more uneasy.
It was Barrin's experience that most people in the market rarely made eye contact and talked behind a wall of feigned decorum—like Barrin himself did—but this cloaked stranger looked directly at him with eyes that held him like a vise. Barrin did not have the vocabulary to describe what he felt, so the streams of emotions and thoughts were relegated to something that landed between confusion and fear.
"Yes," the stranger said. "That stone. Do you know anything about the markings on it?"
Barrin intended to tell the young man to go take a long walk down a short street when he found himself talking all about the history of the stone, not leaving out a single detail of the family speculation about its origins and possible meanings. Barrin even told the man about his great uncle Estovar's theory that it was put there by the Azorius soon after the signing of the first Guildpact. Of course, Uncle Estovar was as crazy as an Izzet magister, but that didn't stop Barrin from sharing that bit of family lore. He felt compelled to leave nothing out and after a good while, Barrin had disgorged the entire known history of the stone to the stranger, who listened with calm intent.
"Thank you so much," the stranger said, a hint of a smile playing across his face. He left.
After a timeless moment, Barrin's wife put her face in front of his and said, "Hey! I'm talking to you! What was that all about?"
"You're welcome?" Barrin said.
#v(0.35em)
#line(length: 100%, stroke: rgb(90%, 90%, 90%))
#v(0.35em)
Barrin was no Boros wojek, but after that odd and somewhat unnerving encounter, he wanted to know more about the young man in the blue cloak, and he was determined to get some answers. His wife, Nila, noticed that Barrin had "that look" in his eyes and she knew something was stuck in his craw. Barrin was a determined man. His whole family was a determined lot, but Barrin was especially hardheaded. His wife called them "the dromads" when referring to her in-laws, and she lumped Barrin in with them as well. Stubborn and ornery.
#figure(image("003_Barrin's Tall Tale/01.jpg", width: 100%), caption: [Dromad Purebred | Art by Carl Critchlow], supplement: none, numbering: none)
"I'm going to find out what he's up to, blasted snoop. I can't believe I told him everything about that stone. I hadn't even had a drop of bumbat all day and I was blathering on like Old Scrumpy at the tavern." Barrin was in a fine mood. His illusion of control over all his affairs had been shaken and that brought out the fight in him.
"What's got into you? He didn't steal anything." His wife put her hands on his shoulders as he sat at their kitchen table and made his lunch for the day at the shop.
"He's a Dimir, Nila. I just know it." Barrin sliced an onion with extra irritation. "I'm going to follow him to his rat's nest and find out his game." A surge of self-righteous condemnation flooded over him to help justify his witch-hunt. If someone was "a Dimir," Barrin had no problem in violating that person's basic right to privacy.
"He's not a Dimir," Nila said, and fetched a basket for Barrin's lunch. "That guild is on the up-and-up. All those rumors have got you and the rest of Tin Street in a hoopla. 'Blame the Dimir!' That's what they all say about every little thing that goes wrong around here."
"We'll see if he's on the 'up-and-up,' my little sugardove," Barrin muttered, lost in thoughts of shadowy thieves and cutthroats. "We'll see."
#v(0.35em)
#line(length: 100%, stroke: rgb(90%, 90%, 90%))
#v(0.35em)
Barrin followed the hooded figure at a distance. He had a wojek friend and had learned a thing or two about tailing fugitives through Tin Street over their years of talking about ruffians, pickpockets, and the decline of social decency.
The hooded figure ducked into a dark alley.
#emph[Just like a Dimir miscreant], Barrin thought.
He reached into his tunic and felt the handle of an old Boros pendrek, another gift from his wojek friend. It still had a few mana charges in it and could stun a loxodon—let alone a skulking Dimir thief. He felt a rush of adrenaline surge through him. Finally, he was about to catch one of these louts and bring him to justice.
"You'll trouble Tin Street no more," Barrin muttered as he ducked into the alley.
It took a little time before Barrin's eyes grew accustomed to the dark of the alley. Scrawny cats chewed on fish heads. As Barrin moved deeper along the wet cobblestones, a Rakdos goblin jumped out from behind a pile of garbage and hissed at him with teeth filed to points.
Barrin brandished his #emph[pendrek] and the goblin scampered off into the dark, cackling obscenities.
"Rakdos filth," Barrin said under his breath. His heart thumped within his chest.
The alley wound and twisted and Barrin thought about going back when he heard a buzzing sound followed by a flash of blue light coming from a basement window. He crept down the slick stairs that led to a stout wooden door. Carefully, he tested the latch. It wasn't locked. He slowly opened it to reveal the hooded stranger staring at a ghostlike image of the Tenth District, spread out before him like an exquisite model. A bright beam of red energy traced a series of angular lines that led along Tin Street, up several causeways, and eventually leading up the tall tower of—
"The old Azorius legislative archives," Barrin heard himself whisper.
"Exactly," the man in the cloak said, not turning to face Barrin.
#figure(image("003_Barrin's Tall Tale/02.jpg", width: 100%), caption: [Art by <NAME>], supplement: none, numbering: none)
Barrin had completely forgotten his #emph[pendrek] and his bluster. He could only look at the crackling image of the Tenth District and this mage who pondered it. The other end of the bright red line originated at—
"My shop." Barrin pointed at the glowing map.
The hooded man turned his head and looked at him. "The cornerstone by your shop is actually an Azorius waypoint, an ancient series of clues left by the Azorius guildmages. They are points along the Implicit Maze, and I have to find all of them."
"Implicit Maze?" Barrin said. "What is all this... this map, waypoints? Why haven't I heard of this?" Barrin didn't like to not be in "the know."
"It's complicated, Barrin." The man said his name as if they were old friends. "I'm not sure what the maze is myself. I would have liked more time to study and gather knowledge on its exact function and power, but the Dimir are forcing my hand to act fast."
"Ah! Dimir! I knew it!" Barrin pulled out his #emph[pendrek] . Finally, something he could grab on to.
"Yes, but they are just the catalyst." The mage seemed unfazed by Barrin and his wojek weapon. "The other guilds are going to destroy one another unless I can run the maze in the right sequence." The blue-robed mage got up and put his palms together. His glowing map blinked out like a glow-bug.
"You aren't a Dimir?" Barrin said.
"No." The mage smiled at him as he gathered up some scattered papers. "Sorry."
"Who are you, then?" Barrin pointed his pendrek, but some instinct told him it was like pointing a stick at a Boros blaze commando.
#figure(image("003_Barrin's Tall Tale/03.jpg", width: 100%), caption: [Art by <NAME>], supplement: none, numbering: none)
"My name is <NAME>," the mage said from under the hood of his cloak. Then, from within the shadow of the cowl, the mage's eyes glowed a baleful blue, lighting his grin. "And you are not going to remember a single moment of this, my foolish friend."
#v(0.35em)
#line(length: 100%, stroke: rgb(90%, 90%, 90%))
#v(0.35em)
Barrin woke the next day.
"You must have had a good chat with Old Scrumpy," Nila said. "You came in like a Golgari shambler and headed straight for bed."
"Was I out with Scrump?" Barrin couldn't remember a thing. "Guess I must've been. But one thing's for sure, I'm never drinking bumbat again."
#figure(image("003_Barrin's Tall Tale/04.jpg", height: 25%), caption: [], supplement: none, numbering: none)
|
|
https://github.com/luiswirth/bsc-thesis | https://raw.githubusercontent.com/luiswirth/bsc-thesis/main/src/main.typ | typst | #import "setup.typ": *
#show: general-style
#preface-style[
#include "title.typ"
#include "abstract.typ"
#include "toc.typ"
]
#body-style[
#include "introduction.typ"
#include "theory.typ"
#include "implementation.typ"
]
#appendix-style[
= Rust Source Code
= Typst Source Code
]
#postface-style[
#bibliography("bibliography.yaml")
= Glossary
= Declaration of originality
]
|
|
https://github.com/max-niederman/CS250 | https://raw.githubusercontent.com/max-niederman/CS250/main/lib.typ | typst | #let implies = sym.arrow.r.double
#let iff = sym.arrow.r.l.double
#let common(title: "", subtitle: "", body) = {
set document(
author: "<NAME>",
title: title,
)
set page(
paper: "us-letter",
numbering: (..nums) => "Niederman " + numbering("1/1", ..nums),
number-align: center
)
set par(linebreaks: "optimized")
// title
if (title != "") {
block(text(weight: 700, 1.75em, title))
}
// subtitle
if (subtitle != "") {
block(text(weight: 700, 1.25em, subtitle))
}
// author
block(strong("<NAME>"), below: 2em)
body
}
#let homework(title: "", body) = {
show heading: set block(below: 1em)
common(title: title, body)
}
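// Hypothetical usage sketch (the file name `lib.typ` is an assumption, not
// part of the original file):
//   #import "lib.typ": homework, implies
//   #show: homework.with(title: "Problem Set 1")
//   $ P implies Q $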
#let lecture-notes(date: "", topic: "", body) = {
show heading: set block(below: 1em)
show heading: set text(size: 0.85em)
common(
title: "CS 250 Lecture Notes",
subtitle: date + ", on " + topic,
body
)
} |
|
https://github.com/Skimmeroni/Appunti | https://raw.githubusercontent.com/Skimmeroni/Appunti/main/Metodi%20Algebrici/Codici/Blocchi.typ | typst | Creative Commons Zero v1.0 Universal | #import "../Metodi_defs.typ": *
Sia $A_(q) = {x_(1), x_(2), ..., x_(q)}$ un insieme finito di cardinalità
$q$, con $q gt.eq 2$. Prende il nome di *codice a blocchi* un qualunque
sottoinsieme non vuoto $C$ di $A_(q)^(n)$. In particolare:
- $A_(q)$ viene detto _alfabeto_ di $C$;
- $A_(q)^(n)$ viene detto _spazio delle parole_ di lunghezza $n$
(nell'alfabeto $A_(q)$);
- Una *parola* del codice $C$ è una qualsiasi $n$-upla ordinata di
simboli dell'alfabeto $A_(q)$;
- $n$ viene anche chiamata _lunghezza_ del codice;
- La cardinalità di $C$ viene chiamata _grandezza_ del codice.
La notazione matematica per le $n$-uple ordinate sarebbe $(x_(1), x_(2),
..., x_(n))$, ma per semplicità verranno omesse sia le parentesi, sia le
virgole.
#example[
Si consideri l'alfabeto $A_(q) = {0, 1}$. Il codice $C$ sull'alfabeto
	$A_(q)$ riportato di seguito è di lunghezza $3$ e di grandezza (numero
di parole) $3$:
#grid(
columns: (0.3fr, 0.7fr),
[
$ C = {001, 010, 100} $
],
[
$ A^(3)_(q) = A_(q) times A_(q) times A_(q) =
{000, 001, 010, 011, 100, 101, 110, 111} $
]
)
]
Si supponga di aver inviato una parola $p = (x_(1), ..., x_(n))$ e di aver
ricevuto una parola $p' = (y_(1), ..., y_(n))$. Se queste differiscono,
allora significa che si è in presenza di un errore. Per semplicità, si
considerino solamente errori di primo tipo, ovvero che uno o più simboli
di $p$ non corrispondano ai rispettivi simboli in $p'$. Il numero di errori
verrà conteggiato in base al numero di coppie di simboli che differiscono
(una coppia di simboli diversi è un errore, due coppie di simboli diversi
sono due errori, ecc...). Si assuma inoltre che gli errori siano _eventi
indipendenti_, ovvero che se $x_(i) != y_(i)$ per una certa posizione $i$
questo non influenza il verificarsi di un errore in un'altra posizione
$j != i$.
Per misurare quanto $p$ e $p'$ sono "dissimili", è necessario introdurre
una misura di _distanza_. La forma di distanza maggiormente utilizzata in
questo contesto è la *distanza di Hamming*:

$ d: A_(q)^(n) times A_(q)^(n) |-> RR, space
d(p, p') = |{i: x_(i) != y_(i)}| $

Ovvero, la distanza di Hamming è pari al numero di simboli (quali che
siano) delle due parole nella stessa posizione che differiscono. Dato
che in questo contesto verrà sempre usata la distanza di Hamming come
forma di distanza, si sottintenderà con il solo termine "distanza"
la distanza di Hamming.
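
A titolo puramente illustrativo, la definizione precedente si traduce in poche
righe nel linguaggio di scripting di Typst (schizzo minimale, non presente negli
appunti originali, che assume due stringhe della stessa lunghezza):

// Distanza di Hamming fra due parole rappresentate come stringhe.
#let hamming(p, q) = p.clusters().zip(q.clusters()).filter(((a, b)) => a != b).len()
// Ad esempio, `hamming("001", "010")` restituisce 2.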
La distanza (di Hamming) gode, per qualsiasi parola sull'alfabeto
$A_(q)$, delle seguenti proprietà:
+ $d(p, p') = d(p', p)$;
+ $d(p, p') = 0$ se e soltanto se $p = p'$;
+ $d(p, p') gt.eq 0$;
+ È verificata la *disuguaglianza triangolare*, ovvero
$d(p, p') lt.eq d(p, p'') + d(p'', p')$.
Dato un codice $C subset.eq A_(q)^(n)$, si dice *distanza minima* di $C$
il minimo delle distanze tra due parole distinte di $C$:
$ d(C) = min{d(p, p') : p, p' in C, p != p'} $
#example[
Sia $A_(2) = {0, 1}$ un alfabeto. Sia poi $C = {000, 001, 010, 100,
111}$ un codice su $A_(2)$ di lunghezza $3$. Le distanze fra ciascuna
coppia di parole di $C$, escludendo le coppie ripetute e le distanze
fra ciascuna parola e sé stessa, sono:
#set math.mat(delim:none, column-gap: 1.5em)
$ mat(
d(000, 001) = 1, d(000, 010) = 1, d(000, 100) = 1, d(000, 111) = 3,
d(001, 010) = 2; d(001, 100) = 2, d(001, 111) = 2, d(010, 100) = 2,
d(010, 111) = 2, d(100, 111) = 2
) $
Pertanto, $d(C) = 1$.
]
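
Analogamente, anche la distanza minima si può calcolare in modo meccanico
(schizzo illustrativo che riutilizza la funzione `hamming` definita sopra e
assume un codice con almeno due parole):

// Minimo delle distanze fra tutte le coppie di parole distinte di C.
#let distanza-minima(C) = calc.min(
  ..C.enumerate()
    .map(((i, p)) => C.slice(i + 1).map(q => hamming(p, q)))
    .flatten()
)
// `distanza-minima(("000", "001", "010", "100", "111"))` restituisce 1.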
Si assuma che due entità abbiano a disposizione il medesimo codice, e
che l'una invii all'altra una parola. Si supponga che tale parola ricevuta
non sia presente nel codice; se ne deduce che questa sia stata danneggiata
durante la trasmissione. La parola che è ragionevole assumere sia stata
inviata in origine è quella presente nel codice che maggiormente somiglia
a quella ricevuta, fintanto che la differenza fra le due è sufficientemente
piccola. Tale principio viene detto *principio di massima verosimiglianza*.
Si supponga di avere a disposizione un codice $C$ e di aver ricevuto
la parola $w in A_(q)^(n)$. Il codice $C$ *corregge* la parola $w$ se
e soltanto se esiste una ed una sola parola in $C$ a distanza minima
da $w$, cioè se e soltanto se esiste una ed una sola $x in C$ tale per
cui $d(x, w) = min{d(y, w) : y in C}$. In tal caso, $w$ viene corretta
con $x$.
#example[
Sia $C = {000000, 111111, 222222}$ un codice di lunghezza $6$
sull'alfabeto $A_(3) = {0, 1, 2}$. Si supponga che Alice invii
a Bob la parola $000000$, e che Bob corregga la parola ricevuta
impiegando il principio di massima verosimiglianza.
- Si supponga che Bob riceva la parola $001102$. Poiché:
$ d(000000, 001102) = 3, d(111111, 001102) = 4, d(222222, 001102) = 5 $
Bob corregge (correttamente) la parola ricevuta con $000000$.
- Si supponga che Bob riceva la parola $022220$. Poiché:
$ d(000000, 022220) = 4, d(111111, 022220) = 6, d(222222, 022220) = 2 $
Bob corregge (erroneamente) la parola ricevuta con $222222$.
- Si supponga che Bob riceva la parola $000111$. Poiché:
$ d(000000, 000111) = 3, d(111111, 000111) = 3, d(222222, 000111) = 6 $
	Bob non è in grado di correggere la parola ricevuta, perché esistono
	più parole con la stessa distanza.
]
Un codice $C subset.eq A_(q)^(n)$ si dice $h$-*rivelatore* se $h$ è il numero
massimo di errori che è in grado di rivelare.
#theorem[
Sia $k = d(C) - 1$. Ogni codice $C subset.eq A_(q)^(n)$ è $k$-rivelatore.
]
#proof[
    Sia $p$ la parola inviata, e sia $p'$ la parola ricevuta. Sia poi $t$ il
    numero di errori subiti da $p$ durante la trasmissione, ovvero $d(p, p')
    = t$. Si distinguono due casi:
    - $t lt.eq k$. Allora $d(p, p') = t lt.eq k < k + 1 = d(C) = min{d(w, w') : w,
    w' in C, w != w'}$. Questo significa che $p' in.not C$, e che quindi
    i $t$ errori vengono rivelati: ogni configurazione di al più $k$
    errori viene dunque rivelata;
    - $t > k$. Allora $d(p, p') = t gt.eq k + 1 = d(C) = min{d(w, w') :
    w, w' in C, w != w'}$. Questo significa che potrebbe aversi $p'
    in C$, e che quindi gli errori potrebbero non venire
    rivelati: non vi è garanzia di rivelare più di $k$ errori.
    Si ha quindi che $C$ è $k$-rivelatore. Viceversa, sia $k$ il massimo
    numero di errori che $C$ è in grado di rivelare. Ogni coppia di parole
    distinte di $C$ deve differire in almeno $k + 1$ componenti,
    pertanto si ha $d(C) gt.eq k + 1$. Inoltre, poiché $C$ rivela $k$ errori
    ma non $k + 1$, devono esistere due parole $w, w' in C$ tali per cui
    $d(w, w') = k + 1$. Ne consegue che $d(C) - 1 = k$.
]
#corollary[
Un codice $C subset.eq A_(q)^(n)$ rivela $t$ errori se e soltanto se
$d(C) gt.eq t + 1$.
]
#proof[
Il codice $C$ rivela $t$ errori se e solo se alterando una parola
di $C$ in $r lt.eq t$ componenti non si ottiene un'altra parola di
$C$. Questo avviene se e solo se due parole di $C$ distano almeno
$t + 1$.
]
Un codice $C subset.eq A_(q)^(n)$ si dice $h$-*correttore* se $h$ è il numero
massimo di errori che è in grado di correggere.
#theorem[
	Sia $k = floor(frac(d(C) - 1, 2))$. Ogni codice $C subset.eq A_(q)^(n)$ è
$k$-correttore.
] <Min-distance-is-correcting>
#proof[
    Siano $p in C$ e $p' in A_(q)^(n)$ rispettivamente la parola trasmessa e la
    parola ricevuta, con $t$ numero di errori subiti da $p$ durante
    la trasmissione. Si supponga poi $t lt.eq k$. Affinché $C$ sia
$k$-correttore, la parola $p$ che viene scelta come correzione
per $p'$ deve essere l'unica e sola parola in $C$ che dista da
$p'$ meno di tutte. In altre parole, qualsiasi parola $p''$
    distinta da $p$ dev'essere più distante da $p'$ di quanto $p'$
disti da $p$. Formalmente:
$ forall p'' in C, p'' != p space "si ha" space d(p'', p') >
d(p, p') = t $
    Avendo supposto $t lt.eq k$, è sufficiente dimostrare che:
$ forall p'' in C, p'' != p space "si ha" space d(p'', p') > k $
Si supponga per assurdo che questo non sia vero, e che esista
quindi una parola $p''' in C$ distinta da $p$ tale per cui
$d(p''', p') lt.eq k$. Applicando la disuguaglianza triangolare,
si ha:
$ d(p''', p) lt.eq d(p''', p') + d(p', p) lt.eq k + k = 2k =
2floor(frac(d(C) - 1, 2)) $
Per definizione di arrotondamento per difetto, $floor(a) =
a - epsilon$ con $epsilon in RR$ tale che $0 lt.eq epsilon
< 1$. Si ha quindi:
    $ d(p''', p) lt.eq 2(frac(d(C) - 1, 2) - epsilon) =
    frac(cancel(2) (d(C) - 1), cancel(2)) - 2epsilon =
    d(C) - 1 - 2epsilon lt.eq d(C) - 1 < d(C) $

    Questo però non è possibile, perché per ipotesi $d(C)$ è la minima
    distanza fra due parole distinte in $C$, mentre $p''' != p$. Pertanto,
    occorre assumere che $p'''$ non possa esistere.
]
#corollary[
Un codice $C subset.eq A_(q)^(n)$ corregge $t$ errori se e soltanto se
$d(C) gt.eq 2 t + 1$.
]
// #proof[
// Dimostrabile, da aggiungere
// ]
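
Le due capacità si possono riassumere, sempre a puro titolo illustrativo, in due
funzioni del linguaggio di scripting di Typst:

// Errori rivelati e corretti in funzione della distanza minima d.
#let errori-rivelati(d) = d - 1
#let errori-corretti(d) = calc.floor((d - 1) / 2)
// Per C = {000000, 111111, 222222} si ha d(C) = 6: C rivela 5 errori e ne corregge 2.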
|
https://github.com/jxpeng98/Typst-CV-Resume | https://raw.githubusercontent.com/jxpeng98/Typst-CV-Resume/main/legacy/modernpro-cv-legacy.typ | typst | MIT License | #let date_colour = rgb("#666666")
#let primary_colour = rgb("#2b2b2b")
#let headings_colour = rgb("#6A6A6A")
#let subheadings_colour = rgb("#333333")
// Set font type for all text
#let fonttype = "macfont"
#let font_head = {
if fonttype == "macfont" {
"Helvetica Neue"
} else if fonttype == "openfont" {
"PT Sans"
} else {
"Times New Roman"
}
}
#let font_section = {
if fonttype == "macfont" {
"Helvetica"
} else if fonttype == "openfont" {
"PT Sans"
} else {
"Times New Roman"
}
}
#let font_subsection = {
if fonttype == "macfont" {
"Helvetica"
} else if fonttype == "openfont" {
"PT Sans"
} else {
"Times New Roman"
}
}
#let font_term = {
if fonttype == "macfont" {
"Heiti TC"
} else if fonttype == "openfont" {
"PT Sans"
} else {
"Times New Roman"
}
}
#let font_descript = {
if fonttype == "macfont" {
"Heiti SC"
} else if fonttype == "openfont" {
"PT Sans"
} else {
"Times New Roman"
}
}
#let font_jobdetail = {
if fonttype == "macfont" {
"Helvetica"
} else if fonttype == "openfont" {
"PT Sans"
} else {
"Times New Roman"
}
}
#let font_info = {
if fonttype == "macfont" {
"Helvetica"
} else if fonttype == "openfont" {
"PT Sans"
} else {
"Times New Roman"
}
}
#let font_keyword = {
if fonttype == "macfont" {
"Helvetica"
} else if fonttype == "openfont" {
"PT Sans"
} else {
"Times New Roman"
}
}
#let font_bib = {
if fonttype == "macfont" {
"Helvetica"
} else if fonttype == "openfont" {
"PT Sans"
} else {
"Times New Roman"
}
}
#let font_award = {
if fonttype == "macfont" {
"Helvetica"
} else if fonttype == "openfont" {
"PT Sans"
} else {
"Times New Roman"
}
}
#let recipientgenerate(starttitle, jobtitle, date, department, university, address, postcode) = {
align(left, {
if department != [] {
text(10pt, font: font_info, fill: subheadings_colour, weight: "bold")[#department]
}
h(1fr)
if date != "" {
text(10pt, font: font_info, fill: primary_colour, weight: "light")[#date\ ]
} else {
text(
10pt,
font: font_info,
fill: primary_colour,
weight: "light",
)[ #datetime.today(offset: auto).display("[day] [month repr:long] [year]")\ ]
}
if university != [] {
text(10pt, font: font_info, fill: subheadings_colour, weight: "bold")[#university\ ]
}
if address != [] {
text(10pt, font: font_info, fill: headings_colour, weight: "light")[#address\ ]
}
if postcode != [] {
text(10pt, font: font_info, fill: headings_colour, weight: "light")[#postcode ]
}
})
align(
left,
text(12pt, font: "Helvetica", fill: primary_colour, weight: "medium")[#upper([Job Application for #jobtitle])],
)
v(0.1em)
set text(11pt, font: "Helvetica", fill: primary_colour, weight: "regular")
[#starttitle]
}
#let recepient(date, department, university, address, postcode) = {
align(left, {
text(10pt, font: font_info, fill: subheadings_colour, weight: "bold")[#department]
h(1fr)
text(10pt, font: font_info, fill: primary_colour, weight: "light")[#date\ ]
text(10pt, font: font_info, fill: subheadings_colour, weight: "bold")[#university\ ]
text(10pt, font: font_info, fill: headings_colour, weight: "light")[#address\ ]
text(10pt, font: font_info, fill: headings_colour, weight: "light")[#postcode ]
})
}
// Section Headings (Education, Experience, etc)
#let section(title) = {
text(12pt, font: font_section, fill: headings_colour, weight: "medium")[#upper[#title]\ ]
}
// Subsection Headings (University, Company, etc)
#let subsection(content) = {
text(11pt, font: font_subsection, fill: subheadings_colour, weight: "bold")[#upper[#content] ]
}
#let education(university, major, period, location, detail) = {
text(11pt, font: font_section, fill: subheadings_colour, weight: "bold")[#upper[#university] ]
h(1fr)
text(11pt, font: font_term, fill: headings_colour, weight: "medium")[#period \ ]
text(11pt, font: font_descript, fill: subheadings_colour, weight: "semibold")[#major ]
h(1fr)
text(11pt, font: font_term, fill: headings_colour, weight: "medium")[#location \ ]
if detail != [] or detail != "" {
text(11pt, font: font_info, fill: primary_colour, weight: "light")[#detail]
}
}
// Time period and location
#let term(period, location) = {
if location == [] or location == "" {
text(9pt, font: font_term, fill: headings_colour, weight: "medium")[#period ]
} else {
text(9pt, font: font_term, fill: headings_colour, weight: "medium")[#period | #location ]
}
}
// Projects
#let project(title, period, info) = {
text(11pt, font: font_descript, fill: subheadings_colour, weight: "semibold")[#title ]
if period != [] or period != "" {
h(1fr)
text(11pt, font: font_term, fill: headings_colour, weight: "medium")[#period \ ]
} else {
[\ ]
}
if info != [] or info != "" {
text(11pt, font: font_info, fill: primary_colour, weight: "light")[#info ]
}
}
// Description of a job, degree, etc
#let descript(content) = {
text(11pt, font: font_descript, fill: subheadings_colour, weight: "semibold")[#content ]
}
// Job title
#let jobtitle(firm, title, period, location) = {
text(11pt, font: font_section, fill: subheadings_colour, weight: "bold")[#upper[#firm] ]
h(1fr)
text(11pt, font: font_term, fill: headings_colour, weight: "medium")[#period \ ]
text(11pt, font: font_descript, fill: subheadings_colour, weight: "semibold")[#title]
h(1fr)
text(11pt, font: font_term, fill: headings_colour, weight: "medium")[#location]
}
//job details
#let jobdetail(content) = {
text(
11pt,
font: font_jobdetail,
fill: primary_colour,
weight: "light",
baseline: 0em,
)[#set enum(tight: false, spacing: 0em, indent: 0em, body-indent: 0em)
#content]
}
// Details
#let info(content) = {
text(11pt, font: font_info, fill: primary_colour, weight: "light")[#content\ ]
}
#let sectionsep = {
line(length: 100%, stroke: 0.1pt + primary_colour)
}
#let subsectionsep = {
[#v(0.5pt)]
}
#let awarddetail(award, organise, time) = {
text(11pt, font: font_award, fill: primary_colour, weight: "light")[#award, #organise #h(1fr) #time\ ]
}
#let reference(name, department, firm, address, email) = {
align(left, {
text(11pt, font: font_section, fill: subheadings_colour, weight: "semibold")[#name\ ]
text(10pt, font: font_term, fill: headings_colour, weight: "medium")[#department\ ]
text(10pt, font: font_term, fill: headings_colour, weight: "medium")[#firm\ ]
text(10pt, font: font_term, fill: headings_colour, weight: "medium")[#address\ ]
text(10pt, font: font_term, fill: headings_colour, weight: "medium")[#email]
})
}
#let teaching(position, university, detail) = {
text(11pt, font: font_section, fill: subheadings_colour, weight: "bold")[#upper[#university]]
text(11pt, font: font_descript, fill: subheadings_colour, weight: "semibold")[ | #position \ ]
if detail != [] or detail != "" {
text(11pt, font: font_info, fill: primary_colour, weight: "light")[#detail]
}
}
#let reference2(name, department, firm, email) = {
text(10pt, font: font_descript, fill: subheadings_colour, weight: "semibold")[#name | #email\ ]
text(10pt, font: font_term, fill: headings_colour, weight: "medium")[#department, #firm\ ]
}
#let biblist(contents) = {
  for id in contents [
#id.title (#id.year)
]
}
#let keyword(content) = {
text(9pt, font: font_info, fill: headings_colour, weight: "light")[#content\ ]
}
// last update
#let lastupdate(lastupdated, date) = {
if lastupdated == "true" {
set text(8pt, font: font_info, fill: primary_colour, weight: "light")
[Last updated: #date]
}
}
// Publications
#let publication(path, styletype) = {
set text(11pt, font: font_info, fill: primary_colour, weight: "light")
bibliography(path, title: none, full: true, style: styletype)
}
#let cv-single-legacy(
continue_header: "",
name: "",
address: "",
lastupdated: "",
pagecount: "",
date: "",
contacts: (),
bibfile: (),
mainbody,
) = {
// show contact details
let display(contacts) = {
set text(
9pt,
font: "Heiti TC",
fill: headings_colour,
weight: "medium",
top-edge: "baseline",
bottom-edge: "baseline",
baseline: 2pt,
)
contacts.map(contact =>{
if contact.link == none [
contact.text
] else {
link(contact.link)[#{ contact.text }]
}
}).join(" | ")
}
set page(
footer: [
#lastupdate(lastupdated, date)
#h(1fr)
#if pagecount == "true" {
text(9pt, font: "Helvetica", fill: primary_colour, weight: "light")[#counter(page).display("1 / 1", both: true)]
}
],
)
if continue_header == "true" {
set page(
margin: (left: 2cm, right: 2cm, top: 2.5cm, bottom: 1.5cm),
header: {
// Head Name Section
text(
20pt,
font: "Helvetica Neue",
fill: primary_colour,
weight: "light",
top-edge: "baseline",
bottom-edge: "baseline",
baseline: 11pt,
)[#align(center, [#name])]
v(2pt)
// text(9pt,font:"Heiti TC",fill:headings_colour, weight: "medium",top-edge:"baseline",bottom-edge:"baseline")[#align(center,[#address])]
align(center)[#display(contacts)]
line(length: 100%, stroke: 0.5pt + primary_colour)
},
header-ascent: 1em,
)
mainbody
} else {
set page(margin: (left: 1.8cm, right: 1.8cm, top: 1cm, bottom: 1cm))
text(
20pt,
font: "Helvetica Neue",
fill: primary_colour,
weight: "light",
top-edge: "baseline",
bottom-edge: "baseline",
baseline: 11pt,
)[#align(center, [#name])]
v(2pt)
// text(9pt,font:"Heiti TC",fill:headings_colour, weight: "medium",top-edge:"baseline",bottom-edge:"baseline")[#align(center,[#address])]
align(center)[#display(contacts)]
line(length: 100%, stroke: 0.5pt + primary_colour)
mainbody
}
//Main Body
}
#let cv-double-legacy(name: "", address: "", lastupdated: "", date: "", contacts: (), left, right) = {
// show contact details
let display(contacts) = {
set text(
11pt,
font: font_term,
fill: headings_colour,
weight: "medium",
top-edge: "baseline",
bottom-edge: "baseline",
baseline: 2pt,
)
contacts.map(contact =>{
if contact.link == none [
contact.text
] else {
link(contact.link)[#{ contact.text }]
}
}).join(" | ")
}
set page(margin: (left: 1.25cm, right: 1.25cm, top: 3.2cm, bottom: 1.5cm), header: {
// Head Name Section
text(
25pt,
font: font_head,
fill: primary_colour,
weight: "light",
top-edge: "baseline",
bottom-edge: "baseline",
baseline: 12pt,
)[#align(center, [#name])]
text(
11pt,
font: font_descript,
fill: headings_colour,
weight: "medium",
top-edge: "baseline",
bottom-edge: "baseline",
)[#align(center, [#address])]
align(center)[#display(contacts)]
line(length: 100%, stroke: 0.5pt + primary_colour)
}, header-ascent: 1em, footer: [
#lastupdate(lastupdated, date)
])
//Main Body
grid(columns: (1fr, 2fr), column-gutter: 2em, left, right)
}
#let coverletter-legacy(
name: "",
address: "",
contacts: (),
recipient: (starttitle: "", jobtitle: "", date: "", department: "", university: "", address: "", postcode: ""),
mainbody,
) = {
// show contact details
let display(contacts) = {
set text(
11pt,
font: font_term,
fill: headings_colour,
weight: "medium",
top-edge: "baseline",
bottom-edge: "baseline",
baseline: 2pt,
)
contacts.map(contact =>{
if contact.link == none [
contact.text
] else {
link(contact.link)[#{ contact.text }]
}
}).join(" | ")
}
set page(margin: (left: 2cm, right: 2cm, top: 3.2cm, bottom: 1.5cm), header: {
// Head Name Section
text(
25pt,
font: font_head,
fill: primary_colour,
weight: "light",
top-edge: "baseline",
bottom-edge: "baseline",
baseline: 12pt,
)[#align(center, [#name])]
text(
11pt,
font: font_descript,
fill: headings_colour,
weight: "medium",
top-edge: "baseline",
bottom-edge: "baseline",
)[#align(center, [#address])]
align(center)[#display(contacts)]
line(length: 100%, stroke: 0.5pt + primary_colour)
}, header-ascent: 1em)
// Add recipient details
recipientgenerate(
recipient.starttitle,
recipient.jobtitle,
recipient.date,
recipient.department,
recipient.university,
recipient.address,
recipient.postcode,
)
set par(justify: true, first-line-indent: 2em)
set text(11pt, font: "Helvetica", fill: primary_colour, weight: "regular")
mainbody
set text(11pt, font: font_info, fill: primary_colour, weight: "regular")
v(2pt)
[Sincerely,\ ]
v(2pt)
[*#name*]
} |
https://github.com/piepert/philodidaktik-hro-phf-ifp | https://raw.githubusercontent.com/piepert/philodidaktik-hro-phf-ifp/main/src/parts/ephid/descartes/wachsbeispiel.typ | typst | Other | #import "/src/template.typ": *
#let med(page) = en[Vgl. Descartes, Renè: AT VII. S. #page.]
== #ix("Das Wachsbeispiel", "Wachsbeispiel")
Auf das #ix("cogito-Argument") folgt die Untersuchung eines Stückchen Wachses. #ix("Descartes", "Descartes, René") erklärt daraufhin, dass er das Wachs nur durch seinen Geist wahrnimmt. Das erkenne man daraus, dass die empirischen Eigenschaften des Wachses sich verändern können und trotzdem dasselbe Wachs als dasselbe erkannt werden kann.#med[30] Er folgert, dass egal wie er seine Umwelt, die Körper um ihn herum -- wofür das Wachs stellvertretend steht --, wahrnimmt, er selbst nicht weniger existieren kann.#med[33]
#set par(justify: false)
#grid(columns: (10%, 45%, 45%).map(e => e - 1.5em),
row-gutter: 1em,
column-gutter: 1.5em,
strong[Stufe], strong[Thema im Rahmenplan], strong[Thema bei Descartes],
[5], align(center + horizon)[/], align(center + horizon)[/],
[6], [
- *Der Urstoff -- die vier Elemente:* Verändert sich alles Seien, oder ist es beständig?#en[Vgl. @MBWKMV1996_RP56[S. 26]]
], [
- Gibt es trotz Veränderung etwas, was gleich bleibt, was dann erkannt werden kann? Gibt es einen Stoff, der der Veränderung trotzt?
],
[7], [
- *Die Gedankenwelt:* Wann und wie denken Menschen?#en[Vgl. @MBWKMV2002_RP710[S. 25]]
], [
- Welche Rolle spielen unsere Gedanken beim Erfassen der Wirklichkeit?
],
[8/9], [
- *Eigene Befindlichkeit und Wahrnehmung der Wirklichkeit:* Hängt die Wahrnehmung von Wirklichkeit vom Bewusstsein meiner selbst ab?#en[Vgl. @MBWKMV2002_RP710[S. 26]]
], [
- Kann ich die Wirklichkeit nur mit meinem Bewusstsein wahrnehmen?
],
[10], [
- *Wege philosophischen Denkens:* Was heißt Denken?#en[Vgl. @MBWKMV2002_RP710[S. 34]]
], [
- Kann ich nur mit reinem Denken die Welt erkennen?
],
[11/12], [
- *Erkenntnis:* dialektische Auseinandersetzung mit Thesen zur menschlichen Erkenntnis, Rationalismus#en[Vgl. @MBWKMV2019_RP1112[S. 12]]
], [
- Denken und Vernunft als reine Methode der Erkenntnis
]) |
https://github.com/ludwig-austermann/typ-pack | https://raw.githubusercontent.com/ludwig-austermann/typ-pack/main/README.md | markdown | # typpack
A tool for packaging a Typst package for publishing.
## How to use
Run the script inside your package with nushell. Add the following to `typst.toml`:
```toml
...
[packaging]
include = []
prescript = ...
postscript = ...
```
where `include` lists any extra files to ship besides the default ones.

In `README.md`, every occurrence of `{{PACKAGE VERSION}}` is replaced by the package version given in `typst.toml`.
|
https://github.com/benjamineeckh/kul-typst-template | https://raw.githubusercontent.com/benjamineeckh/kul-typst-template/main/src/core/component.typ | typst | MIT License | #import "component/abstract.typ": insert-abstract
#import "component/bibliography.typ": insert-bibliography
#import "component/copyright.typ": insert-copyright
#import "component/cover-page.typ": insert-cover-page, parse-image
#import "component/declaration-of-originality.typ": insert-dec-of-orig
#import "component/keywords.typ": insert-keywords
#import "component/preface.typ": insert-preface
#import "component/outline.typ": insert-outline |
https://github.com/rabotaem-incorporated/algebra-conspect-1course | https://raw.githubusercontent.com/rabotaem-incorporated/algebra-conspect-1course/master/sections/05-group-theory/03-factor-groups.typ | typst | Other | #import "../../utils/core.typ": *
== Нормальные подгруппы и факторгруппы
#ticket[Нормальные подгруппы]
#def[
Подгруппа $H$ группы $G$ называется _нормальной подгруппой_, если:
$
forall g in G space forall h in H : g h g^(-1) in H
$
Обозначается $H nsubg G$.
]
#pr[
Пусть $H$ подгруппа $G$. Тогда 4 условия эквивалентны:
+ $H nsubg G$
+ $forall g in G: g H = H g$
+ $forall g in G: g H g^(-1) subset H$
+ $forall g in G: g H g^(-1) = H$
]
#proof[
- "$1 <==> 3$" и "$4 ==> 3$": тривиально.
- "$3 ==> 4$": Если $g H g^(-1) subset H$ для каждого $g$, то и $g^(-1) H g subset H$. Тогда
$
g^(-1) H g subset H ==> H g subset g H ==> H subset g H g^(-1).
$
Получилось обратное включение.
- "$2 <==> 4$": получается умножением/делением справа на $g$.
]
#notice[
Условие 2 можно переписать как $G fg H = H \\ G$.
]
#notice[
Если $G$ --- Абелева, то $H nsubg G$.
]
#notice[
$(G : H) = 2 ==> H nsubg G$
]
#ticket[Факторгруппа]
#def[
_Фактормножество_ --- множество всех классов эквивалентности для заданного отношения эквивалентности $sim$ на множестве.
То есть в нашем случае, это множество смежных классов.
]
#def[
Пусть $H nsubg G$. На фактормножестве $G fg H$ введем операцию умножения:
$
(G fg H) times (G fg H) &--> G fg H \
(M, N) &maps M N
$
Такая структура называется _факторгруппой_.
]
#th[
$G fg H$ с введенным выше умножением --- группа.
]
#proof[
+ #[
Замкнутость:
$
        (g_1 H) (g_2 H) = g_1 (H g_2) H = g_1 (g_2 H) H = g_1 g_2 H H = g_1 g_2 H.
$
]
+ #[
Ассоциативность:
        $ (M N) P = {(m n)p bar m in M, space n in N, space p in P} = M(N P) $
]
+ #[
Нейтральный элемент:
$ e H space#[--- нейтральный элемент] $
]
+ #[
Обратный к $g H$ --- это $g^(-1)H$
        $ g H mul g^(-1) H = (g g^(-1))H = e H $
$ g^(-1)H mul g H = (g^(-1)g)H = e H $
]
]
|
https://github.com/flechonn/interface-typst | https://raw.githubusercontent.com/flechonn/interface-typst/main/BD/TYPST/exo2_missing_attributes.typ | typst | #show terms: meta => {
let title = label("Integration Exercise")
let duration = label("30min")
let difficulty = label("hard")
let solution = label("1")
let figures = label("")
let points = label("20pts")
let bonus = label("0")
let author = label("")
let references = label("")
let language = label("english")
let material = label("")
let name = label("exo2_missing_attributes")
}
|
|
https://github.com/7sDream/fonts-and-layout-zhCN | https://raw.githubusercontent.com/7sDream/fonts-and-layout-zhCN/master/chapters/06-features-2/substitution/khmer.typ | typst | Other | #import "/lib/draw.typ": *
#import "/template/lang.typ": khmer
#let start = (0, 0)
#let end = (1000, 300)
#let color-khmer = (color, it) => text(fill: color, khmer(it))
#let red-khmer = color-khmer.with(red)
#let gray-text = text.with(fill: gray)
#let graph = with-unit((ux, uy) => {
// mesh(start, end, (100, 100))
let t = text[
#red-khmer[\u{1783}]#gray-text[+]#khmer[\u{17D2}]#gray-text[+]#khmer[\u{1784}]#gray-text[=]#box[#place[#khmer[\u{1783}\u{17D2}\u{1784}]]#red-khmer[\u{1783}]
]
]
txt(t, (10, 150), anchor: "lc", size: 200 * ux)
txt(align(center)[
U+1783#linebreak()
KHMER LETTER#linebreak()
KHO
], (100, 300), anchor: "ct", size: 18 * ux)
txt(align(center)[
U+17D2#linebreak()
KHMER SIGN#linebreak()
COENG
], (370, 300), anchor: "ct", size: 18 * ux)
txt(align(center)[
U+1784#linebreak()
KHMER LETTER#linebreak()
NGO
], (610, 300), anchor: "ct", size: 18 * ux)
})
#canvas(end, start: start, width: 100%, graph)
|
https://github.com/xiongyaohua/typst-template-swjtu-thesis | https://raw.githubusercontent.com/xiongyaohua/typst-template-swjtu-thesis/main/template/example.typ | typst | #import "/lib.typ": *
/// 开始论文写作
///
/// 参数提供论文的基本信息。暂不确定的信息可空置,论文中相关位置由红色文字占位。确定后及时补充。
#show: 论文.with(
题目: [如何用Typst排版论文],
年级: [2024],
//学号: [123456],
//姓名: [张三],
专业: [交通工程],
指导教师: [熊耀华],
//发题日期: datetime(year: 2023, month: 12, day: 1),
//完成日期: datetime.today(),
目的意义: [
写论文是大学毕业的必要条件,有重要的的意义。使用有效的工具排版论文可以减少工作量,提高结果质量。传统的工具中,微软Word所见即所得、容易上手,但编辑论文这样的复杂文档时难以保证格式统一;LaTeX是学术论文写作的事实标准,但开发时间久远,易用性不好,安装、配置、定制都很繁琐。
本研究探索一种新型排版工具Typst在本科论文排版中的潜力,为快速高质量排版本科论文提供技术支持。
],
任务: [
本研究中学生应完成以下任务:
- 阅读论文排版相关文献;
- 尝试用Word排版论文;
- 尝试用LaTeX排版论文;
- 结合本文学习Typst排版,与前两个排版流程进行对比;
- 总结Typst排版论文的优势和不足,为未来改进提供建议。
],
达成度: [
通过本研究,学生实现以下专业培养目标:
- 知识结构上了解排版的基本原理;
- 能力结构上掌握结构化排版技能;
- 素质结构上提升主动学习探索精神。
],
时间分配: [
/ 文献阅读: 阅读学术论文写作技巧、排版理论相关文献(3周)
/ Word排版试验: 用Word尝试排版两万字的学位论文。完整包括目录、章节、索引、引用等学术论文组成元素。(2周)
/ LaTeX排版试验: 用LaTeX排版同一内容,要求与上一条相同。(3周)
/ Typst排版试验: 用Typst排版同一内容,要求与上一条相同。(3周)
/ 撰写报告: 比较不同软件排版的体验,撰写报告。从安装难度、定制丰富性、排版效果等方面比较三种软件的优劣。(3周)
/ 评阅及答辩: 提交报告,准备答辩(2周)
],
备注:[
无
],
中文关键词: [论文、排版、Typst],
中文摘要: [
论文排版规范是毕业要求之一,有重要的意义。本文探索用新锐结构化排版工具Typst在论文排版工作中的应用。本文首先介绍了排版的一般理论;然后比较分析了不同类型排版软件的特点;最后用Typst实际排版一篇符合西南交通大学学位论文格式要求的论文作为案例。通过理论分析和案例分析本文得出结论:Typst能够保证排版质量、提高排版效率、降低排版工作量,是学位论文排版的有效工具。
],
英文关键词: [Thesis, Typesetting],
英文摘要: [
A well typesetted thesis, conforming to required style, is a prerequisite for academic degrees. This research explore the potential of a new structural typesetting system, named _Typst_, in thesis typesetting. We first introduced the general typesetting rules and principles, then analyzed various typesetting systems by feature comparison. Finally, with the gained insight and skill, we carried out an actural typesetting project, which produces a sample text conforming to the thesis style guide of Southwest Jiaotong University, as a case study. Based on theoritial analysis and case study we conclude that _Typst_ represents an effective typesetting tool for thesis, and it excels not only in quality of result, but also in efficiency of process.
]
)
#page(header: none, footer: none)[] //FIXME: 自动插入空白页
= 绪论 <绪论章>
Typst是一种新锐结构化文档排版工具,可用于复杂结构化文档的排版。
@吉祥物[]是该项目的吉祥物:字母怪兽,根据名字的首字母“t”设计。本文主要有两个目的,一是作为文档简要介绍用Typst排版本科毕业论文的方法;二是_本文自身用Typst排版_,可以作为案例证明Typst排版的有效性、易用性。文本源代码可访问#link("https://github.com/xiongyaohua/typst-template-swjtu-thesis")获取。
#figure(image(
"/images/typst-logo.jpg",
width: 40%
),
caption: [Typst项目吉祥物]
) <吉祥物>
== 背景
排版软件各式各样,论文写作最常用的有两种:微软Word和LaTeX。其中Word作为图形化程序上手简单,但排版论文,尤其是长篇学术论文时,往往存在效率低下,排版质量欠缺的问题。LaTeX是当前学术排版的事实标准,排版质量近乎完美,但是LaTeX开发年代久远,语法晦涩、使用繁琐,新手难以入门。
Typst可以看作现代化的LaTeX,基于同样的核心设计思想——_“内容与形式分离”_,但是吸收了半个世纪软件技术的成果,在表达能力、处理效率、易用性等各个方面有了长足的进步@madje2022。#ref(<衰和帅>)风趣的表达了两种系统的区别。
== 研究问题
本文将结合学位论文排版需求,探索Typst排版复杂的结构化文档的能力。排版一篇典型的学位论文,可以分解为下列问题:
- 如何安装Typst,配置写作环境;
- 如何对论文手稿进行版本管理;
- 如何在手稿中标记各种格式要求,例如标题、列表、强调、原文引用等;
- 如何排版数学公式;
- 如何插入图、表;
- 如何交叉索引论文中的其他元素,例如公式、图标、章节等;
- 如何管理参考文献、引用参考文献。
== 章节安排
本文后续章节安排如下:
- @文献综述章[]综述相关文献,包括排版的一般概念和理论;排版工具软件的主要分类;Typst软件的设计思想。
- @排版方法章[]描述Typst软件,基于西南交大学位论文模板(后文简称*模板*),排版学位论文的方法。
- @流程分析章[]分析Typst软件和模板的优势和不足,建议后续改进提供方向。
- 最后总结全文,得出结论。
= 文献综述 <文献综述章>
本章首先介绍排版的一般概念和理论,帮助读者建立排版流程的概念模型。然后简单比较排版软件的两种主要设计思路:结构化排版和所见即所得排版,比较各自的优缺点、适用范围。最后简单分析Typst软件的设计思想,如何实现功能强大、使用简单、运行高效的现代排版系统。
== 排版一般理论和概念
*排版*本意是将活字排列成印版,是活字印刷的前置环节@griffin2021。传统的书籍出版流程大致如下:
+ 作者写出*手稿*。手稿中除文字本身,还按照一定规则*标记*出文字的逻辑结构。例如加下划线表示标题,提行表示分段,下面加点代表强调。
+ 编辑校对,审核手稿,排除错别字。
+ 版面设计师设计*视觉样式*,决定各种逻辑结构的大小、形状、颜色等视觉属性。例如不同类型元素的字体、字号;行距、段距;压花、水印、页头、页脚、装饰花纹,等等。
+ 排版工人根据手稿和视觉元素规则将铅字排列和固定到印版上,也就是所谓的排版。排版大量文字不但工作量巨大,同时排版是否美观极其依赖排版工人的经验。
+ 印版安装上印刷机,实际印刷。
#h(2em)传统的排版流程人工耗费大、周期长、成本高,普通人无法承担,因此付印自己的作品是少数作者的特权。二十世纪六十年代,随着计算机开始在学术界普及,少数学者开始用计算机排版的尝试,其中最为成功的是斯坦福大学高德纳(<NAME>)教授。在出版自己学术著作的过程中,由于对出版社的排版水平不满意,他开发了一款称为TeX的排版软件,自己完成排版工作@knuth1986。一经推出TeX风靡学术界和出版界,后续Leslie Lamport博士在其基础上开发了LaTeX,至今仍是计算机排版的黄金标准@lamport1994。
在计算机排版流程中,排版软件取代了版面设计师和排版工人的角色。作者只要写好手稿文字,并按照排一定规则标记好段落、章节等逻辑结构,排版软件会自动选择视觉样式,生成可以用于印刷的文件(例如pdf文件)。有了排版软件的帮助,作者付印自己作品的难度大大降低。
== 排版工具分类
排版软件从功能上大致应该包含以下三个方面:
+ 编辑文字,标记其逻辑结构;
+ 修改某种逻辑结构的视觉样式;
+ 综合文字、逻辑结构标记、视觉样式三个元素,生成可应刷文件。
#h(2em)按照前两个方面的操作方式,排版软件可以大致分为两大类:批处理排版软件和所见即所得排版软件。更详细的分类参考文献@rebelo2023[]。
=== 批处理排版软件 <批处理软件小节>
TeX/LaTeX/Typst都属于批处理排版软件。所谓批处理是指作者写作过程中面向手稿,主要考虑文字和逻辑结构,而不关心视觉样式。当手稿写作完成后,排版软件一次性将手稿转换为可印刷文件。
批处理排版软件尤其适合处理规模大、结构性强的手稿@rebelo2023。首先写作过程中专注内容和结构,排除视觉元素干扰,更能集中注意力;其次批处理生成可应刷文件能够保证视觉样式的一致性。另外批处理排版本质上是一种面向文字的编程,手稿是源代码,排版软件是编译器,可印刷文件是编译后的结果。通过编程可以很快速完成大量重复的文字编辑工作。
=== 所见即所得排版软件
微软的Word是典型的所见即所得排版软件。所见即所得的写作过程中,不但能编辑文字,还能同时编辑文字的字体、颜色等各种视觉样式。这样的好处首先是直观,符合一般用户直觉,容易上手;其次可以随时观察和修改最终排版效果,灵活性强,适合用于少量视觉性、探索性强的排版工作,如海报等@rebelo2023。
== 批处理排版软件发展历程
Typst软件可以看作对TeX/LaTeX设计思想的继承和发展。TeX/LaTeX开创性的提出了*排版就是编程*这个概念,具有划时代的意义。但由于当时技术条件的局限性,软件的设计实现存在不少问题。例如:
- 70年代计算内存紧张,往往以k为单位。为了节省内存TeX被设计为一种*宏语言*。但后续计算机科学领域的研究和实践证明,宏语言存在各种缺陷,不适合作为主力语言。一个直接的后果就是LaTeX下脆弱的宏包。
- 70年代很少有人考虑非英语世界的用户,更没有Unicode标准。因此TeX一开始只支持英文,后续才扩展到其他欧洲文字,最后才是中日韩等非欧洲文字。由于初始设计的限制,这些扩展直到今天任然存在各种小问题。
- 70年代主力编程语言是C,TeX采用C语言的一种扩展CWEB开发。CWEB后续没能流行起来,除了TeX开发之外基本没有应用场景,因此TeX的代码很少有人能够理解和维护更不要说扩展。
#h(2em)基于以上原因,过去半个世纪里有各种改进TeX/LaTeX系统的尝试,分为两条技术路线。一是在保证与TeX兼容的前提下进行扩展。主要的工作有eTeX增加英语外的欧洲语言支持;pdfTeX支持输出pdf格式的文件;XeTeX增加Unicode编码以及ttf、otf字体技术支持@kew2005。最有雄心的计划应该是LuaTeX,准备用Lua语言尽可能替换TeX宏语言,降低复杂宏包的开发难度@isambert2011。
另外一条路线是放弃和TeX兼容,推倒重来,用现在的技术重新设计排版软件。这条路线上多数项目都是浅尝则止,没能流行起来。毕竟绝大多数用户已经习惯了TeX,新系统必须有很强的吸引力才能让用户愿意学习一种新的系统。这条路线上之前最成功的案例是SILE,现在是Typst,走到了一个新高度。
Typst相对TeX的优势可以风趣的总结为@衰和帅,具体而言包括:
- Typst虽然与TeX不直接兼容,但是完整继承了“排版就是编程”的精神,和对排版细节质量的追求。实际上Typst里的核心算法,如分词、断行等,直接来自于TeX。
- Typst抛弃了TeX的宏语言设计,采用了当前最先进的*不可变函数式编程*(immutable functional programing)语言设计@wadler1992。这种设计极大的提高了语言的严密性、表达力、运行效率。
- Typst开发中采用当前最新的软件技术与工具。例如用Rust语言开发,基于WASM的插件机制,基于freetype库的字体管理,基于LSP协议的编辑器整合,基于Web技术的在线编辑器,等等。通过这些技术不但降低了使用难度,也降低了开发难度。
#figure(
image("/images/why-typst.jpg"),
//caption: [LaTeX衰,Typst帅#emoji.dog.face],
caption: [LaTeX衰,Typst帅],
) <衰和帅>
== 小结
本节对排版软件的一般概念和方法进行综述,说明Typst系统的特性非常适合排版学位论文这类结构化的复杂文档。下一章我们将描述具体排版方法。
= 排版方法 <排版方法章>
本章结合西南交通大学学位论文模板介绍Typst的常用功能。如需了解完整功能请参考Typst官方文档@typst-doc。
== Typst安装和设置
搭建Typst环境最简单的方法是:
+ 下载、安装Visual Studio Code编辑器;
+ 打开编辑器,点击窗口左侧插件边栏;
+ 搜索、安装以下插件
- Typst LSP
- Typst Preview
+ 完成
#h(2em)另外,如果要自动化管理手稿版本和参考文献,还推荐安装以下软件:
+ 版本管理软件Git
+ 文献管理软件Zotero
== 文字模式和代码模式
Typst工作过程中为了区分手稿中的字符是需要排版的_文字_,还是需要运行的_代码_,因此存在两种不同的*工作模式*。其中默认状态下处于*文字模式*,如果需要明确切换到文字模式可以使用方括号包裹,例如`[文字]`。需要切换到*代码模式*时需要使用井号`#代码`表示一行代码,或者用井号加花括号`#{代码}`包裹多行代码块。
#示例(```[
排版是一门艺术,讲究说学逗唱。
#let a = 10
#lorem(a)
#{
let b = 20
lorem(b)
}
]```.text)
对于最终用户,绝大多数时候在默认的文字模式中工作,因此_最外层的方括号可以省略_。
== 普通文字排版
论文中绝大部分是普通文字,最重要的逻辑结构是_章节、段落、列表、强调_等。
Typst用不同数量的_等号`=`表示章节结构_,例如
#示例("[
= 绪论
论文排版很重要,有研究的必要性和可行性。论文排版工具有许多,用哪个工具最好?
= 文献综述
== 国内文献
关于排版国内学者做了如下研究……
== 外国文献
关于排版国外学者做了如下研究……
= 研究方法
我们的研究方法是……
= 实验设置和结果
我们的实验怎么做的,结果如何,说明了什么……
= 结论
我们的结论是:使用合适的工具排版事半功倍,Typst是最好的工具。
]")
其中,源码中正文文字用_空行表示分段_,连续的行属于同一段。例如
#示例("[
这是第一段。
这还是第一段。
空行之后是第二段。
空行之后是第二段。
空行之后是第二段。
空行之后是第三段。空行之后是第三段。
]")
注意,第二、第三段前提行两个字,但第一段没有。原因是首段不提行是西文排版规范。在中文排版规则支持完善前,我们需要用`#h(2m)`_手动插入空白_。
这里`#`切换到代码模式、`h`函数代表横向(horizontal)空白、参数`2em`代表两个字符宽度。
#示例("[
#h(2em)这是第一段。这还是第一段。这还是第一段。这还是第一段。
空行之后是第二段。空行之后是第二段。空行之后是第二段。
]")
Typst用_减号`- `表示无顺序列表,加号`+ `表示有顺序列表,下划线`_ _`表示强调,星号`* *`表示着重强调_。示例如下:
#示例("[
论文排版者,_毕业之大事_,死生之地,存亡之道,*不可不察*也。
排版中需要注意的事项有:
- 注意事项一
- 注意事项二
- ……
#h(2em)排版的流程分三步:
+ 第一步
+ 第二步
+ 第三步
]")
== 数学公式排版
学术论文,尤其是科学、工程类论文往往包含大量数学公式。TeX最初能够流行的重要原因就是高质量的公式排版。Typst继承了TeX的衣钵,具有同样强大,甚至某些方面更为强大的公式排版能力。
数学公式用`$`符号表示,分为两种形式:_行内公式_,内嵌在文字行中,用*没有空白*的`$`包围,例如$f(x)=a + b^2$ 这个函数。_行外公式_,单独成行,用*有空白*的`$`包围,例如
$ f(x)=integral sin(x)/cos(d) dif x $
注意,行内公式只适合简单、不需要在别处引用的公式;对复杂的,需要引用的公式,必须使用行外公式。
数学公式排版依赖一套语法规则,这里篇幅有限,只展示一些典型的案例。需要了解详细规则的读者自行参考Typst文档。
#示例(```[
- 这是行内公式:$f(x)=a + b^2$
- 这是行外公式:$ f(x)=integral sin(x)/cos(d) dif x $
- 多行公式用`\`表示换行,`&`表示对齐位置
$
sum_(k=0)^n k
&= 1 + ... + n \
&= (n(n+1)) / 2
$
- `lr`动态调整包围符号大小
$
angle.l i, 2^(2^i) angle.r \
lr(angle.l i, 2^(2^i) angle.r)
$
- 公式中内嵌文字
$ "area" = pi dot "radius"^2 $
- 使用其他字体
$ cal(A) := { x in RR | x "is natural" } $
]```.text)
== 图、表、代码排版 <图表排版节>
除了文字外,某些时候需要_插入图片、表格、代码段_帮助说明问题。Typst中插入图、表分为两个步骤:
+ 插入图、表、代码本身
+ 添加说明文字
这样设计的原因是,某些文件里图片和表格不需要说明。但_对于论文说明文字是必须的_。
首先使用`image`函数插入图片,`table`函数插入表格。各函数的详细用法参考官方文档,这里提供简单示例如下:
#示例(```[
#h(2em)Typst项目根据名字的首字母“t”,设计了该项目的吉祥物:字母怪兽t。
#image(
"images/typst-logo.jpg",
width: 80%
)
]```.text)
#示例(```[
#h(2em)常用排版软件的作者是:
#table(
columns: (1fr, 1fr),
[软件], [作者],
[Typst], [<NAME>和<NAME>],
[TeX], [<NAME>],
[LaTeX], [<NAME>],
[Word], [不详]
)
]```.text)
插入代码段可以用`raw`函数,或者六个单引号包围#raw("```代码```", block: false)。示例如下,注意单引号后的`python`一词,指定了代码语言,方便进行语法加亮。`raw`函数中的`lang`参数起同样的作用。Typst支持大量编程语言的语法,详情参考文档。
#示例("[
```python
a = 3
for i in range(20):
a += i
print(a)
```
]")
#示例(```[
#raw("int a = 3;
for(i=0;i<20;i++) {
a += i
}
cout << a;
", lang: "c++", block: true)
]```.text)
#示例("[
```rust
let mut a: i32 = 3;
for i: i32 in 0..20 {
a += i;
}
println!(a)
```
]")
添加标题需要使用`figure`函数,其中`caption`参数指定标题,示例如下:
#示例(```[
#h(2em)新老两代排版人,白发苍苍的老教授和意气风发的青年。
#figure(
grid(
columns: (1fr, 1fr), column-gutter: 4pt,
image("images/knuth.jpeg"),
image(
"images/martin-and-laurenz.jpeg"
),
),
caption: [TeX的创造者<NAME>教授(左图);Typst的创造者<NAME>和<NAME>(右图)]
)
]```.text)
#示例("[
#figure(
```python
a = 0
for i in range(1, 1001):
a += i
print(a)
```,
caption: [从1累加到1000的Python代码。使用了高斯教授发明的最新科技`for`循环]
)
]")
`figure`函数,同样可以给表格和代码添加标题,示例如下:
#示例(```[
#figure(
table(
columns: (1fr, 1fr, 0.5fr),
[*软件*], [*作者*], [*年代*],
[TeX], [<NAME>], [1970],
[LaTeX], [<NAME>], [1980],
[Typst], [<NAME>和\ <NAME>], [2020],
[Word], [不详], [1990],
),
caption: [常用排版软件和作者对照表],
kind: table
) <软件作者对照表>
]```.text)
注意,这里`<软件作者对照表>`是一个*引用标签*,方便在文章其他地方引用该表;详情见@交叉引用节。
== 内部交叉引用 <交叉引用节>
_交叉引用_文本中其他部分的内容是学位论文等复杂文本的一大特点。手工维护交叉引用非常繁琐,费时费力还容易出错。Typst可以自动维护交叉引用,节省作者的时间精力。
使用交叉引用首先要给被引用对象*添加标签*,方法是在_被引用对象最后一行尾部_添加,语法是`<标签名>`;常见可引用对象包括:章节、公式、图、表。然后在文章中需要的位置*引用标签*,语法是`@标签名`或者`#ref(<标签名>)`;前者后面_必须接标点符号或空一格_,后者之后可以不用。
注意,标签名的选用原则按重要性排序是有实际意义、便于记忆、简短。不能为了简短取没有意义、不好记忆的名字。例如本章不能用`<第三章>`作为标签,因为顺序可能会调整;而应该用`<排版方法章>`这个有实际意义的标签。
#示例(```[
$ E = m dot.c c^2 $ <质能方程>
#h(2em)爱因斯坦首先提出了@质能方程;根据#ref(<质能方程>) 可以计算原子核发生裂变时放出的能量,因此称为质能方程。
]```.text)
#示例(```[
#h(2em)#ref(<图表排版节>)中的@软件作者对照表 列出了常见排版软件的作者和开发年代。
#ref(<文献综述章>)里的@衰和帅 形象的展示了LaTeX和Typst功能上的区别。
#ref(<批处理软件小节>)介绍了不同批处理结构化排版软件的历史沿革。
]```.text)
附录中标号有特殊要求,一般用“A、B、C”代替“1、2、3”。不过本模板会处理好这些细节,不用作者操心。例如:
#示例(```[
#h(2em)附录中包含以下内容:@附录公式,@附录图,@附录表。
]```.text)
== 外部引用和文献管理
学术写作中往往需要_大量引用他人的作品_,作为支撑材料,作为研究对象、理论基础、或者观点佐证。与内部引用相比,外部引用数量更大,管理更繁琐,是学术写作的一大痛点。幸好Typst提供了很好的支持。
使用外部文献一般需要三个步骤:搜集、整理文献数据库;导出文献;引用文献。接下来简要介绍。
搜集文献是指在平时的阅读、研究过程中,将看过的论文、书籍、网页、邮件等各种资料的_信息输入一个数据库_;同时对数据库中的大量资料要_分门别类整理_。这个过程很多文献管理软件可以完成,这里推荐开源的Zotero。
论文写作时排版软件一般不能直接引用数据库,而是引用某种_标准格式的导出文件_,最常用的格式是BiBLaTeX格式,扩展名为`.bib`@lehman2006。从Zotero中可以将数据库导出成该格式文件。`.bib`文件中包含有很多_条目_,本文`.bib`文件中的第一个条目内容如下:
```
@book{griffin2021,
title = {Type {{Specimens}}: {{A Visual History}} of {{Typesetting}} and {{Printing}}},
author = {<NAME>},
date = {2021},
publisher = {{Bloomsbury Publishing}},
isbn = {1-350-11661-0}
}
```
每个条目有一个*键值*,用于引用该条目。键值一般是一个短字符串,由Zotero自动生成,保证唯一性。常见的方式是_作者姓名加发表年份_。例如以上条目的键值是`griffin2021`,从后续字段中可以看出这是D<NAME>在2021年出版的一本书。
在Typst中排版论文时,用交叉引用相同的语法进行外部引用,只是_用键值代替标签_。示例如下,具体解释见@交叉引用节。
#示例(```[
斯坦福大学的Donald Knuth教授发明了TeX排版系统@knuth1986。
北京大学的王选院士在文献#cite(<WangXuan1998>, style: "ieee")中回顾了中文计算机排版的发展历程。
]```.text)
所有被引用过的文献,Typst会自动调整格式后,按顺序排列在文末#link(<参考文献>)[参考文献]一节。
== 手稿版本管理
学位论文写作是一个长期的过程,往往需要几个月甚至几年,在漫长的过程中_需要反复增删改_。这个过程中留下每次增删改的详细记录,随时追溯不同版本的变化非常重要。
最简单的方法是每天保存一个版本文件,按日期命名,但这种方法过于简陋。更好的方法是向程序员学习,使用版本管理软件。这类软件不少,但当前最流行的是Git,功能强大、支持广泛,建议采用@zolkifli2018。
= 排版过程分析 <流程分析章>
略
#show: 结论
= 结论
本文介绍了新锐结构化排版软件Typst,包括它的设计理念、基本功能、实际使用。本文中提供的案例,尤其是本文排版过程本身证明了Typst是排版学位论文的优秀工具。
= 致谢
感谢<NAME>教授奠定了结构化排版的理论基础,实现了早期的可用系统TeX;感谢Leslie Lamport博士在TeX基础上开发了LaTeX,增强了易用性,推动结构化排版在学术界的广泛应用;最后感谢<NAME>和<NAME>两位的出色工作,继承TeX/LaTeX的精神,融合现代软件工程技术,开发出Typst这一优秀的结构化排版系统,让本文成文可能。
#bibliography("/references/reference.bib", style: "/references/china-national-standard-gb-t-7714-2015-numeric.csl") <参考文献>
#show: 附录
= 其他说明
附录中标号有特殊要求,一般用“A、B、C”代替“1、2、3”。不过本模板会处理好这些细节,不用作者操心。例如:
#示例(```[
$ f(x)=integral sin(x)/cos(d) dif x $ <附录公式>
]```.text)
#示例(```[
#figure(
table(
columns: (1fr, 1fr),
[软件], [作者],
[Typst], [<NAME>和<NAME>],
[TeX], [<NAME>],
[LaTeX], [<NAME>],
[Word], [不详]
),
caption: [常用排版软件和作者对照表],
kind: table
) <附录表>
]```.text)
#示例(```[
#figure(
grid(
columns: (1fr, 1fr),
column-gutter: 4pt,
image("images/knuth.jpeg"),
image("images/martin-and-laurenz.jpeg"),
),
caption: [TeX的创造者<NAME>教授(左图);Typst的创造者<NAME>和<NAME>(右图)]
) <附录图>
]```.text)
= 英文翻译
= 公式推导 |
|
https://github.com/jgm/typst-hs | https://raw.githubusercontent.com/jgm/typst-hs/main/test/typ/compiler/ops-05.typ | typst | Other | // Error: 2-7 invalid binary number: 0b123
#0b123
|
https://github.com/soul667/typst | https://raw.githubusercontent.com/soul667/typst/main/PPT/MATLAB/touying/docs/docs/dynamic/equation.md | markdown | ---
sidebar_position: 3
---
# Math Equation Animations
Touying also provides a unique and highly useful feature—math equation animations, allowing you to conveniently use `pause` and `meanwhile` within math equations.
## Simple Animation
Let's start with an example:
```typst
#slide[
Touying equation with pause:
#touying-equation(`
f(x) &= pause x^2 + 2x + 1 \
&= pause (x + 1)^2 \
`)
#meanwhile
Touying equation is very simple.
]
```

We use the `touying-equation` function to incorporate `pause` and `meanwhile` within the text of math equations (in fact, you can also use `#pause` or `#pause;`).
As you would expect, the math equation is displayed step by step, making it suitable for presenters to demonstrate their math reasoning.
:::warning[Warning]
While the `touying-equation` function is convenient, you should always be aware that it doesn't perform complex syntax analysis. It simply splits the string using regular expressions. Therefore, you should not use `pause` or `meanwhile` within functions like `display(..)`!
:::
## Complex Animation
In fact, we can also use `only`, `uncover`, and `alternatives` within `touying-equation` with a little trick:
```typst
#slide(repeat: 3, self => [
#let (uncover, only, alternatives) = utils.methods(self)
#touying-equation(scope: (uncover: uncover), `
f(x) &= pause x^2 + 2x + uncover("3-", 1) \
&= pause (x + 1)^2 \
`)
])
```

We can pass the functions we need into the `touying-equation` through the `scope` parameter, such as `uncover` in this example.
## Parameters
The function definition of `touying-equation` is:
```typst
#let touying-equation(block: true, numbering: none, supplement: auto, scope: (:), body) = { .. }
```
Therefore, you can pass parameters like `block`, `numbering`, and `supplement` to `touying-equation` just like using normal math equations. |
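
For instance, a numbered display equation with pauses might be written like this (an illustrative sketch based on the signature above, not an excerpt from the official examples):

```typst
#touying-equation(numbering: "(1)", `
  f(x) &= pause x^2 + 2x + 1 \
  &= pause (x + 1)^2 \
`)
```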
|
https://github.com/joshuabeny1999/unisg-thesis-template-typst | https://raw.githubusercontent.com/joshuabeny1999/unisg-thesis-template-typst/main/layout/thesis_template.typ | typst | Other | #import "/layout/titlepage.typ": *
#import "/layout/disclaimer.typ": *
#import "/layout/directory_writing_aids.typ": directory_writing_aids as directory_writing_aids_layout
#import "/layout/abstract.typ": abstract as abstract_layout
#import "/utils/print_page_break.typ": *
#let buildHeader(headingContent) = {
[
#align(end, text(size: 11pt, weight: 400, headingContent))
#v(2mm)
]
}
#let getHeader() = {
locate(loc => {
// Find if there is a level 1 heading on the current page
let nextMainHeading = query(selector(heading).after(loc), loc).find(headIt => {
headIt.location().page() == loc.page() and headIt.level == 1
})
if (nextMainHeading != none) {
return buildHeader(nextMainHeading.body)
}
// Find the last previous level 1 heading -- at this point surely there's one
let lastMainHeading = query(selector(heading).before(loc), loc)
.filter(headIt => {
headIt.level == 1
})
.last()
return buildHeader(lastMainHeading.body)
})
}
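
// Top-level template: renders the title page, abstract, and outlines, sets up
// typography and numbering, then wraps the body with acknowledgement, appendix,
// bibliography, disclaimer, and the directory of writing aids.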
#let thesis(
title: "",
subtitle: "",
type: "",
professor: "",
author: "",
matriculationNumber: "",
submissionDate: datetime,
abstract: "",
language: "en",
acknowledgement: "",
directory_writing_aids: "",
appendix: "",
is_print: false,
body,
) = {
  assert(language in ("de", "en"), message: "The only supported languages are 'de' and 'en'.")
titlepage(
title: title,
subtitle: subtitle,
type: type,
professor: professor,
author: author,
matriculationNumber: matriculationNumber,
submissionDate: submissionDate,
)
print_page_break(print: is_print, to: "even")
if abstract != "" {
abstract_layout(lang: language)[#abstract]
}
set page(
margin: (left: 2.5cm, right: 2.5cm, top: 2.5cm, bottom: 2.5cm),
numbering: none,
header: getHeader(),
)
let body-font = "Times New Roman"
set text(
font: body-font,
size: 12pt,
lang: language,
)
show math.equation: set text(weight: 400)
set table(
stroke: (x, y) => (
y: if y == 1 {
2pt + gray
} else {
1pt + gray
},
x: 1pt + gray,
),
fill: (x, y) => if y == 0 {
rgb(55, 126, 57)
},
)
show table.cell: it => {
if it.y == 0 {
set text(white)
strong(it)
} else {
it
}
}
// --- Headings ---
show heading: set block(below: 0.85em, above: 1.75em)
show heading: set text(font: body-font)
set heading(numbering: "1.1")
// Reference first-level headings as "chapters"
let chapter = (en: "Chapter", de: "Kapitel")
show ref: it => {
let el = it.element
if el != none and el.func() == heading and el.level == 1 {
chapter.at(language) + " "
numbering(
el.numbering,
..counter(heading).at(el.location()),
)
} else {
it
}
}
// --- Paragraphs ---
set par(leading: 1em)
// --- Citations ---
set cite(style: "alphanumeric")
// --- Figures ---
show figure: set text(size: 0.85em)
// --- Table of Contents ---
let tocTitle = (en: "Table of Contents", de: "Inhaltsverzeichnis")
show outline.entry.where(level: 1): it => {
strong(text(size: 12pt, it))
}
show outline.entry.where(level: 2): it => {
text(size: 11pt, it)
}
show outline.entry.where(level: 3): it => {
text(size: 10.5pt, it)
}
show outline.entry.where(level: 4): it => {
text(size: 10pt, it)
}
outline(
title: tocTitle.at(language),
indent: 2em,
)
v(2.4fr)
pagebreak()
set page(
margin: (left: 2.5cm, right: 2.5cm, top: 2.5cm, bottom: 2.5cm),
numbering: "1",
number-align: right,
)
counter(page).update(1)
// List of figures.
let figureListTitle = (en: "List of Figures", de: "Abbildungsverzeichnis")
heading(numbering: none)[#figureListTitle.at(language)]
outline(
title: "",
target: figure.where(kind: image),
)
// List of tables.
print_page_break(print: is_print)
let tableListTitle = (en: "List of Tables", de: "Tabellenverzeichnis")
heading(numbering: none)[#tableListTitle.at(language)]
outline(
title: "",
target: figure.where(kind: table),
)
pagebreak()
// Main body.
set par(justify: true)
body
if acknowledgement != "" {
pagebreak()
let acknowledgementTitle = (en: "Acknowledgement", de: "Danksagung")
heading(numbering: none)[#acknowledgementTitle.at(language)]
acknowledgement
}
// Appendix.
if appendix != "" {
pagebreak()
let appendixTitle = (en: "Appendix", de: "Anhang")
heading(numbering: none)[#appendixTitle.at(language)]
appendix
}
pagebreak()
let bibliographyTitle = (en: "References", de: "Literaturverzeichnis")
bibliography("/thesis.bib", style: "apa", title: bibliographyTitle.at(language))
print_page_break(print: is_print)
disclaimer(
title: title,
author: author,
language: language,
submissionDate: submissionDate,
)
print_page_break(print: is_print)
directory_writing_aids_layout(language: language, directory_writing_aids)
} |
https://github.com/yhtq/Notes | https://raw.githubusercontent.com/yhtq/Notes/main/复变函数/作业/hw3.typ | typst | #import "../../template.typ": proof, note, corollary, lemma, theorem, definition, example, remark, proposition,der, partialDer, Spec, seqLimit, seqLimitn
#import "../../template.typ": *
// Take a look at the file `template.typ` in the file panel
// to customize this template and discover how it works.
#show: note.with(
title: "作业2",
author: "YHTQ",
date: none,
logo: none,
withOutlined : false,
withTitle :false,
)
#set heading(numbering: none)
(应交时间为3月22日)
= p33
== 6
=== (a)
$
R = seqLimitn norm(a^n/a^(n+1)) = norm(1/a)
$
=== (b)
$
R = seqLimitn norm(a^(n^2)/a^((n+1)^2)) = seqLimitn norm(1/a^(2 n + 1))
$
- 当 $norm(a) < 1$ 时上式给出 $+infinity$
- 当 $norm(a) = 1$ 时上式给出 $1$
- 当 $norm(a) > 1$ 时上式给出 $0$
=== (c)
似乎和 (a) 没有区别,也是 $norm(1/k)$
=== (d)
$
1/R = limsup_(n -> +infinity) norm(a_n)^(1/n) = seqLimitn norm(1)^(1/(n!)) = 1 => R = 1
$
== 7
首先计算:
$
1/R = limsup_(n -> +infinity) norm(a_n)^(1/n) = seqLimitn norm((-1)^n/n)^(1/(n(n+1))) \
= seqLimitn 1/n^(1/(n(n+1)))
$
另一方面:
$
seqLimitn n^(1/(n(n+1))) = seqLimitn e^(1/(n(n+1)) ln n) = e^0 = 1
$
因此 $R = 1$
其次:
- $z = 1$ 时序列为:
$
sum_(n=1)^infinity (-1)^n/n
$
由莱布尼茨判别法知收敛
- $z = -1$ 时,注意到 $n(n+1)$ 总是偶数,因此 $z^(n(n+1)) = 1$,结果与上面相同
- $z = i$ 时,考虑:
$
n(n+1) = cases(
0 mod 4 quad n = 0 mod 4,
2 mod 4 quad n = 1 mod 4,
2 mod 4 quad n = 2 mod 4,
0 mod 4 quad n = 3 mod 4
)\
i^(n(n+1)) = cases(
1 quad n = 0 mod 4,
-1 quad n = 1 mod 4,
-1 quad n = 2 mod 4,
1 quad n = 3 mod 4
)\
(-1)^n i^(n(n+1)) = cases(
1 quad n = 0 mod 4,
1 quad n = 1 mod 4,
-1 quad n = 2 mod 4,
-1 quad n = 3 mod 4
)
$
原序列为:
$
sum_(n = 1)^(infinity) (-1)^n i^(n(n+1)) 1/n
$
注意到这是实数序列,由狄利克雷判别法可知级数收敛
= p44 13
取 $g(z) = log z$(对数函数的主分支),则:
$
(f(z))^n = e^(log z)\
(f(z)/(e^(1/n log z)))^n = 1
$
注意到 $z^n = 1$ 只有 $n$ 个离散的解 $xi_k = e^(k/n 2 pi i)$,而 $f(z)/(e^(1/n log z))$ 是连通开集上的解析函数,其值域离散蕴含着各点处导数均为零,进而是常函数\
故一定有:
$
f(z) = e^(1/n log z + 2 pi i k/n), forall k = 0, 1, 2, ..., n - 1
$
这就是满足条件的所有函数 |
|
https://github.com/7sDream/fonts-and-layout-zhCN | https://raw.githubusercontent.com/7sDream/fonts-and-layout-zhCN/master/chapters/04-opentype/exploring/cmap.typ | typst | Other | #import "/template/template.typ": web-page-template
#import "/template/components.typ": note
#import "/template/lang.typ": tibetan
#import "/lib/glossary.typ": tr
#show: web-page-template
// ### The `cmap` table
=== `cmap` 表
// A font can contain whatever glyphs, in whatever encoding and order, it likes. If you want to start your font with a Tibetan letter ga (ག) as glyph ID 1, nothing is going to stop you. But for the font to work, you need to provide information about how users should access the glyphs they want to use - or in other words, how your glyph IDs map to the Unicode (or other character set) code points that their text consists of. The `cmap` table is that character map.
字体设计者可以按照自己的偏好来决定字体中要包含哪些#tr[glyph],使用什么#tr[encoding],用什么顺序排列。如果你想把藏文字母ga(#tibetan[ག])安排在#tr[glyph]ID 1 的位置,没有任何规则会阻止你。但为了让这个字体能正常工作,你还需要提供一些让用户知道在什么时候调用这个#tr[glyph]的信息。也就是调用方需要知道怎么把Unicode(或者其他#tr[encoding])中的#tr[character]映射到绘制的#tr[glyph]上。`cmap` 表的作用就是提供这个#tr[character]映射信息。
// If a user's text has the letter `A` (Unicode code point `0x41`), which glyph in the font should be used? Here's how it looks in our dummy font:
如果有一段包含`A`(Unicode#tr[codepoint]为`0x41`)的文本,那么哪个#tr[glyph]会被调用呢?我们测试字体中的`cmap`信息如下:
```xml
<cmap>
<tableVersion version="0"/>
<cmap_format_4 platformID="0" platEncID="3" language="0">
<map code="0x41" name="A"/><!-- LATIN CAPITAL LETTER A -->
<map code="0x42" name="B"/><!-- LATIN CAPITAL LETTER B -->
</cmap_format_4>
<cmap_format_6 platformID="1" platEncID="0" language="0">
<map code="0x41" name="A"/>
<map code="0x42" name="B"/>
</cmap_format_6>
<cmap_format_4 platformID="3" platEncID="1" language="0">
<map code="0x41" name="A"/><!-- LATIN CAPITAL LETTER A -->
<map code="0x42" name="B"/><!-- LATIN CAPITAL LETTER B -->
</cmap_format_4>
</cmap>
```
// The `ttx` software used to generate the textual dump of the font has been overly helpful in this case - it has taken the mapping of characters to glyph *ID*s, and has then replaced the IDs by names. The `cmap` table itself just contains glyph IDs.
在这里`ttx`为了生成便于阅读的文本格式而进行了额外工作。它将映射表中的#tr[glyph]ID转换为了对应的名称。原始的`cmap`表中储存的只是#tr[glyph]ID。
// Looking back at the `GlyphOrder` pseudo-table that `ttx` has generated for us:
通过 `ttx` 在生成的XML中附加的 `GlyphOrder` 表,我们可以知道原始的#tr[glyph]ID信息:
```xml
<GlyphOrder>
<!-- id 属性仅供人类阅读使用,当程序解析时将被忽略 -->
<GlyphID id="0" name=".notdef"/>
<GlyphID id="1" name="A"/>
<GlyphID id="2" name="B"/>
</GlyphOrder>
```
// We see that if the user wants Unicode codepoint `0x41`, we need to use glyph number 1 in the font. The shaping engine will use this information to turn code points in input documents into glyph IDs.
这下我们就知道,如果用户需要Unicode#tr[codepoint]`0x41`对应的字形,就会自动使用ID为1的#tr[glyph]。#tr[shaping]引擎就是这样将输入的字符串转换为#tr[glyph]ID列表。
|
https://github.com/darioglasl/Arbeiten-Vorlage-Typst | https://raw.githubusercontent.com/darioglasl/Arbeiten-Vorlage-Typst/main/Helpers/nfr-data.typ | typst | #let nfrScenarios = (
(
titel: "Konsistentes Design im Frontend",
anforderung: "Benutzbarkeit (Ästhetik)",
szenario: "Der Benutzer besucht die Website von Duck Incl. und navigiert durch die verschiedenen Seiten.",
stimulus: "Alle Seiten der Webanwendung haben ein gleiches Design (Theme), um dem Nutzer zu versichern, dass er immer noch dieselbe Anwendung benutzt.",
reaktion: "Das Design soll auf jeder Page gleich sein.",
massnahme: ("Verwenden des Themes.", "Layout mit MUI weiterentwickeln"),
level: "Muss",
status: "erfüllt",
begründung: "Das Theme wird verwendet und das Layout wird mit MUI weiterentwickelt.",
),
(
titel: "Konsistentes Design im Frontend",
anforderung: "Benutzbarkeit (Ästhetik)",
szenario: "Der Benutzer besucht die Website von Duck Incl. und navigiert durch die verschiedenen Seiten.",
stimulus: "Alle Seiten der Webanwendung haben ein gleiches Design (Theme), um dem Nutzer zu versichern, dass er immer noch dieselbe Anwendung benutzt.",
reaktion: "Das Design soll auf jeder Page gleich sein.",
massnahme: ("Verwenden des Themes.", "Layout mit MUI weiterentwickeln"),
level: "Muss",
status: "erfüllt",
begründung: "Das Theme wird verwendet und das Layout wird mit MUI weiterentwickelt.",
),
);
|
|
https://github.com/jgm/typst-hs | https://raw.githubusercontent.com/jgm/typst-hs/main/test/typ/compiler/show-text-06.typ | typst | Other | // Test accessing the string itself.
#show "hello": it => it.text.split("").map(upper).join("|")
Oh, hello there!
|
https://github.com/sbleblanc/typst-templates | https://raw.githubusercontent.com/sbleblanc/typst-templates/main/cover_letter/template.typ | typst | #import "@preview/fontawesome:0.2.0": *
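// Right-aligned block with the sender's address, phone, e-mail, and profile links.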
#let build_personal_infos_content(personal_infos) = {
set text(fill: luma(100), size: 1.0em)
align(right)[
*#personal_infos.first_name #personal_infos.last_name*\
#personal_infos.street_no #personal_infos.street\
#personal_infos.city\
#personal_infos.province, #personal_infos.country, #personal_infos.postal_code\
#fa-square-phone() #personal_infos.phone • #link("mailto:" + personal_infos.email, [#fa-square-envelope() #personal_infos.email])\
#link("https://www.linkedin.com/in/" + personal_infos.linkedin_profile, [#fa-linkedin() #personal_infos.linkedin_profile]) •
#link("https://github.com/" + personal_infos.github_profile, [#fa-square-github() #personal_infos.github_profile]) • #link("https://bitbucket.org/" + personal_infos.bitbucket_profile,[#fa-square-gitlab() #personal_infos.bitbucket_profile])
]
}
#let build_letter_header_content(company_infos, display_address) = {
let company_content = [
*#company_infos.name*\
#company_infos.street_no #company_infos.street\
#company_infos.city\
#company_infos.province, #company_infos.country, #company_infos.postal_code
]
grid(
columns: (1fr, 1fr),
align: (left + top, top + right),
if display_address {
company_content
} else [],
datetime.today().display("[month repr:long] [day], [year]")
)
}
#let build_closing_content(personal_infos, signature_img_path, closing, force_closing_bottom) = {
let closing_aligment = left
if force_closing_bottom {
closing_aligment = closing_aligment + bottom
}
align(
closing_aligment,
box(
width: 5cm,
stack(
dir: ttb,
spacing: 4pt,
closing,
image(signature_img_path, width: 80%),
rect(
width: 100%,
inset: (top: 5pt, rest: 0pt),
stroke: (top: 1pt + black)
)[
*#personal_infos.first_name #personal_infos.last_name*
]
)
)
)
}
#let cover_letter(
personal_infos,
company_infos,
signature_img_path,
opening,
closing,
body,
font: "Noto Sans",
force_closing_bottom: true,
display_address: true
) = {
// set page(margin: 2.4cm)
set text(font: font)
build_personal_infos_content(personal_infos)
block(
above: 1.5em,
build_letter_header_content(company_infos, display_address)
)
set par(justify: true)
block(
above: 2em,
below: 2em,
opening
)
body
build_closing_content(personal_infos, signature_img_path, closing, force_closing_bottom)
} |
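
// Example usage (illustrative sketch; all values below are placeholders, and
// the field names mirror the dictionary keys read by the functions above):
//
// #cover_letter(
//   (
//     first_name: "Ada", last_name: "Lovelace",
//     street_no: "1", street: "Main Street", city: "London",
//     province: "GL", country: "UK", postal_code: "AB1 2CD",
//     phone: "+44 20 0000 0000", email: "ada@example.com",
//     linkedin_profile: "ada", github_profile: "ada", bitbucket_profile: "ada",
//   ),
//   (
//     name: "Acme Corp.", street_no: "2", street: "High Street",
//     city: "London", province: "GL", country: "UK", postal_code: "EF3 4GH",
//   ),
//   "signature.png",
//   [Dear hiring team,],
//   [Kind regards,],
// )[
//   First paragraph of the letter...
// ]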
|
https://github.com/maucejo/presentation_touying | https://raw.githubusercontent.com/maucejo/presentation_touying/main/src/_slides.typ | typst | MIT License | #import "@preview/touying:0.5.3": *
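// Keep a handle on Typst's built-in `align` before the slide functions shadow
// the name with their `align` parameter (restored via `let align = _typst-builtin-align`).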
#let _typst-builtin-align = align
#let slide(
title: auto,
subtitle: none,
align: horizon,
config: (:),
repeat: auto,
setting: body => body,
composer: auto,
..bodies,
) = touying-slide-wrapper(self => {
if align != auto {
self.store.align = align
}
let align = _typst-builtin-align
set strong(delta: 0)
let header(self) = {
if self.store.navigation == "topbar" {
set align(top)
show: components.cell.with(fill: self.colors.primary, inset: 1em)
set align(horizon)
set text(fill: white, size: 1.25em)
strong(utils.display-current-heading(level: 1))
h(1fr)
text(size: 0.8em, strong(utils.display-current-heading(level: 2)))
} else if self.store.navigation == "mini-slides" {
show: components.cell.with(fill: gradient.linear(self.colors.background.darken(10%), self.colors.background, dir: ttb))
components.mini-slides(
self:self,
fill: self.colors.primary,
alpha: 60%,
display-section: self.store.mini-slides.at("display-section", default: false),
display-subsection: self.store.mini-slides.at("display-subsection", default: true),
short-heading: self.store.mini-slides.at("short-heading", default: true),
linebreaks: false
)
line(length: 100%, stroke: 0.5pt + self.colors.primary)
place(dx: 1em, dy: 0.65em, text(size: 1.2em, fill: self.colors.primary, weight: "bold", utils.display-current-heading(level: 2)))
}
}
let footer(self) = {
set align(bottom)
set text(size: 0.8em)
place(dx: 1em, dy: -1em, {
grid(
columns: (1fr, 4fr, 1fr),
align: center + horizon,
[ #set image(height: 1.75em)
#self.info.footer-logo
],
[
#v(0.25em)
#text(fill:self.colors.primary, strong(self.info.short-title))
],
[
#set text(fill:self.colors.primary, weight: "bold")
#if self.appendix {
self.store.app-count.step()
context{
pad(right: 2em, bottom: 1.5em, top: 0.25em,
box(stroke: 1.75pt + self.colors.primary, radius: 5pt, inset: -1em,outset: 1.5em)[
#align(horizon)[#text(fill: self.colors.primary, strong([A | #self.store.app-count.at(here()).first() / #self.store.app-count.final().first()]))]
])
}
} else {
context box(stroke: 1.75pt + self.colors.primary, radius: 5pt, inset: -0.5em,outset: 1em)[#utils.slide-counter.display() / #utils.last-slide-number]
}
]
)
})
if self.appendix {
let appendix-progress-bar = {
context{
let ratio = self.store.app-count.at(here()).first()/self.store.app-count.final().first()
grid(
columns: (ratio*100%, 1fr),
components.cell(fill: self.colors.primary),
components.cell(fill: self.colors.secondary.lighten(50%))
)
}
}
place(bottom, block(height: 2pt, width: 100%, spacing: 0pt, appendix-progress-bar))
} else {
place(bottom, components.progress-bar(height: 2pt, self.colors.primary, self.colors.secondary.lighten(40%)))
}
}
let self = utils.merge-dicts(
self,
config-page(
fill: self.colors.background,
header: header,
footer: footer,
),
)
let new-setting = body => {
show: align.with(self.store.align)
show: setting
if self.store.navigation == "topbar" {v(-1em)}
body
}
touying-slide(self: self, config: config, repeat: repeat, setting: new-setting, composer: composer, ..bodies)
}
)
#let title-slide = touying-slide-wrapper(self => {
set strong(delta: 0)
let content = {
set align(center + horizon)
if self.info.logo != none{
set image(height: self.info.title-logo-height)
if type(self.info.logo) == content {
place(top + right, dx: -2cm, dy: 0.25cm, self.info.logo)
} else {
let im-grid = {
grid(
columns: self.info.logo.len(),
column-gutter: 1fr,
align: center + horizon,
inset: 2cm,
..self.info.logo.map((logos) => logos)
)
}
place(top, dy: -1.75cm, im-grid)
}
}
block(width: 100%, inset: 2cm, {
line(length: 100%, stroke: 0.15em + self.colors.primary)
text(size: 1.75em, strong(self.info.title))
line(length: 100%, stroke: 0.15em + self.colors.primary)
if self.info.author != none {
v(0.5em)
set text(size: 1em)
block(spacing: 1em, strong(self.info.author))
}
if self.info.institution != none {
set text(size: 0.85em)
block(spacing: 1em, self.info.institution)
}
})
}
self = utils.merge-dicts(
self,
config-common(freeze-slide-counter: true),
config-page(
fill: self.colors.background,
margin: (top: 0em, bottom: 0em, x: 0em)
)
)
touying-slide(self: self, content)
}
)
#let content-slide = touying-slide-wrapper(self => {
let localization = json("resources/i18n/fr.json")
if self.store.lang == "en" {
localization = json("resources/i18n/en.json")
}
set strong(delta: 0)
let header = {
if self.store.navigation == "topbar" {
set align(top)
show: components.cell.with(fill: self.colors.primary, inset: 1em)
set align(horizon)
set text(fill: white, size: 1.25em)
strong(localization.toc)
} else if self.store.navigation == "mini-slides" {
set align(top)
show: components.cell.with(fill: gradient.linear(self.colors.background.darken(10%), self.colors.background, dir: ttb))
v(0.8em)
set align(horizon)
set text(fill: self.colors.primary, size: 1.25em)
h(0.75em) + strong(localization.toc)
v(-0.6em)
line(length: 100%, stroke: 0.5pt + self.colors.primary)
}
}
let self = utils.merge-dicts(
self,
config-common(freeze-slide-counter: true),
config-page(
fill: self.colors.background,
header: header,
),
)
let content = {
show outline.entry: it => {
let number = int(it.page.text) + 1
block(above: 2em, below: 0em)
[#text([#number.], fill: self.colors.primary) #h(0.25em) #it.body]
}
set align(horizon)
components.adaptive-columns(text(size: 1.2em, strong(outline(title:none, indent: 1em, depth: 1, fill: none))))
}
touying-slide(self: self, content)
})
#let new-section-slide(level: 1, numbered: true, title) = touying-slide-wrapper(self => {
let content = {
set strong(delta: 0)
self.store.sec-count.step()
set align(horizon)
show: pad.with(10%)
set text(size: 1.5em)
v(-0.7em)
let section-progress-bar = {
context{
let ratio = self.store.sec-count.at(here()).first()/self.store.sec-count.final().first()
grid(
columns: (ratio*100%, 1fr),
components.cell(fill: self.colors.primary),
components.cell(fill: self.colors.secondary.lighten(50%))
)
}
}
stack(
dir: ttb,
spacing: 0.5em,
[*#utils.display-current-heading(level: level, numbered: numbered)*],
block(
height: 2pt,
width: 100%,
spacing: 0pt,
section-progress-bar
),
)
}
self = utils.merge-dicts(
self,
config-common(freeze-slide-counter: true),
config-page(fill: self.colors.background)
)
touying-slide(self: self, content)
}
)
#let focus-slide(align: center + horizon, body) = touying-slide-wrapper(self => {
let _align = align
let align = _typst-builtin-align
self = utils.merge-dicts(
self,
config-common(freeze-slide-counter: true),
config-page(fill: self.colors.primary, margin: 2em),
)
set text(fill: white, size: 2em)
set strong(delta: 0)
touying-slide(self: self, align(_align, strong(body)))
}) |
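// Example usage (illustrative sketch — assumes these functions are re-exported
// through the package's theme so that `self.store` is initialized as above):
//
// #slide[
//   - first point
//   - second point
// ]
//
// #focus-slide[Questions?]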
https://github.com/1sSay/USPTU_conspects | https://raw.githubusercontent.com/1sSay/USPTU_conspects/main/src/philosophy/Antiquity.typ | typst | // Global settings and templates
#set text(14pt)
#let def(term, color: black) = {
box(stroke: color, inset: 7pt, text()[ #term ])
}
// Lecture header and date
#let lecture_header = text()[Античность]
#let date = text()[7.09.2024]
// Header
#align(center, heading(level: 1)[Философия. \ #lecture_header ])
#align(center, text(weight: "thin")[#date])
#align(center, text(weight: "thin")[Конспект Сайфуллина Искандара БПО09-01-24])
// Content
#table(
columns: (auto, auto, auto),
inset: 14pt,
align: horizon,
table.header([*Периоды*], [*Века*], [*Описание*]),
[Досократический \ Натуралистический], [VII - V до н. э.], [Проблемы природы],
[Сократический], [V - IV до н. э.], [Сократ - Платон - Аристотель],
[Эллинистический], [IV - II до н.э.], [Практико-ориентированная философия (Стоики, Скептики, Эпикурейцы, Киники)],
[Римский], [II до н.э. - V н.э.], [Сенека, Марк Аврелий]
)
#heading(level: 2)[Характерные черты античной философии:]
1. *Синкретизм* (Слитность с природой). Человек является частью природы.
2. *Космоцентризм* (Космос = природа = человек)
3. *Пантеизм* (Пан - всё, теос - Бог). Всё есть Бог.
4. *Логика античности* -- это логика общих понятий и имён.
5. *Этика античности* -- этика добродетели. Наивная этика (знание есть основа добродетельности. Сократ)
\ \ \ \ \ \ \ \
#heading(level: 1)[Досократический период:]
\
#table(
columns: (auto, auto),
inset: 14pt,
align: horizon,
table.header([*Школа*], [*Характерные черты*]),
[Милетская], [Ярко выраженный космоцентризм],
[Пифагорейцы], [Повышенное внимание к проблеме объяснения явлений природы],
[Гераклита Эфесского], [Поиск первоначала, породившего всё сущее],
[Элейская], [Гилозоизм. Одушевление неживой природы],
[Атомисты], [Детерминистский характер]
)
== Милетская школа
- *<NAME>* за первооснову берёт воду. Вода соотносится с божественным началом. Центр вселенной -- Земля, представляющая собой плоский диск, покоящийся на воде. \ "Всё живое из воды и в воду вернётся." <NAME>кий
- *Анаксимандр*. *Апейрон* -- вечная бесконечная субстанция, из которой всё состоит и в которую всё превратится
- *Анаксимен*. Первоначало всего сущего -- *воздух*
== <NAME>
- Основоположник -- *Пифагор*
- Первопричина всего сущего -- *число*
- *Единица* -- мельчайшая частица всего
- Диалектическое единство мира (честное/нечётное, мужское/женское, левое/правое)
- В философии Пифагора впервые встречается понятие *метемпсихоз* (перевоплощение)
- *Космос* -- порядок
- *Хаос* -- беспорядок
- *Число* образует космический порядок
\
== <NAME>
- *Огонь* есть первоначальная материальная причина мира
- Существуют периодические эпизоды мирового пожара, во время которого космос уничтожается, чтобы возродиться снова
- Всё есть *поток* (Теория потока)
- #quote(attribution: [<NAME>ский])[Нельзя войти в одну реку дважды]
- Тождество противоположностей
- *Логос* -- порядок, закономерность
*Демокрит:*
- *Демокрит* -- древнегреческий философ, ученик Левкиппа, один из основоположников атомизма и материалистической философии.
- Материальный мир состоит из атомов
- Атом -- мельчайшая частица, неделим, имеет различную величину и форму
- Между атомами существует пространство, заполненное *пустотой*
- Атомы находятся в постоянном вечном движении
- Существует круговорот атомов. Вещи, живые организмы существуют и распадаются, после чего из этих же атомов возникают новые организмы (бесконечность материи)
- Атомы невозможно увидеть и почувствовать
- <NAME> -- линия материализма
== Элейская школа
- Основоположник - *Парменид*
- *Помимо бытия нет ничего*. И мышление есть бытие, ибо нельзя мыслить ни о чём
- Бытие ничем и никем не порождено, иначе пришлось бы признать, что бытие произошло от небытия, а небытия не существует
- Бытие не подвержено гибели и порче, иначе бы оно превратилось бы в небытие, а небытия не существует
- У бытия нет прошлого и будущего. Бытие есть чистое настоящее
- Элеаты противопоставляют мысль восприятию, отводя главную роль мышлению
- Элеаты выделяют единое бытие, исключающее множественность и движение
- *Зенон Элейский*. Доказательство отсутствия движения
- *Стадии/Дихотомия*
- *Ахилл*
- *Стрела*
- *Движущиеся тела*
#heading(level: 1)[Классический период]
#heading(level: 2)[Софисты]
#box(stroke: black, inset: 7pt, text()[*Софи́зм* — формально кажущееся правильным, но ложное по существу умозаключение, основанное на преднамеренно неправильном подборе исходных положений])
- Примеры софизмов:
- Софизм о глазах
- Софизм о врагах
- "То, что ты не потерял, ты имеешь" ?
- Спор Протагора и Эватла.
- Старшие софисты: *Протагор*, *Горгий*, *Гиппий*
- Младшие софисты: *Фрасимах*, *Критий*
- Критий считал, что религия -- средство управления, и слыл безбожником
- Благодаря им Аристотель создал 3 закона логики
== Сократ
#box(stroke: black, inset: 7pt, text()[*Сократ* -- великий философ, благодаря которому философия обратилась к познанию человека \ Известные высказывания:
- #quote(attribution: [Сократ])[Я знаю, что ничего не знаю, а другие не знают и этого]
- #quote(attribution: [Сократ])[Самый лучший способ жить с честью в этом мире -- это быть тем, кем ты притворяешься, что являешься]])
#box(stroke: black, inset: 7pt, text()[*Майевтика* -- метод живого диалога, в котором рождается истина. Сократ сравнивал этот метод с повивальным искусством, так как его мать была повитухой])
\ \ \
=== Ученики Сократа:
- Платон
- Ксенофон
== Платон
#box(stroke: black, inset: 7pt, text()[*Платон* -- философ-идеалист, основатель Платоновской академии. Ученик Сократа, считал, что первооснова всего -- идеальное начало])
#box(stroke: black, inset: 7pt, text()[- *Линия идеализма = Линия Платона*
- *Линия материализма = Линия Демокрита*])
- Считал, что мир делится на две части:
- *Мир идей*
- Реально существуют чистые *идеи (эйдосы)*
- Весь мир -- это отображение чистых идей
- *Мир вещей*
- Платон считал, что мир вещей вторичен по отношению к миру идей.
- Материальные вещи изменчивы, непостоянны, со временем прекращают своё существование
- Учение о *Триаде*:
- Согласно Платону всё сущее состоит из трёх субстанций
- *Единое* -- основа всего бытия, не имеет никаких признаков, выше всякого мышления существующего бытия
- *Ум* -- происходит от Единого, разделён от него и противоположен ему
- *Душа* -- подвижная субстанция, которая объединяет Единое -- ничто и Ум -- всё живое. Душа есть часть Мировой Души, она бессмертна
- Платон основал свою академию, которая просуществовала почти тысячу лет и была закрыта императором Юстинианом в VI веке.
- Пещера Платона -- знаменитая аллегория, иллюстрирующая учение об идеях: мир вещей -- лишь тени мира идей.
|
https://github.com/gomazarashi/typst_showybox | https://raw.githubusercontent.com/gomazarashi/typst_showybox/main/example.typ | typst | #set text(lang: "ja") // 言語を日本語に設定
#set text(font: ("New Computer Modern", "Harano Aji Gothic"), size: 10pt) // フォントを設定
#show figure.where(kind: table): set figure.caption(position: top) // 表におけるキャプションを上部に表示するよう設定
#show heading: set text(font: ("Harano Aji Gothic"), weight: 500)
#set heading(numbering: "1.1.")
#import "@preview/codelst:2.0.1": sourcecode
#import "@preview/showybox:2.0.1": showybox
#show raw: set text(font: ("DejaVu Sans Mono", "Harano Aji Gothic"))
= 基本的な記述
#showybox()[これがshowyboxパッケージの\ 基本的な記述です。]
= パラメーター
== title(タイトル)
#showybox(title: "これはタイトルです")[これは本文です]
== footer(フッター)
#showybox(title: "これはタイトルです", footer: "これはフッターです")[これは本文です]
== frame
=== title-color, body-color, footer-color, border-color(背景色)
#showybox(title: "Green's Theorem", frame: (
border-color: olive,
title-color: olive.lighten(10%),
body-color: olive.lighten(95%),
footer-color: olive.lighten(80%),
), footer: "証明は省略する。")[
閉曲線$C$で囲まれた領域$D$において、$C^1$級関数$P(x,y)$と$Q(x,y)$に対して、以下が成り立つ。
$ integral.cont_C (P dif x + Q dif y ) = integral.double_D ((diff Q)/(diff x)-(diff P)/(diff y)) dif x dif y $
]
=== radius(角丸)
#showybox(title: "これはタイトルです", frame: (radius: 10pt), footer: "これはフッターです")[これは本文です]
=== thickness(線の太さ)
#showybox(title: "これはタイトルです", frame: (thickness: 2pt), footer: "これはフッターです")[これは本文です]
=== dash(破線)
#showybox(title: "これはタイトルです", frame: (dash: "dashed"), footer: "これはフッターです")[これは本文です]
#pagebreak()
=== inset, title-inset, body-inset, footer-inset(余白)
#showybox(title: "これはタイトルです", frame: (inset: 20pt), footer: "これはフッターです")[これは本文です]
== Title Style
#showybox(
title-style: (weight: 800, color: teal.darken(40%), sep-thickness: 0pt, align: center),
frame: (title-color: teal.lighten(80%), border-color: teal.darken(40%), thickness: (left: 2pt), radius: 0pt),
title: "自己情報量",
)[
事象$E$が起こる確率を$P(E)$とするとき、事象$E$の自己情報量$I(E)$は次のように定義される。
$ I(E) = log 1/(P(E)) = -log P(E) $
]
== Boxed Title
#showybox(
title-style: (boxed-style: (anchor: (x: center, y: horizon), radius: (top-left: 10pt, bottom-right: 10pt, rest: 0pt))),
frame: (
title-color: green.darken(40%),
body-color: green.lighten(80%),
footer-color: green.lighten(60%),
border-color: green.darken(60%),
radius: (top-left: 10pt, bottom-right: 10pt, rest: 0pt),
),
title: "ラプラス変換",
)[
実数$t gt.eq 0$について定義された関数$f(t)$のラプラス変換とは
$ F(s)=integral_0^oo f(t)e^(-s t) dif t $
で定義される$s$の関数$F(s)$のことである。
]
== Footer Style
#showybox(footer-style: (sep-thickness: 0pt, align: right, color: black), title: "シグモイド関数", footer: [
シグモイド関数はニューラルネットワークにおける活性化関数として広く用いられる。
])[
比較的単純な非線形関数であるシグモイド関数は、以下のように定義される。
$ phi(x) =sigma.alt_1(x) =1/(1+e^(-x)) =(tanh(x/2)+1)/2 $
]
== Shadow properties
#showybox(
shadow: (color: aqua.lighten(55%), offset: 3pt),
frame: (title-color: blue.darken(30%), border-color: blue.darken(30%), body-color: aqua.lighten(80%)),
title: "ガウスの発散定理",
title-style: (weight: 600),
)[
ガウスの発散定理は次のように表される。
$ integral_S bold(A) dot bold(n) dif S = integral_V nabla dot bold(A) dif V $
]
#pagebreak()
= Encapsulation
#showybox(title: "Parent container", lorem(10), columns(2)[
#showybox(title-style: (boxed-style: (:)), title: "Child 1", lorem(10))
#colbreak()
#showybox(title-style: (boxed-style: (:)), title: "Child 2", lorem(10))
])
|
|
https://github.com/xhalo32/math-camp | https://raw.githubusercontent.com/xhalo32/math-camp/main/map/map.typ | typst | MIT License | // This is a Typst document which creates a customizable treasure hunt map. See README.md for instructions on how to compile using Typst etc.
// Import the CeTZ drawing utilities
#import "@preview/cetz:0.2.2"
// Import a library for manipulating bitmaps
#import "@preview/grayness:0.1.0": *
// Set up custom colors. See also <https://typst.app/docs/reference/visualize/color/>
#let darkgreen = green.darken(50%)
#let lavender = purple.lighten(50%)
#let brown = orange.darken(50%)
#let violet = purple.darken(25%)
// A small circle filled with `color` and a black outline
#let circle(color) = {box(
cetz.canvas({
import cetz.draw: *
circle((0, 0), radius: 0.2, fill: color)
})
)
}
// Headings use Fira Sans font
#show heading: set text(font: "Fira Sans")
// Use `image.png` as the background image of the page. Notice that the image is moved up by 8.2em to match the grid. Adjust if needed.
#let data = read("image.png", encoding: none)
#page(margin: 1.5cm, background: move(dy: -8.2em, crop-image(data, 1080, 1080, width: 82%, height: 58%)))[
// Font size (also affects the map grid)
#set text(size: 16pt)
#align(center)[
= Math Camp Adventure Map
]
// The grid and points. Moved left 2pt to match the image. Adjust if needed.
#move(dx: -2pt, align(center, cetz.canvas({
import cetz.draw: *
// Grid size
let size = 17
grid((0, 0), (size, size))
// Tickmarks
for i in range(size + 1) {
content((i, -0.5), box(fill: white, inset: 1pt, [#i]))
content((-0.5, i), box(fill: white, inset: 1pt, [#i]))
}
// Axis labels
content((0.3, size + 0.5), $y$)
content((size + 0.5, 0.3), $x$)
// Axis arrows
set-style(mark: (end: "straight"))
line((0, size), (0, size+0.5))
line((size, 0), (size+0.5, 0))
// Real points
circle((5.5, 3), radius: 0.2, fill: lavender)
circle((8, 6.5), radius: 0.2, fill: orange)
circle((9.5, 8.1), radius: 0.2, fill: blue)
circle((6, 6), radius: 0.2, fill: red)
circle((5, 8), radius: 0.2, fill: yellow)
circle((9.5, 6.5), radius: 0.2, fill: brown)
circle((7.5, 9.2), radius: 0.2, fill: violet)
// Fake points
circle((11.5, 6.5), radius: 0.2, fill: green)
circle((12, 3.4), radius: 0.2, fill: aqua)
circle((7, 0.5), radius: 0.2, fill: white)
circle((3.5, 3.7), radius: 0.2, fill: darkgreen)
})))
#align(center)[
// Grid with groups of (title, checkpoints). Enable stroke to debug how the grid works.
// Reference: <https://typst.app/docs/reference/layout/grid/>
#grid(columns: (auto,auto,auto), gutter: 2em, /* stroke: 1pt+gray */)[
=== First floor
#circle(yellow) $sigma$ #h(0.5em)
#circle(violet) $nabla$ #h(0.5em)
#circle(orange) $Gamma$
][
=== Second floor
#circle(red) $A$ #h(0.5em)
#circle(blue) $M$
][
=== Outdoors
#circle(brown) $C$ #h(0.5em)
#circle(green) $epsilon$ #h(0.5em)
#circle(darkgreen) $pi$ #h(0.5em)
#circle(lavender) $xi$ #h(0.5em)
#circle(aqua) $theta.alt$ #h(0.5em)
#circle(white) $eta$
]
]
#align(center)[
=== Mark the checkpoints you have visited
// Checkpoint boxes from 1 to 7 (one per real checkpoint; range(1, 8) yields 1..7)
#table(columns: (1.5em,)*4,
stroke: 0pt,
..range(1, 8).map(n => (align(right)[#n.],table.cell(stroke: 1pt)[])).flatten()
)
]
]
// Next page
#set text(size: 16pt)
== Checkpoints
+ #circle(lavender) $(5.5, 3)$ (outdoors) Aleksis: food-themed system of equations
+ #circle(orange) $(8, 6.5)$ (1st floor) David's exercise
+ #circle(blue) $(9.5, 8.1)$ (2nd floor) Niklas: Equal sets
+ #circle(red) $(6, 6)$ (2nd floor) Akseli: circumference exercise
+ #circle(yellow) $(5, 8)$ (1st floor) Akseli: sum exercise
+ #circle(brown) $(9.5, 6.5)$ (outdoors) Emilia's exercise
+ #circle(violet) $(7.5, 9.2)$ (1st floor) Emilia's other exercise
Fake points: #circle(green) $epsilon$, #circle(darkgreen) $pi$, #circle(white) $eta$, #circle(aqua) $theta.alt$ |
https://github.com/oldrev/tids | https://raw.githubusercontent.com/oldrev/tids/master/README.md | markdown | Apache License 2.0 | # tids: A TI-Style Datasheet Template for Typst
English | [简体中文](README.zh_cn.md)
This project is an easy-to-use Typst datasheet template for electronic components,
created to test and showcase the potential of using Typst for technical documentation writing.

**If this project is helpful to you, please consider leaving a star⭐, you know the drill.**
## Disclaimer
This is an open-source project created solely for demonstration purposes, with no intention of infringing on any trademarks. The author is not affiliated with TI in any way.
## Features
- **Simple and User-friendly:** Uses Typst format for easy readability and writing.
- **Customizable:** Can be customized for specific component specifications.
- **Fast Compilation**: It only takes one or two seconds to generate the PDF, as opposed to LaTeX, which may take several minutes or even longer.
## Getting Started
0. Install Typst if you don't have it:
```powershell
winget install --id Typst.Typst
```
1. Clone this repository locally:
```bash
git clone https://github.com/oldrev/tids.git
```
2. Build the PDF example:
```bash
typst compile demo-ds.typ
```
3. Check out the generated [`demo-ds.pdf`](demo-ds.pdf).
## Usage
1. Copy the template file `tids.typ` to the directory of your project.
2. Import the template and call the `tids()` function:
```typst
#import "tids.typ": tids
#show: doc => tids(ds_metadata: (
title: [YourDSTitle],
product: [YourProductName],
product_url: "https://github.com/oldrev/tids",
revision: [CurrentRevision],
publish_date: [PublishedOn]
),
features: [features for the title page],
applications: [application information for the title page],
desc: [description content for the title page],
rev_list: [revision list],
doc: doc
)
// ... The content of your document
```
See [`demo-ds.typ`](demo-ds.typ) for details.
## Demo Videos:
- Youtube: TODO
## Contributions
Feel free to contribute and raise issues. Please see the Contribution Guidelines for more information.
## License
This project is licensed under the Apache 2.0 License.
|
https://github.com/sky-y/pandoc-online-20240818 | https://raw.githubusercontent.com/sky-y/pandoc-online-20240818/master/typst/example-pandoc/example-pandoc.typ | typst | // Some definitions presupposed by pandoc's typst output.
#let horizontalrule = [
#line(start: (25%,0%), end: (75%,0%))
]
#let endnote(num, contents) = [
#stack(dir: ltr, spacing: 3pt, super[#num], contents)
]
#show terms: it => {
it.children
.map(child => [
#strong[#child.term]
#block(inset: (left: 1.5em, top: -0.4em))[#child.description]
])
.join()
}
#set table(
inset: 6pt,
stroke: none
)
#let content-to-string(content) = {
if content.has("text") {
content.text
} else if content.has("children") {
content.children.map(content-to-string).join("")
} else if content.has("body") {
content-to-string(content.body)
} else if content == [ ] {
" "
}
}
#let conf(
title: none,
subtitle: none,
authors: (),
keywords: (),
date: none,
abstract: none,
cols: 1,
margin: (x: 1.25in, y: 1.25in),
paper: "us-letter",
lang: "en",
region: "US",
font: (),
fontsize: 11pt,
sectionnumbering: none,
doc,
) = {
set document(
title: title,
author: authors.map(author => content-to-string(author.name)),
keywords: keywords,
)
set page(
paper: paper,
margin: margin,
numbering: "1",
)
set par(justify: true)
set text(lang: lang,
region: region,
font: font,
size: fontsize)
set heading(numbering: sectionnumbering)
if title != none {
align(center)[#block(inset: 2em)[
#text(weight: "bold", size: 1.5em)[#title]
#(if subtitle != none {
parbreak()
text(weight: "bold", size: 1.25em)[#subtitle]
})
]]
}
if authors != none and authors != [] {
let count = authors.len()
let ncols = calc.min(count, 3)
grid(
columns: (1fr,) * ncols,
row-gutter: 1.5em,
..authors.map(author =>
align(center)[
#author.name \
#author.affiliation \
#author.email
]
)
)
}
if date != none {
align(center)[#block(inset: 1em)[
#date
]]
}
if abstract != none {
block(inset: 2em)[
#text(weight: "semibold")[Abstract] #h(1em) #abstract
]
}
if cols == 1 {
doc
} else {
columns(cols, doc)
}
}
#show: doc => conf(
title: [夏に食べたいアイスについて],
authors: (
( name: [<NAME>],
affiliation: "",
email: "" ),
),
date: [2024年8月19日],
lang: "ja",
paper: "a4",
font: ("<NAME>",),
cols: 1,
doc,
)
= 序論:夏とアイスの関係
<序論夏とアイスの関係>
夏は、気温の上昇により冷たい食べ物が特に求められる季節である。中でもアイスは、その冷たさと甘さによって、身体を涼しく保ちながら、爽快感と癒しを提供する特別な存在である。アイスが夏に人気を博する背景には、歴史的に暑さをしのぐための冷却技術の発展と、嗜好品としての多様なアイスが普及したことが挙げられる。本論文では、夏に食べたいアイスの種類や文化、そしてその魅力について考察する。
= 第1章:アイスの種類と特性
<第1章アイスの種類と特性>
== ソフトクリーム、ジェラート、かき氷などの多様な選択肢
<ソフトクリームジェラートかき氷などの多様な選択肢>
== 各アイスの製造方法と風味の特徴
<各アイスの製造方法と風味の特徴>
= 第2章:日本における夏のアイス文化
<第2章日本における夏のアイス文化>
== 地域ごとの人気アイスとその文化的背景
<地域ごとの人気アイスとその文化的背景>
== 夏祭りやイベントにおけるアイスの重要性
<夏祭りやイベントにおけるアイスの重要性>
= 第3章:アイスがもたらす心理的・生理的効果
<第3章アイスがもたらす心理的生理的効果>
== 冷たい食品が与える身体的リフレッシュ効果
<冷たい食品が与える身体的リフレッシュ効果>
== 食べる楽しみがもたらす心の安らぎ
<食べる楽しみがもたらす心の安らぎ>
= 第4章:健康的なアイスの選び方
<第4章健康的なアイスの選び方>
== 低カロリーや無添加アイスの人気
<低カロリーや無添加アイスの人気>
== 健康志向の消費者が注目する成分や栄養価
<健康志向の消費者が注目する成分や栄養価>
= 結論:夏に最適なアイスの未来
<結論夏に最適なアイスの未来>
== アイス産業の未来と消費者トレンド
<アイス産業の未来と消費者トレンド>
== 夏に食べたいアイスの選び方の提案
<夏に食べたいアイスの選び方の提案>
|
|
https://github.com/floriandejonckheere/utu-thesis | https://raw.githubusercontent.com/floriandejonckheere/utu-thesis/master/thesis/chapters/07-proposed-solution/02-design.typ | typst | #import "/helpers.typ": *
== Design
We start by identifying the functional and non-functional requirements for the solution.
Then, we propose a four-step approach to decomposition adapted from the microservice candidate identification pipeline by #cite_full(<lopes_silva_2023>).
- *Extraction*: the necessary information is extracted from the codebase of the application
- *Decomposition*: using the collected data, a decomposition of the application into microservices is proposed
- *Visualization*: the proposed decomposition is visualized to facilitate the understanding of the architecture
- *Evaluation*: the proposed decomposition is evaluated according to a set of quality metrics
The extraction step consists of two smaller steps: static analysis and evolutionary analysis.
From the extracted information, a dependency graph is visualized.
The decomposition step is based on a graph clustering algorithm, which is used to identify the microservice candidates.
Finally, the proposed decomposition is evaluated using a set of quality metrics.
#pagebreak()
An overview of the architecture of the proposed solution is shown in @architecture.
#figure(
include("/figures/07-proposed-solution/architecture.typ"),
caption: [MOSAIK architecture overview]
) <architecture>
The next sections detail each of these steps, providing a comprehensive overview of the proposed solution.
The process we describe is generic and not tied to any specific programming language or paradigm.
We implemented a prototype in the Ruby programming language#footnote[#link("https://ruby-lang.org")[https://ruby-lang.org]].
The source code of the implementation is available on Github#footnote[#link("https://github.com/floriandejonckheere/mosaik")[https://github.com/floriandejonckheere/mosaik]].
|
|
https://github.com/fenjalien/metro | https://raw.githubusercontent.com/fenjalien/metro/main/tests/num/print-zero-exponent/test.typ | typst | Apache License 2.0 | #import "/src/lib.typ": num, metro-setup
#set page(width: auto, height: auto)
#num(444, e: 0)
#num(444, e: 0, print-zero-exponent: true) |
https://github.com/typst/packages | https://raw.githubusercontent.com/typst/packages/main/packages/preview/unichar/0.1.0/ucd/block-11400.typ | typst | Apache License 2.0 | #let data = (
("NEWA LETTER A", "Lo", 0),
("NEWA LETTER AA", "Lo", 0),
("NEWA LETTER I", "Lo", 0),
("NEWA LETTER II", "Lo", 0),
("NEWA LETTER U", "Lo", 0),
("NEWA LETTER UU", "Lo", 0),
("NEWA LETTER VOCALIC R", "Lo", 0),
("NEWA LETTER VOCALIC RR", "Lo", 0),
("NEWA LETTER VOCALIC L", "Lo", 0),
("NEWA LETTER VOCALIC LL", "Lo", 0),
("NEWA LETTER E", "Lo", 0),
("NEWA LETTER AI", "Lo", 0),
("NEWA LETTER O", "Lo", 0),
("NEWA LETTER AU", "Lo", 0),
("NEWA LETTER KA", "Lo", 0),
("NEWA LETTER KHA", "Lo", 0),
("NEWA LETTER GA", "Lo", 0),
("NEWA LETTER GHA", "Lo", 0),
("NEWA LETTER NGA", "Lo", 0),
("NEWA LETTER NGHA", "Lo", 0),
("NEWA LETTER CA", "Lo", 0),
("NEWA LETTER CHA", "Lo", 0),
("NEWA LETTER JA", "Lo", 0),
("NEWA LETTER JHA", "Lo", 0),
("NEWA LETTER NYA", "Lo", 0),
("NEWA LETTER NYHA", "Lo", 0),
("NEWA LETTER TTA", "Lo", 0),
("NEWA LETTER TTHA", "Lo", 0),
("NEWA LETTER DDA", "Lo", 0),
("NEWA LETTER DDHA", "Lo", 0),
("NEWA LETTER NNA", "Lo", 0),
("NEWA LETTER TA", "Lo", 0),
("NEWA LETTER THA", "Lo", 0),
("NEWA LETTER DA", "Lo", 0),
("NEWA LETTER DHA", "Lo", 0),
("NEWA LETTER NA", "Lo", 0),
("NEWA LETTER NHA", "Lo", 0),
("NEWA LETTER PA", "Lo", 0),
("NEWA LETTER PHA", "Lo", 0),
("NEWA LETTER BA", "Lo", 0),
("NEWA LETTER BHA", "Lo", 0),
("NEWA LETTER MA", "Lo", 0),
("NEWA LETTER MHA", "Lo", 0),
("NEWA LETTER YA", "Lo", 0),
("NEWA LETTER RA", "Lo", 0),
("NEWA LETTER RHA", "Lo", 0),
("NEWA LETTER LA", "Lo", 0),
("NEWA LETTER LHA", "Lo", 0),
("NEWA LETTER WA", "Lo", 0),
("NEWA LETTER SHA", "Lo", 0),
("NEWA LETTER SSA", "Lo", 0),
("NEWA LETTER SA", "Lo", 0),
("NEWA LETTER HA", "Lo", 0),
("NEWA VOWEL SIGN AA", "Mc", 0),
("NEWA VOWEL SIGN I", "Mc", 0),
("NEWA VOWEL SIGN II", "Mc", 0),
("NEWA VOWEL SIGN U", "Mn", 0),
("NEWA VOWEL SIGN UU", "Mn", 0),
("NEWA VOWEL SIGN VOCALIC R", "Mn", 0),
("NEWA VOWEL SIGN VOCALIC RR", "Mn", 0),
("NEWA VOWEL SIGN VOCALIC L", "Mn", 0),
("NEWA VOWEL SIGN VOCALIC LL", "Mn", 0),
("NEWA VOWEL SIGN E", "Mn", 0),
("NEWA VOWEL SIGN AI", "Mn", 0),
("NEWA VOWEL SIGN O", "Mc", 0),
("NEWA VOWEL SIGN AU", "Mc", 0),
("NEWA SIGN VIRAMA", "Mn", 9),
("NEWA SIGN CANDRABINDU", "Mn", 0),
("NEWA SIGN ANUSVARA", "Mn", 0),
("NEWA SIGN VISARGA", "Mc", 0),
("NEWA SIGN NUKTA", "Mn", 7),
("NEWA SIGN AVAGRAHA", "Lo", 0),
("NEWA SIGN FINAL ANUSVARA", "Lo", 0),
("NEWA OM", "Lo", 0),
("NEWA SIDDHI", "Lo", 0),
("NEWA DANDA", "Po", 0),
("NEWA DOUBLE DANDA", "Po", 0),
("NEWA COMMA", "Po", 0),
("NEWA GAP FILLER", "Po", 0),
("NEWA ABBREVIATION SIGN", "Po", 0),
("NEWA DIGIT ZERO", "Nd", 0),
("NEWA DIGIT ONE", "Nd", 0),
("NEWA DIGIT TWO", "Nd", 0),
("NEWA DIGIT THREE", "Nd", 0),
("NEWA DIGIT FOUR", "Nd", 0),
("NEWA DIGIT FIVE", "Nd", 0),
("NEWA DIGIT SIX", "Nd", 0),
("NEWA DIGIT SEVEN", "Nd", 0),
("NEWA DIGIT EIGHT", "Nd", 0),
("NEWA DIGIT NINE", "Nd", 0),
("NEWA DOUBLE COMMA", "Po", 0),
("NEWA PLACEHOLDER MARK", "Po", 0),
(),
("NEWA INSERTION SIGN", "Po", 0),
("NEWA SANDHI MARK", "Mn", 230),
("NEWA LETTER VEDIC ANUSVARA", "Lo", 0),
("NEWA SIGN JIHVAMULIYA", "Lo", 0),
("NEWA SIGN UPADHMANIYA", "Lo", 0),
)
|
https://github.com/typst/packages | https://raw.githubusercontent.com/typst/packages/main/packages/preview/hydra/0.3.0/src/util/core.typ | typst | Apache License 2.0 | #import "@preview/oxifmt:0.2.0": strfmt as fmt
/// Substitute `value` for the return value of `default()` if it is a sentinel value.
///
/// - value (any): The value to check.
/// - default (function): The function to produce the default value with.
/// - check (any): The sentinel value to check for.
/// -> any
#let or-default(value, default, check: none) = if value == check { default() } else { value }
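// Example (illustrative): `or-default(none, () => 1)` evaluates to `1`,
// while `or-default(2, () => 1)` evaluates to `2`.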
|
https://github.com/jgm/typst-hs | https://raw.githubusercontent.com/jgm/typst-hs/main/test/typ/visualize/shape-circle-01.typ | typst | Other | // Test auto sizing.
#set circle(inset: 0pt)
Auto-sized circle.
#circle(fill: rgb("eb5278"), stroke: 2pt + black,
align(center + horizon)[But, soft!]
)
Center-aligned rect in auto-sized circle.
#circle(fill: red, stroke: green,
align(center + horizon,
rect(fill: green, inset: 5pt)[But, soft!]
)
)
Rect in auto-sized circle.
#circle(fill: red,
rect(fill: green, stroke: white, inset: 4pt)[
#set text(8pt)
But, soft! what light through yonder window breaks?
]
)
Expanded by height.
#circle(stroke: black, align(center)[A \ B \ C])
|
https://github.com/Myriad-Dreamin/tinymist | https://raw.githubusercontent.com/Myriad-Dreamin/tinymist/main/crates/tinymist-query/src/fixtures/match_def/ident_in_init2.typ | typst | Apache License 2.0 | #let f(a) = {
/* ident after */ a
}; |
https://github.com/claudiomattera/typst-modern-cv | https://raw.githubusercontent.com/claudiomattera/typst-modern-cv/master/README.md | markdown | MIT License | Typst Modern CV
====
A modern Curriculum Vitæ (CV) template with timelines
<https://git.claudiomattera.it/claudiomattera/typst-modern-cv>
This is a template for modern and good-looking CVs.
It was inspired by the LaTeX packages [moderncv] and [moderntimeline].
![Example CV in the lighten style](./docs/example-lighten.png)
[moderncv]: https://www.ctan.org/pkg/moderncv
[moderntimeline]: https://www.ctan.org/pkg/moderntimeline
Examples
----
There are two example documents in the directory [`docs`](./docs): [`docs/example-underline.typ`](./docs/example-underline.typ) and [`docs/example-lighten.typ`](./docs/example-lighten.typ).
They can be compiled by running the commands
~~~~shell
typst compile --root . ./docs/example-underline.typ
typst compile --root . ./docs/example-lighten.typ
~~~~
The two example documents just set up the page layout and the theme, and then include the same file [`docs/example.typ`](./docs/example.typ).
All entries in the CV are defined in this file.
Usage
----
First, import the package
~~~~typst
// Import all symbols
#import "@local/modern-cv:0.3.0": *
// Or only import selected symbols
#import "@local/modern-cv:0.3.0": conf, update_theme, draw_education, draw_experience, draw_publication
~~~~
Then setup page layout and document metadata.
~~~~typst
#set document(
title: "<NAME> - Curriculum Vitæ",
author: "<NAME>",
)
#set page(
paper: "a4",
margin: (x: 1.5cm, y: 1.5cm),
)
~~~~
Then configure the theme.
~~~~typst
#update_theme(
color: rgb("#377eb8"),
base_date: datetime(year: 2013, month: 1, day: 1),
current_date: datetime(year: 2024, month: 1, day: 1),
)
~~~~
Then setup the document.
~~~~typst
#show: doc => conf(
fullname: "<NAME>",
address: "Southern Pole, Antarctica",
phone: "+672 123 456 789",
email: "<EMAIL>",
website: "johndoe.aq",
github: "johndoe",
orcid: "0000-0000-0000-0000",
doc,
)
~~~~
Finally, typeset the rest of the document using the provided functions
~~~~typst
#draw_education(
start: datetime(year: 2015, month: 8, day: 1),
end: datetime(year: 2019, month: 4, day: 1),
title: "Ph. D. in Software Engineering",
institution: "University of Antarctica",
department: "Ice Caps Office",
city: "Southern Pole",
country: "Antarctica",
url: "https://www.icecap.aq/",
)[
My topic was designing and implementing software solutions for estimating thickness in ice caps by analysing their transparency.
]
~~~~
~~~~typst
#draw_experience(
start: datetime(year: 2021, month: 2, day: 1),
finished: false,
position: "Developer",
company: "Southern Pole Express",
city: "Southern Pole",
country: "Antarctica",
url: "https://www.southern-pole.aq/",
)[
I am in charge of the team leading excavation and extraction of samples from ice caps.
I use simulations to locate the areas where ice caps are thinner, in order to minimize the effort of excavation for reaching the bottom layers.
]
~~~~
~~~~typst
#draw_publication(
date: datetime(year: 2018, month: 1, day: 1),
title: "A Method for Analysing Transparency in Ice Caps by Shining Light into Them and Squinting your Eyes",
doi: "10.0999/aq.123457",
)
~~~~
The initial and final years will be shown as labels over the timeline.
Custom labels can also be specified with the arguments `label_start`, `label_end` or `label_date`.
Configuration
----
The theme can be configured and customized by calling the function `update_theme` and passing any of the following fields as named arguments.
* `color (color)`: The theme base color (default: `blue`).
* `width (length)`: The width of right bars and timelines (default: `2cm`).
* `thickness (length)`: The thickness of right bars and timelines (default: `1.5mm`).
* `radius (length)`: The radius of rounded corners of timelines (default: `1pt`).
* `style (str)`: The theme style (default: `"underline"`).
* `base_date (datetime)`: The earliest date in all entries (default: `datetime(year: 2005, month: 1, day: 1)`).
* `current_date (datetime)`: The latest date in all entries (default: `datetime.today()`).
All fields are optional; any field that is not specified will not be changed.
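For example, to switch to the `lighten` style with a custom accent color (the values are illustrative):

~~~~typst
#update_theme(
  color: rgb("#e41a1c"),
  style: "lighten",
)
~~~~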
### Styles
This template supports two styles: `underline` (default) and `lighten`.
* Style `underline` shows timelines as coloured intervals above gray underlines, such as the original LaTeX package `moderntimeline`.

* Style `lighten` shows timelines as coloured intervals over full, lightened intervals.

Fonts
----
This package uses [Font Awesome] for some icons.
[Download][Font Awesome Download] the Free font pack for Desktop and place the `.otf` files in one of the directories where Typst looks for fonts (or add any directory to the environment variable `TYPST_FONT_PATHS`).
[Font Awesome]: https://fontawesome.com/
[Font Awesome Download]: https://fontawesome.com/download
Changes
----
See the [Changelog](./CHANGELOG.md) for a list of changes.
Development
----
See the [Contributing Guide](./CONTRIBUTING.md) for more information about development.
License
----
Copyright <NAME> 2023-2024
You are free to copy, modify, and distribute this application with attribution under the terms of the [MIT license]. See the [`LICENSE`](./LICENSE) file for details.
[MIT license]: https://opensource.org/license/mit/
|
https://github.com/polarkac/MTG-Stories | https://raw.githubusercontent.com/polarkac/MTG-Stories/master/stories/003%20-%20Gatecrash/008_Experiment%20One.typ | typst | #import "@local/mtgstory:0.2.0": conf
#show: doc => conf(
"Experiment One",
set_name: "Gatecrash",
story_date: datetime(day: 18, month: 02, year: 2013),
author: "<NAME>",
doc
)
The entrance to the lab was an unassuming hatch set in a dank, moss-covered wall. Liana had to double back twice to find it, and when she knocked on the door it swung open with a creak.
"Hello?" she said.
She double-checked the address, took a deep breath of musty air, and stepped inside.
As her eyes adjusted she realized it wasn't completely dark, but dimly lit by green bioluminescent globes hung from the ceiling. Her footsteps echoed off of cold stone.
From the depths of the lab came an aimless humming.
"Hello?" she said again. "My name's Liana. I'm to apprentice to <NAME>. Is he here?"
The humming stopped.
"Liana," said a raspy voice from another room. "That's a lovely name."
She frowned, but said, "Thank you."
"Ozbolt, on the other hand," said the voice. "That one's a bit odd. Almost unpleasant, don't you think?"
A slight and disheveled man, baseline human from the looks of him, stepped into the room, wiping his hands on a spongy-looking rag.
"Call me Florin," he said, smiling from beneath an impressive pair of eyebrows, and all semblance of menace evaporated. "My first and altogether more welcoming name."
Florin blinked and looked around. "Ghosts and gods, I'm sorry. It's dreadful in here." He touched a mossy patch of wall, and the globes hanging from the ceiling brightened and whitened until the light they cast seemed almost like sunlight.
#figure(image("008_Experiment One/01.jpg", width: 100%), caption: [], supplement: none, numbering: none)
"A pleasure to meet you, <NAME>," she said.
"Master," he snorted. "Oh, if you insist."
He was an older man, perhaps the age her father would have been, with thinning hair and a stubbly chin. For a master Simic biomancer, aging naturally could be called eccentric, although it didn't seem to extend to anything really extravagant like a stoop or a limp.
"I'm guessing you don't take many apprentices," said Liana.
"Hardly any," he said. "I can't say I'm in much demand, and the speakers don't think much of me, either." He sniffed. "Presumably, I'm your punishment for something."
"Not at all," she said. "I told them I was more interested in compatible philosophy than in any particular field of research, and <NAME> put your name forward."
"Philosophy," he said, eyes twinkling. "Now that's another matter entirely. Tell me about yours."
"The power we wield over life is staggering," she said. "And we have a responsibility to create more with it than biological curios. We can make things, like your lighting system here, that can improve people's lives. We can give them medical treatments unlike any others. We can make life in this city better, for everyone."
"Ahh," he said. "Dangerous ideas. Much simpler to toss a few animals together and see what comes out. Much safer. That gets you appointed, gets you publicity. Gets you research grants."
#figure(image("008_Experiment One/02.jpg", width: 100%), caption: [], supplement: none, numbering: none)
He grinned.
"But if you're not here for those things—if you care more about your philosophy than you do about your career—then maybe, just maybe, you can make a real difference in the world."
"In that case," she said. "I think I'm exactly where I should be."
#v(0.35em)
#line(length: 100%, stroke: rgb(90%, 90%, 90%))
#v(0.35em)
The rat's front left leg was missing below the elbow, but that didn't seem to slow it down any. At last, Liana's gloved hand closed around it, and she lifted it from the pen, ignoring its squeals.
"Subject 23 is ready," she said. Subject 23 wiggled its whiskers and chittered at her.
"Proceed," said Florin.
Liana dipped a swab into the vial of ooze in front of her and carefully swabbed it on the stump of the rat's missing limb. She held the animal out so Florin could wrap a bandage around its ooze-covered appendage, then she set it in a new solitary enclosure.
It wasn't hard work, but it was a bit nerve-wracking. She hadn't asked <NAME> where he'd found so many injured rats, but she suspected it had something to do with the Izzet lab tech who stopped by every week or two.
#figure(image("008_Experiment One/03.jpg", width: 100%), caption: [], supplement: none, numbering: none)
"That's the last of this batch," she said, peeling off her gloves.
"Excellent!" said Florin. "I think we're making progress with this."
Liana nodded and stripped off the gloves. "This" was an innovative limb-replacement therapy. She'd heard of efforts to graft on new limbs, some of them successful, but this was different. They were using a mimetic ooze to read and ultimately recreate the pattern of the missing limb. It struck her as dangerously close to the forbidden use of cytoplast to transfer genetic material, but <NAME> had assured her that the ooze itself wasn't contributing or transferring anything, and that left them clear to proceed.
So far, mostly they'd just gotten rats with goo on them, and a few had died of complications. One of them had grown a wing; they'd marked that batch for further testing, although it smacked of contamination more than anything else.
<NAME> stripped off his own gloves, washed his hands in a basin, and beckoned for Liana to follow him into the other room. She knew what that meant.
A few days into her apprenticeship, Liana had pointed out his habit of engaging in philosophical debate right after a bout of practical experimentation.
"Of course," he'd replied. "The problem with philosophy is that it's easy to get too abstract. Always get your hands dirty first, to remind you that somebody has to turn your ideas into action."
"Why do we do what we do?" he asked her now.
She took a breath before answering. She'd learned early on that an ill-considered answer could send the conversation down an unproductive path or, worse, result in homework. A willfully ignorant answer, on the other hand, could do wonders to clarify the question.
"The Fathom Edict states quite clearly—"
"Fah," said Florin with a dismissive wave. "I already know the prime speaker's answer to the question. I want to hear yours."
Personal philosophy, then. Altogether more interesting.
"I'd say Simic, as a guild, operates out of a desire to understand and protect natural life," she said. "You and I, more than most, are conscious of the fact that natural life includes sentient life."
"Very much so," he said. "But what, exactly, does it mean to ethically protect sentient life? The sharks and the crocodiles have no plans of their own; the same cannot be said of people."
Liana frowned. "True. Then... I suppose the best we can do is try to lift their burdens, give them the chance to live better lives. We can't tinker with people the way we do with animals."
"We can't tinker with #emph[gt;other] people," said Florin.
He took a deep breath, always a sign that he was about to launch off on a story.
"I knew an Izzet chemister once. A very smart woman. She had a dozen brilliant ideas, any one of which she could have spent a lifetime developing. Naturally, she couldn't pick just one, so instead she built... well, she called it a 'neural chronaccelerator.' Typical.
"She spent years building it, and when she finally got it running, she insisted on trying it out herself. It wasn't because of a sudden attack of ethics; Izzet test their devices on hapless goblins all the time. It was because the thoughts she was most interested in accelerating were her own, and she couldn't wait to get started.
#figure(image("008_Experiment One/04.jpg", width: 100%), caption: [], supplement: none, numbering: none)
"A few hours later, she was dead. Brain fried clear through. But in that time, she took notes, copious notes, on her accelerated thoughts. They found schematics for revolutionary power systems, treatises on experimental theory, and blueprints for devices whose purpose they're still trying to figure out. In one afternoon, she'd done lifetimes' worth of scholarly work.
"My question to you is this: did she do the right thing?"
The idea of throwing away her whole life all at once made Liana cringe. But the benefits...
"I wouldn't fault anybody who didn't want to do it," said Liana. "But yes, I think she did the right thing."
"Good," said Florin. "I think so too. But I would imagine her success didn't look quite the way she'd imagined it. That's the real lesson of her story: When you seek to improve the world around you, you start by improving yourself. And when you improve yourself, you may change in ways your prior self would find surprising. Even disturbing."
He leaned forward in his chair, and in that moment she saw something alien and terrifying in his eyes.
"Are you prepared for that?"
"I... I think so," said Liana.
"Good!" he said, and the moment passed, and he was once again a harmless, eccentric old man. "That's enough for today, I think. Maybe you can spend the afternoon with your friend Jovan."
"What... what makes you think I'll see him?"
<NAME> rolled his eyes.
"Biomantic powers," he said, waggling his fingers. "That, and the way you've been talking about him lately."
Liana blushed. "Is it that obvious?"
The Master just rolled his eyes again and shooed her out of the lab.
#figure(image("008_Experiment One/05.jpg", width: 100%), caption: [], supplement: none, numbering: none)
The next day, Subject 23's ooze-graft had started to grow. Within a week, the rat was scampering around on four paws—three furry, one gelatinous. When Liana showed <NAME>, he smiled bigger and brighter than she'd ever seen.
"In that case," he said, "I'd say we're finally ready to begin."
#v(0.35em)
#line(length: 100%, stroke: rgb(90%, 90%, 90%))
#v(0.35em)
In fact, his cryptic announcement was somewhat premature. It took several more days of testing and tweaking before <NAME> was content to proceed with the secret project he called "Experiment One." And even then, he told Liana to take a few days off while he got the lab properly set up.
She returned to the lab on a gray and rainy morning, her cloak just barely keeping out the wet and the chill.
Inside, it was eerily like her first day at the lab: dark and dank, with no one in sight. She hung her dripping cloak by the door.
"<NAME>?" she called.
There were lights on in the specimen room. The lab had looked this way the day she'd arrived, but now she couldn't shake the feeling that it meant something was very wrong. She walked toward the lights.
The specimen room looked much the way it had when she left: rows of cages, tables with equipment, and several ooze vats growing the organ-replacement mixture.
Then the ooze in one of the vats... moved.
The floor was slick, and she planted her feet carefully as she walked over to the vat. She peered in over the edge, ready to push herself back if it moved again.
There were shapes and colors in the ooze, impurities that shouldn't be there. A reddish cloud, a dark spindle...
Ribs. Human ribs.
Then the ooze lurched upward, sickeningly fast. She flung herself back along the scum-slick floor as a dark shape rose out of the vat.
"Florin!" she yelled. "Are you here?"
Then the shape opened its eyes, and she understood. <NAME> was here, or had been.
The ooze had taken his shape, just as it had taken the shape of the rats' missing limbs. It had recreated the dome of his head, gelatinous, hairless, and two translucent arms hefted his bulk above a tangled mass of ooze. Through the surface of the thing's skin she could see bones and a dissolving web of organs. But the face... the face was unmistakably his, and the eyes were as bright as ever.
#figure(image("008_Experiment One/06.jpg", width: 100%), caption: [], supplement: none, numbering: none)
"Hello, Liana," said the thing that had been <NAME>. Its voice still held a rasp.
She crawled backward, found dryer ground, and scrambled to her feet beside the door.
"What have you done?"
"What I have always done," said the ooze-thing. "I have bettered myself."
"Better? How is this better?"
He laughed, a familiar sound made horrible in the mouth of a mound of ooze.
"I can think better now," he said. "My glands are gone. I can scrape sustenance off the floor as I move. Think of it! No hunger, no adrenaline, no lust, no fear."
The thing was sliding slowly forward, its eyes locked on her, tendrils of ooze writhing beneath it.
"Already I can see how my experiments were flawed. I was trying to replace organs after they were lost. Now I see that the real problem is the frailty and folly of our natural organs, including the brain. Especially the brain."
"This is sick," she said. "You need help. We'll talk to the council. They can heal you."
"I am healed!" he cried. "I told you, my dear. To improve the world, improve yourself. And when you improve yourself, the changes may surprise you..."
"Even disturb you," she said. She shuddered.
"You wanted to make life better," he said. "I never questioned your dedication. Come, my dear. Into the vat. Remake yourself, and we will remake this broken world."
He lurched forward, reaching out for her.
She turned and stumbled through the darkened lab, past her still-damp cloak, and out the door, down the street, heedless of the rain.
She did not dare look back.
|
|
https://github.com/Lucascf11/PRE | https://raw.githubusercontent.com/Lucascf11/PRE/main/Avaliacoes/AV6/main.typ | typst | #import "@preview/klaro-ifsc-sj:0.1.0": report
#set text(lang: "pt")
#show: doc => report(
title: "Avaliação 6 de Processos Estocásticos",
subtitle: "Processos de Poisson",
// Se apenas um autor colocar , no final para indicar que é um array
authors: ("<NAME>",),
date: "25 de Agosto de 2024",
doc,
)
#set text(18pt)
= Enunciado
#set text(14pt)
#figure(
image("figures/Enunciado6.png", width: 130%),
caption: "Enunciado da Avaliação",
supplement: [Figura],
)<Enunciado>
#pagebreak()
#set text(18pt)
= Desenvolvimento
#set text(14pt)
== Determinando e esboçando a função média
Para descobrirmos a função média do processo de Poisson X(t), vamos precisar recorrer à fórmula da função média do processo de Poisson em geral:
$ mu_X(t) = lambda t[t > 0] $
Como nesse caso X(t) é uma soma entre X1(t) e X2(t), teremos que:
$ mu_X(t) = (lambda_1 + lambda_2)t[t>0] $<Soma_de_Poissons>
=== Determinando a função média
Aplicando a fórmula descrita na @Soma_de_Poissons, teremos que
$ mu_X(t) = (2,5 + 2)t[t>0] = 4,5t[t> 0] $
=== Esboçando a função média
Se esboçarmos uma possível função média obtida anteriormente, perceberemos que é uma reta com $mu_X$ em função de t valendo 4,5t para t > 0:
#figure(
image("figures/Esbocada_Regiao.jpeg", width: 100%),
caption: "Gráfico da média em função do tempo",
supplement: [Figura],
)
#pagebreak()
== Determinando a probabilidade condicional
Aqui precisamos calcular a probabilidade $Pr[X_(10,13) >=15 | X_(6,9) = 1]$.
\
Primeiramente, começamos analisando a definição de um processo de Poisson que dita que $X_("t1,t2")$ e $X_("t1',t2'")$ são independentes, quaisquer que sejam os intervalos $("t1,t2"]$ e $("t1',t2'"]$ disjuntos.
\
Ou seja, o que ocorreu no intervalo $[6 <= t <= 9]$ em nada irá alterar o que ocorrerá no intervalo $[10 <= t <= 13]$.
\
Matematicamente expressando a constatação acima, temos que:
$ Pr[X_(10,13) >= 15| X_(6,9) = 1] = Pr[X_(10,13) >= 15] $
E agora, para calcularmos a equação acima, basta recorrermos novamente à definição, onde temos que
$ X_("t1,t2") dash.wave "Poisson"(lambda("t2" - "t1")) $ Para qualquer que seja o intervalo (t1,t2].
\
Descobrindo o argumento dentro da distribuição de Poisson, nós teremos a média com a qual o intervalo se distribui. Isso nos auxiliará a descobrir a PMF com a seguinte equação:
$ p_X(x) = e^(-mu) (mu^x)/x!, " com" x = 0,1,2..., $
E, conhecendo a PMF, para obter a probabilidade acima bastará subtrair de 1 a probabilidade acumulada até $x = 14$, fazendo o seguinte cálculo:
$ Pr[X_(10,13) >= 15] = 1 - Pr[X_(10,13) <= 14] = 1 - sum_(x = 0)^(14) p_X(x) $
Com tudo isso bem detalhado, podemos finalmente partir para os cálculos e encontrar a probabilidade solicitada pelo enunciado:
$ X_(10,13) dash.wave "Poisson"(4,5(13-10))\
X_(10,13) dash.wave "Poisson"(13,5) $
\
$ Pr[X_(10,13) <= 14] = e^(-13,5) times sum_(x = 0)^(14)((13,5^x)/x!)\
Pr[X_(10,13) <= 14] = 0,6233
$
\
$ Pr[X_(10,13) >= 15] = 1 - 0,6233\
Pr[X_(10,13) >= 15] = 0,3767 = 37,67%
$
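A título de verificação (esboço ilustrativo, que não faz parte da solução original), a soma acumulada acima pode ser conferida com as funções embutidas do Typst:

```typ
// Verificação numérica da soma acumulada até x = 14 (valores ilustrativos):
#let cdf14 = calc.exp(-13.5) * range(15).map(x => calc.pow(13.5, x) / calc.fact(x)).sum()
// cdf14 ≈ 0,6233, logo Pr[X >= 15] = 1 - cdf14 ≈ 0,3767
```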
#pagebreak()
== Determinando a probabilidade da variação de tempo entre eventos
Aqui precisaremos determinar $Pr[Delta_n > 0,1s]$ e nesse caso, podemos recorrer à seguinte fórmula:
$ Pr[T > t] = e^(-lambda t) $
Onde t é o tempo avaliado, que nesse caso, é de 0,1 segundos e portanto teremos que:
$ Pr[T>0,1] = e^(-(4,5 times 0,1)) = e^(-0,45)\
Pr[T > 0,1] = 0,6376 = 63,76%
$
#pagebreak()
== Determinando a matriz covariância
Para começarmos nossa análise, verificamos aqui que a matriz covariância envolve duas variáveis aleatórias, então isso irá gerar uma matriz covariância quadrada de duas dimensões, ou seja, uma matriz 2x2, cuja composição, para este enunciado, é:
$ mat("cov"[X(3),X(3)], "cov"[X(3),X(4)];
"cov"[X(3),X(4)], "cov"[X(4)X(4)]) $
E aplicando conhecimentos de Poisson, avaliamos que a função autocovariância desse tipo de variável aleatória é dada por:
$ C_X("t1,t2") = lambda"min"{"t1,t2"}["t1,t2">0] $
Aplicando esses conhecimentos, nós teremos que a matriz covariância do enunciado poderá ser calculada do seguinte modo:
$ "cov"[X(3),X(3)] = 4,5 times "min"{3,3} = 4,5 times 3 = 13,5\
"cov"[X(3),X(4)] = 4,5 times "min"{3,4} = 4,5 times 3 = 13,5\
"cov"[X(4),X(3)] = 4,5 times "min"{4,3} = 4,5 times 3 = 13,5\
"cov"[X(4),X(4)] = 4,5 times "min"{4,4} = 4,5 times 4 = 18
$
Com os cálculos elemento a elemento da matriz covariância realizados acima, finalmente teremos a matriz covariância solicitada pelo enunciado:
$ mat("13,5","13,5";
"13,5", 18) $ |
|
https://github.com/rijuyuezhu/latex-typst-template | https://raw.githubusercontent.com/rijuyuezhu/latex-typst-template/main/art-typst-eng/main.typ | typst | // #import "@local/ssrn-scribe:0.4.8": paper
// #import "@local/ssrn-scribe:0.4.8": *
#import "template.typ": paper
#import "template.typ": *
#show: doc => paper(
font: "Georgia", // "Times New Roman"
fontsize: 12pt, // 12pt
maketitle: true, // whether to add new page for title
title: [#lorem(5)], // title
subtitle: "A work in progress", // subtitle
authors: (
(
name: "<NAME>",
),
(
name: "<NAME>",
affiliation: "University of Nowhere",
email: "<EMAIL>",
note: "456",
)
),
// date: "July 2023",
abstract: lorem(80), // replace lorem(80) with [ Your abstract here. ]
keywords: [
Imputation,
Multiple Imputation,
Bayesian,],
// JEL: [G11, G12],
acknowledgments: "This paper is a work in progress. Please do not cite without permission.",
// bibliography: bibliography("bib.bib", title: "References", style: "apa"),
doc
)
#set par(leading: 0.8em)
// your main text goes here
= Introduction <intro>
@intro
check
#lorem(300)
== Motivation
#lorem(140)
= Data
// @abarbanell1998abnormal
// @abarbanell1998abnormal11
// #cite(<abarbanell1998abnormal11>,<barbanell1998abnormal>)
#lorem(100)
= Conclusion
#lorem(100)
= Theorem
#definition[
A natural number is called a #highlight[_prime number_] if it is greater
than 1 and cannot be written as the product of two smaller natural numbers.
]
#example[
The numbers $2$, $3$, and $17$ are prime.
@cor_largest_prime shows that this list is not exhaustive!
]
#theorem("Euclid")[
There are infinitely many primes.
]
#proof[
Suppose to the contrary that $p_1, p_2, dots, p_n$ is a finite enumeration
of all primes. Set $P = p_1 p_2 dots p_n$. Since $P + 1$ is not in our list,
it cannot be prime. Thus, some prime factor $p_j$ divides $P + 1$. Since
$p_j$ also divides $P$, it must divide the difference $(P + 1) - P = 1$, a
contradiction.
]
#corollary[
There is no largest prime number.
] <cor_largest_prime>
#corollary[
There are infinitely many composite numbers.
]
= Test
we are the best in the world.
#lemma[
The number $1$ is not prime.
]
this is a test.
Equation without number:
#eq(
$
x
$
)
---
#mitex(`
\begin{align*}
\frac{1}{2} + \frac{1}{3} &= \frac{5}{6} \\
\frac{1}{2} + \frac{1}{3} &= \frac{5}{6} \\
\end{align*}
`)
#tablex(
columns: 4,
align: center + horizon,
auto-vlines: false,
// indicate the first two rows are the header
// (in case we need to eventually
// enable repeating the header across pages)
header-rows: 2,
// color the last column's cells
// based on the written number
map-cells: cell => {
if cell.x == 3 and cell.y > 1 {
cell.content = {
let value = int(cell.content.text)
let text-color = if value < 10 {
red.lighten(30%)
} else if value < 15 {
yellow.darken(13%)
} else {
green
}
set text(text-color)
strong(cell.content)
}
}
cell
},
/* --- header --- */
rowspanx(2)[*Username*], colspanx(2)[*Data*], (), rowspanx(2)[*Score*],
(), [*Location*], [*Height*], (),
/* -------------- */
[John], [Second St.], [180 cm], [5],
[Wally], [Third Av.], [160 cm], [10],
[Jason], [Some St.], [150 cm], [15],
[Robert], [123 Av.], [190 cm], [20],
[Other], [Unknown St.], [170 cm], [25],
)
#tablem[
| *Name* | *Location* | *Height* | *Score* |
| ------ | ---------- | -------- | ------- |
| John | Second St. | 180 cm | 5 |
| Wally | Third Av. | 160 cm | 10 |
]
#let three-line-table = tablem.with(
render: (columns: auto, ..args) => {
tablex(
columns: columns,
auto-lines: false,
align: center + horizon,
hlinex(y: 0),
hlinex(y: 1),
..args,
hlinex(),
)
}
)
#three-line-table[
| *Name* | *Location* | *Height* | *Score* |
| ------ | ---------- | -------- | ------- |
| John | Second St. | 180 cm | 5 |
| Wally | Third Av. | 160 cm | 10 |
]
#cetz.canvas({
let data = ([\*], ([A A], [A.A], [A.B], [A.c]), ([B], [B.A]))
cetz.tree.tree(data,direction: "right")}) |
|
https://github.com/Walfisch115/thb-typst-template | https://raw.githubusercontent.com/Walfisch115/thb-typst-template/main/poster/conf.typ | typst | #let conf(
title: none,
author: none,
date: none,
supervision: none,
doc
) = {
// GENERAL SETTINGS
set text(lang: "de")
set text(font: "Linux Biolinum")
set par(justify: true)
// THB Assets
let thbBlue = rgb(0, 164, 193, 100%)
let thbLogo = image("THB_Logo.png", height: 9cm)
// Footer
let footer = {
text(size: 12pt)[#supervision]
}
// Page Settings
set page(
paper: "a2",
margin: (
left: 5cm,
right: 3cm,
top: 10cm,
bottom: 5cm
),
footer: footer,
footer-descent: 2.5cm,
background: place(dx: 2cm, dy: 7cm)[
#rect(fill: thbBlue, height: 9cm, width: 100%)
],
foreground: align(left + top)[#thbLogo]
)
// increase padding after figure
show figure: it => {
v(2em, weak: true)
it
v(2em, weak: true)
}
// align caption of figure left
show figure.caption: it => {
align(left)[#it]
}
// HEADING
{
set text(fill: white, weight: "bold")
text(size: 32pt)[#title]
par()[
#text(size: 18pt)[
#author
#linebreak()
      Bachelor's Thesis • Computer Science Program • Department of Informatics and Media • #date
]]
}
// padding between heading and content
v(5em)
// CONTENT BODY
{
set text(size: 16pt)
// change text size for headings
show heading.where(level: 1): set text(size: 18pt)
// increase spacing between headings
show heading: it => {
v(1.5em, weak: true)
it
v(0.6em, weak: true)
}
// separate page into two columns
columns(2, gutter: 2cm)[#doc]
}
} |
|
https://github.com/gabrielluizep/typst-ifsc | https://raw.githubusercontent.com/gabrielluizep/typst-ifsc/main/template/main.typ | typst | Creative Commons Zero v1.0 Universal | #import "article.typ": *
#import "assignment.typ": *
#import "exam.typ": *
#import "ifsclean.typ": *
#import "ifscyan.typ": * |
https://github.com/thuvasooriya/thuvasooriya | https://raw.githubusercontent.com/thuvasooriya/thuvasooriya/main/cv.typ | typst | #import "style.typ": *
#show: resume.with(
author: (
firstname: "Thuvaragan",
lastname: "Sooriyakumaran",
email: "<EMAIL>",
// phone: "(+94) 77-605-0926",
github: "thuvasooriya",
twitter: "thuvasooriya",
// scholar: "",
// birth: "January 1, 1990",
linkedin: "thuvasooriya",
// address: "111 Example St. Example City, EX 11111",
positions: (
"Engineering Undergrad",
"Designer",
"Developer",
),
),
date: datetime.today().display(),
language: "en",
colored-headers: true,
)
From planet Earth; loves integrating art, science, and technology to bring a smile to others' faces. Moderate experience in systems programming, robotics, digital design, algorithms, and performance optimization. Past experience in web development and graphics manipulation.
Currently obsessed with modular robotics, RISC-V IP design, and open-source toolchains.
= Projects
#resume-entry(
title: "ViT Accelerator IP for a custom RISC-V core in FPGA",
location: github-link("thuvasooriya/vit-malware-detector"),
date: "May 2024 - Sep 2024",
description: "DVCON 24 - India",
// title-link: "https://biosense-ai.github.io/"
)
#resume-item[
- Research and Implementation project for DVCON24 to design a novel ViT accelerator IP for a custom RISC-V core
- Selected among the top 20 teams from >100 teams in Stage 1
]
#resume-entry(
title: "Robot manipulator with 4 DOF",
// location: github-link("team-itro/dum-e"),
date: "Sep 2024 - Present",
description: "Semester 5 group project | Currently in Prototyping Phase",
// title-link: "https://biosense-ai.github.io/"
)
// #resume-item[
// - Research and Implementation project for DVCON24 to design a novel ViT accelerator IP for a custom RISC-V core
// - Selected in the 20 teams from >100 teams in Stage 1, Stage 2a work ongoing...
// ]
#resume-entry(
title: "RISC-V RV32I pipeline-processor research and implementation",
// location: github-link("gatemasters/grisc32"),
date: "Sep 2024 - Present",
description: "Semester 5 group project | Currently in Literary Review Phase",
// title-link: "https://biosense-ai.github.io/"
)
// #resume-item[
// - Research and Implementation project for DVCON24 to design a novel ViT accelerator IP for a custom RISC-V core
// - Selected in the 20 teams from >100 teams in Stage 1, Stage 2a work ongoing...
// ]
#resume-entry(
title: "Image segmentation research and evaluation for bin-picking",
location: github-link("mora-bprs/sam-model"),
date: "March 2024 - June 2024",
description: "Semester 4 group project",
title-link: "https://mora-bprs.github.io/"
)
#resume-item[
- Extensive literature review and reproduction of image segmentation and CNN model papers in Python
- Development of simple gripper mechanism and associated PCB for testing
- Extensive testing, benchmarking and documentation
]
#resume-entry(
title: "Smart plug product research and development",
location: [#github-link("thuvasooriya/pluggu")],
date: "Mar 2023 - May 2023",
description: "Semester 2 group project, Short course group project",
)
#resume-item[
- Smart plug with remote control through LAN and WAN, developed around the ESP32-S3 microcontroller
- Another smart plug design and implementation was followed as part of a short course
]
#resume-entry(
title: "STM32 two-wheeled line follower robot with simple manipulator",
location: github-link("team-itro/sem3slrc"),
date: "Aug 2023 - Dec 2023",
description: "Semester 3 group project",
)
#resume-item[
- Prototyped, designed, and implemented a robot capable of following white and colored lines
- Custom arm mechanism was designed and integrated to achieve required object manipulation
// - Extensive testing of micro controllers : rp2040, stm32f411, stm32f405
]
#resume-entry(
title: "Portable doctor companion device with SBC",
location: github-link("biosense-ai/biosense-ai-web-server"),
date: "Aug 2023 - Oct 2023",
description: "Mecha 23 - 2nd Runner Up",
title-link: "https://biosense-ai.github.io/"
)
#resume-item[
- Integrating an ECG monitor and $"SpO"_2$ sensor with machine learning to provide helpful information in a web dashboard on any device, making the screening process easier for doctors
- An Orange Pi Zero 2W SBC is used with a custom Linux image to integrate monitoring, the web server, and prediction algorithms
]
// #resume-entry(
// title: "Multi-modal analysis of ECG, heart sounds and lung sounds",
// location: github-link("biosense-ai/model-xai"),
// date: "Mar 2024 - Present",
// description: "Brainstorm 24",
// // title-link: "https://biosense-ai.github.io/"
// )
//
// #resume-item[
// - Continuation of the portable screening device, focusing on the prediction of the model
// - Validation of results using back propagation to identify decisive sections
// ]
#resume-entry(
title: "Maze Solving Micromouse",
location: github-link("team-itro/kitro"),
date: "Sep 2024 - Present",
description: "RoboFest 24",
// title-link: "https://biosense-ai.github.io/"
)
#resume-item[
- Maze-solving micromouse using the STM32F411 microcontroller.
- Participated in RoboFest 24; research on the $2^("nd")$ prototype is ongoing.
]
#resume-entry(
title: "Analog ECG monitor PCB development",
// location: github-link("biosense-ai/biosense-ai-web-server"),
date: "Aug 2023 - Oct 2023",
description: "Semester 3 group project",
// title-link: "https://biosense-ai.github.io/"
)
= Skills
#resume-skill-item(
"Programming",
(strong("C/C++"), strong("System{Verilog}"), strong("Python"), strong("JavaScript"), "Zig", "Lua", "Nix"),
)
#resume-skill-item("Languages", (strong("English"), strong("Tamil"), "Sinhala", "Japanese", "French"))
#resume-skill-item(
"Tools",
(strong("KiCAD / Altium Designer"), "SolidWorks", "ROS", "MATLAB", "Verilator", "Vivado"),
)
= Achievements
#resume-skill-item("Lead", (strong("Deputy Batch Representative"), "Engineering Faculty, University of Moratuwa", strong("\nSenior(Head) Prefect"), "2020, Jaffna Hindu College"))
#resume-skill-item("Awards", (strong("Right To Information Act, National Debate - 1st Place"),"Tamil","Team Lead",strong("\nAll Island Senior Dialog Drama - 3rd Place")))
// #resume-skill-item("Community", (strong("Right To Information Act, National Debate - 1st Place"),"Tamil","Team Lead",strong("\nAll Island Senior Dialog Drama - 3rd Place")))
= Education
#resume-entry(
title: "BSc. in Engineering - Electronics and Telecommunication",
title-link: "https://ent.uom.lk/",
location: "Colombo, SriLanka",
date: "2021 - Present",
description: [University of Moratuwa | Current CGPA - *$3.715$*],
)
#resume-entry(
title: "Charted Global Management Accounting (CGMA)",
location: "Colombo, SriLanka",
date: "2021 - Present",
description: "Achievers Lanka | Certificate Level Completed",
)
#resume-entry(
title: "Embedded Product Design for IoT",
title-link: "https://ent.uom.lk/verify/?rid=pm3TK2SphVxAqVDz8hArgSPxQ1Z5hPDh",
location: "Colombo, SriLanka",
date: "Aug 2023 - Nov 2023",
description: "Short course by ENTC department UoM with Skillsurf.lk",
)
#resume-item[
- PCB design, firmware development, enclosure design in OnShape, and end-to-end connectivity with a web dashboard
]
#resume-entry(
title: "System {Verilog} for ASIC/FPGA Design & Simulation",
title-link: "https://ent.uom.lk/verify/?rid=XZAwRkZmNtGDPWeu9PXWRudpvYQtrh",
location: "Colombo, SriLanka",
date: "Jan 2023 - May 2023",
description: "Short course by ENTC department UoM with Skillsurf.lk",
)
#resume-item[
- Assignment 3 - ASIC flow report for MVM UART using the SAED 32nm EDK & Synopsys Design Compiler
- Assignment 4 - FPGA implementation and demo of MVM UART
]
#resume-entry(
title: "Full Stack Web Development (MERN Stack)",
title-link: "https://www.yarlithub.org/uki/#",
location: "Jaffna, SriLanka",
date: "Feb 2021 - July 2021",
description: "Uki Coding School - Digital Cohort 1",
)
#resume-entry(
title: "Secondary Education",
title-link: "https://www.facebook.com/JaffnaHinducollegeOfficial/",
location: "Jaffna, SriLanka",
date: "2011 - 2020",
description: "Jaffna Hindu College | G.C.E. OL - 9A | G.C.E. AL - 3A",
)
= Experience
#resume-entry(
title: "Junior Graphic Designer",
location: "Jaffna, SriLanka",
date: "2020 - 2021",
description: "Mathi Colours Printers",
title-link: "https://g.co/kgs/bbdy7Nr",
)
#resume-item[
- Flyer creation and editing
- Graphic manipulation
- Magazines and books compilation and editing
]
#resume-entry(
title: "Digital Marketing Assistant",
location: "Jaffna, SriLanka",
date: "2021",
description: "Ecosteem Pvt. Ltd.",
title-link: "https://ecosteem.lk/"
)
#resume-item[
- Social media content creation and moderation
- Product modeling and marketing image generation
- Banner designs
]
= References
#resume-entry(
title: "Dr. <NAME>",
location: "<EMAIL>",
description: "B.Sc. Eng. Hons. (Moratuwa), M.E.Sc. (Western, Canada), Ph.D. (Western, Canada), SMIEEE \nSenior Lecturer, University of Moratuwa",
title-link: "https://ent.uom.lk/team/dr-ranga-rodrigo/"
)
#resume-entry(
title: "<NAME>",
location: "<EMAIL>",
description: "Principal, Jaffna Hindu College | LLB Law Hons.",
title-link: "https://www.linkedin.com/in/senthilmaran-ratnam-387aa459/"
)
|
|
https://github.com/kicre/note | https://raw.githubusercontent.com/kicre/note/main/study/测控/仪器仪表/智能手持式激光测距仪-技术路线.typ | typst | #import "../../tem/beamer.typ": beamer
#show: beamer.with(
title: "技术路线",
author: "王恺",
date: "2024-1",
)
= Technical Roadmap
== Choice of laser measurement technology:
Select a laser source with appropriate wavelength and power to achieve precise measurement.
=== Advantages of laser ranging
With its high precision, high speed, and non-contact nature, laser ranging technology has found wide application in industry, construction, surveying, and other fields.
Adopt laser ranging principles such as time-of-flight (Time-of-Flight) or phase-difference measurement to obtain precise information about the distance to the target object.
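As an illustration (added here for clarity; the numbers are hypothetical and not from the original notes), time-of-flight ranging recovers the distance from the round-trip time of the laser pulse:
$ d = (c dot Delta t) / 2 $
For example, a measured round trip of $Delta t = 10 "ns"$ with $c approx 3 times 10^8 "m/s"$ gives $d = (3 times 10^8 dot 10 times 10^(-9)) / 2 = 1.5 "m"$.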
== Optical system design:
Design an appropriate optical system, including lenses, mirrors, and optical filters, to ensure precise focusing of the laser beam and accurate target detection.
== Laser rangefinder sensor:
Select and integrate a high-performance laser ranging sensor, ensuring its adaptability and stability under a variety of environmental conditions.
== Embedded system:
Select suitable embedded processors and microcontrollers to handle the control and data-processing tasks of the laser rangefinder.
Develop embedded software to implement real-time control, data acquisition, and data processing for laser ranging.
== Power management:
Design an efficient power management system to extend battery life, or to ensure stable operation of the device when running from an external power supply.
== Communication module:
Integrate a wireless communication module for data exchange with other devices or cloud platforms. Common choices include Bluetooth, Wi-Fi, and LoRa.
== User interface design:
Design a user-friendly interface, which may consist of an embedded screen, buttons, indicator lights, and the like, to give users a way to interact with the device.
Consider developing a companion mobile app that connects via Bluetooth or Wi-Fi for more intuitive control and data display.
== Data storage and analysis:
Integrate data storage functionality to preserve measurement data, and consider whether real-time analysis or uploading to the cloud for further processing is required.
== Precise calibration:
Develop calibration algorithms to ensure the accuracy and stability of the laser rangefinder under different environmental conditions.
== Safety and compliance:
Ensure the device complies with the relevant safety standards and regulations, and consider issues such as user privacy protection.
== Field testing and optimization:
Carry out tests at real sites and optimize system performance to ensure the laser rangefinder works stably and reliably across its various usage scenarios.
|
|
https://github.com/wuespace/vos | https://raw.githubusercontent.com/wuespace/vos/main/vo/satzung.typ | typst | #import "@preview/delegis:0.3.0": *
#show: it => delegis(
// Metadata
title: "Satzung des WüSpace e. V.",
abbreviation: "Satzung",
resolution: "2023/MV-7 i. V. m. 2024/V-3",
in-effect: "29.01.2024",
draft: false,
// Template
logo: image("wuespace.svg", alt: "WüSpace e. V."),
// Content
it
)
/// Usage
//
// "§ 123abc Section title" for a section "§ 123abc" with the title "Section title"
// "#s~" for sentence numbers in multi-sentence paragraphs
// (normal Typst headings) such as "= ABC" for grouping sections together
//
///
#unnumbered(level: 1, outlined: false)[Preliminary Note]
Footnotes serve as editorial notes or aids to interpretation and are not themselves part of the adopted resolution.
#v(2em)
#outline()
#show heading.where(level: 1): it => {
pagebreak(weak: true)
it
}
// #unnumbered[Preamble]
// #lorem(30)
= General Provisions
§ 1 Name and Registered Office of the Association
(1) The name of the association is WüSpace e. V.
(2) The registered office of the association is Würzburg.
(3) The association is entered in the register of associations.
§ 2 Purpose of the Association
(1) The purpose of the association is the promotion of science, research, and technical applications in the field of spaceflight.
(2)
#s~The statutory purpose is realized in particular through the execution of aerospace projects and aerospace-related research endeavors, as well as through the organization of, and participation in, seminars and conferences.
#s~Furthermore, the association serves as a communication platform between students and interested parties from industry, research, and the public, and cooperates with them.
§ 3 Selflessness
(1)
The association acts selflessly and does not primarily pursue its own economic purposes.
(2)
#s~Funds of the association may be used only for its statutory purposes.
#s~Members receive no benefits from the funds of the association.
#s~No person may be favored through expenditures that are alien to the purpose of the association or through disproportionately high remuneration.
(3)
The association pursues exclusively and directly charitable purposes within the meaning of the section "Tax-Privileged Purposes" of the German Fiscal Code (Abgabenordnung).
= Membership
§ 4 Members of the Association
Natural and legal persons who support the goals of the association may become members of the association.
§ 5 Admission
(1) Admission takes place via a public form, which must be signed by two members of the board.
(2) The board decides on changes to a member's status with an absolute majority.
§ 6 End of Membership
(1)
Membership ends through
+ resignation of the member,
+ expulsion of the member, or
+ the death of the member
(2)
A member may declare their resignation only by written notice to the board.
(3)
The expulsion of a member may be resolved by the board if the member
+ has grossly violated the interests of the association, or
+ is in arrears with the payment of their membership fees and, despite a written reminder threatening expulsion, has not paid the arrears.
(4) Overpaid fees are refunded within four weeks after resignation.
§ 7 Types of Membership
(1)
The association consists of regular members, alumni members, supporting members, and honorary members.
(2)
The type of membership (and all changes to it) must be noted on the membership application.
(3)
A change of the type of membership may be requested by the member in written form to the board.
§ 8 Regular Members
Regular members are natural persons who actively participate in the life of the association.
§ 9 Alumni Members
(1)
A prerequisite for alumni membership is that the person has previously been active in the association as a regular member for at least one year.
(2)
Alumni members have no voting rights.
§ 10 Supporting Members
(1)
Supporting members are natural or legal persons who, without actively participating in the life of the association, are members in order to financially support the association and its purposes.
(2)
Supporting members have no voting rights.
§ 11 Honorary Members
(1)
Upon proposal by the board or by an association member, the general assembly may appoint members or other persons who have rendered outstanding services to the association as honorary members.
(2)
Honorary membership may, like other forms of membership, be ended and revoked in accordance with the provisions of §~6.
(3)
Honorary members have no voting rights.
(4)
Honorary members are exempt from the obligation to pay fees.
§ 12 Membership Fees
(1)
Unless otherwise specified, members must pay a membership fee in the form of a regular monetary amount.
(2)
The amount of the membership fees is set by the general assembly.
(3)
Membership fees become due upon joining the association and must be paid in advance for the remainder of the calendar year.
(4)
The regular charge to the member takes place each January, covering twelve months in advance.
(5)
The financial officer gives timely notice of due payments and sends out reminders.
(6)
#s~If an association member falls more than 14 days behind on their payment obligations, the financial officer issues a first reminder.
#s~If the payment is still outstanding after a period of 14 days, the financial officer issues a second reminder.
#s~If the due fee is again not paid within a further 14 days, the board is required to deliberate on the expulsion of the member.
§ 13 Levies
#s~In addition to the membership fee, the association may impose levies on its members if, in an individual case, this is imperative for the economic integrity of the association.
#s~Such a levy must be resolved by the general assembly upon application by the board.
#s~The application must explain the necessity.
#s~The levy may not exceed 1.5 times the annual fee.
= Organs of the Association
§ 14 Organs of the Association
The organs of the association are the board and the general assembly.
= Board
§ 15 Formation of the Board
(1)
The board within the meaning of §~26~BGB consists of
+ the chairperson,
+ the deputy chairperson,
+ the secretary, and
+ the financial officer.
(2)
Board members must be regular members of the association at the time of their election.
§ 16 Voluntary Service
The board conducts the affairs of the association on a voluntary basis.
§ 17 Representation of the Association
#s~The association is represented in and out of court by two members of the board.
#s~Deviating from this, the board may resolve by simple majority to authorize one board member to act as sole representative for a specific subject matter and time frame.
§ 18 Appointment of the Board
#s~The board is elected by the general assembly for a term of half a year.
#s~Re-election is permissible. The members of the board remain in office until a new board has been elected.
§ 19 Limitation of the Power of Representation
The power of representation of the board is limited, with effect against third parties, in such a way that expenditures of more than €5,000.00 in a single sum require the approval of the general assembly.
§ 20 Urgent Amendments to the Statutes
#s~Amendments to the statutes that become necessary due to official requirements or due to changes in the legal situation may be made by the board.
#s~It must inform the general assembly thereof.
§ 21 Rules of Procedure of the Board
The board may adopt rules of procedure for itself.
#unnumbered[§ 22\ Representatives]
The board may appoint its representatives and assign tasks to them.
= General Assembly
§ 23 Convening the General Assembly
(1)
The general assembly must be convened twice a year by the board.
(2)
Invitations to the general assembly, stating the preliminary agenda, must be sent in writing with two weeks' notice before the date.
§ 24 Tasks of the General Assembly
The general assembly is responsible in particular for:
+ receiving the reports of the board
+ discharging the board
+ electing the board
+ creating association bylaws and amending them
+ amendments to the statutes
+ the dissolution of the association
+ setting the membership fees
+ resolving on the imposition of a levy
§ 25 Deadline for Motions
(1)
Every member may submit motions for the agenda up to four days before the general assembly.
(2)
Admitted motions and the agenda resulting from them must be communicated to the members no later than 48 hours before the start of the general assembly.
§ 26 Emergency Motions
(1)
The general assembly decides by simple majority whether motions submitted after the deadline (emergency motions) may be voted on.
(2)
Motions for the removal of the board, for the amendment or revision of the statutes, or for the dissolution of the association cannot be made by way of an emergency motion.
§ 27 Chairing the Assembly
#s~The general assembly is chaired by the chairperson in office at the beginning of the general assembly.
#s~If they are unavailable, the deputy chairperson takes over the chair.
#s~If they are also unavailable, the general assembly elects a chair for the assembly.
§ 28 Adoption of Resolutions
(1)
Every duly convened general assembly has a quorum, provided at least 7 voting members are present.
(2)
Voting is open. Upon request by at least five of those present, or by decision of the assembly chair (to ensure compliance with statutory provisions), voting must be held in secret.
(3)
#s~Resolutions of the general assembly are passed by simple majority.
#s~In the event of a tie, a motion is deemed rejected.
#s~Abstentions count as votes not cast.
(4)
Every member is entitled to vote, unless they belong to a group that is expressly excluded from voting.
(5)
Amendments to the statutes require a 3/4 majority.
§ 29 Block Elections
The election of the board may be carried out in the form of a block election; the resolution on the election procedure requires a simple majority of the votes cast.
§ 30 Election Supervision
(1)
The general assembly appoints an election supervisor who may belong neither to the board nor to the eligible candidates.
(2)
The election supervisor takes over chairing the assembly for resolutions
+ in which persons are to be elected, or
+ in which the assembly chair has no voting right pursuant to §~34~BGB (e.g., the discharge of the board).
(3)
After the conclusion of the corresponding vote(s), chairing of the assembly returns to the chair provided for under §~27.
§ 31 Electronic Votes and Elections
(1)
The use of electronic voting and election facilities is permissible, provided the assembly chair has ensured that all voting members present have the opportunity to vote through this medium.
(2)
With electronic voting and election facilities, it must be ensured that the rules otherwise applicable to the vote (secrecy or openness, voting eligibility) can be guaranteed.
§ 32 Recording of Resolutions
(1)
Minutes reflecting the adopted resolutions must be prepared for the general assembly by the secretary or by a minute-taker elected by the general assembly.
(2)
The minutes must be signed by the minute-taker and by the assembly chair who closes the assembly.
§ 33 Audit of Accounts
(1)
#s~The general assembly elects an auditor for the duration of one financial year.
#s~The auditor may not belong to the board and has the right to examine the financial transactions at any time.
(2)
The auditor reports to the ordinary general assembly and, if the financial transactions have been properly conducted, moves for the discharge of the board.
§ 34 Virtual General Assembly
#s~The general assembly may be held online. The board must announce this in its regular invitation.
#s~If the assembly takes place online, the association members exercise their rights by means of electronic communication.
#s~Alternatively, members who are unable to participate online may submit their vote for ballots and elections in which they are entitled to vote to the board in writing before the assembly.
#s~In that case, the vote is counted exactly as if the member had cast it directly (e.g., via an electronic communication channel).
§ 35 Extraordinary General Assembly
An extraordinary general assembly must be convened by the board if this is required in the interest of the association, or if its convening is demanded by at least 20 % of the members by means of a written and substantiated application.
§ 36 Rules of Procedure of the General Assembly
The general assembly may adopt rules of procedure governing the course of the assembly.
= Final Provisions
§ 37 Association Bylaws
(1)
The board and the general assembly are authorized to adopt binding association bylaws.
(2)
Association bylaws, as well as their amendments and repeals, must be made known to the members in written form.
(3)
Association bylaws enter into force upon their announcement.
(4)
Association bylaws are not part of these statutes and are not entered in the register of associations.
§ 38 Data Protection
(1)
#s~Within the scope of membership administration, personal data are collected from the members.
#s~Upon a member's joining, the association records all data relevant to membership in the association (name, address, date of birth, bank details, e-mail address).
#s~This information is stored in the association's own IT system.
#s~Each association member is assigned a membership number.
#s~The personal data are protected from access by third parties through appropriate technical and organizational measures.
(2)
The association may publish data of its members on the association's homepage if the member expressly consents to this.
(3)
To fulfill the purposes and tasks of the association, personal data concerning the personal and material circumstances of the members are processed in the association in compliance with the requirements of the EU General Data Protection Regulation (GDPR) and the Federal Data Protection Act (BDSG).
(4)
Insofar as the conditions described in the respective provisions are met, every association member has in particular the following rights:
+ the right of access under Article 15 GDPR,
+ the right to rectification under Article 16 GDPR,
+ the right to erasure under Article 17 GDPR,
+ the right to restriction of processing under Article 18 GDPR,
+ the right to data portability under Article 20 GDPR, and
+ the right to object under Article 21 GDPR.
(5)
#s~The organs of the association, all employees, and all other persons working for the association are prohibited from processing personal data without authorization for purposes other than the fulfillment of their respective tasks, from disclosing such data, from making them accessible to third parties, or from otherwise using them.
#s~This obligation continues even after the departure of the aforementioned persons from the association.
§ 39 E-Mail as a Means of Communication
E-mail is a common means of communication and, in its use, is equivalent to a letter.
§ 40 Dissolution of the Association
(1)
#s~The association may be dissolved by resolution of the general assembly.
#s~This resolution must be unanimous.
(2)
Upon dissolution of the association or upon discontinuation of its tax-privileged purposes, the assets of the association pass to Zentrum für Telematik e. V., which must use them directly and exclusively for charitable, benevolent, or ecclesiastical purposes.
§ 41 Entry into Force
(1)
These statutes were adopted in their present form by the general assembly on 08.12.2023 and enter into force upon registration.
(2)
#s~Upon entry into force, the previous monthly membership fee for regular members of €1, dating from 12.06.2019, is retained until further notice.
#s~The other types of membership remain without a fee until one is set. |
|
https://github.com/wuespace/delegis | https://raw.githubusercontent.com/wuespace/delegis/main/delegis.typ | typst | MIT License | // Copyright (c) 2024 <NAME>.
//
// This software is released under the MIT License.
// https://opensource.org/licenses/MIT
// sentence number substitution marker
#let s = "XXXXXXSENTENCEXXXNUMBERXXXXXX"
/// Create an unnumbered section, such as a preamble.
/// Usage: `#unnumbered[Preamble]`
#let unnumbered = (it, ..rest) => heading(level: 6, numbering: none, ..rest, it)
/// Manually create a section. Useful when unsupported characters are used in the heading.
/// Usage: `#section[§ 3][Administrator*innen]`
#let section = (number, it, ..rest) => unnumbered(
{
number + "\n" + it
},
..rest,
)
/// Division prefixes for different languages.
#let division-prefixes-de = ("Teil", "Kapitel", "Abschnitt", "Unterabschnitt")
#let division-prefixes-en = ("Part", "Chapter", "Division", "Subdivision")
/// Initialize a delegis document.
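/// Example usage (illustrative only; the parameter values below are hypothetical):
///
///   #show: delegis.with(
///     title: "Example Bylaws",
///     abbreviation: "EB",
///     resolution: "1st resolution of the board",
///     in-effect: "01.01.2024",
///   )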
#let delegis = (
// Metadata
title : "Vereinsordnung zur IT-Infrastruktur",
abbreviation : "ITVO",
resolution : "3. Beschluss des Vorstands vom 24.01.2024",
in-effect : "24.01.2024",
draft : false,
// Template
logo : none,
// Overrides
size : 11pt,
font : "Atkinson Hyperlegible",
lang : "de",
paper: "a5",
division-prefixes: none, // use language-specific prefixes by default
str-draft : "Entwurf",
str-intro : (resolution, in-effect) => [Mit Beschluss (#resolution) tritt zum #in-effect in Kraft:],
// Content
body
) => {
/// Language-specific division prefixes
let division-prefixes = if division-prefixes != none {
division-prefixes
} else if lang == "en" {
division-prefixes-en
} else {
division-prefixes-de // default to German
}
/// Metadata
set document(title: title + " (" + abbreviation + ")", keywords: (title, abbreviation, resolution, in-effect))
/// General Formatting
let bg = if draft {
rotate(45deg, text(100pt, fill: luma(85%), font: font, str-draft))
}
set page(paper: paper, numbering: "1 / 1", background: bg)
set text(hyphenate: true, lang: lang, size: size, font: font)
/// Clause Detection
show regex("§ ([0-9a-zA-Z]+) (.+)$"): it => {
let (_, number, ..rest) = it.text.split()
heading(
level: 6,
numbering: none,
{
"§ " + number + "\n" + rest.join(" ")
},
)
}
/// Heading Formatting
set heading(numbering: (..nums) => {
// Handbuch der Rechtsförmlichkeit, Rn. 379 f.
// After the final named level, use "X.X.X" for the numbering using the final prefix
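    // Illustrative example: with the default four prefixes, a level-5 heading
    // numbered 1.2.3.4.5 renders as "Unterabschnitt 4.5:" (German prefix set).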
nums = nums.pos()
let level = nums.len() // level of the heading
let number = nums.slice(calc.min(
division-prefixes.len(),
level,
) - 1)
let prefix = division-prefixes.at(
calc.min(
level - 1,
division-prefixes.len() - 1,
),
)
let str-number = numbering("1.1", ..number)
[
#prefix #str-number:
]
})
show heading: set align(center)
show heading: set text(size: size, weight: "regular")
show heading.where(level: 1): set text(style: "italic")
show heading.where(level: 2): set text(style: "italic")
show heading.where(level: 3): set text(style: "italic")
show heading.where(level: 4): set text(style: "italic")
show heading.where(level: 5): set text(style: "italic")
show heading.where(level: 6): set text(weight: "bold")
// Enumeration numbering
// 1. -> a) -> aa) -(unofficial)-> (1) -> i. -> i.i. -> ...
// Handbuch der Rechtsförmlichkeit, Rn. 374
set enum(
numbering: (..numbers) => {
let nums = numbers.pos()
if (nums.len() == 1) {
return numbering("1.", ..nums)
} else if (nums.len() == 2) {
return numbering("a)", ..nums.slice(1))
} else if (nums.len() == 3) {
let letter = numbering("a", ..nums.slice(2))
return [ #letter#letter) ]
} else if (nums.len() == 4) {
return numbering("(1)", ..nums.slice(3))
} else {
return numbering("i.", ..nums.slice(4))
}
},
full: true, // get full number arrays passed into the numbering function
)
/// Outlines
show outline.entry: it => {
show linebreak: it => { } // disable manual line breaks
show "\n": " " // disable section number line breaks
it
}
set outline(indent: 1cm)
show outline: it => {
it
pagebreak(weak: true)
}
/// Sentence Numbering
show regex(s): it => {
counter("sentence").step()
super(strong(counter("sentence").display()))
}
show parbreak: it => {
counter("sentence").update(0)
it
}
/// Title Page
page(
numbering: none,
{
place(top + right, block(width: 2cm, logo))
v(1fr)
show par: set block(spacing: .6em)
if draft {
text[#str-draft:]
} else {
par(text(str-intro(resolution, in-effect)))
}
par(text(2em, strong[#title (#abbreviation)]), leading: 0.6em)
v(3cm)
},
)
// Metadata once again. Needs to be down here to have the page size set.
// Can be used with `typst query`, e.g.:
//
// `typst query example.typ "<title>" --field value --one` returns `"[title]"`
[
#metadata(title)<title>
#metadata(abbreviation)<abbreviation>
#metadata(resolution)<resolution>
#metadata(in-effect)<in-effect>
]
// allow footnotes that don't conflict with sentence numbers
set footnote(numbering: "[1]")
/// Content
body
}
|
https://github.com/polarkac/MTG-Stories | https://raw.githubusercontent.com/polarkac/MTG-Stories/master/stories/035%20-%20Core%202019/011_Unbowed%2C%20Part%203.typ | typst | #import "@local/mtgstory:0.2.0": conf
#show: doc => conf(
"Unbowed, Part 3",
set_name: "Core 2019",
story_date: datetime(day: 07, month: 09, year: 2018),
author: "<NAME>",
doc
)
Of the many lies she'd recited to the Baron of Vernot, the process behind adding another life to the Arkbow was not one of them. The relic did indeed make snapshots of the dying, although whatever sorcery worked into the bones of the Arkbow did not simply stop there, transforming every rag-and-bone memory of a creature on its last legs to the animal in its prime. There was a time when Vivien, of course, wasn't the only one who could perform the ritual. Her fellow shaman had also possessed that knowledge.
But now Vivien was the last of Skalla.
And the baron wasn't about to trust in her claims. To no one's particular surprise, the baron conscripted a small fleet of secretaries to tail Vivien, scrolls on the crook of their arms, a quill in their dominant hands. Little altar boys stalked them in turn, holding ink pots. "For posterity," he informed Vivien, swirling brandy in a goblet of faintly translucent amber, his expression stormy with distrust.
What surprised Vivien was the equipment they wheeled into the ballroom: haloes of wiring, together with metallic poles, filigreed artifacts embossed with spellwork that she did not recognize. Her puzzlement did not last. Quickly enough, the baron's minions assembled the contraption around the dying monstrosaur. More of the nuns were called to the room, and humming a couplet of chords, they conjured a shimmering barrier.
"Inside," said the baron.
Vivien complied.
There wouldn't be opportunity to revisit the process, not with what Vivien intended, not with the truth she kept tucked behind her teeth like the last rites of a world long dead. She had one opportunity to do this right. Lightly, the Planeswalker ran her fingers along the surface of the magical barrier. Though mostly translucent, it felt like a wall of steel.
Vivien crouched down beside the monstrosaur, the reptile so weak now that it barely stirred at her touch, only exhaled a lethargic death rattle, its breath stinking of bile and rust and carrion and very, #emph[very] faintly, a tincture of lilacs and saffron. It blinked wetly at Vivien, tear ducts leaking a chalky emulsion.
"Anesthetics," she said quietly.
The nuns exchanged looks with each other as did the scribes.
"Or alcohol. Whatever your generosity would permit here." Vivien pursed her mouth. "I know it isn't part of your personal credo, but the Arkbow is excruciatingly precise. If it takes this thing at the height of its pain, the summon will share a similar condition. As you can imagine, it's difficult to fight when you're half-mad with pain."
The baron downed his brandy, poured himself a fresh serving, before he waved an irritated hand at the nuns. "Do as she asks."
The nuns complied. As their magic wormed through the monstrosaur, it sighed and slumped, seeming to shrink onto itself, receding into the new numbness. Its eyes fluttered close and slowly, the break between each breath began to lengthen.
#figure(image("011_Unbowed, Part 3/04.jpg", width: 100%), caption: [Pious Interdiction | Art by: Lake Hurwitz], supplement: none, numbering: none)
"There," Vivien said and murmured a prayer to the cinders of Skalla, her voice so quiet she was certain not even the undead of Luneau could discern her praises. Petting the monstrosaur one last time on its broad nose, Vivien rose, her weight slouched against the Arkbow, its tip wedged into a crack in the floor tiling.
No one was trying to feign apathy any longer. The entirety of the room leaned into the act of watching; every noble, every courtier, even the scullery maids bent double under the burden of their station. They watched, eager as hounds. Vivien made a loop of her index finger and thumb, stroking the Arkbow like a lover. She had precisely one trick left, one last thing to try. Vivien curtseyed at her audience, gleaning laughter.
It was time.
The Planeswalker tapped the Arkbow three times against the floor and on the third impact, the sound echoed. Energy roiled and rolled across the ballroom, jouncing through the walls, hissing through the ligature of the chandeliers, a transparent sheen of light reflected across every watching face. Then, as with any explosion, the power came howling back to the point of parturition, the floor beneath Vivien's feet irradiated in such a manner that it almost looked as though reality had flaked away, leaving only white, only a brilliance so intense there was no space for the concept of shadow.
Vivien struck the floor with the Arkbow again.
The artifact opened. It spread into metallic bark and branch, flowering at intervals, geometric configurations of shining, mysterious alloy. The Arkbow split down to its pith, revealing a jag of light so bright that it made Vivien's eyes water. But she watched it all without blinking. The monstrosaur was owed that dignity at least.
She could feel the dregs of the monstrosaur's spirit, a tumble of misfiring nerves and a rage it was too exhausted to satisfy. Carefully, Vivien threaded the Arkbow's power through its ruined skeleton, coaxing that final spark of consciousness to her, promises humming through the connection. The monstrosaur did not resist. It came to her in a torrent, riding the link into the Arkbow with a shriek of joy. Vivien shuddered as she broke through the moment, disoriented by the smallness of her own physique, the monstrosaur's presence dwindling to a nub in the back of her thoughts. Its body remained, at rest at last, now encased in the same strange alloy that coated the Arkbow.
"That was all very good and dramatic. An excellent show, really." The baron's voice. "Are you done then?"
Vivien blinked, astonished. Light seeped from her fingertips and down from her tongue. It tasted to her of chalk and calcium, fury like nothing she'd experienced, fury like she could eat the world whole. This was new.
"Yes."
"Good." He fluttered a hand. "Now, let us see the results."
"Yes," Vivien said again, feeling slow, the word like treacle between her teeth. She twanged the Arkbow, felt it hum beneath the caress of her thumb, the monstrosaur so close beneath the surface that it was all she could do to keep it quiescent. Its eagerness bled through the contact, spilled into her bones. What was going on? The essences in the Arkbow were normally so much more quiet, half-asleep, happy to be safe, still, silent in the dark of the artifact. But not the monstrosaur.
The Planeswalker bit down on the impulse to lunge at the baron, schooled herself for steadiness, lifted the Arkbow only to have the baron clear his throat.
"No. We'll have someone else do the honors."
He signaled to a guard who might well have been a bull transformed into a more convenient form, the man so thick-necked, there was no separation between his throat and his jaw. He scowled at Vivien as he lumbered toward her, the forcefield arising to permit him entrance. The baron gestured at the nuns, and their voices rang out again, sealing the guard inside the containment area with Vivien.
#figure(image("011_Unbowed, Part 3/05.jpg", width: 100%), caption: [Sanguine Sacrament | Art by: <NAME>], supplement: none, numbering: none)
"Now, show him how to do it."
Vivien passed the Arkbow to the frowning guard. Thick-set as the man was, he soon divulged a dexterity that Vivien had not anticipated, his fingers quick despite their sausage-like width. The Arkbow sang out as he raised the relic, the guard sighting expertly down his arm, Vivien's arrow nocked and ready. A minute spasm of those meaty digits, and his body burst outward.
The Planeswalker looked back at the baron, face placid. "I told you."
"I won't accept this," he hissed. "We succeeded in calling up the bear. There must be a process. Something you're not telling me. Are you doing this deliberately? You must be."
"The Arkbow is mine. It won't obey another hand."
"Liar."
Vivien held out the artifact in challenge. "#emph[You're] welcome to try."
The baron curled his hand into a fist and Vivien decided, with a morbid pleasure, that the look on his face would be enough. That no matter what followed, no matter what would come to pass, that memory of the baron's clear frustration would be a light she'd hold onto. She smiled. "I warned you."
"Quiet."
Vivien flicked her eyes down to the remnants of the guard's carcass. The Arkbow had left him a mess. Almost by accident, Vivien caught sight of movement. She bent down.
#emph[A spider.] Vivien watched in silence as the arachnid picked its careful way from the guard's pocket and inch toward the edge of the barrier. It was small enough for the magic to ignore its existence, small enough for the vampires to dismiss its presence.
Vivien had an idea.
"The problem," said the Planeswalker. "The problem with people like you is how often you ignore the little things, how you assume the clockworks of the worlds operate without effort, powered only by your will. You assume that the cogs do not exist. You can't even see them."
"What prattle is this?" the baron snapped, storming back toward the wall of light separating them.
"Tell me," Vivien traced the world with her thoughts, felt the spider shudder and swell beneath her attention. "Have you ever wondered what it might be like to be as small and insignificant as a spider?"
She gave the baron no opportunity to answer, her power throbbing through the world, curlicues of green spreading from her in a halo. The baron snapped his head up, eyes going wide.
"What have you done?"
Engorged on Vivien's magic, the spider became the size of a small dog, the size of a jaguar, of a bear. #emph[Grow] , she thought fiercely at the spider, scrawling a sigil in the air with her fingers, the movements quick and filthy. Alarmed by its growth, the arachnid turned and launched itself at the king. The nuns and the nobles let out screams at the sight, all attention suddenly turned toward their ruler. In the chaos, the former loosened their hold on Vivien's prison.
#figure(image("011_Unbowed, Part 3/06.jpg", width: 100%), caption: [Giant Spider | Art by: <NAME>], supplement: none, numbering: none)
It was as she hoped. Without missing a beat, she nocked a new arrow and freed the projectile as the walls came down. The arrow burned through the air, evaporating into embers, into bone and vivid feathers etched in magic, into a body no longer hobbled by injury, a body perfect and pristine, exquisitely prepared to enact that one final desperate desire.
The arrow buried itself in the wall, and the ethereal monstrosaur tore itself loose, roaring, Vivien's own powers lancing forward to wrap around the reptile's newborn frame. It swung its head, blinking, and not even the shock of being alive again was enough to distract the monstrosaur from its intent. The creature had died starved for retribution. It would not go quietly without fulfilling that want.
Vivien dove sideways as the dinosaur thundered toward the Baron of Vernot, screaming courtiers scattering in its wake, a helpless few trampled beneath its clawed feet, their bodies pressed so flat they could be folded in half. The rare guards loyal enough to stand in its way were bludgeoned aside, flung into the walls with a swing of the creature's head.
The monstrosaur's shimmering form strained against the firmament, splitting the ceiling as though it was the skin of a fruit. Rubble and ash ribboned from above. The building groaned. Strut-work, now untethered, gave way in increments, gravity tugging the masonry apart. Not that any of it served to dissuade the monstrosaur, its eyes wild.
Despite the odds, the Baron of Vernot would not flee. Though abandoned by his cohorts, the ballroom already collapsing into ruin, he stood his ground, teeth bared and sword drawn, his frame doll-like in this juxtapositioning against the monstrosaur's enormity. He blurred into shadow, zigzagging upward, the comet's tail of his sped-up motions revealing an upward trajectory through the falling rubble. Vivien caught a flash of silver as the baron swung, but no matter one's abilities, no matter the power differentials offered by training, nature possessed empirical favorites.
#figure(image("011_Unbowed, Part 3/07.jpg", width: 100%), caption: [Raging Swordtooth | Art by: Izzy], supplement: none, numbering: none)
At the end of the day, life has always been a contest of raw might.
The baron's sword passed harmlessly through the hollow beneath the lizard's right eye, eroding to an alloyed lump. Before he could reverse the thrust, the monstrosaur tossed its head up, flinging the baron into the air. Vivien saw surprise dart across the vampire's face, obvious even from the distance. And quicker than the baron, quicker than anyone might have anticipated, the monstrosaur snapped its maw forward, a rattlesnake motion, teeth closing over the vampire's torso.
Vivien staggered to a pause, staring.
The monstrosaur turned a doleful gaze to her, expression so ludicrously meditative, so human in its uncertainty that she nearly laughed at the sight. The baron stared at his captor, an animal terror rising in his face. Then, with considerable aplomb and no small amount of ceremony, the monstrosaur bit down and the two halves of what was once the Baron of Vernot fell quietly, messily to the ground.
#v(0.35em)
#line(length: 100%, stroke: rgb(90%, 90%, 90%))
#v(0.35em)
Most of Vivien's summons were transient in nature, rarely persisting for longer than a minute, the creatures content to dissipate after a perfunctory dalliance with chaos. But the monstrosaur would not dissipate. Having dealt with the Baron of Vernot, the reptile was now rudderless, but it didn't remain that way for long. It sniffed the air once before it neatly picked a route through the doors into the palace, oblivious to the courtiers still pinwheeling from its path. Vivien followed behind, ignored in its wake.
Their trajectory marched them past the Royal Menagerie, which teemed now with agitated fauna, its captives either galvanized by the monstrosaur's proximity or simply excited by the stink of destruction in the air. It did not take Vivien long to come to a decision. As the monstrosaur took another corner, Vivien ran her magic through a family of wildebeests, stoking their cells until the creatures grew large enough to smash through their confinement. She did the same again for everything she passed. Hammerskulls and coatls and broad-bodied bears, power leaping beneath them like so much lightning.
Some of the animals fell together in frenzied knots, carnivore and prey tearing chunks free of the other, but most did not. Like the rampaging monstrosaur, they seemed absorbed by the thought of vengeance. Their handlers, previously secure in their knowledge that they were inoculated against the consequence of their own cruelty, quickly found themselves engaged in life-and-death battles. Screaming swelled through the air.
And still, the monstrosaur held onto its shape, somehow, powered by something. Its rage, perhaps? Or Vivien's? The Planeswalker decided it didn't matter. Instead, she counted the minutes between corporealization and disintegration. Each time the monstrosaur shimmered out of existence, she shot a fresh arrow through the air. The corridors widened into a gallery. Here, the monstrosaur halted, head cocked to one side. Men in tiered wigs and women painted with pearlescent powders, their bodices high and unnatural, gawked at the sight.
A rail-thin girl, scarcely an adult by any estimate of the word, tottered uncertainly forward. A leash trailed from her hand; Vivien followed the rope to where it attached itself to the collar of a small raptor. Someone had rouged its emerald scales, outfitted its neck with a ruff so clownishly large that it was amply clear the decoration impeded its ability to see. Vivien frowned at the creature. It looked miserable.
At that moment, the monstrosaur began to fade, dwindling into glowing pinpoints, an outline of a creature that soon degraded into an indistinct haze. Vivien coughed into a fist, the congregation still silent, still dumbstruck by what had transpired. Behind her, there was the roar of the Royal Menagerie still in mutiny, the low clamor of its inhabitants periodically interrupted by terrified screams.
"I suppose," Vivien finally said. "This is where one customarily makes a dramatic speech."
The raptor hopped forward, head tilted first in one direction and then the next, brisk and bird-like motions. It trilled an inquiring note at Vivien.
"Or at least, inform you of what's going on."
The sounds were getting louder.
"I'm really not sure what the protocol is on this." Unbidden, a smile anchored itself. "But I keep feeling like some measure of informational exposition is necessary."
She dropped her hand.
"What is the meaning of this?" began a patriarchal-looking man with a trim beard, his physique still formidable despite evidence of middle age. He rested long fingers on the scabbard of his saber, glaring. "Who are you? And what is going on in the palace?"
"Someone once described the death of a nation to me as a 'mercy.' I didn't really understand his point then, or where he was coming from. But now, now I find myself in perfect comprehension." Vivien drew lazy figure eights with her fingers, magic beginning to collect in her palm, spokes of glittering power. "Anyway. This is a mercy. This is the last that you will see of Luneau. By this time tomorrow, the wilds will have this place again and you will be nothing but a memory to be forgotten."
Vivien closed her fist and the raptor freed a confused hiss, its body suddenly wracked with convulsions. Unlike the denizens of the Royal Menagerie, it did not grow in a uniformed fashion. Instead, the creature swelled up in fits, its growth metered by the movements of Vivien's hand and the motions of her power, uncoiling green and serpentine from her frame. Legs first, tail, then its head before at last, its torso followed suit. Throughout the process, its owner could only stare, slack-jawed in wordless perplexity.
#figure(image("011_Unbowed, Part 3/08.jpg", width: 100%), caption: [Gigantosaurus | Art by: <NAME>], supplement: none, numbering: none)
Within seconds, the raptor outsized its mistress, stooping to regard her with one luminous amethyst eye. In answer, she fish-mouthed in silence, a tremble of high-pitched sound eventually escaping. "Wh-wh-wh-"
Her former pet did not share her befuddlement. It reared up, chirping several crystalline notes, its curiosity about its owner clearly slaked. Then, without any reservation, it twitched forward and shut its jaws around the vampire's skull, teeth crunching through the vertebrae.
The decapitation of the young vampire dislodged something in the crowd. Pandemonium broke through the bourgeoisie in waves, spreading, growing, until it was nothing but hysterics, all pretenses of enlightened behavior forgotten in the face of carnage. Those with at least passable command of their faculties closed on Vivien, hissing, but the Planeswalker only scrutinized them with vague indifference.
Something was approaching.
A second before the stampede erupted through the doors, Vivien took a sideways step. Her adversaries, on their part, only had moments to look up, moments to take note of the beasts thundering through the corridors. As the escapees of the Royal Menagerie rendered their former tormentors to an even pulp, Vivien found herself smiling.
#v(0.35em)
#line(length: 100%, stroke: rgb(90%, 90%, 90%))
#v(0.35em)
The Royal Palace shook itself apart like a carcass worried to shreds by dogs. In starts and stops, without cogent sequence, the architecture fighting the whole while to remain vertical. Gravity, however, possessed an insatiable appetite. Soon enough, the Royal Palace fell, dust pluming into the air.
#figure(image("011_Unbowed, Part 3/09.jpg", width: 100%), caption: [Siegehorn Ceratops | Art by: <NAME>], supplement: none, numbering: none)
But <NAME> wasn't even halfway done with Luneau.
There was more chaos to be wrought.
#v(0.35em)
#line(length: 100%, stroke: rgb(90%, 90%, 90%))
#v(0.35em)
The cafe was, in more ways than one, indistinguishable from the others festooning the cultural district of Luneau. Here, museums and bawdy matinees shared the same streets. Art took many forms, some less savory than others, but Luneau was rarely inclined toward being judgmental. The eateries enjoyed brisk business as the result of this generous ideology. There were always customers. Sometimes, they were scholars and cognoscenti, hungry for a space to discuss and dissect the day. Sometimes, they were more tawdry individuals, lust-drunk and simply desperate to sit. Regardless of their nature, they were inevitably weighted with money and, to the delight of this particular cafe's proprietor, often exceedingly generous with tips in the form of vials of blood.
The man in question studied his reflection in the mirror. He was tall, lanky, with shoulders too narrow to provide any heft to his frame. But not unattractive. At least, that is what he'd inferred from interactions with his female clientele. The proprietor corrected the angle of his wig. It wouldn't do to look disarrayed.
The evening was sultry, unruffled by anything that even resembled a breeze, and the air sat over Luneau like a wet towel warmed on a corpse. Not that many seemed to mind. The city's elite, particularly those numbered among the Legion of the Dusk, appeared to have a preference toward such climates, basking in the heat, while the humans wilted.
He picked a slow path toward where his most recent customers resided. Both were decorated officers, slim, impressively groomed despite the fact they spent most of their time embarking on expeditions toward foreign territory. The proprietor liked them for that reason. Most explorers eventually lost the trick of hygiene, along with any interest in reconciliation with the idea.
"Your breakfast," said the proprietor.
They acknowledged him with a glance and tepid smiles. The proprietor set down an arrangement of victuals.
Luneau rumbled beneath his feet.
An earthquake? It was possible. Though only infrequently beset by such tremors, it wasn't an unknown phenomenon and as such, the proprietor only saw mild reason to be concerned. He would need to secure his spice rack, ensure the cafe's modest cache of wine bottles remained safely ensconced in their aerie. Small details. Simple chores. It would be fine.
"Stop sulking," said one of the men. The proprietor slowed his steps to eavesdrop. Gossip was always good with such army men.
"Like you're any more cheerful about this. You know the Baron of Vernot is studying the device right now," said his companion.
The first man let out an exasperated noise. "I hope he fails, then. If he succeeds at deciphering that stupid artifact, we'd be out of a job."
"Be careful with that tongue of yours," returned his friend. "That's treason you're spitting."
"Not treason. Truth. If Luneau learns to make use of something like that, we'd be left to beg in the alleys. Mark my words. The royals don't care about people like us, badges or not. If they can make their own animals, why'd they bother with paying us to find 'em more?"
Before his friend could reply, the rumbling beneath their feet, which had been constant but inoffensive, abruptly became something impossible to ignore and even more impossibly, something that recalled the proprietor's youth. Once a year, as though to make up for its mundanity, the tiny settlement that he came from indulged in an unexpected tradition:
It set juvenile raptors loose among the streets.
How such a bizarre custom came to be and why anyone thought it would be necessary to ask adolescents to collect feathers from rampaging lizards was something the proprietor never understood. But like every immigrant from the town, like every man or woman born to those hills, he carried with him memories of how the world shuddered and shook each year under the feet of that annual stampede.
This was worse.
Much worse.
The cafe that sat opposite his own in their cul-de-sac gave way like a broken leg, even as animal bodies flooded the streets, tumbling over themselves in a downpour of fur and claws and howling throats. Under other circumstances, the proprietor might have delighted in the sight, but there wasn't time. There weren't even words to describe what he was seeing. Lemurs swung between the balustrades, hunted by hawks. Bovines of varying sizes, cats saber-toothed and more mundane. The sound of shattering porcelain tugged at the proprietor's attention.
#figure(image("011_Unbowed, Part 3/10.jpg", width: 100%), caption: [Uncage the Menagerie | Art by: <NAME>], supplement: none, numbering: none)
He looked and he laughed, half-hysterical, half in wonder of the situation. There were #emph[bulls ] in their local china shop, chasing its pomaded clientele out onto the streets. And everywhere in between, humans in the grimiest clothes, barmaids and butchers and bare-chested sailors, whooping in glee as they ran in between the chaos, barely conscious of the danger. Unlike the owners of the stores, #emph[they ] treated this like a festival, a celebration as primal as anything the proprietor could recall.
In between all of that, there were the dinosaurs:
Yes, the raptors from the proprietor's youth, only full-grown and radiantly feathered. Packs of slow-moving aegisaurs, lowing like bulls. Spinebacks and swordtooths, working to keep ahead of the monstrosaurs, the tyrants, the dusk-dark deathgorge scavengers. These took no interest in the roads. They carved new ones for themselves, crashing through the city, knocking the buildings to the ground. The herbivores took their desecration of Luneau a step further. #emph[They ] paused to gnaw at the city's vertical gardens, nibbling its flowers down to the roots.
As the cultural district of Luneau evacuated from their respective homes and businesses, the deluge of wildlife quickly demolishing everything in their path, the proprietor let out a laugh, one delirious with confusion. He realized then what it was: these creatures weren't just unexpectedly everywhere, they were each three times their usual sizes, too massive to be plausible. How was this happening? Nothing of this seemed real.
A noise caught his attention. He turned to see a pair of monstrosaurs staggering through the teeming bodies. A new breeding duo, brought to Luneau to replace the last. But that wasn't what drew his attention. No, it was the woman seated atop the female's skull, expression set with a look of grim satisfaction.
#v(0.35em)
#line(length: 100%, stroke: rgb(90%, 90%, 90%))
#v(0.35em)
If Luneau chose to rebuild, Vivien decided coldly, it would be decades before they succeeded. She crawled to a crouch, balancing atop the monstrosaur's head, and leapt as they passed a balcony. Vivien somersaulted nonchalantly to a pause, rising to her feet in a smooth motion. She dusted her smock. There'd be a need to find actual leathers, something that wouldn't catch in bramble and tear at the slightest provocation. Luneau's tastes, even at its humblest, were entirely too impractical.
A brontodon lumbered past Vivien's perch. Was it the one from her naval voyage? It was hard to tell. The passage across the ocean felt like a lifetime ago. Certainly, she bore hopes that it was the same brontodon. While hardly the danger the carnivores of the Royal Menagerie might present, it'd still be an entity to fear. Especially if its species was inclined toward grudges, toward long memories. Perhaps it'd find a mate in the wilderness of Luneau. Whatever the case, it would be a long time before the vampires of the city would trouble the rest of the world. There were dinosaurs in their jungles now, more than they could ever hope to handle.
Vivien draped herself over the rails, looking over the anarchy she'd unleashed on Luneau. The Royal Menagerie, or what remained of it, had begun to discover the vertical gardens of the city. She smiled, more pleased with the situation than perhaps warranted. But the sojourn through Ixalan had been an enlightening experience.
Almost unbidden, her hands strayed toward the Arkbow. Vivien hadn't considered how genuinely simple it was to remove the relic from her person, or the looming risk of it being taken and used by outside parties. Something needed to be done about this. Vivien wouldn't tolerate an encore. But perhaps the answer lay with the inhabitants of the Arkbow.
The monstrosaur had proven itself exceptionally useful. Even more so than any of Vivien's other acquisitions. And how could it not? It was larger, more ferocious than everything else in her arsenal. If Vivien continued to find herself bigger prey to hunt, she might have an answer.
She closed her eyes. The membrane that split the planes was thin here, hardly more substantial than a curl of skin. Through the film, Vivien could almost see the next world. #emph[Dragon. ] The word bounced through her brain, settling into the image of colossal beings, ancient and frighteningly strange, beings with lungs full of fire and mocking laughter. Nicol Bolas wasn't the only dragon in the Multiverse. There were others. Smaller, less sly, but dragons nonetheless. If she learned to harness their power, if she could learn how they functioned, she might be able to learn the secret to Nicol Bolas's destruction.
But first, she needed a target.
Distantly, Vivien recalled conversations about Shivan dragons, the name only ever whispered in low voices. For fear, the Ghitu elders had said, that they might stray into their settlements, drawn by the sound of their names.
But if a Shivan dragon found itself lured to her, might that not be for the best?
In the distance, Luneau rallied against the insurrection.
Nothing slipped through the dusk, no sound except for the distant clamor of angry men-at-arms, and elephantine animals bellowing in challenge. Vivien rucked her brow before she chuckled good-naturedly.
#figure(image("011_Unbowed, Part 3/11.jpg", width: 100%), caption: [Sulfur Falls | Art by: Cliff Childs], supplement: none, numbering: none)
The Planeswalker rolled her shoulders and breathed in. She raised a hand, palming the air, feeling the structure of the universe beneath her skin. And then she pressed down and the Multiverse, viscous as honey, relented under the pressure, swallowing her from arm up. Vivien spared Ixalan one final look, before she flashed into the next plane and felt the hard, hot air of Shiv on her skin.
|
|
https://github.com/Myriad-Dreamin/tinymist | https://raw.githubusercontent.com/Myriad-Dreamin/tinymist/main/crates/tinymist-query/src/fixtures/type_check/fn_named.typ | typst | Apache License 2.0 | #let foo(d: 3) = d
#let x = foo() |
https://github.com/polarkac/MTG-Stories | https://raw.githubusercontent.com/polarkac/MTG-Stories/master/stories/026%20-%20Eldritch%20Moon/008_The%20Promised%20End.typ | typst | #import "@local/mtgstory:0.2.0": conf
#show: doc => conf(
"The Promised End",
set_name: "Eldritch Moon",
story_date: datetime(day: 27, month: 07, year: 2016),
author: "<NAME>",
doc
)
#emph[Innistrad faces destruction. Emrakul has risen, and the Eldrazi titan has brought with her a plague of horrors and mutations that threatens to overwhelm all other life. The Gatewatch has assembled at Thraben, and the recent arrival of Liliana and her zombie hordes have bought them time and room to formulate a plan.]
#emph[But will any plan be enough to conquer Emrakul?]
#v(0.35em)
#line(length: 100%, stroke: rgb(90%, 90%, 90%))
#v(0.35em)
#strong[Liliana]
It was a pleasure to watch the so-called Gatewatch contort and agonize. Gideon's poorly restrained frustration; Nissa's discomfort; Chandra's impatience; Jace's pained indecision. Jace was in his favorite place—caught in the middle due to arbitrary restrictions he had made up, and wondering why life's decisions were always so difficult. #emph[You're never going to change, are you? ] Liliana couldn't tell whether it amused or disgusted her. #emph[Both, sometimes.]
#figure(image("008_The Promised End/01.jpg", width: 100%), caption: [Dark Salvation | Art by Cynthia Sheppard], supplement: none, numbering: none)
A moonfolk flew into the clearing, her eyes wide and breath short. She took no notice of the large ring of zombies protecting them from Emrakul's minions, though she did look up at the grand spectacle of Emrakul; it was impossible not to. She landed next to Jace, speaking rapidly though too quietly for Liliana to hear. She stopped talking in a way Liliana would have found confusing if she hadn't already spent a great deal of time with a telepath. #emph[She must be the moonfolk Jace had mentioned. ] Jace and Tamiyo continued their silent conversation, moving closer to one another as they touched minds. Liliana frowned. #emph[Another useless mind mage, just what we needed.]
She wanted some time alone with Jace, to figure out what the endgame here was. Her zombies had brought a temporary respite. But they needed to get out of here, away from Thraben, away from Innistrad, away from #emph[Emrakul] .
As she thought of the name, Liliana's eyes were drawn upward to the towering figure hovering outside of Thraben. #emph[Why is it just sitting there?] The air felt heavy, stale. Fecund with the smell of...it wasn't the dead. Liliana was comfortable with the dead and their smell. But there was a rotten quality to this smell Liliana found troubling.
There was a sudden shifting in the air, the smell and pressure of a spring day before a thunderstorm, and in that shifting Emrakul #emph[unfolded] . Its cloud burgeoned; its long spindly tendrils lengthened and multiplied, from hundreds to thousands, to tens of thousands, more. An invisible sphere of power burst from Emrakul, rippling and hitting each Planeswalker where they stood.
Nausea roiled her stomach; vertigo twisted her mind. She had known that sickening combination of despair and sickness only a few times in her life. When her brother Josu's eyes had opened lifelessly, jet-black orbs portending doom; when she had first beheld Bolas's baleful gaze, hearing his spiteful laugh as he promised poisoned redemption; when the Chain Veil's power had first coursed through her veins, splitting her skin and cracking it open like a dry husk to let the blood,#emph[ her ] blood, seep through.
None of those moments compared to the #emph[wrongness ] she felt in Emrakul's presence. Liliana Vess had spent her whole life seeking not to die, and for the first time in her long existence she wondered if she had been pursuing the wrong goal. In the shadow of Emrakul's flowering, death seemed just another of life's superficial lies, a false hope poorly beating back the true horror awaiting all who existed.
#emph[Emrakul. Emraakull. Emraaa...]
She shook her head with force, seeking to clear her mind. She had lived too long, overcome too much, to succumb now. #emph[We must flee this plane. This...it is insanity to stay.] Not her thoughts, but the Raven Man speaking directly in her head, sounding...scared. Liliana took some pleasure in the fear. #emph[So you can feel fear. ] Her zombies moaned in unison, "Vessel of destruction. Root of evil. Flee." Liliana was startled. She was used to the Chain Veil talking nonsense about vessels and roots, but #emph[flee] . Whatever Emrakul was, the Chain Veil wanted no part of it.
The pressure in the air thickened, inducing a headache that watered her eyes with pain. The other Planeswalkers crumpled, all except Jace, who cast some type of spell in response. She bowed her head, her agonies multiplying. Emrakul outside. The Chain Veil inside. The damned Raven Man, wherever he was. She would not succumb. #emph[These are my zombies, my Chain Veil, my head. Mine!]
She stared at Emrakul, her fear receding, replaced by seething anger. #emph[How dare you] ...
There was another explosion of energy from Emrakul, a full thunderstorm that made the earlier outburst seem a brief spring rain. Liliana was forced to her knees as she screamed in rage. Her zombies moaned a single word.
"Em-ra-kuuuull."
#v(0.35em)
#line(length: 100%, stroke: rgb(90%, 90%, 90%))
#v(0.35em)
#strong[Jace]
#emph[The purple shadowed tower through rainy glass. Streaks fire heavy with top dark falling. Emrakul cackles thought with cold loop metal...]
A voice cut through the chaotic ramble, a familiar voice he was hearing for the first time. #emph[This is not going well. I will not succumb to this. I am better than this. ] Jace breathed evenly and slowly. Thought cohered. He tried to recall the gibberish dominating his mind just seconds ago, but it had already vanished, evanescent dew melting with the dawn. He was at the top of a long, grand spiraling staircase, white marble steps lined with ornate blue trim. The staircase was brightly lit though there was no obvious light source, and it extended down far beyond his sight.
Above was a tall and airy stone tower. Closer to ground, it looked like his sanctum back on Ravnica. Large stone table with piles of books, maps, and several...contraptions that whirred and buzzed. Bookcases stuffed with books everywhere the eye could see, and he gazed at them longingly. It didn't just look like his Ravnica apartments...it #emph[was ] them, except back on Ravnica there was no palatial staircase spiraling down in the middle.
#figure(image("008_The Promised End/02.jpg", width: 100%), caption: [Jace's Sanctum | Art by Adam Paquette], supplement: none, numbering: none)
And back on Ravnica there was certainly no monstrous force destroying his sanctum from above.
Hundreds of feet in the air above, Jace saw large stone blocks of the tower crumbling away, or grabbed and flung. The entire roof of the tower was already gone, revealing a darkened sky flooded with an ominous purple overcast. As Jace watched the destruction, he realized the purple overcast was not a cloud. It was a #emph[thing] . A creature. The creature resolved into a gigantic purple cloud extending hundreds of wiggling tendrils. The tendrils lashed and writhed toward the tower, accompanied by flashes of lightning and deafening booms outside. The creature had a name...
#emph[Emrakul] . The name sounded strange even as he said it, a word he should not know, a word he #emph[could ] not know. Or perhaps that was the word underneath the word...Jace paused, chagrined at how effortless losing his train of thought was. #emph[Focus] . #emph[Emrakul] . A...thing. An Eldrazi. #emph[The] Eldrazi. Jace's mind struggled to encompass the nature of the entity. His head hurt, a dull, pounding ache that grew with each contemplation of the Eldrazi titan outside. #emph[So don't think about it. Where am I? What is this place?]
More memories returned. He hadn't been in a tower. He had been in Thraben, besieged by countless hordes of Emrakul's minions. They all were. Gideon. Tamiyo. <NAME>. #emph[Liliana] . She had made a surprise appearance, leading a host of zombies to save them from the cultists and creatures driven mad by Emrakul. #emph[Liliana came back. She...]
A loud peal of thunder rattled outside and the ground quaked briefly beneath his feet. As the ground shook, Jace's head began to pound. Lightning flashed, illuminating Emrakul's tentacles as they tore off more huge chunks of the stone structure. The tower was large and massive, but Emrakul was dismantling it stone by stone.
A soft white light began pulsing deeper below in the stairway. The light #emph[beckoned] . Normally Jace knew enough to distrust beckoning soft white lights in a place he did not know leading to even more places he did not know. But most normal situations did not have attacking omnipotent Eldrazi titans. The white glow looked like an increasingly intriguing option.
There was a bright explosion outside, a long, deep purpleness followed by a deafening roar of thunder. The entire tower reverberated as lightning struck it. Jace crumpled to the ground in pain, his head throbbing with agony. #emph[What is happening to me?] And then another voice, his voice, but coming from somewhere outside, spoke with the force of command. #emph[Move. Move now. Go downstairs.]
Jace looked up through the ruins of the tower into the ravening purple maw of Emrakul, its endless tentacles wrapping themselves around more and more of the stone bulwarks. He picked himself up off the floor and stumbled to the stairway. He decided the voice,#emph[ my voice] , was right. It was time to leave. He descended into the depths of the tower.
#v(0.35em)
#line(length: 100%, stroke: rgb(90%, 90%, 90%))
#v(0.35em)
#strong[Liliana]
Liliana's blood was on fire, her mind in shreds. One force kept her coherent—rage. #emph[Those are my zombies. Mine! You will not have them!] Without conscious thought she drew deep on the power of the Chain Veil, and pushed back against the might of Emrakul. She could feel the Eldrazi's blighted touch, a touch now so powerful it affected even the dead. But even that baleful touch was no match for Liliana's necromantic prowess backed by the force of the Chain Veil. She felt her zombies return to her.
The power coursing through her veins was exhilarating. Each previous time she used the Veil there was agony and rupture, but somehow this time her rage inoculated her from the worst of the Chain Veil's injuries. #emph[Perhaps that is the answer to unlocking the Chain Veil. I never wanted it enough.]
#figure(image("008_The Promised End/03.jpg", width: 100%), caption: [Rise from the Grave | Art by <NAME>], supplement: none, numbering: none)
Voices still whispered to her from her zombies, and from the Veil directly in her mind. "Vessel of destruction. Root of evil." Those weren't the only voices she heard. The Raven Man added his stultifying tones. #emph[We must leave here. This is madness. I thought you wanted to conquer death. The entity you face here is older than time, and more powerful than you, even if you wielded a hundred Chain Veils! We must leave! ] The Raven Man tried to issue it as a command. Never had he sounded so naked, so vulnerable.
Liliana spared a glance at the other Planeswalkers. Chandra, Tamiyo, and Gideon were sprawled on the ground, unconscious. She briefly reached out with her power, but their forms did not respond to necromantic touch; they all still lived. Nissa was rooted in place, screaming, the words emanating from her mouth gibberish. Green and purple energy pooled around her, clashing, ebbing and flowing. Jace was the only one who stood and seemed to be conscious, though he took no notice of her. She noticed a blue shimmer around him, a penumbra that extended to all five of the other Planeswalkers. All except her.#emph[ Is that what's keeping you alive?]
The penumbra did not extend to her. But she didn't need his help. Liliana had known considerable power, power partnered with the wisdom and ruthlessness born from two hundred long years of life. But she knew none of that would have protected her from the mental onslaught from Emrakul. She would have been obliterated, except for the power of the Chain Veil.
Power she now wielded, and wielded gladly. She laughed with the thrill of it. It was the closest she had yet come to the nigh-omnipotence of her former self. #emph[I can do anything. ] Still the voices of the Veil whispered in her head. #emph[Vessel. Vessel of destruction. We must flee the World-Ender. The World-Creator. Vessel! ] The Raven Man's voice choked with panic. #emph[Listen to the Veil, you idiot! Flee!] Her zombies. "Root of evil. Vessel of destruction. Vessel!"
Liliana laughed, a laughter suffused with rage and power. "I. AM. NOT. A. VESSEL!"
She shut down the voices of the Veil and the Raven Man both, silencing them abruptly. She could feel their fury and impotence as they railed against her. #emph[All that matters is my will. My desire. Nothing can stand before me.] She tapped into the Veil, harnessing more power than she had ever dared before.
#emph[I don't belong to you. You belong to me.]
She gathered the energies of the Veil, harnessed them to her own considerable power and experience. In the throes of such power she no longer felt Emrakul's mental assault.
She turned her full attention to the gigantic Eldrazi titan. As if it recognized her growing power, the titan was moving slowly in her direction. #emph[Everyone seems to be afraid of you, Emrakul. ] She laughed again, a cackle as she reveled in her power. #emph[No one thinks I can beat you. Let's find out.]
#v(0.35em)
#line(length: 100%, stroke: rgb(90%, 90%, 90%))
#v(0.35em)
#strong[Jace]
As Jace descended he would occasionally glance up, but shadows obscured all but a few feet behind him. #emph[I guess these stairs only go down. ] He thought he should find being shepherded along an unknown corridor down into the depths of a strange tower alarming, especially accompanied by the continued assault and thunder he still heard above, but he was calm. #emph[Down here is definitely safer than up there.]
The stone wall next to him began shimmering. As he watched, the stone turned to glass, or at least some type of transparent material. The entire wall next to him, from steps to ceiling, transformed into a clear pane. Through the window was a scene, like a diorama children would make for school, but this diorama moved.
The central figure in the scene was Gideon. He was squaring off against some kind of celestial being who towered over him. Literally celestial—the figure was made of a starry night sky. The celestial figure had two large black horns framing a blue, non-human face. He wielded an impossibly large whip with a human skull in the handle. Gideon looked suitably Gideon-esque, square jaw, golden sural, and gleaming armor intact. But the look on his face was not the Gideon Jace knew at all. This Gideon looked worried, almost scared. There was anger on that face...but also fear. #emph[Interesting] .
#figure(image("008_The Promised End/04.jpg", width: 100%), caption: [Erebos, God of the Dead | Art by Peter Mohrbacher], supplement: none, numbering: none)
Around Gideon stood the other members of the Gatewatch. Chandra, her hands and head blazing. Nissa. Even a Jace. #emph[Surely I'm taller than that? ] The celestial figure spread his arms out wide, whip to the side. He spoke with a deep, resonating voice that seemed to bubble up from the ground. "And what is it you, <NAME>, most desire? What do you truly want?"
"No!" Gideon shouted, his face contorted in defiance and pain. "There is nothing you can offer me, Erebos, nothing! From you, all is poison."
The being, #emph[Erebos] , raised his whip. "It is not an offer, mortal. Tell me true what you most desire or I will kill your friends, one by one."
Gideon's shoulders slumped, his sural retreated back into its sheath. He looked up at Erebos, his face a mixture of anger and despair. "I most desire..." he paused, drawing a deep breath, "I most desire to protect others, to save them..."
"You lie." Erebos's whip lashed out, and as it struck the Jace next to Gideon he disintegrated, his flesh dissolving upon its touch. #emph[I really don't like watching myself die. ] Gideon screamed and lunged, his sural flashing, but Erebos stood unmoved. He raised his hand and Gideon was flung backwards.
"You cannot defeat me, mortal. You never have. You never will. Tell me truth and I will let the rest of your friends live."
There was a loud peal of thunder outside, #emph[Emrakul, that's Emrakul] , and Jace could not hear Gideon's reply over the din. Whatever Gideon's answer, Erebos was not satisfied. Once more the whip lashed, and now Nissa disintegrated with its touch. Gideon flinched as Nissa was struck down, but did not attack this time. Chandra stood there looking blank, her flaming hands at her side doing nothing. #emph[This scene is definitely not reality. Is it inside Gideon's head?]
Gideon's voice crackled with anger. "I want to defeat you, to tear you down so you can no longer..."
"No. You continue to speak lies." Erebos's voice, in contrast, was placid as a graveyard. Another lash of the whip, and Chandra vanished. "Must you lose everyone before you acknowledge truth, mortal? All your stubbornness to what end? You are determined to feel the most pain." Erebos's whip danced with its master's touch. "What do you want?"
Gideon raised his head to the skies and screamed, "I want..." but before he finished his sentence the window went dark.
Jace stayed still, silent, stunned at all he had witnessed. #emph[Who is Erebos? What pain is Gideon going through? ] Jace had had no idea his friend was suffering this way. #emph[And my ignorance about Gideon is matched by my lack of knowledge of what is going on here.] #emph[Are these dreams? Am I inside Gideon's head? The Emrakul above certainly seems real.]
The shadows pressed closer to Jace. #emph[I need to keep moving. The answers are farther down. ] He had only walked several steps when another wall went transparent. This time the scene featured Tamiyo.
#figure(image("008_The Promised End/05.jpg", width: 100%), caption: [Tamiyo, Field Researcher | Art by Tianhua X], supplement: none, numbering: none)
She sat hunched on a small workbench, poring over a large unfurled scroll on a dusty table. The sole illumination in the scene was a candle, but it gave off far too much light for its size. Behind Tamiyo were shelves full of books, and more piles of books beside them. Jace felt a nostalgic pang. #emph[To be surrounded by nothing but books and all the time to read them. ] That hadn't been his life for some time now, and wouldn't be again any time soon.
Blood began leaking from one of Tamiyo's eyes. It started with a slow drip, each drop hitting the table with a small #emph[plip] . As she continued to read the scroll, the other eye began dripping blood as well, each drop alternating with each other. #emph[Plip-plip. Plip-plip. Plip-plip.]
Jace watched in horror as flesh-like lattices began to grow over Tamiyo's eyes, covering them entirely. #emph[The mark of Emrakul.] Jace had seen too much of Emrakul's signature over the last few days. The blood continued to drip through the lattices. #emph[Plip-plip. Plip-plip. Plip-plip.]
The lattices blossomed elsewhere. Fleshy growths burst from Tamiyo's fingers, covering both hands in the weblike structures. The growths attached to the table beneath, sticking, binding her hands to the table. Now she could no longer see nor move her hands. The blood kept dropping from her eyes. #emph[Plip-plip. Plip-plip. Plip-plip.]
As she lost the use of her eyes and hands, Tamiyo whispered throughout, though no audible sound emerged. The fleshy tendrils began webbing her mouth closed, lip tied to lip with each strand of Emrakul's web. Even once her mouth was sewn shut, the lattice continued to grow, to wiggle and writhe. The tendrils extended far out from her closed mouth, and now as the blood continued to drip from her eyes the tendrils would seize a drop, curling around it, wiggling as the blood seeped into its oily skin. #emph[Plip-wiggle. Plip-wiggle. Plip-wiggle.]
Tamiyo was motionless, her eyes, mouth, and hands frozen. Jace had touched Tamiyo mind to mind, knew the essence of her better than most. #emph[Her ability to see, to speak, to write, these are the essential tools of her magic, her communication. These are what define her. She is being erased. ] Jace screamed and pounded on the window, but neither Tamiyo nor anything else in the room stirred. The window faded to opaque stone.
Jace slumped. #emph[What is this place? This cannot be the minds of my friends. Can it?] The shadows loomed over him. He was tired, so very tired. He slowly picked himself up and continued his descent.
#v(0.35em)
#line(length: 100%, stroke: rgb(90%, 90%, 90%))
#v(0.35em)
#strong[Liliana]
#emph[This power. It is a revelation.] All it had taken was Liliana's will. Her desire. For so long she had thought herself utterly pragmatic and driven to her cause. To not die. To kill her demon tormentors. But now she knew she had been unwilling to take that final step, to cross over the last barrier. #emph[I had restraint. How foolish.]
In front of her loomed Emrakul. An Eldrazi titan. A creature older than time, if the voices in her head told truth. #emph[I think you are a thing. A powerful thing, but something that lives. And if you live, you can die. And if you die,] another smile,#emph[ then you belong to me.]
The energies of the Veil writhed and bucked under her control. They wanted to be used to wither, to kill. #emph[Power is meant to be used. ] She gathered it, shaped it, and sent one coruscating blast of necromantic energy after another at the towering figure of Emrakul, hurling the titan back with their force.
There was a song in Liliana's head, a song blotting out all else. It was the song of power and it sang such a sweet melody. #emph[This is what I was born for. This is my destiny.] Each blast that hit Emrakul left gaping trenches of scarred dead material, large tentacles the size of towers left shriveled and withered. Some of the material regenerated, but not enough before being hit by Liliana's next blast. For the first time since blossoming, Emrakul was #emph[shrinking] . It was being thrust back. Liliana was #emph[winning] .
#figure(image("008_The Promised End/06.jpg", width: 100%), caption: [Liliana, the Last Hope | Art by <NAME>], supplement: none, numbering: none)
The Raven Man's voice cut through her delight, a cold splash of sewer water. #emph[You know not what you do, what you dare. You cannot hope to contain this power for much longer.]
Liliana's scorn draped each word she thought back in reply. #emph[Do not seek to contain me with your small expectations, little man. Today is the day I destroy an Eldrazi titan. Why? Because I dare.]
She wished the Gatewatch was conscious to watch her victory. #emph[This is what power looks like, you pathetic excuses for Planeswalkers.] She flung more blasts at Emrakul, pressing her attack.
#v(0.35em)
#line(length: 100%, stroke: rgb(90%, 90%, 90%))
#v(0.35em)
#strong[Jace]
Jace was not surprised to see the another window appear shortly. This time it was Chandra. Or at least he assumed it was Chandra. She was a little girl, but the red hair and shape of her face still suggested the woman she would one day become. Chandra was surrounded by a menacing group of guards, their gear ornate and colorful, from a place Jace did not recognize. #emph[Her home. ] The guards raised their pikes, and Chandra was sobbing, tears fighting with gasps of breath for control of her face.
#figure(image("008_The Promised End/07.jpg", width: 100%), caption: [Chandra, Fire of Kaladesh | Art by <NAME>], supplement: none, numbering: none)
One of the guards, tall and spindly, stepped forward. His face had a wide smile on it, in cruel contrast to his awful words. "We killed your daddy, renegade. We killed your #emph[mommy] . And now we're going to kill you." Jace suspected the scene wasn't real, just a nightmare in Chandra's head, but his fists still balled up. #emph[No one should have to endure this kind of pain. ] The guards moved forward with their pikes as their leader sneered, "And the best part, the absolute best part, is there is nothing you can do about it."
Chandra stopped crying and stared at her persecutors. A tiny wisp of flame flared from one eye. "You're wrong," she said, her voice not sounding like a child's at all. "There is something I can do." Her body was changing, growing, evolving before his eyes into the recognizable Chandra he knew. "Something I can always do. I can burn." Fire jetted from her head and hands.
She smiled. The guards backed away, uncertain. She took a step forward. "I can make you burn." The leader burst into flame. He screamed in agony. "I can make all of you burn." Now the other guards were on fire, their skin crackling and bubbling, their high-pitched cries piercing the sky. "I can make the whole world burn." Heat and light and fire burst forth, an incandescent whiteness of energy, enveloping and burning everything, including Chandra. Chandra screamed, though whether in agony or delight, Jace could not say.
#figure(image("008_The Promised End/08.jpg", width: 100%), caption: [Chandra Ablaze | Art by <NAME>yle], supplement: none, numbering: none)
The window faded to stone, but Jace still felt the heat pouring off the walls. It was one of the first principles of illusions. #emph[Just because it's only in your head doesn't mean it can't kill you.]
Gideon, Tamiyo, Chandra...but no Liliana yet. Urgency propelled him downstairs and he looked eagerly as the next window appeared. His face fell when he saw the figure behind the wall. #emph[Oh, Nissa. ] He tried not to be disappointed, though he found it hard to understand the elf Planeswalker.
The background behind Nissa looked exactly like the outside world—the dark, purple sky, the odd flashes of light, the looming shadow of Emrakul, Liliana and her zombies. Nissa stood in agony in the center. She screamed. She #emph[writhed] . Twisting, contorting, shaking, but those were not the only injuries done to her. There was something...#emph[wriggling] ...on her hands.
#figure(image("008_The Promised End/09.jpg", width: 100%), caption: [Grotesque Mutation | Art by Dan Scott], supplement: none, numbering: none)
As Jace peered closer, he noticed Nissa's fingers had tiny fingers growing on them, tens of tiny fingers extending out of each finger. And then he saw hair-thin fingers growing out of the tiny fingers. He shuddered, but as he saw her eyes he let out an involuntary scream. From each of Nissa's eye sockets protruded several tiny eye shoots, and out of each eye shoot grew several tinier ones. Green energy flashed out of her eyes and hands, but interlaced in the green was a dark, violent purple.
#emph[Emrakul is Emrakul is Emrakul forever.]
Jace didn't know where the thought came from, but even in its nonsense it felt true. #emph[Forever and ever and ev...]
"Negglish pthoniki ab'ahor!" gibberish words spouted from Nissa, or if not gibberish then no language Jace had ever heard. As she spoke, her head spasmed, and in between words her tongue would loll out of her mouth. #emph[What are those things on her tongue? Oh, no. No no no no. I am hitting the limit of details I want to notice. No, I am well past the limit.]
#figure(image("008_The Promised End/10.jpg", width: 100%), caption: [Fevered Visions | Art by <NAME>], supplement: none, numbering: none)
As nonsense and spittle spewed from her mouth, rational words began to infiltrate the gibberish. "Shigg epsi-everything chut'ghb ends! Gilma-everything chts-dies!" The spasms subsided, her voice gaining strength and poise. Now the energy emanating from her was all purple, a deep purple with no green to be found. She raised her head and arms to the sky and shouted.
"Growth! Growth is the answer! The only answer! Entropy cannot lose. But must it win? Of course sacrifice must be made. Why do they fight it? Eternity without sacrifice offers only the screaming torpor. Blood must be churned, churned thick. Why do they fear life? Why do they fear #emph[truth] ?"
Nissa uttering recognizable words made no appreciable impact on Jace's ability to understand her. Even though he knew it was useless, he reached out to her, mind to mind. #emph[Nissa, help me. Help me understand. What are you saying?]
Nissa shifted and brought her gaze to meet Jace's directly through the window. #emph[She sees me.] Jace shivered, frozen to the spot. He could not move, could not look away. Her eyes glowed darkly purple. She spoke directly to him. "I can do anything I want. Anything at all. Remember that. The only thing saving you is..." the purple glow faded, the nimbus around her dissipating, "...I don't want anything."
She stared at him for long seconds, her face distorted and grotesque as her extra eye shoots continued to squirm. The window mercifully faded to stone.
Jace remained frozen in front of the wall. He shook, sweat beading down his hair onto his face and the back of his neck. The shadows continued to press from above. #emph[How long have I been on these stairs? What is happening to my friends? ] Down still beckoned, brightly lit and pulling at him. But he didn't want to move. He didn't want to do anything. #emph[Sleep. I could sleep. I might not wake up, but would that be so bad? ] His eyes drooped, and a pleasant fuzziness crept over his mind. He sat on the stairs. #emph[I am so tired.]
Drifting off to sleep made him think of Liliana. He didn't know where she was, or what she faced. #emph[She isn't here. She's not in this place.] But if he acknowledged the truth, she never needed him anyway. "Sad. For a while. And then I'll get over it." That's what she had said back in her castle, comparing the possibility of his death to that of a dog's. #emph[A dog.] #emph[Would she really not care any more about my death than a dog's? That can't be true. A dog. ] The thought gnawed.
#emph[Sleep, how could I possibly be thinking of sleeping right now? What is happening to me? ] He couldn't tell whether it was true exhaustion, or a more malevolent effect. #emph[Does it matter? The solution is the same. ] He stood. #emph[Keep going downstairs. Figure this out. Don't die. Beat Emrakul.] He thought of Liliana as he continued his descent.
#v(0.35em)
#line(length: 100%, stroke: rgb(90%, 90%, 90%))
#v(0.35em)
#strong[Liliana]
The first sign of trouble was an interruption to her tempo. Liliana had never wielded so much energy before, and she had been able to fling blast after blast at Emrakul with each breath. Breathe, blast, breathe, blast.
But though her power didn't fail her, her body did. She hesitated for a second, took a long breath, and in that gap Emrakul surged, its body and tendrils regrowing at a faster pace than Liliana thought possible. Several thick tendrils lashed at Liliana only to wither and desiccate at the touch of her magic, but several more quickly followed. Where once each blast from Liliana drove Emrakul back, now it was all she could do to stand her ground.
#emph[You are mortal. You have limits. It does not.] The Raven Man's voice stabbed her brain with cold whispers. #emph[Look upon this grass and dirt, you fool. You have made it your graveyard.]
She screamed in rage as she let loose more blasts of power. The titan's advance halted in the face of such an onslaught. But seconds later the energy ebbed. Liliana took large gasping breaths and Emrakul's advance continued once more.
#emph[I am not going to die today] , she snarled to the Raven Man, to the Veil, to anything that would listen. To herself. Emrakul and its tendrils continued their unceasing assault. #emph[I am not going to die today.]
#emph[If you're lucky, Liliana, your death is now the best possible outcome of today. You have doomed us both. ] The Raven Man spoke without contempt, without hatred or fear. He sounded...resigned. For the first time since rescuing the Gatewatch, Liliana was afraid.
#v(0.35em)
#line(length: 100%, stroke: rgb(90%, 90%, 90%))
#v(0.35em)
#strong[Jace]
Jace expected another wall to turn transparent, to show him a scene from the mind of Liliana. What he did not expect was the stairway to end in a door.
It was a thick oak door, banded with iron, with no port or keyhole. Just wood and iron, framed in the same thick stone as the rest of the stairway. He put his hand on the door. A voice screamed,#emph[ no no no no no no] , and pure terror seized his brain. But the voice trailed off, the terror receding. Jace looked up the stairs. The shadows did not press closer but nor did they part to reveal the way he had come. If he wanted to progress, it was through this door. He pushed the door forward and stepped through.
The room was formless and colorless. Vertigo overcame him as his mind struggled to perceive the space. Jace felt the long pull of forever, an endless recursion looping into terror #emph[to never know the peace of oblivion to just to just to just.] ..until reality snapped into place. The nothingness surrounding him materialized into a field of white.
There was an angel in front of him.
She approached, and Jace noticed the space slowly taking shape around her, around both of them. They were in a real place, a room, a copy of the sanctum he had first begun this bizarre journey in. His sanctum. The angel was tall, taller than any angel he had seen before, even Avacyn. And her wings were gigantic, thick and dense. They furled behind her almost like a mushroom cloud...
Jace broke out into a cold sweat pumped by a racing heart. #emph[No oh no oh no...]
Her face was hidden by a large hood, but in very plain sight were the two swords she carried, one in each hand. Her tunic frayed at the hemline into ribbons, tens of ribbons, no, hundreds, and they seemed to multiply as Jace watched. They wriggled and writhed. As if they noticed Jace, the ribbons of her tunic probed the air in front of him, alive. #emph[If I scream, I don't know if I'll ever stop. So I better not scream. Would crying help? I'm open to crying if it will help.]
#figure(image("008_The Promised End/11.jpg", width: 100%), caption: [Shrine of the Forsaken Gods | Art by <NAME>], supplement: none, numbering: none)
Jace laughed in a combination of amusement and fear. #emph[I'm so glad I find myself funny] . The laughter broke through the paralysis, sparking his mind. #emph[I know this angel. I have seen her before. ] Or at least he had seen statues of her before, back on Zendikar. "Emeria?" he croaked, the word sounding foreign on his lips.
She looked at him, but he could not see her face cloaked in the hood. Jace took careful note of the ribbons and the swords, but nothing moved to attack him. His confidence grew.
"Are you...are you...Emeria? Are you...Emrakul?"
"May I sit down?" The voice was a female voice. Light, almost airy. Jace might have even said it trilled, in different circumstances. #emph[Not these circumstances, though] . Jace couldn't see any lips move through her hood to produce the voice, but it sounded like a normal voice. #emph[Normal-ish.]
Jace was so busy analyzing the voice it took him a moment to parse what was actually asked. "You're asking me?" Of all the surprises of this day, getting asked a polite question should not have ranked high on the list. #emph[But it might be at the top.]
"This is your home," a pause, "Jace. <NAME>." As she said "Beleren" it came out syllable by syllable.
#emph[I'm very afraid right now. I'm also so very curious. What an odd juxtaposition] .
"I am just a visitor here. So, may I?" She stood waiting.
#emph[How much more surreal can this day get? ] He was confident he didn't want an actual answer to the question. #emph[Remember what's important—don't die. Figure this out. Beat Emrakul.] His mantra. He added another sentence. #emph[Invite Emrakul for a cup of tea. ] He smiled, and the smile reached his face. "Please, by all means. Please sit down." Jace waved airily at the large stone table, and Emeria—#emph[no, I don't know what this is, stop assuming I do] , the angel sat down at the table.
She sheathed both of her swords behind her back. When her hands came back to the table they were holding a large scroll, a scroll with iron bands. #emph[I've seen a scroll like that before. Where? ] "You do not mind if I work while we talk, do you?" Her lilting voice sounded like it could come straight from an Azorius guildmage wanting guidance on a point of protocol.
#emph[Embrace this surreality. Stop fighting it. See where it goes. ] "Of course, please. I would not want to keep you from your work." She nodded and unrolled the scroll. A creeping sensation nagged at the back of Jace's head. #emph[Where have I seen that scroll?] But he could not place it. From somewhere a long stylus appeared, and she began writing in the scroll.
Jace cleared his throat. "Well, since we're, umm...you know, having a #emph[talk] . Who are you exactly? What is this place? What is going on?" Jace could not afford to be picky about where to get answers from. He could not quell his normal instinct to mind-read, #emph[not knowing is so much worse than insanity] , but there was...nothing. Nothing he could latch on to. #emph[Secrets are no fun when they stay secret] . He was going to have to do this in the mundane style everyone else had to. Through words. Words with an Eldrazi titan.
"Everything ends. Everything dies. Wholeness is always behind us. Time points only one way." There were echoes of Nissa's earlier insane comments, but Jace didn't understand it any more coming from the angel. She didn't look up as she wrote, her hood obscuring whatever light voice uttered those strange words.
"Are you Emrakul?" Jace didn't know what he was risking, and increasingly didn't care. #emph[Caution is for those with a winning hand.] "What do you want?"
She paused her writing, considering the scroll. "This is all wrong. I am incomplete, unfulfilled, #emph[inchoate] . There should be blossoms, not barren resentment. The soil was not receptive. It is not my time. Not yet." The way she said, #emph[yet] , sent a shiver through Jace's neck. She resumed her writing, blotting out a large section of dried ink.
"Enough!" Jace shouted. "You are here for a reason! You could kill me any number of ways, with your swords or your tentacles, but you're not. You're sitting here, uttering nonsense...why? I don't understand what you're saying and I don't understand what you want. Help me. Please." As Jace talked his anger cooled, but it was replaced by something even more useful. Focus. He felt a fog clearing, a fog that only in its recession revealed how much it was obscuring.
"Do you play chess?" The voice continued as if Jace had been spouting as much nonsense as she was. Jace was tempted to shout again, but didn't think it would do much good. Besides, he #emph[did] play chess. He was quite good at it.
"Yes, I play chess."
"Would you play a game with me?" She stopped writing and rolled up the scroll.
"I'm not sure I have time to play..."
"If you win, this all stops. I will give you all the answers you want." She tucked the scroll behind her.
Jace suspected a trap, but he was #emph[really] good at chess. "And if you win?"
"I am already winning, <NAME>. Let us play a game."
"Uh, there is one problem." Jace glanced around. In his real apartments back on Ravnica there was a chessboard, a quite fancy one he had been gifted by the Boros, but in this strange simulacrum, no such board was visible. "I, uh, don't seem to have..."
The angel waved her hand, and a chessboard appeared on the table taking up the space where the scroll used to be. The board and pieces were thick stone, solid with fine detail. Jace raised an eyebrow, but if the angel noticed, she made no sign. #emph[I suppose if she just limits herself to creating chessboards, we will be okay. ] "Shall we play?" She gestured toward the board. Jace's side was white, and he took the first move. #emph[Magnanimous of her.]
"You will need to move faster, Jace. Time is running out." #emph[Faster? ] He was moving near instantaneously. She did not seem a particularly skilled player, and Jace began to see the outline of a possible checkmate in six or seven moves.
"Communication between us is difficult. I cannot talk to you. I do not even really know you exist. But you, your brain, it is very...#emph[adaptable] ." There, a blunder. He had mate in five moves. Confident of his victory, he paused. She was saying actual information he could use.
"So, then, what is all this?" He waved his hands around them. "What are you? How does my #emph[adaptable] brain make this happen?"
"You know those answers better than I." She put her hand on a piece, hesitated. "Or, at least, a part of you does. How is your headache?"
#emph[How did she know about my headache? ] In truth it was reduced to a low residual throb, noticeable but not debilitating. "It's...it's fine. So you are not Emeria? Are you even real?"
"I was personified a long time ago. Forces cannot be reasoned with. Agency does not exist in propagating waves. If you take shortcuts to try and grapple with what you cannot perceive, cannot even comprehend, who am I to gainsay? No one. You. Perhaps."
The headache grew. Jace and the...whatever it was exchanged several more moves. Checkmate was a move away. The more Jace considered, the more this all possibly made bizarre sense. This was not Emeria. This was not Emrakul. This was his mind's attempt to make sense of whatever pressures or emanations he was feeling from Emrakul. He had to personify it to even have a chance of making sense of it. But to believe in that personification was to invite death. Or worse. The vertigo lurched. #emph[Forever and ever and ever and emer and emra...]
#emph[Enough] . He put his hand on his queen, moved it into position. "Checkmate." He smiled. He was not sure what winning this game meant, but it felt good to win, to win #emph[something] . She stopped, looked at the board.
"So it is." She put her hands to her hood and lowered it. Jace flinched instinctively, suddenly certain he did #emph[not] want to know what she looked like...but she looked normal. Like an angel. Like the statue he had seen back on Zendikar. He took a long, slow breath, exhaled.
One of the pawns beside his queen started to writhe and flow. Hands and a small stone sword appeared on the pawn, and it turned to stab the queen. The queen piece shrieked, blood pouring out of its side. It toppled to the ground, bleeding and shaking. Dying. The rest of the board was pandemonium as more of Jace's pieces transformed. Mutated. They attacked one another mercilessly, killing each other, until the few remaining pieces pirouetted to face the other side of the board. They now all held weapons, weapons dripping with blood, and began a slow march towards Jace's king, who now resembled nothing other than Jace himself.
Jace gaped at the chaos. "Wha...buh...tha...that's...that's not fair! You cheated! You can't do that! Those are my pieces!"
#figure(image("008_The Promised End/12.jpg", width: 100%), caption: [Evacuation | Art by <NAME>], supplement: none, numbering: none)
The angel's face began to melt, chunks of flesh sloughing off as the rest of her—wings, swords, ribbons, and all—began to dissipate into a purplish smoke. But the voice remained.
"They are all my pieces, <NAME>. They always were. I just no longer want to play."
There was a huge crackling explosion outside accompanied by a large grinding sound. The top of the room was torn away, revealing the now-familiar sight of Emrakul, the gigantic mushroom cloud with its hundreds of tendrils and flashing lightning, eating away at the room.
The voice continued, light and airy as a breeze. "It is coming, Jace. #emph[I] am coming. Keep moving. Find your answers. But quickly. Time points one way, and it does so with #emph[hunger] ."
A door appeared at the end of the room, ornate with a bright blue glow behind it. Jace took another look at Emrakul above, and fled.
#v(0.35em)
#line(length: 100%, stroke: rgb(90%, 90%, 90%))
#v(0.35em)
#strong[Liliana]
Liliana did everything she could to stay alive.
She had been using some of her power to hold back the effects of using the Chain Veil. She kept her skin from cracking, her veins from spilling blood. In taking over the Chain Veil completely, she thought she had discovered the secrets to its true use.
She was wrong.
But as agonizing as her skin splitting and her veins rupturing was, it was better than oblivion against the onslaught of Emrakul. She still drew on immense amounts of power, but now all that power was put to one use. Staying alive for another moment.
Her moments were running out. As Emrakul lashed and flayed against her magic, she directed her zombies to attack. They bit, grabbed, struck against Emrakul, like fleas versus a storm, with similar effect. Zombies were destroyed by the hundreds under Emrakul's assault, and hundreds more disintegrated without touch as Liliana instinctively drew upon their animating magic to fuel another moment of survival.
#figure(image("008_The Promised End/13.jpg", width: 100%), caption: [Liliana's Elite | Art by Deruchenko Alexander], supplement: none, numbering: none)
If there was any consolation in her impending defeat, it was the blessed silence inside her head. There were no voices from the Raven Man, no chanting or whispers from the Veil. Even as her reality was blood and pain and a desperate fight to stay alive, her mind was hers and hers alone. There was consolation there, if she chose to take it.
A large tendril, thick as her torso, broke through and grabbed her around the waist. She screamed in rage and blasted through the tendril, its desiccated flesh sloughing off. She coughed up blood, swaying, even as more tendrils came.
She was going to die here.
She looked at the other Planeswalkers, their bodies still protected by the large clearing her dwindling zombies provided. Nissa was no longer screaming, but lay unconscious like the rest of them. Only Jace stood, the blue shimmer still in place protecting them from...something, but he didn't move, didn't speak.
"Jace!" Her scream produced no response. No sign of recognition.
"Jace, you bastard! You better be doing something useful!" That was all she had time as Emrakul pressed. Each moment mattered. That became her mantra. #emph[One more moment. One more moment. One more...]
#v(0.35em)
#line(length: 100%, stroke: rgb(90%, 90%, 90%))
#v(0.35em)
#strong[Jace]
Jace flung himself through the open portal, seeking refuge from Emrakul's assault.
He was in a small, dark room, a copy of one of his innermost sanctums back on Ravnica. There, standing in front of him, was himself.
#figure(image("008_The Promised End/14.jpg", width: 100%), caption: [Jace, Unraveler of Secrets | Art by <NAME>], supplement: none, numbering: none)
With all the other insanity Jace had experienced since first waking up in the tower, facing himself was one of the more benign confusions.
"Oh, this should be good."
The copy didn't smile, didn't move. "You got here. About time. But I don't know you're me." He pondered for a moment. "Answer this riddle."
"What? I'm done with riddles. I need answers. What—"
"First, a riddle," the copy said.
"You must be joking. I'm not going to stand here and get quizzed by either a runaway tyrant version of myself or, worse, some malignant impostor who just wants to waste my time!" Jace ended his rant with an angry shout.
The copy stood there with a smug smile and a raised eyebrow. #emph[Am I really this infuriating? I am this infuriating. I need to work on that.]
"It's only infuriating when you know I'm right. I need to know you're me." Jace wondered if there would be permanent consequences to punching himself in the face. #emph[Probably] .
"How do I know you're me?" It wasn't the snappiest retort, but it was all he had at the moment. His brain was processing a lot right now.
"Because I'm the one with answers. You're wasting time, time we don't have." The copy tapped his foot in a way Jace recognized all too well. #emph[I don't know that I can ever interact with another human again. I'm too annoying to be with.]
He slumped his shoulders and waved a hand. "Fine, ask away."
"No bigger than a pebble but my closing covers the entire world, what am I?"
"#emph[That?] That's your riddle? Your security system to make sure I'm you? You must be an impostor, because I refuse to believe I'm that dumb."
"You still haven't answered the question. This conversation is going to end quickly if you don't." The copy's eyes glowed blue in a way Jace was perversely glad to find menacing. #emph[It's good to be reminded you can menace now and then.]
"Pah. I thought I would have come up with something difficult. Eyes. The answer is eyes." Jace stared at his copy, and then blinked ostentatiously several times to illustrate the point. "I see the whole world. Now I don't. See. Not see. How could this riddle possibly have been useful?" The copy relaxed, letting go of whatever spell he had prepared.
And then Jace understood. The point of the riddle wasn't to see if he solved it. The point was to see how dismissive and incredulous he was at an easy riddle. He nodded. #emph[Okay, this is me.] He knew the copy was thinking the same thing.
"Fine, I'm me. I mean, I'm...yes, we're each other. Probably. You promised answers." Jace reached out to read his copy's mind, but nothing happened.
"That's not how it works here. Here, we talk." Another coy smile.
"All right," Jace struggled not to clench his jaw. "Talk. Now."
The copy pondered briefly. "I still don't know all the things you don't know. Ask me questions."
"Where are we?" Jace wasn't sure it was the most pressing question, but he had been wandering this forsaken tower for the last hour, and he really wanted to know where he was.
"Really, that's the part you haven't figured out yet?" #emph[You condescending] ...Jace's anger was not abated by the fact that the condescension was coming from himself. And in that flash of anger he understood. Jace remembered.
#emph[Emrakul rising, flowering, blooming. ] Liliana had given them all a momentary reprieve from Emrakul's minions with her zombies, but none of them were prepared for the rise of Emrakul itself. The physical signs were apparent, but the mental assault was the real danger. A pressure, a pain, unlike any other he had felt before. Tamiyo's chime trick instantly dissolved. There had been no time for a plan, no time for thought.
The spell he had cast was reflexive. One he had prepared a very long time ago, to shield his mind from imminent dissolution.
#emph[I'm not ] in #emph[a tower. I ] am #emph[the tower. ] Everything snapped into place. The scenes of his friends, the conversation with Emeria, even this conversation right now, all were taking place inside his mind, given sustenance and structure by the power of his spell. #emph[Welcome to residence Jace, everyone. Hope you enjoyed your stay.] Based on the scenes he had seen in his friends' minds, he was confident no one had enjoyed it. But the alternative was oblivion, or worse #emph[forever and ever and ever and emer...]
He shook his head rapidly trying to clear the fugue, noticing his copy did the same motion at the same time. The pressure from Emrakul was increasing. Jace looked up and noticed the top of the room shaking. #emph[It's attacking. It's coming.]
"And you? Me?"
"Innistrad was a weird place. A dangerous place. As soon as I arrived I knew something was wrong. I set up some...fail-safes in the event of something disastrous occurring. Puzzles within puzzles, shadows within shadows. Emrakul is the scariest thing I, #emph[we] , have ever faced. So I made a contingency plan to keep #emph[me ] separate from #emph[me] . To work out what was really going on, and be able to stop it. Fix it. You know." And now he did know.
#figure(image("008_The Promised End/15.jpg", width: 100%), caption: [Pieces of the Puzzle | Art by <NAME>eneuve], supplement: none, numbering: none)
He was so #emph[good ] at self-alteration. He shivered, wondering which him was the real Jace. The #emph[better ] Jace. #emph[Nonsense. It's me, of course.]
"Hey there," the copy smiled. "Don't get ahead of yourself. You're only the second smartest person in this room."
"Enough." Jace's mind was starting to whir at a speed both familiar and comforting. "The plan. I better not have created you just for you to tell me a dumb riddle. We don't know how to beat Emrakul."
"Talk with Tamiyo. She was in the middle of telling us interesting things when Emrakul attacked."
"That's your useful input? Talk with Tamiyo?"
"No, my useful input is actually figuring out how to have all of us walk and talk and think normally even with the psychic equivalent of a Rakdos-Golgari kill-party times infinity hammering away at us. It's a fairly difficult trick, in fact."
"Oh. Well, thanks, me. Good job."
"Everyone is in pretty bad shape. But at least we will be able to think coherently. It's...not good out there. And there's another problem."
"What's..." even as he asked the question, the answer flowed in his brain. The two parts of Jace were merging, becoming one. There were words, but the words were spoken by each of them at the same time.
"Liliana is about to die." Jace dissolved the spell. The tower faded into reality.
#v(0.35em)
#line(length: 100%, stroke: rgb(90%, 90%, 90%))
#v(0.35em)
#strong[Jace]
He came back to chaos. Liliana was on the ground in front of him, unconscious, bleeding profusely from multiple wounds. Above them Emrakul hovered in her full unfolding, a bright lavender light glowing in the center of her body, the eye of her storm. Her tentacles, broad and thick, decimating what was left of Thraben.
#figure(image("008_The Promised End/16.jpg", width: 100%), caption: [Emrakul, the Promised End | Art by <NAME>], supplement: none, numbering: none)
Liliana's zombies were a mere fraction of what they had been before Jace's spell. The humans and beasts infected by Emrakul's madness had started to mass again, threatening to break through. Fending off Emrakul's mental assault was not going to be of much help if her minions tore them to shreds instead.
The other Planeswalkers had regained consciousness a moment after Jace, staggered and disoriented. Jace funneled focus at his friends, clearing away the cobwebs of Emrakul's attack. #emph[Chandra, Gideon, Liliana's zombies need your help. We cannot let Emrakul's minions through. ] Gideon moved first, with a soldier's decisive speed. An image of Erebos's whip flashed through Jace's mind, but he shook it away.
Chandra paused. #emph[I can...I can still try to burn her. I got this.] Her hesitation vanished, replaced by a natural confidence Jace found both appealing and mystifying. #emph[She doesn't play at confidence. It just comes to her. Weird] , he thought to himself. Jace hesitated. Trying to burn Emrakul didn't feel right, didn't feel possible. But how could he be sure this wasn't just head games Emrakul was playing with him, with all of them? Emrakul had been in his mind. He had felt her power.
He cast his thoughts to the whole group, his spell of protection keeping their minds linked. #emph[No, Chandra. Emrakul is too big. Too powerful. We can't beat her that way. I'm not sure she can be destroyed.]
#emph[Jace is right. Trying to burn Emrakul is throwing a torch into the ocean. It will not work. Even if all the leylines were available. She is too...vast. ] Nissa's voice sounded odd, distant. She was weaving vines, shoots, and leaves into poultices to wrap around Liliana's wounds, keeping her alive. #emph[Emrakul was there, at my awakening. At the moment of my spark. Perhaps it is fitting she be there at the end.]
#emph[Oh, wow, you do not get invited to many parties, huh. ] Chandra's playful voice belied her words. #emph[Enough doom talk. More how-we-win-this talk, please. I'm gonna go burn things. ] Chandra ran to the outer ring of the zombie horde, her flames beating back crazed cultists.
#emph[Jace. Remember what Avacyn said. ] Tamiyo's voice, a light breeze on a sunlit shore.
An echo chimed in his head, a mad angel saying her last words to her creator. #emph[What cannot be destroyed must be bound.]
#emph[Jace, that is the answer. That is what we must do. We cannot destroy Emrakul. We must bind her. ] Tamiyo's voice was insistent, clear. The Gatewatch had faced this same crux back on Zendikar, and there they had chosen destruction. But that was not a choice on Innistrad. Emrakul was beyond their powers. The only destruction in question was their own—along with everyone else on Innistrad.
#emph[How? Binding her may not be any more possible than destroying her. What prison could possibly hold her?]
#emph[The same prison that held all of Innistrad's horrors for hundreds of years.]
#emph[The Helvault? ] Jace was confused. #emph[Wasn't that destroyed?]
#emph[Not the Helvault, ] Tamiyo responded. #emph[Where the Helvault came from. The moon. A silver moon. I have a binding spell. A powerful one. I can attune it to the moon. But it needs to be linked to Emrakul...]
Jace's mind raced. They could do it. Jace was confident he could attach Tamiyo's spell to Emrakul. But they would need power, fuel for the spell. #emph[Nissa] ...
Nissa had been silent as she continued infusing her mana into the poultices wrapping Liliana. Liliana was breathing evenly, though still unconscious. Jace felt a warm surge of gratitude toward Nissa, but now he needed more from her. Far more. #emph[Can you power the spell?]
Nissa's voice was cool, serene. #emph[No. There are so few leylines here I can touch. So few I ] want #emph[to touch. ] Jace paused, uncertain of what to say next or how to help her. #emph[But I owe you, <NAME>. I will try.]
#emph[Owe me?]
#emph[My mind was not my own. I was trapped in a darkness brought by her rising. I was subsumed by her, far too easily. It was not...pleasant. You rescued me from that horror. You have a gift for making difficult things so very easy. I will do what I can.]
Jace sputtered. #emph[Um, thank you...it wasn't really me, I mean, I cast the spell, but I wasn't really thinking at the time, and I actually probably made it a bit worse because I didn't...]
#emph["Thank you" suffices, Jace. You also have a gift for making easy things so very hard. I am ready.]
Jace didn't know how to respond to that, so he didn't. #emph[Tamiyo, are you ready?]
Tamiyo had pulled out a scroll. Another memory flashed through Jace's mind. #emph[The angel took out a long scroll, a scroll with iron bands. ] That was where he had seen Tamiyo's scroll, in his mental conversation with Emeria. But the scroll Tamiyo had chosen did not have iron bands on it.
Jace had no more time to ponder the mystery. The space around them was shrinking. Gideon and Chandra were each holding off Emrakul's minions by themselves, but they couldn't be everywhere at once, and the zombies were close to being overrun. It was time.
#emph[I am ready] , Tamiyo confirmed. She began reading her scroll. Jace couldn't focus on the words, he was lost in the details of attaching Tamiyo's spell to Emrakul, using the knowledge gleaned from Ugin and his own hedron manipulations back on Zendikar. A glyph flashed onto the moon, incised lines glowing bright against the silvery reflection. He had to fasten that glyph onto Emrakul, the presence of Emrakul.
#figure(image("008_The Promised End/17.jpg", width: 100%), caption: [Imprisoned in the Moon | Art by <NAME>], supplement: none, numbering: none)
But the spell demanded power. Streams, #emph[torrents] , of power. Nissa strained against the earth, her eyes a bright glowing green as she wove the polluted fragments of mana left on Innistrad into something Jace could use. Jace could feel her draining the leylines, looking for every last bit of energy. It was not enough. It was not going to be enough. Nissa stumbled to the ground, her arms flailing.
They were going to lose the spell.
As Jace struggled to keep the spell going, he lost mental contact with Tamiyo. Where she had been in his mind, there was now just a cloud, a dark gray fog he could not penetrate. Tamiyo pulled out another scroll, #emph[a long scroll, a scroll with iron bands] , and began reading a second spell.
Energy flowed into Jace. He was in a wide river of mana, more magic, more energy than he had ever felt before. It felt wonderful. He took the magic, shaped it, each point on the glyph attaching itself to a node on Emrakul that Jace created on the fly. Jace unleashed the full power of the spell.
Light erupted from the moon.
A cold, silver beam struck Emrakul from on high.
It bathed the creature, enveloped it...and the creature #emph[stretched] . Toward the light, toward the moon.
The distortion was physically impossible. Before Jace's eyes the shape of Emrakul arced through the light to the moon, stretching, stretching, and then...
...#emph[snapping] .
Emrakul folded, collapsed. She crumbled like a thin parchment sprinkled with glass, compacting to nothingness in a way no creature Emrakul's size should. Or #emph[could] .
The light winked out. Emrakul was gone. They had won.
The silver face of the moon glowered with the triangular patterns of the glyph. Branded. Scarred. Sealed.
For a moment, the only sound was the stirring of dry leaves in the wind. Next to him, Tamiyo dropped to her knees and vomited.
#v(0.35em)
#line(length: 100%, stroke: rgb(90%, 90%, 90%))
#v(0.35em)
#strong[Liliana]
She still lived.
She felt exultant. She had known delight many times before. The day she had regained her youth. When she killed the demon lords Kothophed and Griselbrand, hearing their death screams. Each of those moments had felt like cheating; the best kind of cheating, where you get away with it and still win at the end.
But this moment was even sweeter. Perhaps it was because she had truly known she was going to die. Perhaps it was because she had so rashly taken on Emrakul in her pride and thirst for control, and yet none of them would be alive had she not done so. Perhaps it was because there was no more Emrakul. Its taint, its #emph[taste] , was gone from Innistrad, and everything was better in its absence.
Just thinking of Emrakul made her shiver. She had been so close to death. Or worse. She stared at the moon. #emph[May you rot there forever. Know the consequence of opposing <NAME>.]
The assorted Planeswalkers had gathered at the close of a very long day. After the battle with Emrakul was won, there were still fires to put out, eyes to close, grief to console, wounds to heal...or not, in the case of much of the trauma. Liliana didn't much care. Every time she pushed the limits of the Chain Veil she felt empty afterward, as if a part of her was missing. It had happened so many times now she was not even sure she could identify what was absent anymore.
Besides, it didn't matter. She had had her fill of good deeds for quite some time. #emph[None of you would be alive if not for me. You're lucky I don't demand payment for my rescue of this world. ] Well, she would demand payment, but not now and not from anyone on Innistrad.
It was remarkable what imaginary obligations and loyalty made people do. Take the Gatewatch. They owed each other nothing. Literally nothing. And yet here they stood, fighting for each other, willing to die for each other. Liliana was used to the effect of such relationships; she depended on them, as long as they were with her zombies. That was a reliable power dynamic. But Innistrad had shown the limitations of her approach. Zombies were great servants, but there were certain tasks they couldn't accomplish. And fighting alone was wonderful...until it wasn't. When you weren't prepared for the unlikely, and there was no one to save you from the untimely.
Once upon a time recently she had thought to leverage the emotions Jace had for her. Or had had, if she must admit. #emph[He is just a boy. A boy, and I should know better.] Jace had proven reliably unreliable, his recent success notwithstanding. #emph[What were you doing with your spell while I kept you alive? Trying to think Emrakul to death?] While she acknowledged whatever he had done had worked, it did not dramatically improve her opinion of him. #emph[A boy. I should be done with you.]
But here was an opportunity far beyond Jace and his limitations. Here was a #emph[group] . A group of #emph[friends] . Today was a revelation, a revelation of the power of friends. Manipulated correctly, friends were like better zombies. Helping you and saving your life because they #emph[wanted] to, not because they had to.
What more could she do, with powerful friends like these? What more could she conquer, what more could she obtain? She smiled at the thought of it. They would not obey her direct orders, but did it matter? Jace wasn't the only child compared to her. They were all children. None of them had her centuries of experience, none of them had tasted the power she had, either before or now, none of them were as ruthless or focused as she was.
She didn't know where the Raven Man was. There was no sign of him inside her head or out. The Chain Veil was subdued. Today had been an extremely painful lesson in how unreliable a weapon it was. #emph[But when I have my own Gatewatch to heal me after every use...] A thought for later. But she liked the sound of that. #emph[My very own Gatewatch.]
Gideon had been rambling on and on to Tamiyo. The moonfolk looked sick, and Liliana could hardly blame her. Gideon was nice enough to look at, but she had known zombies who were smarter. Gideon was babbling something about the Gatewatch, and how they were just starting up their do-gooding, and wouldn't Tamiyo like to be a do-gooder too? Tamiyo shook her head, excusing herself, her eyes wide and frightened. It figured that a mind mage would be too fragile. Like Jace, useless.
Jace was looking at her, that puppy-dog look in his eye still. #emph[At least make up your mind, child! ] She bit down on her irritation. She needed him and his puppy ways here.
"Gideon." Jace's voice was tentative, slight. They talked quietly amongst themselves, and Liliana made sure to show no hint of the smile she felt. #emph[Yes, cloak boy, bumble your hesitant way toward your sincere desire to help me.] It was clear Gideon was not happy about it, though. Though Liliana was not sure Gideon was ever happy about anything. #emph[You should at least delight in your youth and attractiveness while you still have it. Why are children so dumb?]
Eventually the eye-candy approached. There were more do-gooder words about do-gooding, but Liliana was too focused on the oath to pay close attention. She had thought extensively about the right approach to the oath. Too sincere, too sugary, and suspicions would be raised—suspicions that would make her next steps harder. But too cynical, too revealing, and those suspicions would instead be confirmed. She needed a delicate touch, a hint of cynicism but with her heart clearly in the right place.
When Gideon asked her for her oath, she was ready.
#figure(image("008_The Promised End/18.jpg", width: 100%), caption: [Oath of Liliana | Art by Wesley Burt], supplement: none, numbering: none)
"I see that together we're more powerful than we are alone. If that means I can do what needs to be done without relying on the Chain Veil, then I'll keep watch. Happy now?"
She said it with a touch of a smile, but just a touch. Besides, her pleasure was genuine. The best lies always contained enough truth to slide through.
She was now a member of the Gatewatch. Futures unfolded in her mind, full of promise and ambition.
#v(0.35em)
#line(length: 100%, stroke: rgb(90%, 90%, 90%))
#v(0.35em)
#strong[Jace]
Jace was exhausted. It had been the longest day of his life, and all he wanted was to sleep; a sleep free of dreams or any thinking whatsoever.
But there was someone he had to talk to first.
He found her in the far outskirts of Thraben, sitting in the ruins of a small church. There were few buildings in Thraben left standing, and this church had not been spared.
#figure(image("008_The Promised End/19.jpg", width: 100%), caption: [Forsaken Sanctuary | Art by Vincent Proce], supplement: none, numbering: none)
She just sat there, her legs crossed over one another, her eyes closed. Jace felt weird interrupting such a private moment. But he had to know.
"Tamiyo...? Are you...can I...?" Jace didn't know how to ask his question. Tamiyo opened her eyes, her face still full of the sickness and dread she had displayed ever since they finished casting the spell.
"What happened out there, Tamiyo? You were there, mind-linked with me, and then you...weren't. You vanished. What happened to you?"
Tamiyo sat there and began crying. Tears dropped from her eyes, one after another. #emph[Plip-plip] , as they hit on the stone rubble beneath.
Her words came out staggered, halting. "Nissa had fallen. The spell was in danger of collapsing. I didn't know what to do, how to help."
Jace was surprised. "So Nissa generated that power by herself? Impressive. I had thought it was you, with the second scroll."
Tamiyo looked at him, sadness and scorn both in her eyes. "No. You don't understand. It was me. With the second scroll. That's where the energy came from."
"But that's wonderful! You saved us! You saved all of Innistrad, all of...everything! Is it because it was one of the iron scrolls? One of the scrolls you didn't want to open?"
"Just shut up, Jace! Listen, just listen. It wasn't #emph[me] . It...she...took me over. Do you understand? It was not #emph[me] ! I was there, in my own body, helpless as she came in and took over. My eyes, my hands, my voice...she took them all over. They were not mine." Her cries became full sobs.
A voice came back to him, her voice as he had watched his chess pieces stab and kill each other. #emph[They are all my pieces, <NAME>. They always were. I just no longer want to play.]
"I...I am sorry, Tamiyo. I don't know..."
"But that wasn't the worst part. The scroll I opened. The second one. You were right. I shouldn't have opened it. A promise made long ago, which one day I'll have to answer for. But the spell she read...it wasn't the original spell. The scroll she used, it cast...a different spell."
Emeria. #emph[From somewhere a long stylus appeared, and she began writing in the scroll. ] Jace began shaking.
"It was changed. How did she do that? How could she do that?" Tamiyo's voice was near panic. "As this monster took over my body and read a scroll, a scroll that should have brought devastation to everything on this plane...instead it fueled a spell that trapped herself here. How did that happen, Jace? Why did it happen? What did we just do?"
"I...I don't know." Jace had no more words for her. None for himself.
Tamiyo took a deep breath. "I told you before, Jace. Sometimes our stories have to end. Yet here we are, each seeking to prolong our story, no matter the cost. But what if all stories are just #emph[her ] story, all in service of some awful destiny waiting to unfold?" Tamiyo looked up at the moon.
"Did we really win?" Tamiyo's voice was no longer fearful, but plaintive. Jace had no answer. Eventually she rose and flew into the dark sky. There were no parting words.
Jace sat for a longer time still. He looked again at the moon in its silver luminescence, the glyph still brightly inscribed on its surface, a testament to what the Gatewatch had achieved. In that moon's depths was the most powerful and destructive force any of them had ever encountered. The angel's words stabbed in his head, daggers from a destiny unrealized. #emph[This is all wrong. I am incomplete, unfulfilled, inchoate. There should be blossoms, not barren resentment. The soil was not receptive] . #emph[It is not my time. Not yet.]
His spine was cold. #emph[It is not my time. Not yet. ] He dropped his gaze from the moon, and went in search of a safe bed to find temporary oblivion.
|
|
https://github.com/soul667/typst | https://raw.githubusercontent.com/soul667/typst/main/PPT/MATLAB/touying/docs/docs/dynamic/complex.md | markdown | ---
sidebar_position: 2
---
# Complex Animations
Thanks to the syntax provided by [Polylux](https://polylux.dev/book/dynamic/syntax.html), we can also use `only`, `uncover`, and `alternatives` in Touying.
## Callback-Style Functions
To overcome the limitations of `styled` and `layout` mentioned earlier, Touying cleverly implements always-effective `only`, `uncover`, and `alternatives` using callback functions. Specifically, you need to introduce these three functions as follows:
```typst
#slide(repeat: 3, self => [
#let (uncover, only, alternatives) = utils.methods(self)
In subslide #self.subslide,
test #uncover("2-")[uncover] function,
and test #only("2-")[only] function,
#pause
and paused text.
])
```

Notice that we no longer pass a content block but instead pass a callback function with a `self` parameter. Later, we extract `only`, `uncover`, and `alternatives` functions from `self` using:
```typst
#let (uncover, only, alternatives) = utils.methods(self)
```
We then call these functions in subsequent steps.
Here's an interesting fact: `self.subslide` is an integer indicating the current subslide index, and in fact, the `only`, `uncover`, and `alternatives` functions rely on `self.subslide` to determine the current subslide index.
:::warning[Warning]
We manually specify the `repeat: 3` parameter, indicating the display of 3 subslides. We need to do this manually because Touying cannot infer how many subslides `only`, `uncover`, and `alternatives` should display.
:::
## only
The `only` function means its content "appears" only on selected subslides. If the content doesn't appear, it disappears completely and doesn't occupy any space. In other words, `#only(index, body)` is either `body` or `none`.
The index can be an int type or a str type like `"2-"` or `"2-3"`. For more usage, refer to [Polylux](https://polylux.dev/book/dynamic/complex.html).
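For example, here is a minimal sketch using the same callback-style setup as above (so `utils` and the `repeat` parameter work as described there):

```typst
#slide(repeat: 2, self => [
  #let (uncover, only, alternatives) = utils.methods(self)
  // On subslide 1 this content is absent and takes up no space;
  // on subslide 2 it appears and pushes the following line down.
  #only(2)[Shown only on subslide 2.]
  Always shown.
])
```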
## uncover
The `uncover` function means its content is "displayed" only on selected subslides; otherwise, it is covered by the `cover` function but still occupies its original space. In other words, `#uncover(index, body)` is either `body` or `cover(body)`.
The index can be an int type or a str type like `"2-"` or `"2-3"`. For more usage, refer to [Polylux](https://polylux.dev/book/dynamic/complex.html).
You may also have noticed that `#pause` actually uses the `cover` function, providing a more convenient syntax. In reality, their effects are almost identical.
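To contrast with `only`, here is a minimal sketch under the same assumptions; the covered content keeps its place in the layout:

```typst
#slide(repeat: 2, self => [
  #let (uncover, only, alternatives) = utils.methods(self)
  // Covered on subslide 1 but still reserving its space,
  // so nothing moves when it is revealed on subslide 2.
  #uncover(2)[Hidden on subslide 1, displayed on subslide 2.]
  Always shown.
])
```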
## alternatives
The `alternatives` function displays a series of different content in different subslides. For example:
```typst
#slide(repeat: 3, self => [
#let (uncover, only, alternatives) = utils.methods(self)
#alternatives[Ann][Bob][Christopher]
likes
#alternatives[chocolate][strawberry][vanilla]
ice cream.
])
```

As you can see, `alternatives` can automatically expand to the most suitable width and height, a capability that `only` and `uncover` lack. In fact, `alternatives` has other parameters, such as `start: 2`, `repeat-last: true`, and `position: center + horizon`. For more usage, refer to [Polylux](https://polylux.dev/book/dynamic/alternatives.html). |
|
https://github.com/jamesrswift/springer-spaniel | https://raw.githubusercontent.com/jamesrswift/springer-spaniel/main/tests/board-n-pieces/test.typ | typst | The Unlicense | #import "/tests/preamble.typ": *
#import springer-spaniel.board-n-pieces: *
#show: springer-spaniel.template(
title: [Towards Swifter Interstellar Mail Delivery],
authors: (
(
name: "<NAME>",
institute: "Primary Logistics Department",
address: "Delivery Institute, Berlin, Germany",
email: "<EMAIL>"
),
(
name: "<NAME>",
institute: "Communications Group",
address: "Space Institute, Florence, Italy",
email: "<EMAIL>"
),
(
name: "<NAME>",
institute: "Missing Letters Task Force",
address: "Mail Institute, Budapest, Hungary",
email: "<EMAIL>"
)
),
abstract: lorem(75),
)
#pagebreak()
= Board 'n pieces
This section tests the template styling of the supported `board-n-pieces` package, as shown below; #lorem(50)
#springer-spaniel.sidecaption(
caption-width: 50%,
caption-padding: (top: 1.25em),
figure(
caption: lorem(50),
board(
fen("3k4/7R/8/2PK4/8/8/8/6r1 b - - 0 1"),
highlighted-squares: "c7 c6 h6",
arrows: ("d8 c8", "d8 c7", "g1 g6", "h7 h6"),
)
)
)
|
https://github.com/SillyFreak/typst-packages-old | https://raw.githubusercontent.com/SillyFreak/typst-packages-old/main/scrutinize/src/question.typ | typst | MIT License | #let _label = <scrutinize-question>
#let _builtin_counter = counter
#let _metadata_to_dict(m) = (..m.value, location: m.location())
/// The question counter
///
/// Example:
///
/// ```typ
/// #show heading: it => [Question #question.counter.display()]
/// ```
///
/// -> counter
#let counter = _builtin_counter(_label)
/// Adds a question with its metadata, and renders it.
/// The questions can later be accessed using the other functions in this module.
///
/// - body (content): the content to be displayed for this question
/// - ..args (string): only named parameters: values to be added to the question's metadata
/// -> content
#let q(
body,
..args,
) = {
assert(args.pos().len() == 0)
[#metadata((body: body, ..args.named())) #_label]
body
}
/// Locates the most recently defined question;
/// within a @@q() call, that is the question _currently_ being defined.
///
/// This function is contextual and must appear within a ```typ context``` expression.
///
/// Example:
///
/// ```typ
/// #context [
/// #let points = question.current().points
/// This question is worth #points points.
///
/// I may award up to #(points + 1) points for great answers!
/// ]
/// ```
///
/// -> dictionary
#let current() = {
let q = query(selector(_label).before(here())).last()
_metadata_to_dict(q)
}
/// Locates all questions in the document, which can then be used to create grading keys etc.
///
/// This function is contextual and must appear within a ```typ context``` expression.
///
/// Example:
///
/// ```typ
/// #context [
/// #let qs = question.all()
/// There are #qs.len() questions.
///
/// The first question is worth #qs.first().points points!
/// ]
/// ```
///
/// -> array
#let all() = {
let qs = query(_label)
qs.map(_metadata_to_dict)
}
|
https://github.com/fenjalien/metro | https://raw.githubusercontent.com/fenjalien/metro/main/src/dependencies.typ | typst | Apache License 2.0 | #import "@preview/oxifmt:0.2.0": strfmt
#import "@preview/t4t:0.3.2": is |
https://github.com/nvarner/typst-lsp | https://raw.githubusercontent.com/nvarner/typst-lsp/master/.github/ISSUE_TEMPLATE/bug_report.md | markdown | MIT License | ---
name: "🐞 Bug Report"
about: Did you find a bug?
title: ''
assignees: ''
---
<!--
Hi there! Thank you for submitting a bug report!
Please fill out the template below; insufficient information or bad reproduction instructions will impair the
ability of others to help you.
-->
<!-- All the below information must be provided for others to understand and help with your issue. -->
- **Component**: <!-- Include the relevant component(s). Delete the rest. -->
- VSCode Extension
- LSP (used with other editor)
- **Extension version**: <!-- Replace with version of the Typst LSP extension -->
- **LSP version**: <!-- Replace with version of the typst-lsp binary -->
- **OS version and name**: <!-- Replace with version + name, e.g. Ubuntu 22.04 or macOS 12.6 -->
<!-- All the below steps should be completed before submitting your issue. -->
- I am on the [latest](https://github.com/nvarner/typst-lsp/tags) stable version of the extension/LSP.
- I have searched the [issues](https://github.com/nvarner/typst-lsp/issues) of this repo and believe that this is not a duplicate.
## Issue
<!--
Now feel free to write your issue, and please be as descriptive as possible! Make sure to include detailed
reproduction steps, but keep the examples minimal.
-->
### Logs
<!--
If applicable.
In VSCode/VSCodium, open the "Output" panel (ctrl+shift+U or cmd+shift+U) and select "Typst Language Server" from
the dropdown on the right. Then copy-paste the contents of the logs below.
-->
```
Include any logs if relevant, otherwise remove this code block.
```
<!-- Thanks again 🙌 ❤ -->
|
https://github.com/cspr-rad/actus-spec | https://raw.githubusercontent.com/cspr-rad/actus-spec/master/spec/main.typ | typst | #set document(title: "ACTUS Specification version 2")
#set heading(numbering: "1.1.")
// Helper functions for writing
#let todo(str) = box({
text("TODO: ", blue)
text(str)
})
#let citation_needed = todo("[CITATION NEEDED]")
#let dict_todo(str) = box({
text("TODO [PROBLEM IN DICTIONARY]: ", red)
text(str)
})
// Data import
#let dictionary = json("dictionary.json")
// TODO
// Maybe we find a way to do these declarations for each event
// automatically? Perhaps we could just be required to write
// event("monitoring") instead of #AD
#let terms = dictionary.at("terms")
#let make_term_label(identifier) = {
let term = terms.at(identifier)
link(
label("term_" + term.identifier),
[ #raw(term.acronym) (#text(term.name)) ],
)
}
#let IPAC = make_term_label("accruedInterest")
#let AMD = make_term_label("amortizationDate")
#let ARIPANXi = make_term_label("arrayCycleAnchorDateOfInterestPayment")
#let ARPRANXj = make_term_label("arrayCycleAnchorDateOfPrincipalRedemption")
#let ARRRANX = make_term_label("arrayCycleAnchorDateOfRateReset")
#let ARIPCLi = make_term_label("arrayCycleOfInterestPayment")
#let ARPRCLj = make_term_label("arrayCycleOfPrincipalRedemption")
#let ARRRCL = make_term_label("arrayCycleOfRateReset")
#let ARFIXVAR = make_term_label("arrayFixedVariable")
#let ARINCDEC = make_term_label("arrayIncreaseDecrease")
#let ARPRNXTj = make_term_label("arrayNextPrincipalRedemptionPayment")
#let ARRATE = make_term_label("arrayRate")
#let BDC = make_term_label("businessDayConvention")
#let CLDR = make_term_label("calendar")
#let IPCED = make_term_label("capitalizationEndDate")
#let MRCLH = make_term_label("clearingHouse")
#let CDD = make_term_label("contractDealDate")
#let CID = make_term_label("contractID")
#let PRF = make_term_label("contractPerformance")
#let CNTRL = make_term_label("contractRole")
#let CTS = make_term_label("contractStructure")
#let CT = make_term_label("contractType")
#let CPID = make_term_label("counterpartyID")
#let CECV = make_term_label("coverageOfCreditEnhancement")
#let CRID = make_term_label("creatorID")
#let CETC = make_term_label("creditEventTypeCovered")
#let CLA = make_term_label("creditLineAmount")
#let CUR = make_term_label("currency")
#let CUR2 = make_term_label("currency2")
#let DVANX = make_term_label("cycleAnchorDateOfDividend")
#let FEANX = make_term_label("cycleAnchorDateOfFee")
#let IPCBANX = make_term_label("cycleAnchorDateOfInterestCalculationBase")
#let IPANX = make_term_label("cycleAnchorDateOfInterestPayment")
#let MRANX = make_term_label("cycleAnchorDateOfMargining")
#let OPANX = make_term_label("cycleAnchorDateOfOptionality")
#let PRANX = make_term_label("cycleAnchorDateOfPrincipalRedemption")
#let RRANX = make_term_label("cycleAnchorDateOfRateReset")
#let SCANX = make_term_label("cycleAnchorDateOfScalingIndex")
#let DVCL = make_term_label("cycleOfDividend")
#let FECL = make_term_label("cycleOfFee")
#let IPCBCL = make_term_label("cycleOfInterestCalculationBase")
#let IPCL = make_term_label("cycleOfInterestPayment")
#let MRCL = make_term_label("cycleOfMargining")
#let OPCL = make_term_label("cycleOfOptionality")
#let PRCL = make_term_label("cycleOfPrincipalRedemption")
#let RRCL = make_term_label("cycleOfRateReset")
#let SCCL = make_term_label("cycleOfScalingIndex")
#let IPPNT = make_term_label("cyclePointOfInterestPayment")
#let RRPNT = make_term_label("cyclePointOfRateReset")
#let IPDC = make_term_label("dayCountConvention")
#let DQP = make_term_label("delinquencyPeriod")
#let DQR = make_term_label("delinquencyRate")
#let DS = make_term_label("deliverySettlement")
#let EOMC = make_term_label("endOfMonthConvention")
#let DVEX = make_term_label("exDividendDate")
#let XA = make_term_label("exerciseAmount")
#let XD = make_term_label("exerciseDate")
#let FEAC = make_term_label("feeAccrued")
#let FEB = make_term_label("feeBasis")
#let FER = make_term_label("feeRate")
#let RRFIX = make_term_label("fixingPeriod")
#let PFUT = make_term_label("futuresPrice")
#let GRP = make_term_label("gracePeriod")
#let CEGE = make_term_label("guaranteedExposure")
#let IED = make_term_label("initialExchangeDate")
#let MRIM = make_term_label("initialMargin")
#let IPCB = make_term_label("interestCalculationBase")
#let IPCBA = make_term_label("interestCalculationBaseAmount")
#let SCIP = make_term_label("interestScalingMultiplier")
#let RRLC = make_term_label("lifeCap")
#let RRLF = make_term_label("lifeFloor")
#let MRMML = make_term_label("maintenanceMarginLowerBound")
#let MRMMU = make_term_label("maintenanceMarginUpperBound")
#let MOC = make_term_label("marketObjectCode")
#let RRMO = make_term_label("marketObjectCodeOfRateReset")
#let SCMO = make_term_label("marketObjectCodeOfScalingIndex")
#let MVO = make_term_label("marketValueObserved")
#let MD = make_term_label("maturityDate")
#let MPFD = make_term_label("maximumPenaltyFreeDisbursement")
#let DVNP = make_term_label("nextDividendPaymentAmount")
#let PRNXT = make_term_label("nextPrincipalRedemptionPayment")
#let RRNXT = make_term_label("nextResetRate")
#let IPNR = make_term_label("nominalInterestRate")
#let IPNR2 = make_term_label("nominalInterestRate2")
#let NPD = make_term_label("nonPerformingDate")
#let NT = make_term_label("notionalPrincipal")
#let NT2 = make_term_label("notionalPrincipal2")
#let SCNT = make_term_label("notionalScalingMultiplier")
#let OPXED = make_term_label("optionExerciseEndDate")
#let OPXT = make_term_label("optionExerciseType")
#let OPS1 = make_term_label("optionStrike1")
#let OPS2 = make_term_label("optionStrike2")
#let OPTP = make_term_label("optionType")
#let PYRT = make_term_label("penaltyRate")
#let PYTP = make_term_label("penaltyType")
#let RRPC = make_term_label("periodCap")
#let RRPF = make_term_label("periodFloor")
#let PDIED = make_term_label("premiumDiscountAtIED")
#let PPEF = make_term_label("prepaymentEffect")
#let PPP = make_term_label("prepaymentPeriod")
#let PPRD = make_term_label("priceAtPurchaseDate")
#let PTD = make_term_label("priceAtTerminationDate")
#let PRD = make_term_label("purchaseDate")
#let QT = make_term_label("quantity")
#let RRMLT = make_term_label("rateMultiplier")
#let RRSP = make_term_label("rateSpread")
#let SCEF = make_term_label("scalingEffect")
#let SCCDD = make_term_label("scalingIndexAtContractDealDate")
#let SEN = make_term_label("seniority")
#let CURS = make_term_label("settlementCurrency")
#let STP = make_term_label("settlementPeriod")
#let SD = make_term_label("statusDate")
#let TD = make_term_label("terminationDate")
#let UT = make_term_label("unit")
#let MRVM = make_term_label("variationMargin")
#let XDN = make_term_label("xDayNotice")
#let eventsList = dictionary.at("event").at("eventType").at("allowedValues")
#let events = (:)
#for e in eventsList {
events.insert(e.identifier, e)
}
#let make_event_label(identifier) = [
#let event = events.at(identifier)
#locate(loc => [
#let l = label("event_" + event.identifier)
#let arr = query(l, loc)
#if (arr.len() > 1) {
panic(event.identifier)
}
#link(l, [ #raw(event.acronym) (#text(event.name)) ])
])
]
#let AD = make_event_label("monitoring")
#let IED = make_event_label("initialExchange")
#let FP = make_event_label("feePayment")
#let PR = make_event_label("principalRedemption")
#let PD = make_event_label("principalDrawing")
#let PRF = make_event_label("principalPaymentAmountFixing")
#let PY = make_event_label("penalytPayment")
#let PP = make_event_label("principalPrepayment")
#let IP = make_event_label("interestPayment")
#let IPCI = make_event_label("interestCapitalization")
#let CE = make_event_label("creditEvent")
#let RRF = make_event_label("rateResetFixed")
#let RR = make_event_label("rateResetVariable")
#let DV = make_event_label("dividendPayment")
#let PRD = make_event_label("purchase")
#let MR = make_event_label("marginCall")
#let TD = make_event_label("termination")
#let SC = make_event_label("scalingIndexFixing")
#let IPCB = make_event_label("interestCalculationBaseFixing")
#let MD = make_event_label("maturity")
#let XD = make_event_label("exercise")
#let STD = make_event_label("settlement")
#let state_variables = dictionary.at("states")
#let make_state_variable_label(identifier) = {
let state_variable = state_variables.at(identifier)
link(
label("state_variable_" + state_variable.identifier),
[ #raw(state_variable.acronym) (#text(state_variable.name)) ],
)
}
#let Ipac = make_state_variable_label("accruedInterest")
#let Ipac2 = make_state_variable_label("accruedInterest2")
#let Prf = make_state_variable_label("contractPerformance")
#let Xa = make_state_variable_label("exerciseAmount")
#let Xd = make_state_variable_label("exerciseDate")
#let Feac = make_state_variable_label("feeAccrued")
#let Icba = make_state_variable_label("interestCalculationBaseAmount")
#let Scip = make_state_variable_label("interestScalingMultiplier")
#let Md = make_state_variable_label("maturityDate")
#let Prnxt = make_state_variable_label("nextPrincipalRedemptionPayment")
#let Ipnr = make_state_variable_label("nominalInterestRate")
#let Ipnr2 = make_state_variable_label("nominalInterestRate2")
#let Npd = make_state_variable_label("nonPerformingDate")
#let Nt = make_state_variable_label("notionalPrincipal")
#let Nt2 = make_state_variable_label("notionalPrincipal2")
#let Scnt = make_state_variable_label("notionalScalingMultiplier")
#let Sd = make_state_variable_label("statusDate")
#let Td = make_state_variable_label("terminationDate")
#let contracts = dictionary.at("taxonomy")
#let make_contract_label(identifier) = {
let contract = contracts.at(identifier)
link(
label("contract_" + contract.identifier),
[ #raw(contract.acronym) (#text(contract.name)) ],
)
}
#let ANN = make_contract_label("annuity")
#let CLM = make_contract_label("callMoney")
#let CAPFL = make_contract_label("capFloor")
#let CSH = make_contract_label("cash")
#let CEC = make_contract_label("collateral")
#let COM = make_contract_label("commodity")
#let BNDCP = make_contract_label("convertibleNote")
#let CDSWP = make_contract_label("creditDefaultSwap")
#let CLNTE = make_contract_label("creditLinkedNote")
#let EXOTi = make_contract_label("eXOTi")
#let ANX = make_contract_label("exoticAnnuity")
#let LAX = make_contract_label("exoticLinearAmortizer")
#let NAX = make_contract_label("exoticNegativeAmortizer")
#let FXOUT = make_contract_label("foreignExchangeOutright")
#let FUTUR = make_contract_label("future")
#let CEG = make_contract_label("guarantee")
#let LAM = make_contract_label("linearAmortizer")
#let MAR = make_contract_label("margining")
#let NAM = make_contract_label("negativeAmortizer")
#let OPTNS = make_contract_label("option")
#let PBN = make_contract_label("perpetualBonds")
#let SWPPV = make_contract_label("plainVanillaSwap")
#let PAM = make_contract_label("principalAtMaturity")
#let REP = make_contract_label("repurchaseAgreement")
#let SCRCR = make_contract_label("securitizationCreditRisk")
#let SCRMR = make_contract_label("securitizationMarketRisk")
#let STK = make_contract_label("stock")
#let SWAPS = make_contract_label("swap")
#let TRSWP = make_contract_label("totalReturnSwap")
#let UMP = make_contract_label("undefinedMaturityProfile")
#let BNDWR = make_contract_label("warrant")
#let applicability_map = dictionary.at("applicability")
// Title page
#page(align(center + horizon, text(30pt, "ACTUS Specification version 2")))
#outline(title: auto, indent: auto)
#pagebreak()
This document represents a revision of the ACTUS specification. It focusses on
consistency, a lack of ambiguity, and helping with practical implementations.
The specification describes multiple contract types. For each contract type, it
first describes the specification, then examples, and finally test cases. These
test cases are machine readible such that each implementation can use them to
test itself. We also describe a test harness that any implementor can use to test
their implementation against arbitrary test cases.
= Prelude
== Note on terminology
#show "must": raw("MUST")
#show "must not": raw("MUST NOT")
#show "should": raw("SHOULD")
#show "should not": raw("SHOULD NOT")
#show "may": raw("MAY")
The key words must, must not, required, shall, shall not, should, should not,
recommended, may, and optional in this document are to be interpreted as
described in #link("https://datatracker.ietf.org/doc/html/rfc2119")[RFC 2119].
== MIME types
The ACTUS interchange format is suitable as an exchange format between
applications or systems. The format is defined in terms of the MIME content
types as specified in #link("https://datatracker.ietf.org/doc/html/rfc2046")[RFC 2046]:
`application/actus+json` or `application/actus+cbor`.
=== Note on encoding
When using the `application/actus+json` MIME type, the text encoding must be #link("https://datatracker.ietf.org/doc/html/rfc3629")[UTF-8 (see RFC 3629)] as
specified by #link("https://datatracker.ietf.org/doc/html/rfc8259")[RFC 8259].
The `application/actus+cbor` MIME type specifies its own non-textual encoding.
#let examples(filename) = {
heading("Examples:", level: 3)
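// Valid examples are read from test-data/<filename>.json; the values listed there must parse.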
let validfile = "test-data/" + filename + ".json"
let valid_examples = json(validfile)
if valid_examples.len() == 0 {
todo("Come up with valid examples and put them in")
raw(validfile)
"."
} else {
text("The following values must parse.")
list(..valid_examples.map(example => {
text(example.explanation + ":")
linebreak()
raw(json.encode(example.value, pretty: true))
}))
text("See ")
raw(validfile)
}
linebreak()
linebreak()
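// Invalid examples are read from test-data/<filename>-invalid.json; the values listed there must be rejected.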
let invalidfile = "test-data/" + filename + "-invalid.json"
let invalid_examples = json(invalidfile);
if invalid_examples.len() == 0 {
todo("Come up with invalid examples and put them in")
raw(invalidfile)
"."
} else {
text("The following values must not parse.")
list(..invalid_examples.map(example => {
text(example.explanation + ":")
linebreak()
raw(json.encode(example.value, pretty: false))
}))
text("See ")
raw(invalidfile)
}
}
= Data Types
While the original ACTUS specification describes contract schedules using
mathematical notation and, in particular, real numbers, we want to take a more
practical and exact approach. Describing numbers or even amounts of money as
real numbers is not helpful for implementors. Indeed, real numbers cannot be
represented in computers. Even representing fractional numbers has its issues
#citation_needed.
We will therefore specify the exact data types that implementations can use to
adhere to the specification. In particular, this specification aims to describe
exactly how implementations must behave when precision is lost due to the
reality of working in a finite amount of time and space.
== Enum <type_Enum>
When a value is of an enum type, the allowed values are specified.
Unrecognised values must be rejected.
== Integer <type_Integer>
An Integer is an integer number. The integer is represented as a JSON number.
Parsers should reject numbers with a decimal point and must reject numbers with
any digits past the decimal point.
Integers must have no range restriction.
#examples("integer")
== Natural <type_Natural>
A natural is a natural number. It is an integer (see @type_Integer) with the
additional restriction that it must not be negative (but may be zero).
Naturals must not have a range restriction.
#examples("natural")
== Rational <type_Rational>
A rational number is represented as a pair of integers (see @type_Integer).
In JSON, a rational is specified as a list of exactly two elements, each an
Integer (see @type_Integer). The first is the numerator and the second the
denominator. The denominator must not be zero and should not be negative.
Rational numbers should be specified in normalised form.
The integers that make up a rational must not have any range restrictions.
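For illustration only (the normative examples are the test-data files referenced below), one half would be written in normalised form as the pair

```json
[1, 2]
```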
#examples("rational")
== Positive Rational <type_PositiveRational>
A positive rational number is a rational number (see @type_Rational), except
that it is a pair of naturals (see @type_Natural) instead of a pair of integers.
The naturals that make up a positive rational must not have any range restrictions.
#examples("positive-rational")
== Real <type_Real>
#todo(
"Real numbers don't exist in computers. We must get rid of this section.",
)
== Day
A day is represented (internally) as an unsigned integral number of days since
1970-01-01.
#todo("Specify the minimum range for a data type that is used.")
A day is specified in the form #raw("YYYY-MM-DD").
#examples("day")
== Second of day <type_SecondOfDay>
A second of day is represented as an unsigned integral number of seconds since the
start of the day. This number must be between $0$ and $86400$ ($24 dot 60 dot 60$).
A second of day is specified in the form #raw("HH:MM:SS").
#examples("second-of-day")
== Local second <type_LocalSecond>
A datetime is a tuple of a day and a second of day.
A local second is specified in the form #raw("YYYY-MM-DD HH:MM:SS").
#examples("local-second")
== Timezone offset
A timezone offset is represented (internally) as an integral number of minutes
away from GMT. Note that a timezone offset is only valid within a timezone at a
given time.
A timezone offset is specified in the form #raw("[+-]HH:MM").
#examples("time-zone-offset")
== Timezone
A timezone, in theory, is a function from local datetimes without timezone
offset, to timezone offsets.
#todo(
"Refer to the timezone database to describe how this mapping works in practice",
)
== Timestamp <type_Timestamp>
A timestamp is a local second in the UTC timezone. See @type_LocalSecond
#todo(
"What does a timestamp mean in the actus spec? which granularity? which Timezone? leap seconds? Are we sure it's not a 'Day' instead?",
)
== Quantisation factor <type_QuantisationFactor>
For each currency, a minimal quantisation must be defined. For example, the
minimal quantisation of USD may be defined as 1 cent.
The quantisation factor is defined as the number of minimal quantisations that
represent one unit of the currency. For example, The quantisation factor of USD
is then 100, because 100 cents equals one USD.
A quantisation factor must be a positive integral number, must not have a range
greater than 32 bits ($[0 .. 2^32]$), and must not be zero.
Numbers specified with a decimal point should be rejected.
Non-integers must be rejected.
#examples("quantisation-factor")
== Currency <type_Currency>
A currency must specify its quantisation factor (#raw("factor")).
A currency is identified by its #raw("uid"), which is the key in the map of
currencies in the ACTUS file that it is defined in.
See @type_Currencies.
As such, these unique identifiers must be unique within an ACTUS file.
A currency may also specify a symbol.
If no symbol is defined, the #raw("uid") may be used as the symbol.
This specification does not define how amounts of money are presented to users.
Many different ways of doing so are in use already, so this specification allows
the currency object to contain info about this. In order for that to not break
any other implementation's parsing, unrecognised fields in the currency object
must be ignored.
#examples("currency")
== Currencies <type_Currencies>
A map of currency symbol to currency. See @type_Currency
#examples("currencies")
== Positive amount of money <type_Amount>
Positive amounts of money must be represented (internally) as a positive
integral number of minimal quantisations of the currency.
An amount of money must not be represented as a binary floating point number, a
decimal floating point number, or an arbitrary-precision rational number.
For example, one USD can be represented as `100` cents if the quantisation
factor is chosen to be 100.
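As a non-normative sketch (the helper name is illustrative and not part of this
specification; a factor of 100 is assumed, as for USD above):

```typst
// Convert a human-readable amount into its internal representation:
// an integral number of minimal quantisations of the currency.
#let to-quantisations(units, subunits, factor: 100) = units * factor + subunits

// 12 USD and 34 cents are stored internally as the integer 1234.
#let internal = to-quantisations(12, 34)
```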
A positive amount must have a range of at least 64 bits $[0 .. 2^64-1]$ and may
be specified using an unsigned 64-bit integer, e.g. `u64` in Rust or `Word64`
in Haskell.
That means an implementation must be able to roundtrip the json value `18446744073709551615`.
#examples("amount")
== Account of money <type_Account>
An account of money is like an amount of money (see @type_Amount) but without
the restriction that it must be positive.
An account must have a range of at least 65 bits $[-(2^64-1) .. (2^64-1)]$.
That means an implementation must be able to roundtrip the json value `18446744073709551615` and its negation.
#examples("account")
== Amount with currency <type_AccountWithCurrency>
An amount with currency is an amount (#raw("amount")) (see @type_Amount)
specified with a unique identifier of a currency (#raw("currency")) (see
@type_Currency). Any currency refered to in the #raw("currency") field of an
amount with currency must have been defined in the same file.
#examples("amount-with-currency")
== Account with currency <type_AccountWithCurrency>
An account with currency is an account (#raw("account")) (see @type_Account)
specified with a unique identifier of a currency (#raw("currency")) (see
@type_Currency). Any currency referred to in the #raw("currency") field of an
account with currency must have been defined in the same file.
#examples("account-with-currency")
== Contract
=== Contract Terms
A contract term is a value that configures a contract. They are detailed further
in the terms section (see @terms).
=== Contract <type_Contract>
A contract is an object where the keys are the names of terms and the values are
their corresponding values. They are detailed further in the contracts section
(see @contracts).
#todo("Make sure this is a valid contract")
#examples("contract")
=== Contracts <type_Contracts>
Contracts can be put together in a map from contract identifier to contract.
If a contract identifier is specified in the contract as well, then it must match its identifier.
#examples("contracts")
=== Events
An event is an object with the following fields:
#todo("figure out what needs to be in these?")
#examples("event")
=== Schedules
#todo("What are schedules?")
=== State variables
A state variable is a value that describes the state of a contract as it evolves. They are detailed further
in the state variables section (see @state_variables).
#todo("examples")
== ACTUS file <type_File>
An actus file defines a collection of currencies (#raw("currencies")) (see
@type_Currencies)) and ACTUS contracts (#raw("contracts")) (see @type_Contracts).
#examples("file")
== Tests
An actus test is a combination of an actus file and the expected results.
#todo("examples")
#todo(
"Define an exact format for the results of computing results of actus contracts",
)
== Specification tester and test harness
This standard specifies the workings of a test harness.
A test harness is an application that exposes the following test harness API to allow an implementation to be tested.
A specification tester is a program that can test an implementation of this standard by interacting with such a test harness.
To allow for maximally interoperable implementations and language-agnostic
testing, the test harness communicates over the stdin and stdout streams.
A specification tester may support only some types of tests.
A test harness must be able to handle at least every one of the following tests.
=== Test input
A test is a JSON object with these values:
- The `id` key to identify the test. This string value must be unique in a test harness session.
- The `type` key that describes the type of the test.
- The `arguments` key that configures the test further.
This key may be omitted if there are no arguments.
- The `value` key that contains the value upon which the test is to be executed.
For every test that it generates, the specification tester must know the expected
result, but must not make this available in the test.
#examples("general-test")
=== Test result
A test result is a JSON object with these values:
- The `id` key to identify which test the results corresponds to.
- The `result` key with the test-specific result.
This key may be absent if the test passed.
- An `error` key with an error that the test harness might have run into.
This key must be absent if the test passed.
The exact shape of the `result` value will depend on which type of test the result belongs to.
#examples("general-test-result")
=== Test session
A specification tester interacts with a test harness by connecting to the standard input and standard output streams of the test harness.
The specification tester sends newline-delimited test input JSON objects to the test harness on stdin.
It expects to read test newline-delimited test output JSON objects from the test harness on stdout.
A test harness may send test results in a different order than the tester sent the corresponding inputs.
(This allows for parallel testing without extra bookkeeping on the test harness' side.)
The result of a test session is either a 0 (success) or nonzero (failure) exit code.
The tester may report additional details on the standard error stream.
A specification tester may fail immediately upon receiving the first incorrect test result, or it may continue to gather more test failures first.
A specification tester must fail if the implementation failed to produce a result for every test.
This could be because the implementation crashed or because it "forgot" to perform a test.
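As a non-normative illustration, a session with two parsing tests might look as follows on the two streams; the test ids, the #raw("parsing") type name, and the values are made up for this sketch (note the out-of-order results):
```
stdin:  {"id": "a", "type": "parsing", "arguments": {"type": "Natural"}, "value": 5}
stdin:  {"id": "b", "type": "parsing", "arguments": {"type": "Natural"}, "value": -3}
stdout: {"id": "b", "result": {"parses": false, "error": "value is negative"}}
stdout: {"id": "a", "result": {"parses": true, "rendered": 5}}
```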
=== Parsing test
A parsing test aims to test whether an implementation can correctly parse (and render), or reject, a given value.
The test uses a `type` argument to describe the type that is to be parsed.
The `value` key describes the JSON value that is to be parsed.
A parsing test expects an object with the following keys as a result.
- `parses`: A boolean that describes whether the value parsed successfully; `false` if the value was rejected.
- `rendered`: The rendered version of the value that the implementation parsed.
This key must be absent if parsing failed, so that the tester can distinguish between a parse failure and successfully parsing `null`.
- `error`: A parse error. This key must be absent if parsing succeeded.
#examples("parsing-test")
#examples("parsing-test-result")
= Shared functions
== Annuity Amount Function
$ A(s,T,n,a,r) = (n + a) frac(product_(i = 1)^(m - 1) (1 + r Y (t_i, t_(i+1))), 1 + sum_(i = 1)^(m - 1) product_(j = i)^(m - 1) (1 + r Y (t_j, t_(j+1)))) $
= Terms <terms>
#for term in terms.values() [
=== #text(term.name) (#raw(term.acronym))
#label("term_" + term.identifier)
Group: #text(term.group)
#if (term.default != "") [
Default value:
#if (term.type == "Enum") {
let default_value = term.allowedValues.find(allowed_value => allowed_value.identifier == term.default)
if default_value == none {
dict_todo(
"Default value not found in allowed values (needs to be an acronym): " + term.default,
)
} else {
link(
label("term_" + term.identifier + "_allowed_value_" + term.default),
raw(default_value.acronym),
)
}
} else {
raw(term.default)
}
]
#text(
term.at("description", default: dict_todo("No description for this term")),
)
#todo(
"Link to types so that we're sure that every type is specified. In order to do so we will have to be able to strip [] for the link",
)
Type: #text(term.type)
#if (term.type == "Real") [
#todo(
"Real numbers don't exist in computers. We need to change this type. Most likely to an amount of money.",
)
]
#if (term.type == "Enum") [
with allowed values:
#for allowed_value in term.allowedValues [
- #raw(allowed_value.acronym) (#text(allowed_value.name)) #label(
"term_" + term.identifier + "_allowed_value_" + allowed_value.identifier,
)
#text(allowed_value.description)
]
]
// TODO make examples for all terms
#if (term.identifier == "contractID") [
#examples("terms/" + term.identifier)
]
]
= Events
#for event in events.values() [
=== #text(event.name) (#raw(event.acronym))
#label("event_" + event.identifier)
Sequence: #text(event.sequence)
#text(event.description)
]
= State Variables <state_variables>
#for state_variable in state_variables.values() [
=== #text(state_variable.name) (#raw(state_variable.acronym))
#label("state_variable_" + state_variable.identifier)
#text(state_variable.description)
Type: #link(label("type_" + state_variable.type), state_variable.type)
#if (state_variable.type == "Real") [
#todo(
"Real numbers don't exist in computers. We need to change this type. Most likely to an amount of money.",
)
]
#if (state_variable.type == "Enum") [
with allowed values:
#for allowed_value in state_variable.allowedValues [
- #raw(allowed_value.acronym) (#text(allowed_value.name)) #label(
"state_variable_" + state_variable.identifier + "_allowed_value_" + allowed_value.identifier,
)
#text(allowed_value.description)
]
]
]
= Contracts <contracts>
#for contract in contracts.values() [
=== #text(contract.name) (#raw(contract.acronym))
#label("contract_" + contract.identifier)
Status: #text(
contract.at("status", default: dict_todo("No status for this contract")),
)
Coverage: #text(
contract.at("coverage", default: dict_todo("No coverage for this contract")),
)
Family: #text(contract.family)
Class: #text(contract.class)
#text(contract.description)
#todo("Relevant terms")
#let applicability = applicability_map.at(contract.identifier, default: (:))
#if applicability.len() == 0 [
#dict_todo("No applicability rules for this contract.")
] else [
==== Applicable terms
#for term in terms.values() [
#let rule = applicability.at(term.identifier, default: "")
#if rule != "" [
#let rule_description = if rule == "x" [ Optional ] else { if rule == "NN" [ Required ] else { text(rule) } }
- #link(label("term_" + term.identifier), text[#(term.name) (#raw(term.identifier))]): #text(rule_description)
]
]
]
==== Allowed events
#if (contract.acronym == "PAM") [
- #IED
- #FP
- #MD
- #AD
- #PP
- #PY
- #PRD
- #TD
- #IP
- #IPCI
- #RR
- #RRF
- #SC
- #CE
] else [
#todo("Allowed events")
]
==== Required State Variables
#if (contract.acronym == "PAM") [
===== #Md
Initial value: value of the #MD contract attribute
===== #Nt
If the contract's #IED is greater than $t_0$, then the initial value is $0.0$.
Otherwise the initial value is $R(#raw("CNTRL")) dot #raw("NT")$.
===== #Ipnr
If the contract's #IED is greater than $t_0$, then the initial value is $0.0$.
Otherwise the initial value is the value of the #IPNR contract attribute.
] else [
#todo("Required state variables")
]
#todo("Schedule with comments")
#todo("Variable initialisation")
#todo("State Transition & Payoff function")
// TODO make examples for all contracts
#if (contract.identifier == "annuity") [
#examples("contracts/" + contract.identifier)
]
]
|
|
https://github.com/ludwig-austermann/typst-timetable | https://raw.githubusercontent.com/ludwig-austermann/typst-timetable/main/CHANGELOG.md | markdown | MIT License | # v0.2.0-beta.1
## code facing
- added color themes. you can specify one of the color themes as given in `colorthemes.pdf` and overwrite special colors in the data file
- added description table, showing further information about your courses
## data / dictionary
- added `duration` the new `defaults`, so that only `start` or `end` has to be specified for events
- added `hide` for courses
- added `description` table to specify description table
- added `course -> hide-discription` option, to hide course from description
# v0.2.0-beta
- entrypoint is now `timetable.typ`
- added a `typst.toml`
- removed page sizing from function
- added arguments `show-header`, `show-alternatives`
- *changed `show_time` argument to `show-time` for the data dictionary*
- added `priority` argument to data dictionary
- using now tablex
- because of tablex, rowspans are used if events are longer than duration
- added `tablex-args` to timetable arguments to modify style
- added `event-cell` and `time-cell` to timetable arguments, to modify the blocks. The default functions live in `lib\default-blocks.typ` |
https://github.com/polarkac/MTG-Stories | https://raw.githubusercontent.com/polarkac/MTG-Stories/master/stories/007_Theros.typ | typst | #import "@local/mtgset:0.1.0": conf
#show: doc => conf("Theros", doc)
#include "./007 - Theros/001_The Lost Confession.typ"
#include "./007 - Theros/002_Prince Anax, Part 1.typ"
#include "./007 - Theros/003_Prince Anax, Part 2.typ"
#include "./007 - Theros/004_Nymphs of Theros.typ"
#include "./007 - Theros/005_The Consequences of Attraction.typ"
#include "./007 - Theros/006_Tragedy.typ"
#include "./007 - Theros/007_I Iroan.typ"
#include "./007 - Theros/008_The Sea God's Labyrinth, Part 1.typ"
#include "./007 - Theros/009_The Sea God's Labyrinth, Part 2.typ"
#include "./007 - Theros/010_Building Toward a Dream, Part 1.typ"
#include "./007 - Theros/011_Building Toward a Dream, Part 2.typ"
#include "./007 - Theros/012_Asphodel.typ"
|
|
https://github.com/maxgraw/bachelor | https://raw.githubusercontent.com/maxgraw/bachelor/main/apps/document/src/5-implementation/0-index.typ | typst | Now that the requirements for the system have been established, this chapter describes the implementation of the system. It addresses the technical aspects of the realization and explains the chosen technologies and frameworks. Throughout the chapter, various code examples are used to illustrate the implementation. These are code excerpts that show only the most important functions and variables in order to improve readability. The complete functions can be found in @appendix-Quellcode as well as in the project's GitHub repository @github-project.
== Architektur und Technologien des Projektes
#include "architecture-project.typ"
== Architektur der Anwendung
#include "architecture-software.typ"
== Darstellung
#include "rendering.typ"
== Import
#include "import.typ"
== Interface
#include "interface.typ"
== Interaktion
#include "interaction.typ"
== Hosting
#include "hosting.typ"
== Externe Schnittstelle
#include "external.typ"
== Vergleich mit dem Konzept
#include "comparison.typ" |
|
https://github.com/lrmrct/CADMO-Template | https://raw.githubusercontent.com/lrmrct/CADMO-Template/main/template/cover.typ | typst | #let cover(
title: none,
thesis_type: [],
authors: (),
date: [],
advisors: (),
department: [],
doc
) = {
page(
margin: (x: 8em, bottom: 13em, top: 10.5em),
numbering: none
)[
#set text(14pt, font: "CMU Sans Serif", weight: "medium")
#set align(left)
#image("../imgs/ETHlogo.svg", width: 10em)
#set align(center + horizon)
#v(-6em)
#text(24pt, font: "CMU Sans Serif", weight: "bold", title)
#v(3em)
#thesis_type
#let count = authors.len()
#let ncols = calc.min(count, 3)
#grid(
columns: (1fr,) * ncols,
row-gutter: 24pt,
..authors.map(author => [
#author.name \
#link("mailto:" + author.email)
]),
)
#date
#set align(right + bottom)
#text(11pt, [Advisors: #advisors])
#text(11pt, department)
]
doc
} |
|
https://github.com/gongke6642/tuling | https://raw.githubusercontent.com/gongke6642/tuling/main/Text/text1.typ | typst | #set text(
size:10pt,
)
#set page(
paper:"a5",
margin:(x:1.8cm,y:1.5cm),
)
#set par(
justify: true,
leading: 0.52em,
)
= Text
Customize the appearance and layout of text in a variety of ways.
This function is used frequently, both in set rules and directly. While the set rule is often the simpler choice, calling the text function directly is useful when you want to pass text as an argument to another function.
= Example
#image("24.png")
= Parameters
#image("25.png")
= Font
A font family name or priority list of font family names.
When processing text, Typst tries all specified font families in order until it finds a font with the necessary glyphs. In the example below, the font `Inria Serif` is preferred, but since it does not contain Arabic glyphs, `Noto Sans Arabic` is used for the Arabic text instead.
The collection of available fonts differs by platform:
In the web app, you can see the list of available fonts by clicking on the "Ag" button. You can provide additional fonts by uploading `.ttf` or `.otf` files into your project; they will be discovered automatically.
Locally, Typst uses your installed system fonts. In addition, you can use the `--font-path` argument or the `TYPST_FONT_PATHS` environment variable to add directories that should be scanned for fonts.
Default: "linux libertine"
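For instance, the fallback behavior described above might be exercised like this (an illustrative sketch):
```typ
#set text(font: ("Inria Serif", "Noto Sans Arabic"))
Latin text uses Inria Serif, while مرحبا falls back to Noto Sans Arabic.
```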
= Style
The desired font style.
When an italic style is requested and only an oblique one is available, it is used. Similarly, the oblique style can be used in place of the italic style. When neither an italic nor an oblique style is available, Typst selects the normal style. Since most fonts only provide either an italic or an oblique style, the difference between the two is rarely observable.
If you want to emphasize your text, you should use the emph function instead. That makes it easy to adjust the style later if you change your mind about how to represent emphasis.
#image("26.png")
Default: "normal"
= Weight
The desired thickness of the font's glyphs. Accepts an integer between 100 and 900 or one of the predefined weight names. When the desired weight is not available, Typst selects the font from the family that is closest in weight.
If you want to strongly emphasize your text, you should use the strong function instead. That makes it easy to adjust the style later if you change your mind about how to represent strong emphasis.
#image("27.png")
Default: "regular"
= Stretch
The desired width of the glyphs. Accepts a ratio between 50% and 200%. When the desired width is not available, Typst selects the font from the family that is closest in stretch. This will only stretch the text if a condensed or expanded version of the font is available.
If you want to adjust the amount of space between characters instead of stretching the glyphs itself, use the tracking property instead.
Default: 100%
= Size
The desired font size. This value forms the basis of the em unit: 1em is equivalent to the font size.
You can also give the font size itself in em units. Then, it is relative to the previously specified font size.
Default: 11pt
= Fill
The fill color of the glyphs.
Default: luma(0%)
= Tracking
The amount of space between characters.
Default: 0pt
= Spacing
The amount of space between words.
Can be given as an absolute length, but also relative to the width of the space character in the font.
If you want to adjust the amount of space between characters rather than words, use the tracking property instead.
Default: 100%
= CJK-Latin Spacing
Whether to automatically insert spacing between CJK and Latin characters.
Default: auto
= Baseline
The amount by which the text baseline is shifted.
Default: 0pt
= Overhang
Whether certain glyphs can hang over into the margin. This can make justification look visually more pleasing.
Default: true
= Top Edge
The top end of the conceptual frame around the text used for layout and positioning. This affects the size of containers that hold text.
#image("28.png")
Default: "cap-height"
= Bottom Edge
The bottom end of the conceptual frame around the text used for layout and positioning. This affects the size of containers that hold text.
#image("29.png")
Default: "baseline"
= Language
An ISO 639-1/2/3 language code.
Setting the correct language affects various parts of Typst:
- The text processing pipeline can make more informed choices.
- Hyphenation will use the correct patterns for the language.
- Smart quotes turn into the correct quotes for the language.
- And all other things which are language-aware.
Default: "en"
= Region
An ISO 3166-1 alpha-2 region code.
This lets the text processing pipeline make more informed choices.
Default: none
= Script
The OpenType writing script.
The combination of lang and script determines how font features, such as glyph substitution, are implemented. Frequently, the value is a modified (all-lowercase) ISO 15924 script identifier, and the math writing script is used for features appropriate for mathematical symbols.
When set to auto (the default and recommended), an appropriate script is chosen for each block of characters sharing a common Unicode script property.
Default: auto
= Direction
The dominant direction for text and inline objects. Possible values are:
- auto: Automatically infer the direction from the lang property.
- ltr: Lay out text from left to right.
- rtl: Lay out text from right to left.
When writing in right-to-left scripts like Arabic or Hebrew, you should set the text language or direction. While individual runs of text are automatically laid out in the correct direction, setting the dominant direction gives the bidirectional reordering algorithm the necessary information to correctly place punctuation and inline objects. Furthermore, setting the direction affects the alignment values start and end, which are equivalent to left and right in ltr text and the other way around in rtl text.
If setting the direction to rtl runs into bugs or in some way produces poor output, please get in touch with us through the contact form or our Discord server!
Default: auto
= Hyphenate
Whether to hyphenate text to improve line breaking. When set to auto, text will only be hyphenated if justification is enabled.
Setting the text language ensures that the correct hyphenation patterns are used.
Default: auto
= Kerning
Whether to apply kerning.
When enabled, specific letter pairings move closer together or further apart for a more visually pleasing result. The example below demonstrates how decreasing the gap between the "T" and the "o" results in a more natural look. Setting this to false disables kerning by turning off the OpenType kern font feature.
Default: true
= Alternates
Whether to apply stylistic alternates.
Sometimes fonts contain alternative glyphs for the same codepoint. Setting this to true switches to these by enabling the OpenType salt font feature.
Default: false
= Stylistic Set
Which stylistic set to apply. Font designers can categorize alternative glyph forms into stylistic sets. As this value is highly font-specific, you need to consult your font to know which sets are available. When set to an integer between 1 and 20, enables the corresponding OpenType font feature ss01, ..., ss20.
Default: none
= Ligatures
Whether standard ligatures are active.
Certain letter combinations like "fi" are often displayed as a single merged glyph called a ligature. Setting this to false disables these ligatures by turning off the OpenType liga and clig font features.
Default: true
= Discretionary Ligatures
Whether ligatures that should be used sparingly are active. Setting this to true enables them via the OpenType dlig font feature.
Default: false
= Historical Ligatures
Whether historical ligatures are active. Setting this to true enables them via the OpenType hlig font feature.
Default: false
= Number Type
Which kind of numbers / figures to select. When set to auto, the default numbers of the font are used.
- "lining"
Numbers that fit well with capital text (the OpenType lnum font feature).
- "old-style"
Numbers that fit well into a flow of upper- and lowercase text (the OpenType onum font feature).
Default: auto
= Number Width
The width of numbers / figures. When set to auto, the default numbers of the font are used.
- "proportional"
Numbers with glyph-specific widths (the OpenType pnum font feature).
- "tabular"
Numbers of equal width (the OpenType tnum font feature).
Default: auto
= Slashed Zero
Whether to use slashed zero glyphs. Setting this to true enables them via the OpenType zero font feature.
Default: false
= Fractions
Whether to convert numbers into fractions. Setting this to true enables conversion via the OpenType frac font feature.
It is not advisable to enable this property globally, as it will mess with all appearances of numbers after a slash (e.g., in URLs). Instead, enable it locally when you want a fraction.
Default: false
= Features
Raw OpenType features to apply.
- If given an array of strings, sets the features identified by the strings to 1.
- If given a dictionary mapping to numbers, sets the features identified by the keys to the values.
Default: (:)
= Body
Content in which all text is styled according to the other arguments.
= Text
The text.
|
https://github.com/goshakowska/Typstdiff | https://raw.githubusercontent.com/goshakowska/Typstdiff/main/tests/test_working_types/quoted/quoted_deleted.typ | typst | #quote[I know that I know nothing.] |
|
https://github.com/TypstApp-team/typst | https://raw.githubusercontent.com/TypstApp-team/typst/master/tests/typ/visualize/gradient-radial.typ | typst | Apache License 2.0 | // Test the different radial gradient features.
---
#square(
size: 100pt,
fill: gradient.radial(..color.map.rainbow, space: color.hsl),
)
---
#grid(
columns: 2,
square(
size: 50pt,
fill: gradient.radial(..color.map.rainbow, space: color.hsl, center: (0%, 0%)),
),
square(
size: 50pt,
fill: gradient.radial(..color.map.rainbow, space: color.hsl, center: (0%, 100%)),
),
square(
size: 50pt,
fill: gradient.radial(..color.map.rainbow, space: color.hsl, center: (100%, 0%)),
),
square(
size: 50pt,
fill: gradient.radial(..color.map.rainbow, space: color.hsl, center: (100%, 100%)),
),
)
---
#square(
size: 50pt,
fill: gradient.radial(..color.map.rainbow, space: color.hsl, radius: 10%),
)
#square(
size: 50pt,
fill: gradient.radial(..color.map.rainbow, space: color.hsl, radius: 72%),
)
---
#circle(
radius: 25pt,
fill: gradient.radial(white, rgb("#8fbc8f"), focal-center: (35%, 35%), focal-radius: 5%),
)
#circle(
radius: 25pt,
fill: gradient.radial(white, rgb("#8fbc8f"), focal-center: (75%, 35%), focal-radius: 5%),
)
|
https://github.com/LDemetrios/Typst4k | https://raw.githubusercontent.com/LDemetrios/Typst4k/master/src/test/resources/suite/math/syntax.typ | typst | // Test math syntax.
--- math-unicode ---
// Test Unicode math.
$ ∑_(i=0)^ℕ a ∘ b = \u{2211}_(i=0)^NN a compose b $
--- math-shorthands ---
// Test a few shorthands.
$ underline(f' : NN -> RR) \
n |-> cases(
[|1|] &"if" n >>> 10,
2 * 3 &"if" n != 5,
1 - 0 thick &...,
) $
--- math-common-symbols ---
// Test common symbols.
$ dot \ dots \ ast \ tilde \ star $
--- issue-2044-invalid-parsed-ident ---
// In this bug, the dot at the end was causing the right parenthesis to be
// parsed as an identifier instead of the closing right parenthesis.
$floor(phi.alt.)$
$floor(phi.alt. )$
--- math-unclosed ---
// Error: 1-2 unclosed delimiter
$a
|
|
https://github.com/jneug/typst-codetastic | https://raw.githubusercontent.com/jneug/typst-codetastic/main/qrutil.typ | typst | MIT License |
// TODO: This probably should be improved / optimized.
#import "bits.typ"
#import "qrluts.typ"
// aliases
#let pow2(n) = calc.pow(n, 2)
#let mod = calc.rem
#let mod2(x) = calc.rem(x, 2)
#let mod3(x) = calc.rem(x, 3)
#let mod255(x) = calc.rem(x, 255)
#let mod256(x) = calc.rem(x, 256)
#let mod285(x) = calc.rem(x, 285)
/// >>> qrutil.check-version(1)
/// >>> qrutil.check-version(7)
/// >>> qrutil.check-version(30)
/// >>> qrutil.check-version(40)
/// >>> not qrutil.check-version(0)
/// >>> not qrutil.check-version(41)
#let check-version(version) = version >= 1 and version <= 40
/// >>> qrutil.check-ecl("l")
/// >>> qrutil.check-ecl("h")
/// >>> qrutil.check-ecl("m")
/// >>> qrutil.check-ecl("q")
/// >>> not qrutil.check-ecl("a")
/// >>> not qrutil.check-ecl("Q")
#let check-ecl(ecl) = ecl in ("l", "m", "q", "h")
/// >>> qrutil.size(1) == 21
/// >>> qrutil.size(2) == 25
/// >>> qrutil.size(7) == 45
/// >>> qrutil.size(33) == 149
/// >>> qrutil.size(40) == 177
#let size(version) = { return 21 + (version - 1)*4 }
/// >>> qrutil.total-modules(1) == 208
/// >>> qrutil.total-modules(7) == 1568
/// >>> qrutil.total-modules(16) == 5867
/// >>> qrutil.total-modules(40) == 29648
#let total-modules(version) = {
if version == 1 {
return 21 * 21 - 3 * 8 * 8 - 2 * 15 - 1 - 2 * 5;
}
let n = calc.floor(version / 7) + 2;
return 16 * pow2(version + 4) - pow2(5 * n - 1) - if version > 6 {172} else {136}
}
/// >>> qrutil.total-codewords(1) == 26
/// >>> qrutil.total-codewords(7) == 196
/// >>> qrutil.total-codewords(16) == 733
/// >>> qrutil.total-codewords(40) == 3706
#let total-codewords(version) = {
return calc.floor(total-modules(version) / 8);
}
/// >>> qrutil.data-codewords(1, "l") == 19
/// >>> qrutil.data-codewords(6, "m") == 108
/// >>> qrutil.data-codewords(7, "l") == 156
/// >>> qrutil.data-codewords(7, "m") == 124
/// >>> qrutil.data-codewords(7, "q") == 88
/// >>> qrutil.data-codewords(7, "h") == 66
/// >>> qrutil.data-codewords(18, "q") == 397
/// >>> qrutil.data-codewords(19, "h") == 341
#let data-codewords(version, ecl) = {
let (ecw-count, blocks) = qrluts.blocks.at(version - 1).at(ecl)
return total-codewords(version) - blocks * ecw-count;
}
// =================================
// Code capacity lookup table
// =================================
/// >>> qrutil.capacities(1, "l") == (41, 25, 17, 10)
/// >>> qrutil.capacities(7, "l") == (370, 224, 154, 95)
/// >>> qrutil.capacities(7, "m") == (293, 178, 122, 75)
/// >>> qrutil.capacities(7, "q") == (207, 125, 86, 53)
/// >>> qrutil.capacities(7, "h") == (154, 93, 64, 39)
#let capacities(version, ecl) = {
let key = str(version) + "-" + ecl
return qrluts.capacities.at(key)
}
/// >>> qrutil.best-version(36, "l", 0) == 1
/// >>> qrutil.best-version(42, "l", 0) == 2
/// >>> qrutil.best-version(120, "l", 0) == 3
/// >>> qrutil.best-version(130, "l", 0) == 4
/// >>> qrutil.best-version(7089, "l", 0) == 40
/// >>> qrutil.best-version(7090, "l", 0) == none
/// >>> qrutil.best-version(755, "q", 2) == 27
/// >>> qrutil.best-version(3908, "m", 0) == 33
/// >>> qrutil.best-version(3910, "m", 0) == 34
#let best-version(char-count, ecl, mode) = {
for ver in range(40) {
let cap = capacities(ver + 1, ecl)
if cap.at(mode) >= char-count {
return ver + 1
}
}
return none
}
// =================================
// Encoding
// =================================
/// >>> qrutil.best-mode("0123") == 0
/// >>> qrutil.best-mode("0000") == 0
/// >>> qrutil.best-mode("1") == 0
/// >>> qrutil.best-mode("A") == 1
/// >>> qrutil.best-mode("ABCD") == 1
/// >>> qrutil.best-mode("ABCD:XYZ$") == 1
/// >>> qrutil.best-mode("a") == 2
/// >>> qrutil.best-mode("abcxyz") == 2
/// >>> qrutil.best-mode("ABCD:XYZ!") == 2
/// >>> qrutil.best-mode("€") == none
#let best-mode(data) = {
let nums = regex(`^\d*$`.text)
let alphnum = regex(`^[\dA-Z $%*+\-./:]*$`.text)
let byte = regex(`^[\x00-\xff]*$`.text)
if data.match(nums) != none {
return 0
}
if data.match(alphnum) != none {
return 1
}
if data.match(byte) != none {
return 2
}
return none
}
/// >>> qrutil.mode-bits(0) == (false, false, false, true)
/// >>> qrutil.mode-bits(1) == (false, false, true, false)
/// >>> qrutil.mode-bits(2) == (false, true, false, false)
/// >>> qrutil.mode-bits(3) == (true, false, false, false)
#let mode-bits(mode) = {
return (
(false, false, false, true),
(false, false, true, false),
(false, true, false, false),
(true, false, false, false)
).at(mode)
}
/// >>> qrutil.char-count-bits(10, 1, 0) == bits.from-str("0000001010")
/// >>> qrutil.char-count-bits(10, 1, 1) == bits.from-str("000001010")
/// >>> qrutil.char-count-bits(10, 1, 2) == bits.from-str("00001010")
/// >>> qrutil.char-count-bits(10, 9, 2) == bits.from-str("00001010")
/// >>> qrutil.char-count-bits(10, 15, 2) == bits.from-str("0000000000001010")
/// >>> qrutil.char-count-bits(23, 15, 0) == bits.from-str("000000010111")
/// >>> qrutil.char-count-bits(23, 33, 0) == bits.from-str("00000000010111")
/// >>> qrutil.char-count-bits(23, 33, 1) == bits.from-str("0000000010111")
#let char-count-bits(char-count, version, mode) = {
let len
if version <= 9 {
len = (10, 9, 8, 8).at(mode)
} else if version <= 26 {
len = (12, 11, 16, 10).at(mode)
} else {
len = (14, 13, 16, 12).at(mode)
}
return bits.pad(
bits.from-int(char-count), len)
}
/// >>> qrutil.version-bits(1) == ()
/// >>> qrutil.version-bits(7) == bits.from-str("000111110010010100")
/// >>> qrutil.version-bits(7) == qrutil.calc-version-bits(7)
/// >>> qrutil.version-bits(27) == bits.from-str("011011000010001110")
/// >>> qrutil.version-bits(27) == qrutil.calc-version-bits(27)
/// >>> qrutil.version-bits(33) == bits.from-str("100001011011110000")
/// >>> qrutil.version-bits(33) == qrutil.calc-version-bits(33)
/// >>> qrutil.version-bits(40) == bits.from-str("101000110001101001")
/// >>> qrutil.version-bits(40) == qrutil.calc-version-bits(40)
#let version-bits(version) = {
if version >= 7 {
return bits.from-str(qrluts.version-formats.at(version - 7))
} else {
return ()
}
}
// Manual calculation of version format code
#let calc-version-bits(version) = {
import "ecc.typ": bch
if version >= 7 {
let ver-fmt = bits.pad(
bits.from-int(version), 6)
return ver-fmt + bch(ver-fmt,
generator:"1111100100101")
} else {
return ()
}
}
/// >>> qrutil.format-bits("l", 0) == bits.from-str("111011111000100")
/// >>> qrutil.format-bits("l", 0) == qrutil.calc-format-bits("l", 0)
/// >>> qrutil.format-bits("l", 5) == bits.from-str("110001100011000")
/// >>> qrutil.format-bits("l", 5) == qrutil.calc-format-bits("l", 5)
/// >>> qrutil.format-bits("q", 3) == bits.from-str("011101000000110")
/// >>> qrutil.format-bits("q", 3) == qrutil.calc-format-bits("q", 3)
/// >>> qrutil.format-bits("q", 7) == bits.from-str("010101111101101")
/// >>> qrutil.format-bits("q", 7) == qrutil.calc-format-bits("q", 7)
/// >>> qrutil.format-bits("h", 1) == bits.from-str("001001110111110")
/// >>> qrutil.format-bits("h", 1) == qrutil.calc-format-bits("h", 1)
/// >>> qrutil.format-bits("h", 6) == bits.from-str("000110100001100")
/// >>> qrutil.format-bits("h", 6) == qrutil.calc-format-bits("h", 6)
#let format-bits(ecl, mask) = {
return bits.from-str(qrluts.format-information.at(ecl).at(mask))
}
// Manual calculation of format code
#let calc-format-bits(ecl, mask) = {
import "ecc.typ": bch
let mask-bits = bits.pad(bits.from-int(mask), 3)
ecl = (
l: (false, true),
m: (false, false),
q: (true, true),
h: (true, false)
).at(ecl)
let fmt = ecl + mask-bits
let crc = bch(fmt)
return bits.xor(fmt + crc,
bits.from-str("101010000010010"))
}
/// Encodes numeric data (only characters "0" to "9") into byte codewords.
///
/// >>> qrutil.encode-numeric("0", ()) == (bytes(()), (false, false, false, false))
/// >>> qrutil.encode-numeric("1", (false, false, true, false)) == (bytes((33,)), ())
/// >>> qrutil.encode-numeric("14", ()) == (bytes(()), (false, false, false, true, true, true, false))
/// >>> qrutil.encode-numeric("144", ()) == (bytes((36,)), (false, false))
/// >>> qrutil.encode-numeric("144", (true, false, false, false)) == (bytes((130,)), (false, true, false, false, false, false))
#let encode-numeric(nums, buffer) = {
let code = ()
let len = nums.len()
for i in range(len, step:3) {
let n = int(nums.slice(i, calc.min(len, i + 3)))
if n < 10 {
buffer += bits.from-int(n, pad:4)
} else if n < 100 {
buffer += bits.from-int(n, pad:7)
} else {
buffer += bits.from-int(n, pad:10)
}
while buffer.len() >= 8 {
code.push( bits.to-int(buffer.slice(0, 8)) )
buffer = buffer.slice(8)
}
}
return (bytes(code), buffer)
}
#let alphnum-char-codes = (
"0": 0, "1": 1, "2": 2, "3": 3, "4": 4,
"5": 5, "6": 6, "7": 7, "8": 8, "9": 9,
"A": 10, "B": 11, "C": 12, "D": 13, "E": 14,
"F": 15, "G": 16, "H": 17, "I": 18, "J": 19,
"K": 20, "L": 21, "M": 22, "N": 23, "O": 24,
"P": 25, "Q": 26, "R": 27, "S": 28, "T": 29,
"U": 30, "V": 31, "W": 32, "X": 33, "Y": 34,
"Z": 35, " ": 36, "$": 37, "%": 38, "*": 39,
"+": 40, "-": 41, ".": 42, "/": 43, ":": 44,
)
#let encode-alphanumeric(alphnum, buffer) = {
let code = ()
let len = alphnum.len()
for i in range(len, step:2) {
let char = alphnum.at(i)
let char-code = alphnum-char-codes.at(char, default:0)
if i + 1 < alphnum.len() {
char = alphnum.at(i + 1)
char-code = char-code*45 + alphnum-char-codes.at(char, default:0)
buffer += bits.pad(bits.from-int(char-code), 11)
} else {
buffer += bits.pad(bits.from-int(char-code), 6)
}
while buffer.len() >= 8 {
code.push( bits.to-int(buffer.slice(0, 8)) )
buffer = buffer.slice(8)
}
}
return (bytes(code), buffer)
}
#let ASCII = " !\"#$%&'()*+,-./0123456789:;<=>?@ABCDEFGHIJKLMNOPQRSTUVWXYZ[\\]^_`abcdefghijklmnopqrstuvwxyz{|}~"
/// >>> qrutil.encode-byte("https://www.qrcode.com/", bits.from-str("010000010111")) == (bytes((65, 118, 135, 71, 71, 7, 51, 162, 242, 247, 119, 119, 114, 231, 23, 38, 54, 246, 70, 82, 230, 54, 246, 210)), (true, true, true, true))
#let encode-byte(chars, buffer) = {
let code = ()
let len = chars.len()
for i in range(len) {
buffer += bits.from-int( 32 + ASCII.position(chars.at(i)), pad:8 )
while buffer.len() >= 8 {
code.push( bits.to-int(buffer.slice(0, 8)) )
buffer = buffer.slice(8)
}
}
return (bytes(code), buffer)
}
/// Encodes `data` into byte codewords.
///
/// >>> qrutil.encode("HELLO WORLD", 1, "q", mode:1) == bytes((32, 91, 11, 120, 209, 114, 220, 77, 67, 64, 236, 17, 236))
/// >>> qrutil.encode("HELLO WORLD", 7, "l", mode:1) == bytes((32, 91, 11, 120, 209, 114, 220, 77, 67, 64) + (236, 17) * 73)
/// >>> qrutil.encode("HELLO WORLD", 7, "h", mode:1) == bytes((32, 91, 11, 120, 209, 114, 220, 77, 67, 64) + (236, 17) * 28)
/// >>> qrutil.encode("https://www.qrcode.com/", 2, "m", mode:2) == bytes((65, 118, 135, 71, 71, 7, 51, 162, 242, 247, 119, 119, 114, 231, 23, 38, 54, 246, 70, 82, 230, 54, 246, 210, 240, 236, 17, 236))
/// >>> qrutil.encode("https://www.qrcode.com/", 7, "m", mode:2) == bytes((65, 118, 135, 71, 71, 7, 51, 162, 242, 247, 119, 119, 114, 231, 23, 38, 54, 246, 70, 82, 230, 54, 246, 210, 240, 236) + (17, 236)*49)
#let encode(data, version, ecl, mode:auto) = {
if mode == auto {
mode = best-mode(data)
}
let (codewords, buffer) = ((
encode-numeric,
encode-alphanumeric,
encode-byte
).at(mode))(data, mode-bits(mode) + char-count-bits(data.len(), version, mode))
// required length of data
let cw-count = data-codewords(version, ecl)
let cw-bits = cw-count * 8
// Add terminator
let diff-cw = cw-count - codewords.len()
let diff-bits = (diff-cw * 8 - buffer.len())
buffer += (false,) * calc.min(diff-bits, 4)
// Pad with zeros
let rem = calc.rem(buffer.len(), 8)
if rem > 0 {
buffer += (false,) * (8 - rem)
}
while buffer.len() >= 8 {
codewords += bytes( (bits.to-int(buffer.slice(0,8)),) )
buffer = buffer.slice(8)
}
assert.eq(buffer.len(), 0)
// Pad to capacity
let pad-words = (236, 17)
diff-cw = cw-count - codewords.len()
for i in range(cw-count - codewords.len()) {
codewords += bytes( (pad-words.at( mod2(i) ),) )
}
assert.eq(codewords.len(), cw-count)
return codewords
}
// ======================================
// Alignment patterns and reserved areas
// ======================================
/// >>> qrutil.alignment-positions(1) == ()
/// >>> qrutil.alignment-positions(2) == (6, 18)
/// >>> qrutil.alignment-positions(3) == (6, 22)
/// >>> qrutil.alignment-positions(5) == (6, 30)
/// >>> qrutil.alignment-positions(7) == (6, 22, 38)
/// >>> qrutil.alignment-positions(10) == (6, 28, 50)
/// >>> qrutil.alignment-positions(16) == (6, 26, 50, 74)
/// >>> qrutil.alignment-positions(29) == (6, 30, 54, 78, 102, 126)
/// >>> qrutil.alignment-positions(36) == (6, 24, 50, 76, 102, 128, 154)
/// >>> qrutil.alignment-positions(40) == (6, 30, 58, 86, 114, 142, 170)
#let alignment-positions(version) = {
return qrluts.alignments.at(version - 1)
}
/// Checks if a position is a valid position for an alignment pattern.
///
/// >>> not qrutil.is-valid-alignment(0, 0, 21)
/// >>> not qrutil.is-valid-alignment(36, 0, 41)
/// >>> qrutil.is-valid-alignment(8, 1, 21)
/// >>> not qrutil.is-valid-alignment(7, 170, 177)
#let is-valid-alignment(i, j, size) = not((i < 8 and j < 8) or (i < 8 and j > size - 8) or (i > size - 8 and j < 8))
/// Checks if a module position is within a reserved area.
/// Reserved areas are finder, alignment and timing patterns,
/// spacing and format information, the black module and for higher
/// versions the version information areas.
///
/// Use of this function is discouraged, since it is quite slow.
///
/// >>> qrutil.is-reserved(0, 0, 1)
/// >>> qrutil.is-reserved(34, 0, 7)
/// >>> qrutil.is-reserved(35, 0, 7)
/// >>> qrutil.is-reserved(34, 5, 7)
/// >>> qrutil.is-reserved(0, 34, 7)
/// >>> qrutil.is-reserved(0, 35, 7)
/// >>> qrutil.is-reserved(5, 34, 7)
/// >>> qrutil.is-reserved(98, 5, 21)
/// >>> qrutil.is-reserved(5, 52, 11)
/// >>> qrutil.is-reserved(6, 52, 11)
/// >>> qrutil.is-reserved(52, 6, 11)
/// >>> qrutil.is-reserved(53, 8, 11)
/// >>> not qrutil.is-reserved(53, 9, 11)
#let is-reserved(i, j, version) = {
let size = size(version)
// timing patterns + black module
if (i == 6 or j == 6) or (i == size - 8 and j == 8) {
return true
// finder patterns + spacing + format information
} else if ((i < 9 or i > size - 9) and j < 9) or (i < 9 and j > size - 9) {
return true
// version information
} else if version > 6 {
if (i >= size - 11 and j < 6) or (j >= size - 11 and i < 6) {
return true
}
}
// Alignment patterns
let a = alignment-positions(version)
for x in a {
for y in a {
if is-valid-alignment(x, y, size) {
// // Check pattern
if j < x - 2 {
return false
} else if j < x + 3 and i < y - 2 {
return false
} else if j < x + 3 and i < y + 3 {
return true
}
}
}
}
return false
}
#let finder = (
"1111111",
"1000001",
"1011101",
"1011101",
"1011101",
"1000001",
"1111111"
).map(bits.from-str)
#let alignment = (
"11111",
"10001",
"10101",
"10001",
"11111"
).map(bits.from-str)
#let set-reserved-bits(field, version, ecl, mask, set-true: false) = {
let size = size(version)
// Add finder patterns
for i in range(8) {
for j in range(8) {
let v = set-true
if i < 7 and j < 7 {
v = finder.at(i).at(j) or set-true
}
field.at(i).at(j) = v or set-true
field.at(i).at(size - j - 1) = v
field.at(size - i - 1).at(j) = v
}
}
// Add timing patterns
for i in range(7, size - 7) {
field.at(6).at(i) = calc.even(i) or set-true
field.at(i).at(6) = calc.even(i) or set-true
}
// Add alignment patterns
let alignment-locations = alignment-positions(version)
for i in alignment-locations {
for j in alignment-locations {
if is-valid-alignment(i, j, size) {
for k in range(5) {
for l in range(5) {
field.at(i - 2 + k).at(j - 2 + l) = alignment.at(k).at(l) or set-true
}
}
}
}
}
// Add dark module
field.at(size - 8).at(8) = true
// Add format information
let fmt = format-bits(ecl, mask)
for (i, b) in fmt.enumerate() {
b = b or set-true
if i < 7 {
field.at(size - i - 1).at(8) = b
if i > 5 { i+= 1 }
field.at(8).at(i) = b
} else {
field.at(8).at(size - 15 + i) = b
if i > 8 { i += 1 }
field.at(15 - i).at(8) = b
}
}
// Adding version information
if version >= 7 {
let fmt = version-bits(version)
for i in range(6) {
field.at(size - 9).at(5 - i) = fmt.at(i*3) or set-true
field.at(size - 10).at(5 - i) = fmt.at(i*3+1) or set-true
field.at(size - 11).at(5 - i) = fmt.at(i*3+2) or set-true
field.at(5 - i).at(size - 9) = fmt.at(i*3) or set-true
field.at(5 - i).at(size - 10) = fmt.at(i*3+1) or set-true
field.at(5 - i).at(size - 11) = fmt.at(i*3+2) or set-true
}
}
return field
}
#let reserve-bits(field, version) = set-reserved-bits(
field, version, "l", 0, set-true: true
)
// =================================
// Galois field math
// =================================
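// Arithmetic over GF(2^8), as used by QR Reed-Solomon error correction.
// Note (added commentary): gf-add below is a bitwise XOR, reconstructed
// digit-by-digit via the bit() helper, while gf-mul works through the
// exp/log lookup tables of the field with reducing polynomial 285 (0x11D).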
#let exp(i) = qrluts.exp.at(i)
#let log(i) = qrluts.log.at(i)
#let bit(x, y, n) = {
mod2(calc.quo(x, n) + calc.quo(y, n)) * n
}
/// >>> qrutil.gf-add(0, 0) == 0
/// >>> qrutil.gf-add(0, 1) == 1
/// >>> qrutil.gf-add(1, 0) == 1
/// >>> qrutil.gf-add(1, 1) == 0
/// >>> qrutil.gf-add(2, 1) == 3
/// >>> qrutil.gf-add(2, 3) == 1
/// >>> qrutil.gf-add(55, 87) == 96
#let gf-add(a, b) = {
return mod2(a+b) + bit(a,b,2) + bit(a,b,4) + bit(a,b,8) + bit(a,b,16) + bit(a,b,32) + bit(a,b,64) + bit(a,b,128)
}
/// >>> qrutil.gf-mul(0, 255) == 0
/// >>> qrutil.gf-mul(255, 0) == 0
/// >>> qrutil.gf-mul(0, 0) == 0
/// >>> qrutil.gf-mul(255, 1) == 255
/// >>> qrutil.gf-mul(1, 255) == 255
/// >>> qrutil.gf-mul(2, 255) == 227
///
#let gf-mul(x, y) = {
if x==0 or y==0 { return 0 }
return exp(mod255(log(x) + log(y)))
}
/// >>> qrutil.gf-poly-add((1,), (1,)) == (0,)
/// >>> qrutil.gf-poly-add((1,), (1,)) == (qrutil.gf-add(1,1),)
/// >>> qrutil.gf-poly-add((4,2,1), (1,)) == (4,2,0)
/// >>> qrutil.gf-poly-add((4,2,1), (1,1,)) == (4,3,0)
#let gf-poly-add(p, q) = {
let (lp, lq, lr) = (p.len(), q.len(), calc.max(p.len(), q.len()))
let r = (0,) * lr
for i in range(lp) {
r.at(i + lr - lp) = p.at(i)
}
for i in range(lq) {
r.at(i + lr - lq) = gf-add(r.at(i + lr - lq), q.at(i))
}
return r
}
/// >>> qrutil.gf-poly-mul((1,), (1,)) == (1,)
/// >>> qrutil.gf-poly-mul((2,), (5,)) == (qrutil.gf-mul(2,5),)
#let gf-poly-mul(p, q) = {
let (lp, lq) = (p.len(), q.len())
let r = (0,) * (lp + lq - 1)
for i in range(r.len()) {
for pi in range(i + 1) {
let qi = i - pi
if pi < lp and qi < lq {
r.at(i) = gf-add(
r.at(i),
gf-mul(p.at(pi), q.at(qi))
)
}
}
}
return r
}
#let gf-poly-rem(p, q) = {
let d = p
let (lp, lq) = (p.len(), q.len())
for i in range(lp - lq + 1) {
let coef = d.at(i)
if coef != 0 {
for j in range(1, lq) {
if q.at(j) != 0 {
d.at(i + j) = gf-add(
d.at(i + j),
gf-mul(q.at(j), coef)
)
}
}
}
}
let sep = -(lq - 1)
return d.slice(sep)
}
/// Returns a generator polynomial to use for reed solomon
/// error correction.
/// Polynomials with 7 to 30 coefficients (required for qrcode error correction)
/// are taken from a precomputed table, while larger polynomials will have to be
/// computed.
///
/// >>> qrutil.rs-generator(0) == (1,)
/// >>> qrutil.rs-generator(1) == (1, 1)
/// >>> qrutil.rs-generator(7) == (1, 127, 122, 154, 164, 11, 68, 117)
/// >>> qrutil.rs-generator(16) == (1, 59, 13, 104, 189, 68, 209, 30, 8, 163, 65, 41, 229, 98, 50, 36, 59)
/// >>> qrutil.rs-generator(30) == (1, 212, 246, 77, 73, 195, 192, 75, 98, 5, 70, 103, 177, 22, 217, 138, 51, 181, 246, 72, 25, 18, 46, 228, 74, 216, 195, 11, 106, 130, 150)
#let rs-generator(cw-count) = {
if cw-count >= 7 and cw-count <= 30 {
return qrluts.generators.at(cw-count - 7)
}
let g = (1,)
let start = 0
if cw-count > 30 {
g = qrluts.generators.at(23)
start = 30
}
for i in range(start, cw-count) {
g = gf-poly-mul(g, (1, exp(i)))
}
return g
}
// =================================
// Error correction
// =================================
/// >>> qrutil.data-blocks(1, "l") == 1
/// >>> qrutil.data-blocks(6, "m") == 4
/// >>> qrutil.data-blocks(7, "l") == 2
/// >>> qrutil.data-blocks(7, "m") == 4
/// >>> qrutil.data-blocks(7, "q") == 6
/// >>> qrutil.data-blocks(7, "h") == 5
/// >>> qrutil.data-blocks(18, "q") == 18
/// >>> qrutil.data-blocks(19, "h") == 25
#let data-blocks(version, ecl) = {
let (_, blocks) = qrluts.blocks.at(version - 1).at(ecl)
return blocks
}
/// >>> qrutil.ec-codewords(1, "l") == 7
/// >>> qrutil.ec-codewords(2, "m") == 16
/// >>> qrutil.ec-codewords(6, "m") == 16
/// >>> qrutil.ec-codewords(7, "l") == 20
/// >>> qrutil.ec-codewords(7, "m") == 18
/// >>> qrutil.ec-codewords(7, "q") == 18
/// >>> qrutil.ec-codewords(7, "h") == 26
/// >>> qrutil.ec-codewords(18, "q") == 28
/// >>> qrutil.ec-codewords(19, "h") == 26
#let ec-codewords(version, ecl) = {
let (ecw-count, _) = qrluts.blocks.at(version - 1).at(ecl)
return ecw-count
}
/// >>> qrutil.ec-total-codewords(1, "l") == 7
/// >>> qrutil.ec-total-codewords(6, "m") == 64
/// >>> qrutil.ec-total-codewords(7, "q") == 108
/// >>> qrutil.ec-total-codewords(18, "q") == 504
/// >>> qrutil.ec-total-codewords(19, "h") == 650
#let ec-total-codewords(version, ecl) = {
let (ecw-count, blocks) = qrluts.blocks.at(version - 1).at(ecl)
return ecw-count * blocks
}
/// >>> qrutil.block-sizes(1, "l") == (19,)
/// >>> qrutil.block-sizes(1, "q") == (13,)
/// >>> qrutil.block-sizes(6, "m") == (27,27,27,27)
/// >>> qrutil.block-sizes(7, "l") == (78,78)
/// >>> qrutil.block-sizes(7, "m") == (31,31,31,31)
/// >>> qrutil.block-sizes(7, "q") == (14,14,15,15,15,15)
/// >>> qrutil.block-sizes(7, "h") == (13,13,13,13,14)
/// >>> qrutil.block-sizes(18, "q") == (22,) * 17 + (23,)
/// >>> qrutil.block-sizes(28, "l") == (117,117,117) + (118,)*10
#let block-sizes(version, ecl) = {
let (ecw-count, blocks) = qrluts.blocks.at(version - 1).at(ecl)
let dcw-count = data-codewords(version, ecl)
let group1-size = calc.floor(dcw-count / blocks)
let group1-blocks = blocks - calc.rem(dcw-count, blocks)
let block-sizes = (group1-size,) * group1-blocks
if group1-blocks < blocks {
block-sizes += (group1-size + 1,) * (blocks - group1-blocks)
}
return block-sizes
}
/// Computes the reed solomon error correction codewords
/// for the given data codewords and error correction level.
///
/// >>> qrutil.rs-codewords(bytes((32, 91, 11, 120, 209, 114, 220, 77, 67, 64, 236, 17, 236, 17, 236, 17)), 1, "m") == bytes((196, 35, 39, 119, 235, 215, 231, 226, 93, 23))
/// >>> qrutil.rs-codewords(bytes((32, 91, 11, 120, 209, 114, 220, 77, 67, 64) + (236, 17)*34), 7, "l") == bytes((91, 2, 102, 15, 224, 203, 242, 110, 15, 230, 29, 12, 200, 155, 139, 51, 72, 68, 172, 13))
/// >>> qrutil.rs-codewords(bytes((236, 17)*39), 7, "l") == bytes((44, 103, 50, 234, 161, 185, 11, 175, 110, 82, 242, 131, 109, 162, 175, 240, 131, 250, 74, 60))
/// >>> qrutil.rs-codewords(bytes((32, 91, 11, 120, 209, 114, 220, 77, 67, 64, 236, 17, 236, 17, 236, 17)), 7, "m") == bytes((164, 98, 130, 114, 171, 107, 27, 65, 50, 175, 129, 240, 155, 77, 250, 132, 137, 165))
/// >>> qrutil.rs-codewords(bytes((32, 91, 11, 120, 209, 114, 220, 77, 67, 64, 236, 17, 236, 17, 236, 17)), 7, "h") == bytes((163, 10, 238, 68, 145, 138, 74, 25, 128, 199, 247, 20, 104, 9, 102, 110, 101, 62, 244, 230, 53, 30, 28, 155, 231, 64))
/// >>> qrutil.rs-codewords(bytes((16, 32, 12, 86, 97, 128, 236, 17, 236, 17, 236, 17, 236, 17, 236, 17)), 1, "m") == bytes((165, 36, 212, 193, 237, 54, 199, 135, 44, 85))
/// >>> qrutil.rs-codewords(bytes((65, 166, 135, 71, 71, 7, 51, 162, 242, 247, 119, 119, 114, 231, 23, 38, 54, 246, 70, 82, 230, 54, 246, 210, 240, 236, 17, 236)), 2, "m") == bytes((43, 26, 183, 84, 208, 221, 49, 187, 191, 133, 63, 208, 63, 44, 207, 202))
#let rs-codewords(data-codewords, version, ecl) = {
let cw-count = ec-codewords(version, ecl)
let gen = rs-generator(cw-count)
let r = gf-poly-rem(array(data-codewords) + (0,) * cw-count, gen)
return bytes(r)
}
// =================================
// Interleaving blocks
// =================================
/// >>> qrutil.get-remainders(1) == 0
/// >>> qrutil.get-remainders(5) == 7
/// >>> qrutil.get-remainders(7) == 0
/// >>> qrutil.get-remainders(18) == 3
/// >>> qrutil.get-remainders(22) == 4
#let get-remainders(version) = {
if version in (2,3,4,5,6) {
return 7
} else if version in (14,15,16,17,18,19,20, 28,29,30,31,32,33,34) {
return 3
} else if version in (21,22,23,24,25,26,27) {
return 4
} else {
return 0
}
}
/// >>> qrutil.generate-blocks(bytes((65, 166, 135, 71, 71, 7, 51, 162, 242, 247, 119, 119, 114, 231, 23, 38, 54, 246, 70, 82, 230, 54, 246, 210, 240, 236, 17, 236)), 2, "m") == bytes((65, 166, 135, 71, 71, 7, 51, 162, 242, 247, 119, 119, 114, 231, 23, 38, 54, 246, 70, 82, 230, 54, 246, 210, 240, 236, 17, 236, 43, 26, 183, 84, 208, 221, 49, 187, 191, 133, 63, 208, 63, 44, 207, 202))
/// >>> qrutil.generate-blocks(bytes((32, 91, 11, 120, 209, 114, 220, 77, 67, 64) + (236, 17) * 73), 7, "l") == bytes((32, 236, 91, 17, 11, 236, 120, 17, 209, 236, 114, 17, 220, 236, 77, 17, 67, 236, 64, 17, 236, 236, 17, 17, 236, 236, 17, 17, 236, 236, 17, 17, 236, 236, 17, 17, 236, 236, 17, 17, 236, 236, 17, 17, 236, 236, 17, 17, 236, 236, 17, 17, 236, 236, 17, 17, 236, 236, 17, 17, 236, 236, 17, 17, 236, 236, 17, 17, 236, 236, 17, 17, 236, 236, 17, 17, 236, 236, 17, 17, 236, 236, 17, 17, 236, 236, 17, 17, 236, 236, 17, 17, 236, 236, 17, 17, 236, 236, 17, 17, 236, 236, 17, 17, 236, 236, 17, 17, 236, 236, 17, 17, 236, 236, 17, 17, 236, 236, 17, 17, 236, 236, 17, 17, 236, 236, 17, 17, 236, 236, 17, 17, 236, 236, 17, 17, 236, 236, 17, 17, 236, 236, 17, 17, 236, 236, 17, 17, 236, 236, 17, 17, 236, 236, 17, 17, 91, 44, 2, 103, 102, 50, 15, 234, 224, 161, 203, 185, 242, 11, 110, 175, 15, 110, 230, 82, 29, 242, 12, 131, 200, 109, 155, 162, 139, 175, 51, 240, 72, 131, 68, 250, 172, 74, 13, 60))
#let generate-blocks(codewords, version, ecl) = {
let block-sizes = block-sizes(version, ecl)
if block-sizes.len() == 1 {
return codewords + rs-codewords(codewords, version, ecl)
} else {
let blocks = ()
let ec-codewords = ()
// Get blocks and error correction codewords
let i = 0
for bs in block-sizes {
let block = codewords.slice(i, i + bs)
blocks.push(block)
ec-codewords.push(rs-codewords(block, version, ecl))
i += bs
}
// generate interleaved data
codewords = bytes(())
let max = calc.max(..block-sizes)
for i in range(max) {
for b in blocks {
if i < b.len() {
//codewords.push( b.at(i) )
codewords += bytes( (b.at(i),) )
}
}
}
// interleave ec codewords
max = ec-codewords.first().len()
for i in range(max) {
for ec in ec-codewords {
// codewords.push( ec.at(i) )
codewords += bytes( (ec.at(i),) )
}
}
return codewords
}
}
// =================================
// Masking
// =================================
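// The eight data-mask predicates from the QR code specification: a module at
// (row i, column j) is inverted wherever its mask predicate evaluates to true.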
#let masks = (
(i, j) => mod2(i+j) == 0,
(i, j) => mod2(i) == 0,
(i, j) => mod3(j) == 0,
(i, j) => mod3(i+j) == 0,
(i, j) => mod2(int(i / 2) + int(j / 3)) == 0,
(i, j) => (mod2(i * j) + mod3(i * j)) == 0,
(i, j) => mod2(mod2(i * j) + mod3(i * j)) == 0,
(i, j) => mod2(mod2(i + j) + mod3(i * j)) == 0
)
#let is-masked(i, j, mask) = (masks.at(mask))(i, j)
#let apply-mask(i, j, bit, mask) = is-masked(i, j, mask) != bit
#let check-mask(field, mask, version) = {
let size = size(version)
// Create masked code
let mask-func = masks.at(mask)
for i in range(size) {
for j in range(size) {
field.at(i).at(j) = (mask-func)(i, j) != field.at(i).at(j)
}
}
let (p1, p2, p3, p4) = (0, 0, 0, 0)
let (r1, r2) = (0, 0)
let cond4-n = 0
let check-patterns( b1,b2,b3,b4,b5,b6,b7,b8,b9,b10,b11 ) = {
return (not (b1 or b2 or b3 or b4 or b6 or b10) and (b5 and b7 and b8 and b9 and b11)) or (not (b2 or b6 or b8 or b9 or b10 or b11) and (b1 and b3 and b4 and b5 and b7))
}
for i in range(size) {
let cond3-win = ((), ())
for j in range(size) {
let (bit1, bit2) = (field.at(i).at(j), field.at(j).at(i))
// Condition 1
// Check rows and cols for runs of 5 or more
// modules of same value
if bit1 {
r1 += 1
} else {
if r1 >= 5 {
p1 += 3 + calc.max(0, (r1 - 5))
}
r1 = 0
}
if bit2 {
r2 += 1
} else {
if r2 >= 5 {
p1 += 3 + calc.max(0, (r2 - 5))
}
r2 = 0
}
// Condition 2
// Use a running window of 2x2 modules and
// with (i,j) in the top right and check for
// the same value.
if i > 0 and j > 0 {
if bit1 == field.at(i - 1).at(j - 1) and bit1 == field.at(i - 1).at(j) and bit1 == field.at(i).at(j - 1) {
p2 += 3
}
}
// Condition 3
// Use running windows for rows and columns
// to check against the predefined patterns.
// TODO: Optimize this
// if j > 10 {
// if check-patterns(field.at(i).at(j - 10),
// field.at(i).at(j - 9), field.at(i).at(j - 8),
// field.at(i).at(j - 7), field.at(i).at(j - 6),
// field.at(i).at(j - 5), field.at(i).at(j - 4),
// field.at(i).at(j - 3), field.at(i).at(j - 2),
// field.at(i).at(j - 1), field.at(i).at(j - 0)) {
// penalties.at(2) += 40
// }
// }
// if i > 10 {
// if check-patterns(field.at(i - 10).at(j),
// field.at(i - 9).at(j), field.at(i - 8).at(j),
// field.at(i - 7).at(j), field.at(i - 6).at(j),
// field.at(i - 5).at(j), field.at(i - 4).at(j),
// field.at(i - 3).at(j), field.at(i - 2).at(j),
// field.at(i - 1).at(j), field.at(i - 0).at(j)) {
// penalties.at(2) += 40
// }
// }
// Condition 4
// Just count black modules for now
if bit1 {
cond4-n += 1
}
}
}
// Condition 4
// compute final penalty
let total = size * size
let p = int((cond4-n / total) * 100)
let v = (p - calc.rem(p, 5), p + (5 - calc.rem(p, 5)))
v = v.map(x => calc.quo(calc.abs(x - 50), 5))
p4 = calc.min(..v) * 10
// calculate sum of condition 1 to 4 panalties
return (field, p1 + p2 + p3 + p4)
}
#let best-mask(field, version) = {
let mask = 0
let (masked-field, penalty) = check-mask(field, 0, version)
for m in range(1, masks.len()) {
let (mf, p) = check-mask(field, m, version)
if p < penalty {
mask = m
masked-field = mf
penalty = p
}
}
return (mask, masked-field)
}
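// Illustrative end-to-end sketch (added commentary, not part of the module;
// the payload and variable names are made up):
// #let version = best-version(11, "q", 1)
// #let codewords = encode("HELLO WORLD", version, "q", mode: 1)
// #let data = generate-blocks(codewords, version, "q")
// ...place the bits of `data` into the module grid (skipping the reserved
// areas from set-reserved-bits), then select a mask with best-mask(...).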
|
https://github.com/AnsgarLichter/cv-typst | https://raw.githubusercontent.com/AnsgarLichter/cv-typst/main/template.typ | typst | #import "@preview/fontawesome:0.1.0": *
#import "settings/styles.typ": *
#import "modules/utils.typ": *
#import "modules/header.typ": *
#import "modules/section.typ": *
#import "modules/skills.typ": *
#let cv(
content
) = {
set text(
font: bodyStyle.fonts,
weight: bodyStyle.weight,
size: bodyStyle.size,
)
set list(
indent: listStyle.indent
)
set align(left)
set page(
paper: pageStyle.paper,
margin: pageStyle.margin
)
content
}
#let header(
fullName: [],
jobTitle: [],
// Each array item must have a property link, text and icon to be displayed.
socials: (),
profilePicture: ""
) = {
table(
columns: headerStyle.table.columns,
inset: 0pt,
stroke: none,
column-gutter: headerStyle.table.columnGutter,
align: left + horizon,
{
createHeaderInfo(
fullName: fullName,
jobTitle: jobTitle,
socials: socials
)
},
{
createHeaderImage(
profilePhoto: profilePicture
)
}
)
v(headerStyle.margins.bottom)
}
#let section(title) = {
v(sectionStyle.margins.top)
createSectionTitle(title)
}
#let entry(
title: "",
companyOrUniversity: "",
date: "",
location: "",
logo: "",
description: ()
) = {
v(entryStyle.margins.top)
table(
columns: entryStyle.table.columns,
inset: 0pt,
stroke: none,
align: horizon,
column-gutter: entryStyle.margins.betweenLogoAndTitle,
{image(logo)},
table(
columns: (1fr),
inset: 0pt,
stroke: none,
row-gutter: entryStyle.margins.betweenTitleAndSubtitle,
align: auto,
{
text(
size: entryStyle.title.size,
weight: entryStyle.title.weight,
fill: entryStyle.title.color,
title
)
text(
size: entryStyle.companyOrUniversity.size,
weight: entryStyle.companyOrUniversity.weight,
fill: entryStyle.companyOrUniversity.color,
" @" + companyOrUniversity
)
},
{
table(
columns: 2,
inset: 0pt,
stroke: none,
align: horizon,
column-gutter: entryStyle.margins.betweenTimeAndLocation,
{table(
columns: 2,
inset: 0pt,
stroke: none,
align: horizon,
column-gutter: entryStyle.margins.betweenIconAndText,
{if date.len() > 0{fa-hourglass-2()}},
{text(
size: entryStyle.timeAndLocation.size,
weight: entryStyle.timeAndLocation.weight,
fill: entryStyle.timeAndLocation.color,
date
)},
)},
{table(
columns: 2,
inset: 0pt,
stroke: none,
align: horizon,
column-gutter: entryStyle.margins.betweenIconAndText,
{if location.len() > 0{fa-location-dot()}},
{text(size: 10pt, location)}
)},
)
},
)
)
text()[
#v(3pt)
#description
]
}
#let skill(
category: "",
skills: ()
) = {
table(
columns: skillsStyle.columns,
inset: 0pt,
column-gutter: 0pt,
stroke: none,
align: (horizon, left),
{text(category)},
{
renderSkills(skills: skills)
}
)
v(skillsStyle.margins.betweenCategories)
}
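// Illustrative usage sketch (added commentary; the values are made up, the
// parameter names come from the functions above):
// #show: cv
// #header(
//   fullName: "Jane Doe",
//   jobTitle: "Software Engineer",
//   socials: ((link: "https://example.com", text: "example.com", icon: fa-globe()),),
//   profilePicture: "profile.png",
// )
// #section("Experience")
// #entry(
//   title: "Backend Developer",
//   companyOrUniversity: "Example Corp",
//   date: "2020 - 2024",
//   location: "Berlin",
//   logo: "logo.png",
//   description: list([Built APIs], [Improved performance]),
// )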
|
|
https://github.com/frectonz/the-pg-book | https://raw.githubusercontent.com/frectonz/the-pg-book/main/book/179.%20nov.html.typ | typst | nov.html
<NAME>
November 2019

If you discover something new, there's a significant chance you'll be accused of some form of heresy.

To discover new things, you have to work on ideas that are good but non-obvious; if an idea is obviously good, other people are probably already working on it. One common way for a good idea to be non-obvious is for it to be hidden in the shadow of some mistaken assumption that people are very attached to. But anything you discover from working on such an idea will tend to contradict the mistaken assumption that was concealing it. And you will thus get a lot of heat from people attached to the mistaken assumption. Galileo and Darwin are famous examples of this phenomenon, but it's probably always an ingredient in the resistance to new ideas.

So it's particularly dangerous for an organization or society to have a culture of pouncing on heresy. When you suppress heresies, you don't just prevent people from contradicting the mistaken assumption you're trying to protect. You also suppress any idea that implies indirectly that it's false. Every cherished mistaken assumption has a dead zone of unexplored ideas around it. And the more preposterous the assumption, the bigger the dead zone it creates.

There is a positive side to this phenomenon though. If you're looking for new ideas, one way to find them is by looking for heresies. When you look at the question this way, the depressingly large dead zones around mistaken assumptions become excitingly large mines of new ideas.
|
|
https://github.com/liuguangxi/erdos | https://raw.githubusercontent.com/liuguangxi/erdos/master/Problems/typstdoc/figures/p93_5.typ | typst | #import "@preview/cetz:0.2.1"
#cetz.canvas(length: 10pt, {
import cetz.draw: *
let brown = orange.darken(40%)
let green0 = rgb("#B2D30A")
let os = 0.12
let sqr-mark(cor) = line(cor, (rel: (0.24, 0.24)), stroke: black+3.6pt)
line((1, 6.5), (2, 4.6), stroke: yellow+3pt)
line((2.6, 1.2), (2.2, 4), stroke: yellow+3pt)
line((2, 4.6), (3.6, 7), stroke: yellow+3pt)
line((2.9, 1.1), (3.6, 7), stroke: yellow+3pt)
line((2.2, 4), (5, 6), stroke: yellow+3pt)
line((5, 6), (3.6, 7.8), stroke: yellow+3pt)
line((3.3, 9.7), (3.6, 7.8), stroke: yellow+3pt)
line((5, 6), (5, 8), stroke: yellow+3pt)
line((5, 8), (7.8, 8.8), stroke: yellow+3pt)
line((7.8, 8.8), (8.4, 6.6), stroke: yellow+3pt)
line((7.2, 5), (8.4, 6.6), stroke: yellow+3pt)
line((5, 6), (8.4, 6.6), stroke: brown+3pt)
line((7.2, 5), (5, 8), stroke: orange+3pt)
line((7.2, 5), (9.6, 5.5), stroke: orange+3pt)
line((9.6, 5.5), (11, 7.4), stroke: yellow+3pt)
line((7.2, 5), (7.5, 1.6), stroke: yellow+3pt)
line((5, 0.4), (7.5, 1.6), stroke: yellow+3pt)
line((4.6, 0.6), (9, 4.8), stroke: yellow+3pt)
line((8.3, 2.5), (9, 4.8), stroke: yellow+3pt)
line((8.3, 2.5), (10, 3), stroke: yellow+3pt)
sqr-mark((2-os, 4.6-os))
sqr-mark((2.2-os, 4-os))
sqr-mark((3.6-os, 7-os))
sqr-mark((5-os, 6-os))
sqr-mark((3.6-os, 7.8-os))
sqr-mark((5-os, 8-os))
sqr-mark((7.8-os, 8.8-os))
sqr-mark((8.4-os, 6.6-os))
sqr-mark((7.2-os, 5-os))
sqr-mark((9.6-os, 5.5-os))
sqr-mark((7.5-os, 1.6-os))
sqr-mark((9-os, 4.8-os))
sqr-mark((8.3-os, 2.5-os))
hobby((3.5, 10), (1, 6.5), (0, 4), (3, 1), (5, 0.4), (9, 1), (10, 3), (11, 7.5), omega: 0, stroke: 3pt)
content((0.5, 11), anchor: "south-west", [*Poisonous Toad*])
})
|
|
https://github.com/qjcg/awesome-typst | https://raw.githubusercontent.com/qjcg/awesome-typst/main/CONTRIBUTING.md | markdown | Creative Commons Zero v1.0 Universal | # Contributing
Do you want to contribute? Excellent!
Please create a Pull Request that:
- Adds awesome value for people trying to get things done with Typst.
- Is for a single suggestion.
- Has a link to the repo in the PR body.
Thank you for your suggestions!
|
https://github.com/lucannez64/Notes | https://raw.githubusercontent.com/lucannez64/Notes/master/README.typ | typst | #import "template.typ": *
// Take a look at the file `template.typ` in the file panel
// to customize this template and discover how it works.
#show: project.with(
title: "README",
authors: (
"<NAME>",
),
date: "30 Octobre, 2023",
)
#set heading(numbering: "1.1.")
= Notes
<notes>
This directory contains my class notes and materials.
== Contents
<contents>
- Course notes for various subjects like Math, Physics, Computer
Science, etc. All notes are in Markdown format.
- PDF versions of the notes generated from Markdown using pandoc.
- Some source code files.
- Diagrams and images referenced in notes.
== Scripts
<scripts>
- `update.ps1` - Automatically updates and compiles the Markdown files
to PDF on changes.
- `notes_converter` - Compiles Markdown files to PDF and opens the
Markdown in Neovim and PDF in parallel. (located on the separate
repository)
#link("https://github.com/lucannez64/notes_converter")[Link]
- `pdf.ps1` - Opens the compiled PDF.
- `edit.ps1` - Opens the Markdown file in Neovim for editing.
- `mpf.ps1` - Shortcut to start notes in Neovim and PDF viewer
side-by-side.
This allows me to efficiently take notes in Markdown, automatically
generate PDFs, and reference both formats.
|
|
https://github.com/Wh4rp/Presentacion-Typst | https://raw.githubusercontent.com/Wh4rp/Presentacion-Typst/master/modulos/preview-block.typ | typst | #let raw-view(
width: 100%,
term
) = block(
width: width,
fill: luma(230),
inset: 8pt,
radius: 4pt,
term
)
#let preview(term) = block(
width: 100%,
fill: luma(230),
inset: 10pt,
radius: 4pt,
block(
width: 100%-20pt,
fill: luma(255),
inset: 8pt,
radius: 4pt,
term
)
)
#let preview-block(
size: 12pt,
content: "",
) = {
text(size, align(center,
grid(
columns: (auto, auto),
column-gutter: 20pt,
align(left, raw-view[#raw(lang:"typ", content)]),
align(left, preview[#eval("["+content+"]")]),
)
))
}
#preview-block(
size: 12pt,
content:"= Este es un título
Hola, este es un párrafo normal.
- item 1
- item 2
+ subitem 1
+ subitem 2
== Subtítulo
#lorem(5)",
) |
|
https://github.com/typst/packages | https://raw.githubusercontent.com/typst/packages/main/packages/preview/chuli-cv/0.1.0/modules/education.typ | typst | Apache License 2.0 | #import "@preview/fontawesome:0.1.0": *
#import "./styles.typ": *
#let render-education-header(title, logo, company-or-university) = {
table(
columns: (5%, 1fr),
inset: 0pt,
column-gutter: 2pt,
row-gutter: 2pt,
stroke: none,
align: horizon,
logo,
black-topic-style(title),
{},
accent-subtopic-style(" " + company-or-university)
)
}
#let render-education-icon-info(date: "", location: "") = {
table(
columns: 4,
stroke: none,
inset: 3pt,
fa-calendar-days(),
regular-text-style(date),
fa-location-dot(),
regular-text-style(location)
)
}
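// Illustrative usage sketch (added commentary; the values are made up, the
// parameter names come from the functions above):
// #render-education-header(
//   "BSc Computer Science",
//   image("university-logo.png"),
//   "Example University",
// )
// #render-education-icon-info(date: "2018 - 2021", location: "Lima, Peru")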
|
https://github.com/LeptusHe/LeptusHe.github.io | https://raw.githubusercontent.com/LeptusHe/LeptusHe.github.io/main/source/_posts/rendering/mobile-terrain-optimization.typ | typst | #import "../typst-inc/blog-inc.typc": *
#show: blog_setting.with(
title: "移动端地形渲染优化",
author: ("<NAME>"),
paper: "jis-b0",
//preview: true
)
#metadata("渲染技术") <tags>
#metadata("图形渲染") <categories>
#metadata("2024-07-17") <date>
Game terrain usually needs to transition between different ground-surface appearances, and terrain rendering systems generally implement this with the #im[Texture Splatting] technique. The Unity engine, for example, supports blending 4 textures by weight at every position of a terrain region to form the transitions. The blend weights of the 4 textures are stored in a texture known as the #im[Splat Texture]. The Splat Texture covers the entire terrain region, and its RGBA channels store the blend weights of the four ground-texture layers. At runtime, the shader samples the Splat Texture at the fragment's world position to obtain the blend weights of the 4 layers, and then blends them into the final ground-surface appearance.
Unity's Texture Splatting yields fairly good transitions, but it has a serious performance problem on mobile. In the shader, the technique needs to sample the Splat Texture as well as the 4 ground-texture layers, for a total of 5 texture samples. If the renderer uses PBR, material maps, normal maps, and other data are needed as well, and the number of texture samples can reach $13$ ($4 times 3 + 1$). For mobile, this cost is clearly too high. Moreover, the technique restricts the whole terrain to just 4 ground textures, which leaves the terrain as a whole lacking in diversity and richness.
In our project, the artists asked for roughly 8 different ground textures across the whole terrain, while the solution also had to meet mobile performance requirements. A Texture Splatting-based approach was therefore not suitable for the project, and we needed to design a new technique to satisfy these requirements.
= Approach
On mobile, the bottleneck of terrain rendering lies in the excessive number of texture samples, so to optimize performance we need to reduce the number of textures sampled during rendering. From observing the art scenes we find that: #im[most of the terrain in production scenes uses essentially just one ground-texture layer; only a small fraction of transition regions blend multiple layers, for example the transition between grass and mud.]
Based on this observation, we can apply the following optimizations:
- Partition the terrain into regions that use a single texture layer and regions that use multiple layers.
- For single-layer regions, sample one ground texture in the shader.
- For multi-layer regions, keep sampling multiple ground-texture layers in the shader.
This scheme effectively reduces the number of texture samples for rendering the whole terrain; the concrete gain depends on the proportion of the terrain covered by single-layer regions.
At the same time, to let the artists use more than 4 ground textures over the whole terrain, we organize the terrain texture data as a texture array; in the shader, a particular ground-texture layer can be sampled via its layer id. In theory, the number of layers this technique supports is bounded by the hardware, but it should be far more than 8.
= Single-Layer Terrain
After partitioning the terrain, we obtain a large number of terrain mesh triangles that use a single texture layer. How, then, do we determine which ground-texture layer these triangles use?
One option is to cluster the mesh triangles that use the same ground texture into separate meshes and store the layer id in the material. However, this yields up to 4 meshes and thus 4 draw calls when drawing. If the terrain is split into multiple chunks, each chunk incurs up to 4 draw calls, so the terrain's draw-call count can grow to up to 4 times the original. The other option is to store the ground-texture layer id used by each triangle in the triangle's vertex data. Although this increases the vertex data, it reduces draw calls. In the end, the second option was judged the better choice for the project.
The next question to consider is: how do we encode the ground-texture layer id used by a triangle into the vertex data, and decode it in the shader for texture sampling?
For the triangles of the terrain mesh, we can store the ground-layer id in a vertex color channel. The vertex color channel format is uint8, which can hold 255 ground-texture layer ids. Together with a texture array for the ground-layer textures, the whole terrain can then in theory use up to 255 ground-texture layers, which also greatly improves the richness and variety of the ground surface.
For the ground-layer id, we can store the same layer id in the colors of all three vertices of the triangle. Let the layer id be $k$; then all three vertex color values are $(k, 0, 0)$. After rasterization in the rendering pipeline, the interpolated color value obtained by a fragment of this triangle is:
$
c &= alpha dot.c R + beta dot.c G + gamma dot.c B \
&= alpha dot.c (k, 0, 0) + beta dot.c (k, 0, 0) + gamma dot.c (k, 0, 0) \
&= (alpha + beta + gamma) dot.c (k, 0, 0) \
&= (k, 0, 0)
$ <eq-color-rasterization>
Since the barycentric coordinates $alpha, beta, gamma$ sum to $1$ during rasterization, the interpolated vertex attribute in @eq-color-rasterization is exactly the layer id used by the triangle.

= Multi-Layer Terrain

== Storage and Encoding of Texture Layer IDs

When a triangle of the terrain mesh needs multiple ground texture layers, for performance and memory reasons we still store the ground texture layer ids in the triangle's vertex color data.

Although a single vertex color can hold 4 uint8 values, so the three vertices together could hold 12, for performance reasons we limit each triangle to at most three ground texture layers.

We now need to work out how to encode the three ground texture layer ids into the vertex color data and decode them at runtime.
#let fv = (k) => {$bold(#k)$}
Suppose the data stored at the three vertices is given by the vector-valued functions $fv(f)(i, j, k)$, $fv(g)(i, j, k)$ and $fv(h)(i, j, k)$, where $i, j, k$ are the three ground texture layer ids to encode. After rasterization, the interpolated vertex attribute at a fragment is:
$
fv(p) = fv(f)(i, j, k) dot.c alpha + fv(g)(i, j, k) dot.c beta + fv(h)(i, j, k) dot.c gamma
$ <eq-fv-layer>
where $alpha, beta, gamma$ are the barycentric coordinates of the fragment.

In @eq-fv-layer, $alpha, beta, gamma$ and $fv(p)$ are known, and the encoding functions $fv(f), fv(g), fv(h)$ are known as well; the goal is to decode the values of $i, j, k$.

Drawing on the notion of a linear space, we can use the following encoding functions:
$
cases(
fv(f)(i, j, k) = i dot.c fv(e_1) = i dot.c (1, 0, 0),
  fv(g)(i, j, k) = j dot.c fv(e_2) = j dot.c (0, 1, 0),
  fv(h)(i, j, k) = k dot.c fv(e_3) = k dot.c (0, 0, 1),
)
$
where $fv(e_1), fv(e_2), fv(e_3)$ are the basis vectors of three-dimensional space.

Thus @eq-fv-layer can be rewritten as:
$
fv(p) = i dot.c alpha dot.c fv(e_1) + j dot.c beta dot.c fv(e_2) + k dot.c gamma dot.c fv(e_3)
$
The ground texture layer ids can therefore be solved for as:
$
cases(
i = (angle.l fv(p), fv(e_1) angle.r) / alpha,
j = (angle.l fv(p), fv(e_2) angle.r) / beta,
k = (angle.l fv(p), fv(e_3) angle.r) / gamma,
)
$
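As a quick sanity check with made-up numbers: take layer ids $(i, j, k) = (2, 5, 7)$ and barycentric coordinates $(alpha, beta, gamma) = (0.5, 0.3, 0.2)$; then $fv(p) = (1.0, 1.5, 1.4)$, and dividing each component by the matching barycentric coordinate recovers $(2, 5, 7)$.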
This decoding scheme requires the barycentric coordinates used to generate the fragment to be available in the fragment shader, but DirectX only exposes barycentrics in the fragment shader from Shader Model 6.1 onwards, which is not available on mobile. A simple way to obtain the barycentric coordinates on mobile is to set the attribute values of the three vertices to the three-dimensional basis vectors $fv(e_1), fv(e_2), fv(e_3)$; the interpolated attribute is then exactly $(alpha, beta, gamma)$.

== Blend Weights for Multi-Layer Textures

For triangles that use multiple texture layers, the blend weights are taken directly from the interpolated barycentric coordinates instead of being driven by a Splat Texture. The result may differ slightly in appearance, but the impact is small and correct transitions are still produced.
= References
1. #link("https://en.wikipedia.org/wiki/Texture_splatting")[Texture Splatting - Wiki]
2. #link("https://github.com/microsoft/DirectXShaderCompiler/wiki/SV_Barycentrics")[DirectX Documentation - SV_Barycentrics] |
|
https://github.com/piggsoft/mdbook-typst-piggsoft | https://raw.githubusercontent.com/piggsoft/mdbook-typst-piggsoft/master/readme.md | markdown | # mdbook-typst-piggsoft
## What it is

mdbook-typst-piggsoft is an mdbook output backend that exports the book's markdown files to PDF, SVG, or PNG.

### Main references

Many thanks to the following projects, parts of which this implementation imitates.

It also fixes the problem of images failing to be exported.
- [mdbook-typst-pdf](https://github.com/KaiserY/mdbook-typst-pdf)
- [mdbook-typst](https://github.com/LegNeato/mdbook-typst)
## How to use

### Install the dependencies

#### mdbook

Install via cargo install (_not recommended, as it is slow_):
`cargo install mdbook`
`cargo install --git https://github.com/rust-lang/mdBook.git mdbook`
It is recommended to download a prebuilt executable from <https://github.com/rust-lang/mdBook/releases>; building from source is only advisable if there is no build for your OS.
#### typst
Typst is a new markup-based typesetting system that rivals LaTeX in capability while being much easier to learn and use.

We convert the markdown files to Typst files and then export them with the Typst CLI; Typst itself is also written in Rust.
<https://github.com/typst/typst/releases>
### book.toml configuration

Add the following configuration to book.toml to generate a PDF:
```toml
[output.typst-piggsoft]
```
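With this in place, a regular `mdbook build` run will invoke the backend (assuming the `mdbook-typst-piggsoft` binary is on your `PATH`, which is how mdbook locates output backends).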
### Optional parameters

The optional parameters and their default values are listed below:
```toml
[output.typst-piggsoft]
section_level = 3 # maximum heading depth in the table of contents
document_keywords = "keywords" # used for the PDF metadata
output_format = "pdf" # one of pdf, svg, png
output_dir = "typst-piggsoft" # ${book.root} + ${build.build_dir} + ${output_dir}
output_filename = "out" # default file name: pdf -> ${output_filename}.pdf; svg -> ${output_filename}-{n}.svg; png -> ${output_filename}-{n}.png, where {n} is the page index based on SUMMARY.md
template_path = "None" # unset by default; a Typst preamble that, when configured, is read from ${book.root} + ${template_path}
```
## Miscellaneous

The functionality is not yet fully complete; links and images are partly unimplemented. Completion is planned for March.
|
|
https://github.com/jamesrswift/typst_templates | https://raw.githubusercontent.com/jamesrswift/typst_templates/main/rsc-template/0.1.0/example.typ | typst | #import "rsc-template.typ": *
// Take a look at the file `template.typ` in the file panel
// to customize this template and discover how it works.
#show: project.with(
title: lorem(12),
authors: (
(name: "<NAME>"),
(name: "<NAME>"),
(name: "<NAME>"),
),
abstract: lorem(75),
//journal: "Food Chemistry Advances",
article_type: "Review Article",
article_dates: (
(type: "Received Date", date: "01/05/2023"),
(type: "Revised Date", date: "02/05/2023"),
(type: "Accepted Date", date: "03/05/2023")
),
doi: "10.1016/j.focha.2023.100417",
)
= Introduction
#lorem(500)
= Methodology
#lorem(200)
== Materials
#lorem(20)
== Chemicals
#lorem(50)
== Samples
#lorem(75)
== Methods
#lorem(250)
=== Measurement
#lorem(250)
= Results and Discussion
#lorem(1000)
= Conclusions
#lorem(75)
#set heading(numbering: none)
= Acknowledgements
#lorem(20)
= Conflicts of Interest
#lorem(20)
= References
#lorem(1000) |
|
https://github.com/mrcinv/nummat-typst | https://raw.githubusercontent.com/mrcinv/nummat-typst/master/09_konvergencna_obmocja.typ | typst | = Konvergenčna območja nelinearnih enačb
<konvergencna-obmocja>
== Task
- Implement Newton's method for solving systems of nonlinear equations (one Newton step is recalled below the list).
- Find a solution of the two nonlinear equations in two unknowns
$
x^3 - 3x y^2 & = 1\
3x^2 y - y^3 & = 0.
$
- A system of nonlinear equations usually has several solutions. Show graphically which solution
  Newton's method converges to, depending on the initial approximation.
  Choose the initial approximations on a rectangular grid. Assign each node of the grid a different color according to which solution Newton's method converges to from it. Wrap the whole procedure in a function `konvergencno_obmocje`.
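For reference, a single Newton step for a system $F(x) = 0$ with Jacobian matrix $J_F$ updates the current approximation $x_k$ as $ x_(k+1) = x_k - J_F (x_k)^(-1) F(x_k). $ |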
|
https://github.com/polazarus/typst-svg-emoji | https://raw.githubusercontent.com/polazarus/typst-svg-emoji/main/README.md | markdown | MIT License | # Typst SVG emoji
A hopefully temporary Typst package to work around spotty support of color Emoji.
Basic idea: replace automically every emoji use by the corresponding SVG image from a font (for now, only [Noto](https://github.com/googlefonts/noto-emoji).
## Installation and usage
_thx [Pandicon](https://github.com/Pandicon)_
You can use this package both locally and in the [Typst online editor](https://typst.app/).
### Local use
To install the package locally, make sure you know how local packages work in Typst.
Please take a look [at the documentation](https://github.com/typst/packages#local-packages) if you are not sure.
- Clone this repository with submodules to download the Noto fonts (`git clone --recurse-submodules <repository-url>`) to `{data-dir}/local/svg-emoji/0.1.0`.
- Import `@local/svg-emoji:0.1.0` in your Typst project, for example:
```typst
#import "@local/svg-emoji:0.1.0": setup-emoji, github // only if you want to use GH names for emojis
// first install the emoji hook!
#show: setup-emoji
// directly
😆🛖🐡
// builtin emoji namespace
#emoji.rocket
// or use github-named emojis
#github.blue_car
```
Note: You can copy the package files to a different directory than `local`, for example `my_packages`, but the import will have to reflect it: `#import "@my_packages/svg-emoji:0.1.0" ...`.
### Typst.app website
- Clone this repository with submodules to download the Noto fonts `git clone --recurse-submodules <repository-url>`
- Copy the `github.json`, `raw_github.json`, `noto.json`, `noto.regex`, `lib.typ`, `noto-emoji/svg/*` files (keeping the directory structure) to a directory in your project, say `svg-emoji`
- Import the lib file in your Typst project
```typst
#import "./svg-emoji/lib.typ": setup-emoji, github
// see above for usage
```
If you choose a different folder name than `svg-emoji`, make sure it is reflected in the `#import`.
## TODO
- more doc
- prepare release in CI
- understand why `setup-github` does not currently work
|
https://github.com/Nrosa01/TFG-2023-2024-UCM | https://raw.githubusercontent.com/Nrosa01/TFG-2023-2024-UCM/main/Memoria%20Typst/data/gridexamples.typ | typst | #import "./data.typ": *
#let state_01_ex1 = (
caption: "1ª Generación",
caption_alignment: ttb,
hspace: 20pt,
transition: " ",
columns: 3,
data: (0, 0, 0,
0, 1, 1,
1, 0, 0 )
)
#let state_02_ex1 = (
caption: "2ª Generación",
caption_alignment: ttb,
hspace: 20pt,
transition: "",
columns: 3,
data: (1, 1, 1,
1, 0, 0,
0, 1, 1 )
)
#let state_03_ex1 = (
caption: "3ª Generación",
caption_alignment: ttb,
hspace: 20pt,
transition: "",
columns: 3,
data: (0, 0, 0,
0, 1, 1,
1, 0, 0 )
)
#let state_01_contact_automata = (
caption: "Estado inicial",
caption_alignment: ttb,
hspace: 20pt,
transition: " ",
columns: 5,
data: (0, 0, 0, 0, 0,
0, 0, 0, 0, 0,
0, 0, 1, 0, 0,
0, 0, 0, 0, 0,
0, 0, 0, 0, 0)
)
#let state_02_contact_automata = (
caption: "2ª Generación",
caption_alignment: ttb,
hspace: 20pt,
transition: " ",
columns: 5,
data: (0, 0, 0, 0, 0,
0, 1, 1, 1, 0,
0, 1, 1, 1, 0,
0, 1, 1, 1, 0,
0, 0, 0, 0, 0)
)
#let state_03_contact_automata = (
caption: "3ª Generación",
caption_alignment: ttb,
hspace: 20pt,
transition: " ",
columns: 5,
data: (1, 1, 1, 1, 1,
1, 1, 1, 1, 1,
1, 1, 1, 1, 1,
1, 1, 1, 1, 1,
1, 1, 1, 1, 1)
)
#let hspacewolfram = 7.5pt
#let cellsizewolfram = 8pt
#let gutterwolfram = 3.5pt
#let guttercolorwolfram = 2pt
#let rule30_01 = (
caption: "0",
caption_alignment: btt,
hspace: hspacewolfram,
transition: " ",
transition_icon: "",
columns: 3,
data:(1,1,1,2,0,2),
cellsize: cellsizewolfram,
gutter: gutterwolfram,
gutter-column: guttercolorwolfram,
stroke_map: (black+0.75pt, black+0.75pt, white+0.75pt),
color_map: (white, black)
)
#let rule30_02 = (
caption: "0",
caption_alignment: btt,
hspace: hspacewolfram,
transition: " ",
transition_icon: "",
columns: 3,
data:(1,1,0,2,0,2),
cellsize: cellsizewolfram,
gutter: gutterwolfram,
gutter-column: guttercolorwolfram,
stroke_map: (black+0.75pt, black+0.75pt, white+0.75pt),
color_map: (white, black)
)
#let rule30_03 = (
caption: "0",
caption_alignment: btt,
hspace: hspacewolfram,
transition: " ",
transition_icon: "",
columns: 3,
data:(1,0,1,2,0,2),
cellsize: cellsizewolfram,
gutter: gutterwolfram,
gutter-column: guttercolorwolfram,
stroke_map: (black+0.75pt, black+0.75pt, white+0.75pt),
color_map: (white, black)
)
#let rule30_04 = (
caption: "1",
caption_alignment: btt,
hspace: hspacewolfram,
transition: " ",
transition_icon: "",
columns: 3,
data:(1,0,0,2,1,2),
cellsize: cellsizewolfram,
gutter: gutterwolfram,
gutter-column: guttercolorwolfram,
stroke_map: (black+0.75pt, black+0.75pt, white+0.75pt),
color_map: (white, black)
)
#let rule30_05 = (
caption: "1",
caption_alignment: btt,
hspace: hspacewolfram,
transition: " ",
transition_icon: "",
columns: 3,
data:(0,1,1,2,1,2),
cellsize: cellsizewolfram,
gutter: gutterwolfram,
gutter-column: guttercolorwolfram,
stroke_map: (black+0.75pt, black+0.75pt, white+0.75pt),
color_map: (white, black)
)
#let rule30_06 = (
caption: "1",
caption_alignment: btt,
hspace: hspacewolfram,
transition: " ",
transition_icon: "",
columns: 3,
data:(0,1,0,2,1,2),
cellsize: cellsizewolfram,
gutter: gutterwolfram,
gutter-column: guttercolorwolfram,
stroke_map: (black+0.75pt, black+0.75pt, white+0.75pt),
color_map: (white, black)
)
#let rule30_07 = (
caption: "1",
caption_alignment: btt,
hspace: hspacewolfram,
transition: " ",
transition_icon: "",
columns: 3,
data:(0,0,1,2,1,2),
cellsize: cellsizewolfram,
gutter: gutterwolfram,
gutter-column: guttercolorwolfram,
stroke_map: (black+0.75pt, black+0.75pt, white+0.75pt),
color_map: (white, black)
)
#let rule30_08 = (
caption: "0",
caption_alignment: btt,
hspace: hspacewolfram,
transition: " ",
transition_icon: "",
columns: 3,
data:(0,0,0,2,0,2),
cellsize: cellsizewolfram,
gutter: gutterwolfram,
gutter-column: guttercolorwolfram,
stroke_map: (black+0.75pt, black+0.75pt, white+0.75pt),
color_map: (white, black)
)
#let rule30_15gens = (
caption: "",
caption_alignment: ttb,
transition: " ",
transition_icon: "",
columns: 33,
data:(0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,
0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,
0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,
0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,0,1,1,1,1,0,0,0,0,0,0,0,0,0,0,0,0,0,
0,0,0,0,0,0,0,0,0,0,0,0,1,1,0,0,1,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,
0,0,0,0,0,0,0,0,0,0,0,1,1,0,1,1,1,1,0,1,1,1,0,0,0,0,0,0,0,0,0,0,0,
0,0,0,0,0,0,0,0,0,0,1,1,0,0,1,0,0,0,0,1,0,0,1,0,0,0,0,0,0,0,0,0,0,
0,0,0,0,0,0,0,0,0,1,1,0,1,1,1,1,0,0,1,1,1,1,1,1,0,0,0,0,0,0,0,0,0,
0,0,0,0,0,0,0,0,1,1,0,0,1,0,0,0,1,1,1,0,0,0,0,0,1,0,0,0,0,0,0,0,0,
0,0,0,0,0,0,0,1,1,0,1,1,1,1,0,1,1,0,0,1,0,0,0,1,1,1,0,0,0,0,0,0,0,
0,0,0,0,0,0,1,1,0,0,1,0,0,0,0,1,0,1,1,1,1,0,1,1,0,0,1,0,0,0,0,0,0,
0,0,0,0,0,1,1,0,1,1,1,1,0,0,1,1,0,1,0,0,0,0,1,0,1,1,1,1,0,0,0,0,0,
0,0,0,0,1,1,0,0,1,0,0,0,1,1,1,0,0,1,1,0,0,1,1,0,1,0,0,0,1,0,0,0,0,
0,0,0,1,1,0,1,1,1,1,0,1,1,0,0,1,1,1,0,1,1,1,0,0,1,1,0,1,1,1,0,0,0,),
cellsize: 10pt,
stroke_map: (0.2pt + black, 0.2pt + black)
)
#let game_of_life_01 = (
caption: "Estado inicial",
caption_alignment: ttb,
hspace: 15pt,
transition: " ",
columns: 6,
data: (0, 0, 0, 0, 0, 0,
0, 0, 0, 1, 0, 0,
0, 1, 0, 1, 0, 0,
0, 0, 1, 1, 0, 0,
0, 0, 0, 0, 0, 0),
)
#let game_of_life_02 = (
caption: "1ª Generación",
caption_alignment: ttb,
hspace: 15pt,
transition: " ",
columns: 6,
data: (0, 0, 0, 0, 0, 0,
0, 0, 1, 1, 0, 0,
0, 1, 0, 1, 0, 0,
0, 0, 1, 1, 0, 0,
0, 0, 0, 0, 0, 0),
)
#let game_of_life_03 = (
caption: "2ª Generación",
caption_alignment: ttb,
hspace: 15pt,
transition: " ",
columns: 6,
data: (0, 0, 0, 0, 0, 0,
0, 0, 0, 1, 0, 0,
0, 1, 1, 1, 0, 0,
0, 0, 1, 0, 0, 0,
0, 0, 0, 0, 0, 0),
)
#let lagnton_cellsize = 10pt;
#let lagnton_hspace = 5pt;
#let langton_ant_01 = (
caption: "1ª Generación",
caption_alignment: ttb,
hspace: lagnton_hspace,
transition: " ",
columns: 3,
cellsize: lagnton_cellsize,
color_map: (white, black, white, black),
data: (0, 0, 0,
0, 2, 0,
0, 0, 0),
rect_content: (none, none, none,
none, text(black)[⬆], none,
none, none, none),
)
#let langton_ant_02 = (
caption: "2ª Generación",
caption_alignment: ttb,
hspace: lagnton_hspace,
transition: " ",
columns: 3,
cellsize: lagnton_cellsize,
color_map: (white, black, white, black),
data: (0, 0, 0,
2, 1, 0,
0, 0, 0),
rect_content: (none, none, none,
text(black)[⬅], none, none,
none, none, none),
)
#let langton_ant_03 = (
caption: "3ª Generación",
caption_alignment: ttb,
hspace: lagnton_hspace,
transition: " ",
columns: 3,
cellsize: lagnton_cellsize,
color_map: (white, black, white, black),
data: (0, 0, 0,
1, 1, 0,
2, 0, 0),
rect_content: (none, none, none,
none, none, none,
text(black)[⬇], none, none),
)
#let langton_ant_04 = (
caption: "4ª Generación",
caption_alignment: ttb,
hspace: lagnton_hspace,
transition: " ",
columns: 3,
cellsize: lagnton_cellsize,
color_map: (white, black, white, black),
data: (0, 0, 0,
1, 1, 0,
1, 2, 0),
rect_content: (none, none, none,
none, none, none,
none, text(black)[➡], none),
)
#let langton_ant_05 = (
caption: "5ª Generación",
caption_alignment: ttb,
hspace: lagnton_hspace,
transition: " ",
columns: 3,
cellsize: lagnton_cellsize,
color_map: (white, black, white, black),
data: (0, 0, 0,
1, 3, 0,
1, 1, 0),
rect_content: (none, none, none,
none, text(white)[⬆], none,
none, none, none),
)
#let langton_ant_06 = (
caption: "6ª Generación",
caption_alignment: ttb,
hspace: lagnton_hspace,
transition: " ",
columns: 3,
cellsize: lagnton_cellsize,
color_map: (white, black, white, black),
data: (0, 0, 0,
1, 0, 2,
1, 1, 0),
rect_content: (none, none, none,
none, none, text(black)[➡],
none, none, none),
)
#let langton_ant_final = (
caption: "",
caption_alignment: ttb,
hspace: lagnton_hspace,
transition: " ",
columns: 80,
cellsize: 3.25pt,
default_stroke: black+.05pt,
color_map: (white, black, red, black),
data: angton,
)
#let luaimpl_ex0 = (
caption: " ",
caption_alignment: ttb,
hspace: lagnton_hspace,
transition: " ",
columns: 4,
color_map: (white, green, black, blue, red),
cellsize: lagnton_cellsize,
data: (2, 0, 2, 0,
3, 1, 3, 1,
2, 0, 2, 0,
3, 1, 3, 1),
)
#let luaimpl_ex1 = (
caption: " ",
caption_alignment: ttb,
hspace: lagnton_hspace,
transition: " ",
columns: 4,
color_map: (white, green, purple, blue, red),
cellsize: lagnton_cellsize,
data: (1, 0, 1, 0,
0, 0, 0, 0,
1, 0, 1, 0,
0, 0, 0, 0),
)
#let luaimpl_ex2 = (
caption: " ",
caption_alignment: ttb,
hspace: lagnton_hspace,
transition: " ",
columns: 4,
color_map: (white, green, purple, blue, red),
cellsize: lagnton_cellsize,
data: (0, 2, 0, 2,
0, 0, 0, 0,
0, 2, 0, 2,
0, 0, 0, 0),
)
#let luaimpl_ex3 = (
caption: " ",
caption_alignment: ttb,
hspace: lagnton_hspace,
transition: " ",
columns: 4,
color_map: (white, green, purple, blue, red),
cellsize: lagnton_cellsize,
data: (0, 0, 0, 0,
3, 0, 3, 0,
0, 0, 0, 0,
3, 0, 3, 0),
)
#let luaimpl_ex4 = (
caption: " ",
caption_alignment: ttb,
hspace: lagnton_hspace,
transition: " ",
columns: 4,
color_map: (white, green, purple, blue, red),
cellsize: lagnton_cellsize,
data: (0, 0, 0, 0,
0, 4, 0, 4,
0, 0, 0, 0,
0, 4, 0, 4),
)
#let luaimpl_problem1_1_1 = (
caption: "",
caption_alignment: ttb,
transition: "",
columns: 4,
color_map: (white, black, blue, red),
cellsize: lagnton_cellsize,
data: (0, 0, 0, 0,
0, 0, 0, 0,
0, 0, 0, 0,
0, 0, 0, 0),
)
#let luaimpl_problem1_1_2 = (
caption: "",
caption_alignment: ttb,
transition: "",
columns: 4,
color_map: (white, black, blue, red),
cellsize: lagnton_cellsize,
data: (0, 0, 1, 0,
0, 0, 1, 0,
0, 0, 0, 0,
0, 0, 0, 0),
)
#let luaimpl_problem1_1_3 = (
caption: "",
caption_alignment: ttb,
transition: "",
columns: 4,
color_map: (white, black, blue, red),
cellsize: lagnton_cellsize,
data: (0, 0, 0, 0,
0, 0, 0, 0,
0, 0, 0, 0,
0, 0, 1, 0),
)
#let luaimpl_problem1_1_4 = (
caption: "",
caption_alignment: ttb,
transition: "",
columns: 4,
color_map: (white, black, blue, red),
cellsize: lagnton_cellsize,
data: (0, 0, 0, 0,
0, 0, 0, 0,
0, 0, 1, 0,
0, 0, 0, 0),
)
#let luaimpl_problem1_1_5 = (
caption: "",
caption_alignment: ttb,
transition: "",
columns: 4,
color_map: (white, black, blue, red),
cellsize: lagnton_cellsize,
data: (0, 0, 1, 0,
0, 0, 0, 0,
0, 0, 0, 0,
0, 0, 0, 0),
)
#let gameOfLifeStructuresCellSize = 10pt
#let game_of_life_block = (
caption: "Bloque",
caption_alignment: ttb,
hspace: 15pt,
transition: " ",
transition_icon: "",
columns: 4,
cellsize: gameOfLifeStructuresCellSize,
data: (0, 0, 0, 0,
0, 1, 1, 0,
0, 1, 1, 0,
0, 0, 0, 0)
)
#let game_of_life_hive = (
caption: "Colmena",
caption_alignment: ttb,
hspace: 15pt,
transition: " ",
transition_icon: "",
columns: 6,
cellsize: gameOfLifeStructuresCellSize,
data: (0, 0, 0, 0, 0, 0,
0, 0, 1, 1, 0, 0,
0, 1, 0, 0, 1, 0,
0, 0, 1, 1, 0, 0,
0, 0, 0, 0, 0, 0),
)
#let game_of_life_bread = (
caption: "Pan",
caption_alignment: ttb,
hspace: 15pt,
transition: " ",
transition_icon: "",
columns: 6,
cellsize: gameOfLifeStructuresCellSize,
data: (0, 0, 0, 0, 0, 0,
0, 0, 1, 1, 0, 0,
0, 1, 0, 0, 1, 0,
0, 0, 1, 0, 1, 0,
0, 0, 0, 1, 0, 0,
0, 0, 0, 0, 0, 0),
)
#let game_of_life_boat = (
caption: "Bote",
caption_alignment: ttb,
hspace: 15pt,
transition: " ",
transition_icon: "",
columns: 5,
cellsize: gameOfLifeStructuresCellSize,
data: (0, 0, 0, 0, 0,
0, 1, 1, 0, 0,
0, 1, 0, 1, 0,
0, 0, 1, 0, 0,
0, 0, 0, 0, 0,),
)
#let game_of_life_bath = (
caption: "Bañera",
caption_alignment: ttb,
hspace: 15pt,
transition: " ",
columns: 5,
cellsize: gameOfLifeStructuresCellSize,
data: (0, 0, 0, 0, 0,
0, 0, 1, 0, 0,
0, 1, 0, 1, 0,
0, 0, 1, 0, 0,
0, 0, 0, 0, 0,),
)
#let game_of_life_blinker1 = (
caption: "",
caption_alignment: ttb,
hspace: 15pt,
transition: " ",
columns: 5,
cellsize: gameOfLifeStructuresCellSize,
data: (0, 0, 0, 0, 0,
0, 0, 1, 0, 0,
0, 0, 1, 0, 0,
0, 0, 1, 0, 0,
0, 0, 0, 0, 0),
)
#let game_of_life_blinker2 = (
caption: "",
caption_alignment: ttb,
hspace: 15pt,
transition: " ",
columns: 5,
cellsize: gameOfLifeStructuresCellSize,
data: (0, 0, 0, 0, 0,
0, 0, 0, 0, 0,
0, 1, 1, 1, 0,
0, 0, 0, 0, 0,
0, 0, 0, 0, 0),
)
#let game_of_life_glider1 = (
caption: "",
caption_alignment: ttb,
hspace: 15pt,
transition: " ",
columns: 5,
cellsize: gameOfLifeStructuresCellSize,
data: (0, 0, 0, 0, 0,
0, 0, 1, 0, 0,
0, 0, 0, 1, 0,
0, 1, 1, 1, 0,
0, 0, 0, 0, 0),
)
#let game_of_life_glider2 = (
caption: "",
caption_alignment: ttb,
hspace: 15pt,
transition: " ",
columns: 5,
cellsize: gameOfLifeStructuresCellSize,
data: (0, 0, 0, 0, 0,
0, 0, 0, 0, 0,
0, 1, 0, 1, 0,
0, 0, 1, 1, 0,
0, 0, 1, 0, 0),
)
#let game_of_life_glider3 = (
caption: "",
caption_alignment: ttb,
hspace: 15pt,
transition: " ",
columns: 5,
cellsize: gameOfLifeStructuresCellSize,
data: (0, 0, 0, 0, 0,
0, 0, 0, 0, 0,
0, 0, 0, 1, 0,
0, 1, 0, 1, 0,
0, 0, 1, 1, 0),
)
#let game_of_life_glider4 = (
caption: "",
caption_alignment: ttb,
hspace: 15pt,
transition: " ",
columns: 5,
cellsize: gameOfLifeStructuresCellSize,
data: (0, 0, 0, 0, 0,
0, 0, 0, 0, 0,
0, 0, 1, 0, 0,
0, 0, 0, 1, 1,
0, 0, 1, 1, 0),
)
#let game_of_life_glider5 = (
caption: "",
caption_alignment: ttb,
hspace: 15pt,
transition: " ",
columns: 5,
cellsize: gameOfLifeStructuresCellSize,
data: (0, 0, 0, 0, 0,
0, 0, 0, 0, 0,
0, 0, 0, 1, 0,
0, 0, 0, 0, 1,
0, 0, 1, 1, 1),
)
#let sand_simulator_lr_01 = (
caption: "",
caption_alignment: ttb,
hspace: 15pt,
transition: " ",
columns: 5,
cellsize: gameOfLifeStructuresCellSize,
data: (0, 0, 1, 0, 0,
0, 0, 1, 0, 0,
0, 0, 1, 0, 0,
0, 0, 1, 0, 0,
0, 0, 0, 0, 0),
)
#let sand_simulator_lr_02 = (
caption: "",
caption_alignment: ttb,
hspace: 15pt,
transition: " ",
columns: 5,
cellsize: gameOfLifeStructuresCellSize,
data: (0, 0, 0, 0, 0,
0, 1, 0, 0, 0,
0, 1, 0, 0, 0,
0, 1, 0, 0, 0,
0, 0, 1, 0, 0),
)
#let sand_simulator_lr_03 = (
caption: "",
caption_alignment: ttb,
hspace: 15pt,
transition: " ",
columns: 5,
cellsize: gameOfLifeStructuresCellSize,
data: (0, 0, 0, 0, 0,
0, 0, 0, 0, 0,
1, 0, 0, 0, 0,
1, 0, 0, 0, 0,
0, 1, 1, 0, 0),
)
#let sand_simulator_lr_04 = (
caption: "",
caption_alignment: ttb,
hspace: 15pt,
transition: " ",
columns: 5,
cellsize: gameOfLifeStructuresCellSize,
data: (0, 0, 0, 0, 0,
0, 0, 0, 0, 0,
0, 0, 0, 0, 0,
0, 1, 0, 0, 0,
1, 1, 1, 0, 0),
)
#let sand_simulator_01 = (
caption: "",
caption_alignment: ttb,
hspace: 15pt,
transition: " ",
columns: 5,
cellsize: gameOfLifeStructuresCellSize,
data: (0, 0, 1, 0, 0,
0, 0, 1, 0, 0,
0, 0, 1, 0, 0,
0, 0, 1, 0, 0,
0, 0, 0, 0, 0),
)
#let sand_simulator_02 = (
caption: "",
caption_alignment: ttb,
hspace: 15pt,
transition: " ",
columns: 5,
cellsize: gameOfLifeStructuresCellSize,
data: (0, 0, 0, 0, 0,
0, 0, 1, 0, 0,
0, 0, 1, 0, 0,
0, 0, 1, 0, 0,
0, 0, 1, 0, 0),
)
#let sand_simulator_03 = (
caption: "",
caption_alignment: ttb,
hspace: 15pt,
transition: " ",
columns: 5,
cellsize: gameOfLifeStructuresCellSize,
data: (0, 0, 0, 0, 0,
0, 0, 0, 0, 0,
0, 0, 1, 0, 0,
0, 0, 1, 0, 0,
0, 1, 1, 0, 0),
)
#let sand_simulator_04 = (
caption: "",
caption_alignment: ttb,
hspace: 15pt,
transition: " ",
columns: 5,
cellsize: gameOfLifeStructuresCellSize,
data: (0, 0, 0, 0, 0,
0, 0, 0, 0, 0,
0, 0, 0, 0, 0,
0, 0, 1, 0, 0,
0, 1, 1, 1, 0),
)
#let gpu_01 = (
caption: "",
caption_alignment: ttb,
hspace: 15pt,
transition: " ",
columns: 7,
cellsize: gameOfLifeStructuresCellSize,
data: (0, 0, 1, 1, 1, 0, 0,
0, 0, 1, 1, 1, 0, 0,
0, 0, 1, 1, 1, 0, 0,
0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, ),
)
#let gpu_03 = (
caption: "",
caption_alignment: ttb,
hspace: 15pt,
transition: " ",
columns: 7,
cellsize: gameOfLifeStructuresCellSize,
data: (0, 0, 1, 1, 1, 0, 0,
0, 0, 1, 1, 1, 0, 0,
0, 0, 0,0,0, 0, 0,
0, 0, 1, 1, 1, 0, 0,
0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, ),
)
#let gpu_04 = (
caption: "",
caption_alignment: ttb,
hspace: 15pt,
transition: " ",
columns: 7,
cellsize: gameOfLifeStructuresCellSize,
data: (0, 0, 1, 1, 1, 0, 0,
0, 0, 0, 0, 0, 0, 0,
0, 0, 1, 1, 1, 0, 0,
0, 0, 0, 0, 0, 0, 0,
0, 0, 1, 1, 1, 0, 0,
0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, ),
)
#let gpu_02 = (
caption: "",
caption_alignment: ttb,
hspace: 15pt,
transition: " ",
columns: 3,
cellsize: gameOfLifeStructuresCellSize,
data: (1, 0, 1,
0, 0, 0,
0, 0, 0,),
) |
|
https://github.com/yingziyu-llt/blog | https://raw.githubusercontent.com/yingziyu-llt/blog/main/archived/Linear-Algebra-C2.typ | typst | #set document(title:"线性代数 有限维向量空间",date: datetime(year: 2024,month: 7,day: 11))
#set page(margin: (
top: 0cm,
bottom: 0cm,
x: 0cm,
))
#set text(size: 16pt)
== Linear Combinations and Span
*Notation*
List of vectors: a list made up of some vectors
*Definition*
A *linear combination* of a list of vectors $v_1,v_2,...,v_n in V$ is a vector of the form $alpha_1 v_1 + alpha_2 v_2 + ... + alpha_n v_n$
The set of all linear combinations of a list of vectors is called the space *spanned* by the list, written $"span"(a_1,a_2,...,a_n)$; it is also called the linear span
The space spanned by the empty list $()$ is defined to be ${0}$
Theorem 2.7 (the span is the smallest containing subspace): the space spanned by a list of vectors is the smallest subspace containing those vectors.
Proof idea: first show that the span is a subspace of $V$ (verify closure under the operations), then show that it contains all of the spanning vectors, and finally show that every subspace of $V$ containing these vectors has the span as a subset.
Proof:
*Proof*
Suppose $v_1,v_2,...,v_n$ is a list in $V$, and denote $S = "span"(v_1,v_2,...,v_n)$.
First we check that $S$ contains the additive identity: obviously $0v_1+0v_2+...+0v_n = 0$.
Next, closure under addition: for $a = a_1 v_1 + a_2 v_2 + ... + a_n v_n$ and $b = b_1 v_1 + b_2 v_2 + ... + b_n v_n$, we have $a + b = (a_1 + b_1) v_1 + (a_2 + b_2) v_2 + ... + (a_n + b_n) v_n in S$.
Furthermore, closure under scalar multiplication: for $a = a_1 v_1 + a_2 v_2 + ... + a_n v_n$, $lambda a = lambda a_1 v_1 + lambda a_2 v_2 + ... + lambda a_n v_n in S$.
Thus $S$ is a subspace of $V$.
To see that $S$ includes $v_1,v_2,dots,v_n$, simply take $a_i = 1$ and all other coefficients equal to $0$; the resulting linear combination is $v_i$.
Conversely, because subspaces are closed under scalar multiplication and addition, every subspace of $V$ containing each $v_j$ contains $"span"(v_1,v_2,...,v_n)$. Thus $S$ is the smallest subspace of $V$ containing all the vectors.
*Definition*
Spans: if $"span"(v_1,v_2,...,v_n) = V$, we say that $v_1,v_2,dots,v_n$ *spans* $V$.
Finite-dimensional vector space: a vector space that can be spanned by finitely many vectors is called a *finite-dimensional vector space*.
Polynomial: if a function $p$ from $FF$ to $FF$ can be written as $p (x) = a_0 + a_1 x + a_2 x^2 + ... + a_n x^n$ with $a_0,a_1,a_2,...,a_n in FF$, then $p$ is a *polynomial* function over $FF$, and $a_0,a_1,a_2,...,a_n$ are called the *coefficients* of the polynomial.
$P(FF)$ is the vector space formed by the set of all polynomials over $FF$. It is easy to see that $P(FF)$ is a subspace of $FF^FF$.
The degree of a polynomial is the exponent of its highest-order term. The degree of $0$ is defined to be $-infinity$.
$P_m (FF)$, for $m in ZZ^+$, denotes the set of all polynomials over $FF$ with degree at most $m$.
Infinite-dimensional vector space: a vector space that is not finite-dimensional.
*Example*
Q: Show that $P(FF)$ is an infinite-dimensional vector space.
A: Consider any list of polynomials in $P(FF)$. Let $m$ denote the maximum degree of the polynomials in the list. Then every polynomial in the span of the list has degree at most $m$, so $z^(m+1)$ is not in the span. Hence no list can span the space. QED
== Linear Independence
*Definition*
A list of vectors $v_1,v_2,dots,v_n$ such that $a_1 v_1 + a_2 v_2 + dots + a_n v_n = 0$ holds if and only if $a_1=a_2=dots=a_n=0$ is called *linearly independent*; otherwise it is called *linearly dependent*.
*Lemma*
If $v_1,v_2,dots,v_n$ is linearly dependent, then there must exist a $j in {1,2,dots,n}$ such that:
(a) $v_j in "span"(v_1,v_2,dots,v_(j-1))$
(b) deleting $v_j$ leaves a list whose span equals that of the original list
Proof:
$v_1,v_2,dots,v_n$ is linearly dependent, so there exist $a_1,a_2,dots,a_n in FF$, not all zero, such that $a_1 v_1 + a_2 v_2 + dots + a_n v_n = 0$.
Let $j$ be the largest element of ${1,2,dots,n}$ such that $a_j != 0$.
Then $a_1 v_1 + a_2 v_2 + dots + a_j v_j = 0$, hence $v_j = -a_1 / a_j v_1 - a_2 / a_j v_2 - dots - a_(j-1) / a_j v_(j-1)$, proving (a).
Now suppose $u in "span"(v_1,v_2,dots,v_n)$, so $u = c_1 v_1 + c_2 v_2 + dots + c_n v_n$. We use $-a_1 / a_j v_1 - a_2 / a_j v_2 - dots - a_(j-1) / a_j v_(j-1)$ to replace $v_j$ in this expression.
Then $u$ is expressed using only $v_1,dots,v_(j-1),v_(j+1),dots,v_n$, proving (b).
*Theorem*
The length of a linearly independent list is always at most the length of a list that spans the space.
== Bases
*Definition*
A *basis* of a space $V$ is a list of vectors that spans $V$ and is linearly independent.
*Theorem*
Criterion for a basis: $v_1,v_2,dots,v_n$ is a basis of $V$ $arrow.l.r.double$ for every $v in V$ there exist unique $a_1,a_2,dots,a_n in FF$ such that $a_1 v_1 + a_2 v_2 + dots + a_n v_n = v$
*Proof*
First suppose $v_1,v_2,dots,v_n$ is a basis of $V$, and let $v in V$. Since $v_1,v_2,dots,v_n$ is a basis, it spans the space, so $v = a_1 v_1 + a_2 v_2 + dots + a_n v_n$ for some scalars. Suppose also $v = c_1 v_1 + c_2 v_2 + dots + c_n v_n$. Then $(c_1 - a_1) v_1 + (c_2 - a_2) v_2 + dots + (c_n - a_n) v_n = 0$, and linear independence forces $c_1=a_1,c_2=a_2,dots,c_n=a_n$.
In the other direction, suppose the representation $v = c_1 v_1 + c_2 v_2 + dots + c_n v_n$ exists and is unique for every $v$; then in particular $v_1,v_2,dots,v_n$ spans the space.
To prove that the list is linearly independent, let $0 = a_1 v_1 + a_2 v_2 + dots + a_n v_n$. Since $0 = 0 v_1 + 0 v_2 + dots + 0 v_n$ as well, uniqueness of the representation of $0$ gives $a_1 = a_2 = dots = a_n = 0$.
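For instance (a standard example), in $FF^n$ the list $e_1, dots, e_n$, where $e_i$ has $1$ in slot $i$ and $0$ elsewhere, is a basis: every $(x_1, dots, x_n)$ decomposes uniquely as $x_1 e_1 + dots + x_n e_n$.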
*Theorems*
Every list of vectors that spans a space contains a basis of that space
Every finite-dimensional vector space has a basis
Every linearly independent list of vectors in a space can be extended to a basis
Every subspace of $V$ is one summand of a direct sum equal to $V$
== Dimension
*Theorem*
The length of a basis does not depend on the choice of basis
*Proof*
Take two bases $v_1,v_2,dots,v_n$ and $u_1,u_2,dots,u_m$. Each is linearly independent and each spans $V$, so $n <= m$ and $m <= n$. Hence $m = n$.
We thus find that the length of a basis of a vector space is an invariant that is meaningful for the space itself, which motivates the definition:
*Definition*
The *dimension* of a vector space $V$ is the length of a basis of $V$, written $dim V$.
*Theorem*
A subspace $U$ of a finite-dimensional vector space $V$ satisfies $dim U <= dim V$.
A linearly independent list of length $dim V$ is a basis of $V$; a list of length $dim V$ that spans $V$ is a basis of $V$.
Dimension sum formula: $dim (V+U)=dim V + dim U - dim (V sect U)$
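A quick worked instance (illustrative): in $RR^3$, take $V = "span"(e_1, e_2)$ and $U = "span"(e_2, e_3)$. Then $V + U = RR^3$ and $V sect U = "span"(e_2)$, and indeed $dim (V+U) = 2 + 2 - 1 = 3$. |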
|
|
https://github.com/Luxzi/doc-templates | https://raw.githubusercontent.com/Luxzi/doc-templates/main/rfc-typst-mocha/rfc_template.typ | typst | MIT License | #import "@preview/typpuccino:0.1.0": mocha
#import "@preview/codly:0.2.0": *
#let rfc(
project-name: str,
rfc-number: none,
rfc-name: str,
date: datetime,
authors: (content),
draft: bool,
doc,
) = {
set page("a4", fill: mocha.base)
set par(justify: true)
set text(font: "New Computer Modern", fill: mocha.text)
set raw(tab-size: 4, theme: "Catppuccin.tmTheme")
show raw: set text(font: "JetBrainsMono NF", weight: "bold")
show page: set page(footer: context [#smallcaps([#project-name: RFC #rfc-number --- #rfc-name]) #h(1fr) #counter(page).display("1")])
show: codly-init.with()
codly(
display-icon: false,
zebra-color: mocha.mantle,
fill: mocha.mantle,
stroke-width: 0pt,
display-name: false,
languages: (
rust: (name: "Rust", color: rgb("#CE412B")),
))
if draft {
[\ #text([⚠ *This RFC is a draft!* Do not use this as an implementation guide ⚠], fill: mocha.red) #linebreak()]
}
heading([#project-name: RFC #rfc-number --- #rfc-name], outlined: false)
text(
fill: mocha.surface2,
[#date.display("[year].[month].[day]")
#pad(
y: -3pt,
table(
columns: (auto, auto),
inset: 0pt,
stroke: 0pt,
column-gutter: 10pt,
[Authored by:],
[#authors],
))])
outline(title: [RFC Outline])
show heading: set heading(numbering: "1.1")
[#doc]
}
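// Usage sketch (illustrative, not taken from the examples folder): apply the
// template with a show rule; the project name, RFC title and author below are
// invented placeholders.
//
// #show: rfc.with(
//   project-name: "MyProject",
//   rfc-number: 1,
//   rfc-name: "Widget Protocol",
//   date: datetime(year: 2024, month: 1, day: 1),
//   authors: [Jane Doe],
//   draft: true,
// )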
#let wip = text([⚠ *This section is TBA!* Continue at your own risk ⚠], fill: mocha.red); |
https://github.com/polarkac/MTG-Stories | https://raw.githubusercontent.com/polarkac/MTG-Stories/master/stories/011%20-%20Journey%20into%20Nyx/006_Kruphix's%20Insight.typ | typst | #import "@local/mtgstory:0.2.0": conf
#show: doc => conf(
"Kruphix's Insight",
set_name: "Journey into Nyx",
story_date: datetime(day: 11, month: 06, year: 2014),
author: "<NAME>",
doc
)
Diantha took a deep breath, centered herself, and knocked.
There was a pause.
Sometimes, their guest did not wish to be disturbed at all. Other times, she would request, through the still-closed door, that food be left in the hall. And, occasionally, she would invite the acolytes in and engage them in conversation, as though she liked them, as though she wanted to be here.
"Come in," said the oracle.
Diantha opened the door.
The Oracle of Kruphix was a beautiful woman with long, black hair. She was looking out the window, as she often did, arms braced against the window frame. Two more arms, diaphanous, half-real, waved lazily at her sides. Kruphix marked his oracles well, so that none could mistake them, and none save fools and savages would bring them any harm.
The oracle turned to face her, and smiled.
#figure(image("006_Kruphix's Insight/01.jpg", width: 100%), caption: [Prophet of Kruphix | Art by Winona Nelson], supplement: none, numbering: none)
"Hello, Diantha."
Diantha set down the tray—mutton, fresh from the fire, and an assortment of grilled vegetables, fresh olives, and cheese. Kruphix had few worshipers and his temple in Meletis was small, but they spared nothing for the oracle of their god.
"Greetings, Oracle," said Diantha. "I hope you are well?"
"Well enough," said the oracle. "I was just thinking about the temple."
"The temple, my lady?"
The oracle smiled.
"Kruphix's temple—his true temple—where two great trees stand sentry at the ends of the earth."
"You..." Diantha hesitated. "You speak as if you've seen it, My Lady."
"I have. When I was in danger, Kruphix spirited me away to his temple. I spent time with him there during the upheaval."
Diantha bowed.
"That is a very great honor, to learn from Kruphix himself."
The oracle's smiled faded.
"Learned," she said. "Yes. Yes, I suppose I did."
Diantha almost bowed, almost turned, almost left the room.
"I envy you," she said, instead.
"Oh?" said the oracle.
"Yes, very much," said Diantha. "I am a priest, and as a priest I have faith—faith in Kruphix's wisdom, faith in his authority over the other gods." She looked down at her feet. "But I do not hear his voice, My Lady, and I have never walked in his presence. Is there..."
Diantha hesitated. She should not ask.
"Is there anything you can tell me? Of what you learned?"
The oracle turned to stare out the window again, eyes fixed on the distant horizon, and for a long moment she said nothing at all.
#v(0.35em)
#line(length: 100%, stroke: rgb(90%, 90%, 90%))
#v(0.35em)
From her perch within the great branches of Kruphix's Temple, Kydele saw it all happen. She braced herself against the living wood with her living hands, and against the starry night with the two misty, insubstantial arms that Kruphix had given her when she awakened as his oracle.
#figure(image("006_Kruphix's Insight/02.jpg", width: 100%), caption: [Temple of Mystery | Art by No<NAME>], supplement: none, numbering: none)
Kydele saw when Xenagos became a god, roaring across the pristine surface of Nyx like a wildfire. She saw the arrival of the human <NAME> and her leonin companion, Ajani. She saw them walk through Kruphix's body, a portal to Nyx, and step into the sky. She saw Elspeth use the blade called Godsend to cut Xenagos from the sky.
And she saw Heliod, who styled himself the greatest of the gods, snatch the weapon from Elspeth's hands—the blade he had consecrated for her, marking her as his champion.
#emph[You are too much like the satyr] , the god of the sun had said. #emph[Your eyes have seen things I can't fathom. And a champion cannot know more than her god. I am lord of the pantheon. I am the greatest of these.]
And then he murdered her, his own champion, with her own weapon.
The crisis was passed. The pantheon was secure, and Nyx was healing from Xenagos's violence.
Kydele felt dead inside.
Most oracles heard the voice of a god loud and clear in their heads, ringing like a bell when pronouncements came and falling otherwise silent. Poor Daxos, oracle of every god and no god, had heard all of them, all the time. A deafening chorus of divinity. But with Kruphix it was different. Kruphix spoke in her mind almost constantly, a whispered litany of images and events hovering just beyond her ability to hear, like the sun lurking below the horizon.
But since Heliod's act of betrayal, the voice of her god had fallen silent. Even here, in his temple overlooking the great waterfall that bordered Nyx, she heard nothing. She caught occasional glimpses of his shadowed form moving through the rooms of his temple, but he never spoke.
It was difficult to say how much time had passed, here at the edge of the world.
Kydele was strolling the temple grounds, lost in thought, when the familiar voice of her god echoed all around her.
#emph[You are troubled.]
It was not, uncharacteristically for the god of mysteries, a question.
Kydele turned to face the starry, four-armed outline of Kruphix on the horizon.
#figure(image("006_Kruphix's Insight/03.jpg", width: 100%), caption: [Kruphix's Insight | Art by Igor Kieryluk], supplement: none, numbering: none)
"It seems I'm not the only one," said Kydele.
Kruphix said nothing, but gestured for Kydele to walk with him. As he approached to walk beside her, he shrank, a bizarre inversion of perspective, until they were the same height.
"Was it a good thing, what happened?" asked Kydele. She folded her true arms gracefully in front of her, but her mist-arms waved uneasily. They were not entirely under her control.
#emph[Good that Xenagos became a god? ] asked Kruphix. #emph[Good that Elspeth struck him down? Good that she fell in turn?]
Kydele shrugged helplessly.
"Order is restored," she said. "All is right in Theros and Nyx. Xenagos menaces the world no more, and the Nyxborn again serve and guide mortals as they should."
Kruphix waited. He was always waiting.
"So why," she finished, "does it all feel so wrong?"
#emph[You speak of the greatest mystery of all. Of existence, and its purpose.]
"Xenagos's ascension raises troubling questions," said Kydele. "About Nyx, and the nature of the gods. The philosophers teach that the gods are ageless, unchanging. But if a god can be born and die in a space of weeks, then what does that say about the others?"
#emph[That] , said Kruphix, #emph[is no mystery. It is simply a question whose answer few people truly wish to hear.]
"I wish to hear it," she said immediately.
Kruphix regarded her for a long moment before he spoke, inscrutable.
#emph[The gods are beliefs that took form within the fabric of Nyx.]
"The gods inspire belief," said Kydele. "Surely the gods came first."
#emph[I am the oldest] , said Kruphix. #emph[But even I do not predate mortal belief. The first time a mortal of Theros looked up into the night sky and said "I wonder...," some part of me came into existence. I am the unknown, the unknowable. I am what sits beyond the far horizon.]
#figure(image("006_Kruphix's Insight/04.jpg", width: 100%), caption: [Kruphix, God of Horizons | Art by Daarken], supplement: none, numbering: none)
#emph[I watched as the others took shape. Death came next, ultimate and inescapable. Then sun and sea, forest and forge. After that, more abstract domains emerged—warfare, deception, insight, love.]
"Love?" said Kydele.
#emph[Indeed. And more, that mortals have forgotten. Or did you think Heliod was always the sun god?]
"How can there have been other gods? We would remember them."
#emph[If you remembered them] , said Kruphix, #emph[they would still exist. As soon as Heliod took his place in the pantheon, he ] was#emph[ the sun god—and always had been. Mortals have short memories in these matters. If they had longer ones, Nyx would tear itself apart with rivalries and contradictions.]
Four arms spread wide in a gesture of all-encompassing defeat.
#emph[Perhaps I was not even the first] , said Kruphix. #emph[How would I know?]
Kydele said nothing for a long time.
"So the gods are more fragile than they seem," she said. "And their existence depends on mortals believing that they are not?"
#emph[So it would seem.]
"Why?"
#emph[Why does time pass?] asked Kruphix. #emph[Why does water flow downhill?] Stars shifted within his cloak, the suggestion of a shrug. #emph[Some things simply are.]
"The philosophers in Meletis debate such things," said Kydele. "The cause of motion, the nature of time."
#emph[Then let them debate] , said Kruphix, with uncharacteristic hardness. #emph[If they learn the answers, perhaps the people will revere them instead.]
"Are you saying you don't know?"
Kruphix turned his hood toward her, and she felt a sudden rush of vertigo, the sense that she was gazing not over to a companion, but #emph[down] , into a deep abyss full of stars and blackness.
#emph[I am saying that if there is a reason—if there is some purpose behind the nature of the gods—then I do not ] wish#emph[ to know.]
"It is your duty to know," said Kydele. "That #emph[is] your purpose...isn't it?"
She had never contradicted Kruphix so directly. Many other gods would not tolerate such impudence, even from an oracle.
Kruphix only sighed, a sound like the night breeze.
#emph[I am the knower of all that is known on Theros, and much that is not ] , he said wearily.#emph[ But of late, I have learned things, about our world. About its safety. ] He paused. #emph[Does that surprise you, that I might still be capable of learning?]
"It does."
In fact, it did much more than surprise Kydele. It disturbed her. Kruphix was the god of mysteries. He knew the answer to every question, so that he might decide which of those answers mortals could safely know...or so she thought.
#emph[It should.]
He said nothing after that.
"What have you learned?" she asked.
#emph[Are you certain you wish to know?]
"I am."
If an oracle of Kruphix flinched from the truth, what did she have left?
#emph[Do not be] , said Kruphix. #emph[Knowledge is cruel. It will break your heart and test your allegiances. Are you certain you want this curse?]
#figure(image("006_Kruphix's Insight/05.jpg", width: 100%), caption: [Dictate of Kruphix | Art by Daarken], supplement: none, numbering: none)
Kydele took the time to consider. She knew things no other mortals dreamed of. She had gazed down into Nyx so often it had become commonplace, had watched Kruphix etch the names of the gods onto his great tree to bar them from Theros. Knowledge was power.
But Kruphix was the god of horizons, and some things were never meant to be known. No matter how far you travel, there is always another horizon.
Except here, at the last horizon.
"Yes," she said. "I am certain."
#emph[Very well] , said Kruphix.
He walked in silence, and she waited. At length, they came to the edge of the world itself, where the ocean roared into the infinite depths of Nyx and the grounds of Kruphix's temple extended like a promontory into a sea of night.
Kruphix stood and gazed out into Nyx.
#emph[Theros is one of many worlds. Did you know that?]
"I take it you're not speaking of Nyx, or the Underworld."
#emph[No. There are entire worlds out there, beyond Theros, beyond Nyx. Worlds you cannot see when you look up at the sky, places where the gods of Theros hold no sway. Worlds that you—and I—can never visit, with their own civilizations, their own histories, even their own physical laws.]
"Their own gods?"
Again, the sense of vertigo as Kruphix regarded her.
#emph[No] , he said. The word rang like a bell. #emph[Some, perhaps, have gods like us. But as a rule, no. We are...a local phenomenon.]
"And you've only recently learned this?"
Kruphix shook what passed for his head.
#emph[No. There are beings who can walk between these worlds. The first such to set foot on our world did so long ago. I am the knower of all things that are known in this world, and I learned all that she knew.]
Kydele thought for a moment about everything she had heard and seen from her perch in Kruphix's great trees.
"Elspeth was one of these...world-walkers, wasn't she?"
#emph[Astute] , said Kruphix. #emph[Yes. She was. But not only her. So was her companion Ajani, the leonin who carried her body out of Nyx. So was the triton—the merfolk—Kiora, who called herself Callaphe and earned the ire of Thassa.]
#emph[And so] , he continued, #emph[was Xenagos.]
#figure(image("006_Kruphix's Insight/06.jpg", width: 100%), caption: [Xenagos, the Reveler | Art by Jason Chan], supplement: none, numbering: none)
Kydele nodded.
"He traveled to other worlds where there were no gods...and realized he could become one?"
#emph[Close] , said Kruphix. #emph[He traveled to other worlds where there were no gods, and decided everyone on Theros should know the gods were a lie.]
"I don't think he succeeded," said Kydele.
#emph[He did not] , said Kruphix. #emph[People saw the chaos. They saw the destruction. They saw, in short, a usurper into a domain that had otherwise remained, by all appearances, indefinitely stable. Perhaps, if he had lived to take his place among the pantheon, people might remember that there had not always been a god of revelry, and come to wonder what that meant about the other gods.]
Kruphix shrugged again.
#emph[I suspect, however, that they would have learned to worship him, and forgotten his mortal origins. They would have come to believe that he had always been there, waiting for their veneration. That is the way of things. In the end, he threatened nothing.]
"Then he is not what's troubling you," said Kydele.
Kruphix laughed—actually laughed, a hollow, echoing sound.
#emph[You see a great deal, My Oracle.]
He folded his starry hands in front of him.
#emph[Yes, I am troubled, and not by Xenagos's ascension, nor by the existence of these world-walkers.]
#emph[I am troubled by what troubles them.]
There it was. The dark, ragged edge around which they'd been tiptoeing.
#emph[The merfolk Kiora] , said Kruphix, #emph[came here from a world whose existence was threatened by something called the Eldrazi. They are vast and terrible, the equal of any god. And they eat worlds, My Oracle. Strip the flesh from the bones of the earth and leave a dead husk, moving on to the next.]
#figure(image("006_Kruphix's Insight/07.jpg", width: 100%), caption: [It That Betrays | Art by <NAME>k], supplement: none, numbering: none)
#emph[The leonin Ajani has faced an immensely powerful foe, a fellow world-walker and a dragon. He is unfathomably ancient, even to me. He seeks infinite power and immortal life. His plots span worlds and centuries, and he will spare nothing and no one who stands in his way.]
#figure(image("006_Kruphix's Insight/08.jpg", width: 100%), caption: [Cruel Ultimatum | Art by <NAME>wood], supplement: none, numbering: none)
#emph[And the human Elspeth...she came here from a place called Phyrexia, an entire world of flayed skin and twisted metal, ruled over by vicious, monstrous beings who style themselves gods. It is an affront to nature, a dark parody of life that corrupts all it touches and touches everything in time. And it has already made its way from one world to others.]
#figure(image("006_Kruphix's Insight/09.jpg", width: 100%), caption: [Rout | Art by <NAME>], supplement: none, numbering: none)
Kruphix looked out into Nyx, night staring into night.
#emph[If any of these things come here, to our world] , he said, #emph[even the gods may be powerless to stop them. And all your prayers, all your pleas, will fall on the deaf ears of a silent sky as this world is rent asunder or remade or worse.]
One by one, the stars in Kruphix's cloak began to flicker and die, until only blackness remained.
#emph[That is what I fear, My Oracle. That is what troubles the mind of a god. Theros is a minnow swimming in a deep, still pond, heedless of the depths, not knowing that something bigger rises up to devour it in an instant.]
He faced her, four arms spread wide, a hole of pure darkness set against the starry light of Nyx.
#emph[So now you know. What will you do with this knowledge?]
#v(0.35em)
#line(length: 100%, stroke: rgb(90%, 90%, 90%))
#v(0.35em)
Diantha waited.
"No, Child," said the oracle. "There is nothing. Nothing at all."
She said nothing more, and Diantha took it for a dismissal. She turned away.
Behind her, the oracle stared out the window, past the city, past the horizon, as though gazing into an infinite distance.
|
|
https://github.com/peteole/relai_poster_template | https://raw.githubusercontent.com/peteole/relai_poster_template/main/Readme.md | markdown | # relAI poster template
This is a template for posters for the [Konrad Zuse School of Excellence in Reliable AI](https://zuseschoolrelai.de) in [Typst](https://typst.app).

For a real example, see the `examples` folder.
## Usage
1. Clone this repository
2. Install typst: <https://typst.app>
3. Optionally install the typst vscode extension
4. Edit the `poster.typ` file with your poster content
5. Compile the poster with `typst compile poster.typ`. The VSCode extension can do this automatically. A pdf file will be output
## Features
- Follows the relAI poster template, including all logos
- Automatically generates QR codes to sources given in the `references` parameter
- Landscape and portrait mode supported through the `flipped` parameter
|
|
https://github.com/Medushaa/typst-math-template- | https://raw.githubusercontent.com/Medushaa/typst-math-template-/main/main.typ | typst |
#import "template.typ": *
#show: template.with(
title: [MAT122 Assignment 1],
short_title: "MAT122 A1",
description: [
],
date: datetime(year: 2023, month: 10, day: 01),
authors: (
(
name: "<NAME>",
),
),
affiliations: (
(id: "1", name: "Brac University"),
),
bibliography_file: none,
paper_size: "a4",
cols: 1,
text_font: "XCharter",
code_font: "Cascadia Mono",
accent: black,
)
= #underline[Lecture: Measure Theory]
$ A(x-2)+B(y-3)+C(z-1) = 0 $
$ A(4-2)+B(-5-3)+C(3-1) = 0 $
$ underline(#h(30pt) A dot 0 + B dot 0 + C dot 1 = 0 #h(30pt)) $
$ mat(x-2, y-3, z-1; 2, -8, 2; 0, 0, 1) mat(A;B;C)=mat(0;0;0) $
Since $A$, $B$, $C$ take nontrivial values, the determinant of the matrix above must be 0:
$ mat(delim:"|", x-2, y-3, z-1; 2, -8, 2; 0, 0, 1) = 0 $
$ => -8(x-2) -2(y-3) + 0 = 0 $
$ => 4x+y=11 $
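A quick consistency check with the given point $(4, -5, 3)$: $4 dot 4 + (-5) = 11$, as required.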
#pagebreak()
#theorem("Hellloooo")[
$ "You" = "POsA" $
$ ["Proved"] $
]
|
|
https://github.com/Cheng0Xin/typst-libs | https://raw.githubusercontent.com/Cheng0Xin/typst-libs/master/note/note.typ | typst | /**
* Typst template for Noting
* Author: <NAME>
* Mail: <EMAIL>
*/
/**
* Set up counters
* */
#let cthm = counter("Theorem")
#let cdef = counter("Definition")
#let cprop = counter("Proposition")
/**
* Fonts
* */
#let section-font = "Latin Modern Roman"
#let body-font = "BlexMono Nerd Font Propo"
#let spec-font = "DejaVuSansMono Nerd Font Mono"
// #let chinese-font = "Adobe Heiti Std"
// #let chinese-font = "Source Han Serif SC"
#let chinese-font = "Inziu Iosevka SC"
/**
* bold text,
* italic text,
* code text
*/
#let bt(t) = text(weight: "bold")[#t]
#let it(t) = text(style: "italic")[#t]
#let tt(t) = text(rgb("#055D9F"), weight: 50, font: spec-font, tracking: 0pt)[#t]
#let blink(url, txt) = link(url)[#text(rgb("#055D9F"))[#txt]]
/**
* TODO list
*/
#let todo = text(14pt, red, weight: "bold")[TODO ......]
#let done = $checkmark$
#let wont = $times$
#let deadline(year, month, day) = text(10pt, red,
font: spec-font, weight: "bold")[
\[DEADLINE: #datetime(year: year, month: month, day: day).display()\]
]
/**
* Show code from file
*/
#let show-code-file(file, lang) = raw(file, block: true, lang: lang)
/**
* Theorem-like box;
 * Intended for reuse by the wrappers below; not recommended for direct use.
*/
#let thbox(name: none, color: gray, cate: "Theorem", refs: none, body) = [
#block(
width: 100%,
fill: color,
inset: (x: 5pt, y: 5pt),
radius: 5pt,
breakable: true
)[
#align(left)[
#align(left, text(weight: "bold")[
#cate
#if refs != none {
[(#tt(refs)) ]
}
#text(rgb("#D0104C"))[#counter(cate).display()]:
#text(weight: "regular", size: 9pt)[#name.]
])
// Body
#body
      // Create label
#figure([], kind: cate, supplement: cate)
#if refs != none { label(refs) }
#counter(cate).step()
]
]
]
#let theorem(name: none, refs: none, body) = thbox(
name: name,
color: luma(230),
cate: "Theorem",
refs: refs,
body
)
#let definition(name: none, refs: none, body) = thbox(
name: name,
color: rgb("#91B493"),
cate: "Definition",
refs: refs,
body
)
#let proposition(name: none, refs: none, body) = thbox(
name: name,
color: rgb("#E493B3"),
cate: "Proposition",
refs: refs,
body
)
#let rdef(l) = ref(l, supplement: text(fill: maroon)[Def.])
#let rthm(l) = ref(l, supplement: text(fill: maroon)[Thm.])
#let rprop(l) = ref(l, supplement: text(fill: maroon)[Prop.])
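/**
 * Usage sketch (illustrative, not part of the library):
 *   #theorem(name: "Triangle inequality", refs: "thm-tri")[
 *     $norm(a + b) <= norm(a) + norm(b)$
 *   ]
 * and reference it later with #rthm(<thm-tri>).
 */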
/**
* Showing link with hyperlink
*/
#let ref-link(l) = link(l, text(rgb("#D0104C"))[*#l*])
#let rref(key) = text(rgb("#D0104C"))[*#key*]
/**
* For showing some important facts.
* Dotted form is recommended.
*/
#let card(name, body) = block(fill: rgb("#EEA9A9"), inset: 10pt, width: 100%, stroke: black)[
#set enum(numbering: x => text(rgb("#1B813E"))[#sym.square.filled #x])
#align(left, text(14pt)[*#name*])
#body
]
/**
 * For showing quotes
*/
#let quota(body) = block(fill: gray, inset: 10pt, width: 100%, stroke: black)[
#it(body)
]
/**
* Commenting
*/
#let cmmt(msg) = text(blue, style: "italic")[
#msg
]
/**
* For showing paragraph
*/
#let parat(name) = align(left)[
#text(weight: "bold", fill: rgb("#7A003F"), size: 11pt)[
#sym.square #name
]
]
/**
* Heading, `show` rules
*/
#let report(info, body, title_bar: none) = {
// Set up counter
cthm.update(1)
cdef.update(1)
cprop.update(1)
set heading(numbering: "1.1.1")
set text(10pt, font: (body-font, chinese-font), fallback: true)
set par(justify: true)
set page(
paper: "us-letter",
margin: (top: 1cm, bottom: 1cm),
numbering: "1/1"
)
block(width: 100%,
fill: rgb("#EDEAE7"),
clip: true,
// stroke: black,
// radius: (
// top-left: 5pt,
// top-right: 5pt,
// bottom-left: 5pt,
// bottom-right: 5pt
// ),
inset: 10pt
)[
#align(left, text(14pt, font: section-font)[*#info.title*])
#align(right, text(9pt, font: section-font)[
*written by*
#if "author" in info [
#info.author
] else [
Cheng
] *on*
#if "date" in info [
#info.date.display()
] else [
#datetime.today().display()
]
])
#align(center+horizon, title_bar)
]
/**
* Put keywords
*/
if "keywords" in info [
#align(left, text(9pt, font: section-font)[
*Keywords:*
#info.keywords.join("; ")
])
]
/**
* Outline
*/
line(length: 100%)
outline()
line(length: 100%)
v(1.5cm)
/**
* Programming Language Syntax Highlight
*/
show raw.where(lang: "promela"): it => {
let keywords = (
"proctype", "chan", "of", "do", "od",
"typedef", "byte", "int", "inline",
"active", "bit", "bool", "short", "unsigned",
"pid", "mtype", "break", "skip", "else", "goto"
).join("|")
let functions = (
"printf",
).join("|")
let wrap_str(ks) = "\b(" + ks + ")\b"
show regex(wrap_str(keywords)): set text(blue)
show regex(wrap_str(functions)): set text(blue)
show regex("\"(.*?)\""): set text(green)
show regex("/\*(.*?)\*/"): set text(gray)
show regex("//(.*?)\n"): set text(gray)
block()[
#it.text
]
}
// Display inline code in a small box
// that retains the correct baseline.
show raw.where(block: false): box.with(
fill: luma(240),
inset: (x: 3pt, y: 0pt),
outset: (y: 3pt),
radius: 2pt,
)
// Display block code in a larger block
// with more padding.
show raw.where(block: true): block.with(
fill: luma(240),
inset: 10pt,
radius: 4pt,
width: 100%
)
// show raw.where(block: true): it => { set par(justify: false); grid(
// columns: (100%, 100%),
// column-gutter: -100%,
// block(width: 100%, inset: 1em, for (i, line) in it.text.split("\n").enumerate() {
// box(width: 0pt, align(right, str(i + 1) + h(2em)))
// hide(line)
// linebreak()
// }),
// block(radius: 1em, fill: luma(246), width: 100%, inset: 1em, it),
// )}
/**
* Emph and strong
*/
show emph: it => {
text(rgb("#42602D"), weight: "bold", it.body)
}
show strong: it => {
text(rgb("#8E354A"), weight: "bold", it.body)
}
/**
* Heading level 1 and level 2
*/
show heading.where(level: 1): it => [
#set align(center)
#set text(font: section-font, fallback: true)
#smallcaps(it.body)
]
show heading.where(level: 2): it => [
#set text(font: section-font, fallback: true)
#text(weight: "bold", it)
]
show heading.where(level: 3): it => [
#set text(font: section-font, fallback: true)
#text(weight: "semibold", it)
]
show heading.where(level: 4): it => [
#set text(font: section-font, fallback: true)
#text(weight: "semibold")[
#sym.square.filled #it.body
]
]
/**
* Main body
*/
body
}
|
|
https://github.com/nixon-voxell/apu_rmct | https://raw.githubusercontent.com/nixon-voxell/apu_rmct/main/introduction.typ | typst | // Global settings
#set page(paper: "a4")
#set par(
justify: true,
// first-line-indent: 16pt
)
#set text(
font: ("Times New Roman"),
lang: "en",
size: 12pt,
fallback: false,
hyphenate: false,
)
#set heading(numbering: "1.")
#show heading.where(level: 1) : it => block[
#text(size: 12pt)[#it]
]
#show heading.where(level: 2) : it => block[
#text(size: 12pt, style: "italic", weight: "regular")[#it]
]
// Cover page
#align(horizon)[
#align(center)[
#image("apu_logo.png", width: 200pt)
*INDIVIDUAL ASSIGNMENT*
*RESEARCH METHODS FOR COMPUTING AND TECHNOLOGY*
#table(
columns: (1fr, 2fr),
inset: 10pt,
align: horizon,
align(left)[*Student Name*], align(left)[<NAME>],
align(left)[*TP Number*], align(left)[TP058994],
align(left)[*Intake Code*], align(left)[APU2F2305CGD],
align(left)[*Module Code*], align(left)[CT09832RMCT],
align(left)[*Lecturer Name*], align(left)[Assoc. Prof. Ts. Dr. <NAME>ke],
align(left)[*Hand Out Date*], align(left)[7#super[th] November 2023],
align(left)[*Hand In Date*], align(left)[19#super[th] January 2024],
)
]
]
#pagebreak()
// ======================================
// Content start
// ======================================
// ======================================
// Content page start
// ======================================
#align(center)[
#text(size: 16pt)[*Exploring Deep Learning Approaches for Real-Time Interactive Character Animation*]
<NAME>
#link("mailto:<EMAIL>")
]
#show: rest => columns(2, rest)
*_Abstract_--- Deep learning, in particular, neural networks has proven to be capable of solving a wide variety of complex tasks. Multiple research on deep learning based approach for character animation has been proposed to utilize the enormous learning capabilities of neural networks for generating dynamic animation for real-time interactive applications like games. This research explores a wide variety of deep learning approaches towards interactive character animation. A total of 3 different methods (motion matching based, reinforcement learning, and pose generation) was investigated based on their strengths, limitations, and novel contributions. A comparison were also performed based on their evaluation results. Throughout the research, we found that the use of deep learning can drastically improve the quality of runtime animation in dynamic settings. Neural networks are capable of adapting to unseen data, filling in the gaps where needed, and even react to the physical environment accurately.*
*_Index Terms_*
Character Interactions, Locomotion, Neural Networks, Motion Matching
= Introduction
/*
1. Background/History
- Skinning
- Keyframes
- State machine
- IK
2. Types of deep learning approaches
- Physics based (learn the physics world while mimicking animation data)
- Non-physics based (learn purely from animation data)
3. Mini summary?
*/
Interactive character animations are typically carried out through skeletal motions of an articulated figure. This is achieved using a technique called skinning, which deforms the surface of the character based on bone transformations, particularly the position, orientation, and sometimes the scale of the bones @rumman2016state. Animation software like Blender allows animators to author animations using keyframes. Each keyframe stores a snapshot of a character pose, which consists of multiple bone transforms. When an animation is played at runtime, the software interpolates between these keyframes, producing a fluid motion.
Relying solely on manual animation authoring can be extremely inefficient. Motion capture is widely used during the animation authoring process: it generates animations by tracking and recording moving objects in the physical space @menolotto2020motion. The raw data from motion capture is then cleaned and refined by animators before it is used in production. In addition, inverse kinematics can be used to generate runtime animation layering, such as orienting the head towards a point of interest or positioning the hands correctly on an object @rose2001artist.
Runtime usage of animations is normally handled by some form of state machine. Developers are tasked with manually mapping a diverse set of animations to their target states. These states are then connected, usually in the form of a graph, allowing the character to react to different runtime scenarios by traversing the graph. This method does not adapt well to development changes: adding a new state might require significant modifications to the existing state graph, making it hard to maintain. To address this problem, motion matching was proposed by #cite(<buttner2015motion>, form: "prose"), which opens the possibility of using unstructured animation data. This method searches a large database for the animation sequence that best fits the current context, namely the current pose and trajectory of the character @clavet2016motion.
In recent years, deep learning techniques such as the Transformer neural network have proven capable of learning from large unstructured data and constructing patterns to perform complex human-level tasks such as language translation @vaswani2017attention. Deep learning, a subset of machine learning, enables developers to move beyond the traditional approach of solving everything manually. Instead, it learns from the provided data in a goal-centric manner, which maps nicely to the animation problem. In this paper, we examine the available techniques and provide some insights based on our findings.
= Problem Statement
/*
1. Traditional state machine approach is very manual. Tedious for artists to craft a well made animation.
2. Motion capture raw data is messy. Cleaning them up into a seamless loop cycle may take up alot of time.
3. Motion matching only search for existing data, unable to perform seamless transition if there is lacking of data. Fixing this issue results in using a large database which fills up the memory, and also takes up more time for animation lookup.
4. In between transitions of different animation sequences still relies on simple blending.
*/
Creating a decent-looking character animation system for complex behaviors and scenarios often requires very large, manually authored sets of animation sequences @harvey2020robust. Motion capture can be used to generate animations, and motion matching is a strong candidate for replacing the traditional data-driven graph traversal method.
However, these new techniques come with a cost. The use of motion capture introduces the need to clean up animations. Due to its heavy reliance on the animation dataset, motion matching is unable to perform seamless transitions if data is lacking. Fixing this requires a large animation database, which fills up memory during runtime and makes animation lookup more computationally expensive @holden2020learned. Additionally, in all the techniques above, in-between transitions of different animation sequences still rely heavily on simple blending. This means that if no sample data is available on how two different animation states blend, the resulting transition might look unnatural. Finally, these animation systems lack the capability to react to the physical world accurately, such as losing balance when tripping over an obstacle @peng2018deepmimic @peng2022ase.
= Research Aims
The aim of this research is to explore the potential of deep learning techniques for producing fluid character animation that can react naturally to dynamic and unpredictable factors such as terrain changes and user interactions.
= Research Objectives
+ Evaluate the strengths and weaknesses of different deep learning approaches in character animation.
+ Comparison of different deep learning techniques for character animation.
+ Exploring ways for incorporating these techniques into real-time application development.
= Research Questions
+ How does deep learning contribute to the enhancement of character animation in interactive environments?
+ What are the types of deep learning techniques for character animation?
+ What is the impact of deep learning in real-time interactive character animation industry?
= Research Significance
/*
1. Fluid and natural looking animation is crucial for the immersion of real-time interactive applications.
2. Performing such task can be very tedious and costly.
3. Deep learning offers a way to streamline these processes by learning from large unstructured animation dataset and "compresses" them for affordable runtime inferencing.
4. Because of the learning nature of neural networks, it can also generate animation for unseen circumstances.
5. Multiple deep learning approaches has been proposed to tackle this problem.
*/
Fluid and natural-looking animation is crucial for the immersion of real-time interactive applications, yet producing it can be very tedious and costly. Recent findings show that deep learning offers a way to streamline these processes by learning from large unstructured animation datasets and "compressing" them to fit a limited runtime inference budget @holden2020learned @starke2022deepphase @starke2020local @peng2022ase @peng2018deepmimic. Because of the learning nature of neural networks, they can also generate animation for unseen circumstances, solving the memory issue that motion matching poses @holden2020learned. Overcoming these issues will allow the interactive industry to be one step ahead in achieving a more immersive experience.
// Learned motion matching, proposed by #cite(<holden2020learned>, form: "prose"), aims to solve this memory problem by replacing the 3 key stages of motion matching - _Projection_, _Stepping_, and _Decompression_ with neural networks.
= Conclusion
Real-time interactive character animation contributes largely to the immersion of interactive applications such as games. Constructing a system that reacts realistically and naturally to dynamic environments is extremely challenging without incorporating machine learning components. Thanks to the adoption of neural networks and the abundance of GPU-accelerated computing in recent years, the animation industry has been able to harness the enormous learning capability of neural networks to revolutionize real-time interactive applications.
#set par(first-line-indent: 0pt)
// ======================================
// Bibliography start
// ======================================
#show heading.where(level: 1) : it => block[
#text(size: 12pt)[#it.body]
]
= References
#bibliography("citation.bib", title: none, full: true, style: "apa")
|
|
https://github.com/magicwenli/keyle | https://raw.githubusercontent.com/magicwenli/keyle/main/src/keyle.typ | typst | MIT License | #import "sym.typ": mac-key, biolinum-key
#let shadow-times = 6
/// Generate examples for the given keyboard rendering function.
///
/// - kbd (function): The keyboard rendering function.
/// -> content
#let gen-examples(kbd) = [
#kbd("Ctrl", "A") #h(1em) #kbd("Alt", "P", compact: true)
#kbd("Home") #kbd("End") #kbd("Ins") #kbd("Del")
]
/// Theme function to render keys in a standard style.
///
/// #example(```typst
/// #let kbd = keyle.config(theme: keyle.themes.standard)
/// #keyle.gen-examples(kbd)
/// ```)
///
/// - sym (string): The key symbol to render.
/// -> content
#let theme-func-standard(sym) = box({
let text-color = black
let bg-color = rgb("#eee")
let stroke-color = rgb("#555")
let cust-rect = rect.with(
inset: (x: 3pt),
stroke: stroke-color + 0.6pt,
radius: 2pt,
fill: bg-color,
)
let button = cust-rect(
text(fill: text-color, sym),
)
let shadow = cust-rect(
fill: stroke-color,
text(fill: bg-color, sym),
)
for n in range(shadow-times) {
place(dx: 0.2pt * n, dy: 0.2pt * n, shadow)
}
button
})
/// Theme function to render keys in a deep blue style.
///
/// #example(```typst
/// #let kbd = keyle.config(theme: keyle.themes.deep-blue)
/// #keyle.gen-examples(kbd)
/// ```)
///
/// - sym (string): The key symbol to render.
/// -> content
#let theme-func-deep-blue(sym) = box({
let text-color = white
let bg-color = rgb("#16456b")
let stroke-color = rgb("#4682b4")
let cust-rect = rect.with(
inset: (x: 3pt),
stroke: bg-color + 0.6pt,
radius: 2pt,
fill: stroke-color,
)
let button = cust-rect(
smallcaps(text(fill: text-color, sym)),
)
let shadow = cust-rect(fill: bg-color, smallcaps(text(fill: bg-color, sym)))
for n in range(shadow-times) {
place(dx: 0.2pt * n, dy: 0.2pt * n, shadow)
}
button
})
/// Theme function to render keys in a type writer style.
///
/// #example(```typst
/// #let kbd = keyle.config(theme: keyle.themes.type-writer)
/// #keyle.gen-examples(kbd)
/// ```)
///
/// - sym (string): The key symbol to render.
/// -> content
#let theme-func-type-writer(sym) = box({
let text-color = white
let bg-color = rgb("#333")
let stroke-color = rgb("#2b2b2b")
let cust-rect = rect.with(
inset: (x: 2pt),
stroke: bg-color,
fill: stroke-color,
radius: 50%,
)
let button = cust-rect(
smallcaps(text(fill: text-color, sym)),
)
let shadow = cust-rect(
outset: 2.2pt,
fill: white,
stroke: stroke-color + 1.2pt,
smallcaps(text(fill: bg-color, sym)),
)
box(
inset: 2pt,
{
place(shadow)
button
},
)
})
/// Theme function to render keys in a Linux Biolinum Keyboard style.
///
/// You need to have the font installed on your system.
///
/// #example(```typst
/// #let kbd = keyle.config(theme: keyle.themes.biolinum, delim: keyle.biolinum-key.delim_plus)
/// #keyle.gen-examples(kbd)
/// ```)
///
/// - sym (string): The key symbol to render.
/// -> content
#let theme-func-biolinum(sym) = text(
fill: black,
font: ("Linux Biolinum Keyboard"),
size: 1.4em,
sym,
)
#let themes = (
standard: theme-func-standard,
deep-blue: theme-func-deep-blue,
type-writer: theme-func-type-writer,
biolinum: theme-func-biolinum,
)
/// Config function to generate keyboard rendering helper function.
///
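/// A minimal usage sketch with the default-style standard theme (any theme
/// from `keyle.themes` works the same way):
///
/// #example(```typst
/// #let kbd = keyle.config(theme: keyle.themes.standard)
/// #kbd("Ctrl", "Shift", "K")
/// ```)
///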
/// - theme (function): The theme function to use.
/// - compact (bool): Whether to render keys in a compact format.
/// - delim (string): The delimiter to use when rendering keys in compact format.
/// -> function
#let config(
theme: themes.standard,
compact: false,
delim: "+",
) = (
(..keys, compact: compact, delim: delim) => {
if compact {
theme(keys.pos().join(delim))
} else {
if delim == biolinum-key.delim_plus or delim == biolinum-key.delim_minus {
keys.pos().map(k => [#theme(k)]).join(theme(delim))
} else {
context keys.pos().map(k => [#theme(k)]).join(
box(
height: measure(theme("A")).height,
inset: 2pt,
align(horizon, delim),
),
)
}
}
}
)
|
https://github.com/isaacholt100/isaacholt100.github.io | https://raw.githubusercontent.com/isaacholt100/isaacholt100.github.io/master/maths-notes/2-durham%3A-year-2/numerical-analysis/numerical-analysis.typ | typst | #import "../../template.typ": template
#show: template
= Floating-point arithmetic
- *Fixed point representation*: $ x = plus.minus (d_1 d_2 ... d_(k-1) . d_k ... d_n)_beta $
- *Floating-point representation*: $ x = (0.d_1...d_(k-1)) beta^(d_k...d_n - B) $ where $B$ is an *exponent bias*.
- If $d_1 != 0$ then the floating point system is *normalised* and each float has a unique representation.
- *binary64*: stored as $ s e_10...e_0 d_1...d_52 $ where $s$ is the *sign* ($0$ for positive, $1$ for negative), $e_10...e_0$ is the *exponent*, and $d_1...d_52$ is the *mantissa*. The bias is $1023$. The number represented is $ cases(
(-1)^s (1.d_1...d_52)_2 2^(e - 1023) "if" e != 0 "and" e != 2047,
(-1)^s (0.d_1...d_52)_2 2^(-1022) "if" e = 0
) $ where $e = (e_10...e_0)_2$. $e = 2047$ is used to store $"NaN", plus.minus infinity$. The first case ($e != 0$ and $e != 2047$) is a *normal* representation; the $e = 0$ case is a *subnormal* representation.
- Floating-point numbers have *finite precision*: exists $epsilon_M > 0$ such that $"fl"(x) = "fl"((1 + epsilon) x)$ for all $epsilon < epsilon_M$.
- Floating-point numbers have *finite range*: exists $m_"max"$ and $m_"min"$ such that $"fl"$ defined only when $m_"min" <= |x| <= m_"max"$.
- *Underflow*: where floating point calculation result is smaller than smallest representable float. Result is set to zero.
- *Overflow*: where floating point calculation result is larger than largest representable float. *Floating-point exception* is raised.
- *Machine epsilon $epsilon_M$*: difference between smallest representable number greater than $1$ and $1$. $epsilon_M = beta^(-k+1)$.
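- E.g. for binary64 ($beta = 2$, $k = 53$ significant bits, counting the implicit leading bit): $epsilon_M = 2^(-52) approx 2.22 dot.op 10^(-16)$.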
- $"fl"(x)$ maps real numbers to floats.
- *Chopping*: rounds towards zero. Given $x = (0.d_1...d_k d_(k + 1)...)_beta dot.op beta^e$, if the float has $k$ mantissa digits, then $ "fl"_"chop"(x) = (0.d_1...d_k) dot.op beta^e $
- *Rounding*: rounds to nearest. Given $x = (0.d_1...d_k d_(k + 1)...)_beta dot.op beta^e$, if the float has $k$ mantissa digits, then $ tilde("fl")_"round"(x) = cases(
(0.d_1...d_k)_beta dot.op beta^e & "if" rho < 1/2,
((0.d_1...d_k)_beta + beta^(-k)) dot.op beta^e & "if" rho >= 1/2
) $ where $rho = (0.d_(k + 1)...)$.
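- E.g. with $beta = 10$, $k = 3$: $"fl"_"chop"(0.8576) = 0.857$, while $rho = 0.6 >= 1/2$ gives $tilde("fl")_"round"(0.8576) = 0.858$.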
- *Relative rounding error*: $ epsilon_x = ("fl"(x) - x) / x <==> "fl"(x) = x(1 + epsilon_x) $
- $ |("fl"_"chop"(x) - x) / x| <= beta^(-k + 1), quad |(tilde("fl")_"round"(x) - x) / x| <= 1/2 beta^(-k + 1) $
- *Round-to-nearest half-to-even*: fairer rounding than regular rounding for discrete values. In the case of a tie, round to nearest even integer: $ "fl"_"round"(x) = cases(
(0.d_1...d_k)_beta dot.op beta^e & "if" rho < 1/2 "or" (rho = 1/2 "and" d_k "is even"),
((0.d_1...d_k)_beta + beta^(-k)) dot.op beta^e & "if" rho > 1/2 "or" (rho = 1/2 "and" d_k "is odd")
) $
- $x plus.circle y = "fl"("fl"(x) + "fl"(y))$ and similarly for $times.circle$, $minus.circle$, $div.circle$.
- Relative error in $x plus.minus y$ can be large: $ "fl"(x) plus.minus "fl"(y) - (x plus.minus y) = x(1 + epsilon_x) plus.minus y(1 + epsilon_y) - (x plus.minus y) = x epsilon_x plus.minus y epsilon_y $ so relative error is $ (x epsilon_x plus.minus y epsilon_y) / (x plus.minus y) $
- In general, $x plus.circle (y plus.circle z) != (x plus.circle y) plus.circle z$
- For some computations, can avoid round-off errors (usually caused by subtraction of numbers close in value) e.g. instead of $ x = (-b + sqrt(b^2 - 4a c)) / (2 a) $ compute $ x = (-b + sqrt(b^2 - 4a c)) / (2 a) dot.op (-b - sqrt(b^2 - 4a c)) / (-b - sqrt(b^2 - 4a c)) = (-2c) / (b + sqrt(b^2 - 4a c)) $
= Polynomial Interpolation
- $cal(P)_n$ is set of polynomials of degree $<= n$.
- $"conv"{x_0, ..., x_n}$ is smallest closed interval containing ${x_0, ..., x_n}$.
- *Taylor's theorem*: for function $f$, if for $t in cal(P)_n$, $t^((j))(x_0) = f^((j))(x_0)$ for $j in {0, ..., n}$ then $ f(x) - t(x) = (f^((n + 1))(xi)) / ((n + 1)!) (x - x_0)^(n + 1) $ for some $xi in "conv"{x_0, x}$ (*Lagrange form of remainder*).
- *Polynomial interpolation*: given nodes ${x_j}_(j = 0)^n$ and function $f$, there exists unique $p in cal(P)_n$ such that $p$ interpolates $f$: $p(x_j) = f(x_j)$ for $j in {0, ..., n}$.
- *Cauchy's theorem*: let $p in cal(P)_n$ interpolate $f$ at ${x_j}_(j = 0)^n$, then $ forall x in "conv"{x_j}, f(x) - p(x) = (f^((n + 1))(xi)) / ((n + 1)!) (x - x_0) dots.h.c (x - x_n) quad "for some" xi in "conv"{x_j} $
- *Chebyshev polynomials*: $ T_n(x) = cos(n cos^(-1)(x)), quad x in [-1, 1] $
- $T_(n + 1)(x) = 2x T_n(x) - T_(n - 1)(x)$.
- Roots of $T_n(x)$ are $x_j = cos(pi(j + 1/2) \/ n)$ for $j in {0, ..., n - 1}$. Local extrema at $y_j = cos(j pi \/ n)$ for $j in {0, ..., n}$.
- Let $omega_n(x) = (x - x_0) dots.h.c (x - x_n)$, ${x_j}_(j = 0)^n subset [-1, 1]$ (if ${x_j} subset.not [-1, 1]$ so interval is $[a, b]$, then we can map $x_j -> a + 1/2 (x_j + 1) (b - a)$). Then $sup_(x in [-1, 1]) |omega_n(x)|$ attains its min value iff ${x_j}$ are zeros of $T_(n + 1)(x)$. Also, $ 2^(-n) <= sup_(x in [-1, 1]) |omega_n(x)| < 2^(n + 1) $
- *Convergence theorem*: let $f in C^2([-1, 1])$, ${x_j}_(j = 0)^n$ be zeros of Chebyshev polynomial $T_(n + 1)(x)$ and $p_n in cal(P)_n$ interpolate $f$ at ${x_j}$. Then $ sup_(x in (-1, 1)) |f(x) - p_n(x)| -> 0 quad "as" n -> infinity $
- *Weierstrass' theorem*: let $f in C^0([a, b])$. $forall epsilon > 0$, exists polynomial $p$ such that $ sup_(x in (a, b)) |f(x) - p(x)| < epsilon $
- *Lagrange construction*: basis polynomials given by $ L_k(x) = product_(j != k) (x - x_j) / (x_k - x_j) $ satisfy $L_k(x_j) = delta_(j k)$. Then $ p(x) = sum_(k = 0)^n L_k(x) f(x_k) $ interpolates $f$ at ${x_j}$.
- *Note*: Lagrange construction not often used due to computational cost and as we have to recompute from scratch if ${x_j}$ is extended.
- *Divided difference operator*: $ [x_j] f & := f(x_j) \ [x_j, x_k] f & := ([x_j] f - [x_k] f) / (x_j - x_k), quad [x_k, x_k] f := lim_(y -> x_k) [x_k, y] = f'(x_k) \ [x_j, ..., x_k, y, z] f & := ([x_j, ..., x_k, y] f - [x_j, ..., x_k, z] f) / (y - z) $ These can be computed incrementally as new nodes are added.
- *Newton construction*: Interpolating polynomial $p$ is $ p(x) = [x_0] f & + (x - x_0) [x_0, x_1] f + (x - x_0)(x - x_1) [x_0, x_1, x_2] f \ & + dots.h.c + (x - x_0) dots.h.c (x - x_(n - 1)) [x_0, ..., x_n] f $
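- E.g. a quick check of the construction for data $f(0) = 1$, $f(1) = 2$, $f(2) = 5$: $[x_0, x_1] f = (1 - 2) \/ (0 - 1) = 1$, $[x_1, x_2] f = (2 - 5) \/ (1 - 2) = 3$, $[x_0, x_1, x_2] f = (3 - 1) \/ (2 - 0) = 1$, so $p(x) = 1 + x + x(x - 1) = x^2 + 1$, which indeed passes through all three points.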
- *Hermite construction*: for nodes ${x_j}_(j = 0)^n$, exists unique $p_(2n + 1) in cal(P)_(2n + 1)$ that interpolates $f$ and $f'$ at ${x_j}$. Can be found using Newton construction, using nodes $(x_0, x_0, x_1, x_1, ..., x_n, x_n)$. Generally, if $p'(x_k) = f'(x_k)$ is needed, include $x_k$ twice. If $p^((n))(x_k) = f^((n))(x_k)$ is needed, include $x_k$ $n + 1$ times.
- If $y_0, ..., y_k$ is permutation of $x_0, ..., x_k$ then $[y_0, ..., y_k] f = [x_0, ..., x_k] f$.
- Interpolating error is $ f(x) - p(x) = (x - x_0) dots.h.c (x - x_n) [x_0, ..., x_n, x] f $ which gives $ [x_0, ..., x_n, x] f = (f^((n + 1))(xi)) / ((n + 1)!) $
- *Range reduction*: when computing a function e.g. $f(x) = arctan(x)$, $f(-x) = -f(x)$ and $f(1 \/ x) = pi / 2 - f(x)$ so only need to compute for $x in [0, 1]$.
= Root finding
- *Intermediate value theorem*: if $f$ continuous on $[a, b]$ and $f(a) < c < f(b)$ then exists $x in (a, b)$ such that $f(x) = c$.
- *Bisection*: let $f in C^0([a_n, b_n])$, $f(a_n) f(b_n) < 0$. Then set $m_n = (a_n + b_n) \/ 2$ and $ (a_(n + 1), b_(n + 1)) = cases(
(m_n, b_n) "if" f(a_n) f(m_n) > 0,
(a_n, m_n) "if" f(b_n) f(m_n) > 0
) $ Then:
- $b_(n + 1) - a_(n + 1) = 1/2 (b_n - a_n)$.
- By intermediate value theorem, exists $p_n in (a_n, b_n)$ with $f(p_n) = 0$.
- $|p_n - m_n| <= 2^(-(n + 1)) (b_0 - a_0)$.
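- E.g. for $f(x) = x^2 - 2$ on $[1, 2]$: $m_0 = 1.5$, $f(1.5) = 0.25$, $f(2) f(1.5) > 0$ so $(a_1, b_1) = (1, 1.5)$; then $m_1 = 1.25$, $f(1.25) = -0.4375$, $f(1) f(1.25) > 0$ so $(a_2, b_2) = (1.25, 1.5)$; the midpoints converge to $sqrt(2)$ with $|p_n - m_n| <= 2^(-(n + 1))$.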
- *False position*: same as bisection except set $m_n$ as $x$ intercept of line from $(a_n, f(a_n))$ to $(b_n, f(b_n))$: $ m_n = b_n - f(b_n) / (f(b_n) - f(a_n)) (b_n - a_n) $
- Bisection and false position are *bracketing methods*. Always work but slow.
- *Fixed-point iteration*: rearrange $f(x_*) = 0$ to $x_* = g(x_*)$ then iterate $x_(n + 1) = g(x_n)$.
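- E.g. $f(x) = x - cos(x) = 0$ rearranges to $x = cos(x)$; iterating $x_(n + 1) = cos(x_n)$ from $x_0 = 1$ converges, since $|g'(x)| = |sin(x)| < 1$ near the fixed point.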
- $f$ is *Lipschitz continuous* if for some $L$, $ |f(x) - f(y)| <= L|x - y| $
- Space of Lipschitz functions on $X$ is $C^(0, 1)(X)$.
- Smallest such $L$ is *Lipschitz constant*.
- Every Lipschitz function is continuous.
- Lipschitz constant is bounded by derivative: $ sup_(x != y) |f(x) - f(y)| / |x - y| <= sup_x |f'(x)| $
- $f$ is *contraction* if Lipschitz constant $L < 1$.
- *Contraction mapping or Banach fixed point theorem*: if $g$ is a contraction and $g(X) subset X$ ($g$ maps $X$ to itself) then:
- Exists unique solution $x_* in X$ to $g(x) = x$ and
- The fixed point iteration method converges $x_n -> x_*$.
- *Local convergence theorem*: Let $g in C^1([a, b])$ have fixed point $x_* in (a, b)$ with $|g'(x_*)| < 1$. Then with $x_0$ sufficiently close to $x_*$, fixed point iteration method converges to $x_*$.
- If $g'(x_*) > 0$, $x_n -> x_*$ monotonically.
- If $g'(x_*) < 0$, $x_n - x_*$ alternates in sign.
- If $|g'(x_*)| > 1$, iteration method almost always diverges.
- $x_n -> x_*$ with *order at least $alpha > 1$* if $ lim_(n -> oo) |x_(n + 1) - x_*| / |x_n - x_*|^alpha = lambda < oo $ If $alpha = 1$, then $lambda < 1$ is required.
- *Exact order of convergence* of $x_n -> x_*$: $ alpha := sup{beta: lim_(n -> oo) |x_(n + 1) - x_*| / |x_n - x_*|^beta < oo} $ Limit must be $< 1$ for $alpha = 1$.
- Convergence is *superlinear* if $alpha > 1$, *linear* if $alpha = 1$ and $lambda < 1$, *sublinear* otherwise.
- If $g in C^2$, then with fixed point iteration, $ |x_(n + 1) - x_*| / |x_n - x_*| -> |g'(x_*)| "as" n -> oo $ so $x_n -> x_*$ superlinearly if $g'(x_*) = 0$ and linearly otherwise.
- If $g in C^N$, fixed point iteration converges with order $N > 1$ iff $ g'(x_*) = dots.h.c = g^((N - 1))(x_*) = 0, quad g^((N))(x_*) != 0 $
- *Newton-Raphson*: fixed point iteration with $g(x) = x - f(x) \/ f'(x)$ $ x_(n + 1) = x_n - f(x_n) / (f'(x_n)) $
- For Newton-Raphson, $g'(x_*) = 0$ so quadratic convergence.
- Can use Newton-Raphson to solve $1 \/ x - b = 0$: $ x_(n + 1) = x_n - (1 \/ x_n - b) / (-1 \/ x_n^2) = x_n (2 - b x_n) $
- *Newton-Raphson in $d$ dimensions*: $ underline(x)_(n + 1) = underline(x)_n - (D f)^(-1) (underline(x)_n) underline(f)(underline(x)_n) $ where $D f$ is *Jacobian*.
- *Secant method*: approximate $f'(x_n) approx (f(x_n) - f(x_(n - 1))) / (x_n - x_(n - 1))$ with Newton-Raphson: $ x_(n + 1) = x_n - (x_n - x_(n - 1)) / (f(x_n) - f(x_(n - 1))) f(x_n) $ If $f'(x_*) != 0$, order is $(1 + sqrt(5)) \/ 2$.
= Numerical differentiation
- *Taylor expansion*: $ f(x plus.minus h) = f(x) plus.minus h f'(x) + h^2 / (2!) f''(x) plus.minus h^3 / (3!) f'''(x) + dots.h.c $
- *Forward difference approximation*: $ f'(x) = (f(x + h) - f(x)) / h - h/2 f''(xi), quad xi in "conv"{x, x + h} $ with $h > 0$.
- *Backward difference approximation*: forward difference but with $h < 0$.
- *Centred difference approximation*: $ f'(x) = (f(x + h) - f(x - h)) / (2h) - h^2 / 12 (f'''(xi_-) + f'''(xi_+)), quad xi_(plus.minus) in [x - h, x + h] $
- *Richardson extrapolation*: for approximation of $R(x; 0)$ of the form $ R(x; h) = R^((1))(x; h) = R(x; 0) + a_1(x) h + a_2(x) h^2 + a_3(x) h^3 + dots.h.c $ we have $ R^((1))(x; h \/ 2) = R(x; 0) + a_1(x) h/2 + a_2(x) h^2/4 + a_3(x) h^3 / 8 + dots.h.c $ This gives *second order approximation*: $ R^((2))(x; h) = 2 R^((1))(x; h \/ 2) - R^((1))(x; h) = R(x; 0) - a_2(x) h^2/2 + dots.h.c $ Similarly, $ R^((3))(x; h) = (4 R^((2))(x; h \/ 2) - R^((2))(x; h)) / 3 = R(x; 0) + tilde(a)_3 (x) h^3 + dots.h.c $ is *third order approximation*. Generally, $ R^((n + 1))(x; h) = (2^n R^((n))(x; h \/ 2) - R^((n))(x; h)) / (2^n - 1) = R(x; 0) + O(h^(n + 1)) $
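- E.g. one extrapolation step applied to the forward difference $R^((1))(x; h) = (f(x + h) - f(x)) \/ h$ gives $R^((2))(x; h) = (4 f(x + h \/ 2) - f(x + h) - 3 f(x)) \/ h = f'(x) + O(h^2)$, a second-order one-sided approximation.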
= Linear systems
- $A$ *symmetric* if $A^T = A$.
- *Hermitian conjugate*: $(A^*)_(i j) = overline(A_(j i))$. $A$ *Hermitian* if $A^* = A$.
- $A$ *non-singular* iff $forall b in K^n$, exists solution $x in K^n$ to $A x = b$ ($K = RR$ or $CC$).
- If $A$ non-singular, exists exactly one solution $x$ to $A x = b$ and unique $A^(-1)$ such that $forall b in K^n$, $x = A^(-1) b$.
- $A$ non-singular iff $det(A) != 0$.
- $A$ *positive-definite* iff $x dot.op A x > 0$ $forall x != 0$.
- $A$ *positive-semidefinite* iff $x dot.op A x >= 0$ $forall x in K^n$.
- $L$ *lower-triangular* iff $L_(i j) = 0$ for $i < j$.
- $U$ *upper-triangular* iff $U_(i j) = 0$ for $i > j$.
- Can solve $L x = b$ by *forward substitution*: for $j = 1, ..., n$: $ x_j = (b_j - sum_(k = 1)^(j - 1) L_(j k) x_k) / (L_(j j)) $
- Can solve $U x = b$ by *backward substitution*: for $j = n, ..., 1$: $ x_j = (b_j - sum_(k = j + 1)^n U_(j k) x_k) / (U_(j j)) $
- If $A$ not upper/lower triangular, use *Gaussian elimination* to reduce $A$ to upper triangular $U$ using addition of multiple of row to another row. If leading element in current row is zero, swap with row below.
- *Gaussian elimination with row pivoting*: at $s$th stage of Gaussian elimination, if largest element in $s$th column is in row $j$, swap row $j$ and row $s$, then proceeed as usual. This gives more accurate results.
- For operation count, assume each arithmetic operation takes one *flop*.
- When asked about *order* of operation count, include *constant multiple* as well as highest power of $n$.
- *$L U$ decomposition*: write $A = L U$, then solve $L y = b$, then $U x = y$ with backward/forward substitution. Better when solving with multiple $b$.
- *Frobenius matrix of index $s$*: diagonal elements are $1$, other elements zero except for $s$th column below main diagonal.
- Any Frobenius matrix can be written $ F_(i j)^((s)) = delta_(i j) - f_i^((s)) e_j^((s)) $ where $e^((s))$ is $s$th unit vector, $f^((s)) = (0, ..., 0, f_(s + 1)^((s)), ..., f_n^((s)))$ or $ F^((s)) = I - f^((s)) times.circle e^((s)) $ where $(v times.circle w)_(i j) = v_i w_j$ is tensor product.
- Inverse of Frobenius matrix is Frobenius matrix of same index: $ G^((s)) = I + f^((s)) times.circle e^((s)) $
- $G^((1)) dots.h.c G^((s)) = I + sum_(r = 1)^s f^((r)) times.circle e^((r))$
- If $A$ can be transformed to upper triangular $U$ by Gaussian elimination without pivoting, then exists lower triangular $L$ such that $A = L U$. $L$ given by $ L_(i i) = 1, quad L_(i s) = A_(i s)^((s - 1)) \/ A_(s s)^((s - 1)) $ where $A^((s - 1))$ is matrix at $(s - 1)$th stage of Gaussian elimination ($A^((0)) = A$ is initial matrix).
- Any non-singular $A$ can be written as $P A = L U$ where $P$ is a *permutation (pivot) matrix* (each row and column has exactly one $1$ and all other elements are $0$).
- *Norm* of vector space $V$: map $norm(dot.op): V -> RR$ with:
- *Triangle inequality*: $norm(x + y) <= norm(x) + norm(y)$.
- *Linearity*: $norm(alpha x) = |a| norm(x)$.
- *Positivity*: $norm(x) >= 0$ and $norm(x) = 0 ==> x = 0$.
- *Seminorm* $|[x]|$: satisfies the norm axioms, except that $|[x]| = 0$ is allowed for non-zero vectors.
- *$l_p$ norm*: for $p >= 1$, $ norm(x)_p := (sum_(i = 1)^n |x_i|^p)^(1 \/ p) $
- *$l_oo$ norm*: $ norm(x)_oo := max_i |x_i| $
- Matrix *row-sum norm*: $ norm(A)_"row" := max_(i = 1, ..., n) sum_(j = 1)^n |A_(i j)| $
- Matrix *column-sum norm*: $ norm(A)_"col" := max_(j = 1, ..., n) sum_(i = 1)^n |A_(i j)| $
- *Frobenius norm*: $ norm(A)_"Fro" := (sum_(i, j = 1)^n |A_(i j)|^2)^(1 \/ 2) $
- For $n$ dimensional vector space $V$, $"Hom"(V)$ is vector space of $n times n$ matrices.
- Given norm $norm(dot.op)$ on $V$, *induced norm* on $"Hom"(V)$ is $ norm(A) := sup_(x != 0) norm(A x) / norm(x) = max_(norm(x) = 1) norm(A x) $
- Properties of induced norm:
- $norm(A x) <= norm(A) norm(x)$, $x in V$, $A in "Hom"(V)$.
- $norm(A B) <= norm(A) norm(B)$, $A, B in "Hom"(V)$.
- *Spectral radius* of matrix: $ rho(A) := max{|lambda|: lambda "eigenvalue of" A} $
- We have these equalities:
- $norm(A)_1 = norm(A)_"col"$.
- $norm(A)_2 = max{sqrt(|lambda|): lambda "eigenvalue of" A^T A} = rho(A^T A)^(1 \/ 2) = rho(A A^T)^(1 \/ 2) $
- $norm(A)_oo = norm(A)_"row"$.
- *Condition number* of $A$ with respect to norm $norm(dot.op)_*$: $ kappa_*(A) := norm(A^(-1))_* norm(A)_* $
- For $A (x + delta x) = b + delta b$, $ norm(delta x)_* / norm(x)_* <= kappa_*(A) norm(delta b)_* / norm(b)_* $
- If $norm(B) < 1$ for some submultiplicative matrix norm $norm(dot.op)$, $ B^k -> 0 quad "as" k -> oo $ Also, $ B^k -> 0 quad "as" k -> oo <==> rho(B) < 1 $
- *Richardson's method for linear systems*: $A x = b$ so $x = x + w(b - A x)$ for some $w$. So iterate $ x^((k + 1)) = x^((k)) + w(b - A x^((k))) $ *Error*: $ r^((k)) := x^((k)) - x$ satisfies $ r^((k + 1)) = (I - w A) r^((k)) ==> r^((k)) = (I - w A)^k r^((0)) $ So iteration converges iff $(I - w A)^k -> 0 <==> rho(I - w A) < 1$
- *Jacobi's method*: split $A$ into $A = D - E - F$, $D$ diagonal, $E$ strictly lower triangular, $F$ strictly upper triangular. Rewrite $A x = b$ as $D x = (E + F) x + b$, and iterate $ x^((k + 1)) = D^(-1) ((E + F) x^((k)) + b) $ The error satisfies $r^((k + 1)) = D^(-1) (E + F) r^((k))$ so iteration converges iff $(D^(-1) (E + F))^k -> 0$. Converges if $A$ *strictly diagonally dominant* ($|a_(i i)| > sum_(j != i) |a_(i j)|$ for all $i$).
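- Componentwise, Jacobi iteration reads $x_i^((k + 1)) = (b_i - sum_(j != i) A_(i j) x_j^((k))) \/ A_(i i)$, so all components can be updated independently (in parallel).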
- *Gauss-Seidel method*: iterate $ (D - E) x^((k + 1)) = F x^((k)) + b $ The error satisfies $r^((k + 1)) = (D - E)^(-1) F r^((k))$. Converges if $A$ strictly diagonally dominant.
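- Componentwise, Gauss-Seidel reads $x_i^((k + 1)) = (b_i - sum_(j < i) A_(i j) x_j^((k + 1)) - sum_(j > i) A_(i j) x_j^((k))) \/ A_(i i)$: newly updated components are used immediately.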
= $L^2$ approximations and orthogonal polynomials
- *Inner product over vector space $V$*: map $(dot.op, dot.op): V times V -> CC$ satisfying:
- $(alpha u + beta u', v) = alpha (u, v) + beta (u', v)$.
- $(u, v) = overline((v\, u))$.
- $(u, u) >= 0$ and $(u, u) = 0 <==> u = 0$.
- For $V = C^0([a, b])$, define inner product $ (u, v)_(L^2_w (a, b)) := integral_a^b u(x) v(x) w(x) dif x $ where *weight function* $w(x) > 0$ except at finite set of points. $w(x) = 1$ if not specified.
- Inner product induces norm $norm(u) = sqrt((u, u))$.
- Let $V$ inner product space, $X$ linear subspace of $V$. Then the $tilde(p) in X$ that minimises $ E(p) = norm(f - p)^2 $ satisfies $ forall p in X, (f - tilde(p), p) = 0 <==> (f, p) = (p, tilde(p)) <==> (f, phi_k) = (tilde(p), phi_k) quad forall k $ where $X$ spanned by ${phi_k}$. So if $tilde(p) = tilde(p)_0 phi_0 + dots.h.c + tilde(p)_K phi_K$ then $ (f, phi_k) = sum_j (phi_j, phi_k) tilde(p)_j $
- *Gram-Schmidt*: to construct orthogonal basis ${hat(phi)_k}$ from non-orthogonal basis ${phi_k}$:
- $hat(phi)_0 = phi_0$.
- $hat(phi)_k = phi_k - sum_(j = 0)^(k - 1) ((phi_k, hat(phi)_j)) / (norm(hat(phi)_j)^2) hat(phi)_j$ where norm is respect to given inner product.
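- E.g. on $[-1, 1]$ with $w(x) = 1$ and $phi_k = x^k$: $hat(phi)_0 = 1$, $hat(phi)_1 = x$ (since $(x, 1) = 0$), and $hat(phi)_2 = x^2 - ((x^2, 1)) \/ norm(1)^2 = x^2 - 1/3$ (the Legendre polynomials, up to normalisation).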
- Properties of orthogonal basis:
- Unique up to normalisation: if ${phi_j^*}$ is another orthogonal basis, then $phi_j^* = c_j hat(phi)_j$ for some constant $c_j$.
- Has exactly $k$ simple roots in $(a, b)$.
- *Recurrence formula* to recursively calculate orthogonal basis: $ hat(phi)_(k + 1) = 1/norm(hat(phi)_k) x hat(phi)_k(x) - ((x hat(phi)_k, hat(phi)_k)) / (norm(hat(phi)_k)^3) hat(phi)_k(x) - norm(hat(phi)_k) / norm(hat(phi)_(k - 1)) hat(phi)_(k - 1)(x) $
= Numerical integration
- Want to approximate $ I(f) := integral_a^b f(x) w(x) dif x $ with *quadrature formula*: $ Q_n(x) = sum_(k = 0)^n hat(sigma)_k f(x_k) $ for *nodes* ${x_k}$ and *coefficients* ${hat(sigma)_k}$.
- $Q_n$ has *degree of exactness* $r$ if $Q_n(x^j) = I(x^j)$ for all $j <= r$, and $Q_n(x^(r + 1)) != I(x^(r + 1))$.
- By linearity, if $Q_n$ has degree of exactness $r$, then $Q_n(p) = I(p)$ for all $p in P_r$.
- *Interpolatory quadrature*: given nodes ${x_k}$, find $p$ that interpolates $f$ at nodes, $f(x_k) = p(x_k)$, and integrate $p$. E.g. with Lagrange interpolation, $ I_n(f) := integral_a^b p(x) dif x = sum_(k = 0)^n f(x_k) integral_a^b L_k(x) dif x $ Let $t = (x - a) \/ (b - a)$, then $ integral_a^b L_k(x) dif x = (b - a) integral_0^1 product_(l != k) (t - t_l) / (t_k - t_l) dif t =: (b - a) sigma_k $ so $ I_n(f) = (b - a) sum_(k = 0)^n sigma_k f(x_k) $
- Degree of exactness of $I_n$ is $n$.
- *Newton-Cotes* formula: interpolatory quadrature with equidistant nodes.
- *Closed Newton-Cotes* formula: Newton-Cotes with $x_0 = a$ and $x_n = b$, so $t_k = k/n$.
- If nodes symmetric, $t_(n - k) = 1 - t_k$ then $sigma_(n - k) = sigma_k$.
- *Rectangle method*: $ I_0(f) = (b - a) f((a + b) / 2) $
- If $p$ interpolates $f$ at ${x_k} subset [a, b]$ then for all $x in [a, b]$, $ f(x) - p(x) = (omega_(n + 1)(x)) / ((n + 1)!) f^((n + 1))(xi) $ where $omega_(n + 1)(x) = (x - x_0) dots.h.c (x - x_n)$ and $xi in (a, b)$.
- Error bounded by $ |I(f) - I_n(f)| <= 1/((n + 1)!) max_(xi in [a, b]) |f^((n + 1))(xi)| integral_a^b |omega_(n + 1)(x)| dif x $
- *Composite quadrature*: divide $[a, b]$ into $m$ subintervals ${[x_(i - 1), x_i]}_(i = 1)^m$, each of length $h = (b - a)\/m$, apply interpolatory quadrature to each subinterval, then sum the results.
- *Trapezium rule*: use composite with closed Newton-Cotes formula with $n = 1$: $I_1(f) = (b - a) (f(a) + f(b)) / 2$ to give $ C_(1, m)(f) = (b - a) / m (1/2 f(x_0) + f(x_1) + dots.h.c + f(x_(m - 1)) + 1/2 f(x_m)) $
- *Simpson's $1/3$ rule*: use composite with closed Newton-Cotes formula with $n = 2$: $I_2(f) = (b - a) (1/6 f(a) + 2/3 f((a + b) / 2) + 1/6 f(b))$ to give $ C_(2, m)(f) = (b - a) / m (1/6 f(x_0) + 2/3 f(x_(1/2)) + 1/3 f(x_1) + dots.h.c + 1/3 f(x_(m - 1)) + 2/3 f(x_(m - 1/2)) + 1/6 f(x_m)) $
- To compute error bounds for composite, add individual error bounds for each of the individual quadratures.
- *Gaussian* interpolatory formula $ G_n = sum_(k = 0)^n rho_k f(x_k) $ obtains highest degree of exactness $2n + 1$ iff nodes ${x_k}$ chosen so that $hat(p)(x) = (x - x_0) dots.h.c (x - x_n)$ satisfies $ forall p in P_n, quad (hat(p), p) = 0 $ ${x_k}$ must be roots of $phi_(n + 1) in P_(n + 1)$ where ${phi_j}$ are orthogonal polynomials with respect to inner product $(dot.op, dot.op)_(a, b, w)$ Then coefficients given by $ rho_k = integral_a^b product_(l != k) (x - x_l) / (x_k - x_l) w(x) dif x $ where $w$ is weight function. |
|
https://github.com/kdog3682/2024-typst | https://raw.githubusercontent.com/kdog3682/2024-typst/main/src/canvas-utils.typ | typst | #import "styles.typ" as sx
#import "util.typ" as ux
#let o = (0,0)
#let p34 = (3, 4)
#import "@preview/cetz:0.2.0"
#import cetz.draw as draw
#set text(size: 20pt)
// Early childhood math materials.
//
// Instructions: cards are marked as either "show" or "answer".
// For "show" cards, visual overload needs to be avoided.
#let inline-canvas(content, baseline: 50%) = {
// Fall back to no baseline shift when none is given.
if baseline == none {
baseline = 0pt
}
// The horizontal inset keeps the canvas from touching surrounding text.
box(baseline: baseline, inset: (x: 10%), {
cetz.canvas({
content
})
})
}
#let block-stack2(n, baseline: 50%) = inline-canvas(baseline: baseline, {
let size = 1
for i in range(n) {
let start = (0, i * size)
let end = (size, (i + 1) * size)
draw.rect(start, end, fill: yellow)
}
})
// $#block-stack2(3) plus #block-stack2(3) = #block-stack2(8, baseline: 18.75%)$ // halve the 50% baseline to make the taller stack align
//$#block-stack(3) space + space #block-stack(3) space = space #block-stack(6)$
#let artboard(frames, ..) = cetz.canvas({
// TODO: merge in styles before rendering each frame.
// Ideas: accept style input parsed from a string or HTML-like markup
// (stringbuilder-style), precompute coordinates up front, and collect
// shared styles and shortcuts in one place.
})
//#type(json("abc.json").a)
//#artboard(1)
|
|
https://github.com/TypstApp-team/typst | https://raw.githubusercontent.com/TypstApp-team/typst/master/tests/typ/bugs/place-pagebreak.typ | typst | Apache License 2.0 | // Test placing on an already full page.
// It shouldn't result in a page break.
---
#set page(height: 40pt)
#block(height: 100%)
#place(bottom + right)[Hello world]
|
https://github.com/f7ed0/typst-template | https://raw.githubusercontent.com/f7ed0/typst-template/master/lib/PDCA.typ | typst | #let states = (
V : (color : color.green, text : "V", desc : "Result achieved"),
O : (color : color.yellow, text : "O", desc : "Result partially achieved"),
R : (color : color.red, text : "R", desc : "Result not achieved")
)
#let PDCA_el(col : color.white, desc : [], responsable : " ", date : "", state : states.R) = {
table.cell( inset: 0pt,
block(height: 44pt)[
#set text(size: 8pt)
#table(columns : (4fr,1fr,1fr),
table.cell(block(height: 38pt, align(left+top ,desc)),rowspan: 2, fill: col.lighten(30%), inset: 3pt),
table.cell(inset : 8pt,align(center + horizon,responsable), colspan: 2, ),
table.cell(inset : 8pt,align(center + horizon,date)),
table.cell(inset : 8pt,align(center + horizon,state.text), fill: state.color)
)
]
)
}
#let blank(count) = {
range(count).map(_ => table.cell(inset : 0pt, stroke: none,block(height: 44pt)));
}
#let PDCA(P : (), D : (), C : (), A : (),dx : 77% , dy : 0%) = {
place(dx:dx, dy:dy, block(stroke: black, width: 175pt, height: 100pt, inset: 8pt)[
#for state in states {
grid( columns: (30pt,1fr) )[
#block(align(center + horizon,state.at(1).text), fill : state.at(1).color, height: 20pt, width: 30pt)][#align(horizon,pad(left : 6pt,text(state.at(1).desc,size: 9pt)))]
}
])
let maxlen = calc.max(P.len(),C.len(),D.len(),A.len())
table( columns:(1fr,1fr,1fr,1fr), stroke: none,
table.cell(inset: 0pt, stroke: none,rowspan: maxlen+1,
table(columns: (100%),..blank(maxlen - P.len()), ..P,
table.cell(align(center,"P (PRÉVOIR)"), fill : gray),
)
),
table.cell(inset: 0pt,stroke: none, rowspan: maxlen+1,
table(columns: (100%),..blank(maxlen - D.len()), ..D,
table.cell(align(center,"D (PILOTER)"), fill : gray)
)
),
table.cell(inset: 0pt,stroke: none, rowspan: maxlen+1,
table(columns: (100%),..blank(maxlen - C.len()), ..C,
table.cell(align(center,"C (VERIFIER)"), fill : gray)
)
),
table.cell(inset: 0pt,stroke: none, rowspan: maxlen+1,
table(columns: (100%),..blank(maxlen - A.len()), ..A,
table.cell(align(center,"A (PÉRENISER)"), fill : gray)
)
)
)
}
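
// A minimal usage sketch (hypothetical content; each quadrant takes an
// array of cells built with `PDCA_el`, coloured via `states`):
//
//   #PDCA(
//     P: (
//       PDCA_el(
//         desc: [Define the measurement plan],
//         responsable: "AB",
//         date: "01/24",
//         state: states.V,
//       ),
//     ),
//     D: (), C: (), A: (),
//   )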
|
|
https://github.com/EpicEricEE/typst-based | https://raw.githubusercontent.com/EpicEricEE/typst-based/master/src/lib.typ | typst | MIT License | #import "base64.typ"
#import "base32.typ"
#import "base16.typ"
#let encode64 = base64.encode
#let decode64 = base64.decode
#let encode32 = base32.encode
#let decode32 = base32.decode
#let encode16 = base16.encode
#let decode16 = base16.decode
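
// A minimal usage sketch (assuming the encoders accept strings or bytes,
// as in the package's own examples):
//
//   #import "lib.typ": encode64, decode64
//   #encode64("Hi") // expected: "SGk="
//   #str(decode64("SGk=")) // expected: "Hi"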
|
https://github.com/knuesel/typst-minideck | https://raw.githubusercontent.com/knuesel/typst-minideck/main/minideck.typ | typst | MIT License | #import "themes/themes.typ"
#import "paper.typ": papers
// Counter for pauses and for automatic tracking of subslide number.
// First value: number of subslides so far referenced in current slide.
// Second value: number of pauses so far in current slide.
// Both values are kept in one state so that an update function can update the
// number of subslides based on the number of pauses, without requiring a
// context. This avoids problems with layout convergence.
#let _subslide-count = state("__minideck-subslide-count", (0, 0))
// Current subslide being generated for current slide
#let _subslide-step = state("__minideck-subslide-step", 0)
// Return a state update for `_subslide_count` ensuring that the subslide count
// (first counter value) is at least `n`.
#let update-subslide-count(n) = _subslide-count.update(((x, y)) => (calc.max(n, x), y))
// Return a state update for `_subslide-count` to increment the pause index
// (second counter value) and to ensure that the subslide count (first counter
// value) is at least the new pause index plus one.
#let update-by-pause() = _subslide-count.update(((x, y)) => (calc.max(x, y+2), y+1))
// If `handout` is `auto`, infer its value from command-line input
#let _is-handout(handout) = {
if handout == auto {
sys.inputs.at("handout", default: none) == "true"
} else {
handout
}
}
// Format `it` as content of a (sub)slide
#let _subslide-content(it) = {
pagebreak(weak: true)
it
}
// Show one subslide of the slide.
// The subslide counter starts at 0 for every subslide, so that its
// value can be used in the subslide to compare with `_subslide-step`.
#let _subslide(n, it) = {
_subslide-count.update((1,0))
_subslide-step.update(n)
_subslide-content({
// Revert page increment unless it's the first subslide for this slide
if n > 0 { counter(page).update(x => calc.max(0, x - 1)) }
it
})
}
// Hide content if current subslide step is smaller than pause index.
#let _pause(updater, hider, it) = {
let pause-index = _subslide-count.get().at(1)
if _subslide-step.get() < pause-index{ hider(it) } else { it }
}
// Increase pause counter and hide content if current `_subslide-step` is
// smaller. Use this function as `#show: pause` or `#show: pause.with(...)`.
// `updater` is a callback that returns a state update for `_subslide-count` to
// increment the pause index (second counter value) and to ensure that the
// subslide count (first counter value) is at least the pause index plus 1. This
// callback is normally `update-by-pause`.
// `hider` is the callback used to hide `it` when appropriate.
// If `handout` is `true`, dynamic features are disabled: all slide content is
// shown in a single subslide. If `auto`, the value is taken as `true` if
// `--input handout=true` is passed on the command line, `false` otherwise.
// If `opaque` is true, the result will already have `context` invoked, otherwise
// the caller is responsible for invoking `context` in a suitable scope.
#let pause(handout: auto, opaque: true, updater: update-by-pause, hider: hide, it) = {
if _is-handout(handout) {
return it
}
update-by-pause()
if opaque {
context _pause(updater, hider, it)
} else {
// return non-opaque content: caller must ensure context is available
_pause(updater, hider, it)
}
}
// Hide `it` on all given subslide indices and/or starting at `from`.
// Subslide indices start at 1.
#let _process-impl(updater, hider, indices, from, it) = {
// Convert zero-based subslide step to 1-based user-facing subslide index
let j = _subslide-step.get() + 1
let from-array = if from == none { () } else { (from,) }
if updater != none {
updater(calc.max(..indices, ..from-array))
}
let visible = (from != none and j >= from) or j in indices
if visible { it } else { hider(it) }
}
// Hide `it` on all given subslide indices and/or starting at `from`.
// Subslide indices start at 1.
// This is used by `uncover` (hider=hide) and `only` (hider=`it=>none`).
// `updater` is a callback that takes a number `n` and returns a state update
// for `_subslide-count` to ensure that the first counter value is at least `n`.
// This callback is normally `update-subslide-count`, but CeTZ needs one that
// returns a CeTZ element.
// `hider` is the callback used to hide `it` when appropriate.
// If `handout` is `true`, dynamic features are disabled: all slide content is
// shown in a single subslide. If `auto`, the value is taken as `true` if
// `--input handout=true` is passed on the command line, `false` otherwise.
// If `opaque` is true, the result will already have `context` invoked, otherwise
// the caller is responsible for invoking `context` in a suitable scope.
#let _process(handout, opaque, updater, hider, indices, from, it) = {
if _is-handout(handout) {
return it
}
if opaque {
context _process-impl(updater, hider, indices, from, it)
} else {
// return non-opaque content: caller must ensure context is available
_process-impl(updater, hider, indices, from, it)
}
}
// Uncover `it` on all given subslide indices and/or from given index.
// Subslide indices start at 1.
// See `_process` for the other parameters.
#let uncover(from: none,
handout: auto,
opaque: true,
updater: update-subslide-count,
hider: hide,
..indices, it) = _process(handout, opaque, updater, hider, indices.pos(), from, it)
// Include `it` on all given subslide indices and/or from given index.
// Subslide indices start at 1.
// See `_process` for the other parameters.
#let only(from: none,
handout: auto,
opaque: true,
updater: update-subslide-count,
hider: it => none,
..indices, it) = _process(handout, opaque, updater, hider, indices.pos(), from, it)
// Generate subslides with number of steps given explicitly
#let _slide-explicit(steps, it) = {
for i in range(0, steps) {
_subslide(i, it)
}
}
// Generate subslides with number of steps derived from the subslide counter.
// This requires an up-to-date subslide counter (see `slide`).
#let _slide-auto(it) = {
// Each slide is shown at least once
_subslide(0, it)
// After showing slide once, _subslide-count holds the number of subslides
context for i in range(1, _subslide-count.get().first()) {
_subslide(i, it)
}
}
// Make a new slide made of `steps` subslides. If steps is auto, the number of
// subslides is determined automatically by updating a state (this requires that
// `uncover` and `only` are configured with a valid updater callback, and that
// they are called from a place where the update can be inserted).
#let slide(handout: auto, steps: auto, it) = {
if _is-handout(handout) {
return _subslide-content(it)
}
if steps == auto {
_slide-auto(it)
} else {
_slide-explicit(steps, it)
}
}
// Calculate paper size from all parameters
#let paper-size(paper, landscape, width, height) = {
let size = papers.at(paper)
let (w, h) = (size.width*1mm, size.height*1mm)
if landscape and w < h {
(w, h) = (h, w)
}
(
width: if width == none { w } else { width },
height: if height == none { h } else { height },
)
}
// Return a dictionary of functions that implement the given configuration
// settings. For example use `(slide, uncover) = config(handout: true)` to
// define `slide` and `uncover` functions that work in handout mode.
//
// Named parameters:
//
// - paper: a string for one of the paper size names recognized by page.paper
// or one of the shorthands "16:9" or "4:3". Default: "4:3".
// - landscape: use the paper size in landscape orientation. Default: `true`
// - width: page width as an absolute length, takes precedence over `paper`
// - height: page height as an absolute length, takes precedence over `paper`
// - handout: when `true`, dynamic features are disabled: all slide content is
// shown in a single subslide. When set to `auto`, the value used is `true` if
// `--input handout=true` is passed on the command line, `false` otherwise.
// - theme: the theme to use, the default being `themes.simple`
// - cetz: if the CeTZ module is passed here, the returned dictionary will
// include `cetz-uncover` and `cetz-only`, which are versions of `uncover`
// and `only` configured to use cetz methods for hiding and state update.
// - fletcher: if the fletcher module is passed here, the returned dictionary
// will include `fletcher-uncover` and `fletcher-only`, which are versions
// of `uncover` and `only` that use `fletcher.hide` for hiding and that
// disable state update (so the number of slide steps must be given to `slide`
// explicitly).
//
// Functions configured for CeTZ and fletcher return non-opaque content, so the
// caller is responsible for invoking `context` in a suitable scope, typically
// as in `#context cetz.canvas({...})`.
#let config(
paper: "4:3",
landscape: true,
width: none,
height: none,
handout: auto,
theme: themes.simple,
cetz: none,
fletcher: none,
) = {
let slide = slide.with(handout: handout)
let page-size = paper-size(paper, landscape, width, height)
let theme-funcs = theme(slide, page-size: page-size)
(
pause: pause.with(handout: handout),
uncover: uncover.with(handout: handout),
only: only.with(handout: handout),
..theme-funcs,
)
if cetz != none {
let cetz-update(n) = cetz.draw.content((), update-subslide-count(n))
(
cetz-uncover: uncover.with(
handout: handout,
opaque: false,
updater: cetz-update,
hider: it => cetz.draw.hide(it, bounds: true),
),
cetz-only: only.with(
handout: handout,
opaque: false,
updater: cetz-update,
),
)
}
if fletcher != none {
(
fletcher-uncover: uncover.with(
handout: handout,
opaque: false,
updater: none,
hider: it => fletcher.hide(it, bounds: true),
),
fletcher-only: only.with(
handout: handout,
opaque: false,
updater: none,
),
)
}
}
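
// A minimal usage sketch (the exact set of returned functions depends on
// the chosen theme; `slide` and `uncover` are as in the example above):
//
//   #let (slide, uncover, pause) = config(paper: "16:9")
//   #slide[
//     Always visible.
//     #uncover(2)[Visible from the second subslide.]
//     #show: pause
//     Revealed after a pause.
//   ]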
|
https://github.com/rose-pine/typst | https://raw.githubusercontent.com/rose-pine/typst/main/example.typ | typst | MIT License | #import "lib.typ" : apply
#show: apply()
#set align(center)
= Example Typst Document
#set heading(numbering: "1.")
= Text and headings
#lorem(30)
== H2
#lorem(20)
=== H3
#lorem(15)
= Links and other references <links>
== Links
#link("https://typst.app")[
Typst
]
#link("https://typst.app")
== References
@links
== Footnotes
Some text#footnote[footnote test]
= Tables
#table(
columns: (1fr, auto, auto),
inset: 10pt,
align: horizon,
[*Equation*], [*Area*], [*Parameters*],
$ pi h (D^2 - d^2) / 4 $,
[
$h$: height \
$D$: outer radius \
$d$: inner radius
],
$ sqrt(2) / 12 a^3 $,
)
= Visuals
== Circles
#circle(radius: 25pt)
#circle[
#set align(center + horizon)
Automatically \
sized to fit.
]
== Ellipses
// Without content.
#ellipse(width: 35%, height: 30pt)
// With content.
#ellipse[
#set align(center)
Automatically sized \
to fit the content.
]
== Lines
#line(length: 100%)
#line(end: (50%, 10%))
#line(
length: 4cm,
)
== Paths
#path(
closed: true,
(0pt, 50pt),
(100%, 50pt),
((50%, 0pt), (40pt, 0pt)),
)
== Polygons
#polygon(
(20%, 0pt),
(60%, 0pt),
(80%, 2cm),
(0%, 2cm),
)
== Rectangles
#rect(width: 35%, height: 30pt)
#rect[
Automatically sized \
to fit the content.
]
== Squares
#square(size: 40pt)
#square[
Automatically \
sized to fit.
]
== Highlights
This is #highlight[important].
This #highlight[#link("https://typst.app")[Link]] is important too.
So is this reference #highlight[@links].
== Code
Python syntax highlighting example, copied from #link("https://github.com/sharkdp/bat")[bat] (MIT License \@ bat-developers).
```python
from os import getcwd
import numpy as np
from matplotlib.pyplot import plot as plt
from time import *
# COMMENT test
h2 = 4 # this is a comment
"""this is also a comment"""
# Import test
# class test
class Hello:
def __init__(self, x):
self.name = x
def selfprint(self):
print("hello my name is ", self.name)
def testprint(self):
print(1*2, 2+3, 4 % 5, 8-4, 9/4, 23//4)
# Decorators test
class Decorators:
@classmethod
def decoratorsTest(self):
pass
H1 = Hello("john")
H1.selfprint()
H1.testprint()
# list test
a = [1, 2, 3, 4, 5]
a.sort()
print(a[1:3])
print(a[:4])
print(a[2])
print(a[2:])
# dictionary test
# copied from w3schools example
myfamily = {
"child1": {
"name": "Emil",
"year": 2004
},
"child2": {
"name": "Tobias",
"year": 2007
},
"child3": {
"name": "Linus",
"year": 2011
}
}
# tuple test
testTuple = ("one", 2, "3")
print(testTuple)
print(np.random.randint(5, 45))
# string test
a = "hello world"
b = """good morning
hello world
bye"""
formattest = "teststring is ={}".format(5)
# lambda test
def x2(n):
lambda n: n/7
# if else ladder
if 1 > 2:
print("yes")
elif 4 > 5:
print("maybe")
else:
print("no")
# loops
i = 5
while(i > 0):
print(i)
i -= 1
for x in range(1, 20, 2):
print(x)
```
|
https://github.com/jamesrswift/ionio-illustrate | https://raw.githubusercontent.com/jamesrswift/ionio-illustrate/main/dist/0.2.0/src/defaults.typ | typst | MIT License | #let mass-spectrum-default-style = (
axes: (
tick: (length:-0.1),
frame: true,
label: (offset: 0.3)
),
title: (:),
callipers: (
line: (stroke: gray + 0.7pt),
content: (:)
),
callouts: (
stroke: black
),
peaks: (
stroke: black
),
) |