repo (stringlengths 26–115) | file (stringlengths 54–212) | language (stringclasses 2 values) | license (stringclasses 16 values) | content (stringlengths 19–1.07M)
---|---|---|---|---|
https://github.com/jgm/typst-hs | https://raw.githubusercontent.com/jgm/typst-hs/main/test/typ/bugs/flow-4-00.typ | typst | Other | #set page(height: 105pt)
#block(lorem(20))
|
https://github.com/rabotaem-incorporated/algebra-conspect-1course | https://raw.githubusercontent.com/rabotaem-incorporated/algebra-conspect-1course/master/sections/04-linear-algebra/14-linear-mapping-matrix.typ | typst | Other | #import "../../utils/core.typ": *
== The matrix of a linear map
#ticket[The universal property of a basis. The space of linear maps]
#pr[
Let $V$, $W$ be vector spaces over $K$, let $e_1, ..., e_n$ be a basis of $V$, and let $sq(w) in W$.
Then $exists! Aa in Hom(V, W): Aa e_i = w_i, i = 1, ..., n$.
]
#proof[
- #[
"Единственность": Пусть $Aa$ --- как в условии, $v = alpha_1 e_1 + ... + alpha_n e_n$. Тогда $Aa v = alpha_1 Aa e_1 + ... + alpha_n Aa e_n = alpha_1 w_1 + ... + alpha_n w_n. $
В силу единственности разложения в базисе, получается, что линейное отображение однозначно определяется своими значениями на базисных векторах.
]
- #[
"Существование":
Рассмотрим $alpha_1 w_1 + ... + alpha_n w_n = Aa(alpha_1 e_1 + ... + alpha_n e_n)$. Оно подходит.
]
]
#def[
Let $dim V = n$, $dim W = m$, let $E = (sq(e))$ be a basis of $V$, $F = (sq(f, m))$ a basis of $W$, and $Aa in Hom(V, W)$.
The _matrix of the linear map $Aa$ in the bases $E, F$_ is defined as:
$ [Aa]_(E, F) = A = ([Aa e_1]_F, [Aa e_2]_F, ..., [Aa e_n]_F) in M_(m,n)(K) $
]
#example[
- $V = W = {f in K[X] bar zws deg f <= 2}$, $E = (1, x, x^2) = F$, $Aa: f maps 5f - f'$. #h(1fr)
$ mat(5, -1, 0; 0, 5, -2; 0, 0, 5) = [Aa]_(E, F) $
]
#ticket[How the coordinates of a vector change under a linear map]
#pr[
Let $Aa: V --> W$, $v in V$, let $E$ and $F$ be bases, and suppose $A := [Aa]_(E, F)$ is known. Then $[Aa v]_F = A mul [v]_E$.
]
#proof[
$v = E mul [v]_E$, $F dot underbrace([Aa v]_F, in K^m) = Aa v = Aa E mul [v]_E = (F A)[v]_E = F(A mul [v]_E)$ \
$F$ is a basis $==> [Aa v]_F = A mul [v]_E$
]
#example[
Let $V = K[x]_(<=2)$, $W = K[x]_(<=1)$, $Aa: f maps f'$, $E = (x^2, x, 1)$, $F = (1, x)$. Then
$
A = [Aa]_(E, F) = mat(0, 1, 0; 2, 0, 0).
$
Let us compute the derivative of $3x^2 - 5x + 7$:
$
[Aa(3x^2 - 5x + 7)]_F = A mul [3x^2 - 5x + 7]_E = mat(0, 1, 0; 2, 0, 0) mul mat(3; -5; 7) = mat(-5; 6) = [6x - 5]_F.
$
Note also that $Aa E = F A$:
$
Aa E = (2x, 1, 0) = (1, x) mat(0, 1, 0; 2, 0, 0) = F A.
$
]
#def[
An _isomorphism_ between vector spaces $V$ and $W$ is a bijective homomorphism $phi in Hom(V, W)$.
Vector spaces are called _isomorphic_ if there is an isomorphism between them.
]
#pr[
Let $phi: V --> W$ be an isomorphism and $sq(e)$ a basis of $V$. Then $phi(e_1), ..., phi(e_n)$ is a basis of $W$.
]
#proof[
$ w in W ==> w = phi(v) = phi(alpha_1e_1 + ... + alpha_n e_n) = alpha_1phi(e_1) + ... + alpha_n phi(e_n) in Lin(phi(e_1), ..., phi(e_n)). $
$ alpha_1 phi(e_1) + ... + alpha_n phi(e_n) = phi(alpha_1 e_1 + ... + alpha_n e_n) = 0 ==> alpha_1 e_1 + ... + alpha_n e_n = 0 ==> alpha_1 = ... = alpha_n = 0 $
]
#notice[
To construct an isomorphism explicitly, fix a basis $E$ of the space $V.$ This gives an isomorphism $K^n limits(-->)^(phi_E) V$, and similarly $K^m limits(-->)^(phi_F) W.$ Any linear map $Aa$ then turns into a map given by a matrix once the inverse isomorphisms $phi_E^(-1)$ and $phi_F^(-1)$ are applied.
]
#pr[
Let $E$ be a basis of $V$ with $n = dim V$, and $F$ a basis of $W$ with $m = dim W$. Then the map $Hom(V, W) --> M_(m, n)(K)$, $Aa maps [Aa]_(E, F)$, is an isomorphism of vector spaces.
]
#proof-left-to-the-reader()
#follow[
$dim Hom(V, W) = (dim V) mul (dim W)$.
]
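#example[
For instance, the proposition identifies $Hom(K^3, K^2)$ with $M_(2, 3)(K)$, so $dim Hom(K^3, K^2) = 2 mul 3 = 6$.
]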
#ticket[The matrix of a composition of linear maps]
#pr[
Let $Aa in Hom(U, V)$, $Bb in Hom(V, W)$, and let $E, F, G$ be bases of $U, V, W$ respectively. Then $ [Bb Aa]_(E, G) = underbrace([Bb]_(F, G), B) underbrace([Aa]_(E,F), A) $
]
#proof[
$
cases(Aa E = F A, Bb F = G B) ==> Bb Aa E = (Bb F) A = G B A = G (B A) = G [Bb Aa]_(E, G) ==> [Bb Aa]_(E, G) = B A.
$
]
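#example[
As a quick sanity check, reuse the map $Aa: f maps 5f - f'$ from the first example with $E = F = G = (1, x, x^2)$ and $A = mat(5, -1, 0; 0, 5, -2; 0, 0, 5)$. The composition $Aa Aa$ acts as $f maps 25f - 10f' + f''$, and indeed
$ [Aa Aa]_(E, E) = mat(25, -10, 2; 0, 25, -20; 0, 0, 25) = A mul A. $
]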
#ticket[How the matrix of a linear map changes under a change of bases]
#pr[
Consider $Aa in Hom(V, W)$, where $E, E'$ are bases of $V$ and $F, F'$ are bases of $W$. Then $ [Aa]_(E', F') = M_(F' --> F) mul [Aa]_(E, F) mul M_(E --> E') $
]
#import "../../packages/commute.typ": *
#proof[
Write $A$ for the matrix $[Aa]_(E, F)$, and let a vector $v in V$ be given.
We can write it in the coordinates of $E$ and multiply by the matrix $A$, which yields the coordinates of $Aa v$ in the basis $F$, that is, $[Aa v]_F$.
#align(center)[#commutative-diagram(
node((0, 0), [$V$]),
node((0, 1), [$W$]),
node((1, 0), [$K^(dim V) (E)$]),
node((1, 1), [$K^(dim W) (F)$]),
arr((0, 0), (0, 1), [$Aa$]),
arr((0, 0), (1, 0), [$[v]_E$], "bij"),
arr((0, 1), (1, 1), [$[Aa v]_F$], label_pos: 1.5em, "bij"),
arr((1, 0), (1, 1), [$times A$]),
)]
What we want to understand, however, is what happens to a vector given in the basis $E'$ when the image is expressed in the basis $F'$.
From the coordinates of the vector in $E'$ we can recover the vector $v$ itself and then pass to its coordinates in $E$.
Similarly, the coordinates of $Aa v$ in the basis $F$ can be turned back into $Aa v$, and this vector can then be written in the coordinates of the basis $F'$.
#align(center)[#commutative-diagram(
node((1, 0), [$K^(dim V) (E')$]),
node((0, 1), [$V$]),
node((0, 2), [$W$]),
node((1, 1), [$K^(dim V) (E)$]),
node((1, 2), [$K^(dim W) (F)$]),
node((1, 3), [$K^(dim W) (F')$]),
arr((1, 0), (0, 1), [$[v]_(E')$], "bij", curve: 30deg, label_pos: -1.2em),
arr((0, 1), (0, 2), [$Aa$]),
arr((0, 1), (1, 1), [$[v]_E$], "bij"),
arr((0, 2), (1, 2), [$[Aa v]_F$], label_pos: 1.5em, "bij"),
arr((1, 1), (1, 2), [$A times$]),
arr((0, 2), (1, 3), [$[Aa v]_(F')$], "bij", curve: 30deg, label_pos: -1.5em),
)]
That is, we essentially want to compute the intimidating matrix
$
[Aa]_(E', F') = [F dot A dot [(E' [v]_(E'))]_E]_(F')
$
However, we keep passing back and forth between coordinates and the vector spaces themselves. How can we get rid of these passages? By collapsing each back-and-forth pair into a change-of-basis matrix. Indeed,
$
[E' [v]_(E')]_E = M_(E -> E') [v]_(E').
$
#align(center)[#commutative-diagram(
node((1, 0), $K^(dim V) (E')$),
node((0, 1), $V$),
node((0, 2), $W$),
node((1, 1), $K^(dim V) (E)$),
node((1, 2), $K^(dim W) (F)$),
node((1, 3), $K^(dim W) (F')$),
arr((1, 0), (0, 1), $[v]_(E')$, "bij", curve: 30deg, label_pos: -1.2em),
arr((0, 1), (0, 2), $Aa$),
arr((0, 1), (1, 1), $[v]_E$, "bij"),
arr((0, 2), (1, 2), $[Aa v]_F$, label_pos: 1.5em, "bij"),
arr((1, 1), (1, 2), $A times$),
arr((0, 2), (1, 3), $[Aa v]_(F')$, "bij", curve: 30deg, label_pos: -1.5em),
arr((1, 0), (1, 1), $M_(E -> E') times$, curve: -20deg, label_pos: 0.5em),
arr((1, 3), (1, 2), $M_(F' -> F) times$, curve: +20deg, label_pos: -0.5em),
arr((1, 2), (1, 3), $M_(F -> F') times$, curve: +20deg, label_pos: -1em),
)]
Reading off the bottom row of the diagram, we obtain exactly what we wanted:
$ [Aa]_(E', F') = M_(F' --> F) mul A mul M_(E --> E') $
]
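#example[
A small sanity check, again with $Aa: f maps f'$, $E = (x^2, x, 1)$, $F = (1, x)$, $A = mat(0, 1, 0; 2, 0, 0)$: take $E' = (1, x, x^2)$ and $F' = (x, 1)$. Then
$ M_(E --> E') = mat(0, 0, 1; 0, 1, 0; 1, 0, 0), quad M_(F' --> F) = mat(0, 1; 1, 0), $
$ [Aa]_(E', F') = M_(F' --> F) mul A mul M_(E --> E') = mat(0, 0, 2; 0, 1, 0), $
which matches the direct computation $Aa 1 = 0$, $Aa x = 1$, $Aa x^2 = 2x$, written in the basis $F' = (x, 1)$.
]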
#magic[
Let me also keep the legacy explanation that used to live here, even though it still looks like magic to me.
$ Aa: limits(V)^(E, E') --> limits(W)^(F, F') $
$ [Aa]_(E, F) = A, [Aa]_(E', F') = ? $
$ Aa = Id_W compose Aa compose Id_V $
$ [Aa]_(E', F') = [Id_W]_(F, F') mul A mul [Id_V]_(E', E) = M_(F' --> F) mul A mul M_(E --> E') $
$ [Aa]_(E', F') = M_(F' --> F) mul A mul M_(E --> E') $
#align(center)[#commutative-diagram(
node((0, 0), [$V$]),
node((0, 1), [$W$]),
node((1, 0), [$V$]),
node((1, 1), [$W$]),
arr((0, 0), (0, 1), [$Aa$]),
arr((1, 0), (0, 0), [$"id"_V$]),
arr((0, 1), (1, 1), [$"id"_W$]),
arr((1, 0), (1, 1), [$Aa$], "dashed"),
)]
]
#pr[
Let $V$, $W$ be finite-dimensional and $Aa in Hom(V, W)$. Then there exist bases $E$ of $V$ and $F$ of $W$ such that $[Aa]_(E, F)$ is a bordered identity matrix (an identity block padded with zero rows and columns).
]
#proof[
Let $E, F$ be arbitrary bases of $V, W$, and write $A = P D Q$ with $P, Q$ invertible and $D$ a bordered identity matrix (Gaussian elimination). Then
$ Aa E = F A ==> Aa E = F P D Q ==> Aa (E Q^(-1)) = (F P) D $
Hence in the bases $E Q^(-1), F P$ the matrix of this linear map is the bordered identity $D$.
]
|
https://github.com/jassielof/IT425-P1 | https://raw.githubusercontent.com/jassielof/IT425-P1/main/Documento de Respaldo de Análisis.typ | typst | MIT License | #import "template/apa7/lib.typ": *
#show: apa7.with(
title: [Revisión Crítica de Artículos Científicos],
custom-authors: [<NAME>],
custom-affiliations: [Facultad de Ingeniería, Universidad de Santa Cruz de la Sierra],
course: [IT425: Gestión de la Calidad Total],
instructor: [<NAME>],
due-date: datetime.today().display(),
language: "es",
region: "bo",
font-family: "Times New Roman",
paper-size: "a4",
toc: true,
)
La nomenclatura de investigación de artículos se rige de la siguiente forma (_Ficha de Lectura de Artículo Científico_):
- Título del Artículo
- Autor(es)
- Año de Publicación
- Fuente (Revista/Conferencia, Volumen, Número, Páginas)
#let ficha(
título: [],
autores: [],
año: [],
fuente: [],
caption: [],
) = {
[Se presenta la siguiente ficha de lectura del artículo científico:]
figure(
table(
columns: 2,
align: (right, left),
table.hline(),
table.header(
[Característica], [Descripción],
),
table.hline(),
[Nombre del Estudiante], [<NAME>],
[Fecha], datetime.today().display(),
[Título del Artículo], título,
[Autores], autores,
[Año de Publicación], año,
[Fuente], fuente,
table.hline(),
),
caption: caption,
)
}
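// Ejemplo ilustrativo (con valores hipotéticos, solo a modo de referencia) de cómo se invocaría la función `ficha` definida arriba:
// #ficha(
//   título: [Título del artículo],
//   autores: [Apellido, N.],
//   año: [2024],
//   fuente: [Nombre de la revista, 10(2), 1--15],
//   caption: [Ficha de Lectura de ejemplo],
// )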
== Restricciones de los Artículos Científicos
- No más de 3 en el mismo idioma
- Normas APA
- 5 artículos mínimo
- Idiomas permitidos:
- inglés
- castellano
- portugués
- de 2014--2024
- Bases de datos permitidas:
- SciELO
- Google Scholar
- DOAJ
- Responder a la pregunta: _¿Cuáles son los posibles beneficios y los problemas de implementación del enfoque clásico de "Total Quality Management" (TQM) en las organizaciones modernas y cómo estas limitaciones afectan su eficacia?_
- Artículos con los tópicos:
- Total Quality Management (TQM)
- Gestión de la Calidad Total
- Beneficios del TQM
- Problemas de implementación del TQM
- Limitaciones del TQM
- TQM en organizaciones modernas
- Desafíos del TQM
- Eficacia del TQM
- Calidad en las empresas
- Enfoques clásicos de calidad
== Contenido
Además del contenido de cada artículo a investigar, se debe responder lo siguiente:
1. Objetivo y Pregunta de Investigación
- *¿Cuál es el objetivo principal del artículo?* (Resumen de la principal cuestión o problema de investigación que aborda el artículo)
- *¿Cuáles son las hipótesis o preguntas de investigación formuladas por el autor?* (Indica las hipótesis o preguntas específicas que el artículo investiga, si aplica)
2. Metodología Utilizada
- *¿Cuál es el diseño de investigación?* (Cualitativo, Cuantitativo, Mixto, Estudio de Caso, Revisión de Literatura, etc.)
- *¿Cuáles fueron los métodos de recopilación de datos?* (Entrevistas, cuestionarios, experimentos, análisis documental, etc.)
- *¿Cuál es la muestra utilizada?* (Número de participantes, criterio de selección, etc.)
3. Resultados Principales
- *¿Cuáles son los resultados principales obtenidos?* (Resumen de los resultados más relevantes presentados por el autor)
- *¿Hay algún gráfico, tabla o figura que se destaque? ¿Por qué?* (Describe brevemente lo que se representa y su importancia)
4. Discusión y Conclusiones
- *¿Cómo interpretan los autores los resultados?* (¿Qué conclusiones sacan los autores de los resultados y cómo las justifican?)
- *¿Existen limitaciones mencionadas por los autores?* (Limitaciones metodológicas, de muestra, de contexto, etc.)
- *¿Cuáles son las implicaciones prácticas o futuras sugerencias de investigación?* (Aplicabilidad de los resultados y sugerencias para futuras investigaciones)
5. Reflexión Crítica
- *Puntos fuertes del artículo:* (¿Qué consideras que se hizo bien? Metodología, estructura, claridad, relevancia, etc.)
- *Puntos débiles o limitaciones que no fueron abordadas por los autores:* (¿Cuáles crees que son las principales debilidades o áreas que podrían mejorarse?)
- *¿Cómo contribuye este artículo a tu conocimiento en el área de gestión de la calidad?* (Reflexión personal sobre la utilidad del artículo para tu trayectoria académica o profesional)
- *¿Cómo se responde la pregunta de investigación en cuestión?* (Respuesta a la pregunta de investigación: _¿Cuáles son los posibles beneficios y los problemas de implementación del enfoque clásico de "Total Quality Management" (TQM) en las organizaciones modernas y cómo estas limitaciones afectan su eficacia?_)
6. Referencia Completa (Formato APA)
#let artículo(
título: [],
palabras-clave: [],
fuente: [],
// ficha de investigación
tabla-ficha: [],
// objetivos y preguntas
objetivo: [],
preguntas: [],
// metodología
diseño: [],
métodos: [],
muestra-usada: [],
// resultados
resultados: [],
gráficos: [],
// discusiones y conclusiones
interpretaciones: [],
limitaciones: [],
implicaciones-prácticas: [],
futuras-sugerencias: [],
// reflexiones
fuertes: [],
débiles: [],
contribuciones: [],
respuesta: [],
) = {
pagebreak()
heading(título, level: 1)
text(style: "italic")[Palabras clave de búsqueda: #palabras-clave [#fuente]]
heading([Ficha de Lectura de Artículo Científico], level: 2)
tabla-ficha
heading([Objetivos y Preguntas de Investigación], level: 2)
heading([Objetivo Principal del Artículo], level: 3)
objetivo
heading([Hipótesis o Preguntas de Investigación], level: 3)
preguntas
heading([Metodología], level: 2)
heading([Diseño de Investigación], level: 3)
diseño
heading([Métodos de Recopilación de Datos], level: 3)
métodos
heading([Muestra Usada], level: 3)
muestra-usada
heading([Resultados], level: 2)
heading([Resultados Principales], level: 3)
resultados
heading([Gráficos, Figuras o Tablas Destacadas y Justificación], level: 3)
gráficos
heading([Discusión y Conclusiones], level: 2)
heading([Interpretación de los Resultados], level: 3)
interpretaciones
heading([Limitaciones del Estudio], level: 3)
limitaciones
heading([Implicaciones Prácticas y Sugerencias Futuras], level: 3)
heading([Implicaciones Prácticas], level: 4)
implicaciones-prácticas
heading([Sugerencias para futuras investigaciones], level: 4)
futuras-sugerencias
heading([Reflexión Crítica], level: 2)
heading([Puntos Fuertes del Artículo], level: 3)
fuertes
heading([Puntos Débiles o Limitaciones no Abordadas], level: 3)
débiles
heading([Contribución al Conocimiento en Gestión de la Calidad], level: 3)
contribuciones
heading([Respuesta a la Pregunta de Investigación], level: 4)
respuesta
}
#artículo(
título: [Artículo 1: A systematic literature review of Total Quality Management (TQM) implementation in the organization~@Permana_Purba_Rizkiyah_2021],
palabras-clave: [Total Quality Management (TQM)],
fuente: [Google Academy],
tabla-ficha: ficha(
título: [A systematic literature review of Total Quality Management (TQM) implementation in the organization],
autores: cite(<Permana_Purba_Rizkiyah_2021>, form: "author"),
año: cite(<Permana_Purba_Rizkiyah_2021>, form: "year"),
fuente: [Google Academy (_Total Quality Management (TQM)_), International Journal of Advanced Science and Technology, 9(1), 25--36],
caption: [Ficha de Lectura del Artículo 1],
),
objetivo: [
El objetivo principal del artículo es evaluar la relevancia y efectividad del concepto de Gestión de Calidad Total (TQM) en la mejora del rendimiento empresarial y la satisfacción del cliente en el entorno empresarial actual. Para ello, los autores realizan una revisión sistemática de la literatura sobre la implementación del TQM en diversas industrias.
Entre los puntos clave, se encuentran:
- *Relevancia del TQM:* El artículo investiga la relevancia y efectividad del concepto de Gestión de Calidad Total (TQM) en la mejora del rendimiento empresarial y la satisfacción del cliente en el entorno empresarial actual.
- *Revisión Sistemática:* Realiza una revisión sistemática de la literatura sobre la implementación del TQM en diversas industrias para evaluar su impacto y beneficios.
- *Resultados:* Concluye que el TQM sigue siendo ampliamente utilizado y exitoso en mejorar la competitividad, el crecimiento empresarial y la moral de los empleados.
- *Desafíos y Adaptación:* Destaca la necesidad de que las organizaciones se adapten a un entorno de mercado dinámico y competitivo, enfocándose en la calidad del producto, la rapidez de entrega y el costo competitivo.
],
preguntas: [
El artículo realiza una revisión sistemática de la literatura sobre la implementación de la Gestión de Calidad Total (TQM) en las organizaciones. Aunque no se formulan hipótesis específicas, el estudio se centra en las siguientes preguntas de investigación:
- *Relevancia y efectividad de TQM:* ¿Sigue siendo relevante y efectiva la implementación de TQM para mejorar el rendimiento empresarial y la satisfacción del cliente?
- *Impacto en diferentes sectores:* ¿Cómo afecta la implementación de TQM a diferentes sectores industriales?
- *Factores de éxito y barreras:* ¿Cuáles son los factores clave para el éxito y las barreras en la implementación de TQM?
],
diseño: [
El diseño de investigación de este artículo es una *Revisión de Literatura Sistemática*. Se recopilaron y revisaron en detalle diversas literaturas de múltiples sectores industriales para observar la relevancia y efectividad del concepto de *Gestión de Calidad Total (TQM)* en la mejora del rendimiento empresarial y la satisfacción del cliente.
],
métodos: [
En el artículo se describe que el estudio se llevó a cabo en cuatro etapas principales:
- *Recopilación de artículos:* Se recopilaron artículos, revistas y algunos libros de texto predefinidos.
- *Revisión rápida:* Se realizó una revisión rápida para identificar los artículos apropiados.
- *Revisión exhaustiva:* Se llevó a cabo una revisión exhaustiva de los artículos seleccionados.
- *Clasificación:* Los artículos se clasificaron en cuatro grupos: aplicación del TQM, impacto del TQM, revisión del TQM y relación del TQM.
],
muestra-usada: [
La muestra utilizada en el estudio consistió en *50 artículos* revisados en profundidad. Estos artículos fueron seleccionados a partir de una búsqueda preliminar de *108 artículos* relacionados con palabras clave definidas. Los artículos fueron revisados rápidamente para identificar los más apropiados, resultando en una lista corta de *50 artículos* que fueron revisados y clasificados en detalle en cuatro grupos: *Aplicación del TQM, Impacto del TQM, Revisión del TQM y Relación del TQM*.
],
resultados: [
Los resultados principales del artículo incluyen:
- *Relevancia del TQM:* El concepto de *Total Quality Management (TQM)* sigue siendo ampliamente utilizado y efectivo en diversas industrias para mejorar la competitividad, el crecimiento empresarial y la sostenibilidad.
- *Impacto en el rendimiento:* La implementación del TQM ha demostrado mejorar significativamente el rendimiento organizacional, la satisfacción del cliente y la ventaja competitiva.
- *Factores clave:* La *cultura organizacional*, la *comunicación* y el *compromiso de la dirección* son factores críticos para el éxito del TQM.
- *Aplicaciones diversas:* El TQM se ha implementado con éxito en sectores como la manufactura, la educación, los servicios financieros y las pequeñas y medianas empresas (PYME).
],
gráficos: [
Entre las figuras destacadas, la Figura 3 (Marco de relación de TQM) es particularmente relevante, ya que ilustra cómo el TQM se relaciona con la ventaja competitiva y la satisfacción del cliente, mostrando el impacto del TQM en las organizaciones. Aunque, se pueden mencionar otras figuras y tablas importantes como:
- *Figura 3: Marco de relación de TQM:* Ilustra cómo el TQM se relaciona con la ventaja competitiva y la satisfacción del cliente. Es esencial porque muestra la relevancia y el impacto del TQM en las organizaciones.
- *Figura 1: Marco del estudio:* Representa las etapas del estudio, desde la recopilación de artículos hasta la clasificación en cuatro grupos. Es importante porque muestra el proceso metodológico seguido en la revisión sistemática.
- *Tabla 1: Definición de calidad:* Enumera diferentes definiciones de calidad, como la basada en el producto, el usuario y la manufactura. Es crucial para entender cómo se percibe y mide la calidad en diferentes contextos.
],
interpretaciones: [
Los autores del artículo concluyen que el concepto de *Gestión de Calidad Total (TQM)* sigue siendo relevante y efectivo para mejorar el rendimiento empresarial y la satisfacción del cliente. Las conclusiones clave son:
- *Relevancia Continua:* TQM sigue siendo utilizado por muchas organizaciones en todo el mundo y ha demostrado ser beneficioso para mejorar la competitividad, el crecimiento empresarial y la sostenibilidad.
- *Mejora del Rendimiento:* La implementación de TQM ha mostrado mejoras significativas en el rendimiento organizacional, incluyendo la satisfacción del cliente y la moral de los empleados.
- *Adaptabilidad:* TQM es adaptable a diferentes sectores industriales y sigue siendo compatible con los estándares de gestión modernos como ISO 9001.
- *Compromiso y Participación:* La implementación exitosa de TQM requiere el compromiso total de la dirección y la participación activa de los empleados.
],
limitaciones: [
El artículo menciona algunas limitaciones en la implementación del TQM (Total Quality Management) en las organizaciones. Las principales son:
- *Falta de comunicación:* La falta de comunicación efectiva puede ser una barrera significativa para la implementación exitosa del TQM.
- *Compromiso de la gestión:* La falta de compromiso por parte de la alta dirección puede dificultar la implementación del TQM.
- *Conocimiento y cultura:* La falta de conocimiento y una cultura organizacional adecuada pueden afectar negativamente la implementación del TQM.
- *Recursos y estructura:* La falta de recursos y una estructura organizacional inapropiada también pueden ser obstáculos importantes.
],
implicaciones-prácticas: [
La TQM sigue siendo relevante y efectiva en mejorar la competitividad, crecimiento empresarial y sostenibilidad en diversas industrias. Su implementación requiere compromiso de la dirección y participación de los empleados.
],
futuras-sugerencias: [
Se recomienda continuar estudiando la TQM en sectores emergentes como el comercio electrónico y las startups digitales para asegurar su aplicabilidad y efectividad en estos nuevos contextos.
],
fuertes: [
Puntos fuertes del artículo:
- *Metodología Sistemática:* El artículo utiliza una revisión sistemática de la literatura, lo que asegura un análisis exhaustivo y bien estructurado de múltiples fuentes y sectores industriales.
- *Claridad y Estructura:* La información está bien organizada en secciones claras, facilitando la comprensión de los conceptos y resultados presentados.
- *Relevancia Actual:* Se destaca que el TQM sigue siendo relevante y efectivo en la mejora del rendimiento empresarial y la satisfacción del cliente en el entorno empresarial actual.
- *Amplia Cobertura:* El estudio abarca una variedad de sectores industriales y geográficos, proporcionando una visión global de la implementación y los beneficios del TQM.
],
débiles: [
Puntos débiles o limitaciones no abordadas:
- *Falta de Enfoque en la Innovación:* El artículo no aborda cómo la TQM puede integrarse con nuevas tecnologías y métodos innovadores para mejorar aún más la calidad y la eficiencia.
- *Desafíos Culturales:* No se discuten en profundidad los desafíos culturales y de cambio organizacional que pueden surgir al implementar TQM en diferentes contextos y países.
- *Medición de Impacto:* Aunque se mencionan algunos beneficios, no se proporciona un marco detallado para medir el impacto a largo plazo de la TQM en diversas industrias.
- *Adaptación a Nuevos Sectores:* El artículo no explora cómo la TQM puede adaptarse y ser relevante para sectores emergentes como el comercio electrónico y las startups digitales.
],
contribuciones: [
Este artículo contribuye al conocimiento en gestión de la calidad al confirmar la relevancia continua del TQM, describir su evolución y demostrar su impacto positivo en diversas industrias. Algunos aspectos clave de la contribución incluyen:
- *Relevancia del TQM:* El artículo confirma que el *Total Quality Management (TQM)* sigue siendo relevante y efectivo para mejorar el rendimiento empresarial y la satisfacción del cliente en diversos sectores industriales.
- *Evolución del TQM:* Describe la evolución del TQM en cuatro etapas: *inspección de calidad, control de calidad, aseguramiento de calidad y gestión de calidad total*.
- *Impacto del TQM:* Muestra cómo la implementación del TQM ha mejorado la *competitividad, el crecimiento empresarial y la moral de los empleados*.
- *Metodología:* Utiliza una *revisión sistemática de la literatura* para evaluar la efectividad del TQM en diferentes industrias y contextos.
],
respuesta: [
En respuesta a la pregunta de investigación sobre los beneficios y problemas de implementación del TQM en las organizaciones modernas, el artículo destaca que:
- *Relevancia y Beneficios del TQM:* El TQM sigue siendo relevante y ampliamente utilizado en diversas industrias. Ayuda a mejorar la competitividad, el crecimiento empresarial y la sostenibilidad, además de aumentar la moral de los empleados.
- *Desafíos de Implementación:* La falta de comunicación y el compromiso de la gestión son barreras significativas. La implementación efectiva requiere compromiso de la dirección y la participación de los empleados.
- *Impacto en el Rendimiento:* La implementación del TQM ha demostrado mejorar el rendimiento organizacional, la satisfacción del cliente y la ventaja competitiva.
- *Evolución del TQM:* El TQM ha evolucionado a lo largo de los años, comenzando con la inspección de calidad y avanzando hacia un enfoque integral que involucra a toda la organización.
],
)
== 14 Puntos de Deming
#artículo(
título: [Artículo 2: IMPACTO DE TOTAL QUALITY MANAGEMENT EN EL DESEMPEÑO ORGANIZATIVO~@TORRES_SANTA_CRUZ2019],
palabras-clave: [Total Quality Management],
fuente: [SciELO],
tabla-ficha: ficha(
título: [IMPACTO DE TOTAL QUALITY MANAGEMENT EN EL DESEMPEÑO ORGANIZATIVO],
autores: cite(<TORRES_SANTA_CRUZ2019>, form: "author"),
año: cite(<TORRES_SANTA_CRUZ2019>, form: "year"),
fuente: [SciELO (_Total Quality Management_), Investigación & Desarrollo, 19, 43--58],
caption: [Ficha de Lectura del Artículo 2],
),
objetivo: [
El objetivo principal del artículo es verificar empíricamente cómo la adopción de los principios y prácticas de Total Quality Management (TQM) influye en el desempeño organizativo. Para ello, se utiliza la visión dinámica del Enfoque Basado en Competencias (EBC) para analizar cómo las organizaciones desarrollan capacidades específicas y regeneran competencias distintivas.
Entre los puntos clave, se encuentran:
- *Impacto de TQM en Desempeño Organizativo*: El artículo busca verificar empíricamente cómo la adopción de los principios y prácticas de *Total Quality Management (TQM)* influye en el desempeño organizativo.
- *Enfoque Basado en Competencias*: Se utiliza la visión dinámica del *Enfoque Basado en Competencias (EBC)* para analizar cómo las organizaciones desarrollan capacidades específicas y regeneran competencias distintivas.
- *Modelo Teórico y Hipótesis*: Se construye un modelo teórico de las relaciones entre variables y se formulan hipótesis que se contrastan mediante modelos de ecuaciones estructurales.
- *Resultados Positivos*: Los resultados muestran una relación positiva entre la introducción de TQM y la generación de competencias distintivas, lo que promueve mejores resultados empresariales.
],
preguntas: [
El artículo formula cinco hipótesis de investigación específicas:
- *H1*: El grado en que una organización está concienciada con una estrategia de TQM está relacionado positivamente con el uso de los principios y las prácticas relativas a calidad.
- *H2*: La adopción de cada una de las dimensiones de TQM está altamente correlacionada con la adopción de las demás.
- *H3*: Existe una relación positiva entre el grado de adopción de TQM en una organización y la creación de competencias distintivas en marketing.
- *H4*: Existe una relación positiva entre la creación de competencias distintivas en marketing y el desempeño organizativo.
- *H5*: La relación positiva entre el grado de adopción de TQM y el desempeño organizativo se explica a través de la mediación de la creación de competencias distintivas en marketing.
],
diseño: [
El diseño de investigación utilizado en el artículo es *cuantitativo*. Se aplicaron modelos de ecuaciones estructurales para contrastar las hipótesis formuladas y se utilizó una muestra de *219 hoteles de cuatro y cinco estrellas en España*. La investigación se realizó mediante *cuestionarios estructurados enviados por correo*.
],
métodos: [
En el artículo, se utilizaron *cuestionarios estructurados* como método de recopilación de datos. Estos cuestionarios fueron enviados por correo a directivos de hoteles de cuatro y cinco estrellas en España. La encuesta estaba dividida en tres secciones, cada una destinada a evaluar uno de los constructos básicos del modelo teórico: TQM, competencias distintivas en marketing y desempeño organizativo. Se recibieron 231 cuestionarios, *de los cuales 219 fueron válidos para la investigación*.
],
muestra-usada: [
La muestra utilizada en el estudio está compuesta por *219 hoteles de cuatro y cinco estrellas en España*.
- *Número de participantes*: 219 empresas hoteleras.
- *Criterio de selección*: Hoteles de cuatro y cinco estrellas, considerados como empresas intensivas en calidad.
- *Método de recolección de datos*: Encuesta enviada por correo, incluyendo una carta de presentación, el cuestionario estructurado, instrucciones y un compromiso de facilitar los resultados del estudio si el empresario lo solicitaba.
],
resultados: [
Los hallazgos clave del estudio son:
- *Relación Positiva entre TQM y Competencias Distintivas*: La adopción de *Total Quality Management (TQM)* está positivamente relacionada con la generación de competencias distintivas en marketing.
- *Impacto en el Desempeño Organizativo*: Las competencias distintivas en marketing, generadas a través de TQM, mejoran significativamente el desempeño organizativo.
- *Efecto Mediador*: Las competencias distintivas en marketing actúan como mediadoras en la relación entre TQM y el desempeño organizativo, siendo el efecto indirecto más significativo que el directo.
- *Validez del Modelo*: El modelo teórico propuesto fue validado empíricamente, mostrando buenos índices de ajuste y fiabilidad en las mediciones.
],
gráficos: [
La Figura 3, que representa la evolución de las estrategias de calidad desde la inspección hasta la gestión de la calidad total (TQM), es particularmente relevante para entender el contexto histórico y la importancia actual del TQM.
Además, se destacan otras figuras importantes como:
- *Figura 3: Estrategias basadas en la Calidad en función del Tiempo*: Representa la evolución de las estrategias de calidad desde la inspección hasta la gestión de la calidad total (TQM). Es relevante porque muestra cómo ha evolucionado el concepto de calidad a lo largo del tiempo.
- *Figura 1: Jerarquía entre Recursos, Capacidades, Competencias y Competencias Esenciales o Distintivas*: Esta figura muestra una jerarquía que clasifica los recursos y capacidades de una organización, destacando la importancia de las competencias distintivas para mejorar el desempeño organizativo.
- *Figura 2: Estructura Jerárquica de Competencias Distintivas*: Esta figura clasifica las competencias distintivas en tres niveles: funcionales, de coordinación y cohesión, y dinámicas o de innovación. Es importante porque ilustra cómo las competencias se integran y coordinan para generar ventajas competitivas.
Estas figuras son importantes porque ayudan a visualizar conceptos clave y la relación entre ellos, facilitando la comprensión del impacto de TQM en el desempeño organizativo.
],
interpretaciones: [
Los autores concluyen que la adopción de TQM tiene un impacto positivo en el desempeño organizativo, mediado por la generación de competencias distintivas en marketing. Esto sugiere que TQM no solo mejora la calidad, sino que también desarrolla capacidades organizativas que conducen a un mejor rendimiento.
Las principales interpretaciones del estudio son:
- *Relación Positiva*: Existe una relación positiva entre la implementación de TQM y la generación de competencias distintivas en marketing, lo que a su vez mejora el desempeño organizativo.
- *Modelo Integrador*: La correcta aplicación de los principios y prácticas de TQM conduce a un modelo de dirección sólido y exitoso.
- *Competencias Distintivas*: La introducción de TQM genera competencias distintivas que permiten a las organizaciones obtener información valiosa y satisfacer plenamente a los clientes.
- *Impacto Mediador*: Las competencias distintivas en marketing actúan como mediadoras en la relación entre TQM y el desempeño organizativo, sugiriendo que su papel es crucial para evaluar los resultados empresariales.
],
limitaciones: [
El artículo menciona varias limitaciones en la investigación:
- *Definiciones y Medidas de Desempeño*: Los estudios no usan las mismas definiciones y medidas de desempeño, lo que puede llevar a resultados contradictorios.
- *Factores Mediadores*: La mayoría de los estudios no consideran que TQM puede influir en el desempeño a través de factores mediadores.
- *Sector de Estudio*: La investigación empírica se ha realizado principalmente en el sector manufactura, dejando de lado el sector servicios.
- *Modelización con Ecuaciones Estructurales*: Pocos estudios empíricos emplean esta técnica para examinar TQM y su impacto.
Estas limitaciones resaltan la necesidad de más investigaciones empíricas y teóricas en diferentes sectores y contextos.
],
implicaciones-prácticas: [
Los resultados permiten a los directivos discernir cuáles principios y prácticas de TQM son más relevantes para generar competencias distintivas en marketing, mejorando así los resultados empresariales y la posición competitiva de la organización.
],
futuras-sugerencias: [
Se destaca la necesidad de abordar estudios empíricos en diferentes sectores de actividad para construir teoría sobre el impacto de TQM. Además, se sugiere investigar más sobre el rol específico de las competencias distintivas en marketing y su efecto mediador en el impacto de TQM sobre el desempeño organizativo.
],
fuertes: [
Puntos fuertes del artículo:
- *Metodología rigurosa*: El estudio utiliza modelos de ecuaciones estructurales para contrastar las hipótesis, lo que aporta solidez y precisión a los resultados.
- *Estructura clara*: El artículo está bien organizado, con una introducción que contextualiza el problema, una revisión de la literatura, y una sección de resultados y discusión que facilita la comprensión.
- *Relevancia práctica*: La investigación se centra en el impacto del Total Quality Management (TQM) en el desempeño organizativo, un tema de gran importancia para las empresas que buscan mejorar su competitividad.
- *Aplicación empírica*: El estudio no solo se basa en teorías, sino que también aplica el modelo a empresas del sector servicios, proporcionando evidencia empírica que respalda las conclusiones.
],
débiles: [
Puntos débiles o limitaciones no abordadas:
- *Falta de estudios empíricos en el sector servicios*: La mayoría de los estudios sobre TQM se han centrado en el sector manufacturero, dejando un vacío en la investigación aplicada a servicios, especialmente en turismo.
- *Definiciones y medidas inconsistentes*: Los estudios no utilizan las mismas definiciones y medidas de desempeño, lo que dificulta la comparación y generalización de los resultados.
- *Papel mediador de variables*: Muchos estudios no consideran que TQM puede influir en el desempeño a través de factores mediadores, lo que podría llevar a conclusiones incompletas.
- *Necesidad de modelización con ecuaciones estructurales*: Hay pocos estudios empíricos que emplean esta técnica para examinar TQM, lo que limita la comprensión de las relaciones complejas entre variables.
],
contribuciones: [
Este artículo contribuye significativamente al conocimiento en gestión de la calidad al proporcionar evidencia empírica del impacto positivo de TQM en el desempeño organizativo, especialmente en el sector servicios. Además, destaca la importancia de las competencias distintivas en marketing como mediadoras de este impacto.
- *Comprensión de TQM*: Proporciona una visión detallada de cómo la *Gestión de la Calidad Total (TQM)* impacta el desempeño organizativo, lo que es esencial para implementar mejoras en cualquier organización.
- *Modelo Teórico*: Presenta un *modelo teórico* que relaciona TQM con competencias distintivas y desempeño organizativo, útil para investigaciones futuras o aplicaciones prácticas.
- *Metodología Empírica*: Ofrece una *metodología empírica* basada en modelos de ecuaciones estructurales, lo que puede ser valioso para realizar estudios similares en otros contextos.
- *Aplicación en Servicios*: Aunque muchos estudios se centran en manufactura, este artículo se enfoca en el *sector servicios*, ampliando el entendimiento de TQM en diferentes industrias.
],
respuesta: [
En respuesta a la pregunta de investigación sobre los beneficios y problemas de implementación del TQM en las organizaciones modernas, el artículo muestra que:
- *Beneficios del TQM*: El TQM promueve la generación de competencias distintivas que mejoran el desempeño organizativo. Se destaca la relación positiva entre la adopción de TQM y la mejora en los resultados empresariales.
- *Problemas de Implementación*: La implementación de TQM puede enfrentar desafíos debido a la falta de consenso en su definición y la dificultad de medir sus principios y prácticas. Además, la mayoría de los estudios se han centrado en el sector manufacturero, dejando un vacío en el sector servicios.
- *Eficacia del TQM*: La eficacia de TQM está mediada por la generación de competencias distintivas en marketing, lo que sugiere que su impacto en el desempeño organizativo no es directo, sino a través de estas competencias.
- *Recomendaciones*: Se sugiere la necesidad de más estudios empíricos en el sector servicios y la consideración de variables mediadoras para entender mejor la relación entre TQM y el desempeño organizativo.
]
)
#artículo(
título: [Artículo 3: Energy management and power quality improvement of microgrid system through modified water wave optimization~@CHOUDHURY20236020],
palabras-clave: [Quality improvement strategies],
fuente: [DOAJ],
tabla-ficha: ficha(
título: [Energy management and power quality improvement of microgrid system through modified water wave optimization],
autores: cite(<CHOUDHURY20236020>, form: "author"),
año: cite(<CHOUDHURY20236020>, form: "year"),
fuente: [DOAJ (_Quality improvement strategies_), Energy Reports, 9, 6020--6041],
caption: [Ficha de Lectura del Artículo 3],
),
objetivo: [
El objetivo principal del artículo es mejorar la gestión energética y la calidad de potencia en un sistema de microredes integrado por celdas de combustible, baterías y supercapacitores. Para lograrlo, propone un algoritmo de Optimización de Ondas de Agua Modificado (MWWO) para ajustar dinámicamente los parámetros del controlador proporcional-integral (PI).
Entre los puntos clave, se encuentran:
- *Mejora de la Gestión Energética y Calidad de Potencia*: El artículo se centra en la mejora de la gestión energética y la calidad de potencia en un sistema de microredes integrado por celdas de combustible, baterías y supercapacitores.
- *Optimización de Ondas de Agua*: Propone un algoritmo de Optimización de Ondas de Agua Modificado (MWWO) para ajustar dinámicamente los parámetros del controlador proporcional-integral (PI) y mejorar la eficiencia del sistema.
- *Comparación de Métodos*: Compara la técnica propuesta con métodos tradicionales en términos de consumo de hidrógeno, calidad de potencia y eficiencia general del sistema.
- *Resultados Superiores*: Los resultados muestran que la técnica MWWO supera a los métodos tradicionales en respuesta dinámica de voltaje, consumo de combustible, reducción de armónicos y eficiencia general.
- *Optimización de Algoritmos:* El artículo se centra en mejorar el algoritmo de Optimización de Olas de Agua (WWO) para resolver problemas de optimización.
- *Desarrollo de MWWO:* Propone una técnica modificada llamada Optimización de Olas de Agua Modificada (MWWO) para equilibrar la exploración y explotación en la búsqueda de soluciones óptimas.
- *Aplicación en Microgrids:* Implementa la técnica MWWO en un sistema de microgrid que incluye celdas de combustible, baterías y supercapacitores para mejorar la calidad de la energía y la eficiencia del sistema.
- *Referencias*: El documento incluye una amplia variedad de referencias sobre temas como *celdas de combustible*, *optimización de sistemas de energía*, *gestión de microredes*, y *tecnologías de energía renovable*.
- *Diversidad de Temas*: Los artículos citados abarcan desde la *optimización de algoritmos* hasta la *gestión de energía en sistemas híbridos*.
- *Aplicaciones Prácticas*: Muchos estudios se centran en aplicaciones prácticas como la *mejora de la calidad de la energía* y la *gestión de recursos distribuidos*.
],
preguntas: [
El artículo investiga estrategias de gestión de energía y mejora de la calidad de energía en un sistema de microred integrado por una celda de combustible, batería y supercapacitor. Las hipótesis o preguntas de investigación específicas formuladas por los autores incluyen:
- *Eficiencia del MWWO*: ¿El algoritmo de Optimización de Olas de Agua Modificado (MWWO) mejora la eficiencia y la calidad de energía en comparación con métodos tradicionales?
- *Consumo de Hidrógeno*: ¿El MWWO reduce el consumo de hidrógeno en el sistema de microred?
- *Distribución de Energía*: ¿El MWWO mejora la distribución de energía entre las fuentes híbridas?
- *Respuesta Dinámica*: ¿El MWWO proporciona una mejor respuesta dinámica de voltaje y reduce los armónicos en el sistema?
],
diseño: [
El diseño de investigación de este artículo es *cuantitativo*. Se basa en la simulación y análisis numérico utilizando la arquitectura de MATLAB/Simulink para evaluar la eficacia de diferentes estrategias de gestión de energía y mejora de la calidad de energía en un sistema de microredes integrado por celdas de combustible, baterías y supercapacitores.
],
métodos: [
El artículo no menciona explícitamente los métodos de recopilación de datos utilizados. Sin embargo, se puede inferir que los datos fueron obtenidos a través de *simulaciones en MATLAB/Simulink* y *análisis comparativos* de diferentes técnicas de optimización y control para la gestión de energía y mejora de la calidad de la energía en sistemas de microredes.
],
muestra-usada: [
_El artículo no menciona específicamente una muestra de participantes humanos, ya que se centra en la gestión de energía y la mejora de la calidad de energía en sistemas de microredes utilizando una técnica de optimización de olas de agua modificada_. La investigación se basa en simulaciones realizadas en el entorno MATLAB/Simulink para comparar la eficiencia de diferentes estrategias de gestión de energía y controladores.
],
resultados: [
Los hallazgos clave del estudio son:
- *Eficiencia Mejorada*: La técnica propuesta de Optimización de Olas de Agua Modificada (MWWO) mostró una eficiencia superior en comparación con los métodos tradicionales, con menor consumo de hidrógeno y mejor respuesta dinámica de voltaje.
- *Calidad de Energía*: Se logró una reducción significativa de armónicos y una mejor distribución de energía entre las fuentes híbridas, mejorando la calidad de la energía del sistema.
- *Robustez del Sistema*: El sistema demostró una mayor robustez y estabilidad, con menor estrés en la celda de combustible, batería y supercondensador.
- *Comparación de Métodos*: Los resultados numéricos y gráficos confirmaron la superioridad del método MWWO sobre los métodos WWO y PI tradicionales en términos de eficiencia, consumo de combustible y calidad de energía.
],
gráficos: [
Hay varias figuras y tablas en el documento. Una figura destacada es la *Figura 1*, que muestra una *representación esquemática del artículo de investigación*. Esta figura resalta las ideas clave del artículo, proporcionando una visión general clara y concisa de los conceptos y métodos discutidos.
La importancia de esta figura radica en su capacidad para *resumir visualmente* el contenido del artículo, facilitando la comprensión de los lectores sobre los temas principales y las contribuciones del estudio.
],
interpretaciones: [
Los autores interpretan los resultados de su investigación de la siguiente manera:
- *Eficiencia y Robustez del MWWO*: Concluyen que el algoritmo de Optimización de Olas de Agua Modificado (MWWO) es más eficiente y robusto que los métodos tradicionales. Justifican esto mostrando una mejor respuesta dinámica de voltaje, menor consumo de combustible, reducción de armónicos y mayor eficiencia general del sistema.
- *Comparación con Métodos Tradicionales*: Los resultados obtenidos indican que el MWWO supera a los métodos tradicionales en términos de velocidad de convergencia, diversidad de soluciones y rendimiento general.
- *Aplicabilidad en Tiempo Real*: Los autores destacan que la técnica propuesta es viable para su implementación en tiempo real debido a su capacidad para mejorar la calidad de la energía y la gestión eficiente de los recursos energéticos.
- *Impacto en la Vida Útil de los Recursos*: La técnica MWWO maximiza la eficiencia del combustible y minimiza el estrés en las fuentes de energía híbridas, lo que prolonga su vida útil.
Estas conclusiones se basan en un análisis numérico y gráfico detallado de los resultados obtenidos en el entorno de simulación MATLAB/Simulink.
],
limitaciones: [
Los autores mencionan algunas limitaciones en su investigación:
- *Precisión de Cálculo*: La técnica básica de *Water Wave Optimization (WWO)* tiene una baja precisión de cálculo y tiende a caer en óptimos locales.
- *Temperatura de Operación*: Las *celdas de combustible de óxido sólido (SOFC)* operan a altas temperaturas, lo que puede aumentar los costos de material y hacerlas menos económicas.
- *Respuesta Adaptativa*: La alta temperatura de operación de las SOFCs también limita su capacidad de respuesta adaptativa ante perturbaciones.
Estas limitaciones se abordan mediante la implementación de técnicas mejoradas como el *Modified Water Wave Optimization (MWWO)*.
],
implicaciones-prácticas: [
La técnica de *Optimización de Olas de Agua Modificada (MWWO)* propuesta ha demostrado ser superior en términos de *mejor respuesta de voltaje dinámico, menor consumo de combustible, reducción de armónicos y mayor eficiencia general*. Esto sugiere que puede ser implementada en sistemas de microredes para mejorar la gestión de energía y la calidad de la potencia.
],
futuras-sugerencias: [
Se recomienda explorar la *aplicación de la técnica MWWO en diferentes configuraciones de microredes* y con otros tipos de fuentes de energía renovable. Además, se sugiere investigar la *integración de técnicas de inteligencia artificial* para mejorar aún más la eficiencia y la adaptabilidad del sistema.
],
fuertes: [
Puntos fuertes del artículo:
- *Metodología Innovadora*: El artículo presenta una técnica de Optimización de Olas de Agua Modificada (MWWO) para la gestión de energía y mejora de la calidad de energía en microredes, destacando su superioridad sobre métodos tradicionales.
- *Comparación Exhaustiva*: Se realiza una comparación detallada entre la técnica propuesta y métodos tradicionales en términos de consumo de hidrógeno, calidad de energía y eficiencia del sistema, demostrando la robustez y eficacia del MWWO.
- *Aplicación Práctica*: El sistema se implementa en un entorno MATLAB/Simulink, proporcionando resultados numéricos y gráficos que validan la efectividad del enfoque propuesto en condiciones de carga variables.
- *Relevancia y Actualidad*: El artículo aborda problemas contemporáneos en la gestión de energía y calidad de energía en sistemas de microredes, utilizando fuentes de energía renovable y tecnologías de almacenamiento de energía híbrida.
],
débiles: [
Puntos débiles o limitaciones no abordadas:
- *Precisión Computacional*: La técnica de Optimización de Olas de Agua (WWO) básica tiene baja precisión computacional y tiende a caer en óptimos locales.
- *Costos de Materiales*: Las Celdas de Combustible de Óxido Sólido (SOFC) operan a altas temperaturas, lo que incrementa los costos de materiales y las hace menos económicas.
- *Respuesta Adaptativa*: Las SOFC tienen una respuesta adaptativa limitada ante perturbaciones debido a su alta temperatura de operación.
- *Diversidad de Soluciones*: Aunque la técnica MWWO mejora la diversidad de soluciones, aún puede haber margen para optimizar la diversidad y la exploración del espacio de búsqueda.
],
contribuciones: [
El artículo sobre la *gestión de energía y mejora de la calidad de energía en sistemas de microredes* mediante la *optimización de ondas de agua modificada* ofrece varias contribuciones valiosas:
- *Innovación en Algoritmos*: Introduce un algoritmo de optimización novedoso que mejora la eficiencia y la calidad de la energía en sistemas de microredes.
- *Comparación de Métodos*: Proporciona una comparación detallada entre técnicas tradicionales y el nuevo método propuesto, destacando sus ventajas en términos de consumo de hidrógeno, calidad de energía y eficiencia del sistema.
- *Aplicaciones Prácticas*: Demuestra la aplicabilidad del algoritmo en un entorno de simulación, lo que puede ser útil para futuras implementaciones en tiempo real.
- *Contribuciones Académicas*: Ofrece una base sólida para futuras investigaciones en la optimización de sistemas de energía, lo que puede ser beneficioso para tu trayectoria académica o profesional en el campo de la ingeniería eléctrica y la gestión de energía.
],
respuesta: [
Para responder a la pregunta de investigación sobre los beneficios y problemas de implementación del enfoque clásico de "Total Quality Management" (TQM) en las organizaciones modernas y cómo estas limitaciones afectan su eficacia, se pueden seguir estos pasos:
+ *Revisión de la Literatura*: Investigar estudios previos y artículos académicos que discutan los beneficios y problemas del TQM en diferentes contextos organizacionales.
+ *Estudio de Caso*: Analizar casos específicos de organizaciones que han implementado TQM, evaluando los resultados obtenidos y las dificultades encontradas.
+ *Encuestas y Entrevistas*: Recopilar datos cualitativos y cuantitativos de empleados y directivos sobre su experiencia con TQM, identificando beneficios percibidos y problemas enfrentados.
+ *Análisis Comparativo*: Comparar los resultados de TQM con otros enfoques de gestión de calidad para determinar su eficacia relativa y las limitaciones específicas en contextos modernos.
Estos pasos permitirán una comprensión integral de cómo el TQM puede beneficiar a las organizaciones modernas y las barreras que pueden limitar su eficacia.
]
)
#artículo(
título: [Artículo 4: Estudo de caso em uma empresa do setor madeireiro do Alto Paranaíba: uma proposta de implementação da gestão da qualidade total e da gestão da rotina / Quality management and current approaches: a case study in a logging company located in the Alto Paranaíba~@Ananias_Grossi_Peixoto_Mendonça_Melo_Barbosa_2020],
palabras-clave: [Problemas de implementação do TQM],
fuente: [Google Academy],
tabla-ficha: ficha(
título: [Estudo de caso em uma empresa do setor madeireiro do Alto Paranaíba: uma proposta de implementação da gestão da qualidade total e da gestão da rotina],
autores: cite(<Ananias_Grossi_Peixoto_Mendonça_Melo_Barbosa_2020>, form: "author"),
año: cite(<Ananias_Grossi_Peixoto_Mendonça_Melo_Barbosa_2020>, form: "year"),
fuente: [Google Academy (_Problemas de implementação do TQM_), Brazilian Journal of Business, 2(2), 965--978],
caption: [Ficha de Lectura del Artículo 4],
),
objetivo: [
El objetivo principal del artículo es *identificar características de la gestión de la calidad total (TQM) y del Gerenciamento da Rotina do Dia-a-Dia (GRD)* en una empresa del sector madeireiro situada en la región del Alto Paranaíba. Se busca *proponer mejoras* mediante la aplicación del GRD, enfocándose en el proceso de producción del principal producto de la empresa, la puerta, para *fomentar la mejora continua* y la participación de los empleados en la búsqueda de la calidad en todas las etapas de los procesos.
],
preguntas: [
El artículo no menciona explícitamente hipótesis o preguntas de investigación específicas. Sin embargo, el objetivo principal del estudio es *identificar características de la gestión de la calidad total (TQM) y del Gerenciamento da Rotina do Dia-a-Dia (GRD)* en una empresa del sector madeireiro en la región del Alto Paranaíba. Además, busca *evaluar la viabilidad de aplicar el GRD* en el proceso de producción del principal producto de la empresa, la puerta, y *proponer mejoras* para fomentar la participación de los empleados en la mejora continua de las actividades diarias.
],
diseño: [
El diseño de investigación utilizado en este artículo es *cualitativo* y se clasifica como un *estudio de caso*. Se enfoca en identificar y retratar características de la Gestión de la Calidad Total (TQM) y la Gestión de la Rutina del Día a Día (GRD) en una empresa del sector madeireiro.
],
métodos: [
En el estudio de caso presentado en el artículo, los *métodos de recopilación de datos* utilizados fueron:
- *Entrevistas*: Se realizó una entrevista semiestructurada con el gerente de producción para identificar características de la Gestión de la Calidad Total (TQM) y la Gestión de la Rutina del Día a Día (GRD).
- *Observación*: Se llevaron a cabo observaciones en la empresa para alinear la teoría con la práctica.
- *Análisis documental*: Se utilizó un levantamiento bibliográfico y el apoyo en el referencial teórico para elaborar los cuestionarios y guías de entrevista.
Estos métodos permitieron obtener una visión integral de los procesos y prácticas de la empresa.
],
muestra-usada: [
En el estudio de caso presentado en el artículo, la muestra utilizada fue una *empresa del sector madeireiro* localizada en la región del Alto Paranaíba. Esta empresa cuenta con *15 empleados* en el área productiva y fabrica productos de madera mecánicamente procesada, como puertas, portales, ventanas, entre otros. La selección de esta empresa se basó en su relevancia dentro del sector y su disposición para participar en el estudio.
],
resultados: [
Los resultados más relevantes presentados en el artículo son:
- *Falta de Herramientas de Calidad*: La empresa no utiliza herramientas formales de gestión de calidad, lo que resulta en una falta de estandarización y control de procesos.
- *Problemas de Calidad*: Se identificaron problemas críticos como las fisuras en las puertas, que representan el 60% de las reclamaciones de los clientes.
- *Propuesta de Mejoras*: Se sugirió la implementación de la metodología CEDAC y el sistema 5S para mejorar la calidad y organización del ambiente de trabajo.
- *Orientación al Cliente*: La empresa muestra preocupación por las necesidades de los clientes, pero de manera informal y sin indicadores de calidad establecidos.
],
gráficos: [
En el artículo, se destacan dos figuras importantes:
- *Figura 1: Desdoblamiento metodológico para obtenção dos resultados*: Esta figura muestra el proceso metodológico utilizado para obtener los resultados del estudio. Es importante porque ilustra cómo se estructuró la investigación y los pasos seguidos para recolectar y analizar los datos.
- *Figura 2: Mapeamento do processo produtivo da porta*: Esta figura representa el mapeo del proceso de producción de puertas, desde la recepción del pedido hasta la entrega al cliente. Es crucial porque identifica los puntos críticos y áreas de mejora en el proceso productivo, lo que es esencial para implementar la gestión de la calidad total y la gestión de la rutina.
Estas figuras ayudan a visualizar y entender mejor los procesos y metodologías discutidos en el artículo.
],
interpretaciones: [
Los autores concluyen que la empresa necesita implementar el *gerenciamiento de la rutina* para mejorar la calidad en todos sus procesos diarios. Aquí están las conclusiones clave:
- *Necesidad de Gestión de Calidad*: La empresa debe enfocarse más en la gestión de la calidad, ya que actualmente no utiliza herramientas ni métodos formales para ello.
- *Propuesta de GRD*: La aplicación del Gerenciamento da Rotina do Dia-a-Dia (GRD) es necesaria para mejorar la calidad de los productos y procesos.
- *Participación de los Empleados*: La metodología CEDAC se sugiere para fomentar la participación de los empleados en la mejora continua.
- *Implementación de Otras Abordajes*: Además del GRD, se recomienda implementar la gestión por diretrizes y la gestión por procesos para asegurar la calidad a largo plazo.
Estas conclusiones se justifican mediante la observación de la falta de estandarización y control en los procesos actuales de la empresa.
],
limitaciones: [
El artículo menciona las siguientes limitaciones:
- *Falta de Estandarización*: La empresa no tiene una estandarización formal de sus procesos ni utiliza herramientas de gestión de calidad.
- *Enfoque Reactivo*: La mejora de procesos se realiza solo después de recibir quejas de los clientes, en lugar de prevenir problemas.
- *Ausencia de Indicadores*: No se utilizan indicadores de calidad ni sistemas de control para monitorear el desempeño de los procesos.
- *Falta de Capacitación*: No se proporciona entrenamiento a los empleados para consolidar mejoras y mantener la calidad.
Estas limitaciones afectan la efectividad de la gestión de calidad en la empresa estudiada.
],
implicaciones-prácticas: [
La implementación del *Gerenciamento da Rotina do Dia-a-Dia (GRD)* en la empresa maderera puede mejorar significativamente la calidad de los productos y la eficiencia de los procesos. Se sugiere la adopción de herramientas como el *5S* y los *Procedimientos Operacionales Estándar (POP)* para mantener un ambiente de trabajo limpio y organizado.
],
futuras-sugerencias: [
Se recomienda explorar la implementación de otras metodologías de gestión de calidad, como la *Gestión por Directrices* y la *Gestión por Procesos*, para evaluar su impacto en la mejora continua y la satisfacción del cliente a largo plazo.
],
fuertes: [
El estudio se destaca por una metodología clara y estructurada, incluyendo entrevistas y observación. Además, aborda un tema relevante en un sector significativo como el maderero, y propone soluciones prácticas, como la metodología CEDAC y los Procedimientos Operacionales Estándar (POP), para mejorar la calidad y fomentar la participación de los empleados.
- *Metodología clara y estructurada*: El artículo utiliza una metodología bien definida, incluyendo entrevistas y observaciones, para alinear la teoría con la práctica en la empresa estudiada.
- *Relevancia del estudio*: La investigación aborda la implementación de la Gestión de la Calidad Total (TQM) y la Gestión de la Rutina del Día a Día (GRD) en una empresa del sector maderero, un área económica significativa.
- *Propuestas de mejora*: Se sugieren mejoras prácticas, como la implementación de la metodología CEDAC y el uso de Procedimientos Operacionales Estándar (POP), para fomentar la participación de los empleados y la mejora continua.
- *Análisis detallado*: El artículo proporciona un análisis exhaustivo de los procesos y microprocesos de la empresa, identificando problemas críticos y proponiendo soluciones específicas.
],
débiles: [
El artículo no menciona la falta de herramientas formales como el ciclo PDCA, ni aborda la necesidad de capacitación continua para los empleados. Tampoco se discuten indicadores de calidad específicos, lo cual sería esencial para la mejora continua.
- *Falta de herramientas formales*: La empresa no utiliza herramientas formales de gestión de calidad, como el ciclo PDCA o Procedimientos Operacionales Estándar (POP), lo que podría mejorar la estandarización y control de procesos.
- *Indicadores de calidad*: No se mencionan indicadores de calidad específicos para monitorear y evaluar el desempeño de los procesos, lo que es crucial para la mejora continua.
- *Capacitación de empleados*: No se aborda la necesidad de capacitación continua para los empleados, lo cual es esencial para mantener y mejorar la calidad.
- *Gestión visual*: La implementación de metodologías como CEDAC para la gestión visual podría fomentar una mayor participación de los empleados en la mejora continua.
],
contribuciones: [
Este artículo ofrece una visión práctica sobre la implementación de TQM y GRD, conceptos clave en la gestión de la calidad. Además, identifica desafíos comunes en el sector maderero y ofrece soluciones que pueden aplicarse en situaciones similares, lo que lo hace relevante para quienes buscan mejorar la calidad en entornos industriales.
- *Implementación Práctica*: Proporciona un caso de estudio real sobre la implementación de la Gestión de la Calidad Total (TQM) y la Gestión de la Rutina del Día a Día (GRD) en una empresa del sector maderero, lo que puede ayudar a entender cómo aplicar estos conceptos en situaciones prácticas.
- *Metodologías y Herramientas*: Discute metodologías como CEDAC y herramientas como el ciclo PDCA y 5S, que son fundamentales para la mejora continua y la gestión de la calidad.
- *Desafíos y Soluciones*: Identifica problemas comunes en la gestión de la calidad en el sector maderero y propone soluciones prácticas, lo que puede ser relevante para enfrentar desafíos similares en la práctica profesional.
- *Participación de los Empleados*: Resalta la importancia de la participación de los empleados en la mejora continua, un aspecto crucial para el éxito de cualquier sistema de gestión de la calidad.
],
respuesta: [
El artículo responde a la pregunta de investigación destacando que el enfoque clásico de TQM, combinado con herramientas como GRD y CEDAC, puede generar beneficios en términos de mejora continua. Sin embargo, la falta de capacitación y de indicadores formales limita su efectividad en organizaciones modernas. Estas limitaciones pueden afectar la sostenibilidad de los resultados a largo plazo si no se abordan correctamente.
- *Beneficios del TQM*: El TQM busca la *mejora continua* de los procesos, productos y servicios, involucrando a todos los miembros de la organización para satisfacer y superar las expectativas de los clientes.
- *Problemas de Implementación*: La empresa estudiada no tiene *estandarización formal* de sus procesos ni utiliza herramientas de gestión de calidad, lo que lleva a una falta de control y previsibilidad en los resultados.
- *Limitaciones y Eficacia*: La falta de *indicadores de desempeño* y *sistemas de control* adecuados impide que la empresa implemente eficazmente el TQM, afectando negativamente la calidad y la satisfacción del cliente.
- *Propuestas de Mejora*: Se sugiere la implementación de metodologías como *CEDAC* y *5S* para mejorar la participación de los empleados y la calidad de los procesos.
Estos puntos resumen cómo el artículo aborda la pregunta de investigación, destacando tanto los beneficios potenciales como las limitaciones prácticas del TQM en la empresa estudiada.
],
)
#artículo(
título: [Artículo 5: Identifying critical assumptions and risks in air quality management planning using Theory of Change approach~@PHATHUTSHEDDZO_2024],
palabras-clave: [Quality management challenges],
fuente: [SciELO],
tabla-ficha: ficha(
título: [Identifying critical assumptions and risks in air quality management planning using Theory of Change approach],
autores: cite(<PHATHUTSHEDDZO_2024>, form: "author"),
año: cite(<PHATHUTSHEDDZO_2024>, form: "year"),
fuente: [SciELO (_Quality management challenges_), Clean Air Journal, 34, 1--19],
caption: [Ficha de Lectura del Artículo 5],
),
objetivo: [
El artículo tiene como objetivo *identificar las suposiciones críticas y los riesgos* en la planificación de la gestión de la calidad del aire utilizando el enfoque de la *Teoría del Cambio (ToC)*. Se enfoca en evaluar la *eficacia de los Planes de Gestión de la Calidad del Aire (AQMP)* como instrumento de política, específicamente en el contexto de Sudáfrica, para abordar los desafíos de calidad del aire en áreas prioritarias.
- *Identificación de Supuestos Críticos:* El artículo se centra en *identificar los supuestos críticos y riesgos* en la planificación de la gestión de la calidad del aire utilizando el enfoque de Teoría del Cambio (ToC).
- *Mejora de la Gestión del Aire:* Busca *mejorar la efectividad* de los planes de gestión de la calidad del aire en Sudáfrica.
- *Participación de Múltiples Actores:* Destaca la *importancia de la colaboración* entre diferentes actores, incluyendo académicos, consultores, industria, reguladores y organizaciones no gubernamentales.
],
preguntas: [
El artículo investiga las siguientes hipótesis y preguntas de investigación:
- *Hipótesis sobre el marco legislativo*: Se asume que el marco legislativo proporciona una guía adecuada para el desarrollo e implementación de los Planes de Gestión de la Calidad del Aire (AQMP).
- *Hipótesis sobre los recursos*: Se asume que existen los recursos necesarios (tiempo, dinero, datos, información, habilidades y competencias) para desarrollar e implementar los AQMP.
- *Hipótesis sobre la gobernanza cooperativa*: Se asume que existe una gobernanza cooperativa entre las partes interesadas del gobierno.
- *Preguntas sobre la efectividad*: ¿Qué tan efectivos son los AQMP como instrumentos de política en Sudáfrica y cuáles son los riesgos y suposiciones críticas que afectan su éxito o fracaso?
Estas hipótesis y preguntas son fundamentales para evaluar la efectividad de los AQMP en Sudáfrica utilizando el enfoque de la Teoría del Cambio (ToC).
],
diseño: [
El diseño de investigación del artículo es *cualitativo*. Utiliza el enfoque de la *Teoría del Cambio (ToC)* para identificar supuestos críticos y riesgos en la planificación de la gestión de la calidad del aire en Sudáfrica. Este método implica la participación de diversos actores y la evaluación de las relaciones causales y secuencias de eventos necesarios para alcanzar los resultados deseados.
],
métodos: [
El artículo describe los métodos de recopilación de datos utilizados en el estudio sobre la planificación de la gestión de la calidad del aire en Sudáfrica. Aquí están los métodos principales:
- *Talleres con especialistas*: Se realizaron talleres con especialistas internos en la Universidad del Noroeste para desarrollar un mapa conceptual inicial y suposiciones clave.
- *Talleres con académicos, consultores y profesionales de la industria*: Se llevaron a cabo talleres adicionales con académicos, consultores y profesionales de la industria para refinar el mapa conceptual y las suposiciones.
- *Talleres con reguladores*: Se presentaron los resultados a reguladores gubernamentales para obtener su retroalimentación.
- *Revisión de literatura*: Se realizó una revisión sistemática de la literatura académica y no revisada por pares para identificar suposiciones existentes.
],
muestra-usada: [
La muestra utilizada en el estudio se seleccionó a través de varios talleres y discusiones con diferentes grupos de interés. Los detalles clave son:
- *Talleres Especializados*: Participaron cinco especialistas internos de la Universidad del Noroeste (NWU) con experiencia en la aplicación del enfoque de la Teoría del Cambio (ToC).
- *Consultores y Practicantes*: Se realizaron tres talleres separados con académicos, consultores y practicantes de la industria, incluyendo científicos y oficiales ambientales.
- *Reguladores*: Participaron 11 funcionarios gubernamentales de diferentes niveles de gobierno, incluyendo departamentos nacionales, provinciales y municipales.
- *Organizaciones No Gubernamentales*: Tres miembros de organizaciones de justicia ambiental participaron en discusiones generales.
Estos participantes fueron seleccionados en base a su experiencia, conocimiento y participación directa en el proceso de desarrollo e implementación de los Planes de Gestión de la Calidad del Aire (AQMP) en Sudáfrica.
],
resultados: [
Los resultados clave del estudio sobre la planificación de la gestión de la calidad del aire en Sudáfrica son los siguientes:
- *Mapa de Teoría del Cambio (ToC)*: Se desarrolló un mapa conceptual de ToC para el proceso de desarrollo e implementación de los Planes de Gestión de Calidad del Aire (AQMP) en Sudáfrica.
- *Identificación de Suposiciones Críticas*: Se identificaron 15 suposiciones críticas que pueden impactar significativamente el éxito o fracaso del sistema AQMP.
- *Narrativa Causal*: Se creó una narrativa causal que describe las relaciones y secuencias de eventos necesarios para que el AQMP alcance sus resultados previstos.
- *Riesgos Clave*: Se identificaron riesgos clave que subyacen al desarrollo e implementación del AQMP en Sudáfrica.
],
gráficos: [
Hay una figura destacada en el artículo: *Figura 2*. Esta figura muestra un *mapa conceptual de la Teoría del Cambio (ToC)* para el proceso de desarrollo e implementación del Plan de Gestión de la Calidad del Aire (AQMP) en Sudáfrica.
- *Visualización del Proceso*: La figura desglosa el proceso en seis componentes clave: diseño, insumos, actividades, resultados inmediatos e intermedios, y el impacto final.
- *Identificación de Supuestos Críticos*: Resalta 15 supuestos críticos que pueden afectar el éxito o fracaso del AQMP.
- *Causalidad y Narrativa*: Ayuda a entender las relaciones causales y la secuencia de eventos necesarios para alcanzar los objetivos del AQMP.
Esta figura es crucial para comprender cómo se espera que funcione el AQMP y los desafíos potenciales que podrían surgir.
],
interpretaciones: [
Los autores interpretan los resultados de la siguiente manera:
- *Identificación de Suposiciones Críticas*: Los autores identificaron 15 suposiciones críticas que pueden afectar significativamente el éxito o fracaso del sistema de Planes de Gestión de la Calidad del Aire (AQMP) en Sudáfrica. Estas suposiciones son fundamentales y deben gestionarse adecuadamente.
- *Riesgos Asociados*: Se identificaron riesgos clave que subyacen en el desarrollo e implementación de los AQMP. Estos riesgos pueden obstaculizar la efectividad de los planes si no se abordan.
- *Necesidad de Investigación Adicional*: Los autores sugieren que es crucial realizar más investigaciones para probar la validez de estas suposiciones críticas. Esto ayudará a encontrar soluciones para evitar o mitigar los riesgos asociados.
- *Importancia del Enfoque de Teoría del Cambio (ToC)*: La aplicación del enfoque de ToC es esencial para evaluar la efectividad de los AQMP y entender los desafíos en su desarrollo e implementación.
Estas conclusiones se justifican mediante un análisis detallado y la participación de diversos actores y expertos en el campo.
],
limitaciones: [
El artículo menciona varias limitaciones en el contexto de la planificación y gestión de la calidad del aire en Sudáfrica. Algunas de las limitaciones destacadas son:
- *Compromiso y Participación*: Existe una falta de compromiso y participación de los reguladores y las partes interesadas, lo que afecta la implementación efectiva de los Planes de Gestión de la Calidad del Aire (AQMP).
- *Marco Legal Débil*: La debilidad en el marco legal y la calidad de los datos de calidad del aire y los inventarios de emisiones son desafíos significativos.
- *Recursos y Capacidades*: La disponibilidad de recursos, como tiempo, dinero, datos, información, habilidades y competencias, es crucial para el desarrollo y la implementación efectiva de los AQMP.
- *Asunciones Críticas*: La identificación y gestión de las asunciones críticas son esenciales para el éxito de los AQMP. Si estas asunciones no son válidas, el plan probablemente no funcionará como se espera.
Estas limitaciones subrayan la necesidad de una evaluación continua y ajustes en el diseño y la implementación de los AQMP.
],
implicaciones-prácticas: [
La identificación de suposiciones críticas y riesgos en la planificación de la gestión de la calidad del aire (AQMP) en Sudáfrica puede ayudar a mejorar la efectividad de estas políticas. Esto es crucial para abordar los desafíos de calidad del aire en áreas prioritarias y proteger la salud pública.
],
futuras-sugerencias: [
Se recomienda realizar más estudios para validar las suposiciones críticas identificadas. Esto proporcionará soluciones para evitar o mitigar los riesgos asociados con la implementación de AQMPs y mejorar su desarrollo y efectividad.
],
fuertes: [
Algunos de los puntos fuertes del artículo son:
- *Metodología sólida*: El uso del enfoque de la *Teoría del Cambio (ToC)* para identificar supuestos críticos y riesgos en la planificación de la gestión de la calidad del aire es un enfoque bien fundamentado y ampliamente reconocido.
- *Estructura clara*: El artículo está bien organizado, con una clara división en componentes de diseño, insumos, actividades, resultados e impactos, lo que facilita la comprensión del proceso de desarrollo e implementación de los AQMP.
- *Relevancia*: La investigación aborda un problema global significativo, la calidad del aire, y proporciona un estudio de caso detallado de Sudáfrica, lo que puede ofrecer lecciones valiosas para otros países.
- *Participación de múltiples partes interesadas*: La inclusión de talleres con especialistas, académicos, consultores, reguladores y organizaciones no gubernamentales asegura una perspectiva amplia y diversa en la evaluación de los AQMP.
],
débiles: [
Algunas limitaciones y debilidades del artículo son:
- *Participación y compromiso de los interesados*: La falta de compromiso y participación de los reguladores y las partes interesadas puede ser un obstáculo significativo para la implementación efectiva de los AQMP.
- *Capacidad técnica y recursos*: La disponibilidad de recursos técnicos y financieros es crucial. La falta de habilidades y competencias necesarias puede limitar la efectividad de los planes.
- *Calidad de los datos*: La precisión y disponibilidad de datos sobre la calidad del aire y las emisiones son fundamentales para el éxito de los AQMP.
- *Evaluación continua*: La necesidad de una evaluación continua y ajustes en los planes para asegurar que se cumplan los objetivos a largo plazo.
],
contribuciones: [
Este artículo resulta útil para la formación académica y la práctica profesional en el área de gestión de la calidad del aire. Algunos puntos clave son:
- *Enfoque en la Teoría del Cambio (ToC)*: El artículo utiliza la ToC para identificar suposiciones críticas y riesgos en la planificación de la gestión de la calidad del aire, lo que puede ayudar a comprender mejor cómo evaluar y mejorar las políticas ambientales.
- *Estudio de Caso en Sudáfrica*: Proporciona un análisis detallado de los desafíos y éxitos en la implementación de Planes de Gestión de la Calidad del Aire (AQMP) en Sudáfrica, ofreciendo lecciones valiosas que pueden aplicarse en otros contextos.
- *Identificación de Suposiciones y Riesgos*: La identificación de 15 suposiciones críticas y sus riesgos asociados puede servir como una guía para evitar o mitigar problemas en la implementación de políticas ambientales.
- *Metodología Detallada*: La descripción de la metodología utilizada para desarrollar el mapa de ToC y el marco narrativo puede ser una referencia útil para futuros estudios o proyectos en el campo de la gestión ambiental.
],
respuesta: [
Para responder a la pregunta de investigación sobre la identificación de suposiciones críticas y riesgos en la planificación de la gestión de la calidad del aire en Sudáfrica, el artículo utiliza el enfoque de la Teoría del Cambio (ToC) para evaluar la efectividad de los Planes de Gestión de la Calidad del Aire (AQMP). Algunas respuestas clave son:
- *Importancia del aire*: El aire es vital para la vida en la Tierra, pero la contaminación del aire es un problema global que afecta la salud humana y el medio ambiente.
- *Planes de Gestión de la Calidad del Aire (AQMP)*: Son instrumentos estratégicos adoptados por muchos países para mejorar la calidad del aire. Sin embargo, su implementación en países de ingresos bajos y medios ha sido limitada.
- *Desafíos en Sudáfrica*: A pesar de tener un marco legal avanzado, Sudáfrica enfrenta serios desafíos de calidad del aire, especialmente en áreas prioritarias.
- *Enfoque de la Teoría del Cambio (ToC)*: Este enfoque se utiliza para identificar supuestos críticos y riesgos en la planificación y ejecución de los AQMP, proporcionando una base para evaluar su efectividad.
],
)
#pagebreak()
#bibliography(
"ref.bib",
title: [Referencias],
) |
https://github.com/polarkac/MTG-Stories | https://raw.githubusercontent.com/polarkac/MTG-Stories/master/stories/012%20-%20Conspiracy/003_Like%20Cogwork.typ | typst | #import "@local/mtgstory:0.2.0": conf
#show: doc => conf(
"Like Cogwork",
set_name: "Conspiracy",
story_date: datetime(day: 28, month: 05, year: 2014),
author: "<NAME>",
doc
)
#strong[Academy at High Paliano Board of Senior Advisors] #linebreak() Meeting called to order by Chancellor Grinaldi.
#strong[Members present:] #linebreak() Chancellor Grinaldi#linebreak() Vice Chancellor Alendis#linebreak() Professor Emralla#linebreak() Professor Fimarell#linebreak() Professor Muzzio#linebreak() Professor Tulando
#strong[Members absent:] #linebreak() Professor Regness (sabbatical)
#strong[Business] #linebreak() Motion from Professor Muzzio: For Professor Muzzio to become vice chancellor once Alendis retires.#linebreak() Vote: 1 in favor, 5 against#linebreak() Resolved: motion failed
Motion from Chancellor Grinaldi: To elect Professor Tulando vice chancellor once Alendis retires.#linebreak() Vote: 4 in favor, 1 against, 1 abstain#linebreak() Resolved: motion carried
#v(0.35em)
#line(length: 100%, stroke: rgb(90%, 90%, 90%))
#v(0.35em)
"Do you think he was angry?" the ancient professor asked his colleague as they walked into the academy's antechamber.
"No, Tulando, of course not," the chancellor replied. "Muzzio is a practical man. He showed no emotion when we rendered the vote. I swear he is no better than a machine."
"You fail to give him credit, Chancellor. His inventions have revolutionized our society. We now rely on his work in one form or another."
"Oh, of course," the chancellor replied. "But that's why we need him in a workshop and not behind a desk."
The professor looked around the empty marble antechamber. It was night, and none were around, but still the professor found it best to lower his voice.
"You've heard the rumors about him, I take it?"
The chancellor scoffed.
"Spare me. That he is an agent of the Black Rose? Or the one that he is still the patron of the dropout, Sydri?"
"He most certainly killed Daretti."
"If he did, he did us a favor," the chancellor said. He instantly regretted the statement, the night's meeting and late hours raising his temper. "I'll hear no more of this sort of talk. The matter is resolved."
The professor nodded to the chancellor and they both parted ways.
#v(0.35em)
#line(length: 100%, stroke: rgb(90%, 90%, 90%))
#v(0.35em)
To Muzzio, the matter was far from resolved. He sat in his workshop, the one unknown to his colleagues, surrounded by dozens of half-finished, half-tinkered devices. Among the clutter of books and parts, Muzzio contemplated. He was not elected vice chancellor, which changed months of delicate planning.
#figure(image("003_Like Cogwork/01.jpg", width: 100%), caption: [Muzzio, Visionary Architect | Art by Volkan Baga], supplement: none, numbering: none)
Unlike his contemporaries, who would have crumpled up all their plans and notes in a rage, Muzzio had collected them all, ensuring they were flat and unbent, and filed them away. #emph[Never know when I might need them again] , he thought. His mind raced with hundreds of scenarios, of blueprints of events unfolding. He needed to work through his thoughts.
After the meeting, he had summoned for his apprentice, Irie. A young man from the Low City, Irie hadn't the funds to enter the academy. Muzzio had seen the potential in the boy and brought him on as his apprentice. Irie kept up Muzzio's workshop in exchange for the same lessons others spent family fortunes to obtain—even if most of the boy's time was spent retrieving parts and books from the Grand Library. Muzzio had spent a few months training Irie, but he needed to accelerate the lessons.
Out of breath, Irie climbed the stairs into the workshop.
#figure(image("003_Like Cogwork/02.jpg", width: 100%), caption: [Iterative Analysis | Art by <NAME>], supplement: none, numbering: none)
"I'm sorry, Master," Irie said hurriedly. "I came as quickly as I could."
"You came as quickly as I calculated you would," Muzzio replied, standing from his desk. "No need to apologize when I inconvenience you."
Muzzio walked over to one of his many cluttered bookshelves. On it sat the helm of an early model of one of his sentinel constructs. He spun it counterclockwise where it sat, and the book shelf lowered into the floorboards, revealing marble stairs that descended down a spiral staircase. Irie pretended to look amazed, having already found the secret passage on the second day of his apprenticeship. Muzzio knew Irie had found it, and that he was feigning shock. Irie suspected his master knew he had been down there before, but both were more than willing to play the game of ignorance.
They walked down the brightly lit stairwell and emerged in a large room, where more than a hundred mechanical constructs stood in rows. At the head of the room, where Muzzio and Irie stood, was Muzzio's true workshop—large tables where Muzzio could tend to his creations like a doctor tends to patients; parts put in deliberate places littered the workspace.
The centerpiece of the room, surrounded by whirring noises of various machines and the iron army, was a scale model of Paliano, with both the High City and Low City replicated in amazing detail. It took up nearly one third of the room. Irie had spent hours vetting its accuracy and was unable to find flaw in its design. The High City towered over the Lower, just as its full-scale version. The Corru River was painted through the Lower City, every twist and turn replicated. The houses themselves were not as intricate as they were in reality, but important locations, like the palace or the academy, were ornate and delicately painted.
Above it, a cogwork device built into the ceiling moved a false moon, and in day was replaced with a bright light that travelled across the fake city in real time. When it rained, Irie noted that the device ran tufts of cotton across tracks in the ceiling to replicate the clouds. There were no figures of people in the city, but Irie suspected that was how his master preferred it.
Muzzio had already begun working on a construct soldier. Irie was doing his best to pretend like he was taking the sights in for the first time.
"Have you ever killed anyone, Irie?" Muzzio asked calmly, as he replaced a gear in the construct.
#figure(image("003_Like Cogwork/03.jpg", width: 100%), caption: [Muzzio's Preparations | Art by <NAME>], supplement: none, numbering: none)
"No, of course not, Sir," the boy replied.
"Do you think I have killed anyone?"
Irie was taken aback by the question and tried to come up with a meaningful response, but could only reply, "Yes."
Without emotion, Muzzio replied, "How unfortunate. I would have hoped you thought more of me."
"Apologies master, I... it's just... you hear things."
"Never believe a word you hear in Paliano, Irie, unless it comes from me." Muzzio removed plating from another part of the construct, taking a jeweler's eye and small tools to the exposed insides. "No, I am proud to say I've never killed anyone, nor have I had a need to. At least, not yet."
"That is quite a relief, Master," Irie said.
Muzzio looked up at him from his hunched-over position and stared through the jeweler's eye. "Don't grovel."
Irie nodded.
"All of the mechanical wonders our city knows today came from me. I do not wish to brag, simply to demonstrate that I not only have vast intelligence, but that I know how to apply it to the greater good. Every construct in Paliano is either built with my designs or from my designs. The magic that fuels them may come from various sources, but the devices themselves owe their allegiance to me."
"Do you mean that you can control them?" Irie asked.
"I can, but I do not need to. For every obstacle toward my grand design, there is a very simple, nonviolent solution: information. Within each of the constructs is a series of needles that transcribes all that they hear onto wax cylinders, which my sneak constructs can retrieve for me. You would be amazed at what people speak of when they think they are in the presence of a nonentity."
Irie felt like he knew.
"The people scurry about, but on every corner, and now in nearly every shop, one of my constructs tends to them. My creations file their documents, count their money."
"Then is that your 'grand design,' to replace people with the machines?" Irie asked. "There are no people in your vision of the future?"
#figure(image("003_Like Cogwork/04.jpg", width: 100%), caption: [Deal Broker | Art by Cliff Childs], supplement: none, numbering: none)
Muzzio laughed, which unnerved Irie.
"Of course not! Everything I do, I do for people, to make their lives better."
"But the way you describe the city, it's like you want everything to run like a clock."
"That is a nice goal," Muzzio replied. "But foolhardy. The human variables are what will always throw off any plan for clockwork perfection one could hope to attain. I have met some who have been to amazing places and speak of ancient warring artificers and of the perfect worlds they wanted to create. There are even rumors of a place where the perfection of machines blends inseparably with the vitality of organic life. I hope we can one day be like those places. I must mitigate the variables, as best I can, to help society move forward."
Muzzio closed the panel on the construct.
"A real artificer," he continued, "can step away from a creation and know it will continue to function on its own. But until I know I can step away, I must tinker and keep everything as it needs to be. I do not make the parts, I'm merely assembling them."
The construct jittered, then began to move its appendages. It pushed itself from the table and walked toward an empty spot in the ranks of the other soldiers.
"I do not need to be in charge," Muzzio continued, standing with his arms behind his back, admiring his soldiers. "The vice chancellor position would have given me the autonomy and power needed to move into the next phase of my plan. I wasn't elected vice chancellor, which based on my projections I should have easily taken. But the death of Brago and his seeming ascension, which I couldn't foresee, caused them to vote more cautiously."
"What do you plan to do?" Irie asked. "What do you need me to do for all of this?"
"To watch, listen, and learn," Muzzio replied. "You are my student, after all."
#v(0.35em)
#line(length: 100%, stroke: rgb(90%, 90%, 90%))
#v(0.35em)
#figure(image("003_Like Cogwork/05.jpg", width: 100%), caption: [Cogwork Librarian | Art by <NAME>], supplement: none, numbering: none)
Over the next few days, Muzzio's constructs received new orders.
Professor Emralla found that the bank no longer had record of her money. The magister assured her that no living soul had been into the vaults, nor would any have been able to. Behind the magister, constructs went about counting coin, no more nefarious than a broom or spade, moving currency from one pile to the next. Emralla understood clerical errors, but she had just been made aware her latest payment on her estate in the Santuo District—which she knew she had given to the delivery construct—never made it to the loan house. She went to sort out the situation, but quickly found that due to clerical errors, the home was not properly registered in her name and she was to be evicted. The ink on the quill of the bookkeeping construct hadn't even yet dried.
Professor Tulando feared a wanton construct attacking him in the streets. He had never taken to the machines and had none in his home. He peered nervously through windows, barely able to sleep at night. #emph[There is nothing to worry about] , he told himself. #emph[Muzzio is a reasonable man. The rumors are just rumors.] He was almost over his fears when he went early to his breakfast table one morning. His servants were not yet arrived to prepare his meal, but there was a stack of papers where his food normally sat. The papers documented, quite thoroughly, how Tulando had misappropriated academy funds for his own fortune, even going so far as to show covert dealings with the smuggler Ervos Trax. There were signed documents, and even one of these papers would result in his arrest and termination. Tulando was innocent of all these crimes, but the message was clear. He resigned less than an hour later.
Chancellor Grinaldi's money was not touched, his titles were not altered, nor was he unjustly framed or blackmailed. He was having an affair, and a construct recorded this information. The details were documented and a blank envelope left outside the chancellor's house for his wife to find. The chancellor was forced to leave his position to fix his personal life.
The equation remained the same, but the variables were different.
#v(0.35em)
#line(length: 100%, stroke: rgb(90%, 90%, 90%))
#v(0.35em)
#strong[Academy at High Paliano Board of Senior Advisors] #linebreak() Meeting called to order by Vice Chancellor Alendis.
#strong[Members present:] #linebreak() Vice Chancellor Alendis#linebreak() <NAME>#linebreak() Professor Muzzio#linebreak() <NAME>
#strong[Members absent:] #linebreak() Chancellor Grinaldi (resigned)#linebreak() Professor Emralla (sabbatical)#linebreak() Professor Tulando (resigned)
#strong[Business] #linebreak() Motion from Professor Muzzio: For Professor Muzzio to become vice chancellor once Alendis becomes chancellor.#linebreak() Vote: 1 in favor, 0 against, 3 abstain#linebreak() Resolved: motion carried
|
|
https://github.com/Myriad-Dreamin/typst.ts | https://raw.githubusercontent.com/Myriad-Dreamin/typst.ts/main/fuzzers/corpora/text/lang_08.typ | typst | Apache License 2.0 |
#import "/contrib/templates/std-tests/preset.typ": *
#show: test-page
//
// // Error: 19-24 expected two letter region code (ISO 3166-1 alpha-2)
// #set text(region: "hey") |
https://github.com/isaacholt100/isaacholt100.github.io | https://raw.githubusercontent.com/isaacholt100/isaacholt100.github.io/master/maths-notes/2-durham%3A-year-2/amv/amv.typ | typst | #import "../../template.typ": template
#show: template
= Maps between real vector spaces
- *Scalar field*: maps $RR^n -> RR$.
- *Vector field*: maps $RR^n -> RR^n$.
- *Curve*: maps $RR -> RR^n$.
- A *tangent* to a curve $underline(x)(t)$ is given by $(d underline(x)) / (d t)$.
- The *arc-length* parameterisation of a curve $underline(x)$ is such that $ abs((d underline(x)(s)) / (d s)) = 1 quad forall s $
- *Partial derivatives*: $ (diff f(underline(x))) / (diff x_a) = lim_(h -> 0) (f(underline(x) + h underline(e)_a) - f(underline(x))) / h $
- *Chain rule*: for a scalar field $f(underline(x))$ and curve $underline(x)(t) = x_1(t) underline(e)_1 + ... + x_n(t) underline(e)_n$, $ (d f(underline(x)(t))) / (d t) = sum_(i = 1)^n (diff f(underline(x))) / (diff x_i) (d x_i) / (d t) $ Here $F(t) := f(underline(x)(t))$ is the restriction of $f(underline(x))$ to the curve $underline(x)(t)$.
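- *Example* (illustrative choice of $f$ and curve): for $f(x, y) = x y$ and the curve $x(t) = cos(t)$, $y(t) = sin(t)$, $ (d f(underline(x)(t))) / (d t) = y(t) (-sin(t)) + x(t) cos(t) = cos(t)^2 - sin(t)^2 = cos(2t) $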
= The gradient of a scalar field
- *Differential operator*: maps functions to functions, e.g. $ d/(d t) = sum_(i = 1)^n (diff ) / (diff x_i) (d x_i) / (d t) $
- Let $f, g: RR -> RR$, then
- $f(x) d / (d x)$ is a differential operator. It acts on $g(x)$ to give $f(x) (d g(x)) / (d x)$.
- $d / (d x) f(x)$ is a differential operator. It acts on $g(x)$ to give $d / (d x) (f(x) g(x))$.
- $(d / (d x) f(x))$ is a differential operator. It acts on $g(x)$ to give $(d f(x)) / (d x) g(x)$.
- *del (or nabla)*: $underline(nabla) = sum_(i = 1)^n (diff ) / (diff x_i) underline(e)_i $ so $d / (d t) = underline(nabla) . (d underline(x)(t)) / (d t)$.
- *gradient* of a scalar field $f: RR^n -> RR$: $ underline(nabla) f ident "grad"(f) = sum_(i = 1)^n (diff f) / (diff x_i) underline(e)_i $
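- *Example* (illustrative choice of $f$): for $f(x, y, z) = x^2 y + z^3$, $ underline(nabla) f = 2 x y underline(e)_1 + x^2 underline(e)_2 + 3 z^2 underline(e)_3 $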
- *Directional derivative* of $f: RR^n -> RR$ in direction of a unit tangent $underline(hat(n)) = (d underline(x)(s)) / (d s)$ to a curve $x: RR -> RR^n$: $ (d f(underline(x)(s))) / (d s) = underline(hat(n)) . underline(nabla) f ident (d f) / (d underline(hat(n))) $ where $underline(x)$ is parameterised in terms of arc-length $s$.
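- *Example* (illustrative): for $f(x, y) = x^2 + y^2$ at $(1, 2)$ with $underline(hat(n)) = 1/sqrt(2) (underline(e)_1 + underline(e)_2)$, $ (d f) / (d underline(hat(n))) = underline(hat(n)) . underline(nabla) f = 1/sqrt(2) (2 + 4) = 3 sqrt(2) $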
- $underline(nabla) f$ at a point $underline(p)$ is orthogonal to curves contained in level set of $f$ at $underline(p)$.
- $underline(nabla) f$ points in the direction where $f$ increases fastest.
- *Properties of the gradient*: let $f, g: RR^n -> RR$, $a, b in RR$, $phi: RR -> RR$, then
- $underline(nabla) (a f + b g) = a underline(nabla) f + b underline(nabla) g$
- $underline(nabla) (f g) = f underline(nabla) g + g underline(nabla) f$
- $underline(nabla) phi(f) = (underline(nabla) f) (d phi) / (d f)$
= $underline(nabla)$ acting on vector fields
- *Divergence* of a vector field $underline(v)(underline(x)) = sum_(i = 1)^n v_i(underline(x)) underline(e)_i$: $ underline(nabla) . underline(v) ident "div"(underline(v)) = sum_(i = 1)^n (diff v_i) / (diff x_i) $ Note that the formula will be different in other coordinates systems. Also $underline(nabla) . underline(v) != underline(nabla) dot.op underline(v)$.
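- *Example* (illustrative choice of $underline(v)$): for $underline(v) = x^2 y underline(e)_1 + y z underline(e)_2 + z^2 underline(e)_3$, $ underline(nabla) . underline(v) = 2 x y + z + 2 z = 2 x y + 3 z $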
- Considering a vector field as a fluid, if the divergence at a point is positive the vector field acts as a *source* at that point (more fluid leaving than entering), if the divergence is negative the vector field acts as a *sink* at that point (more fluid entering than leaving). The magnitude of vector at point is the rate of flow at that point and direction of vector is direction of flow.
- *Properties of div*: for $f: RR^n -> RR$, $underline(v), underline(w): RR^n -> RR^n$, $a, b in RR$,
- $underline(nabla) . (a underline(v) + b underline(w)) = a underline(nabla) . underline(v) + b underline(nabla) . underline(w)$
- $underline(nabla) . (f underline(v)) = (underline(nabla) f) . underline(v) + f underline(nabla) . underline(v)$
#set math.mat(delim: "|")
- *Curl* of $underline(v): RR^n -> RR^n$: $ underline(nabla) times underline(v) ident "curl"(underline(v)) = mat(
underline(e)_1, underline(e)_2, underline(e)_3;
(diff ) / (diff x), (diff ) / (diff y), (diff ) / (diff z);
v_1, v_2, v_3
) = underline(e)_1 ( (diff v_3) / (diff y) - (diff v_2) / (diff z) ) - underline(e)_2 ( (diff v_3) / (diff x) - (diff v_1) / (diff z) ) + underline(e)_3 ( (diff v_2) / (diff x) - (diff v_1) / (diff y) ) $
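- *Example* (illustrative): for the rotational field $underline(v) = -y underline(e)_1 + x underline(e)_2$ (third component zero), $ underline(nabla) times underline(v) = 2 underline(e)_3 $ i.e. the field rotates about the $z$-axis.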
- Considering a vector field as a fluid, the magnitude of the curl at a point corresponds to the rotational speed of the fluid, and the direction of the curl corresponds to which axis the fluid is rotating around, determined using the right-hand rule (fingers represent rotation of the fluid, thumb points in direction of curl).
- *Properties of curl*: for $f: RR^3 -> RR$, $underline(v), underline(w): RR^3 -> RR^3$, $a, b in RR$,
- $underline(nabla) times (a underline(v) + b underline(w)) = a underline(nabla) times underline(v) + b underline(nabla) times underline(w)$
- $underline(nabla) times (f underline(v)) = (underline(nabla) f) times underline(v) + f underline(nabla) times underline(v)$
- *Laplacian* of $f: RR^n -> RR$: $ Delta f ident underline(nabla)^2 f := underline(nabla) . (underline(nabla) f) = "div"("grad"(f)) = sum_(i = 1)^n (diff^2 f)/(diff x_i^2) $ Note this formula is only valid for *cartesian* coordinates.
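- *Example* (illustrative): for $f(x, y, z) = x^2 + y^2 + z^2$, $underline(nabla)^2 f = 2 + 2 + 2 = 6$.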
= Index notation
- *Einstein summation convention*: in an expression involving a summation, then index of summation always appears twice. The convention is that the summation sign is removed, and whenever an index appears twice, it is summed over.
- *Dummy indices*: repeated indices. They can be renamed without changing the expression.
- *Free indices*: non-repeated indices. They must match on both sides of an equation.
- An index can't be repeated more than twice in the same term, so $(underline(u) . underline(v))^2 = u_i v_i u_j v_j != u_i v_i u_i v_i$.
- *Kronecker delta*: $ delta_(i j) := cases(
1 & "if" i = j \
0 & "if" i != j
) quad = (diff x_i) / (diff x_j) $
- If $delta_(i j)$ has a dummy index $i$, then remove the $delta_(i j)$ and replace the dummy index $i$ by $j$ in the rest of the expression.
- *Levi-Civita symbol*: $ epsilon_(i j k) = -epsilon_(j i k) = -epsilon_(i k j) quad "(antisymmetry)", quad epsilon_(123) = 1 $
- *Properties of $epsilon_(i j k)$*:
- $epsilon_(i j k) = -epsilon_(k j i)$
- $epsilon_(i j k) = 0$ if $i = j$ or $j = k$ or $k = i$
- If $epsilon_(i j k) != 0$ then $(i med j med k)$ is a permutation of $(1 med 2 med 3)$.
- $epsilon_(i j k) = 1$ if $(i med j med k)$ is an even permutation of $(1 med 2 med 3)$ (even number of swaps).
- $epsilon_(i j k) = -1$ if $(i med j med k)$ is an odd permutation of $(1 med 2 med 3)$ (odd number of swaps).
- $epsilon_(i j k) = epsilon_(j k i) = epsilon_(k i j)$ (cyclic permutation).
- The cross product $underline(C) = underline(A) times underline(B)$ can be written as $C_i = epsilon_(i j k) A_j B_k$.
- *Very useful $epsilon_(i j k)$ formula*: $ sum_(k = 1)^3 epsilon_(i j k) epsilon_(k l m) = delta_(i l) delta_(j m) - delta_(i m) delta_(j l) $
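- *Example* (a standard identity derived with this formula as a check): $ (underline(A) times (underline(B) times underline(C)))_i = epsilon_(i j k) A_j epsilon_(k l m) B_l C_m = (delta_(i l) delta_(j m) - delta_(i m) delta_(j l)) A_j B_l C_m = B_i (underline(A) . underline(C)) - C_i (underline(A) . underline(B)) $ so $underline(A) times (underline(B) times underline(C)) = underline(B)(underline(A) . underline(C)) - underline(C)(underline(A) . underline(B))$.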
- Notation: $diff_i := (diff ) / (diff x_i)$.
- $underline(nabla) . underline(v) = (diff v_i) / (diff x_i) = diff_i v_i$.
- $(underline(nabla) times underline(v))_i = epsilon_(i j k) (diff ) / (diff x_j) v_k = epsilon_(i j k) diff_j v_k$.
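- *Example* (quick check in index notation): for smooth $f$, $(underline(nabla) times underline(nabla) f)_i = epsilon_(i j k) diff_j diff_k f = 0$, since $epsilon_(i j k)$ is antisymmetric in $j, k$ while $diff_j diff_k f$ is symmetric.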
= Differentiability of scalar fields
- *$f(underline(x))$ tends to $L$ as $x$ tends to $a$*: $ lim_(underline(x) -> underline(a)) f(underline(x)) = L <==> forall epsilon > 0, exists delta > 0, forall underline(x), 0 < |underline(x) - underline(a)| < delta ==> |f(underline(x)) - L| < epsilon $
- Scalar field $f$ *continuous* at $underline(a)$ if $lim_(underline(x) -> underline(a)) f(underline(x))$ exists and is equal to $f(underline(a))$
- If $f$ and $g$ are continuous scalar fields at $underline(a)$ then so are:
- $f + g$
- $f g$
- $f \/ g$ (if $g(underline(a)) != 0$)
- $f(underline(x)) = c$ for a constant $c$ is continuous at every $underline(x) in RR^n$
- $f(underline(x)) = x_a$, $a in {1, ..., n}$ is continuous at every $underline(x) in RR^n$
- *Open ball, centre $underline(a)$, radius $delta > 0$*: $ B_delta(underline(a)) := {underline(x) in RR^n: |underline(x) - underline(a)| < delta} $
- $S subset.eq RR^n$ *open* if $forall underline(a) in S$, $exists delta > 0$ such that $B_delta(underline(a)) subset.eq S$
- *Neighbourhood* $N subset.eq RR^n$ of $underline(a) in RR^n$: contains an open set containing $underline(a)$
- $S subset.eq RR^n$ *closed* if its complement $RR^n - S$ is open
- Every open ball is open
- Let $U subset.eq RR^n$ be open and $f: U -> RR$. $f$ is *continuous on $U$* if it is continuous at every $underline(a) in U$
- Let $U subset.eq RR^n$ be open and $f: U -> RR$. $f$ is *differentiable* at $underline(a) in U$ if for some vector $underline(v)(underline(a))$, $ f(underline(a) + underline(h)) - f(underline(a)) = underline(h) . underline(v)(underline(a)) + R(underline(h)), quad lim_(underline(h) -> underline(0)) R(underline(h)) / abs(underline(h)) = 0 $ If $underline(v)(underline(a))$ exists, $underline(v)(underline(a)) = underline(nabla) f$
- *Warning*: $underline(nabla) f$ being defined at a point does not imply that $f$ is differentiable at that point.
- Let $U subset.eq RR^n$ be open and $f: U -> RR$. Then $f$ is differentiable at $underline(a)$ if all partial derivatives of $f$ exist and are continuous in a neighbourhood of $underline(a)$
- Function is *continuously differentiable* at $underline(a)$ if it and all its partial derivatives exist and are continuous at $underline(a)$. It is *continuously differentiable* on an open $U$ if it and all its partial derivatives exist and are continuous on $U$.
- Continuous differentiability implies differentiability.
- *Smooth function*: partial derivatives of all orders exist.
- Let $U subset.eq RR^n$ be open. If $f, g: U -> RR$ differentiable (or smooth) at $underline(a) in RR^n$ then so are:
- $f + g$
- $f g$
- $f \/ g$ (if $g(underline(a)) != 0$)
- Let $U subset.eq RR^n$ be open, $f: U -> RR$ be differentiable, $underline(x)$ be a function of $u_1, ... u_m$ where all partial derivatives $(diff x_i) / (diff u_j)$ exist. Let $F(u_1, ... u_m) = f(underline(x)(u_1, ... u_m))$, then $ (diff F) / (diff u_b) = (diff underline(x)) / (diff u_b) . underline(nabla) f $
- *Level set* of $f: U -> RR$, $U subset.eq RR^n$ open, is the set ${underline(x) in U: f(underline(x)) = c}$ for some $c in RR$. For $n = 2$ it is called a *level curve*.
- *Implicit function theorem for level curves*: if $f: U -> RR$ is differentiable, and $(x_0, y_0) in U$ is on the level curve $f(x, y) = c$ where $(diff f) / (diff y) (x_0, y_0) != 0$, then there exists a differentiable function $g(x)$ in a neighbourhood of $x_0$ satisfying $ g(x_0) = y_0 \ f(x, g(x)) = c \ (d g) / (d x) = - ((diff f(x, g(x))) / (diff x)) / ((diff f(x, g(x))) / (diff y)) $
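- *Example* (illustrative): for $f(x, y) = x^2 + y^2$ and the level curve $f = 2$ at $(1, 1)$: $(diff f)/(diff y) = 2 != 0$, so locally $y = g(x)$ with $(d g)/(d x) = -(2x)/(2y) = -1$ at $(1, 1)$.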
- *Critical point*: point of level curve $f(x, y) = c$ where $underline(nabla) f = underline(0)$. $c$ is a *critical value*, otherwise it is a *regular value*.
- At a critical point, the level curve can't be written as either $y = g(x)$ or as $x = h(y)$ in a neighbourhood of $Q$, with $g, h$ differentiable.
- *Implicit function theorem for level surfaces*: Let $f: U -> RR$ be differentiable, $U subset.eq RR^3$ open, $(x_0, y_0, z_0) in U$ be on the level set $f(x, y, z) = c$. If $(diff f) / (diff z) (x_0, y_0, z_0) != 0$ then $f(x, y, z) = c$ defines a surface $z = g(x, y)$ in a neighbourhood of $(x_0, y_0, z_0)$, where $ f(x, y, g(x, y)) = c \ g(x_0, y_0) = z_0 \ (diff g) / (diff x) (x_0, y_0) = -((diff f) / (diff x) (x_0, y_0, z_0)) / ((diff f) / (diff z) (x_0, y_0, z_0)) \ (diff g) / (diff y) (x_0, y_0) = -((diff f) / (diff y) (x_0, y_0, z_0)) / ((diff f) / (diff z) (x_0, y_0, z_0)) $
- $underline(nabla) f (x_0, y_0, z_0)$ is normal to the tangent plane of the level set $z = g(x, y)$ at $(x_0, y_0)$. So the normal line is given by $ underline(x)(t) = underline(x_0) + t underline(nabla) f $ and the tangent plane is given by $ (underline(x) - underline(x_0)) . underline(nabla) f = 0 $
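- *Example* (illustrative): for the level surface $x^2 + y^2 + z^2 = 3$ at $(1, 1, 1)$, $underline(nabla) f = 2 underline(e)_1 + 2 underline(e)_2 + 2 underline(e)_3$, so the tangent plane is $2(x - 1) + 2(y - 1) + 2(z - 1) = 0$, i.e. $x + y + z = 3$.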
= Differentiability of vector fields
- *Jacobian matrix (differential)* of $underline(F)(underline(x))$ at $underline(x) = underline(a)$ (written $D underline(F)(underline(a))$ or $D underline(F)_(underline(a))$): matrix with components $a_(i, j) = (diff F_i) / (diff x_j)$.
- For open $U subset.eq RR^n$, $underline(F): U -> RR^n$ *differentiable* at $underline(a) in U$ if for some *linear* function $underline(L): RR^n -> RR^n$, $ underline(F)(underline(a) + underline(h)) - underline(F)(underline(a)) = underline(L)(underline(h)) + R(underline(h)) $ where $ lim_(underline(h) -> underline(0)) R(underline(h)) / abs(underline(h)) = underline(0) $ Here, $underline(L)(underline(h)) = (D underline(F)(underline(a))) underline(h)$.
- *Jacobian, $J(underline(v))$*: determinant of differential: $J(underline(v)) = det(D underline(v))$
- Can think of vector fields as *coordinate transformations* on $RR^n$.
- *Inverse function theorem*: let $U$ open, $v: U -> RR^n$ differentiable with continuous partial derivatives. If $J(underline(v)(underline(a))) != 0$ then exists open $tilde(U) subset.eq U$ containing $underline(a)$ such that:
- $underline(v)(tilde(U))$ is open and
- Mapping $underline(v)$ from $tilde(U)$ to $underline(v)(tilde(U))$ has differentiable inverse $underline(w): underline(v)(tilde(U)) -> RR^n$ with $underline(v)(underline(w)(underline(x))) = underline(x)$ and $underline(w)(underline(v)(underline(y))) = underline(y)$.
- Map $underline(v): tilde(U) -> V subset.eq RR^n$ which satisfies above two properties is called *diffeomorphism* of $tilde(U)$ onto $tilde(V) = underline(v)(tilde(U))$. $tilde(U)$ and $tilde(V)$ are *diffeomorphic*.
- *Local diffeomorphism*: map $underline(v): U -> V$ where $forall underline(a) in U$, exists open $tilde(U) subset.eq U$ containing $underline(a)$ such that $underline(v): tilde(U) -> underline(v)(tilde(U))$ is diffeomorphism.
- *Chain rule for vector fields*: $ D underline(w)(underline(v)(underline(x))) = D underline(w)(underline(v)) D underline(v)(underline(x)) $
- When $underline(v)$ is local diffeomorphism and $underline(w)$ is its inverse, then $ (D underline(v))^(-1) = D underline(w), quad J(underline(w)) = 1 / J(underline(v)), quad J(underline(v)) != 0 $
- $underline(v)$ is *orientation preserving* if $J(underline(v)) > 0$.
- $underline(v)$ is *orientation reversing* if $J(underline(v)) < 0$.
= Volume, line and surface integrals
- *One dimensional integral*: calculates area under curve. $ integral_a^b f(x) dif x = lim_(n -> infinity) sum_(i = 0)^(n - 1) f(x_i^*) Delta x_i $ where $[a, b]$ partitioned as $a = x_0 < x_1 < dots.h.c < x_n = b$, $Delta x_i = x_(i + 1) - x_i$, $x_i^* in [x_i, x_(i + 1)]$ is arbitrary.
- *Double integral*: calculates volume under surface $z = f(x, y)$ over region $R$. $ integral_R f(x, y) dif A = lim_(N -> infinity) sum_(k = 1)^N f(x_k^*, y_k^*) Delta A_k $ $R$ is split into $N$ rectangle $Delta A_k$. $(x_k^*, y_k^*)$ lies in base of $k$th prism.
- If rectangles chosen on rectangular grid, then $Delta A_k = Delta x_i Delta y_j$ where $Delta x_i = x_(i + 1) - x_i$, $Delta y_j = y_(j + 1) - y_j$, $x$ and $y$ partitioned as $x_0 < x_1 < dots.h.c < x_n$ and $y_0 < y_1 < dots.h.c < y_m$. As before $x_i^* in [x_i, x_(i + 1)]$ and $y_j^* in [y_j, y_(j + 1)]$. Integral is $ integral_R f(x, y) dif A = lim_(n, m -> infinity) sum_(i = 0)^(n - 1) sum_(j = 0)^(m - 1) f(x_k^*, y_k^*) Delta x_i Delta y_j = integral_x (integral_y f(x, y) dif y) dif x $
- *Fubini's theorem*: if $f(x, y)$ continuous over compact (bounded and closed) region $A$, then double integral over $A$ can be written as *iterated integral*, with integrals in either order: $ integral_A f(x, y) dif A = integral_y integral_x f(x, y) dif x dif x = integral_x integral_y f(x, y) dif y dif x $
- *Important*: Fubini's theorem holds if region and/or function is unbounded, provided *double integral absolutely convergent* (integral of $|f(x, y)|$ over $A$ is finite).
- To *calculate area in plane* (e.g. between two curves), set $f(x, y) = 1$: $ "Area of" R = integral_R 1 dif A $
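- *Example* (illustrative): area between $y = x^2$ and $y = x$ for $0 <= x <= 1$: $ integral_0^1 integral_(x^2)^x 1 dif y dif x = integral_0^1 (x - x^2) dif x = 1/6 $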
- *Volume integral*: partition volume $V$ into $N$ volumes $Delta V_i$. $ I = integral_V f(underline(x)) dif V = lim_(N -> infinity) sum_(i = 1)^N f(x_i) Delta V_i $
- If $f(underline(x))$ is density of a quantity, then $I = integral_V f(underline(x)) dif V$ is amount of that quantity.
- To *calculate volume inside surface*, ($S$ is surface which encloses $V$) set $f(x, y, z) = 1$: $ "Volume inside" S = "Volume of" V = integral_V 1 dif V $
- As for double integrals, if $V$ is partitioned parallel to the coordinate planes then $ I = integral_x integral_y integral_z f(x, y, z) dif z dif y dif x $
- Fubini's theorem holds for triple integrals.
- *Regular arc*: curve $underline(x)(t)$ where $x_a(t)$ continuous with continuous first derivatives.
- *Regular curve*: finite number of regular arcs joined end to end.
- *Line integral* of $underline(v)(underline(x))$ along arc $C: t -> underline(x)(t)$, $t in [alpha, beta]$: $ integral_C underline(v) dot.op d underline(x) = integral_alpha^beta underline(v)(underline(x)(t)) dot.op (d underline(x)(t)) / (d t) dif t $
- Line integral doesn't depend on parameterisation of $C$.
- Line integral along regular curve $C$ is sum of line integrals of arcs of $C$. If $C$ is closed, written $integral.cont_C underline(v) dot.op dif underline(x)$.
- *Length of curve*: $ integral_C dif s = integral_a^b norm((d underline(x)(t)) / (d t)) dif t $
- If $f$ is density function, mass is $ integral_C f dif s = integral_a^b f(underline(x)(t)) norm((d underline(x)(t)) / (d t)) dif t $
- If $underline(F)$ is force, work done is $ integral_C underline(F) dot.op dif underline(x) $
- If curve is ellipse $x^2 / a^2 + y^2 / b^2 = 1$, can parameterise as $x(t) = a cos(t)$, $y(t) = b sin(t)$.
- If curve is $y = f(x)$, can parameterise as $x(t) = t$, $y(t) = f(t)$.
- If curve is $x = g(y)$, can parameterise as $x = g(t)$, $y(t) = t$.
- If curve is straight line segment from $(x_0, y_0)$ to $(x_1, y_1)$, can parameterise as $x(t) = (1 - t) x_0 + t x_1$, $y(t) = (1 - t) y_0 + t y_1$.
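- *Example* (illustrative): for $underline(v) = y underline(e)_1 + x underline(e)_2$ along the segment from $(0, 0)$ to $(1, 1)$, take $x(t) = t$, $y(t) = t$, $t in [0, 1]$: $ integral_C underline(v) dot.op dif underline(x) = integral_0^1 (t + t) dif t = 1 $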
- Surface can be given in *parametric form* as $underline(x)(u, v)$ where $u, v in U$ ($U$ is *parameter domain*).
- If surface is $z = f(x, y)$, can parameterise as $x = u, y = v, z = f(u, v)$. Similarly for $y = g(x, z)$ and $x = h(y, z)$.
- For surface $S$ as $underline(x)(u, v)$, *unit normal* vector is $ hat(underline(n)) = underline(a) / |underline(a)|, quad a = ((diff underline(x)(u, v)) / (diff u) times (diff underline(x)(u, v)) / (diff v)) $ (negative of this is also).
- For surface given as *level surface* of scalar field $f$, $f(x, y, z) = c$, *unit normal* vector is $ hat(underline(n)) = (underline(nabla) f) / |underline(nabla) f| $ (negative of this is also).
- Surface $underline(x)(u, v)$ *orientable* if partial derivatives of $underline(x)$ exist and are continuous, and $hat(underline(n))$ is continuous.
- *Surface integral* defined as $ integral_S underline(F) dot.op dif underline(A) = lim_(Delta A_k -> 0) sum_k underline(F)(underline(x)_k^*) dot.op hat(underline(n))_k Delta A_k $
- For surface $underline(x)(u, v)$, $ integral_S underline(F) dot.op dif underline(A) = integral_U underline(F)(underline(x)(u, v)) dot.op ((diff underline(x)) / (diff u) times (diff underline(x)) / (diff v)) dif u dif v $ since $((diff underline(x)) / (diff u) times (diff underline(x)) / (diff v))$ is normal to surface.
- For surface $f(x, y, z) = c$, $ integral_S underline(F) dot.op dif underline(A) = integral_A (underline(F) dot.op underline(nabla) f) / (underline(e_3) dot.op underline(nabla) f) dif x dif y $ where $(x, y)$ range over $A$, $A$ is *projection* of $S$ onto $x, y$ plane.
- If unit normal to surface $S$, $underline(hat(n))$, is known and $underline(F) dot.op underline(hat(n))$ is constant, then $ integral_S underline(F) dot.op dif underline(A) = integral_S underline(F) dot.op underline(hat(n)) dif A = underline(F) dot.op underline(hat(n)) integral_S dif A = underline(F) dot.op underline(hat(n)) times "area of" S $
= Green's, Stokes' and divergence theorems
- *Green's theorem*: let $P(x, y)$ and $Q(x, y)$ be continuously differentiable scalar fields in $2$ dimensions. Then $ integral.cont_C (P(x, y) dif x + Q(x, y) dif y) = integral_A ((diff Q) / (diff x) - (diff P) / (diff y)) dif x dif y $ where $C$ is boundary of $A$ traversed in positive (anticlockwise) direction (imagine walking around $C$ with $A$ to your left).
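- *Example* (illustrative): taking $P = 0$, $Q = x$ gives $integral.cont_C x dif y = integral_A 1 dif x dif y = "area of" A$; for the circle $x = a cos(t)$, $y = a sin(t)$, $t in [0, 2 pi]$, this gives $integral_0^(2 pi) a^2 cos(t)^2 dif t = pi a^2$.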
- *Green's theorem in vector form*: let $underline(F)(x, y, z) = (P(x, y), Q(x, y), R)$, then $ integral.cont_C underline(F) dot.op dif underline(x) = integral_A (underline(nabla) times underline(F)) dot.op underline(e)_3 dif A $
- *Stokes' theorem*: let $underline(F)(x, y, z)$ be continuously differentiable vector field, $S$ in $RR^3$ be surface with area elements $dif underline(A) = hat(underline(n)) dif A$ and boundary curve $C = diff S$, then $ integral.cont_C underline(F) dot.op dif underline(x) = integral_S (underline(nabla) times underline(F)) dot.op dif underline(A) $ Orientation of $C$ and choice of $hat(underline(n))$ or $-hat(underline(n))$ given by *right hand rule*: curl fingers of right hand and extend thumb. When thumb points in direction of surface normal, fingers point in direction of orientation of boundary, and vice versa. (Equivalently, if you stood on boundary with head pointing in direction of normal, and walked around boundary with surface on your left, direction of walking is direction of orientation of boundary.)
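- *Example* (quick check of Stokes' theorem): for $underline(F) = -y underline(e)_1 + x underline(e)_2$ and $S$ the unit disc in the plane $z = 0$ with $hat(underline(n)) = underline(e)_3$: $underline(nabla) times underline(F) = 2 underline(e)_3$, so $integral_S (underline(nabla) times underline(F)) dot.op dif underline(A) = 2 pi$; parameterising the boundary as $underline(x)(t) = cos(t) underline(e)_1 + sin(t) underline(e)_2$, $integral.cont_C underline(F) dot.op dif underline(x) = integral_0^(2 pi) (sin(t)^2 + cos(t)^2) dif t = 2 pi$, as expected.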
- *Divergence theorem*: let $underline(F)$ be continuously differentiable vector field defined over volume $V$ with bounding surface $S$, then $ integral_S underline(F) dot.op dif underline(A) = integral_V underline(nabla) dot.op underline(F) dif V $ where $dif underline(A) = hat(underline(n)) dif A$, $hat(underline(n))$ is outward unit normal.
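- *Example* (quick check): for $underline(F) = x underline(e)_1 + y underline(e)_2 + z underline(e)_3$ and $S$ the sphere of radius $R$ centred at the origin, $underline(nabla) dot.op underline(F) = 3$, so $ integral_S underline(F) dot.op dif underline(A) = integral_V 3 dif V = 3 times 4/3 pi R^3 = 4 pi R^3 $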
- Vector field *conservative* if line integral is path independent.
- $underline(F)$ *closed* if $underline(nabla) times underline(F) = underline(0)$.
- Region $D$ *simply connected* if any closed curve in $D$ can be continuously shrunk to point in $D$.
- Every closed curve in $D$ is boundary of surface in $D$.
- Let $underline(F)$ vector field and $underline(nabla) times underline(F) = underline(0)$ in simply connected region $D$. If $C_1$ and $C_2$ are paths in $D$ joining $underline(a)$ to $underline(b)$ then $ integral_(C_1) underline(F) dot.op dif underline(x) = integral_(C_2) underline(F) dot.op dif underline(x) $ so line integral is path-independent and $underline(F)$ is conservative.
- If $underline(F) = underline(nabla) phi$ for scalar field $phi$ ($underline(F)$ is *exact*) then $integral_C underline(F) dot.op dif underline(x)$ is path-independent so $underline(F)$ is conservative. If $C$ goes from $underline(a)$ to $underline(b)$ then $ integral_C underline(F) dot.op dif underline(x) = phi(underline(b)) - phi(underline(a)) $
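- *Example* (illustrative): $underline(F) = 2 x y underline(e)_1 + x^2 underline(e)_2 = underline(nabla) phi$ with $phi = x^2 y$, so for any curve $C$ from $(0, 0)$ to $(1, 2)$, $integral_C underline(F) dot.op dif underline(x) = phi(1, 2) - phi(0, 0) = 2$.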
- In a simply connected region: $underline(nabla) times underline(F) = underline(0) <==> "path independence of integral" <==> exists phi, underline(F) = underline(nabla) phi$
= Non-Cartesian systems
- Polar, spherical polar and cylindrical polar coordinates are all *curvilinear coordinates*.
- Cartesian coordinates $(x, y, z)$ can be expressed as smooth functions of curvilinear coordinates $(u, v, w)$: $ x = g(u, v, w), y = h(u, v, w), z = k(u, v, w), quad g, h, k in C^oo(RR^3) $ $g, h, k$ can be inverted to give $ u = tilde(g)(x, y, z), v = tilde(h)(x, y, z), w = tilde(k)(x, y, z), quad tilde(g), tilde(h), tilde(k) in C^oo(RR^3) $
- *Coordinate surfaces*: surfaces on which $u = "constant"$, $v = "constant"$ or $w = "constant"$.
- *Coordinate curve*: intersection of two coordinate surfaces.
- *Orthogonal curvilinear system*: where tangent vectors $underline(e)_u, underline(e)_v, underline(e)_w$ are mutually orthogonal at any point $P$. Orientation of these vectors may depend on $P$.
- Let $g$ invertible map from $u$-space to $x$-space, $g(u) = x$. Distortion factor $g'(u)$ is *Jacobian* of $g$. $dif x = g'(u) dif u$ so *method of substitution* is $ integral_a^b f(x) dif x = integral_(g^(-1)(a))^(g^(-1)(b)) f(g(u)) g'(u) dif u $
#set math.mat(delim: "|")
- In two dimensions, *Jacobian* for maps $(tilde(g), tilde(h))$ is $ J(tilde(g), tilde(h)) = mat(diff_x tilde(g), diff_y tilde(g); diff_x tilde(h), diff_y tilde(h)) =: (diff(tilde(g), tilde(h))) / (diff(x, y)) = (diff(u, v)) / (diff(x, y)) $ Distortion factor is $|J(tilde(g), tilde(h))|$. So $ dif A_(u v) = |J(tilde(g), tilde(h))| dif A_(x y) $ where $dif A_(x y) = dif x dif y$ and $ dif A_(x y) = |J(g, h)| dif A_(u v) = |J(tilde(g), tilde(h))|^(-1) dif A_(u v) $ So $ integral.double_R f(x, y) dif x dif y = integral.double_(R') f(g(u, v), h(u, v)) |J(g, h)| dif u dif v $ where $R$ mapped to $R'$ by $(tilde(g), tilde(h))$.
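- *Example*: for polar coordinates $x = r cos(theta)$, $y = r sin(theta)$: $|J(g, h)| = |diff_r x dot.op diff_theta y - diff_theta x dot.op diff_r y| = r cos^2(theta) + r sin^2(theta) = r$, so $dif x dif y = r dif r dif theta$.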
- $dif A_(u v)$ is parallelogram-shaped.
- In three dimensions, $ integral.triple_R f(x, y, z) dif x dif y dif z = integral.triple_(R') f(g(u, v, w), h(u, v, w), k(u, v, w)) |(diff(x, y, z)) / (diff(u, v, w))| dif u dif v dif w $ where $|(diff(x, y, z)) / (diff(u, v, w))| := J(g, h, k)$.
- $dif V_(u v w)$ is parallelepiped-shaped.
- *Gradient in Cartesian coordinates*: $underline(nabla) = underline(e)_1 diff_x + underline(e)_2 diff_y + underline(e)_3 diff_z$.
- *Laplacian in Cartesian coordinates*: $underline(nabla)^2 = diff_x^2 + diff_y^2 + diff_z^2$.
- For 2D polar coordinates, let $underline(r) = r cos(theta) underline(e)_1 + r sin(theta) underline(e)_2$, then $ tilde(underline(e))_r & := diff_r underline(r) = cos(theta) underline(e)_1 + sin(theta) underline(e)_2, \ tilde(underline(e))_theta & := diff_theta underline(r) = -r sin(theta) underline(e)_1 + r cos(theta) underline(e)_2 $
- Let $x = g(u, v)$, $y = h(u, v)$, then *scale factors for mapping given by $g$ and $h$* are $h_u := norm(diff_u underline(r))$, $h_v := norm(diff_v underline(r))$.
- Unit vectors corresponding to $tilde(underline(e))_r$ and $tilde(underline(e))_theta$ are $underline(e)_r = tilde(underline(e))_r$ and $underline(e)_theta = 1/r tilde(underline(e))_theta$ which form orthonormal basis.
- $dif underline(r) = diff_r underline(r) dif r + diff_theta underline(r) dif theta = dif r underline(e)_r + r dif theta underline(e)_theta$ by chain rule.
- *Gradient in polar coordinates*: $underline(nabla) = underline(e)_r diff_r + underline(e)_theta 1/r diff_theta$, obtained by comparing $dif f := underline(nabla) f dot.op dif underline(r) = diff_r f dif r + diff_theta f dif theta$ for function $f(r, theta)$.
- *Divergence in polar coordinates*: for $underline(A)(r, theta) = A_r underline(e)_r + A_theta underline(e)_theta$, $ underline(nabla) dot.op underline(A) = 1/r (diff_r(r A_r) + diff_theta A_theta) $
- *Laplacian in polar coordinates*: $underline(nabla)^2 = diff_r^2 + 1/r diff_r + 1/r^2 diff_theta^2$
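- *Example*: for $f = r^2 = x^2 + y^2$: polar form gives $underline(nabla)^2 f = diff_r^2 f + 1/r diff_r f = 2 + 2 = 4$, agreeing with Cartesian form $diff_x^2 f + diff_y^2 f = 2 + 2 = 4$.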
- *Spherical polar coordinates*: $x = r sin(theta) cos(phi)$, $y = r sin(theta) sin(phi)$, $z = r cos(theta)$, $r >= 0$, $theta in [0, pi]$, $phi in [0, 2 pi)$.
- *Cylindrical polar coordinates*: $x = r cos(theta)$, $y = r sin(theta)$, $z = z$, $r >= 0$, $theta in [0, 2 pi)$, $z in RR$.
- *General formula for curl of vector in orthogonal curvilinear coordinates*: for $underline(A)(r, theta, phi) = A_r underline(e)_r + A_theta underline(e)_theta + A_phi underline(e)_phi$, $ underline(nabla) times underline(A) = h_r^(-1) h_theta^(-1) h_phi^(-1) mat(h_r underline(e)_r, h_theta underline(e)_theta, h_phi underline(e)_phi; diff_r, diff_theta, diff_phi; A_r h_r, A_theta h_theta, A_phi h_phi) $
= Generalised functions (distributions)
- *Unit step function (Heaviside)*: $ Theta(t - t_0) := cases(
0 & "if" t <= t_0,
1 & "if" t > t_0
) $
- Let $Omega subset.eq RR^n$ open. $psi: Omega -> CC$ is *test function* if:
- $psi$ is *smooth*: $psi in C^oo(Omega)$.
- *Support* of $psi$, $ "supp"(psi) := overline({underline(x) in Omega: psi(underline(x)) != 0}) $ is compact (in this case, bounded).
- Space of test functions on $Omega$, $cal(D)(Omega)$, is vector space.
- Let $psi in cal(D)(RR^n), underline(xi) in RR^n, a in RR - {0}, g in C^oo(RR^n)$. Then
- $psi(underline(x) + underline(xi)), psi(-underline(x)), psi(a underline(x)) in cal(D)(RR^n)$.
- $g(underline(x)) psi(underline(x)) in cal(D)(RR^n)$.
- Let $Omega subset.eq RR^n$ open, then ${psi_m}_(m in NN) subset.eq cal(D)(Omega)$ *converges* to $psi in cal(D)(Omega)$ if:
- Exists compact $K subset.eq Omega$ such that $"supp"(psi), "supp"(psi_m) subset.eq K$ for every $m in NN$ and
- ${psi_m}$ converges *uniformly* to $psi$ in $cal(D)(Omega)$ and
  - sequence $ D^k psi_m := psi_m^((k)) := (diff^(k_1)) / (diff x_1^(k_1)) dots.h.c (diff^(k_n)) / (diff x_n^(k_n)) $ converges uniformly to $D^k psi$ for every multi-index $k = (k_1, dots, k_n)$, $k_i in NN_0$, $|k| = k_1 + dots.h.c + k_n$. (Write $psi_m ->_(cal(D)) psi$.)
- ${psi_m}$ converges to $psi$ if:
- Exists compact $K subset.eq Omega$ such that $"supp"(psi_i) subset.eq K$ for every $i$ and
- For every multi-index $k = (k_1, ..., k_n)$ and $|k| = k_1 + dots.h.c + k_n$ (including $|k| = 0$), $norm(D^k psi_m - D^k psi)_oo -> 0$ as $m -> oo$ where $||f||_oo := sup{|f(underline(x))|: underline(x) in RR^n}$.
- Let $Omega subset.eq RR^n$ open. *Distribution* is continuous linear map $T: cal(D)(Omega) -> RR$.
- $T$ *linear*: $T[a psi + b phi] = a T[psi] + b T[phi]$.
- $T$ *continuous*: $forall psi in cal(D)(Omega), forall {psi_m} subset.eq D(Omega), psi_m ->_(cal(D)) psi ==> T[psi_m] -> T[psi] "as" m -> oo$
- Space of distributions with test functions in $cal(D)(Omega)$, written $cal(D)'(Omega)$, is vector space.
- *Dirac delta function* $delta: cal(D)(RR^n) -> RR$, is distribution $ delta[psi] := psi(underline(0)) $
- Let $f in C^0(RR^n)$. Then $ T_f[psi] := integral_(RR^n) f(underline(x)) psi(underline(x)) dif underline(x) $ is a distribution.
- $f: RR^n -> RR$ *locally integrable* if for every compact $K subset.eq RR^n$, $ integral_K f(underline(x)) dif underline(x) < oo $
- $L_"loc"^1(RR^n)$ is set of locally integrable functions on $RR^n$.
- $T in cal(D)'(RR^n)$ is *regular distribution* if for some $f in L_"loc"^1(RR^n)$, $T[psi] = T_f[psi]$ for $psi in cal(D)(RR^n)$.
- *Any two locally integrable functions that differ by finite amounts at isolated points define the same regular distribution*.
- Distribution $T$ is *singular* if no $f in L_"loc"^1(RR^n)$ such that $T = T_f$.
- *Symbolically, in the sense of distributions*, can write singular distribution as $ T[psi] := integral_(RR^n) T[underline(x)] psi(underline(x)) dif underline(x) =: angle.l T, psi angle.r $ *Note* $T[underline(x)]$ not a function.
- $delta$ is singular distribution.
- *Sifting property* of $delta$: $ integral_(RR^n) delta(underline(x)) psi(underline(x)) dif underline(x) = psi(underline(0)) $ (i.e. $delta[psi] = psi(underline(0))$).
- *General sifting property* of $delta$: $ integral_Omega delta(underline(x)) psi(underline(x)) = cases(
psi(underline(0)) & "if" underline(0) in Omega,
0 & "otherwise"
) $
- *Notation*: if $n = 1$, write $delta(underline(x)) = delta(x)$, if $n = 2$, write $delta(x) delta(y)$, etc.
- *Distribution operation rules*:
- *Addition*: $(T_1 + T_2)[psi] = T_1[psi] + T_2[psi]$.
- *Multiplication by constant*: $(c T)[psi] = c T[psi]$.
- *Shifting of distribution* by $underline(xi) in RR^n$: $T_(underline(xi))[psi(underline(x))] := integral_(RR^n) T(underline(x) - underline(xi)) psi(underline(x)) dif underline(x) = integral_(RR^n) T(underline(y)) psi(underline(y) + underline(xi)) dif underline(y) =: T[psi(underline(x) + underline(xi))]$
- *Transposition*: $T^t(psi(underline(x))) := integral_(RR^n) T(-underline(x)) psi(underline(x)) dif underline(x) = integral_(RR^n) T(underline(y)) psi(-underline(y)) dif underline(y) =: T[psi(-underline(x))]$
  - *Dilation* by $alpha in RR - {0}$: $T_((alpha))[psi(underline(x))] := integral_(RR^n) T(alpha underline(x)) psi(underline(x)) dif underline(x) = 1/(|alpha|^n) integral_(RR^n) T(underline(y)) psi(underline(y) / alpha) dif underline(y) =: 1/(|alpha|^n) T[psi(underline(x) / alpha)]$
- *Multiplication by smooth function $phi$*: $ (phi T)[psi] := T[phi psi] $
- Delta distribution *sifting property*: $ delta_a [psi] := integral_Omega delta(x - a) psi(x) dif x = cases(
psi(a) & "if" a in Omega,
0 & "otherwise"
) $
- In sense of distributions, $phi(x) delta(x - xi) = phi(xi) delta(x - xi)$.
- Symbolically, $delta(alpha x) = 1/(|alpha|) delta(x)$.
- If $f in C^1(Omega)$ has simple (multiplicity one) zeros at ${x_1, ..., x_n}$ then $ integral_Omega delta(f(x)) psi(x) dif x = sum_(i = 1)^n psi(x_i) / |f'(x_i)| $
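- *Example*: $f(x) = x^2 - 1$ has simple zeros at $x = plus.minus 1$ with $|f'(plus.minus 1)| = 2$, so $integral_RR delta(x^2 - 1) psi(x) dif x = 1/2 (psi(1) + psi(-1))$.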
- Distributions $T_1$ and $T_2$ *equal* if $ forall psi in cal(D)(Omega), integral_Omega T_1(x) psi(x) dif x = integral_Omega T_2(x) psi(x) dif x $
- $n$th *derivative* of distribution $T$: $ T^((n))[psi] = (-1)^n T[psi^((n))] $
- *In the sense of distributions*, $Theta'(t) = delta(t)$.
- *Leibniz rule*: $ (phi T)' = phi' T + phi T', quad (phi T)^((n)) = sum_(k = 0)^n binom(n, k) phi^((k)) T^((n - k)) $
- $f$ *piecewise continuous* on $(a, b)$ if $(a, b)$ can be divided into finite number of sub-intervals such that:
- $f$ continuous on interior of each sub-interval and
- $f$ tends to finite limit on boundary of each sub-interval as approached from interior of that sub-interval.
- $f$ *piecewise smooth* if piecewise continuous and has piecewise continuous first derivatives.
- To calculate *derivative in sense of distributions* of *piecewise-smooth* $f$, with branches $f_1, ..., f_n$ and discontinuities at $x_1, ..., x_(k - 1)$:
- Let $caron(f)(x) = f_1(x) + (f_2(x) - f_1(x)) Theta(x - x_1) + dots.h.c + (f_k - f_(k - 1)) Theta(x - x_(k - 1))$
- Then differentiate $caron(f)$.
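- *Example*: $f(x) = |x|$ has branches $f_1 = -x$ ($x < 0$), $f_2 = x$ ($x > 0$), so $caron(f)(x) = -x + 2 x Theta(x)$; then $caron(f)' = -1 + 2 Theta(x) + 2 x delta(x) = -1 + 2 Theta(x)$ (as $x delta(x) = 0$) and $caron(f)'' = 2 delta(x)$.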
- If Jacobian $J$ of changes of variables from $underline(x)$ to $underline(xi)$, then $ delta(underline(x) - underline(x_0)) = 1/|J| delta(underline(xi) - underline(xi_0)) $
= Sturm-Liouville Theory
- Let $f: [a, b] -> RR$, $a = x_0 < x_1 < dots.h.c < x_n = b$, $x_i^* in [x_(i - 1), x_i]$ Let $Delta = sup_(1 <= i <= n) (x_i - x_(i - 1))$, $cal(R)(f) = sum_(i = 1)^n f(x_i^*) (x_i - x_(i - 1))$. $f$ *Riemann integrable* if exists real number, written $integral_a^b f(x) dif x$ such that $ integral_a^b f(x) dif x = lim_(Delta -> 0) cal(R)(f) $
- *Lebesgue integration*: choose $y_0 <= min(f)$, $y_n >= max(f)$, $y_0 < y_1 < dots.h.c < y_n$. Let $ s_n := sum_(i = 1)^n y_(i - 1) dot.op mu{x: y_(i - 1) <= f(x) < y_i} $ where $mu{x: y_(i - 1) <= f(x) < y_i}$ is measure of set ${x: y_(i - 1) <= f(x) < y_i}$, i.e. sum of lengths of subintervals $[a, b]$ where $y_(i - 1) <= f(x) <= y_i$. *Lebesgue integral* is limit of $s_n$ as $n -> oo$.
- *Riemann-Lebesgue theorem*: let $f: [a, b] -> RR$ bounded. Then $f$ Riemann integrable iff $f$ continuous everywhere except on set of measure zero (continuous almost everywhere).
- Measure of set with countable number of elements is zero.
- Measure of $[a, b]$: $mu([a, b]) = b - a$. Also, $mu([a, b] times [c, d]) = (b - a) (d - c)$.
- If function Riemann integrable, then it is Lebesgue integrable.
- $L^1$: space of Lebesgue measurable and absolutely integrable functions.
- $L^2$: space of Lebesgue measurable functions with absolutely integrable squares.
- *Hilbert space* $HH$: real/complex vector space which:
- has Hermitian inner product $angle.l dot.op, dot.op angle.r: HH times HH -> CC$, with:
- *Hermiticity*: $angle.l underline(u), underline(v) angle.r = overline(angle.l underline(v)\, underline(u) angle.r)$.
- *Anti-linearity in first entry*: $angle.l a (underline(u) + underline(v)), underline(w) angle.r = overline(a) angle.l underline(u), underline(w) angle.r + overline(a) angle.l underline(v), underline(w) angle.r $, $a in CC$.
- *Positivity*: $angle.l underline(u), underline(u) angle.r >= 0$ and $angle.l underline(u), underline(u) angle.r = 0 <==> underline(u) = 0$.
- is complete for inner product-induced norm: $ norm(dot.op): HH -> RR_(>= 0), quad norm(underline(u)) = (angle.l underline(u), underline(u) angle.r)^(1 \/ 2) $, with:
- *Separation of points*: $norm(underline(u)) = 0 <==> u = 0$.
- *Absolute homogeneity*: $norm(a underline(u)) = |a| norm(underline(u))$, $a in CC$.
- *Triangle inequality*: $norm(underline(u) + underline(v)) <= norm(underline(u)) + norm(underline(v))$.
- Complex inner product *sesquilinear* as anti-linear in first entry, but linear in second.
- *Inner product space*: vector space with inner product and induced norm.
- *Metric* on vector space $V$: function $d: V times V -> RR$, with:
- $d(underline(u), underline(v)) >= 0$.
- $d(underline(u), underline(v)) = 0 <==> underline(u) = underline(v)$.
- $d(underline(u), underline(v)) = d(underline(v), underline(u))$.
- $d(underline(u), underline(v)) + d(underline(v), underline(w)) >= d(underline(u), underline(w))$.
- *Metric space*: pair $(V, d)$.
- One metric given by $d(underline(u), underline(v)) = norm(underline(u) - underline(v))$. Sequence ${underline(v)_n} subset.eq V$ *converges to $underline(v) in V$ in the mean (in norm)* if $ lim_(n -> oo) norm(underline(v)_n - underline(v)) = 0 <==> forall epsilon > 0, exists N in NN, forall n >= N, quad norm(underline(v)_n - underline(v)) < epsilon $
- ${underline(v)_n}$ *Cauchy sequence* if $ forall epsilon > 0, exists N in NN, forall m, n >= N, quad d(underline(v)_n, underline(v)_m) < epsilon $
- Metric space *complete* if every Cauchy sequence in $(V, d)$ converges in $V$.
- Let $V$ be space of functions $[a, b] -> CC$. Let *weight* function $w: [a, b] -> RR_(>= 0)$ have finitely many zeros. *Inner product with weight $w$*: $ angle.l underline(u), underline(v) angle.r_w := integral_a^b overline(u)(x) v(x) w(x) dif x $ Write $angle.l underline(u), underline(v) angle.r_(w = 1)$ as $angle.l underline(u), underline(v) angle.r$.
- $W subset.eq V$ *dense in $V$* if $ forall v in V, forall epsilon > 0, exists w in W, quad d(v, w) < epsilon $
- *Linear Operator*: $(L, D(L))$, $D(L)$ is dense linear subspace of $HH$, $L: D(L) -> HH$ linear: $ L(a u + v b) = a L(u) + b L(v) $ $L$ is the *operator*, $D(L)$ is *domain* of $L$.
- Linear operator $L: HH_1 -> HH_2$ *bounded* if for some $M >= 0$, $ forall v in HH_1, quad norm(L v)_(HH_2) <= M norm(v)_(HH_1) $ If $M$ doesn't exist, $L$ *unbounded*.
- *Norm* of $L$ is $ norm(L) := inf{M: forall h in HH_1, norm(L v)_(HH_2) <= M norm(v)_(HH_1)} $
- To show $L$ unbounded, find sequence ${x_n} subset D(L)$ with $norm(x_n)_(HH_1) <= M$ for some $M$, but $norm(L x_n)_(HH_2) -> oo$ as $n -> oo$.
- *Adjoint* of $(L, D(L))$ is $(L^*, D(L^*))$ where $L^*: D(L^*) -> HH_1$, $ angle.l L v_1, v_2 angle.r^(HH_2) = angle.l v_1, L^* v_2 angle.r^(HH_1), quad v_1 in D(L), v_2 in D(L^*) $ and $ D(L^*) := {v_2 in HH_2: exists v_2^* in HH_1, forall v_1 in D(L), angle.l L v_1, v_2 angle.r^(HH_2) = angle.l v_1, v_2^* angle.r^(HH_1) } $ For each $v_2 in D(L^*)$, $v_2^* = L^* v_2$ is unique.
- *Boundary value problem (BVP) on $[a, b]$*: $ L u(x) = f(x), a < x < b, quad B_1(u) = B_2(u) = 0 $
- *Dirichlet boundary conditions*: $B_1(u) = u(a) = 0, B_2(u) = u(b) = 0$.
- *Neumann boundary conditions*: $B_1(u) = u'(a) = 0, B_2(u) = u'(b) = 0$.
- *Periodic boundary conditions*: $B_1(u) = u(a) - u(b) = 0, B_2(u) = u'(a) - u'(b) = 0$.
- *Mixed boundary conditions*: $B_1(u) = alpha_1 u(a) + beta_1 u'(a) = 0, B_2(u) = eta_2 u(b) + kappa_2 u'(b) = 0$
- *Initial value problem (IVP) on $[a, b]$*: $ L u(x) = f(x), a < x < b, quad u(a) = 0, u'(a) = 0 $
- *Formal adjoint* of $L = p_0(x) d_x^2 + p_1(x) d_x + p_2(x)$ is $ L^* := overline(p_0) d_x^2 + (2 overline(p_0)' - overline(p_1)) d_x + overline(p_0)'' - overline(p_1)' + overline(p_2) $
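- *Example*: for $L = d_x^2 + x d_x$ (real coefficients, $p_0 = 1$, $p_1 = x$, $p_2 = 0$): $L^* = d_x^2 + (0 - x) d_x + 0 - 1 + 0 = d_x^2 - x d_x - 1$.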
- Domain of $L = p_0(x) d_x^2 + p_1(x) d_x + p_2(x)$ is $ D(L) := {u in C^2([a, b]): B_1(u) = B_2(u) = 0} $
- *Green's formula*: let $L = p_0(x) d_x^2 + p_1(x) d_x + p_2(x)$, $L^*$ be formal adjoint. Then $ angle.l L u, v angle.r - angle.l u, L^* v angle.r = [overline(p_0)(v overline(u)' - v' overline(u)) + (overline(p_1) - overline(p_0)') v overline(u)]_a^b $
- For $L = p_0(x) d_x^2 + p_1(x) d_x + p_2(x)$, $D(L^*)$ consists of all functions $v$ satisfying *adjoint boundary conditions*: $ [overline(p_0)(v overline(u)' - v' overline(u)) + (overline(p_1) - overline(p_0)') v overline(u)]_a^b = 0 quad forall u in C^2([a, b]) "with" B_1(u) = B_2(u) = 0 $
- $(L, D(L))$ self-adjoint if $angle.l L u, v angle.r = angle.l u, L^* v angle.r$ (boundary terms vanish).
- BVP $L u(x) = f(x)$, $B_1(u) = B_2(u) = 0$ *self-adjoint* if $L = L^*$ and $D(L) = D(L^*)$ (so adjoint boundary conditions equal original boundary conditions) ($(L, D(L))$ is self-adjoint).
- If $L = p_0(x) d_x^2 + p_1(x) d_x + p_2(x)$ with real-valued coefficients and $p_1 = p_0'$, then $ L^* = d_x(p_0 d_x) + p_2 = L $ $L$ is *formally self-adjoint* with respect to inner product. $L$ is *Sturm-Liouville operator*. $L$ Sturm-Liouville iff $p_0' = p_1$.
- Let $L = p_0(x) d_x^2 + p_1(x) d_x + p_2(x)$, then $ frak(L) := rho L = d_x (rho p_0 d_x) + rho p_2, quad rho = 1 / p_0 exp(integral p_1 / p_0 dif x) $ is Sturm-Liouville.
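- *Example*: for $L = x^2 d_x^2 + x d_x$: $rho = 1/x^2 exp(integral 1/x dif x) = 1/x$, so $frak(L) = d_x (x d_x)$; indeed $d_x (x u') = x u'' + u' = 1/x (x^2 u'' + x u')$.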
- *Eigenfunction* $u_n$ with *eigenvalue* $lambda_n$ with respect to weight function $w(x)$ satisfies $L u_n(x) = lambda_n w(x) u_n(x)$.
- *Method of separation of variables*: write $U(x, t) = T(t) u(x)$ when solving PDE.
- $[a, b]$ *natural interval* if $p_0(a) = p_0(b) = 0$ and $p_0(x) > 0$ for $x in (a, b)$.
- *Sturm-Liouville eigenvalue problem*: $frak(L) u_n(x) + lambda_n w(x) u_n(x) = 0$.
- For Sturm-Liouville eigenvalue problem:
- Eigenvalues real.
- Eigenfunctions corresponding to distinct eigenvalues are orthogonal with respect to inner product $angle.l u, v angle.r_w$.
- Eigenfunctions can be chosen to be real.
  - Eigenvalues of regular Sturm-Liouville eigenvalue problem ($|alpha_1| + |beta_1| > 0, |eta_2| + |kappa_2| > 0$) are simple (multiplicity one).
- Set of eigenvalues is countably infinite and monotonically increasing sequence: $lambda_1 < lambda_2 < dots.h.c$ and $lim_(n -> oo) lambda_n = oo$.
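- *Example*: $u'' + lambda u = 0$ on $[0, pi]$ with $u(0) = u(pi) = 0$ (here $p_0 = 1$, $w = 1$): eigenvalues $lambda_n = n^2$ are real, simple and increase to $oo$; eigenfunctions $u_n = sin(n x)$ are real and orthogonal since $integral_0^pi sin(n x) sin(m x) dif x = 0$ for $n != m$.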
- For regular SL problem, can write *generalised Fourier expansion (eigenfunction expansion)* of $u$ as $ u = sum_(n = 0)^oo angle.l hat(u)_n, u angle.r_w hat(u)_n $ for normalised eigenfunctions $hat(u)_n$.
- If $frak(L) u_n(x) + lambda_n w(x) u_n(x) = f(x)$, then $f(x) = sum_(n = 0)^oo angle.l hat(u)_n, f angle.r_w hat(u)_n$. Equate eigenfunction of $f$ with eigenfunction expansion of $u$ in $frak(L) u_n(x) + lambda_n w(x) u_n(x)$ and take inner product with $hat(u)_m$ to determine $c_m = angle.l hat(u)_m, u angle.r$
- If $f$ piecewise smooth on $[a, b]$, for all $x in (a, b)$, $ 1/2 (f(x_+) + f(x_-)) := 1/2 (lim_(epsilon -> 0^+) f(x + epsilon) + lim_(epsilon -> 0^+) f(x - epsilon)) = sum_(n = 0)^oo angle.l hat(u)_n, f angle.r_w hat(u)_n(x) $
- *Completeness of eigenfunctions*: $ sum_(n = 0)^oo hat(u)_n(y) hat(u)_n(x) w(y) = delta(x - y) = delta(y - x) = sum_(n = 0)^oo hat(u)_n(x) hat(u)_n(y) w(x) $
= Green's functions
- *IN/IN IVP*: $L u(t) = f(t)$, $u(a) = gamma_1 != 0$, $u'(a) = gamma_2 != 0$.
- *IN/HOM IVP*: $L u(t) = f(t)$, $u(a) = u'(a) = 0$.
- *HOM/IN IVP*: $L u(t) = 0$, $u(a) = gamma_1 != 0$, $u'(a) = gamma_2 != 0$.
- Similarly for BVP.
- *BVP* boundary conditions: $B_1(u) = alpha_1 u(a) + beta_1 u'(a) + eta_1 u(b) + kappa_1 u'(b) = gamma_1$, $B_2(u) = alpha_2 u(a) + beta_2 u'(a) + eta_2 u(b) + kappa_2 u'(b) = gamma_2$. If $gamma_1 = gamma_2 = 0$, conditions are *homogeneous*. If $eta_1 = kappa_1 = alpha_2 = beta_2 = 0$, conditions are *separate*.
- $u_1, u_2$ *linearly independent* if $c_1 u_1(x) + c_2 u_2(x) = 0$ only satisfied by $c_1 = c_2 = 0$.
#set math.mat(delim: "|")
- *Wronskian* of $u_1, u_2$: $ W(u_1, u_2) := mat(u_1, u_2; u_1', u_2') = u_1 u_2' - u_1' u_2 $
- If $u_1, u_2 in C^1([a, b])$ and $W(u_1, u_2)(x_0) != 0$ for some $x_0 in [a, b]$ then $u_1, u_2$ linearly independent on $[a, b]$.
- If $u_1, u_2$ solutions of $L u = 0$, Wronskian either identically zero or never zero on $[a, b]$. $u_1, u_2$ linearly dependent iff Wronskian identically zero.
- *To solve IN/IN IVP* $L u(t) = f(t), u(0) = u_0, u'(0) = v_0$:
- Solve HOM/IN IVP: $ L u_"hom"(t) = 0, u_"hom"(0) = u_0, u_"hom"'(0) = v_0 $ $u_"hom"(t) = c_1 tilde(u)_1(t) + c_2 tilde(u)_2(t)$ where $tilde(u)_1, tilde(u)_2$ linearly independent solutions.
- Solve IN/HOM IVP: $ L u_p(t) = f(t), u_p(0) = 0, u_p'(0) = 0 $.
- General solution: $u(t) = u_"hom"(t) + u_p(t)$.
- *To solve IN/HOM IVP*:
- Let $f(t) = integral_0^oo delta(t - tau) f(tau) dif tau$.
- $L_t G(t, tau) = delta(t - tau)$, $G(0, tau) = 0 = diff_t G(0, tau)$.
- $G(0, tau) = 0 = diff_t G(0, tau)$.
- $G$ continuous at $t = tau$: $ lim_(epsilon -> 0^+) G(tau + epsilon, tau) = lim_(epsilon -> 0^+) G(tau - epsilon, tau) $
- Jump discontinuity of $diff_t G$ at $t = tau$: $ lim_(epsilon -> 0^+) ((diff G) / (diff t) (tau + epsilon, tau) - (diff G) / (diff t) (tau - epsilon, tau)) = 1/(p_0(tau)) $
- Define ansatz: $ G(t, tau) & = A_1(tau) u_1(t) + B_1(tau) u_2(t) quad "for" t < tau \ G(t, tau) & = A_2(tau) u_1(t) + B_2(tau) u_2(t) quad "for" tau < t $ where $u_1, u_2$ linearly independent solutions of $L u = 0$.
- For $t < tau$, $G(0, tau) = 0 = diff_t G(0, tau)$ which should give $A_1(tau) = B_1(tau) = 0$ so $G(t, tau) = 0$ for $t < tau$.
- For $t > tau$, use jump discontinuity of $diff_t G$ and continuity of $G$ to find $A_2(tau)$ and $B_2(tau)$.
- $ G(t, tau) = cases(
0 & "if" t < tau,
(u_1(tau) u_2(t) - u_2(tau) u_1(t)) / (p_0(tau) W(u_1, u_2)(tau)) & "if" t > tau
) $
- *Final solution*: $ u_p(t) = integral_0^oo G(t, tau) f(tau) dif tau $ $G(t, tau)$ is *Green's function* encoding response of system at time $t$ to unit impulse at time $tau$.
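- *Example*: for $L = d_t^2$ ($p_0 = 1$) take $u_1 = 1$, $u_2 = t$, so $W(u_1, u_2) = 1$ and $G(t, tau) = t - tau$ for $t > tau$, $0$ otherwise, giving $u_p(t) = integral_0^t (t - tau) f(tau) dif tau$.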
- *To solve IN/IN BVP* $L u(x) = f(x)$, $u(a) = u_a, u(b) = u_b$:
- Solve HOM/IN BVP: $ L u_"hom"(x) = 0, u_"hom"(a) = u_a, u_"hom"(b) = u_b $ $u_"hom"(x) = c_1 tilde(u)_1(x) + c_2 tilde(u)_2(x)$ where $tilde(u)_1, tilde(u)_2$ linearly independent solutions.
- Solve IN/HOM BVP: $ L u_p(x) = f(x), u_p(a) = 0, u_p(b) = 0 $
- General solution: $u(x) = u_"hom"(x) + u_p(x)$.
- *To solve IN/HOM BVP*:
- $L_x G(x, xi) = delta(x - xi)$, $G(a, xi) = 0$ for $x < xi$, $G(b, xi) = 0$ for $xi < x$.
- Define ansatz: $ G(x, xi) & = A_1(xi) u_1(x) + B_1(xi) u_2(x) quad "for" x < xi \ G(x, xi) & = A_2(xi) u_1(x) + B_2(xi) u_2(x) quad "for" xi < x $ where $u_1, u_2$ linearly independent solutions of $L u = 0$.
- $G$ continuous at $x = xi$: $ lim_(epsilon -> 0^+) G(xi + epsilon, xi) = lim_(epsilon -> 0^+) G(xi - epsilon, xi) $
- Jump discontinuity of $diff_x G$ at $x = xi$: $ lim_(epsilon -> 0^+) ((diff G) / (diff x) (xi + epsilon, xi) - (diff G) / (diff x) (xi - epsilon, xi)) = 1/(p_0(xi)) $
- Use continuity, jump discontinuity and boundary conditions to find $A_1, B_1, A_2, B_2$.
- *Final solution*: $ u_p(x) = integral_a^b G(x, xi) f(xi) dif xi $
- *Note*: if boundary conditions are $B_1(u) = gamma_1, B_2(u) = gamma_2$, $G$ must satisfy $B_1(G) = 0, B_2(G) = 0$.
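- *Example*: for $u'' = f(x)$, $u(0) = u(1) = 0$: take $u_1 = x$ (satisfies left condition), $u_2 = x - 1$ (right condition), $W(u_1, u_2) = 1$; continuity and unit jump in $diff_x G$ give $G(x, xi) = x(xi - 1)$ for $x < xi$ and $G(x, xi) = xi(x - 1)$ for $x > xi$.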
- *Laplace equation*: $underline(nabla)^2 u = u_(x x) + u_(y y) = 0$
- *Poisson's equation*: $underline(nabla)^2 u(x, y) = f(x, y)$.
- *Fundamental solution of Laplace's equation*: $ G_2(underline(x)) := 1/(2 pi) ln(r), quad r = |underline(x)| $
- *Fundamental solution of Laplace's equation in plane with pole at $x = xi$*: $ G_2(underline(x), underline(xi)) := G_2(underline(x) - underline(xi)) = 1/(2 pi) ln|underline(x) - underline(xi)| $
- *Green's first identity*: let $underline(F) = underline(nabla) u$ in Divergence theorem: $ integral_Omega underline(nabla)^2 u dif x dif y = integral_(diff Omega) underline(nabla) u dot.op underline(n) dif s = integral_(diff Omega) underline(n) dot.op underline(nabla) u dif s = integral_(diff Omega) diff_n u dif s $
- *Green's second identity*: let $F = u underline(nabla) v$ in Divergence theorem: $ integral_Omega underline(nabla) dot.op (u underline(nabla) v) = integral_Omega (u underline(nabla)^2 v + underline(nabla) u dot.op underline(nabla) v) dif x dif y = integral_(diff Omega) u underline(nabla) v dot.op underline(n) dif s = integral_(diff Omega) u diff_n v dif s $
- *Green's third identity*: interchange $u$ and $v$ in second identity, subtract one from other: $ integral_Omega (u underline(nabla)^2 v - v underline(nabla)^2 u) dif x dif y = integral_(diff Omega) (u diff_n v - v diff_n u) dif s $
- *Dirichlet problem*: IN/IN BVP $ underline(nabla)^2 u(underline(x)) = f(underline(x)), u(underline(x))_(diff Omega) = g(underline(x)) $ To solve:
- Subtract function $G_"reg"$ from $G_2$ so that $G := G_2 - G_"reg"$ satisfies $underline(nabla)^2 G(underline(x), underline(xi)) = delta(underline(x) - underline(xi))$, $G(underline(x), underline(xi)) = 0 <==> G_2 = G_"reg"$ for $x in diff Omega$. $G$ is *Green's function for Dirichlet problem on domain $Omega$*. $G_"reg"$ must satisfy Laplace equation on $Omega$.
- *Full solution*: $ u(underline(xi)) = integral_Omega G(underline(x), underline(xi)) f(underline(x)) dif x + integral_(diff Omega) g(underline(x)) diff_n G(underline(x), underline(xi)) dif s $ $underline(n)$ is unit normal to $Omega$ at $underline(x)$ pointing outwards.
- To find $G_"reg"$, use *method of images*:
- Fix point $P in Omega$ with position vector $underline(xi)_0$, let $Q in Omega$ have position vector $underline(x)$. Then $G_2(underline(x), underline(xi)_0) = 1/(2 pi) ln|P Q|$.
  - Let $P_1, ..., P_n$ be reflection of $P$ in boundary lines of $diff Omega$ (repeat reflection until back to $P$). Label $P_1, ..., P_n$ with alternating $-$ and $+$. Then $ -G_"reg" = -1/(2pi) ln|Q P_1| + 1/(2pi) ln|Q P_2| - dots.h.c - 1/(2pi) ln|Q P_n| $
- *Note*: if $diff Omega$ is circle radius $R$, $O P_1$ must satisfy $|O P| dot.op |O P_1| = R^2$ so $ tilde(underline(xi))_0 := O P_1 = R^2 / |underline(xi)_0|^2 underline(xi)_0 $
- Check if $G_"reg"$ satisfies $underline(nabla_x)^2 G_"reg"(underline(x), underline(xi)_0) = 0$ and $G_"reg" = G_2$ on $diff Omega$. If $G_"reg" != G_2$, add constant $c$ to $G_"reg"$ so that $G_"reg" = G_2$. |
|
https://github.com/michel-steuwer/typst-acmart | https://raw.githubusercontent.com/michel-steuwer/typst-acmart/main/template.typ | typst | MIT License | #let mainFont = "Linux Libertine O"
#let sfFont = "Linux Biolinum O"
#let bigskipamount = 12pt
#let medskipamount = bigskipamount / 2
#let smallskipamount = medskipamount / 2
#let sf(body) = text(font: sfFont, body)
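// Illustrative usage sketch (not part of the original template): the values
// below are made-up placeholders, and only `acmJournal: "JACM"` is currently
// recognised by the `journal` lookup further down.
//
// #import "template.typ": acmart
// #show: acmart.with(
//   title: "An Example Paper",
//   authors: (
//     (
//       name: "<NAME>",
//       email: "<EMAIL>",
//       affiliation: (institution: "Example University", country: "Nowhere"),
//     ),
//   ),
//   abstract: [A one-paragraph abstract.],
//   ccs: (
//     ("Software and its engineering", ((500, "General programming languages"),)),
//   ),
//   keywords: ("typesetting", "templates"),
//   acmJournal: "JACM",
//   acmArticle: 1,
// )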
#let acmart(
// Currently supported formats are:
// - acmsmall
format: "acmsmall",
// Title, subtitle, authors, abstract, ACM ccs, keywords
title: "Title",
subtitle: none,
shorttitle: none,
authors: (),
shortauthors: none,
abstract: none,
ccs: none,
keywords: none,
// acm journal
acmJournal: none,
acmVolume: 1,
acmNumber: 1,
acmArticle: none,
acmMonth: 5,
// acm information
acmYear: 2023,
acmDOI: "XXXXXXX.XXXXXXX",
// copyright
copyright: none,
copyrightYear: 2023,
// paper's content
body
) = {
let journal = if acmJournal == "JACM" {
(
name: "Journal of the ACM",
nameShort: "J. ACM"
)
} else {
none
}
let displayMonth(month) = (
"January",
"February",
"March",
"April",
"May",
"June",
"July",
"August",
"September",
"October",
"November",
"December"
).at(month - 1)
if shorttitle == none {
shorttitle = title
}
if shortauthors == none {
shortauthors = authors.map(author => author.name).join(", ", last: " and ")
}
// Set document metadata
set document(title: title, author: authors.map(author => author.name))
set text(fill: blue)
// Configure the page.
set page(
width: 6.75in,
height: 10in,
margin: (
top: 58pt + 27pt,
bottom: 39pt + 24pt + 100pt,
left: 46pt,
right: 46pt
),
header: locate(loc => {
set text(size: 8pt, font: sfFont)
let currentpage = loc.page()
if currentpage == 1 {
} else {
let acmArticlePage = [#acmArticle:#counter(page).display()]
[
#block(
height: 10pt,
width: 100%,
if calc.rem(currentpage, 2) == 0 [
#acmArticlePage
#h(1fr)
#shortauthors
] else [
#shorttitle
#h(1fr)
#acmArticlePage
]
)
#v(17pt)
]
}
}),
header-ascent: 0%,
footer: locate(loc => {
set text(size: 8pt)
let currentpage = loc.page()
if currentpage == 1 {
[
Authors' addresses: #{
authors.fold((), (list, author) => {
list + (
[#author.name#{
if author.at("email", default: none) != none [, #author.email]
}]
,)
}).join("; ", last: ".")
}
Permission to make digital or hard copies of all or part of this
work for personal or classroom use is granted without fee provided
that copies are not made or distributed for profit or commercial
advantage and that copies bear this notice and the full citation on
the first page. Copyrights for components of this work owned by
others than ACM must be honored. Abstracting with credit is
permitted. To copy otherwise, or republish, to post on servers or to
redistribute to lists, requires prior specific permission
and#h(.5pt)/or a fee. Request permissions from
<EMAIL>.\
#sym.copyright #acmYear Association for Computing Machinery\
0004-5411/2018/8-ART1 \$15.00\
https:\/\/doi.org\/#acmDOI
]
}
let currentfooting = [
#journal.nameShort,
Vol. #acmVolume,
No. #acmNumber,
Article #acmArticle.
Publication date: #displayMonth(acmMonth) #acmYear.
]
block(
height: 24pt,
width: 100%,
fill: gray,
if calc.rem(currentpage, 2) == 0 {
align(bottom + left, currentfooting)
} else {
align(bottom + right, currentfooting)
}
)
}),
footer-descent: 0%,
)
set text(font: mainFont, size: 10pt)
// set titlepage
{
set par(justify: true, leading: 0.555em)
show par: set block(below: 0pt)
// Display title
{
set text(font: sfFont, size: 14.4pt, weight: "bold")
par(title)
v(16.5pt)
}
// Display authors
{
set par(leading: 5.7pt)
let displayAuthor(author) = [#text(font: sfFont, size: 11pt, upper(author.name))]
let displayAuthors(authors) = authors.map(displayAuthor).join(", ", last: " and ")
let displayAffiliation(affiliation) = [,#text(font: mainFont, size: 9pt)[
#affiliation.institution, #affiliation.country]\
]
par({
let affiliation = none
let currentAuthors = ()
for author in authors {
// if affiliation changes, print author list and affiliation
if author.affiliation != affiliation and affiliation != none {
displayAuthors(currentAuthors)
displayAffiliation(affiliation)
currentAuthors = ()
}
currentAuthors.push(author)
affiliation = author.affiliation
}
displayAuthors(currentAuthors)
displayAffiliation(affiliation)
})
v(12pt)
}
// Display abstract
par(text(size: 9pt, abstract))
v(9.5pt)
// Display CSS concepts:
par(text(size: 9pt)[CCS Concepts: #{
ccs.fold((), (acc, concept) => {
acc + ([
#box(baseline: -50%, circle(radius: 1.25pt, fill: black))
#strong(concept.at(0))
#sym.arrow.r
#{concept.at(1).fold((), (acc, subconcept) => {
acc + (if subconcept.at(0) >= 500 {
[ *#subconcept.at(1)*]
} else if subconcept.at(0) >= 300 {
[ _#subconcept.at(1)_]
} else {
[ #subconcept.at(1)]
}, )
}).join(";")
}],)
}).join(";")
"."
}])
v(9.5pt)
// Display keywords
par(text(size: 9pt)[
Additional Key Words and Phrases: #keywords.join(", ")])
v(9.5pt)
// Display ACM reference format
par(text(size: 9pt)[
#strong[ACM Reference Format:]\
#authors.map(author => author.name).join(", ", last: " and ").
#acmYear.
#title.
#emph(journal.nameShort)
#acmVolume,
#acmNumber,
Article #acmArticle (#displayMonth(acmMonth) #acmYear),
#counter(page).display((..nums) => [
#nums.pos().last() page#if(nums.pos().last() > 1) { [s] }.
],both: true)
https:\/\/doi.org\/#acmDOI
])
v(1pt)
}
set heading(numbering: (..n) => [#n.pos().first()~~~])
show heading: it => {
set text(font: sfFont, size: 10pt, weight: "bold")
upper(it)
v(9pt - 0.555em)
}
set par(
justify: true,
leading: 5.35pt,
first-line-indent: 9.5pt)
show par: set block(below: 5.35pt)
// set page(
// margin: (
// top: 58pt + 27pt,
// bottom: 39pt + 24pt,
// left: 46pt,
// right: 46pt
// ),
// )
// Display content
body
// [
// #locate( loc => { let x = 39pt + 24pt; type(x) })
// #locate( loc => 39pt + 24pt)
// ]
} |
https://github.com/Dherse/masterproef | https://raw.githubusercontent.com/Dherse/masterproef/main/common/glossary.typ | typst | #let entries = {
let acronyms = yaml("/glossary.yml")
// Initializing the glossary.
let entries = (:)
for (key, value) in acronyms {
if not "short" in value or not "long" in value {
panic("Acronym must have a short and long form, for key: " + str(key))
}
entries.insert(
key,
(
key: key,
short: eval(mode: "markup", value.short),
long: eval(mode: "markup", value.long)
)
)
}
entries
}
#let __query_labels_with_key(key, before: false) = {
if before {
query(selector(label("glossary:" + key))
.before(here(), inclusive: false))
} else {
query(label("glossary:" + key))
}
}
#let gloss(key, suffix: none, short: auto, long: auto) = {
if not key in entries {
panic("Glossary key not found: " + key);
}
let entry = entries.at(key)
let long = context {
let gloss = __query_labels_with_key(key, before: true)
let in_preface(loc) = state("section").at(loc) == "preface";
// Find whether this is the first glossary entry.
let is_long = if long == auto {
if state("section").get() == "preface" {
false
} else {
gloss.map((x) => x.location()).find((x) => not in_preface(x)) == none
}
} else {
long
}
if is_long {
" (" + emph(entry.long) + ")"
} else {
none
}
}
[
#link(label(entry.key))[#entry.short#suffix#long]
#label("glossary:" + entry.key)
]
}
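// Illustrative usage sketch (the "gbp" key and values are placeholders, not
// taken from the repository). Entries are read from "/glossary.yml", where
// each key needs `short` and `long` fields, e.g.
//
//   gbp:
//     short: "GBP"
//     long: "Gaussian belief propagation"
//
// A document would then do something like:
//
//   #import "common/glossary.typ": gloss, glossary, gloss-init
//   #show: gloss-init
//   First use: #gloss("gbp") — or reference the generated level-99 heading with @gbp.
//   #glossary(title: "Glossary")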
#let glossary(title: "Glossary") = {
[
#heading(title) <glossary>
]
let elems = ();
for entry in entries.values().sorted(key: (x) => x.key) {
elems.push([
#heading(smallcaps(entry.short), level: 99, numbering: "1.")
#label(entry.key)
])
elems.push({
emph(entry.long)
box(width: 1fr, repeat[.])
context {
__query_labels_with_key(entry.key)
.map((x) => x.location())
.dedup(key: (x) => x.page())
.sorted(key: (x) => x.page())
.map((x) => link(x, numbering(x.page-numbering(), ..counter(page).at(x))))
.join(", ")
}
})
}
table(
columns: (auto, 1fr),
inset: 5pt,
stroke: none,
fill: (_, row) => if calc.odd(row) {
luma(240)
} else {
white
},
align: horizon,
..elems
)
}
#let gloss-init(body) = {
show ref: r => if r.element != none and r.element.func() == heading and r.element.level == 99 {
gloss(str(r.target), suffix: r.citation.supplement)
} else {
r
}
body
} |
|
https://github.com/floriandejonckheere/utu-thesis | https://raw.githubusercontent.com/floriandejonckheere/utu-thesis/master/thesis/figures/08-case-study/runtime.typ | typst | #import "@preview/cetz:0.2.2": canvas, chart, draw, palette
#let data = (
([#ref(<scn_structural_logical>, supplement: none)], 8.311, 156.85735489999934, 3.607),
([#ref(<scn_structural_contributor>, supplement: none)], 8.311, 71.93678768199607, 3.404),
([#ref(<scn_logical_contributor>, supplement: none)], 8.311, 148.14274523600034, 4.908),
([#ref(<scn_structural_logical_contributor>, supplement: none)], 8.311, 127.67580917900341, 3.732),
)
#canvas({
chart.columnchart(
data,
mode: "stacked",
size: (auto, 4),
bar-width: 0.2,
label-key: 0,
value-key: (1, 2, 3),
labels: ([Extraction], [Decomposition], [Evaluation]),
legend: "legend.north",
legend-style: (
padding: .4em,
offset: (0, .5),
),
)
})
|
|
https://github.com/TypstApp-team/typst | https://raw.githubusercontent.com/TypstApp-team/typst/master/tests/typ/layout/grid-rtl.typ | typst | Apache License 2.0 | // Test RTL grid.
---
#set text(dir: rtl)
- מימין לשמאל
---
#set text(dir: rtl)
#table(columns: 2)[A][B][C][D]
|
https://github.com/timetraveler314/Note | https://raw.githubusercontent.com/timetraveler314/Note/main/24Spring/Calculus/Problems.typ | typst | #import "@local/MetaNote:0.0.1" : *
#import "@preview/physica:0.9.0": *
#import "@preview/pinit:0.1.2": *
#let detm = math.mat.with(delim: "|")
#let vol = math.op("Vol")
#let int = math.integral
#let iint = math.integral.double
#let iiint = math.integral.triple
#let oint = math.integral.cont
#let oiint = math.integral.surf
#let oiiint = math.integral.vol
#let vr = math.bold($r$)
#let vs = math.bold($s$)
#let vn = math.bold($n$)
#let vF = math.bold($F$)
#let vS = math.bold($S$)
#let rot = math.op("rot")
#let gradd = math.op("grad")
#let divv = math.op("div")
#show: doc => MetaNote(
title: [
Calculus (II) Problems
],
authors: (
(
name: "timetraveler314",
affiliation: "University of Genshin",
email: "<EMAIL>",
),
),
doc,
)
#let lnf = $lim_(n->+oo)$
#let ser = $sum_(n=1)^(oo)$
#outline()
= Differential Equations
#question[
Does there exist a differentiable function $f(x,y,y')$ s.t. $y=sin x$ and $y=x-1/6 x^3$ are solutions to the differential equation $y'' = f(x,y,y')$?
]
#solution[
Let $y' = p$. Then $
derivative(,x) vec(y,p) = vec(p, f(x,y,p)).
$
Choose $y(0)= 0, p(0) = 1$, by the uniqueness of the solution, the answer is no.
]
#question("(Hard)")[
Prove that $
x''(t) + x(t) + arctan x(t) = 2 sin t
$
has no nontrivial solution of period $2pi$.
]
#proof[
Hint. Multiply the equation by $sin t$ and integrate.
Assume there exists a solution $x(t)$ of period $2pi$. Then $
x'' sin t + x sin t + arctan x sin t = 2 sin^2 t.
$
Integrate by parts, $
integral_0^(2pi) x'' sin t dif t &= x' sin t |_0^(2pi) (=0) - integral_0^(2pi) x' cos t dif t \
&= - integral_0^(2pi) x sin t dif t - x cos t |_0^(2pi) (=0) \
$
Then $
integral_0^(2pi) arctan x sin t dif t &= integral_0^(2pi) 2 sin^2 t dif t = 2pi \
&= (integral_0^pi + integral_pi^(2pi)) arctan x sin t dif t \
&< pi/2 integral_0^pi sin t dif t - pi/2 integral_pi^(2pi) sin t dif t = pi,
$
which is a contradiction.
]
= Series
== 1
#question("1")[
Evaluate the following series:
$
ser 1/(n(n+1)(n+2)), ser arctan 1/(n^2+n+1)
$
]
#solution[
$
arctan 1/(n^2+n+1) = arctan ((n+1)-n)/(1+n(n+1)) = arctan (n+1) - arctan n, \
S_n = arctan(n+1) - arctan_1, ser arctan 1/(n^2+n+1) = pi/2 - pi/4 = pi/4.
$
]
#question("2")[
Let $a_n>0$, and ${a_n-a_(n+1)}$ be a strictly decreasing sequence. Show that if $ser a_n$ converges, then
$
lnf (1/(a_(n+1))-1/a_n) = +oo.
$
]
#proof[
$ser a_n$ converges, so $a_n -> 0$. Then $a_n - a_(n+1) -> 0$, so $
a_n - a_(n+1) > 0, a_n > a_(n+1).
$
$
a_n^2 &= sum_(k=n)^oo (a_k^2-a_(k+1)^2) = sum_(k=n)^oo (a_k-a_(k+1))(a_k+a_(k+1)) \
&< (a_n-a_(n+1)) sum_(k=n)^oo (a_k+a_(k+1)).
$
$
(1/(a_(n+1))-1/a_n) &= (a_n-a_(n+1))/(a_n a_(n+1)) > (a_n-a_(n+1))/a_n^2 > 1/(sum_(k=n)^oo a_k+a_(k+1)) -> +oo.
$
]
#question("3")[
Let $ser a_n$ be a divergent series of positive terms. Show that
(1) $ser a_n/(1+a_n)$ diverges, $ser (a_n/(1+n^2 a_n))$ converges.
(2) study the convergence of $ser a_n/(1+a_n^2), ser a_n/(1+n a_n)$.
]
#solution[
$a_n/(1+n^2 a_n) <= 1/n^2 => ser a_n/(1+n^2 a_n)$ converges.
If $ser a_n$ diverges, there are two possibilities:
(i) $a_n$ is bounded by $M$, then $a_n/(1+a_n) > a_n/(1+M) > 0$, $a_n/(1+a_n^2) > a_n/(1+M^2) > 0$, so $ser a_n/(1+a_n), ser a_n/(1+a_n^2)$ diverges.
(ii) $a_n$ is unbounded, then there exists a subsequence $a_(n_k) -> +oo$. Then the subsequence $a_n_k/(1+a_n_k) = 1/(1/a_n_k + 1) -> oo$, so $ser a_n/(1+a_n)$ diverges.
As for $ser a_n/(1+a_n^2)$, we only have : when $a_n -> +oo$, $a_n/(1+a_n^2) ~ 1/a_n$, the convergence is uncertain. Example:
$
a_n = cases(
n\, n = 2k-1,
0\, n = 2k
)"(diverges)", a_n = cases(
n\, n = 2^k,
0\, n != 2^k
)"(converges)".
$
]
#question("4")[
$
sum_(n=3)^oo 1/(n^alpha ln^beta n (ln n^gamma ln n)).
$
]
#question("5")[
Let $0<a_n<1$. Prove that if $ser a_n$ converges, then $ser ln(1-a_n)$ converges.
]
#proof[
View as function $f(x) = ln (1-x)$. Then
$
f(x) - f(0) = f'(theta x)x = x/(1-theta x) <= x/(1-x), 0 < theta < 1. \
=> ln(1-a_n) <= a_n/(1-a_n).
$
Convergence of $ser a_n$ implies $a_n -> 0$. Then $exists N, forall n > N, 1-a_n > 1/2$. So $
ln(1-a_n) < 2a_n, n > N.
$
Which means $ser ln(1-a_n)$ converges.
]
#question("7")[
Show that $sum_(n=2)^oo 1/(ln n)^(ln n)$ converges.
]
#proof[
]
#question("8")[
If $I = ser a_n$ is series of positive terms, show that $J = ser sqrt(a_n a_(n+1))$ converges, and the opposite is not necessarily true.
]
#proof[
Counterexample:
$
a_n = cases(
1\, n = 2k-1,
1/n^4\, n = 2k
)
$
Then $J = 1/2 ser 1/n^2$ converges, but $I$ obviously diverges.
]
#question("9")[
Discuss the convergence of the series
$
a_n = ((sqrt(n+sqrt(n+sqrt(n)))-sqrt(n))/n)^p,
$
$
b_n = (1-root(3,(n-1)/(n+1)))^p (p>0).
$
]
#solution[
Asymptotically,
$
sqrt(n+sqrt(n+sqrt(n))) &= sqrt(n) (1 + sqrt(n+sqrt(n))/n)^(1/2) = sqrt(n) (1+1/2 sqrt(n+sqrt(n))/n + o(1/n^2)) \
&= sqrt(n) + 1/2 + o(1/sqrt(n)). \
a_n &= ((1/2 + o(1/sqrt(n)))/n)^p = 1/(2n)^p + o(1/n^(p+1)). ("Needs Recheck")
$
So $p > 1$ for convergence.
$
root(3,(n-1)/(n+1)) &= (1-2/(n+1))^(1/3) \
&tilde 1 - 2/(3n) + o(1/n), \
b_n &= (1-1+2/(3n) + o(1/n))^p = (2/(3n) + o(1/n))^p = 1/((3n)/2)^p + o(1/n^p).
$
]
= Improper Integrals
#question[
$
integral_0^(+oo) 1/(1+x^(4)) dif x.
$
]
#solution[
$t := 1/(1+x^4)$.
$
I &= 1/4 integral_0^1 t^(-1/4) (1-t)^(-3/4) dif t \
&= 1/4 Beta(1/4,3/4) = 1/4 (Gamma(1/4)Gamma(3/4))/Gamma(1) = pi/(2sqrt(2)).
$
]
#proposition[
$f(x)$ is monotone decreasing on $[0, +oo)$, $f(x) >= 0$, if $
integral_0^(+oo) f(x) dif x
$ converges, then $
lim_(x->+oo) x f(x) = 0, lim_(x -> +oo) x ln x f(x) = 0.
$
]
#proof[
By Cauchy's criterion, for any $epsilon > 0$, there exists $N$, for any $n > N$, we choose $x/2 > N$,
$
epsilon > integral_(x/2)^x f(t) dif t >= f(x) x/2 => lim_(x->+oo) x f(x) = 0.
$
$
epsilon > integral_(x_1)^(x_2) f(x) dif x &= integral_(x_1)^(x_2) x f(x) dif ln x \
&>= (min_(x_1 <= x <= x_2) x f(x)) integral_(x_1)^(x_2) dif ln x = 0 \
&"Choose" x_1 = sqrt(x), x_2 = x. \
&= 1/2 x f(x) ln x.
$
]
#question[
Let $f(x)$ be monotone decreasing on $[1, +oo)$, $f(x) >= 0$, $f(x) -> 0$. Show that if $
integral_1^(+oo) (f^(p-1)(x))/x^(1/p) dif x (p>1)
$ converges, then $
integral_1^(+oo) f^p (x) dif x
$ converges.
]
#proof[
Note all the integrand are positive, so we can apply the comparison test.
$
lim_(x -> +oo)
$
]
#question[
Let $f(x)>=0$ be monotone increasing on $[0, +oo)$, and $F(x) = integral_0^x f(t) dif t$; $lim_(x -> 0^+) x/F(x) = 1$. Show that $
I = integral_0^(+oo) 1/f(x) dif x "and" J = integral_0^(+oo) x/F(x) dif x
$
both converge or both diverge.
]
#proof[
$
& x/2 f(x/2) <= integral_(x/2)^x f(t) dif t <= F(x) <= x f(x) \
=>& 2x/(2F(2x)) <= 1/f(x) <= x/F(x)
$
]
#question[
Integrate
$
integral_0^(pi) 1/(a^2 sin^2 x + b^2 cos^2 x) dif x.
$
]
#solution[
$
I &= 1/(a b) integral_0^(pi) (a/b dif (tan x))/((a/b tan x)^2 + 1) dif x \
& "Let" t = a/b tan x, \
&= 1/(a b) (integral_0^(+oo) + integral_(-oo)^0) 1/(t^2+1) dif t = pi/(a b).
$
#note[
There's a singularity at $x = pi/2$, so we split the integral into two parts.
]
]
#lemma[
]
#question[
Integrate
$
integral_0^1 (ln x)/(1-x^2) dif x.
$
]
#solution[
$
I &= integral_0^1 ln x sum_(k=0)^(oo) x^(2k) dif x \
&= sum_(k=0)^(oo) integral_0^1 x^(2k) ln x dif x \
&= - sum_(k=0)^(oo) 1/(2k+1)^2 = -pi^2/8.
$
]
#question[
$
I = integral_0^1 x^alpha ln^beta x dif x
$
When is $I$ convergent?
]
#solution[
The singularities are $0,1$. We consider $
I_1 = integral_0^(1/2) x^alpha ln^beta x dif x, I_2 = integral_(1/2)^1 x^alpha ln^beta x dif x.
$
For $I_1$, use $integral_0^1 x^gamma dif x$. By a ratio test,
$
(x^alpha ln^beta x)/(x^(alpha-epsilon)) = x^epsilon ln^beta x -> 0, x -> 0.
$
(We need the $epsilon>0$ to ensure the convergence of the ratio).
Note that $integral_0^(1/2) 1/x^(epsilon-alpha) dif x$ converges iff $epsilon - alpha < 1 => alpha > -1$.
For $I_2$, $
x^alpha ln^beta x tilde x^alpha / (1-x)^(-beta), "because" \
ln x = ln (1-(1-x)) tilde 1-x (x->1^-).
$
So $I_2$ converges iff $beta>-1$.
]
= Fourier Series
#question[
$f(x) tilde a_0/2 + sum_(n=1)^(+oo) (a_n cos n x + b_n sin n x)$, show that
$
ser a_n/n, ser b_n/n
$
both converge.
]
#proof[
$
ser abs(a_n/n) &<= ser (a_n^2+1/n^2)/2 \
&<= 1/2 (a_0^2/2 + ser (a_n^2+b_n^2) + ser 1/n^2) \
&<= 1/2 (norm(f)^2 + pi^2/6) < +oo. ("Bessel")
$
]
|
|
https://github.com/ytwei3/ytwei3.github.io | https://raw.githubusercontent.com/ytwei3/ytwei3.github.io/main/cv/yaotian_wei_resume.typ | typst | MIT License | #show heading: set text(font: "Times New Roman")
#show link: underline
#set text(
size: 12pt,
font: "Times New Roman"
)
#set page(
margin: (x: 1.75cm, y: 1.2cm)
)
#set par(justify: true)
#let hline() = {v(-3pt); line(length: 100%); v(-5pt)}
#align(center, text(18pt)[*<NAME> (Walter)*])
#align(center)[
(+86) 18889832086 |
(+852) 46400847 |
#underline[<EMAIL>] |
#link("https://linkedin.com/in/yaotianwei")[linkedin.com/in/yaotianwei]
]
== EDUCATION
#hline()
*City University of Hong Kong* #h(1fr) Sep 2019 - Jun 2024 (Expected) \
BBA in Finance #h(1fr) Hong Kong\
- Courses: Trading Room Workshop, Corporate finance, Derivatives & Risk Management, etc
Minor in Computing
- Courses: Data Structures & Algorithms, Machine learning, Cryptography, etc
== WORK EXPERIENCE
#hline()
*Huawei* #h(1fr) Sep 2022 -- Jul 2023 \
Research internship (full-time) #h(1fr) Hong Kong \
- *Carried out research on heterogeneous acceleration.* Applied an AI framework to data science tasks to provide universal heterogeneous acceleration capabilities.
- *Developed and maintained python library.* Refactored the repository by restructuring the whole codebase, resulting in a more readable and modular code structure.
- *Implemented parallel graph algorithm.* Designed a connected-component algorithm based on a disjoint set, resulting in a 1000x performance boost compared to the most popular library NetworkX.
- *Optimized machine learning inference methods.* Implemented methods in MindSpore, resulting in a more than 10x performance improvement.
== PROJECT
#hline()
*#link("https://inside.rotman.utoronto.ca/financelab/competition/ritc/")[Rotman International Trading Competition]* #h(1fr) Feb 20-25, 2024 (Ongoing) \
Algo trader#h(1fr) Canada
- *Implementing a high-frequency front running algorithm with CAPM.* Confirm the current price movement of the underlying asset by calculating beta and index price movement.
- *Developed algorithmic volatility trading strategy.* Achieved automated delta hedging, trading options with maximum vega exposure, and utilizing momentum indicators as trading signals.
- *Implemented market making and Arbitrage strategies.* Achieved inventory management and facilitated cash-out between different exchanges.
- *Optimized Strategies in Microsecond latency.* Implement latency-sensitive operations in Rust, achieved high-performance computing logic and asynchronous networking capabilities.
- *Analyzed strategies performance with backtesting framework.* Optimized strategies with backtesting.py and achieved a higher annualized return.
*#link("https://zk-learning.org/")[Zero Knowledge Proof MOOC]* #h(1fr) Dec. 2023 \
- *Analyzed various web3 products.* Gained familiarity with basic concepts of blockchain technologies and Ethereum applications.
- *Conducted research on mainstream ZKP projects.* Analyzed codebases such as Arkworks, Bellman, and Libsnark; familiar with Rust project structure.
== SKILLS
#hline()
*Languages:* Mandarin, English, Cantonese, Japanese\
*Programming Languages:* C++, Python, Rust, Golang\
*Tech Skills:* Algorithmic Trading, Data Analysis, Parallel Acceleration, Web3 Technology
#text(
fill: gray,
align(right, "Last updated on 29 Feb, 2024")
) |
https://github.com/Myriad-Dreamin/typst.ts | https://raw.githubusercontent.com/Myriad-Dreamin/typst.ts/main/fuzzers/corpora/text/smartquotes_01.typ | typst | Apache License 2.0 |
#import "/contrib/templates/std-tests/preset.typ": *
#show: test-page
// Allow 2 graphemes
#set smartquote(quotes: "a\u{0301}a\u{0301}")
"Double and 'Single' Quotes"
#set smartquote(quotes: (single: "a\u{0301}a\u{0301}"))
"Double and 'Single' Quotes"
|
https://github.com/xrarch/books | https://raw.githubusercontent.com/xrarch/books/main/documents/a4xmanual/titlepage.typ | typst | *XR/station Project* \
*A4X Firmware Manual (Preliminary & Incomplete)* \
_Revision 1.0, May 29, 2024_ |
|
https://github.com/AU-Master-Thesis/thesis | https://raw.githubusercontent.com/AU-Master-Thesis/thesis/main/sections/3-methodology/study-1/factor-structure.typ | typst | MIT License | #import "../../../lib/mod.typ": *
=== Factor Structure <s.m.factor-structure>
// #jonas[I do not remember if you have seen this?]
In #gbpplanner, the different factor variants are implemented as separate classes that inherit from a `Factor` base class. Rust does not support inheritance as found in #acr("OOP")-based languages like C++. Instead, composition and traits are used for subtyping@the-rust-book. The `Factor` trait, as seen in @listing.factor-trait, is designed to group all requirements expected of any factor variant used in the factorgraph.
#listing([
```rust
trait Factor: std::fmt::Display {
fn neighbours(&self) -> NonZeroU32;
fn skip(&self, state: &FactorState) -> bool;
fn measure(&self, state: &FactorState, lin_point: &Vector<f64>) -> Measurement;
fn jacobian(&self, state: &FactorState, lin_point: &Vector<f64>) -> Cow<'_, Matrix<f64>>;
fn jacobian_delta(&self) -> StrictlyPositiveFinite<f64>;
fn first_order_jacobian(&self, state: &FactorState, lin_point: Vector<f64>) -> Matrix<f64> { ... }
}
```
],
caption: [
The `Factor` trait, used by the factor graph to abstract the different types of factors.
Found in the #gbp-rs(content: [#crates.gbpplanner-rs]) crate at #source-link("https://github.com/AU-Master-Thesis/gbp-rs/blob/9d06aab257eec234a57a8a8a87ce54369da00cce/crates/gbpplanner-rs/src/factorgraph/factor/mod.rs#L60", "src/factorgraph/factor/mod.rs:60")
]
) <listing.factor-trait>
- `neighbours()`: How many neighbouring variables the factor is connected to. `NonZeroU32` is a standard library type representing the interval $[1, 2^(32-1)]$, to prevent implementors from returning $0$, which would represent an invalid state, since a factorgraph cannot be disconnected.
- `skip()`: Whether the factor should be skipped during factor iteration. This is used by factors for which it might not make sense to be active all the time. The interrobot factors use it to skip participation when the estimated positions of the two variables they are connected with are further apart than $d_r$, as explained in @s.m.factors.interrobot-factor. Tracking factors use it to deactivate while the global planner asynchronously tries to find a path, see @s.m.global-planning.
- `measure()`: The measurement function $h(#m.Xb _k)$ used in the factor potential update step as described in @s.b.gbp.factor-update.
- `jacobian()`: The Jacobian of the factor. Used in the factor potential update step as described in @s.b.gbp.factor-update. To minimize repeated heap allocation of matrices #acr("CoW") semantics are used by wrapping the returned matrix in the `Cow<'_, _>` container@the-rust-book. With this implementors can opt to return a reference to an already allocated matrix they own, instead of a new copy of it. The dynamics factor makes use of this optimization as its Jacobian is precomputed once at initialization, and does not depend on the input linearization point #footnote([Found in the #gbp-rs(content: [#crates.gbpplanner-rs]) crate at #source-link("https://github.com/AU-Master-Thesis/gbp-rs/blob/9d06aab257eec234a57a8a8a87ce54369da00cce/crates/gbpplanner-rs/src/factorgraph/factor/dynamic.rs#L67-L70", "src/factorgraph/factor/dynamic.rs:67-70")]).
- `jacobian_delta()`: The $delta$ used in the Jacobian first order finite difference approximation. `StrictlyPositiveFinite<f64>` is used to enforce that the returned value lies in the $RR_+ \\ {0, infinity}$ interval representable by the IEEE 754 double precision encoding. To ensure the perturbation is non-zero, and no invalid values such as $infinity$ and NaN is given.
- `first_order_jacobian()`: The first order Jacobian of the factor. This method comes with a default implementation that used the `jacobian()` and `jacobian_delta()` implementation using the finite difference method as defined in @s.m.factors.jacobian-first-order.
The `Display` trait is added as a requirement for the `Factor` trait to enforce the introspection abilities built into the simulator for when a variable is clicked on with the mouse cursor. See @s.m.introspection-tools for this.
The `&FactorState` argument passed by reference to `jacobian()`, `measure()`, and `first_order_jacobian()` is a structure containing common data associated with all factors, such as the factor's measurement precision matrix $Lambda_M$ and its initial measurement $h(#m.Xb _0)$, see @s.b.gbp.factor-update. This is necessary as traits only make it possible to be generic over behaviour and not state, as can be done with inheritance. While this moves the responsibility of tracking the state to the caller, it was deemed preferable over having each implementor copy the same fields manually.
All factor implementations are grouped together in a tagged union called `FactorKind`#footnote[Found in the
#gbp-rs(content: [#crates.gbpplanner-rs]) crate at #source-link("https://github.com/AU-Master-Thesis/gbp-rs/blob/9d06aab257eec234a57a8a8a87ce54369da00cce/crates/gbpplanner-rs/src/factorgraph/factor/mod.rs#L481-L495", "src/factorgraph/factor/mod.rs:481-495")
] to enable static dispatch when the code is compiled instead of dynamic dispatch. Using static dispatch is less flexible in terms of extensibility, as all implementors have to be known by the library at compile time, but it allows for better performance as the compiler can better optimize and possibly inline method calls@the-rust-book, which is important as these methods have to run in the hot code path of the simulation.
\
#par(first-line-indent: 0pt)[
  To better understand the effect of different factor variants on the joint optimization, each kind of factor can be enabled or disabled. When a factor is disabled it will not contribute or consume messages during message passing steps, effectively removing it temporarily. The Settings Panel provides a section of toggles, one for each variant, that can be used to toggle it. Likewise, these preferences can be read from the scenario's `config.toml` to easily test scenarios where a specific factor variant is not relevant.
]
|
https://github.com/SE-legacy/physics_lectures_3_term | https://raw.githubusercontent.com/SE-legacy/physics_lectures_3_term/main/lesson_2/main.typ | typst | #import "conf.typ": conf
#show: conf.with(
meta: (
title: "Физика. Занятие №2, 16.09.2024",
author: "<NAME>., <NAME>. Чернышевского",
group: 251,
city: "Саратов",
year: 2024,
)
)
= Алгоритм описания ФС "Электромагнитное поле"
== 3. Выбор способа описания
#image("images/1.png")
- Классический (Классическая модель)
- Величина заряда может быть любой
- Носители заряда - частицы
- Расстояние между зарядами можно измерять с любой точностью
- Квантовый (Квантовая модель)
- Величина заряда должна быть кратна элементарному, т.е. заряд дискретен (квантован)
- Носители заряда могут обладать и волновой, и корпускулярной природой
- Координату частицы-волны невозможно указать точно
_Параметры состояния идеализированной системы_
$E(r)$ - напряженность электрического поля определим как количественную меру свойства (а) электростатического поля: *численно равна силе, действующей на единичный положительный заряд, помещенный в данную точку поля;*
$phi_1 - phi_2 = (Е * d r)$ - разность потенциалов двух точек поля определим как количественную меру свойства (b), т.е. способность совершать работу по перемещению зарядов: *численно равна работе сил поля по перемещению единичного положительного заряда из первой точки во вторую.* $Е(r) = - (d phi )/ (d r)$
== 4. Математическая модель
#image("images/2.png")
- Первый --- Классический
- Экспериментальный закон Кулона
- Устанавливающий величину силы взаимодействия двух точечных зарядов без описания механизма передачи действия
- Второй --- полевой
- Теорема Остроградского-Гаусса
- Связывающий заряды с параметрами (напряженностью) поля в пространстве.
_Теорема Остроградского-Гаусса_
Поток вектора напряженности электрического поля через любую замкнутую поверхность равен алгебраической сумме зарядов, находящихся внутри поверхности, деленной на электрическую постоянную $epsilon_0$
Данная теорема позволяет решать следующие задачи:
+ По заданной конфигурации зарядов можно определить электрическое поле.
+ По заданному электрическому полю можно найти конфигурации зарядов
+ Закон Кулона, определяющий силу взаимодействия точечных зарядов, может быть записан в полевой форме:
$F_21 = k (abs(q_1) abs(q_2)) / r^3 r$ $F = q E$
_Принцип суперпозиции полей_
Если электрическое поле образовано несколькими зарядами, то напряженность результирующего поля равна векторной сумме напряженностей полей каждого из зарядов.
$E = sum_(i=1)^n E_i$; $phi = U / q = 1 / (4 pi epsilon_0) sum_(i=1)^n q_i/r_i = sum_(i=1)^n phi_i$
Пример 1
Если радиус заряженной сферы уменьшать, то в пределе получим напряженность поля точечного заряда
$E = q / (4 pi epsilon_0 r^2)$
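Эта формула сразу следует из теоремы Остроградского-Гаусса: если в качестве замкнутой поверхности взять сферу радиуса $r$ с точечным зарядом $q$ в центре, то

$ Phi_E = E dot 4 pi r^2 = q / epsilon_0 => E = q / (4 pi epsilon_0 r^2) $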
#image("images/4.png")
== 6. Анализ результатов
Используя связь параметров, получим следующие уравнения.
- *Закон Кулона* для точечных зарядов $F_21 = k (abs(q_1) abs(q_2)) / r^3 r$
- *Потенциал точки поля* $phi = integral_r^infinity (E d r) = q / (4 pi epsilon_0 r)$
- *Потенциал системы неподвижных зарядов* $phi = 1 / (4 pi epsilon_0) sum_(i=1)^n q_i / r_i$
- *Напряженность поля* $E = - (d phi) / (d r)$
Используя связь параметров, получим следующие уравнения.
- *Разность потенциалов* между двумя точками поля $delta phi = q / (4 pi epsilon_0) (1 / r_1 - 1/ r_2)$
- *Работа сил электростатического поля* $A = integral_(r_1)^r_2 F d r = (Q q) / (4 pi epsilon_0) (1 / r_1 - 1 / r_2)$
- *Циркуляция вектора E*. Если перемещение заряда происходит по произвольной, но замкнутой траектории L, то работа сил электростатического поля равна нулю: $integral_L (E d r) = 0$
Пример 2
$Phi_E = Phi_(E"осн") + Phi_(E"бок") = 2 integral_(S"осн") E d S + integral_(S"бок") E d s = 2 E S_"осн" + 0 (<= E "перпенд" d s) = 2 E S$
$Phi_E = (sum q_i) / epsilon_0 = (sigma S) / epsilon_0 => E = sigma / (2 epsilon_0)$
Пример 3
== 5. Решение уравнений состояния
На участке 2: векторы сонаправлены $E = E_1 + E_2 = sigma / (2 epsilon_0) + sigma / (2 epsilon_0) = sigma / epsilon_0$
Поле сосредоточено между плоскостями. *Напряженность поля* во всех точках этой области одинакова по величине и по направлению, т.е. *однородно*.
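Отсюда, например, сразу находится разность потенциалов между плоскостями, расположенными на расстоянии $d$ друг от друга:

$ phi_1 - phi_2 = integral_0^d (E d r) = (sigma d) / epsilon_0 $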
*Теорема Ирншоу*
*_Система неподвижных электрических зарядов не может находиться в устойчивом равновесии._*
Заряд $+ q$ будет находиться в равновесии, если при его перемещении на расстояние $d r$ со стороны всех остальных зарядов системы, расположенных вне поверхности $S$, будет действовать сила $F$, возвращающая его в исходное положение.
Для этого поле остальных зарядов на всей поверхности $S$ должно быть направлено внутрь, т.е. $Phi_E = integral_S (E d S) < 0$.
Согласно теореме Гаусса, если заряды не охватываются замкнутой поверхностью, то $Phi_E = 0$.
*Противоречие* доказывает теорему Ирншоу.
#image("images/6.png")
= Лекция 2. Электрическое поле в веществе
- Все вещества состоят из атомов и молекул, находящихся в непрерывном хаотичном движении и взаимодействующих между собой с силами притяжения и отталкивания электромагнитной природы
- В зависимости от внешних условий и внутренних сил взаимодействия, вещество может находиться в трех агрегатных состояниях
- Атом вещества представляет собой систему заряженных частиц (частей): протоны, нейтроны, электроны, (или ядро-электроны)
- Движущиеся заряды атомов испытывают воздействие внешнего как электрического, так и магнитного полей
- Движущиеся заряды атома порождают собственные электрические и магнитные поля, которые по принципу суперпозиции изменяют внешнее электромагнитное поле.
#image("images/2_1.png")
*Зонная теория твердого тела* --- это теория валентных электронов, движущихся в периодическом потенциальном поле кристаллической решетки.
*Энергетический спектр* --- шкала количественных значений энергии электронов атомов данного вещества
Наивысшая из разрешенных энергетических зон в полупроводниках и диэлектриках, в которой при температуре 0 К все энергетические состояния заняты электронами, называется валентной, следующая за ней --- зоной проводимости. В проводниках зоной проводимости называется наивысшая разрешенная зона, в которой находятся электроны при температуре 0 К.
Твёрдое вещество: Проводники, полупроводники, диэлектрики
Диэлектрики: активные (сегнетоэлектрики, пьезоэлектрики, пироэлектрики, электролюминофоры, электреты), пассивные (полярные, неполярные)
Электроны полностью заполненной зоны не могут принимать участия в создании электрического тока. Для появления электропроводности необходимо часть электронов перевести из валентной зоны в зону проводимости.
Электроны, находящиеся в зоне проводимости, нельзя считать абсолютно свободными. Эти электроны взаимодействуют с периодическим потенциальным полем кристаллической решетки. При математическом описании поведения электронов в зоне проводимости используют понятие эффективной массы. Эффективная масса не определяет ни инерционных, ни гравитационных свойств электрона.
Ширина запрещенной зоны меняется с изменением температуры. Это происходит по двум основным причинам: из-за изменения амплитуды тепловых колебаний атомов решетки и из-за изменения межатомных расстояний, т.е. объёма тела
*Границы применимости*:
+ Потенциал кристаллической решетки строго периодичен
+ Взаимодействие между свободными электронами может быть сведено к одноэлектронному самосогласованному потенциалу (а оставшаяся часть рассмотрена методом теории возмущений)
+ Взаимодействие с фононами слабое (и может быть рассмотрено по теории возмущений).
_Дополнения к физической модели_:
- Полагаем, что свободных электронов в проводнике достаточно много (бесконечно много)
- На первом этапе внешнее электрическое ($E_0$) поле проникает в вещество
- Пренебрегаем хаотичным движением электронов
Для равновесия внутри проводника необходимы два условия:
$E = E' + E_0 = 0$ и $(dif phi) / (dif r) = - E = 0$ $=>$ $phi_"внутри" = "const"$
Вне тела и вдоль поверхности проводника будет наблюдаться равновесие, если выполняется условие
$E "перпенд" d r => (E d r) = 0 = - d phi$
$phi_"поверхности" = "const"$
Таким образом, чтобы заряд находился в равновесии, необходимо:
+ Напряженность поля внутри проводника равна нулю
+ Внешняя напряженность поля перпендикулярна поверхности проводника
+ Потенциал на поверхности постоянен
Электростатическое поле перераспределяет заряды в нейтральном проводнике так, что его поверхность становится эквипотенциальной.
Примеры экивпотенциальных поверхностей:
+ Точечный заряд
+ Диполь
+ Два равных одноименных заряда
Возьмем проводник и начнём его заряжать. Чем больше заряд на проводнике, тем больше его потенциал.
$q(t) = C phi(t), => C = q(t) / phi(t) = (d q) / (d U)$ |
|
https://github.com/chendaohan/bevy_tutorials_typ | https://raw.githubusercontent.com/chendaohan/bevy_tutorials_typ/main/07_bundles/bundles.typ | typst | #set page(fill: rgb(35, 35, 38, 255), height: auto, paper: "a3")
#set text(fill: color.hsv(0deg, 0%, 90%, 100%), size: 22pt, font: "Microsoft YaHei")
#set raw(theme: "themes/Material-Theme.tmTheme")
= 1. Bundles (捆绑包)
你可以将 Bundle 看作是创建实体的“模板”。通过 Bundle 而不是一个个的添加组件,可以确保你不会意外忘记实体中的某个重要组件。如果你没有设置 Bundle 中的所有组件,Rust 编译器会给出错误,从而帮助你确保代码的正确性。
Bevy 提供了许多内置的 Bundle 类型,你可以使用他们来生成常见的实体,这些 Bundle 以后用到了再说。
= 2. 创建 Bundle
要创建你自己的 Bundle ,请在结构体上派生 ```Rust Bundle``` Trait :
```Rust
#[derive(Bundle)]
struct PlayerBundle {
player: Player,
position: Position,
level: Level,
health: Health,
}
```
Bundle 可以嵌套,但是无论怎么嵌套,生成的实体是一个包含 Bundle 中有的所有组件的扁平的实体,一个实体中不能有重复的组件类型。
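下面是一个嵌套 Bundle 的示意(其中 `NamedPlayerBundle` 是为举例假设的名字,并非 Bevy 提供的类型):生成实体时,`PlayerBundle` 里的组件和 `Name` 会被展开为同一层级的扁平组件。

```Rust
#[derive(Bundle)]
struct NamedPlayerBundle {
    // 嵌套已有的 PlayerBundle,生成实体时会被"压平"
    base: PlayerBundle,
    // 额外的组件
    name: Name,
}
```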
= 3. 使用 Bundle
你可以在生成实体时使用 Bundle :
```Rust
commands.spawn(PlayerBundle {
player: Player,
level: Level(12),
health: Health(104),
position: Position(Vec2::new(5.2, 7.8)),
});
```
如果你想要给 Bundle 添加默认值:
```Rust
#[derive(Component, Default)]
struct Position(Vec2);
#[derive(Bundle, Default)]
struct PlayerBundle {
player: Player,
position: Position,
level: Level,
health: Health,
}
```
当 Bundle 有了默认值后,你就可以只设置部分组件:
```Rust
commands.spawn(PlayerBundle {
health: Health(104),
position: Position(Vec2::new(5.2, 7.8)),
..default()
});
```
= 4. 松散组件作为 Bundle
Bevy 将任意元组的组件视为 Bundle :
```Rust
commands.spawn((
PlayerBundle {
level: Level(5),
health: Health(72),
..default()
},
Name::new("小智"),
));
```
这使你可以轻松地使用一组松散的组件生成实体,或在生成实体时添加更多任意组件。然而,这样你就没有一个定义良好的结构体 Bundle 所提供的编译时正确性。
```Rust
commands.spawn((
Level(15),
Health(214),
Position(Vec2::new(78., 64.2)),
));
```
你应该认真考虑创建适当的结构体,特别是如果你可能会生成许多相似的实体。这将使你的代码更易于维护。
= 5. 查询
你不能查询这个 Bundle ,因为 Bevy 实体中的组件之间是扁平的。
这是错误的:
```Rust
fn query_player_bundles(bundles: Query<&PlayerBundle>) {}
```
这是正确的:
```Rust
fn query_players(players: Query<(&Level, &Health, &Position), With<Player>>) {}
``` |
|
https://github.com/floriandejonckheere/utu-thesis | https://raw.githubusercontent.com/floriandejonckheere/utu-thesis/master/thesis/template.typ | typst | #import "@preview/acrostiche:0.3.1": *
#let template(
// Document title
title: "",
// Document subtitle
subtitle: "",
// Institution
institution: "",
// Department
department: "",
// Unit
unit: "",
// author
author: "",
// Date of publication
date: none,
// Institution logo (SVG)
logo-file: none,
// Bibliography
bibliographies: none,
// Abstract
abstract: [],
// Keywords
keywords: (),
// Acronyms
acronyms: (),
// Chapters
chapters: (),
// Appendix
appendix: (),
// Document contents
body,
) = {
// Basic properties
set document(author: author, title: title)
set page(margin: 35mm)
set block(spacing: 1.5em)
set par(leading: 1.2em)
set text(size: 12pt, font: "New Computer Modern", lang: "en", region: "FI", hyphenate: false)
set math.equation(numbering: "(1)")
// Add some space above and below paragraphs
show par: set block(below: 2em)
// Add some space above and below headings
show heading: set block(above: 2em, below: 1.5em)
show heading: set par(justify: false)
show heading.where(
level: 1
): it => {
v(3em)
text(1.5em)[
#if it.numbering != none [
#box(width: 1.5em)[
#counter(heading).display()
]
]
#it.body
]
v(2em)
}
show heading.where(
level: 2
): it => {
v(1em)
text(1.2em, it)
}
// Hide numbering for deeply nested sections
show heading.where(level: 4): it => [#block(it.body)]
// Set gap between figure and caption
set figure(gap: 1em)
// Set space above and below figures and tables
show figure: set block(above: 3em, below: 3em)
// Don't justify table content
show table: set par(justify: false)
set bibliography(full: true, title: [References])
// Set page numbering to letters (but hide in footer)
set page(numbering: "a", footer: [])
// Count total pages
let total_pages = locate(loc => {
// Get the page counter at the first bibliography page, then subtract one
counter(page)
.at(query(bibliography, loc).first().location())
.first() - 1
})
// Count total appendix pages
let appendix_pages = locate(loc => {
// Get the sum of the page counters at the end of the appendix pages (where the <end_of_appendix> metadata is located)
query(<end_of_appendix>, loc)
.map(l => counter(page).at(l.location()).sum())
.sum()
})
// Initialize acronyms
init-acronyms(acronyms)
// Title page
if logo-file != none {
align(left, image(logo-file, width: 7cm))
}
v(3fr)
align(right, text(1.9em, title))
v(5fr)
align(
right,
)[
#par(leading: 0.6em)[
#text(1em, smallcaps(institution)) \
#text(0.9em, department) \
#text(0.9em, subtitle) \
#text(0.9em, unit) \
#text(0.9em, date) \
#text(0.9em, author) \
]
]
v(2fr)
align(
center,
par(leading: 0.6em, justify: true)[
#text(
"The originality of this thesis has been checked in accordance with the University of Turku quality assurance system using the Turnitin OriginalityCheck service.",
size: 0.7em,
hyphenate: false
)
]
)
pagebreak()
// Abstract
par(
leading: 0.6em,
)[
#upper(institution) \
#department
#v(0.6em)
#smallcaps(author): #title
#v(0.6em)
#subtitle, #total_pages p., #appendix_pages app. p. \
#unit \
#date
]
line(length: 100%)
par(
leading: 0.6em,
justify: true,
)[
#abstract
]
linebreak()
par(
leading: 0.6em,
justify: false,
)[
*Keywords*: #keywords.join(", ")
]
pagebreak()
// Table of contents
set heading(numbering: "1.1", supplement: "Chapter")
// Set page numbering to Roman numerals (and show in footer)
set page(numbering: "i", footer: none)
counter(page).update(1)
show outline.entry.where(
level: 1
): it => {
v(0.5em)
strong(it)
}
outline(indent: auto, depth: 3)
pagebreak()
// List of figures
outline(
title: [List of Figures],
target: figure.where(kind: image),
)
pagebreak()
// List of tables
outline(
title: [List of Tables],
target: figure.where(kind: table),
)
pagebreak()
// List of algorithms
outline(
title: [List of Algorithms],
target: figure.where(kind: "algorithm"),
)
pagebreak()
// List of acronyms
heading(outlined: false, numbering: none)[List of Acronyms]
grid(
columns: (1fr, 9fr),
gutter: 1em,
..acronyms.keys().map(key => (strong(key), acronyms.at(key))).flatten()
)
pagebreak()
// Main document
set par(justify: true)
// Set page number to Arabic numerals
set page(numbering: "1")
counter(page).update(1)
// Main document
for chapter in chapters {
include "chapters/" + chapter + ".typ"
pagebreak()
}
// Bibliography
// Hide page number
set page(footer: [])
bibliography(bibliographies.map(b => "bibliography/" + b + ".yml"), style: "ieee")
// Appendix
set heading(numbering: "A", supplement: "Appendix", outlined: false)
counter(heading).update(0)
// Show page number
set page(numbering: none, footer: [
#align(center,
counter(heading).display() + "-" + counter(page).display()
)
])
for app in appendix {
pagebreak()
// Reset appendix counter
counter(page).update(1)
include "appendices/" + app + ".typ"
[#metadata("") <end_of_appendix>]
}
}
|
|
https://github.com/Mc-Zen/tidy | https://raw.githubusercontent.com/Mc-Zen/tidy/main/tests/test_tidy.typ | typst | MIT License | #include "test_parse.typ"
#include "test_parse_argument_list.typ"
#include "test_testing.typ"
|
https://github.com/typst/packages | https://raw.githubusercontent.com/typst/packages/main/packages/preview/cetz-plot/0.1.0/src/cetz.typ | typst | Apache License 2.0 | // Import cetz into the root scope. Import cetz by importing this file only!
#import "@preview/cetz:0.3.1": *
|
https://github.com/alex-touza/fractal-explorer | https://raw.githubusercontent.com/alex-touza/fractal-explorer/main/paper/src/chapters/1_introduccio_a_la_geometria_fractal.typ | typst | #import "../environments.typ": *
#import "../utilities.typ": *
#import "../../meta.typ": *
#import "../shortcuts.typ": *
#import "@preview/cetz:0.2.2": canvas, draw
#import "@preview/dashy-todo:0.0.1": todo
#import "@preview/cetz:0.2.2": canvas, draw, plot
#import "@preview/suiji:0.3.0": *
= Introducció a la geometria fractal
En aquest primer capítol, s'exposarà la història de la geometria fractal, des de l'aparició de les corbes no derivables fins als primers objectes fractals. També es discutiran les característiques associades a aquestes formes.
/*
En aquest primer capítol, s'exposarà el concepte de _fractal_ i es definirà de forma matemàtica, explorant els diferents significats que s'han associat a aquest terme al llarg de les dècades. Després, es definiran les diferents _dimensions_ que es poden calcular d'aquest objecte matemàtic.
*/
#def[Veïnat d'un punt][
Un _veïnat d'un punt_ $p$ d'un espai topològic $(X, tau)$ (@def-espai-topològic) és un subconjunt $V$ de $X$ que conté un conjunt obert $U$ que conté $p$. Simbòlicament, $p in U subset.eq V subset.eq X$.
] <def-veïnat>
#def[Funció analítica][
Sigui $I$ un conjunt obert. Una funció $f: RR -> RR$ és _analítica_ en $I$ si per qualsevol punt $a$ del conjunt existeix un veïnat $J$ de $a$ (@def-veïnat) tal que $f$ es pugui expressar com una sèrie de potències que convergeixi a $a$:
$
f(x) = sum^infinity_(i=0) c_i (x - a)^n
$
on $c_i$ són els coeficients de cada terme de la sèrie. Simbòlicament,
$
forall a in I med (exists J : a in J : forall x in J med f(x) = sum^infinity_(i = 0) c_i (x-a)^n "és convergent a" a )
$
Evidentment, si la funció no és contínua en $a$, aleshores tampoc és analítica en $a$, ja que tota sèrie de potències és contínua allà on convergeix. Per tant, una funció com $g(x) = x^2/(x-1)$ és analítica en $RR - {1}$.
Si per al punt $a$ la sèrie no convergeix a cap nombre o convergeix a un valor diferent de $f(a)$, aleshores la funció tampoc és analítica. Per exemple, la funció $f(x)= sum^infinity_(n=1) x^n$ no és analítica a tot $RR$ perquè la sèrie no és convergent per a $abs(x) >= 1$, però les funcions trigonomètriques bàsiques, $sin$ i $cos$, estan definides amb sèries convergents:
$
sin(x) = sum^infinity_(i = 0) (-1)^n (x^(2n+1))/(2n + 1)!, wide cos(x)=sum^infinity_(i = 0) (-1)^n (x^(2n))/(2n!)
$
La demostració de la convergència d'aquestes funcions es proposa com un exercici per al lector.#footnote[
#printable-hyperlink("https://proofwiki.org/wiki/Sine_Function_is_Absolutely_Convergent", "Demostració per al sinus.")
]
] <def-funcio-analitica>
#def[Derivable gairebé pertot][
Una funció $f$ és _derivable gairebé pertot_ si el conjunt de punts $A = {x : exists.not f'(x)} $ (els punts en què la derivada no existeix) és finit o comptable./*#footnote([
Formalment, és quan el conjunt de punts no derivables és de mesura nul·la. Tots els conjunts finits o comptables són nuls, però també existeixen conjunts infinits o no comptables que són nuls, així que la definició exposada és incompleta. S'ha optat per simplificar la definició i així evitar introduir el concepte de mesura a aquest primer capítol.
])*/
Anàlogament, una funció $f$ és _no derivable gairebé pertot_ si el conjunt de punts $B = {x : exists f'(x)}$ (els punts en què la derivada existeix) és finit o comptable.
] <def-derivable-gairebe-pertot>
/*
#def[Conjunt dens][
Donat els conjunts $A$ i $X$, $A$ és un conjunt dens a $X$ si compleix alguna d'aquestes condicions (només es llisten les rellevants):
#enum(numbering: "(I)")[
El subconjunt tancat de $X$ (informalment, que conté el seu contorn) més petit contenint $A$ és $X$ mateix.
][
Existeix una intersecció entre $A$ i tots els subconjunts oberts (és a dir, que no contenen el seu contorn) no buits de $X$.
][
Qualsevol punt de $X$ pertany a $A$ o és un punt límit de $A$.
]
]
*/
== Origen de la geometria fractal
=== Riemann, Weierstrass i les corbes no derivables
/*
Al llarg de la història de les matemàtiques, s'han formulat diversos conjunts i formes geomètriques que avui dia considerem fractals.
*/
Les matemàtiques avançades sovint es centren en funcions derivables, és a dir, aquelles formes en què és possible dibuixar una recta tangent a qualsevol punt: molts dels conceptes de càlcul depenen de la derivabilitat de l'objecte matemàtic que s'estudia.
// Aquesta concepció prové originalment de <NAME>
El càlcul, una de les branques més importants de les matemàtiques, concebut de forma simultània per Newton i Leibniz, es basava en la suposició que, donada una funció analítica $f$ (@def-funcio-analitica) que és contínua a l'interval $[a,b]$, $f$ és derivable gairebé pertot punt dins $(a,b)$ (@def-derivable-gairebe-pertot). Tots els grans matemàtics d'abans del segle XX, com Gauss i Cauchy, suposaven que això era cert, o almenys no van dir el contrari.
El matemàtic i físic francès André-Marie Ampère va fer una demostració ---no gaire rigorosa--- d'aquesta conjectura, basant-se en un teorema sobre les corbes contínues (@teorema_ampere) @Kucharski2022.
#theorem[Teorema d'Ampère de les corbes contínues#footnote[
No confondre amb les lleis de força i circuits d'Ampère.
]][
Tota corba contínua ha de tenir seccions que siguin creixents, decreixents o completament horitzontals.
] <teorema_ampere>
El teorema d'Ampère, tanmateix, no considerava el cas que aquestes seccions siguin infinitament petites---és a dir, assumeix que una corba infinitament rugosa és impossible.
Durant la dècada de 1860, Bernhard Riemann va proposar una funció contínua, definida analíticament, que no era, suposadament, derivable enlloc. Es defineix amb l'expressió (@fig-grafica-riemann)
$
R(x)=sum^infinity_(n=1) sin(n^2 x)/n^2
$
Tanmateix, Riemann no va demostrar que fos així. Durant el segle XX s'ha aclarit en quins punts la funció de Riemann és derivable @Hardy1916 @Gerver1969 @Gerver1971: Hardy va provar la no derivabilitat per a punts $r pi$ tal que $r$ és un nombre irracional, i Gerver va provar la derivabilitat en els punts en què la variable independent pren un valor de $pi$ multiplicat per una fracció de nombres enters senars. Així doncs, com que el conjunt de punts en què la derivada existeix és comptable, la funció de Riemann és no derivable gairebé pertot.
Finalment, la primera funció que es va demostrar que no era derivable en cap punt, i que per tant falsava el teorema d'Ampère va ser proposada per Karl Weierstrass el 1872:
$
W(x)=sum^infinity_(n=1) a^n cos(b^n x pi)
$
tal que $0 < a < 1$, $b$ és un enter positiu i $display(a b > 1 + 3/2pi)$.
#figure(caption: [Gràfica de la funció de Riemann.])[
#canvas({
import draw: *
let Rie(n) = {
      return x => (range(1, n + 1).map(i => (calc.sin(i * i * x) / (i*i))).sum() )
}
if PLOTS {
plot.plot(size: (10,8), axis-style: "school-book", plot-style: (stroke: blue.transparentize(50%)), y-label: "R(x)", {
plot.add(domain: (-0.8, 0.8), samples: 250, axes: ("x", "y"), Rie(15))
})
} else {
content((0, 0), text(red)[Gràfiques desactivades])
}
})
] <fig-grafica-riemann>
#figure(caption: [Gràfica de la funció de Weierstrass.])[
#canvas({
import draw: *
let a = 0.5
let b = 12
let We(n) = {
return x => (range(1, n + 1).map(i => (calc.pow(a, i)) * calc.cos(calc.pow(b, i) * x * calc.pi)).sum() )
}
if PLOTS {
plot.plot(size: (10,8), axis-style: "school-book", plot-style: (stroke: blue.transparentize(50%)), y-label: "W(x)", {
plot.add(domain: (-0.8, 0.8), samples: 250, axes: ("x", "y"), We(15))
})
} else {
content((0, 0), text(red)[Gràfiques desactivades])
}
})
] <fig-grafica-weierstrass>
La funció de Weierstrass (@fig-grafica-weierstrass) va tenir tal impacte en les matemàtiques de l'època, que Poincaré es va referir a aquesta funció com un "monstre" i un "ultratge contra el sentit comú". Lògicament, els matemàtics ja coneixien funcions que tenien un conjunt de punts aïllats no derivables; però l'existència d'una funció no derivable gairebé enlloc implicava que la proposició en què es basava el càlcul era falsa ---que qualsevol funció contínua és derivable gairebé pertot---, per culpa de les definicions imprecises i demostracions que abusaven de la intuïció.
Aquesta fita va ser una autèntica lliçó per als matemàtics; mostrava com d'important era mantenir la rigorositat i no fer masses suposicions de les quals més tard ens podem penedir. En tot cas, la funció de Weierstrass obria les portes a un nou món matemàtic, el món de les corbes infinitament rugoses./*; és a dir, les fractals. De fet, avui dia la seva gràfica es consideraria un objecte fractal. */
=== Cantor, Koch i la mirada a l'infinit
La funció de Weierstrass va començar a despertar la curiositat en el "món rugós"; durant les següents dècades, es van idear nous objectes matemàtics que van anar construint la geometria fractal com la coneixem avui dia.
<NAME>, el pare de la teoria de conjunts, qui va assistir a les classes de Weierstrass, va crear el 1883 una funció $psi$ amb unes característiques molt interessants, definida a l'interval $[0, 1]$: conté diverses simetries entre punts i és derivable gairebé pertot, excepte un conjunt de punts molt concrets, que anomenem conjunt de Cantor (vegeu @cap-cantor).
A principis del segle XX, el 1904 Helge von Koch va proposar una altra forma de trobar una corba contínua però no derivable. A diferència de Weierstrass, que ho feia de forma totalment analítica, l'enfocament de Koch era purament geomètric, de forma que la no derivabilitat de l'objecte és òbvia (encara que no és trivial demostrar-la formalment). Es tracta de la corba de Koch, que, unida amb dues còpies iguals d'ella mateixa, forma el floc de neu de Koch.
Aquest últim objecte es construeix amb un algorisme geomètric recursiu, és a dir, l'objecte es conté a si mateix. D'aquesta manera, la corba de Koch és el límit a l'infinit de realitzar l'algorisme infinites vegades (simbòlicament, $C = lim_(n->+infinity) C_n$). Aquest mètode de generació s'anomena sistemes de funcions iterades (IFS), que s'estudia en detall més endavant.
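Una conseqüència immediata d'aquesta construcció és que la llargada de la corba creix sense límit: si suposem que el segment inicial $C_0$ té llargada 1, cada iteració substitueix cada segment per quatre segments d'un terç de la seva llargada, de manera que

$ "llargada"(C_n) = (4 / 3)^n --> +infinity "quan" n -> +infinity $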
/*
$
psi'(x) = cases(0 wide& "si" x in.not C,+infinity & "si" x in C)
$
*/
El 1915, va aparèixer una altra fractal construït amb el mateix mètode, el conegut triangle de Sierpiński. Aquest objecte és encara més interessant, perquè es pot generar amb molts algorismes diferents.
Els objectes matemàtics mencionats en aquest apartat (la funció de Cantor, el conjunt de Cantor i la corba de Koch) requereixen un desenvolupament més profund; es defineixen a l'apartat corresponent.
=== Einstein, Perrin i el moviment brownià
El moviment brownià es va concebre inicialment com el conjunt de trajectòries que fan les partícules suspeses en un fluid. S'atribueix a Albert Einstein la matematització d'aquest fenomen. En un article de 1905, va obtenir una expressió per a la difusió molecular mitjançant un model matemàtic probabilístic i va predir que les interaccions entre molècules diluïdes causen moviments caòtics. El 1909, el físic francès Jean Perrin va dur a terme una sèrie d'experiments per demostrar-ho empíricament.
#figure(caption: [Exemple d'una possible trajectòria browniana. En negre, la trajectòria que es mesuraria en un interval de temps $Delta t_a$, i en gris, la trajectòria que es mesuraria en uns intervals $Delta t_b$, on $Delta t_b < Delta t_a$.])[
#canvas(
{
import draw: *
let draw_point(x, y, name) = {
circle((x, y), radius: .25, name: name)
content((x, y), text(10pt, name))
}
draw_point(-5, 0, "A")
draw_point(6, -2, "B")
line("A", "B")
set-style(stroke: gray.lighten(40%))
line("A", (-4, 2))
line((-4, 2), (-2, -2))
line((-2, -2), (2, 1))
line((2, 1), (4, -2))
line( (4, -2), "B")
}
)
] <fig-mov-brownia>
Com mostra la @fig-mov-brownia, resulta impossible determinar la trajectòria en un moviment brownià perquè, a mesura que augmenta la precisió, augmenta la incertesa de la posició del cos. Per tant, com explica Perrin, el moviment brownià és un altre exemple de rugositat infinita i de corba no derivable, ja que intentar traçar una recta tangent a un punt de la trajectòria és impossible @Bigg2011.
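Aquesta idea es pot quantificar per al moviment brownià matemàtic (és un càlcul del model idealitzat, no una mesura experimental): el desplaçament típic durant un interval $Delta t$ és proporcional a $sqrt(Delta t)$, de manera que la llargada mesurada d'una trajectòria de durada $T$ amb pas de mostreig $Delta t$ creix com

$ L(Delta t) approx T / (Delta t) dot sqrt(Delta t) = T / sqrt(Delta t) --> +infinity "quan" Delta t -> 0 $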
Fins al dia d'avui, s'han trobat aplicacions de les dinàmiques brownianes més enllà de l'estudi de la difusió molecular, com el moviment d'estrelles dins d'una galàxia.
=== Mandelbrot i les fractals financeres
El matemàtic polonès Benoît Mandelbrot (1924#{sym.dash.en}2010) també va començar a observar "rugositats" a una multitud d'àrees de la realitat. Les seves investigacions consistien en estudiar el comportament de les fluctuacions de diferents variables, començant pels mercats financers i els preus de productes com el cotó i el blat, fins les inundacions del riu Nil. Durant els seus primers estudis, es va adonar que resultats exactes requerien considerar una "variància infinita", ben lluny de les aproximacions poc realistes dels models de l'època. @Gomory2010
Aquesta nova perspectiva es va veure reflectida en el seu article _How Long is the Coast of Britain? Statistical Self-Similarity and Fractional Dimension_ (en català, _Quina és la longitud de la costa de Gran Bretanya? Autosimilitud estadística i dimensió fraccionària_) @Mandelbrot1967, en què estudiava la rugositat ("el grau de complicació") de les corbes geogràfiques. Mandelbrot va proposar una dimensió $D$ que mesura aquesta complicació.
L'objectiu de Mandelbrot era mirar la realitat d'una forma diferent; deixar enrere les imprecisions i convencions del càlcul i tractar el món tal com és: rugós, imperfecte i no derivable. El polonès criticava la certa obsessió dels seus companys de professió per l'abstracció i el formalisme, que no permetia a les matemàtiques poder reflectir el món adequadament.#todo[Cita de Mandelbrot que ho demostri.]
Tots els objectes matemàtics exposats fins ara no tenien una categoria en comú; eren només "monstres", com deia Poincaré, que tenien propietats estranyes. /*Els lectors s'hauran adonat que, des del començament d'aquest capítol sobre la història de la geometria fractal, encara no hem fet servir el terme _fractal_---*/I és que no va ser fins el 1975 quan Mandelbrot va introduir els termes _fractal_ i _objecte fractal_, permetent als matemàtics finalment referir-se a aquestes formes geomètriques. Per aquest motiu, sovint es parla de Mandelbrot com al "pare dels fractals".
Tota la seva feina va quedar exposada el 1982 en el seu llibre _The fractal geometry of nature_ ---que és, de fet, una de les fonts principals d'aquest treball.
== La geometria fractal a la natura
La geometria fractal és un dels camps de les matemàtiques que més ha despertat la curiositat del públic per la seva bellesa, així com per la seva omnipresència a la natura. Es solen associar les fractals amb formes com flocs de neu (@floc_de_neu) i cols autosimilars (@romanesco), però es poden trobar característiques fractals a molts altres racons del món real.
#grid(columns: (1fr, 1fr), gutter: 2em)[
#figure(caption: [Floc de neu. _Domini públic._])[
#image("../images/floc de neu.png", height: 150pt)
] <floc_de_neu>
][
#figure(caption: [
Patrons fractals en un cristall congelat. _By Schnobby - Own work, CC BY-SA 3.0, \<https://commons.wikimedia.org/w/index.php?curid=19055302>_
])[
#image("../images/cristall congelat (Schnobby).png", height: 150pt)
] <cristall_congelat>
][
#figure(
caption: [
Un romanesco, un tipus de col que presenta autosimilitud. _Ioangogo - Own work, CC BY-SA 4.0, \<https://commons.wikimedia.org/w/index.php?curid=39553243>_
]
)[
#image("../images/romanesco (loangogo).png", width: 200pt, height: 150pt)
] <romanesco>
][
#figure(caption: [
Dendrita d'òxid de manganès formada en pedra calcària. Escala en mil·límetres. _<NAME> (Department of Geology, The College of Wooster)._
])[
#image("../images/dendrites (Wilson).png", width: 200pt, height: 150pt)
] <dendrita>
]
És incorrecte, tanmateix, pensar que tota la geometria fractal es basa en fenòmens naturals. Com Mandelbrot mateix explica, aquesta branca de les matemàtiques va ser una conseqüència dels "monstres" concebuts a finals del segle XIX, exposats a la secció anterior.
De totes formes, les fractals són una bona representació de la realitat. Si bé la geometria euclidiana, a la qual estem acostumats, acaba sent llisa quan ens hi apropem prou, això no representa amb precisió el món natural. La geometria fractal té un enfocament diferent i considera que les formes no són llises, sinó infinitament rugoses. Aquesta postura més realista i propera al món real contrasta amb la de moltes altres branques de les matemàtiques, que són més difícils de relacionar directament amb la natura. Això, a més, fa que la geometria fractal sigui un d'aquells conceptes matemàtics que, quan hom els coneix amb profunditat, els observa a tota la realitat.
== Característiques de les fractals
El terme _fractal_ (que en català és un mot femení), com s'ha mostrat, va ser creat per Mandelbrot, a partir de la paraula llatina _frāctus_. Mandelbrot va optar per no definir formalment el terme, sinó associar-hi unes propietats @falconer @Mandelbrot1987:
- *Irregularitat geomètrica.* Alguns objectes fractals no es poden descriure completament a partir de la geometria euclidiana convencional, si no és amb una definició recursiva.
- *Autosimilitud exacta o parcial.* Com a conseqüència de la propietat anterior, moltes fractals es contenen a si mateixes, és a dir, que si desenvolupem o representem la seva definició podem trobar còpies de si mateix. Això és evident en el cas de definicions clarament recursives, com el triangle de Sierpinski, però també es troba en fractals definides de formes diferents, com el conjunt de Mandelbrot, on aquesta autosimilitud és absolutament inesperada.
També hom pot obtenir autosimilitud estadística, que no es tracta d'una semblança geomètrica, sinó en les seves propietats. Un exemple clàssic és la silueta d'una costa: cada porció de l'objecte, sigui gran o petita, té una rugositat semblant a la resta encara que tingui forma diferent.
- *Estructura detallada minúscula.* Les formes geomètriques convencionals, si ens hi apropem suficient, acaben semblant llises. En canvi, els fractals tenen una rugositat que sovint és infinita.
Aquesta rugositat es mesura amb dimensions fractals (#pritable-doclink(<cap-dimensio-fractal>)), com la dimensió de Hausdorff i la dimensió autosimilar.
- *Definició simple.* Moltes fractals tenen definicions molt senzilles. Això, considerant les propietats anteriors, és sorprenent. /*Alguns fractals s'obtenen amb un algoritme recursiu, de manera que successives iteracions d'aquest porten a una aproximació més exacta de la fractal.*/
Aquesta característica resulta, a la vegada, útil i problemàtica. És útil perquè facilita molt la seva representació ---són increïblement senzilles de generar amb un llenguatge de programació---, però és problemàtica perquè aleshores és més complicat trobar l'explicació de les formes que es troben a fractals com la de Mandelbrot o Newton.
- *Dificultats en la mesura.* Sovint resulta inútil realitzar mesures comunes com la llargada o l'àrea en els objectes fractals, ja que aquestes són o bé impossibles de fer o resulten en valors nuls o infinits, en el cas de les corbes fractals. Seguint l'exemple anterior, la llargada costa augmenta sense límit quan augmentem la precisió.
El terme s'utilitza de forma àmplia per designar qualsevol forma geomètrica que presenti alguna de les propietats anteriors, especialment l'autosimilitud i l'estructura detallada minúscula.
A la @cap-exemple-gen-fractal teniu un exemple de fractal autosimilar i amb detall infinit, generat iterativament.
#figure(caption:[Un segment qualsevol és autosimilar, però no té estructura detallada infinita, així que no es consideraria un fractal.] ,
box(
height: 20pt,
align(horizon,
canvas({
import draw: *
line((-3, 0), (3, 0))
for i in range(-3, 4) {
line((i, 0.1), (i, -0.1))
}
})
)
)
)
#figure(caption: [Una corba fractal té llargada infinita; ampliant la imatge, observaríem que la llargada real no és un nombre finit.],
box(
height: 50pt,
align(horizon,
canvas({
import draw: *
let points = ()
let rng = integers(gen-rng(100), low: -8, high: 8, size: 130).at(1)
for (i, n) in rng.enumerate() {
if i == rng.len() - 1 {
continue
}
line((i / 10, n /10), ((i+1) / 10, rng.at(i+1) / 10))
}
})
)
)
)
/*Considerem una circumferència $c_0$ de radi $r$ a $RR^2$. Aleshores, el conjunt de punts de $c_0$ s'obté de l'expressió:
$
c_0 colon cases(
x = r cos(theta),
y = r sin(theta)
) => c_0 = {(r cos(theta), r sin(theta)) : theta in [0,2pi)}
$
on $r$ és una constant i $theta$ és el paràmetre de la circumferència.
Considerem la funció $ f(x) = x / 10 $
#figure(caption: [La circumferència $c_0$ amb radi $r$.])[
#canvas({
import draw: *
circle((0, 0), radius: 2.5)
line((0,0), (2.5, 0))
})
]
#figure(caption: [Aquest objecte està format per ],
box(
height: 150pt,
align(horizon,
canvas({
import draw: *
circle((-3, 0), radius: 2.5)
circle((-3, 0), radius: 2.4, stroke: gray)
circle((-3, 0), radius: 2.3, stroke: gray.lighten(50%))
circle((-3, 0), radius: 2.2, stroke: gray.lighten(80%))
for i in range(1, 100, step: 4) {
circle((3, 0), radius: 2.5 - i / 20)
}
circle((3, 0), radius: 0.1)
circle((3, 0), radius: 0.02)
})
)
)
)
*/ |
|
https://github.com/kdog3682/2024-typst | https://raw.githubusercontent.com/kdog3682/2024-typst/main/src/dialogue-functions.typ | typst | #import "base-utils.typ": *
#import "designer.typ"
// clip.typ
#let dialogue-item(content, speaker-width: 100pt, speaker: "Sam") = {
let columns = (speaker-width, auto)
let align = (left, left)
let speakerContent = bold(speaker)
// let rightContent = content + place(dy: 10pt, designer.line())
let rightContent = content
let tableContent = table(stroke: none, columns: columns, align: align, inset: 0pt, speakerContent, rightContent)
block(below: 30pt, tableContent)
}
|
|
https://github.com/Student-Smart-Printing-Service-HCMUT/ssps-docs | https://raw.githubusercontent.com/Student-Smart-Printing-Service-HCMUT/ssps-docs/main/contents/categories/task2/2.conclude.typ | typst | Apache License 2.0 | #{include "./2.1.typ"}
#{include "./2.2.typ"}
#{include "./2.3.typ"}
#{include "./2.4.typ"}
|
https://github.com/xdoardo/co-thesis | https://raw.githubusercontent.com/xdoardo/co-thesis/master/thesis/chapters/conclusions.typ | typst | #import "/includes.typ": *
#pagebreak(to: "odd")
#pagebreak(to: "odd")
= Conclusions<chapter-conclusions>
We defined an operational semantics for Imp targeting the `Delay` monad which,
together with the `Maybe` monad, provided an adequate type to model the effects
of divergence and failure: we did this with the aid of Agda, and we explored
the implementation of such a semantics using sized types.
Our objective, beyond the definition itself, was to show how a semantics defined
this way can be used when transforming a source program. The two transformations
we chose to implement, both suggested by Nipkow in @concrete-semantics, are
source-to-source transformations.
The first, _definite initialization analysis_, is, as the name suggests, a
static check and in fact leaves the source code untouched; it provides,
however, useful insights on the behaviour of the program when executed: if no
`dia` relation can be built for the program at hand, it means that the program
will surely crash and fail. On the other hand, if there exists a construction
for `dia` for the program at hand, we are assured that the program will not
fail -- of course, it can still diverge.
The second, _pure folding optimization_, is a transformation that aims to lift
statically known information in order to avoid run-time computations. We proved
that this transformation, which indeed changes the
syntactic structure of the program, does not change its semantics.
All throughout the work, _sizes_ proved to be useful in the definitions and to keep
track of termination and productivity: if compared with early versions of
@danielsson-operational-semantics, one important difference is that, for
example, we did not have to "trick" the termination checker, but every
definition was fairly streamlined. We can also compare our realization with the
work of Leroy in @leroy-coinductive-bigstep and @leroy-mechsem, which uses a
relational definition of the semantics of Imp, which can make proofs more
involved.
== Future works
We chose to model only one kind of failure: an extension using `Result := Ok v | Error e`
and a monad based on that type is fairly straightforward. The list of possible
optimizations is long and well described in the literature: an interesting work
can be the implementation of a general-purpose back-end and investigate various
optimiziations used in the industry, starting from the translation of a
low-level intermediate representation into static single-assignment form or
continuation-passing style, proving easy properties of the transformations.
|
|
https://github.com/Laikaiyong/Resume | https://raw.githubusercontent.com/Laikaiyong/Resume/main/resume.typ | typst | #import "vandyck-typst.typ": resume, term, skill, styled-link, findMe, badge-text, project-heading
#resume(
name: "<NAME> (Vandyck)",
[
#[= <NAME> (Vandyck)]
Full Stack Software Engineer
#findMe((
(name: "alias", link: "mailto:<EMAIL>", display: "<EMAIL>"),
(name: "github", link: "https://github.com/Laikaiyong", display: "@Laikaiyong"),
(name: "linkedin", link: "https://linkedin.com/in/lai-kai-yong", display: "<NAME>"),
))
== About Me
#set par(justify: true)
A full stack software developer.
Enthusiast in the data science field.
Passionate about community building, public speaking and exploring the unknown.
== Language
#skill("Mandarin · English · Malay", 5)
#skill("Cantonese", 2)
#skill("Japanese", 1)
#link("https://linkedin.com/in/lai-kai-yong/details/experience/")[== Experience]
=== Mobile App Developer \
_Farmz Asia_\
#term[Aug 2023 --- Present][Kuala Lumpur, MY]
- Team of four, digitalizing healthcare education business workflows
- Flutter mobile application development, focused on the beauty challenge module and sleep ring module
- UI/UX Figma Prototyping aligning to business requirements
- Database and Storage Administration (MySQL, S3)
=== Software Developer \
_Infront Consulting_\
#term[May 2022 --- Dec 2023][Selangor, MY]
- Handles five projects in mobile app dev, web dev, data & game dev
- Developing solutions in React Native, Django, Pandas, React, JavaScript
- Daily standup Scrum practice, Client Physical and Virtual meeting
- Projects are in Agriculture, Health Screening, Team Engagement, Logistics Industry
=== Atlassian Administrator | Team Lead \
_Asia Pacific University_\
#term[Mar 2021 --- Jan 2023][Kuala Lumpur, MY]
- Team lead for Atlassian tools (team of three)
- Administrating Jira, Confluence (Automation, Workflows, SLAs, QA Form, Virtual Help Centre)
- Documented both technical and theoretical guides for the university
- Digital Art Design, developed the Knowledge Base iframe Menu
#link("https://linkedin.com/in/lai-kai-yong/details/education/")[== Education]
=== Bachelor Degree \
_Asia Pacific University_\
#term[Nov 2022 --- Nov 2024][Kuala Lumpur, Malaysia]
B.Sc. Software Engineering. Scored 3.74 CGPA
=== Diploma \
_Asia Pacific University_\
#term[May 2020 --- July 2022][Kuala Lumpur Malaysia]
Diploma ICT Specialism in Data Informatics. Scored 3.93 CGPA
== Interests
- Enjoy Buffet, Food "Hunting"
- Gym Session
- Chillax Movie Session
#colbreak()
#link("https://linkedin.com/in/lai-kai-yong/details/projects/")[== Projects]
#project-heading([=== Job (GA Viewable)])
#link("https://play.google.com/store/apps/details?id=com.farmzsg.farmz&hl=en&gl=US")[=== Farmz Asia Mobile Application] \
Responsible for the Beauty Challenge module and assisted with the Sleep Ring module. Built in Flutter.
#link("https://play.google.com/store/apps/details?id=com.icgmy.smartpoultry&hl=en&gl=US")[=== Ladang Pintar Mobile Application] \
Developed an agriculture poultry application for managers, operators and vets. Built in React Native.
#link("https://apiit.atlassian.net/wiki/spaces/KB/overview")[=== APU Knowledge Base] \
Administrating and writing guides for Asia Pacific University's stakeholders. Built with Atlassian tools.
#project-heading([=== Hackathons])
#link("https://www.youtube.com/watch?v=fpBrj3lLvx4&feature=youtu.be")[=== HOnboarding] \
Customer Support Agent Platform powered by Gen AI. Built in Svelte + Streamlit on AWS.
#link("https://sharks-lyart.vercel.app/")[=== SharkScan] \
Identify presence of shark groups and other marine life to prevent bycatching. Built in Solid + Mapbox.
== Skills
#badge-text("Basics", "HTML · CSS · SQL · Git · Office 365 · Notion · Jira")
#badge-text("Language", "Python · JavaScript · TypeScript · Dart · R · Java · PHP")
#badge-text("Framework", "Flutter · Astro · Django · Ionic · Next · React · Vue")
#badge-text("Datastore", "MongoDB · MySQL · S3 · Firestore · Supabase")
#badge-text("Dev Tools", "AWS · Azure · Power Platform · VS Code · Postman")
#badge-text("Additional", "Pandas · Streamlit · Laravel · Adobe Photoshop")
#link("https://www.linkedin.com/in/lai-kai-yong/details/certifications/")[== Certification]
- #link("https://www.credly.com/badges/5d178728-fa2f-443e-b1dc-274ba9f3f096/linked_in_profile")[Microsoft Certified Solutions Architect Expert]
- #link("https://www.credly.com/badges/5e209ca1-9980-48dd-813a-2ec35492d8fc/linked_in_profile")[AWS Certified Solution Architect Associate]
- #link("https://www.credly.com/badges/9cc23575-0b95-433e-a9b8-56cee0d9c0f3")[IBM Applied Data Science with Python - Level 2]
- #link("hhttps://www.credly.com/badges/213f7cf8-272c-4d04-92ea-c34661a50929")[Adobe Creative Educator Level 1]
- #link("hhttps://www.credly.com/badges/ed773bcd-7fad-4bcb-89a1-73a78d367ab2")[
Certified TigerGraph Associate Exam]
- #link("https://www.credly.com/badges/7bf01705-e3f1-4c55-a1bb-839ab50e49c9")[F5 ASEAN NGINX API Specialist]
#link("https://www.linkedin.com/in/lai-kai-yong/details/honors/")[== Achievement]
- Finalist Viction Horizon - Startup Hackathon
- Top 10 Kitahack
- Top 10 DevHack
- Top 30 Can You Hack It 2023 Generative AI Edition
- APU BCC Sparkathon Jupiter Aggregator Track Winner
#link("https://linkedin.com/in/lai-kai-yong/details/volunteering/")[== Volunteer]
- Google Developer Group (GDG) Cloud KL Lead
- MongoDB User Group (MUG) KL Lead
- Tableau User Group (TUG) KL Lead
- AWS Cloud Club Captain \@ APU
- Microsoft Learn Student Ambassador \@ APU
- Notion Campus Leader \@ APU
- Streamlit Student Ambassador Leader \@ APU
== Referer
=== <NAME> \
_Google Regional Lead, DevRel SEA_\
#findMe((
(name: "alias", link: "mailto:<EMAIL>", display: "<EMAIL>"),
))
],
) |
|
https://github.com/dikkadev/typst-statastic | https://raw.githubusercontent.com/dikkadev/typst-statastic/main/README.md | markdown | The Unlicense | # Statastic
A library to calculate statistics for numerical data in typst.
## Description
`Statastic` is a Typst library designed to provide various statistical functions for numerical data. It offers functionalities like extracting specific columns from datasets, converting array elements to different data types, and computing various statistical measures such as average, median, mode, variance, standard deviation, and percentiles.
## Features
- **Extract Column**: Extracts a specific column from a given dataset.
- **Type Conversion**: Convert array elements to floating point numbers or integers.
- **Statistical Measures**: Calculate average, median, mode, variance, standard deviation, and specific percentiles for an array or a specific column in a dataset.
## Usage
To use the package, you can import it with the command `#import "@preview/statastical:1.0.0": *` (as soon as the pull request is accepted). The documentation is found in `docs.pdf` in the development [repo](https://github.com/dikkadev/typst-statastic).
## License
This project is licensed under the Unlicense.
|
https://github.com/lcpu-club/oh-my-math | https://raw.githubusercontent.com/lcpu-club/oh-my-math/main/README.md | markdown | MIT License | # oh-my-math
A typst inspired math render library built using idris2
Resources
https://github.com/stefan-hoeck/idris2-parser
https://www.w3.org/TR/mathml-core/
https://typst.app/docs/reference/math/
https://github.com/stefan-hoeck/idris2-dom-mvc
https://github.com/stefan-hoeck/idris2-dom
https://idris2.readthedocs.io/en/latest/cookbook/parsing.html
|
https://github.com/Fr4nk1inCs/typreset | https://raw.githubusercontent.com/Fr4nk1inCs/typreset/master/src/utils/title.typ | typst | MIT License | #let make-title(title: "", other: []) = {
align(center)[
#strong(text(size: 1.75em, title))
#other
]
}
|
https://github.com/typst/packages | https://raw.githubusercontent.com/typst/packages/main/packages/preview/unichar/0.1.0/ucd/block-1FB00.typ | typst | Apache License 2.0 | #let data = (
("BLOCK SEXTANT-1", "So", 0),
("BLOCK SEXTANT-2", "So", 0),
("BLOCK SEXTANT-12", "So", 0),
("BLOCK SEXTANT-3", "So", 0),
("BLOCK SEXTANT-13", "So", 0),
("BLOCK SEXTANT-23", "So", 0),
("BLOCK SEXTANT-123", "So", 0),
("BLOCK SEXTANT-4", "So", 0),
("BLOCK SEXTANT-14", "So", 0),
("BLOCK SEXTANT-24", "So", 0),
("BLOCK SEXTANT-124", "So", 0),
("BLOCK SEXTANT-34", "So", 0),
("BLOCK SEXTANT-134", "So", 0),
("BLOCK SEXTANT-234", "So", 0),
("BLOCK SEXTANT-1234", "So", 0),
("BLOCK SEXTANT-5", "So", 0),
("BLOCK SEXTANT-15", "So", 0),
("BLOCK SEXTANT-25", "So", 0),
("BLOCK SEXTANT-125", "So", 0),
("BLOCK SEXTANT-35", "So", 0),
("BLOCK SEXTANT-235", "So", 0),
("BLOCK SEXTANT-1235", "So", 0),
("BLOCK SEXTANT-45", "So", 0),
("BLOCK SEXTANT-145", "So", 0),
("BLOCK SEXTANT-245", "So", 0),
("BLOCK SEXTANT-1245", "So", 0),
("BLOCK SEXTANT-345", "So", 0),
("BLOCK SEXTANT-1345", "So", 0),
("BLOCK SEXTANT-2345", "So", 0),
("BLOCK SEXTANT-12345", "So", 0),
("BLOCK SEXTANT-6", "So", 0),
("BLOCK SEXTANT-16", "So", 0),
("BLOCK SEXTANT-26", "So", 0),
("BLOCK SEXTANT-126", "So", 0),
("BLOCK SEXTANT-36", "So", 0),
("BLOCK SEXTANT-136", "So", 0),
("BLOCK SEXTANT-236", "So", 0),
("BLOCK SEXTANT-1236", "So", 0),
("BLOCK SEXTANT-46", "So", 0),
("BLOCK SEXTANT-146", "So", 0),
("BLOCK SEXTANT-1246", "So", 0),
("BLOCK SEXTANT-346", "So", 0),
("BLOCK SEXTANT-1346", "So", 0),
("BLOCK SEXTANT-2346", "So", 0),
("BLOCK SEXTANT-12346", "So", 0),
("BLOCK SEXTANT-56", "So", 0),
("BLOCK SEXTANT-156", "So", 0),
("BLOCK SEXTANT-256", "So", 0),
("BLOCK SEXTANT-1256", "So", 0),
("BLOCK SEXTANT-356", "So", 0),
("BLOCK SEXTANT-1356", "So", 0),
("BLOCK SEXTANT-2356", "So", 0),
("BLOCK SEXTANT-12356", "So", 0),
("BLOCK SEXTANT-456", "So", 0),
("BLOCK SEXTANT-1456", "So", 0),
("BLOCK SEXTANT-2456", "So", 0),
("BLOCK SEXTANT-12456", "So", 0),
("BLOCK SEXTANT-3456", "So", 0),
("BLOCK SEXTANT-13456", "So", 0),
("BLOCK SEXTANT-23456", "So", 0),
("LOWER LEFT BLOCK DIAGONAL LOWER MIDDLE LEFT TO LOWER CENTRE", "So", 0),
("LOWER LEFT BLOCK DIAGONAL LOWER MIDDLE LEFT TO LOWER RIGHT", "So", 0),
("LOWER LEFT BLOCK DIAGONAL UPPER MIDDLE LEFT TO LOWER CENTRE", "So", 0),
("LOWER LEFT BLOCK DIAGONAL UPPER MIDDLE LEFT TO LOWER RIGHT", "So", 0),
("LOWER LEFT BLOCK DIAGONAL UPPER LEFT TO LOWER CENTRE", "So", 0),
("LOWER RIGHT BLOCK DIAGONAL UPPER MIDDLE LEFT TO UPPER CENTRE", "So", 0),
("LOWER RIGHT BLOCK DIAGONAL UPPER MIDDLE LEFT TO UPPER RIGHT", "So", 0),
("LOWER RIGHT BLOCK DIAGONAL LOWER MIDDLE LEFT TO UPPER CENTRE", "So", 0),
("LOWER RIGHT BLOCK DIAGONAL LOWER MIDDLE LEFT TO UPPER RIGHT", "So", 0),
("LOWER RIGHT BLOCK DIAGONAL LOWER LEFT TO UPPER CENTRE", "So", 0),
("LOWER RIGHT BLOCK DIAGONAL LOWER MIDDLE LEFT TO UPPER MIDDLE RIGHT", "So", 0),
("LOWER RIGHT BLOCK DIAGONAL LOWER CENTRE TO LOWER MIDDLE RIGHT", "So", 0),
("LOWER RIGHT BLOCK DIAGONAL LOWER LEFT TO LOWER MIDDLE RIGHT", "So", 0),
("LOWER RIGHT BLOCK DIAGONAL LOWER CENTRE TO UPPER MIDDLE RIGHT", "So", 0),
("LOWER RIGHT BLOCK DIAGONAL LOWER LEFT TO UPPER MIDDLE RIGHT", "So", 0),
("LOWER RIGHT BLOCK DIAGONAL LOWER CENTRE TO UPPER RIGHT", "So", 0),
("LOWER LEFT BLOCK DIAGONAL UPPER CENTRE TO UPPER MIDDLE RIGHT", "So", 0),
("LOWER LEFT BLOCK DIAGONAL UPPER LEFT TO UPPER MIDDLE RIGHT", "So", 0),
("LOWER LEFT BLOCK DIAGONAL UPPER CENTRE TO LOWER MIDDLE RIGHT", "So", 0),
("LOWER LEFT BLOCK DIAGONAL UPPER LEFT TO LOWER MIDDLE RIGHT", "So", 0),
("LOWER LEFT BLOCK DIAGONAL UPPER CENTRE TO LOWER RIGHT", "So", 0),
("LOWER LEFT BLOCK DIAGONAL UPPER MIDDLE LEFT TO LOWER MIDDLE RIGHT", "So", 0),
("UPPER RIGHT BLOCK DIAGONAL LOWER MIDDLE LEFT TO LOWER CENTRE", "So", 0),
("UPPER RIGHT BLOCK DIAGONAL LOWER MIDDLE LEFT TO LOWER RIGHT", "So", 0),
("UPPER RIGHT BLOCK DIAGONAL UPPER MIDDLE LEFT TO LOWER CENTRE", "So", 0),
("UPPER RIGHT BLOCK DIAGONAL UPPER MIDDLE LEFT TO LOWER RIGHT", "So", 0),
("UPPER RIGHT BLOCK DIAGONAL UPPER LEFT TO LOWER CENTRE", "So", 0),
("UPPER LEFT BLOCK DIAGONAL UPPER MIDDLE LEFT TO UPPER CENTRE", "So", 0),
("UPPER LEFT BLOCK DIAGONAL UPPER MIDDLE LEFT TO UPPER RIGHT", "So", 0),
("UPPER LEFT BLOCK DIAGONAL LOWER MIDDLE LEFT TO UPPER CENTRE", "So", 0),
("UPPER LEFT BLOCK DIAGONAL LOWER MIDDLE LEFT TO UPPER RIGHT", "So", 0),
("UPPER LEFT BLOCK DIAGONAL LOWER LEFT TO UPPER CENTRE", "So", 0),
("UPPER LEFT BLOCK DIAGONAL LOWER MIDDLE LEFT TO UPPER MIDDLE RIGHT", "So", 0),
("UPPER LEFT BLOCK DIAGONAL LOWER CENTRE TO LOWER MIDDLE RIGHT", "So", 0),
("UPPER LEFT BLOCK DIAGONAL LOWER LEFT TO LOWER MIDDLE RIGHT", "So", 0),
("UPPER LEFT BLOCK DIAGONAL LOWER CENTRE TO UPPER MIDDLE RIGHT", "So", 0),
("UPPER LEFT BLOCK DIAGONAL LOWER LEFT TO UPPER MIDDLE RIGHT", "So", 0),
("UPPER LEFT BLOCK DIAGONAL LOWER CENTRE TO UPPER RIGHT", "So", 0),
("UPPER RIGHT BLOCK DIAGONAL UPPER CENTRE TO UPPER MIDDLE RIGHT", "So", 0),
("UPPER RIGHT BLOCK DIAGONAL UPPER LEFT TO UPPER MIDDLE RIGHT", "So", 0),
("UPPER RIGHT BLOCK DIAGONAL UPPER CENTRE TO LOWER MIDDLE RIGHT", "So", 0),
("UPPER RIGHT BLOCK DIAGONAL UPPER LEFT TO LOWER MIDDLE RIGHT", "So", 0),
("UPPER RIGHT BLOCK DIAGONAL UPPER CENTRE TO LOWER RIGHT", "So", 0),
("UPPER RIGHT BLOCK DIAGONAL UPPER MIDDLE LEFT TO LOWER MIDDLE RIGHT", "So", 0),
("UPPER AND RIGHT AND LOWER TRIANGULAR THREE QUARTERS BLOCK", "So", 0),
("LEFT AND LOWER AND RIGHT TRIANGULAR THREE QUARTERS BLOCK", "So", 0),
("UPPER AND LEFT AND LOWER TRIANGULAR THREE QUARTERS BLOCK", "So", 0),
("LEFT AND UPPER AND RIGHT TRIANGULAR THREE QUARTERS BLOCK", "So", 0),
("LEFT TRIANGULAR ONE QUARTER BLOCK", "So", 0),
("UPPER TRIANGULAR ONE QUARTER BLOCK", "So", 0),
("RIGHT TRIANGULAR ONE QUARTER BLOCK", "So", 0),
("LOWER TRIANGULAR ONE QUARTER BLOCK", "So", 0),
("VERTICAL ONE EIGHTH BLOCK-2", "So", 0),
("VERTICAL ONE EIGHTH BLOCK-3", "So", 0),
("VERTICAL ONE EIGHTH BLOCK-4", "So", 0),
("VERTICAL ONE EIGHTH BLOCK-5", "So", 0),
("VERTICAL ONE EIGHTH BLOCK-6", "So", 0),
("VERTICAL ONE EIGHTH BLOCK-7", "So", 0),
("HORIZONTAL ONE EIGHTH BLOCK-2", "So", 0),
("HORIZONTAL ONE EIGHTH BLOCK-3", "So", 0),
("HORIZONTAL ONE EIGHTH BLOCK-4", "So", 0),
("HORIZONTAL ONE EIGHTH BLOCK-5", "So", 0),
("HORIZONTAL ONE EIGHTH BLOCK-6", "So", 0),
("HORIZONTAL ONE EIGHTH BLOCK-7", "So", 0),
("LEFT AND LOWER ONE EIGHTH BLOCK", "So", 0),
("LEFT AND UPPER ONE EIGHTH BLOCK", "So", 0),
("RIGHT AND UPPER ONE EIGHTH BLOCK", "So", 0),
("RIGHT AND LOWER ONE EIGHTH BLOCK", "So", 0),
("UPPER AND LOWER ONE EIGHTH BLOCK", "So", 0),
("HORIZONTAL ONE EIGHTH BLOCK-1358", "So", 0),
("UPPER ONE QUARTER BLOCK", "So", 0),
("UPPER THREE EIGHTHS BLOCK", "So", 0),
("UPPER FIVE EIGHTHS BLOCK", "So", 0),
("UPPER THREE QUARTERS BLOCK", "So", 0),
("UPPER SEVEN EIGHTHS BLOCK", "So", 0),
("RIGHT ONE QUARTER BLOCK", "So", 0),
("RIGHT THREE EIGHTHS BLOCK", "So", 0),
("RIGHT FIVE EIGHTHS BLOCK", "So", 0),
("RIGHT THREE QUARTERS BLOCK", "So", 0),
("RIGHT SEVEN EIGHTHS BLOCK", "So", 0),
("LEFT HALF MEDIUM SHADE", "So", 0),
("RIGHT HALF MEDIUM SHADE", "So", 0),
("UPPER HALF MEDIUM SHADE", "So", 0),
("LOWER HALF MEDIUM SHADE", "So", 0),
("INVERSE MEDIUM SHADE", "So", 0),
("UPPER HALF BLOCK AND LOWER HALF INVERSE MEDIUM SHADE", "So", 0),
("UPPER HALF INVERSE MEDIUM SHADE AND LOWER HALF BLOCK", "So", 0),
(),
("LEFT HALF INVERSE MEDIUM SHADE AND RIGHT HALF BLOCK", "So", 0),
("CHECKER BOARD FILL", "So", 0),
("INVERSE CHECKER BOARD FILL", "So", 0),
("HEAVY HORIZONTAL FILL", "So", 0),
("UPPER LEFT TO LOWER RIGHT FILL", "So", 0),
("UPPER RIGHT TO LOWER LEFT FILL", "So", 0),
("UPPER AND LOWER TRIANGULAR HALF BLOCK", "So", 0),
("LEFT AND RIGHT TRIANGULAR HALF BLOCK", "So", 0),
("UPPER LEFT TRIANGULAR MEDIUM SHADE", "So", 0),
("UPPER RIGHT TRIANGULAR MEDIUM SHADE", "So", 0),
("LOWER RIGHT TRIANGULAR MEDIUM SHADE", "So", 0),
("LOWER LEFT TRIANGULAR MEDIUM SHADE", "So", 0),
("BOX DRAWINGS LIGHT DIAGONAL UPPER CENTRE TO MIDDLE LEFT", "So", 0),
("BOX DRAWINGS LIGHT DIAGONAL UPPER CENTRE TO MIDDLE RIGHT", "So", 0),
("BOX DRAWINGS LIGHT DIAGONAL MIDDLE LEFT TO LOWER CENTRE", "So", 0),
("BOX DRAWINGS LIGHT DIAGONAL MIDDLE RIGHT TO LOWER CENTRE", "So", 0),
("BOX DRAWINGS LIGHT DIAGONAL UPPER CENTRE TO MIDDLE LEFT TO LOWER CENTRE", "So", 0),
("BOX DRAWINGS LIGHT DIAGONAL UPPER CENTRE TO MIDDLE RIGHT TO LOWER CENTRE", "So", 0),
("BOX DRAWINGS LIGHT DIAGONAL MIDDLE LEFT TO LOWER CENTRE TO MIDDLE RIGHT", "So", 0),
("BOX DRAWINGS LIGHT DIAGONAL MIDDLE LEFT TO UPPER CENTRE TO MIDDLE RIGHT", "So", 0),
("BOX DRAWINGS LIGHT DIAGONAL UPPER CENTRE TO MIDDLE LEFT AND MIDDLE RIGHT TO LOWER CENTRE", "So", 0),
("BOX DRAWINGS LIGHT DIAGONAL UPPER CENTRE TO MIDDLE RIGHT AND MIDDLE LEFT TO LOWER CENTRE", "So", 0),
("BOX DRAWINGS LIGHT DIAGONAL UPPER CENTRE TO MIDDLE RIGHT TO LOWER CENTRE TO MIDDLE LEFT", "So", 0),
("BOX DRAWINGS LIGHT DIAGONAL UPPER CENTRE TO MIDDLE LEFT TO LOWER CENTRE TO MIDDLE RIGHT", "So", 0),
("BOX DRAWINGS LIGHT DIAGONAL MIDDLE LEFT TO UPPER CENTRE TO MIDDLE RIGHT TO LOWER CENTRE", "So", 0),
("BOX DRAWINGS LIGHT DIAGONAL MIDDLE RIGHT TO UPPER CENTRE TO MIDDLE LEFT TO LOWER CENTRE", "So", 0),
("BOX DRAWINGS LIGHT DIAGONAL DIAMOND", "So", 0),
("BOX DRAWINGS LIGHT HORIZONTAL WITH VERTICAL STROKE", "So", 0),
("ARROWHEAD-SHAPED POINTER", "So", 0),
("INVERSE CHECK MARK", "So", 0),
("LEFT HALF RUNNING MAN", "So", 0),
("RIGHT HALF RUNNING MAN", "So", 0),
("INVERSE DOWNWARDS ARROW WITH TIP LEFTWARDS", "So", 0),
("LEFTWARDS ARROW AND UPPER AND LOWER ONE EIGHTH BLOCK", "So", 0),
("RIGHTWARDS ARROW AND UPPER AND LOWER ONE EIGHTH BLOCK", "So", 0),
("DOWNWARDS ARROW AND RIGHT ONE EIGHTH BLOCK", "So", 0),
("UPWARDS ARROW AND RIGHT ONE EIGHTH BLOCK", "So", 0),
("LEFT HALF FOLDER", "So", 0),
("RIGHT HALF FOLDER", "So", 0),
("VOIDED GREEK CROSS", "So", 0),
("RIGHT OPEN SQUARED DOT", "So", 0),
("NEGATIVE DIAGONAL CROSS", "So", 0),
("NEGATIVE DIAGONAL MIDDLE RIGHT TO LOWER CENTRE", "So", 0),
("NEGATIVE DIAGONAL DIAMOND", "So", 0),
("WHITE HEAVY SALTIRE WITH ROUNDED CORNERS", "So", 0),
("LEFT THIRD WHITE RIGHT POINTING INDEX", "So", 0),
("MIDDLE THIRD WHITE RIGHT POINTING INDEX", "So", 0),
("RIGHT THIRD WHITE RIGHT POINTING INDEX", "So", 0),
("NEGATIVE SQUARED QUESTION MARK", "So", 0),
("STICK FIGURE", "So", 0),
("STICK FIGURE WITH ARMS RAISED", "So", 0),
("STICK FIGURE LEANING LEFT", "So", 0),
("STICK FIGURE LEANING RIGHT", "So", 0),
("STICK FIGURE WITH DRESS", "So", 0),
("WHITE UP-POINTING CHEVRON", "So", 0),
("WHITE CROSS MARK", "So", 0),
("RAISED SMALL LEFT SQUARE BRACKET", "So", 0),
("BLACK SMALL UP-POINTING CHEVRON", "So", 0),
("LEFT TWO THIRDS BLOCK", "So", 0),
("LEFT ONE THIRD BLOCK", "So", 0),
("BOX DRAWINGS LIGHT DIAGONAL MIDDLE RIGHT TO LOWER LEFT", "So", 0),
("BOX DRAWINGS LIGHT DIAGONAL UPPER RIGHT TO MIDDLE LEFT", "So", 0),
("BOX DRAWINGS LIGHT DIAGONAL UPPER LEFT TO MIDDLE RIGHT", "So", 0),
("BOX DRAWINGS LIGHT DIAGONAL MIDDLE LEFT TO LOWER RIGHT", "So", 0),
("BOX DRAWINGS LIGHT DIAGONAL UPPER LEFT TO LOWER CENTRE", "So", 0),
("BOX DRAWINGS LIGHT DIAGONAL UPPER CENTRE TO LOWER RIGHT", "So", 0),
("BOX DRAWINGS LIGHT DIAGONAL UPPER RIGHT TO LOWER CENTRE", "So", 0),
("BOX DRAWINGS LIGHT DIAGONAL UPPER CENTRE TO LOWER LEFT", "So", 0),
("BOX DRAWINGS LIGHT DIAGONAL UPPER LEFT TO MIDDLE CENTRE TO UPPER RIGHT", "So", 0),
("BOX DRAWINGS LIGHT DIAGONAL UPPER RIGHT TO MIDDLE CENTRE TO LOWER RIGHT", "So", 0),
("BOX DRAWINGS LIGHT DIAGONAL LOWER LEFT TO MIDDLE CENTRE TO LOWER RIGHT", "So", 0),
("BOX DRAWINGS LIGHT DIAGONAL UPPER LEFT TO MIDDLE CENTRE TO LOWER LEFT", "So", 0),
("BOX DRAWINGS LIGHT DIAGONAL UPPER LEFT TO LOWER CENTRE TO UPPER RIGHT", "So", 0),
("BOX DRAWINGS LIGHT DIAGONAL UPPER RIGHT TO MIDDLE LEFT TO LOWER RIGHT", "So", 0),
("BOX DRAWINGS LIGHT DIAGONAL LOWER LEFT TO UPPER CENTRE TO LOWER RIGHT", "So", 0),
("BOX DRAWINGS LIGHT DIAGONAL UPPER LEFT TO MIDDLE RIGHT TO LOWER LEFT", "So", 0),
("TOP JUSTIFIED LOWER HALF WHITE CIRCLE", "So", 0),
("RIGHT JUSTIFIED LEFT HALF WHITE CIRCLE", "So", 0),
("BOTTOM JUSTIFIED UPPER HALF WHITE CIRCLE", "So", 0),
("LEFT JUSTIFIED RIGHT HALF WHITE CIRCLE", "So", 0),
("UPPER CENTRE ONE QUARTER BLOCK", "So", 0),
("LOWER CENTRE ONE QUARTER BLOCK", "So", 0),
("MIDDLE LEFT ONE QUARTER BLOCK", "So", 0),
("MIDDLE RIGHT ONE QUARTER BLOCK", "So", 0),
("TOP JUSTIFIED LOWER HALF BLACK CIRCLE", "So", 0),
("RIGHT JUSTIFIED LEFT HALF BLACK CIRCLE", "So", 0),
("BOTTOM JUSTIFIED UPPER HALF BLACK CIRCLE", "So", 0),
("LEFT JUSTIFIED RIGHT HALF BLACK CIRCLE", "So", 0),
("TOP RIGHT JUSTIFIED LOWER LEFT QUARTER BLACK CIRCLE", "So", 0),
("BOTTOM LEFT JUSTIFIED UPPER RIGHT QUARTER BLACK CIRCLE", "So", 0),
("BOTTOM RIGHT JUSTIFIED UPPER LEFT QUARTER BLACK CIRCLE", "So", 0),
("TOP LEFT JUSTIFIED LOWER RIGHT QUARTER BLACK CIRCLE", "So", 0),
("SEGMENTED DIGIT ZERO", "Nd", 0),
("SEGMENTED DIGIT ONE", "Nd", 0),
("SEGMENTED DIGIT TWO", "Nd", 0),
("SEGMENTED DIGIT THREE", "Nd", 0),
("SEGMENTED DIGIT FOUR", "Nd", 0),
("SEGMENTED DIGIT FIVE", "Nd", 0),
("SEGMENTED DIGIT SIX", "Nd", 0),
("SEGMENTED DIGIT SEVEN", "Nd", 0),
("SEGMENTED DIGIT EIGHT", "Nd", 0),
("SEGMENTED DIGIT NINE", "Nd", 0),
)
|
https://github.com/typst/packages | https://raw.githubusercontent.com/typst/packages/main/packages/preview/unichar/0.1.0/ucd/block-0C80.typ | typst | Apache License 2.0 | #let data = (
("KANNADA SIGN SPACING CANDRABINDU", "Lo", 0),
("KANNADA SIGN CANDRABINDU", "Mn", 0),
("KANNADA SIGN ANUSVARA", "Mc", 0),
("KANNADA SIGN VISARGA", "Mc", 0),
("KANNADA SIGN SIDDHAM", "Po", 0),
("KANNADA LETTER A", "Lo", 0),
("KANNADA LETTER AA", "Lo", 0),
("KANNADA LETTER I", "Lo", 0),
("KANNADA LETTER II", "Lo", 0),
("KANNADA LETTER U", "Lo", 0),
("KANNADA LETTER UU", "Lo", 0),
("KANNADA LETTER VOCALIC R", "Lo", 0),
("KANNADA LETTER VOCALIC L", "Lo", 0),
(),
("KANNADA LETTER E", "Lo", 0),
("KANNADA LETTER EE", "Lo", 0),
("KANNADA LETTER AI", "Lo", 0),
(),
("KANNADA LETTER O", "Lo", 0),
("KANNADA LETTER OO", "Lo", 0),
("KANNADA LETTER AU", "Lo", 0),
("KANNADA LETTER KA", "Lo", 0),
("KANNADA LETTER KHA", "Lo", 0),
("KANNADA LETTER GA", "Lo", 0),
("KANNADA LETTER GHA", "Lo", 0),
("KANNADA LETTER NGA", "Lo", 0),
("KANNADA LETTER CA", "Lo", 0),
("KANNADA LETTER CHA", "Lo", 0),
("KANNADA LETTER JA", "Lo", 0),
("KANNADA LETTER JHA", "Lo", 0),
("KANNADA LETTER NYA", "Lo", 0),
("KANNADA LETTER TTA", "Lo", 0),
("KANNADA LETTER TTHA", "Lo", 0),
("KANNADA LETTER DDA", "Lo", 0),
("KANNADA LETTER DDHA", "Lo", 0),
("KANNADA LETTER NNA", "Lo", 0),
("KANNADA LETTER TA", "Lo", 0),
("KANNADA LETTER THA", "Lo", 0),
("KANNADA LETTER DA", "Lo", 0),
("KANNADA LETTER DHA", "Lo", 0),
("KANNADA LETTER NA", "Lo", 0),
(),
("KANNADA LETTER PA", "Lo", 0),
("KANNADA LETTER PHA", "Lo", 0),
("KANNADA LETTER BA", "Lo", 0),
("KANNADA LETTER BHA", "Lo", 0),
("KANNADA LETTER MA", "Lo", 0),
("KANNADA LETTER YA", "Lo", 0),
("KANNADA LETTER RA", "Lo", 0),
("KANNADA LETTER RRA", "Lo", 0),
("KANNADA LETTER LA", "Lo", 0),
("KANNADA LETTER LLA", "Lo", 0),
(),
("KANNADA LETTER VA", "Lo", 0),
("KANNADA LETTER SHA", "Lo", 0),
("KANNADA LETTER SSA", "Lo", 0),
("KANNADA LETTER SA", "Lo", 0),
("KANNADA LETTER HA", "Lo", 0),
(),
(),
("KANNADA SIGN NUKTA", "Mn", 7),
("KANNADA SIGN AVAGRAHA", "Lo", 0),
("KANNADA VOWEL SIGN AA", "Mc", 0),
("KANNADA VOWEL SIGN I", "Mn", 0),
("KANNADA VOWEL SIGN II", "Mc", 0),
("KANNADA VOWEL SIGN U", "Mc", 0),
("KANNADA VOWEL SIGN UU", "Mc", 0),
("KANNADA VOWEL SIGN VOCALIC R", "Mc", 0),
("KANNADA VOWEL SIGN VOCALIC RR", "Mc", 0),
(),
("KANNADA VOWEL SIGN E", "Mn", 0),
("KANNADA VOWEL SIGN EE", "Mc", 0),
("KANNADA VOWEL SIGN AI", "Mc", 0),
(),
("KANNADA VOWEL SIGN O", "Mc", 0),
("KANNADA VOWEL SIGN OO", "Mc", 0),
("KANNADA VOWEL SIGN AU", "Mn", 0),
("KANNADA SIGN VIRAMA", "Mn", 9),
(),
(),
(),
(),
(),
(),
(),
("KANNADA LENGTH MARK", "Mc", 0),
("KANNADA AI LENGTH MARK", "Mc", 0),
(),
(),
(),
(),
(),
(),
("KANNADA LETTER NAKAARA POLLU", "Lo", 0),
("KANNADA LETTER FA", "Lo", 0),
(),
("KANNADA LETTER VOCALIC RR", "Lo", 0),
("KANNADA LETTER VOCALIC LL", "Lo", 0),
("KANNADA VOWEL SIGN VOCALIC L", "Mn", 0),
("KANNADA VOWEL SIGN VOCALIC LL", "Mn", 0),
(),
(),
("KANNADA DIGIT ZERO", "Nd", 0),
("KANNADA DIGIT ONE", "Nd", 0),
("KANNADA DIGIT TWO", "Nd", 0),
("KANNADA DIGIT THREE", "Nd", 0),
("KANNADA DIGIT FOUR", "Nd", 0),
("KANNADA DIGIT FIVE", "Nd", 0),
("KANNADA DIGIT SIX", "Nd", 0),
("KANNADA DIGIT SEVEN", "Nd", 0),
("KANNADA DIGIT EIGHT", "Nd", 0),
("KANNADA DIGIT NINE", "Nd", 0),
(),
("KANNADA SIGN JIHVAMULIYA", "Lo", 0),
("KANNADA SIGN UPADHMANIYA", "Lo", 0),
("KANNADA SIGN COMBINING ANUSVARA ABOVE RIGHT", "Mc", 0),
)
|
https://github.com/dismint/docmint | https://raw.githubusercontent.com/dismint/docmint/main/networks/pset5.typ | typst | #import "template.typ": *
#show: template.with(
title: "14.15 PSET 5",
subtitle: "<NAME>",
pset: true
)
= Problem 1
== (1)
#note(
title: "Notation"
)[
Note that for this question, $p(d)$ will be taken to mean the converged value of $p(d)$, or $p^*(d)$
]
WLOG, let us compute this value for some arbitrary node. We will start out by computing the pseudo-degree of each vertex.
Every node starts with $2k$ connections, and it can have an additional $n - 2k - 1$ edges added on top with probability $p / n$. Thus we arrive at the following equation, making a simplification of $x = d - 2k$, which gives us the additional edges that are added.
#set math.equation(numbering: "(1)")
$ PP(x) = binom(n - 2k - 1, x) (p / n)^x (1 - p / n)^(n - 2k - 1 - x) $ <initial>
#set math.equation(numbering: none)
From the hint in last week's PSET, the last term converges to $e^(-p)$. Also observe that we can manipulate the first two terms algebraically to simplify one of the terms:
$ binom(n - 2k - 1, x) (p / n)^x = ((n - 2k - 1)(n - 2k - 2) dots (n - 2k - x)) / n^x dot (p^x / x!) $
Because of the leading terms, the first term in the simplified equation simplifies to $1$. Thus @initial simplifies to:
$ (p^x e^(-p)) / x! $
This gives us our very familiar Poisson form, meaning that the degree of each vertex eventually follows $"Poisson"(p, x)$
Now we want to show that the proportion of vertices with any given degree $d$ is equal to the distribution given above, $"Poisson"(p, d - 2k)$. If we denote $C_d$ as the count of the vertices with degree $d$, we get the following after applying Chebyshev's:
$ (1 / epsilon^2) (EE[C^2_d] / n^2 - p^2(d)) $
Thus, as long as we can show that the numerator in the first term is equal to $n^2 dot p^2(d) + o(n^2)$, then we have proven that there is no difference between $C_d / n$ and $p(d)$
To do this, let us define $I_i$ to be the indicator variable that node $i$ has degree $d$. We can now make the observation that:
$ EE[C^2_d] = sum_(i, j = 1)^n EE[I_i I_j] $
In order for this summation to equal our desired value, it must be the case that the inside $EE[I_i I_j]$ is equal to $p^2(d) + o(1)$. The complication that we must address in our evaluation of this inside value is whether the edges between $i$ and $j$ exists. Otherwise, the two events are completely independent following the Poisson distribution mentioned above. Let us denote $E_(i j)$ as the indicator variable there exists an edge between $i$ and $j$. Now we can finally make the following simplifications:
$
EE[I_i I_j] &= EE[I_i I_j | E_(i j)] dot PP[E_(i j)] + EE[I_i I_j | not E_(i j)] dot PP[not E_(i j)]\
&= p / n dot (p(d) + 1)^2 + p^2(d) dot (1 - p / n)\
&= p^2(d) + (p (2p(d) + 1)) / n\
&= p^2(d) + o(1)
$
Thus we conclude that the distribution of degrees equals the Poisson variable we found earlier:
$ boxed("Poisson"(p, d - 2k) = (p^x e^(-p)) / x! = p(d)) $
== (2)
We wish to find the value of the clustering coefficient when the probability of replacements is $p = 0$
We will first start by counting the number of triplets in the graph. For any given node, the number of possible triplets is equal to $(2k) (2k - 1)$. Thus, there are $n dot (2k) (2k - 1)$ total triplets.
Note that these triplets are *not* triangles; they are missing the last edge that connects the two arbitrarily chosen neighbors of the given node above. Finding the total number of triangles is a slightly more difficult question.
Similar to above, let us fix some arbitrary node and count how many triangles pass through it. For any distance between $1$ and $k$, there exist exactly two nodes at this distance (one on either side of our fixed node). Take one of these neighbors as an example, and say that node $x$ is at distance $d$ from our fixed node $n$. All $k - 1$ other nodes on the same side as $x$ become candidates, and on the other side the $k - d$ valid nodes fill the other half of possible connections. Thus, counting both sides, we have $2(2k - 1 - d)$ potential triangles. We now sum this over all possible values of $d$:
$
sum_(d = 1)^k 2 (2k - 1 - d) &= 2k(2k - 1) - k(k + 1)\
&= 4k^2 - 2k - k(k + 1)\
&= k (3k - 3)
$
Thus the total number of triangles is $n dot k (3k - 3)$, and our given answer can be confirmed:
$ (n dot k (3k - 3)) / (n dot (2k) (2k - 1)) = boxed((3k - 3) / (4k - 2)) $
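As a quick sanity check (added note, not part of the original solution): for $k = 2$, i.e. each node initially linked to its four nearest neighbours, this gives $(3 dot 2 - 3)/(4 dot 2 - 2) = 3/6 = 1/2$, which matches the known clustering coefficient of the unrewired ring lattice.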
= Problem 2
== (1)
Plugging into Wolfram Alpha, the second derivative of $F$ comes out to the following:
$ F''(t) = ((p + q)^3 dot e^(-t(p+q)) dot p dot (q e^(-t(p+q)) - p)) / (q e^(-t(p + q)) + p)^3 $
Suppose $p > q$. Then it must be the case that the $q e^(-t(p+q)) - p$ term is always negative no matter what the value of $t$ is, and thus the function is concave.
Suppose $q > p$. We then notice that $q e^(-t(p + q)) - p$ is positive before $t = ln(q / p) / (p + q)$ and negative after, giving the desired convex-then-concave shape.
== (2)
The adoption rate is highest when $t^*(p, q) = ln(q / p) / (p + q)$. This is the same break point that we defined above in *(1)*: the adoption rate peaks at the point where the second derivative of $F$ equals zero, which is exactly the expression we found earlier.
== (3)
Let us take the derivative with respect to both $p$ and $q$.
WRT $p$:
$ ln(p / q) / (p + q)^2 - 1 / (p (p + q)) $
WRT $q$:
$ ln(p / q) / (p + q)^2 + 1 / (q (p + q)) $
We don't have to consider the case where $p > q$ since $t^*$ is fixed, so instead we focus on when $q > p$. Under this condition, the derivative WRT $p$ is always negative, meaning the value is constantly decreasing. The numerator of the first function must be negative since $q > p$, and the second function only takes more away.
On the other hand, there does not exist such an observation for the derivative WRT $q$, and the direction is completely dependent on the specific values of $p$ and $q$
== (4)
Conceptually, the innovation rate is an independent rate, and thus it makes sense that it is not ambiguous. It represents a simple and straightforward metric of how likely an individual is to adopt a new product, completely disregarding any external opinions.
However, on the other side, the imitation rate $q$, depends very heavily on the innovation rate, and cannot stand as a metric by itself. My theory is that the imitation rate acts as a sort of balance on the heavy sway that the innovation rate creates. For example, if there is an extremely high innovation rate $p$, the imitation rate can work to quench (or boost) the initial momentum.
Regardless of the importance of $q$, its ambiguity stems intuitively from the fact that it relies on external factors, such as $p$, the innovation rate.
= Problem 3
== (1)
Dynamic Equations:
$
dot(S)(t) &= -gamma dot R_0 dot S(t) dot I(t)\
dot(I)(t) &= gamma dot R_0 dot S(t) dot I(t) - gamma dot I(t)\
dot(R)(t) &= gamma dot I(t)
$
Initial Conditions:
$
S(0) &= 1 - iota - pi\
I(0) &= iota\
R(0) &= pi
$
== (2)
We start by solving the differential equation $dot(S)(t) / S(t) = -R_0 dot dot(R)(t)$. Plugging the given values in, this comes out to the solution $S(t) = (1 - iota - pi) dot e^(-R_0 (R(t) - pi))$. From lecture we have the following:
$ lim_(t = infinity) R(t) = 1 - lim_(t = infinity) S(t) = 1 - (1 - iota - pi) dot e^(-R_0 (lim_(t = infinity) R(t) - pi)) $
Thus we can combine the information above as well as the other limits of the dynamic equations to get a simplified form for the number of disease holders $D$:
$ lim_(t = infinity) D(t) = 1 - pi - (1 - iota - pi) dot e^(-R_0 dot lim_(t = infinity) D(t)) $
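As an illustrative check (my addition, with assumed values): taking $iota approx 0$, $pi = 0$ and $R_0 = 2$, the fixed-point equation becomes $D = 1 - e^(-2D)$, whose nonzero solution is $D approx 0.797$ — roughly 80% of the population is eventually infected.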
== (3)
Results of graphing the above equation on Desmos after making the $iota approx 0$ approximation are shown below:
#bimg("img/graph.png")
The red line represents the testing, while the blue line represents increasing vaccinations instead. Right before $4.5$, the graph crosses, and it becomes more valuable to test rather than vaccinate.
|
|
https://github.com/Myriad-Dreamin/typst.ts | https://raw.githubusercontent.com/Myriad-Dreamin/typst.ts/main/packages/typst.solid/README.md | markdown | Apache License 2.0 | # @myriaddreamin/typst.solid
## Usage
```ts
import { TypstDocument } from "@myriaddreamin/typst.solid";
import { createResource } from "solid-js";
export const App = (artifact: Uint8Array) => {
const getArtifactData = async () => {
const response = await fetch(
"http://localhost:3000/readme.artifact.sir.in"
).then((response) => response.arrayBuffer());
return new Uint8Array(response);
};
const [vec] = createResource(getArtifactData);
return (
<div>
<h1>Demo: Embed Your Typst Document in Solid </h1>
<TypstDocument fill="#343541" artifact={vec()} />
</div>
);
};
```
## Documentation
|
https://github.com/jgm/typst-hs | https://raw.githubusercontent.com/jgm/typst-hs/main/test/typ/compiler/import-24.typ | typst | Other | // Error: 14-15 unexpected colon
// Error: 16-17 unexpected integer
#import "": a: 1
|
https://github.com/elliotnash/TiM | https://raw.githubusercontent.com/elliotnash/TiM/main/README.md | markdown | # TiM
TiM (Typst for iMessage) is an iMessage bot that renders Typst code.
Inspired by [typst-bot](https://github.com/mattfbacon/typst-bot) for discord!
## Getting started
### Requirements
- A [BlueBubbles server](https://github.com/BlueBubblesApp/bluebubbles-server) instance (TiM uses the BlueBubbles API to read and send iMessages)
- [Java](https://www.oracle.com/java/technologies/downloads/) 17 or above
- [Rust](https://rustup.rs/) toolchain
### Compiling
To get started, clone the git repo
```shell
git clone https://github.com/elliotnash/TiM.git
cd TiM
```
You must copy the `config.example.toml` file to `config.toml`, and fill out the required parameters.
If running on the same machine as BlueBubbles, `BB_URL` should be set to `http://localhost:{BB_PORT}`
To compile TiM, run
```shell
./gradlew build
```
This will output 4 files in `build/libs` — the compiled jar, the rust worker binary, a launcher script, and a copy of the .env file in the root. These output files need to remain together.
### Running
To run TiM, run `./TiM`
A very basic systemd service file is available [here](scripts/TiM.service)
|
|
https://github.com/Nrosa01/TFG-2023-2024-UCM | https://raw.githubusercontent.com/Nrosa01/TFG-2023-2024-UCM/main/Pruebas/Guión Entrevistador.typ | typst | #import "setup.typ": *
#show: project
= Entrevistador
- Explicar simulador: Usando lenguaje natural, estando presenta o compartiendo pantalla, ejecutar el simulador, explicar qu es y sus controles.
Definición: Un simulador de arena es un programa que simula el comportamiento de distintas partículas en un entorno virtual. En este caso, el simulador de arena que vamos a usar simula el comportamiento de cuatro partículas: arena, agua, vapor y lava. Cada partícula tiene un comportamiento distinto y se mueve de manera diferente en el entorno.
- Explicación de la extensión: En Lua, usar el documento de Lua, leerlo con el usuario y explicarle que es Lua, su sintaxis y como crear particula con el API. Una vez se llegue al final del documento, se debe mostrar en tiempo real al usuario como crear una particula que se mueva hacia abajo sin comprobar nada. En el caso de Blockly, se han de explicar los bloques, resaltando el primero (particula) y el bloque foreach. El resto de bloques pueden tener una explicación menos profunda. Se debe mostrar también al usuario el botón de ayuda para consultar la documentación de los bloques. Finalmente, igual que en Lua, mostrar como crear una particula que simplemente se mueva hacia abajo sin comprobar nada.
- Mostrar solución: Mostrar las 4 particula que se van a proponerr si mostrar el código o los bloques, solo mostrar el resultado visual (sin que se vea el código o los bloques) . Tras esto, se explicará el comportamiento de cada particula y se proporcionará un documento al usuario para que pueda consultar la descripción de la tarea.
En el caso de Lua se entregará un fichero .zip que contiene un acceso directo al juego, que estará en un directorio al mismo nivel del acceso directo. Además, habrá 4 archivo lua: sand.lua, water.lua, steam.lua y lava.lua. Estos archivos contendrán la plantilla de creación de partículas explicada en el documento de Lua. Estas partículas no tienen ningún comportamiento ni tinen el nombre puesto, el usuario debe hacerlo. Sin embargo el color de cada particula sí está puesto.
En el caso de Blockly, se proporcionará la URL al entorno de pruebas, que por defecto solo tiene la partícula del vacío. El usuario debe crear las nuevas particulas con los botones y renombrarlas. |
|
https://github.com/k4zuy/Typst-Template-HTW | https://raw.githubusercontent.com/k4zuy/Typst-Template-HTW/main/README.md | markdown | # Typst-Template-HTW
Uses https://github.com/ls1intum/thesis-template-typst.git as first template and inspiration
|
|
https://github.com/yhtq/Notes | https://raw.githubusercontent.com/yhtq/Notes/main/经济学原理/hw/hw6.typ | typst | #import "../../template.typ": *
#show: note.with(
title: "作业5",
author: "YHTQ ",
date: none,
logo: none,
withOutlined: false
)
=
(1) 效用最大时将有:
$
(3Y^2)/(6X Y) = ((delta U)/ (delta X))/((delta U)/ (delta y)) ="MRS"_(x y) = P_x / P_y = 2000 / 3000
$
也即:
$
Y / (2X) = 2/3\
3Y = 4X
$
结合:
$
X P_1 + Y P_2 = 54000\
2000 X + 3000 Y = 54000\
$
解得:
$
X = 9, Y = 12
$
(2) 总效用为 $3*9*12^2 = 3888$
=
(1) 在 $4 X + 4 Y = 144$ 的约束下,令:
$
U = X Y
$
最大,由对称性易知 $X = Y$ 时效用最大,此时 $X = Y = 18$,效用为 $18^2 = 324$
(2) 在 $4 X + 9 Y = 144$ 的约束下,令:
$
U = X Y
$
最大,此时有:
$
Y / X = ((delta U)/ (delta X))/((delta U)/ (delta y)) ="MRS"_(x y) = P_x / P_y = 4 / 9\
9 Y = 4 X
$
解得:
$
X = 18, Y = 8
$
总效用为:
$
18 * 8 = 144
$
(3) 容易类似解得收入为 $I$ 时,最大效益为:
$
I^2 / (4 dot 36)
$
令其 $= 324$,解得:
$
I = 216
$
此时得商品组合为:
$
X = 27, Y = 12
$
(4)
- 先计算替代效应,为了在涨价后得到相同的效益,此时的商品组合即为 (3) 中计算所得 $X = 27, Y = 12$,也即替代效应使 $Y$ 降低 $6$
- 再计算收入效应,此时由 (2) 得 $Y$ 又降低 $4$
=
本题中效用函数可以单调变换为:
$
U' = X Y
$
(1). 在约束 $2 X + 5 Y = 500$ 下,最大化:
$
U' = X Y
$
将有:
$
Y / X = ((delta U)/ (delta X))/((delta U)/ (delta y)) ="MRS"_(x y) = P_x / P_y = 2 / 5\
$
解得:
$
Y = 50, X = 125
$
(2). 为了合并 (3),直接在约束:
$
X + 5 Y = I
$
下,最大化:
$
U' = X Y
$
将有:
$
Y / X = ((delta U)/ (delta X))/((delta U)/ (delta y)) ="MRS"_(x y) = P_x / P_y = 1 / 5\
$
解得:
$
X = I / 2, Y = I / 10
$
此时效用为:
$
X Y = I^2 / 20
$
取 $I = 500$,得 $X = 250, Y = 50$
(3) 取 $I = 400$,得效用为 $400*400/20 = 8000$,而原先的效用为 $50 * 125 = 6250$,因此应该加入
=
+ 数量配给可能截断预算线,换言之,部分在收入上合理但受到数量配给限制的商品束将不出现在预算线上。
+ 实物补贴可看作在一定数量之下时,某商品价格为零,此时预算线形式为:
$
P_1 X = I (Y <= Y_0)\
P_1 X + P_2 (Y - Y_0) = I (Y > Y_0)
$
这将变成一条折线
+ 数量折扣可看作某商品 $Y$ 的价格是随 $Y$ 上升单调递减的函数 $f(Y)$,此时预算线形式为:
$
P_1 X + integral_0^Y f(t)d t = I
$
这将变成一条曲线。更进一步(不妨假设 $f(Y)$ 连续,不连续时略微复杂一些但结论是成立的):
$
(d X)/(d Y) = - f(Y)
$
它是单调上升的,也可得其凹凸性。
|
|
https://github.com/gongke6642/tuling | https://raw.githubusercontent.com/gongke6642/tuling/main/Text/highlight.typ | typst | #set text(
size:10pt,
)
#set page(
paper:"a5",
margin:(x:1.8cm,y:1.5cm),
)
#set par(
justify: true,
leading: 0.52em,
)
= 高亮
用背景色突出显示文本。
= 例
#image("1.png")
= 参数
#image("2.png")
= 填充
用于突出显示文本的颜色。
默认:rgb("fffd11a1")
= 笔画
高亮显示的边框颜色。有关详细信息,请参阅矩形的文档。
默认:(:)
= 顶部边缘
背景矩形的顶端。
变体与详情:
"ascender"
上行线高度,通常超过所有字形的高度。
"cap-height"
大写字母的近似高度。
"x-height"
非上行线小写字母的近似高度。
"baseline"
字母所依托的基线高度。
"bounds"
字形边界框的顶部边缘高度。
默认:"ascender"
= 底部边缘
背景矩形的底端。
变体与详情:
"baseline"
字母所在的基线。
"descender"
字体的下行线高度,通常超过所有字形的深度。
"bounds"
字形边界框的下边缘。
默认:"descender"
= 延伸
将背景扩展到内容之外(负数则在内容之内)的数量。
默认:0pt
= 半径
高光的四周圆的半径是多少。有关详细信息,请参阅矩形的文档。
= 内容
应突出显示的内容 |
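= 用法示例
下面补充一个示例(假设:以下参数均为 Typst 内置 highlight 函数所支持的参数,原文未给出代码示例):
```typ
#highlight[默认的黄色高亮]
#highlight(fill: aqua, extent: 2pt)[自定义填充并向外延伸 2pt]
#highlight(top-edge: "x-height", bottom-edge: "baseline")[收紧背景矩形的上下边缘]
```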
|
https://github.com/daskol/typst-templates | https://raw.githubusercontent.com/daskol/typst-templates/main/neurips/main.typ | typst | MIT License | #import "/neurips.typ": botrule, midrule, neurips2024, paragraph, toprule, url
#import "/logo.typ": LaTeX, LaTeXe, TeX
#let affls = (
airi: ("AIRI", "Moscow", "Russia"),
skoltech: (
department: "AI Center",
institution: "Skoltech",
location: "Moscow",
country: "Russia"),
skoltech2: (
department: "AI Center",
institution: "Skoltech",
location: "Moscow",
country: "Russia"),
)
#let authors = (
(name: "Firstname1 Lastname1",
affl: "skoltech",
email: "<EMAIL>",
equal: true),
(name: "Firstname2 Lastname2", affl: ("airi", "skoltech"), equal: true),
)
#show: neurips2024.with(
title: [Formatting Instructions For NeurIPS 2024],
authors: (authors, affls),
keywords: ("Machine Learning", "NeurIPS"),
abstract: [
The abstract paragraph should be indented ½ inch (3 picas) on both the
left- and right-hand margins. Use 10 point type, with a vertical spacing
(leading) of 11 points. The word *Abstract* must be centered, bold, and in
point size 12. Two line spaces precede the abstract. The abstract must be
limited to one paragraph.
],
bibliography: bibliography("main.bib"),
bibliography-opts: (title: none, full: true), // Only for example paper.
appendix: [
#include "appendix.typ"
#include "checklist.typ"
],
accepted: false,
)
= Submission of papers to NeurIPS 2024
Please read the instructions below carefully and follow them faithfully.
== Style
Papers to be submitted to NeurIPS 2024 must be prepared according to the
instructions presented here. Papers may only be up to *nine* pages long,
including figures. Additional pages _containing only acknowledgments and
references_ are allowed. Papers that exceed the page limit will not be
reviewed, or in any other way considered for presentation at the conference.
The margins in 2024 are the same as those in previous years.
Authors are required to use the NeurIPS #LaTeX style files obtainable at the
NeurIPS website as indicated below. Please make sure you use the current files
and not previous versions. Tweaking the style files may be grounds for
rejection.
== Retrieval of style files
The style files for NeurIPS and other conference information are available on
the website at
#align(center, block(spacing: 15pt, {
url("http://www.neurips.cc/")
}))
The file `neurips_2024.pdf` contains these instructions and illustrates the
various formatting requirements your NeurIPS paper must satisfy.
The only supported style file for NeurIPS 2024 is `neurips_2024.sty`, rewritten
for #LaTeXe. *Previous style files for #LaTeX 2.09, Microsoft Word, and RTF
are no longer supported!*
The #LaTeX style file contains three optional arguments: `final`, which creates
a camera-ready copy, `preprint`, which creates a preprint for submission to,
e.g., arXiv, and `nonatbib`, which will not load the `natbib` package for you
in case of package clash.
#paragraph[Preprint option] If you wish to post a preprint of your work online,
e.g., on arXiv, using the NeurIPS style, please use the `preprint` option. This
will create a nonanonymized version of your work with the text "Preprint. Work
in progress." in the footer. This version may be distributed as you see fit, as
long as you do not say which conference it was submitted to. Please *do not*
use the `final` option, which should *only* be used for papers accepted to
NeurIPS.
At submission time, please omit the `final` and `preprint` options. This will
anonymize your submission and add line numbers to aid review. Please do _not_
refer to these line numbers in your paper as they will be removed during
generation of camera-ready copies.
The file `neurips_2024.tex` may be used as a "shell" for writing your paper.
All you have to do is replace the author, title, abstract, and text of the
paper with your own.
The formatting instructions contained in these style files are summarized in
Sections~#ref(<gen_inst>, supplement: none), #ref(<headings>, supplement:
none), and #ref(<others>, supplement: none) below.
= General formatting instructions <gen_inst>
The text must be confined within a rectangle 5.5~inches (33~picas) wide and
9~inches (54~picas) long. The left margin is 1.5~inch (9~picas). Use 10~point
type with a vertical spacing (leading) of 11~points. Times New Roman is the
preferred typeface throughout, and will be selected for you by default.
Paragraphs are separated by ½~line space (5.5 points), with no indentation.
The paper title should be 17~point, initial caps/lower case, bold, centered
between two horizontal rules. The top rule should be 4~points thick and the
bottom rule should be 1~point thick. Allow ¼~inch space above and below the
title to rules. All pages should start at 1~inch (6~picas) from the top of the
page.
For the final version, authors' names are set in boldface, and each name is
centered above the corresponding address. The lead author's name is to be
listed first (left-most), and the co-authors' names (if different address) are
set to follow. If there is only one co-author, list both author and co-author
side by side.
Please pay special attention to the instructions in @others regarding figures,
tables, acknowledgments, and references.
= Headings: first level <headings>
All headings should be lower case (except for first word and proper nouns),
flush left, and bold.
First-level headings should be in 12-point type.
== Headings: second level
Second-level headings should be in 10-point type.
=== Headings: third level
Third-level headings should be in 10-point type.
#paragraph[Paragraphs] There is also a `\paragraph` command available, which
sets the heading in bold, flush left, and inline with the text, with the
heading followed by #1em of space.
= Citations, figures, tables, references <others>
These instructions apply to everyone.
== Citations within the text
The `natbib` package will be loaded for you by default. Citations may be
author/year or numeric, as long as you maintain internal consistency. As to
the format of the references themselves, any style is acceptable as long as it
is used consistently.
The documentation for `natbib` may be found at
#align(center)[
#url("http://mirrors.ctan.org/macros/latex/contrib/natbib/natnotes.pdf")
]
Of note is the command `\citet`, which produces citations appropriate for use
in inline text. For example,
```tex
\citet{hasselmo} investigated\dots
```
produces
#{
show quote: set block(spacing: 15pt)
quote(block: true)[Hasselmo, et al.~(1995) investigated\dots]
}
If you wish to load the `natbib` package with options, you may add the
following before loading the `neurips_2024` package:
```tex
\PassOptionsToPackage{options}{natbib}
```
If `natbib` clashes with another package you load, you can add the optional
argument `nonatbib` when loading the style file:
```tex
\usepackage[nonatbib]{neurips_2024}
```
As submission is double blind, refer to your own published work in the third
person. That is, use "In the previous work of Jones et al.~[4]," not "In our
previous work [4]." If you cite your other papers that are not widely available
(e.g., a journal paper under review), use anonymous author names in the
citation, e.g., an author of the form "A.~Anonymous" and include a copy of the
anonymized paper in the supplementary material.
#v(7pt) // In order to match original template.
== Footnotes
Footnotes should be used sparingly. If you do require a footnote, indicate
footnotes with a number#footnote[Sample of the first footnote.] in the text.
Place the footnotes at the bottom of the page on which they appear. Precede the
footnote with a horizontal rule of 2~inches (12~picas).
Note that footnotes are properly typeset _after_ punctuation marks.#footnote[As
in this example.]
#v(7pt) // In order to match original template.
== Figures
#figure(
rect(width: 4.25cm, height: 4.25cm, stroke: 0.4pt),
caption: [Sample figure caption.],
placement: top,
)
All artwork must be neat, clean, and legible. Lines should be dark enough for
purposes of reproduction. The figure number and caption always appear after the
figure. Place one line space before the figure caption and one line space after
the figure. The figure caption should be lower case (except for first word and
proper nouns); figures are numbered consecutively.
You may use color figures. However, it is best for the figure captions and the
paper body to be legible if the paper is printed in either black/white or in
color.
// In order to match original template.
#pagebreak()
#v(-9pt)
== Tables <tables>
All tables must be centered, neat, clean and legible. The table number and
title always appear before the table. See @sample-table.
Place one line space before the table title, one line space after the
table title, and one line space after the table. The table title must
be lower case (except for first word and proper nouns); tables are
numbered consecutively.
Note that publication-quality tables _do not contain vertical rules_. We
strongly suggest the use of the `booktabs` package, which allows for
typesetting high-quality, professional tables:
#align(center)[
#url("https://www.ctan.org/pkg/booktabs")
]
This package was used to typeset @sample-table.
#figure(
caption: [Sample table title.],
placement: top,
table(
columns: 3,
align: left + horizon,
stroke: none,
toprule,
table.header(
table.cell(colspan: 2, align: center)[Part], [],
table.hline(start: 0, end: 2, stroke: (thickness: 0.05em)),
[Name], [Description], [Size ($mu$m)],
),
midrule,
[Dendrite], [Input terminal ], [$~100$],
[Axon ], [Output terminal], [$~10$],
[Soma ], [Cell body ], [up to $10^6$],
botrule,
), // TODO(@daskol): Fix gutter between rows in body.
) <sample-table>
== Math
Note that display math in bare TeX commands will not create correct line
numbers for submission. Please use LaTeX (or AMSTeX) commands for unnumbered
display math. (You really shouldn't be using $dollar dollar$ anyway; see
#url("https://tex.stackexchange.com/questions/503/why-is-preferable-to") and
#url("https://tex.stackexchange.com/questions/40492/what-are-the-differences-between-align-equation-and-displaymath")
for more information.)
== Final instructions
Do not change any aspects of the formatting parameters in the style files. In
particular, do not modify the width or length of the rectangle the text should
fit into, and do not change font sizes (except perhaps in the *References*
section; see below). Please note that pages should be numbered.
= Preparing PDF files
Please prepare submission files with paper size "US Letter," and not, for
example, "A4."
Fonts were the main cause of problems in the past years. Your PDF file must only
contain Type 1 or Embedded TrueType fonts. Here are a few instructions to
achieve this.
- You should directly generate PDF files using `pdflatex`.
- You can check which fonts a PDF files uses. In Acrobat Reader, select the
menu Files$>$Document Properties$>$Fonts and select Show All Fonts. You can
also use the program `pdffonts` which comes with `xpdf` and is available
out-of-the-box on most Linux machines.
- `xfig` "patterned" shapes are implemented with bitmap fonts. Use "solid"
shapes instead.
- The `\bbold` package almost always uses bitmap fonts. You should use the
equivalent AMS Fonts:
```tex
\usepackage{amsfonts}
```
followed by, e.g., `\mathbb{R}`, `\mathbb{N}`, or `\mathbb{C}` for $RR$, $NN$
or $CC$. You can also use the following workaround for reals, natural and
complex:
```tex
\newcommand{\RR}{I\!\!R} %real numbers
\newcommand{\Nat}{I\!\!N} %natural numbers
\newcommand{\CC}{I\!\!\!\!C} %complex numbers
```
Note that `amsfonts` is automatically loaded by the `amssymb` package.
If your file contains Type 3 fonts or non embedded TrueType fonts, we will ask
you to fix it.
== Margins in #LaTeX
Most of the margin problems come from figures positioned by hand using
`\special` or other commands. We suggest using the command `\includegraphics`
from the `graphicx` package. Always specify the figure width as a multiple of
the line width as in the example below:
```tex
\usepackage[pdftex]{graphicx} ...
\includegraphics[width=0.8\linewidth]{myfile.pdf}
```
See @tables in the graphics bundle documentation
(#url("http://mirrors.ctan.org/macros/latex/required/graphics/grfguide.pdf"))
A number of width problems arise when #LaTeX cannot properly hyphenate a line.
please give #LaTeX hyphenation hints using the `\-` command when necessary.
// note this is the acknowledgments section which is not visible in draft.
#if false [
use unnumbered first level headings for the acknowledgments. all
acknowledgments go at the end of the paper before the list of references.
moreover, you are required to declare funding (financial activities supporting
the submitted work) and competing interests (related financial activities
outside the submitted work). More information about this disclosure can be
found at:
#url("https://neurips.cc/Conferences/2024/PaperInformation/FundingDisclosure")
Do *not* include this section in the anonymized submission, only in the final
paper. You can use the `ack` environment provided in the style file to
automatically hide this section in the anonymized submission.
]
// We typeset the reference section header manually in order to reproduce the example
// paper. No special effort is required (a user should not override
// `bibliography-opts` as well).
#heading(numbering: none)[References]
References follow the acknowledgments in the camera-ready paper. Use unnumbered
first-level heading for the references. Any choice of citation style is
acceptable as long as you are consistent. It is permissible to reduce the font
size to `small` (9 point) when listing the references. Note that the Reference
section does not count towards the page limit.
|
https://github.com/ivaquero/book-control | https://raw.githubusercontent.com/ivaquero/book-control/main/15-轨迹追踪.typ | typst | #import "@local/scibook:0.1.0": *
#show: doc => conf(
title: "轨迹追踪",
author: ("ivaquero"),
header-cap: "现代控制理论",
footer-cap: "github@ivaquero",
outline-on: false,
doc,
)
= 钟摆系统
<钟摆系统>
== 系统构成
对系统
$ dot.double(ϕ) - g / L ϕ + 1 / L dot(δ) = 0 $
令$x_1 = ϕ, x_2 = dot(ϕ), u = 1 / L dot.double(δ)$
可得
$
mat(delim: "[", dot(x)_1; dot(x)_2) = mat(delim: "[", 0, 1; g/L, 0) mat(delim: "[", x_1; x_2) + mat(delim: "[", 0; - 1) u
$
又令
$ u = -mat(delim: "[", k_1, k_2) mat(delim: "[", x_1; x_2) $
其开环平衡点
$
dot(x)_1 = 0 ⇒ x_2 = 0\
dot(x)_2 = 0 ⇒ x_1 = 0
$
故当$t → ∞$,$x_1, x_2 → 0$。
== 误差函数
设目标值$x_(1 d) = 5$,则误差为$e = x_(1 d) - x_1$,于是有
$ dot(e) = dot(x)_(1 d) - dot(x)_1 = -dot(x)_1 = x_2 $
又
$ dot(x)_2 = g / L x_1 - u = g / L (x_(1 d) - e) - u $
即
$
mat(delim: "[", e ̇; dot(x)_2) =
mat(delim: "[", 0, - 1; - g/L, 0)
mat(delim: "[", e; x_2) +
mat(delim: "[", 0; - 1) u +
mat(delim: "[", 0; g/L x_(1 d))
$
其开环平衡点
$
& dot(e) = 0 ⇒ x_(2 f) = 0\
& dot(x)_2 = 0 ⇒ e_f = x_1 d
$
为改变平衡点,令$u = -mat(delim: "[", k_1, k_2) mat(delim: "[", e_(x_2); x_2) + e_f$,代入,得
$
mat(delim: "[", e ̇; dot(x)_2) = mat(delim: "[", 0, - 1; - g/L + k_1, k_2) mat(delim: "[", e; x_2)
$
此时
- $e_f = 0$
- $x_(2 f) = 0$
设计$k_1, k_2$,令$"Re"("eig"(A_("cl"))) < 0$,于是由特征行列式为$0$,得
$
mat(delim: "[", λ, 1; g/L - k_1, λ - k_2)
&= λ^2 - k_2 λ - g / L + k_1\
&= λ^2 + 2 λ + 1
$
得
- $k_1 = 1 + g / L$
- $k_2 = -2$
代入$u$的表达式,得
$ u = -x_d + (1 + g / L) x_1 + 2 x_2 $
= 弹簧振动阻尼系统
== 系统构成
对弹簧振动阻尼系统
#figure(
image("images/model/vibration.drawio.png", width: 40%),
caption: "弹簧振动阻尼系统",
supplement: [图],
)
- 当目标$𝒙_d = vec(delim: "[", 0, 0)$,为经典 LQR 问题
- 当目标$𝒙_d = vec(delim: "[", 1, 0)$或不为$𝟎$时,为轨迹追踪问题
其离散线性系统方程为
$ 𝒙_([k+1]) = 𝑨 𝒙_([k]) + 𝑩 𝒖_([k]) $ <sys-spring>
若追踪目标为
$ 𝒙_(d[k+1]) = 𝑨_D 𝒙_(d[k]) $
设$𝒙_d$为常数向量,则令
$ 𝒆_([k]) = 𝒙_([k]) - 𝒙_(d[k]) = underbrace(mat(delim: "[", 𝑰, -𝑰), "𝑪ₐ") vec(delim: "[", 𝒙_([k]), 𝒙_(d[k])) $
此时,目标值$𝒆_(d[k]) = 0$,问题转化为 LQR 问题。于是有
$
underbrace(vec(delim: "[", 𝒙_([k+1]), 𝒙_(d[k+1])), "𝒙ₐ[k+1]") = underbrace(dmat(delim: "[", 𝑨, 𝑨_0), "𝑨ₐ") underbrace(vec(delim: "[", 𝒙_([k]), 𝒙_(d[k])), "𝒙ₐ[k]") + underbrace(vec(delim: "[", 𝑩, 𝟎), "𝑩ₐ") 𝒖
$
进一步有
$ 𝒆_([k]) = 𝑪_a 𝒙_(a[k]) $
对应的代价函数为
$
J &= 1 / 2 𝒆^(⊤)_([N]) 𝑺 𝒆_([N]) + 1 / 2 ∑_(k=0)^(N-1) (𝒆^(⊤)_([k]) 𝑸 𝒆_([k]) + 𝒖^(⊤)_([k]) 𝑹 𝒖_([k]))\
&= 1 / 2 (𝑪_a 𝒙_(a[k]))^(⊤)_([N]) 𝑺 (𝑪_a 𝒙_(a[k])) + 1 / 2 ∑_(k=0)^(N-1) (
(𝑪_a 𝒙_(a[k]))^(⊤)_([k]) 𝑸 (𝑪_a 𝒙_(a[k]))_([k]) + 𝒖^(⊤)_([k]) 𝑹 𝒖_([k])
)\
&= 1 / 2 𝒙_(a[k])^(⊤) 𝑺_a 𝒙_(a[k]) + 1 / 2 ∑_(k=0)^(N-1) (𝒙_(a[k])^(⊤) 𝑸_a 𝒙_(a[k]) + 𝒖^(⊤)_([k]) 𝑹 𝒖_([k]))
$ <cost-spring>
上式成功将追踪问题转化为了 LQR 问题。但当$𝑹$过大,则系统将躺平,无法进行追踪。
== 稳态非零常数输入
对轨迹追踪问题,有
$ 𝒙_(a[k+1]) = 𝑨_D 𝒙_(d[k]) $ <aug>
考虑特例,如恒温或匀速控制,此时$𝑨_D = 𝑰$,即
$ 𝒙_(d[k]) = 𝒙_d $
且$𝒙_d$处,系统处于稳态,即输入$𝒖$使系统总能得到$𝒙_d$。于是
$ 𝒙_d = 𝑨 𝒙_d + 𝑩 𝒖_d $
可得
$ (𝑰 - 𝑨)𝒙_d = 𝑩 𝒖_d $
定义稳态误差
$ δ 𝒖_([k]) = 𝒖_([k]) - 𝒖_([d]) $
代入@eqt:sys-spring,可得
$
𝒙_([k+1]) &= 𝑨 𝒙_([k]) + 𝑩(δ 𝒖_([k]) + 𝒖_([d])) \
&= 𝑨 𝒙_([k]) + 𝑩 δ 𝒖_([k]) + (𝑰 - 𝑨) 𝒙_d
$
构造增广矩阵
$
𝒙_(a[
k+1
]) = vec(delim: "[", 𝒙_([k+1]), 𝒙_d) = underbrace(mat(delim: "[", 𝑨, 𝑰 - 𝑨; 𝟎, 𝑰), "𝑨ₐ") vec(delim: "[", 𝒙_([k]), 𝒙_d) + underbrace(vec(delim: "[", 𝑩, 𝟎), "𝑩ₐ") δ 𝒖_([
k
])
$
令
$
𝒆_([k]) = 𝒙_([k]) - 𝒙_d = underbrace(mat(delim: "[", 𝑰, -𝑰), "𝑪ₐ") underbrace(vec(delim: "[", 𝒙_([k]), 𝒙_d), "𝒙ₐ[k]")
$
代入@eqt:cost-spring,得
$
J &= 1 / 2 𝒙^(⊤)_(a[N]) 𝑪^(⊤)_a 𝑺 𝑪_a 𝒙_(a[N]) + 1 / 2 ∑_(k=0)^(N-1) (
𝒙^(⊤)_(a[k]) 𝑪^(⊤)_a 𝑸 𝑪_a 𝒙_(a[k]) + δ 𝒖^(⊤)_([k]) 𝑹 δ 𝒖_([k])
)
$ <cost-spring2>
这就得到了一个新的 LQR 问题。
#figure(
image("images/block/lqr-trk-const.drawio.png", width: 40%),
caption: "轨迹追踪 LQR 系统",
supplement: [图],
)
== 稳态非零矩阵输入
对非常数输入,@eqt:aug 中$𝑨_D ≠ 𝑰$。此时,定义输入增量
$ Δ 𝒖_([k]) = underbrace(𝒖_([k]) - 𝒖_([k-1]), "平滑输入的变化") $
代人@eqt:sys-spring,可得
$ 𝒙_([k+1]) = 𝑨 𝒙_([k]) + 𝑩 𝒖_([k]) + 𝑩 𝒖_([k-1]) $
设增广向量
$ 𝒙_(a[k]) = vec(delim: "[", 𝒙_([k]), 𝒙_([d]), 𝒖_([k-1])) $
于是
$ 𝒆_([k]) = 𝒙_([k]) - 𝒙_d = mat(delim: "[", 𝑰, -𝑰, 𝟎) vec(delim: "[", 𝒙_([k]), 𝒙_([d]), 𝒖_([k-1])) = 𝑪_a 𝒙_(a[k]) $
进一步有
$
𝒙_(a[k+1]) = underbrace(mat(delim: "[", 𝑨, 𝟎, 𝑩; 𝟎, 𝑨_D, 𝟎; 𝟎, 𝟎, 𝑰), "𝑨ₐ") 𝒙_(a[
k
]) + underbrace(vec(delim: "[", 𝑩, 𝟎, 𝑰), "𝑩ₐ") Δ 𝒖_([k])
$
类比@eqt:cost-spring2,得
$
J = 1 / 2 𝒙^(⊤)_(a[N]) 𝑪^(⊤)_a 𝑺 𝑪_a 𝒙_(a[N]) + 1 / 2 ∑_(k=0)^(N-1) (
𝒙^(⊤)_(a[k]) 𝑪^(⊤)_a 𝑸 𝑪_a 𝒙_(a[k]) + Δ 𝒖^(⊤)_([k]) 𝑹 Δ 𝒖_([k])
)
$
#figure(
image("images/block/lqr-trk-var.drawio.png", width: 40%),
caption: "稳态非零矩阵输入",
supplement: [图],
)
这里,我们通过矩阵变换将追踪(tracking)转换为了调控(regulation)。
#pagebreak()
== 稳态正弦函数输入
在符合 Newton 第二定律的系统中,正弦振动是线性系统的内生特性
$
x_(1d) &= sin(ω t) \
x_(2d) &= dv(x_(1d), t) = ω cos(ω t) \
dv(x_(2d), t) &= -ω^2 sin(ω t) = -ω^2 x_(1d)
$
其矩阵形式为
$
dv(, t) vec(delim: "[", x_(1d), x_(2d)) = underbrace(mat(delim: "[", 0, 1; -ω^2, 0), 𝐀_D) vec(delim: "[", x_(1d), x_(2d))
$
|
|
https://github.com/George-Miao/typst-common | https://raw.githubusercontent.com/George-Miao/typst-common/main/0.2.0/src/util.typ | typst | MIT License | #import "@preview/t4t:0.3.2": is
#let inset(body) = block(inset: (x: 1em, y: .3em), body)
#let today() = {
datetime.today().display("[month repr:short] [day padding:none], [year]")
}
#let centering(body) = {
box(width: 1fr)[ #align(alignment.center, body) ]
}
#let tagged(tag, pos: "right", body) = {
let dir
if pos == "right" {
dir = ltr
} else if pos == "left" {
dir = rtl
} else {
panic("Invalid position for tagged, expected 'right' or 'left'")
}
let tag_c = [(#tag)]
let body_c = body
style(s => stack(
dir: dir,
block(width: 100% - measure(tag_c, s).width, centering(body_c)),
tag_c,
))
}
#let pillar(height: 1em) = {
place(auto, float: true, block(width: 0em, height: height))
}
#let falsy(x) = {
if is.n(x) {
return true
} // Test for none
if is.empty(x) {
return true
} // Test for empty string/array
false
}
#let if_then(input, func) = {
if not falsy(input) {
func(input)
}
}
#let if_else(input, t, f) = {
if falsy(input) {
f(input)
} else {
t(input)
}
}
#let if_then_ln(input, func) = {
if_then(input, x => [#func(x)\ ])
}
#let new_theorem(
theorem_name,
level: 2,
cmy: none,
) = {
let counter = counter(theorem_name)
return (name: none, body) => {
// locate(loc => {
// let ele = query(
// selector(heading.where(level: level)).before(loc),
// loc,
// )
// ele.numbering()
// })
let color = if_then(
cmy,
cmy => cmyk(
cmy.at(0),
cmy.at(1),
cmy.at(2),
10%,
),
)
block(
fill: if_then(color, x => x.lighten(92%)),
stroke: if_then(color, x => (left: 1.8pt + x.darken(5%))),
inset: if_else(color, x => (rest: 1em, right: 1.3em), x => 0em),
width: 100%,
)[
#text(1em, weight: 700, [#theorem_name])
#h(-2pt)
#if_else(name, x => [(#x)], x => h(.3em))
#text(1em, weight: 700, [#h(-3pt).])
#body
]
}
}
#let pb = pagebreak()
#let qquad = h(4em)
#let hfill = h(1fr)
#let vfill = v(1fr)
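// Illustrative usage (added sketch, not part of the original module; the import path is assumed):
//   #import "util.typ": new_theorem, tagged
//   #let theorem = new_theorem("Theorem", cmy: (80%, 40%, 0%))
//   #theorem(name: "Example")[Statements are rendered in a tinted, left-ruled block.]
//   #tagged("(1.2)")[An equation-like line with a right-aligned tag.]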
|
https://github.com/Arsenii324/matap-p2 | https://raw.githubusercontent.com/Arsenii324/matap-p2/main/t-repo/lecture2.typ | typst | = Лекция
== Другие признаки знакопостоянных рядов
=== Усиленный радикальный признак Коши
*Пусть:*
a) $a_n >= 0 " " forall n$\
b) $L = overline(limits(lim)_(n arrow infinity)) a_n^(1/n)$
*Тогда:*
1) $L < 1 => limits(sum)_(n=1)^(infinity) a_n$ - сходится
2) $L > 1 => limits(sum)_(n=1)^(infinity) a_n$ - расходится
*Proof:*
1) $overline(limits(lim)_(n arrow infinity)) a_n^(1/n) = L < 1 => exists " " q in (0, 1) " " exists " " N : forall n >= N arrow a_n^(1/n) <= q <=> a_n <= q^n$
$=> limits(sum)_(n=1)^(infinity) q^n "- сходится" =>$
$=> limits(sum)_(n=1)^(infinity) a_n "- сходится"$.
2) $overline(limits(lim)_(n arrow infinity)) a_n^(1/n) = L > 1 => forall N " " exists " " n >= N : a_n^(1/n) > 1 => a_n > 1 => limits(lim)_(n arrow infinity) a_n eq.not 0 => limits(sum)_(n=1)^(infinity) a_n "- расходится"$.
=== Признак Даламбера
*Пусть:*
a) $a_n > 0 " " forall n in NN$
b) $d = limits(underline(lim))_(n arrow infinity) a_(n+1)/a_n$;
$D = limits(overline(lim))_(n arrow infinity) a_(n+1)/a_n$
*Тогда:*
1) $D < 1 => limits(sum)_(n=1)^(infinity) a_n "- сходится"$
2) $d > 1 => limits(sum)_(n=1)^(infinity) a_n "- расходится"$
*Proof:*
$limits(underline(lim))_(n arrow infinity) a_(n+1)/a_n <= limits(underline(lim))_(n arrow infinity) a_n^(1/n) <= limits(overline(lim))_(n arrow infinity) a_n^(1/n) <= limits(overline(lim))_(n arrow infinity) a_(n+1)/a_n$
Ссылаемся на Коши.
_#underline[Замечание]_
Признак Коши сильнее признака Даламбера. Пример ряда, к которому применим Коши, но не применим Даламбер:
$limits(sum)_(n=1)^(infinity) 2^((-1)^n - n)$
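Проверка (добавленный пример): здесь $a_n^(1/n) = 2^(((-1)^n - n)/n) -> 1/2 < 1$, поэтому по радикальному признаку ряд сходится; при этом $a_(n+1)/a_n = 2^((-1)^(n+1) - (-1)^n - 1)$ принимает поочерёдно значения $2$ и $1/8$, то есть $D = 2 > 1$, $d = 1/8 < 1$, и признак Даламбера ответа не даёт.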
=== Интегральный признак
*Пусть:*
a) $f : [1, infinity] arrow RR; " " f(x) >= 0; " " f(x) "не возрастает."$
*Тогда:*
Если сходится $limits(sum)_(n=1)^(infinity) f(n)$, то сходится $integral_1^infinity f(x) d x$
=== Признак Куммера
*Пусть:*
a) $a_n, b_n > 0 " " forall n$
b) Положим $c_n = b_n dot a_n/a_(n+1) - b_(n+1)$
*Тогда:*
1) $exists L > 0 " " exists N : " " forall n >= N arrow c_n >= L => limits(sum)_(n=1)^(infinity) a_n "- сходится"$.
Проще говоря, если $c_n$ финально отделены от 0, то ряд $a_n$ сходится.
2) $limits(sum)_(n=1)^(infinity) 1/b_n "- расходится и " exists N : forall n >= N arrow c_n <= 0 => limits(sum)_(n=1)^(infinity) a_n "- расходится."$
Проще говоря, если ряд $1/b_n$ расходится и $c_n <= 0$ финально, то ряд $a_n$ расходится.
*Proof:*
1) Пусть $N = 1$
$c_n >= L " " forall n => a_n dot b_n - b_(n+1) dot a_(n+1) >= L dot a_(n+1)$
$A_n = limits(sum)_(k=1)^(n) a_k = 1/L limits(sum)_(k=1)^(n) L dot a_k = 1/L (L a_1 + limits(sum)_(k=1)^(n-1) L dot a_(k+1)) <= a_1 + 1/L limits(sum)_(k=1)^(n-1) (a_k b_k - a_(k+1) b_(k+1)) = a_1 + \
+ 1/L dot (a_1 b_1 - a_n b_n) <= a_1 + (a_1 b_1)/L => A_n <= a_1 + (a_1 b_1)/L => {A_n} arrow.tr "и ограничен" => limits(sum)_(n=1)^(infinity) a_n "- сходится."$
2) $c_n <= 0 => a_n/a_(n+1) <= b_(n+1)/b_n = (1/b_n)/(1/b_(n+1))$
По 3 признаку сравнения , т.к. $limits(sum)_(n=1)^(infinity) 1/b_n "- расходится, то " limits(sum)_(n=1)^(infinity) a_n "- тоже расходится."$
=== <NAME>
*Пусть:*
a) $a_n > 0 " " forall n$
b) $limits(lim)_(n arrow infinity) ln(n) (n (a_n/a_(n+1) - 1) - 1) = B in [-infinity;infinity]$
*Тогда:*
1) Если $B > 1 => limits(sum)_(n=1)^(infinity) a_n "- сходится"$.
2) Если $B < 1 => limits(sum)_(n=1)^(infinity) a_n "- расходится"$.
*Proof:*
1) Возьмем в признаке Куммера $b_n = n ln(n); " " limits(sum)_(n=1)^(infinity) 1/b_n "- расходится"$
$c_n = n ln(n) a_n/a_(n+1) - (n+1)ln(n+1) = underbrace(ln(n)((a_n/a_(n+1)-1)-1), arrow B>1) - underbrace((n+1)(ln(n+1) - ln(n)), arrow 1) => \
=> "Выражение" arrow L > 0 => limits(sum)_(n=1)^(infinity) a_n "- сходится по Куммеру."$
2) $c_n = n ln(n) a_n/a_(n+1) - (n+1)ln(n+1) = underbrace(ln(n)((a_n/a_(n+1)-1)-1), arrow B<1) - underbrace((n+1)(ln(n+1) - ln(n)), arrow 1) => \
=> "Выражение" arrow L < 0 => limits(sum)_(n=1)^(infinity) a_n "- расходится по Куммеру."$
=== <NAME>
*Пусть:*
$limits(lim)_(n arrow infinity) n(a_n/a_(n+1) - 1) = R$
*Тогда:*
1) $R > 1 => "сходится"$
2) $R < 1 => "расходится"$
*Proof:*
Подставляем в $limits(lim)_(n arrow infinity) ln(n) (n (a_n/a_(n+1) - 1) - 1) = B$ - из признака Бертрана очевидным образом следуют пункты 1 и 2.
=== Признак Гаусса
*Пусть:*
a) $a_n > 0$, $exists lambda in RR, exists mu in RR, exists epsilon > 0$
b) $a_n/a_(n+1) = lambda + mu/n + gamma/n^(1+epsilon)$, где ${gamma}_1^N$ ограничено.
*Тогда:*
1) $cases(lambda > 1, lambda = 1 " и " mu > 1) => limits(sum)_(n=1)^(infinity) a_n "- сходится"$
2) $cases(lambda < 1, lambda = 1 " и " mu <= 1) => limits(sum)_(n=1)^(infinity) a_n "- расходится"$
*Proof:*
Возьмём Бертрана.
$limits(lim)_(n arrow infinity) ln(n) (n (a_n/a_(n+1) - 1) - 1) = B$
1) Если $lambda > 1$, то $B = +infinity$
Если $lambda = 1$ и $mu > 1$, то:
$limits(lim)_(n arrow infinity) ln(n) (n ((1 + mu/n + gamma/n^(1+epsilon)) - 1) - 1) = limits(lim)_(n arrow infinity) ln(n)(n (mu/n + gamma/n^(1+epsilon)) - 1) = limits(lim)_(n arrow infinity) ln(n)underbrace((mu + gamma/n^epsilon - 1), "const > 0") = +infinity$
2) Если $lambda < 1$, то $B = -infinity$
Если $lambda = 1$ и $mu < 1$, то $B = -infinity$ - аналогично 1 пункту.
Если $lambda = 1$ и $mu = 1$, то в итоге имеем $B = limits(lim)_(n arrow infinity) ln(n)/n^epsilon gamma_n arrow 0$.
== Признаки сравнения любых рядов
=== Признаки Абеля и Дирихле
*Пусть:*
#table(
columns: (50%, 50%),
inset: 10pt,
align: center,
[*Дирихле*], [*Абель*],
[$exists M > 0 : forall n in NN\
abs(limits(sum)_(k=1)^(n) a_k) <= M$], [$limits(sum)_(n=1)^(infinity) a_n "- сходится."$],
[$limits(lim)_(k arrow infinity) b_k = 0$], [$exists M > 0 : forall n\
abs(b_n) <= M$],
[${b_n} arrow.tr$ или ${b_n} arrow.br$], [${b_n} arrow.tr$ или ${b_n} arrow.br$]
)
*Тогда:*
$limits(sum)_(n=1)^(infinity) a_n b_n "- сходится."$
_#underline[Примечание]_
Дирихле $==>$ Абель
=== Преобразование Абеля
*Пусть:*
${a_n}_(n=1)^infinity$ и ${b_n}_(n=1)^infinity$ - последовательности вещественных чисел.
*Тогда:*
$forall n,k in NN$ выполнено:
$limits(sum)_(i= n + 1)^(n + k) a_i b_i = A_(n+k) b_(n+k) - A_n b_(n+1) + limits(sum)_(i= n + 1)^(n + k - 1) A_i (b_i - b_(i+1))$, где $A_m = limits(sum)_(i=1)^(m) a_i$ |
|
https://github.com/FlandiaYingman/note-me | https://raw.githubusercontent.com/FlandiaYingman/note-me/main/lib.typ | typst | MIT License | #import "note-me.typ": admonition, note, tip, important, warning, caution
#import "note-me-more.typ": todo |
https://github.com/loreanvictor/master-thesis | https://raw.githubusercontent.com/loreanvictor/master-thesis/main/thesis_typ/abstract_de.typ | typst | MIT License | #let abstract_de() = {
set page(
margin: (left: 30mm, right: 30mm, top: 40mm, bottom: 40mm),
numbering: none,
number-align: center,
)
let body-font = "New Computer Modern"
let sans-font = "New Computer Modern Sans"
set text(
font: body-font,
size: 12pt,
lang: "en"
)
set par(
leading: 1em,
justify: true
)
// --- Abstract (DE) ---
v(1fr)
align(center, text(font: body-font, 1em, weight: "semibold", "Zusammenfassung"))
text[
UML-Diagramme spielen eine zentrale Rolle bei der Konzeption und Entwicklung von Softwaresystemen. Die inhärenten Schwierigkeiten bei der Übernahme von UML-Diagrammen unterstreichen den Bedarf an Modellierungswerkzeugen mit einem starken Lernfokus. Apollon ist ein solches Projekt, ein quelloffenes, webbasiertes Werkzeug, das so konzipiert ist, dass es in andere Projekte, wie z. B. andere Bildungsplattformen, integriert werden kann, und das hauptsächlich von Studenten im Rahmen von Bachelor- oder Masterarbeiten entwickelt und gepflegt wird. Es fehlt jedoch an Funktionen, die eine Echtzeit-Zusammenarbeit entweder in der Ausbildung oder in der Industrie ermöglichen würden.
Diese Arbeit skizziert eine Lösung zur Erweiterung von Apollon um Echtzeit-Kollaborationsfunktionen, die es mehreren Benutzern ermöglichen, gleichzeitig an demselben Diagramm zu arbeiten. Der Hauptbeitrag dieser Arbeit ist der Entwurf und die Implementierung von Echtzeit-Kollaborationsfunktionen für Apollon, wobei die Integration von Apollon und die Nutzung dieser Funktionen für Plattformen, die es integrieren wollen, einfach gehalten wird, während gleichzeitig eine einfache und wartbare Architektur beibehalten wird, die für den einzigartigen Entwicklungsprozess von Apollon geeignet ist. Die vorliegende Arbeit beinhaltet auch eine theoretische Analyse der implementierten Lösung und schlägt ausreichende theoretische Bedingungen für ihre Anwendung über den Rahmen von Apollon und UML-Diagrammen hinaus vor.
Um dies zu erreichen, werden klassische und moderne Ansätze zur Echtzeit-Zusammenarbeit und verteilten Zustandsverwaltung untersucht. Insbesondere werden konfliktfrei replizierte Datentypen (Conflict-free replicated data types, CRDTs) betrachtet, da ihr verteiltes Design perfekt zu den Anforderungen einer einfachen Integration in andere Plattformen passt. Es wird eine von diesen Ansätzen inspirierte Lösung vorgeschlagen, die sich aus der Lockerung einiger Einschränkungen zugunsten eines einfacheren Architekturdesigns und eines geringeren Ressourcen-Overheads ergibt. Die Lösung wird dann in Apollon implementiert und getestet, und ihre theoretischen Eigenschaften werden analysiert.
]
v(1fr)
} |
https://github.com/Shedward/dnd-charbook | https://raw.githubusercontent.com/Shedward/dnd-charbook/main/dnd/core/dimentions.typ | typst | // Dimentions constants for template
#let strokes = (
hairline: 0.1pt,
thin: 0.5pt,
normal: 1.0pt
)
// Normalized paddings
#let paddings(p) = p * 2mm
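// Example (added): paddings(2) evaluates to 4mm, so callers pass unitless
// multipliers, e.g. `inset: paddings(1.5)` for a 3mm inset.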
|
|
https://github.com/HarryLuoo/sp24 | https://raw.githubusercontent.com/HarryLuoo/sp24/main/431/hw/1/hw4.typ | typst |
#set math.equation(numbering:"(1)")
#set page(margin: (x: 1cm, y: 1cm))
= HW 4, <NAME>
== ex 3.4
Since X has uniform distribution on $[4,10]$, it has a PDF of $f(x)=1/6 "for x in [4,10]"$ and zero otherwise.
(a) $ P(x < 6) = P(4 < X <6) = (6-4)/(6)=1/3 $\
(b)$
P(|X- 7| > 1) &= P(X < 6)+P(X> 8)\ &=P(4<X<6)+P(8<X<10)\ &=1/3+1/3=2/3
$
(c) $
P(X <t | X < 6) &= (P(X < t, X < 6))/(P(X < 6)) "for" 4 <= t <=6\
&= P(X < t)/ P(X < 6) \
&= (P(4<=X<t))/(P(X < 6))\
&= (t-4)/(6-4) = #rect(inset: 9pt)[ $display((t-4)/2)$ ]
$
== ex 3.5
possible values correspond to jumps in cdf, and the pmf is the size of the jump.
$
p_x (1) = 1/3\
p_x (4/3) = 1/2 - 1/3 = 1/6\
p_x (3/2) = 3/4 - 1/2 = 1/4\
p_x (9/5) = 1 - 3/4 = 1/4
$
== ex 3.7
For a continuous random variable, the cdf is described as $F(x) = P(X <= x) = integral_(- infinity)^(x)f(t)dif t$\
(a)$
P(a <= X <= b) = F(b) - F(a) = 1\
=> a <= sqrt(2); b >= sqrt(3) \
"smallest interval" = [sqrt(2), sqrt(3)]
$
(b) for a continuous R.V., the pmf at any point is zero, so $P(X = 1.6)=0$\
(c)$ P(1 <= X <= 3/2) &= F(3/2)-F(1)\ &= ((3/2)^2-2)-0 \
&= #rect(inset: 8pt)[$ display(1/4)$ ] $
(d) Noticing the fact that the cdf is continuous except at points $sqrt(2), sqrt(3)$, the pdf could be retrieved by:$
f(x)= F'(x)= cases( 2x "if sqrt(2) <= x < sqrt(3)", 0 "o.w.",)
$
== ex 3.9
pdf was given by $
f(x) = cases(3 e^(-3x) "if x > 0", 0 "o.w.",)
$
(a) for a continuous R.V., expectation is calculated by $
E(X) = integral_(-infinity)^(+infinity) x f(x) dif x = integral_0^(+infinity) 3x e^(-3x) dif x = 1/3
$
(b)$ E(e^(2X))= integral_(-infinity)^(+infinity) e^(2x) f(x) dif x = integral_0^(+infinity) 3 e^(-x) dif x = 3 $
== ex 3.16
$ E(Z) = integral_(-infinity)^(+infinity) x f(x) dif x &= integral_1^2 1/7 x dif x + integral_(5)^(7) 3/7 x dif x\ &= 75/14
$
$
"Var"(Z) = E(Z^2) - (E(Z))^2
&= integral_1^2 1/7 x^2 dif x + integral_(5)^(7) 3/7 x^2 dif x - (75/14)^2 \ &= 1633/588
$
== ex 3.23
(a) Possible values of profit are $ 0 - 1 = -1"(won nothing)"; 2 - 1 = 1"(80 ppl who won $2)";\ 100 - 1 = 99"(19 ppl who won $100)"; 7000 - 1 = 6999 "(1 who won $7000)". $ \ We can then represent the pmf as $
P(X = -1) = (10000- 100)/ 10000 = 99/100\
P(X = 1) = (80)/ 10000 = 1/125\
P(X = 99) = (19)/ 10000\
P(X = 6999) = (1)/ 10000\
$
(b)$
P(X>= 100) = P(X = 6999) = 1/10000
$
(c)For X as a discrete R.V., the expectation is $
E(X)= sum_(k)k P(X = k)= -1 times 99/100 + 1/125 + 99 times 19/10000 + 6999 times 1/10000 = -0.094\
E(X^2)= sum_(k)k^2 P(X = k)= (-1)^2 times 99/100 + 1^2 times 1/125 + 99^2 times 19/10000 + 6999^2 times 1/10000 = 4918.22\
=> "Var"(X) = E(X^2) - (E(X))^2 = 4918.22 - (-0.094)^2 = 4918.22 - 0.008836 = 4918.211
$
== ex 3.28
(a) X has possible values 1, 2, and 3. This is sampling without replacement, where $
"prize in the first opened box: " P(X = 1) = 3/5\
"prize in the second opened box: " P(X = 2) = 2/5 times 3/4 = 3/10\
"prize in the third opened box: " P(X = 3) = 2/5 times 1/4 times 3/3 = 1/10\
$
(b)$E(X) = 1 dot 3/5 + 2 dot 3/10 + 3 dot 1/10 = 3 slash 2$\
(c)$E(X^2)= 1^2 dot 3/5 + 2^2 dot 3/10 + 3^2 dot 1/10 = 27/10 $\
$=> "Var"(X) = E(X^2)- E(X)^2 = 9 slash 20$
(d) We represent the loss or gain of the game as $Y = g(X)$.
$
Y = g(X) = cases(100 " if X = 1", 0 " if X = 2", -100 " if X = 3".)\
E(Y) = 100 times 3/5 + 0 times 3/10 - 100 times 1/10 = 50 \
$
== ex 3.31
(a)
$
1 = integral_(-infinity)^(+infinity) f(x) dif x = integral_(1)^(infinity) c x^(-4)dif x = c/3\ => c =3
$
(c)$
P(0.5 < X < 2) = P(1 <= X < 2) = integral_(1)^(2) 3x^(-4) dif x = 7/8
$
(e)
$
&"when x >= 1:" &&F(x)= P(X <= x)= integral_(1)^(x) 3t^(-4) dif t = 1 - x^(-3)\
&"when x < 1:" &&F(x)= P(X <= x)= 0\
$
(f)
$
E(x) = integral_(-infinity)^(+infinity) x f(x) dif x = integral_(1)^(infinity) 3x^(-3) dif x = 3/2\
$
(g)
$
E(5X^2 + 3X) = integral_(1)^(infinity) (5x^2 + 3 x) dot 3x^(-4) dif x = 39/2
$
== ex 3.47
#image("hw4_trig.png", width: 30%)
(a)when X is below 0, cdf is 0; when X is above 30, cdf is 1.\ when X is between 0 and 30, the set of points in the triangle with $X <= x$ would have vertices $(0,0),(x,0), (x,2x/3)$, with area $1/3 x^2$, while the original triangle has size 300.
$
F(X) = cases(0 "X <0", (1/3x^2)slash 300=x^2/900 "0 <= x < 30", 1 "x >= 30".)\
$
(b) Since F(x) is continuous and differentiable (except at $x = 30$), the pdf is the derivative of F(x), which is $
f(x) = cases(2x/900=x/450 "0 <= x < 30", 0 "o.w.",)
$
(c) Notice X is continuous,$
E(X)= integral_(-infinity)^(+infinity) x f(x) dif x = integral_0^(30) x^2/450 dif x = 20\
$
|
|
https://github.com/howardlau1999/sysu-thesis-typst | https://raw.githubusercontent.com/howardlau1999/sysu-thesis-typst/master/README.md | markdown | MIT License | # sysu-thesis-typst
> [!WARNING]
> **本项目已暂停维护,请使用 <https://github.com/sysu/better-thesis> 仓库提供的模板**
[Gitee 仓库镜像](https://gitee.com/howardlau/sysu-thesis-typst)(自动同步 [GitHub 主仓库](https://github.com/howardlau1999/sysu-thesis-typst),仅供只读访问)
中山大学学位论文 Typst 模板,项目基于 [PKUTHSS-Typst](https://github.com/lucifer1004/pkuthss-typst) 重构。
当前还未完全符合学位论文格式要求,欢迎同学们贡献代码!模板交流 QQ 群:797942860([点此直接加入](https://jq.qq.com/?_wv=1027&k=m58va1kd))
输出成品文件预览(每 6 小时自动更新):[thesis.pdf](https://liuhaohua.com/sysu-thesis-typst/thesis.pdf)
**Q:我不会 LaTeX,可以用这个模板写论文吗?**
**A:完全可以!Typst 是一个比 LaTeX 更简单的排版语言,同时安装更加方便,编译更加快速!**
## 使用方法
**由于 Typst 还处于初期的快速开发阶段,本项目需要使用从源码编译的 Typst 版本才能正常生成 PDF。仓库提供了一键安装脚本,按照使用说明运行即可。**
### Windows 用户
1. [下载本仓库](https://github.com/howardlau1999/sysu-thesis-typst/archive/refs/heads/master.zip),或者使用 `git clone https://github.com/howardlau1999/sysu-thesis-typst` 命令克隆本仓库。
2. 右键 `install_typst.ps1` 文件,选择“用 Powershell 运行”,等待 Typst 安装完成。
3. 根据 [Typst 文档](https://typst.app/docs/),参考 [项目结构](#项目结构) 中的说明,按照你的需要修改论文的各个部分。
4. 双击运行 `compile.bat`,即可生成 `thesis.pdf` 文件。
### Linux/macOS 用户
1. [下载本仓库](https://github.com/howardlau1999/sysu-thesis-typst/archive/refs/heads/master.zip),或者使用 `git clone https://github.com/howardlau1999/sysu-thesis-typst` 命令克隆本仓库。
2. 使用命令行安装 Rust 工具链以及 Typst:
```bash
# 安装 Rust 环境并激活,之前安装过则不需要执行下面这两行
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh -s -- -y
source $HOME/.cargo/env
# 安装 Typst CLI
cargo install --git https://github.com/typst/typst.git typst-cli
# 访问缓慢的话,执行以下命令设置清华镜像源,并从镜像站安装
cat << EOF > $HOME/.cargo/config
[source.crates-io]
replace-with = "tuna"
[source.tuna]
registry = "https://mirrors.tuna.tsinghua.edu.cn/git/crates.io-index.git"
EOF
cargo install --git https://ghproxy.com/https://github.com/typst/typst.git
```
3. 根据 [Typst 文档](https://typst.app/docs/),参考 [项目结构](#项目结构) 中的说明,按照你的需要修改论文的各个部分。
4. 执行 `make` 命令,即可生成 `thesis.pdf` 文件。
## 项目结构
### 主要文件
- `info.typ` 文件中包含论文的基本信息,包括作者、学位、导师、学位论文题目等。
- `custom.typ` 文件中包含论文的自定义设置,包括行距,字体等。
- `thesis.typ` 文件是论文的主体,包含论文的各个章节。在添加或者删除章节文件后,需要同步修改这个文件中的 `#include` 命令。
- `template.typ` 文件是论文的模板,包含论文的各个部分的格式。一般不需要修改。
### 文件夹
- `chapters` 文件夹包含了论文的各个章节文件,你可以自由地增加或删除章节文件,按你自己需要的方式组织文件。在添加或删除文件后,需要同步修改 `thesis.typ` 文件中的 `#include` 命令。
- `templates` 文件夹包含了论文的各个部分的模板文件,你可以自由地增加或删除模板文件,按你自己需要的方式组织文件。在添加或删除文件后,需要同步修改 `template.typ` 文件中的 `#include` 命令。欢迎贡献新的模板文件。
- `functions` 文件夹包含了模板使用到的各种自定义辅助函数,如果你想贡献代码,可以在这个文件夹中添加新的辅助函数。
- `fonts` 文件夹包含了模板使用到的字体文件,你可以按需要添加或者删除字体文件。在添加或删除文件后,可以运行 `typst --font-path fonts fonts` 查看 Typst 检测到的字体文件。
- `bibs` 文件夹包含了论文的参考文献文件,你可以按需要添加或者删除参考文献文件,目录结构没有特殊需求。
- `images` 文件夹包含了论文中使用到的图片文件,你可以按需要添加或者删除图片文件,目录结构没有特殊需求。`vi` 文件夹包含了校徽的矢量图文件。 |
https://github.com/Akida31/anki-typst | https://raw.githubusercontent.com/Akida31/anki-typst/main/typst/src/utils.typ | typst | /// Convert content to a string.
///
/// This function is best effort and lossy.
/// - content (content): Content to convert.
/// -> str
#let to_string(content) = {
if type(content) == str {
content
} else if (int, float, length).contains(type(content)) {
str(content)
} else if content == none {
"none"
} else if content.has("text") {
to_string(content.text)
} else if content.has("children") {
content.children.map(to_string).join("")
} else if content.has("body") {
to_string(content.body)
} else if content == [ ] {
" "
// TODO
// } else {
// panic(content)
}
}
/// Determine whether something is empty.
///
/// - val (content, dict, array, str): #h(0pt)
#let is_empty(val) = if type(val) == content {
val.fields().len() == 0
} else if type(val) == dictionary {
val.keys().len() == 0
} else if type(val) == array {
val.len() == 0 or val.map(is_empty).all()
} else if type(val) == str {
val.len() == 0
} else {
false
}
/// Try to get the plain value from content.
///
/// This function is not lossy and will return `none` if it can't find the plain value.
/// - c (content): Content to convert.
/// -> str, none
#let to_plain(c) = {
if type(c) == str {
c
} else if (int, float).contains(type(c)) {
str(c)
} else if c == [] {
""
} else if type(c) == content {
// equation
if c.fields().keys() == ("block", "body") {
to_plain(c.body)
}
else if c.fields().len() > 1 {
if c.has("children") {
// TODO warn here
c.children.map(to_plain).join("")
} else {
none
}
} else if c.fields().len() == 0 {
if c.func() == [ ].func() {
// space
" "
} else {
""
}
} else {
let val = c.fields().values().first()
to_plain(val)
}
} else if type(c) == array {
let got_non_empty = none
let res = ""
for val in c {
if not is_empty(val) {
if got_non_empty != none {
// TODO warn here
// return none
}
got_non_empty = val
}
res += to_plain(val)
}
res
} else {
none
}
}
/// Assert that val has a valid type.
///
/// If val is not of some type in `valid_tys` this function will panic.
///
/// - ty_name (str): Name of the type, used for `panic`.
/// - val (any): Value to check.
/// - ..valid_tys (array): Array of valid types (e.g. str, int, content).
#let assert_ty(ty_name, val, ..valid_tys) = {
if valid_tys.named().len() != 0 {
panic("can check only for positional types in " + ty_name)
}
if valid_tys.pos().all(ty => type(val) != ty) {
let ty_names = valid_tys.pos().map(ty => str(ty)).join(", ")
panic(ty_name + " must be of type " + ty_names + " but was " + type(val) + ": " + to_string(val))
}
val
}
/// Get the page at which the label with `id` can be found.
///
/// *Panics* if there are multiple labels with `id`.
///
/// - id (str): The label to search for.
/// - id_name (str): Name of the id, used for `panic`.
/// - loc (location): Some location, used for `query`
/// -> int
#let get_label_page(id, id_name, loc) = {
let elems = query(label(id), loc)
if elems.len() == 0 {
return none
}
if elems.len() != 1 {
panic("expected one elem for id " + id_name + " but got " + str(
elems.len(),
) + ". Did you use the same id twice?")
}
elems.at(0).location().page()
}
|
|
https://github.com/hzkonor/bubble-template | https://raw.githubusercontent.com/hzkonor/bubble-template/main/main.typ | typst | #import "template.typ": *
#set outline(title: "Table of contents")
#show: bubble.with(
title: "Bubble template",
subtitle: "Simple and colorful template",
author: "hzkonor",
affiliation: "University",
date: datetime.today().display(),
year: "Year",
class: "Class",
other: ("Made with Typst", "https://typst.com"),
//main-color: "4DA6FF", //set the main color
logo: image("assets/logo.png"), //set the logo
)
// Edit this content to your liking
= Introduction
This is a simple template that can be used for a report.
= Features
== Colorful items
The main color can be set with the `main-color` property, which affects inline code, lists, links and important items. For example, the words highlight and important are highlighted !
- These bullet
- points
- are colored
+ It also
+ works with
+ numbered lists!
== Customized items
Figures are customized but this is settable in the template file. You can of course reference them : @ref.
#figure(caption: [Code example],
```rust
fn main() {
println!("Hello Typst!");
}
```
)<ref>
#pagebreak()
= Enjoy !
#lorem(100) |
|
https://github.com/VoigtSebastian/typst-CV | https://raw.githubusercontent.com/VoigtSebastian/typst-CV/main/main.typ | typst | MIT License | #import "template.typ": *
#show: project.with(
firstname: "Sebastian",
lastname: "Voigt",
// Will be converted to clickable links in the header
github: ("https://github.com/VoigtSebastian", "VoigtSebastian"),
// Will be converted to clickable links in the header
mail: "<EMAIL>",
)
= Education
#entry(from: "2018", to: "2020", type: "M.Sc Computer Science", location: "University A")[
This degree was chosen because of my interest to advance my knowledge in #lorem(20)
]
#entry(from: "2015", to: "2018", type: "B.Sc. Computer Science", location: "University B")[
- Lectures #lorem(4)
- Projects #lorem(4)
- Thesis grade #lorem(4)
]
#entry(from: "2013", to: "2015", type: "School A", location: "City A")[]
#entry(from: "2010", to: "2015", type: "School B", location: "City B")[]
= Projects
// Use githubIconInline to add a clickable link inside of a entry header
#entry(from: "2023", to: "2024", type: githubIconInline("https://github.com/VoigtSebastian/typst-CV", "Typst CV"), location: "Testing new Tech")[
#lorem(20)
]
// Depends on your preference and context, sometimes this looks better without the alignment
#align(bottom)[
#grid(
columns: (45%, 35%, 20%),
[
= Programming languages
#text(weight: "bold")[Scala, Rust] -- professional\
#text(weight: "bold")[Python, Java, C++] -- advanced
],
[
= Languages
#text(weight: "bold")[German] -- first language\
#text(weight: "bold")[English] -- fluent
],
[
= Tools
#text(weight: "bold")[macOS, Linux]\
#text(weight: "bold")[VsCode]
]
)
]
|
https://github.com/okatsn/DatabasePlayground | https://raw.githubusercontent.com/okatsn/DatabasePlayground/main/doc-cloudservice/gemdata-cloud.typ | typst | MIT License | #import "@preview/diagraph:0.2.0": *
/* Concept map graph:
Please help me with a concept map using dot language, saying digraph, to illustrate a concept map where
1. The nodes or elements contains the followings: "Raw GEM data", "Standard GEM data", "SFTP server", "NAS", "Local User", and "Local Drive".
2. Raw GEM data is stored on NAS, and local user can download and upload the raw data via the SFTP protocol.
3. Local user need to convert the raw data into the standard data before any process. The converted standard GEM data is temporarily stored in local drive.
*/
#set text( // Set main text: https://typst.app/docs/reference/text/text/
font: "New Computer Modern",
size: 10pt
)
#set page( // Set page: https://typst.app/docs/reference/layout/page/
paper: "a4",
margin: (x: 1.5cm, y: 2.1cm),
header: align(right)[
地電磁資料雲端儲存架構構想
],
numbering: "1", // numbering the page
)
#set par( // Set paragraph: https://typst.app/docs/reference/layout/par/
justify: true, // Hyphenation will be enabled for justified paragraphs
leading: 0.52em, // The spacing between lines
)
#set heading(numbering: "1.a.") // Numbering heading: https://typst.app/docs/reference/meta/heading/
// https://typst.app/docs/tutorial/advanced-styling/
#show: rest => columns(1, rest)
= 簡介
== 動機與目的
目前工作流程:
#render(read("./current_workflow.dot"))
#render(read("./new_workflow_1.dot"))
=== Julia interface
This is suggested by ChatGPT.
```julia
using Distributed
# Add workers for parallel processing
addprocs(4) # Add the desired number of worker processes
# Load and filter data in parallel
@everywhere using DataFrames
@everywhere using CSV
@time begin
df_files = ["gs://your-bucket-name/your-file-part1.csv",
"gs://your-bucket-name/your-file-part2.csv",
# Add more file paths as needed
]
function load_and_filter(file_path)
return CSV.File(file_path) |>
DataFrame |>
x -> filter(row -> start_date <= row[:datetime] <= end_date, x)
end
dfs = @distributed vcat for file in df_files
load_and_filter(file)
end
merged_df = vcat(dfs...)
end
```
== DuckDB
- #link("https://duckdb.org/why_duckdb.html")[Why DuckDB?]
=== I have a conversation with ChatGPT...
ChatGPT says:
- DuckDB can handle a large number of tables (> 100000) with a large number of rows (e.g., 86400 rows) without any issues.
- DuckDB is optimized for in-memory processing; whether the queried data can be fit into memory should be concerned.
- DuckDB is an embedded database system, you don’t need to install any DBMS server software. You can simply copy the database file to the host machine such as your NAS.
Create the database file:
```bash
# in bash
duckdb /path/to/new/database/file
# The file can have any extension, but common choices are .db, .duckdb, or .ddb.
```
Create a table from a csv file:
```sql
CREATE TABLE table1 (name VARCHAR, ID INTEGER, job VARCHAR);
COPY table1 FROM '/path/to/table1.csv' (FORMAT csv, HEADER true);
```
Query data from tables having the format of "XXXX_YYYYMMDD_YYY", where "XXXX" denotes the station, and "YYY" denotes for observation type, and "YYYYMMDD" is for the datetime of the observation:
```sql
SELECT * FROM YULI_2022*; # select all data from YULI station observed in 2022 of any type
# DuckDB will automatically add null values to the result set for columns that do not exist in a particular table.
SELECT * FROM *_*_Mag WHERE SUBSTR(table_name, 6, 8) BETWEEN '20120101' AND '20221231'; # select data in the time range of 2012 to 2022
# TODO: You should ask for what are good practices for naming a table in order to efficiently query data from different tables
```
It is possible to systematically create your 100000 tables using script:
```julia
using DuckDB
# Set the path to the directory containing the CSV files
csv_dir = "/path/to/csv/files"
# Connect to the DuckDB database
con = DuckDB.DBInterface.connect(DuckDB.DB, ":memory:")
# Loop through the CSV files in the directory
for filename in readdir(csv_dir)
if endswith(filename, ".csv")
# Get the table name from the file name
table_name = splitext(filename)[1]
# Create the table in DuckDB
DuckDB.DBInterface.execute(con, "CREATE TABLE $table_name AS SELECT * FROM read_csv('$csv_dir/$filename')")
end
end
```
== References
Related packages
#list(
[#link("https://github.com/JuliaCloud/AWS.jl")[AWS.jl]],
[#link("https://github.com/rana/GCP.jl")[GCP.jl]],
[#link("https://juliacloud.github.io/GoogleCloud.jl/latest/")[GoogleCloud.jl]]
)
Google cloud
#list(
[#link("https://cloud.google.com/storage/docs/gsutil#should-you-use")[`gsutil`]],
[#link("https://cloud.google.com/storage/docs/discover-object-storage-gcloud")[`gcloud storage` CLI]]
)
Discussions:
#list(
[#link("https://stackoverflow.com/questions/69528312/working-with-google-cloud-storage-in-julia-applications")[Working with google cloud storage in julia applications]]
)
Tutorial using Julia:
- #link("https://pub.towardsai.net/exploring-julia-programming-language-mongodb-1005b79f59f9")[Exploring Julia Programming Language: MongoDB]; #link("https://felipenoris.github.io/Mongoc.jl/stable/")[Mongoc.jl is a MongoDB driver for the Julia Language.]
- Also refer: #link("https://aws.amazon.com/tw/nosql/")[NoSQL]
- #link("https://medium.com/@kurtcaglar777/working-with-databases-in-julia-56685ca7c3cb")[Working with Databases in Julia],
- #link("https://mechanicalrabbit.github.io/DataKnots.jl/stable/")[DataKnots.jl (seems related to NoSQL)]
- #link("https://medium.com/juliazoid/welcome-to-duckdb-3c4e75f50b97")[Julia with DuckDB]
|
https://github.com/RhenzoHideki/dlp2 | https://raw.githubusercontent.com/RhenzoHideki/dlp2/main/relatorio-02/relatorio-02.typ | typst | #import "../typst-ifsc/template/article.typ": article
#show: doc => article(
title: "Somador BCD - 2 dígitos",
subtitle: "DISPOSITIVOS LÓGICOS PROGRAMÁVEIS II (DLP029007 )",
// Se apenas um autor colocar , no final para indicar que é um array
authors:("<NAME>,<NAME>",),
date: "2 de abril de 2024",
doc,
)
= Descrição
\ Muitos painéis de instrumentos usam o formato decimal codificado em binário (BCD), no qual 10 dígitos decimais são codificados usando 4 bits.
\ Durante uma operação de adição BCD, se a soma de um dígito exceder 9, 10 serão subtraídos do dígito atual e um carry será gerado para o próximo dígito.
\ Dessa forma, projete um somador BCD de 2 dígitos que tenha duas entradas de 8 bits, representando dois números BCD de 2 dígitos, e uma saída, que seja um número BCD de 3 dígitos (12 bits).
#pagebreak()
= Resolução
== Diagramação
Foram feitas 2 diagramações para o projeto. Uma com o enfoque no projeto maior , mostrando todos os componentes utilizados , já o segundo diagrama teve como objetivo mostrar o somador utilizado.
#figure(
image("./Figuras/EsquematicaAP1.svg",width:90%),
caption: [
Esquematica do projeto completo \ Fonte: Elaborada pelo autor
],
supplement: "Figura"
);
#figure(
image("./Figuras/somadorBCD-componente.svg",width:90%),
caption: [
componente somador BCD \ Fonte: Elaborada pelo autor
],
supplement: "Figura"
);
Nas figuras é possivel visualizar a maior parte da elaboração da atividade.
A figura 1 demonstra como serão feitas as entradas e saida, por exemplo ambas as entradas são 8 bits utilizando BCD. Existem 2 pares de saídas que são representadas por displays de 7 segmentos para indicar quais valores foram inseridos na entrada , e após a soma existem 3 displays para mostrar o resultado da soma.
Na figura 2 pode ser ver o somador implementado para essa atividade, um somador BCD. Neste é possiveis observar a soma das unidades , uma verificação se existe ou não carry e paralelamente a soma das dezenas com um processo similar.
#pagebreak()
== Implementação somador BCD
Resultados que foram obtidos fazendo o código do somador BCD.
\ Como projetado no diagrama , a implementação do código seguiu de forma similar. A utilização de 2 somadores BCDs ao invés de 1 somador BCD foi feita pois facilitava a adaptação do código.
#figure(
image("./Figuras/rtlviewer-ap1.png",width:100%),
caption: [
rtl viewer projeto completo \ Fonte: Elaborada pelo autor
],
supplement: "Figura"
);
#figure(
image("./Figuras/modelsim-adder.png",width:100%),
caption: [
modelsim somador bcd \ Fonte: Elaborada pelo autor
],
supplement: "Figura"
);
Na figura 4 é possivel observar o teste feito para um somador BCD
#figure(
image("./Figuras/modelsim-ap1.png",width:100%),
caption: [
modelsim projeto completo \ Fonte: Elaborada pelo autor
],
supplement: "Figura"
);
Na figura 5 foi feito o teste completo , mostrando todas as entradas e saidas do sistema em diferentes situações.
== Implementação somador bin2bcd
Como pedido pelo exercício , foi feita uma segunda versão que não utilizava BCD como entrada , mas sim entradas binárias.
#figure(
image("./Figuras/rtlviewer-ap2.png",width:100%),
caption: [
rtl viewer projeto completo \ Fonte: Elaborada pelo autor
],
supplement: "Figura"
);
A figura 6 mostra a abordagem feita para o somador de binários.
#figure(
image("./Figuras/modelsim-ap2.png",width:100%),
caption: [
modelsim projeto completo \ Fonte: Elaborada pelo autor
],
supplement: "Figura"
);
A figura 7 mostra o teste feito para certificar que o código está funcionando da forma esperada.
== Comparação
Apesar do somador bin2bcd ter menos componentes pensados na sua implementação, utilizando o relatórios gerados pela compoilação no Quartus 2 , o total de números de elementos lógicos utilizados for de 245 / 114,480 contra os 108 / 114,480 utilizados no somador BCD .
O total de elementos logícos mostra o quão custoso é a conversão de binário para BCD , uma vez que se baseia em calculos como divisão e multiplicação para poder separar a unidade , dezena e centena do valor.
Além disso é possivel analisar a diferença através da analise de tempo , o somador BCD se provou mais rapido que o somador bin2bcd. O somador BCD levou 3.823 ns enquanto o somador bin2bcd levou 11.589 , mais que o dobro de tempo.
#align(center)[
#table(
columns: (auto,auto,auto,),
align: center,
[],[somador BCD],[somador bin2bcd],
[Área(LE)],[108/114480],[245/114480],
[Delay sem IO (ns)],[3.823],[11.589],
);
]
= Conclusão
Por fim a atividade apresentou como a abordagem de diferentes formas pode impactar a eficiencia de um simples somador.Foram implementados dois somadores BCD em paralelo, permitindo a correta soma dos dígitos individuais e o gerenciamento adequado dos carries entre os dígitos. Além disso, uma implementação alternativa utilizando um somador binário para BCD foi realizada, oferecendo uma opção para lidar com números binários em vez de BCD. A comparação entre os dois métodos revelou a complexidade adicional envolvida na conversão de binário para BCD, destacando a eficiência do projeto BCD original. Em resumo, o projeto demonstrou soluções bem elaboradas para a adição de números BCD de 2 dígitos, mostrando um entendimento sólido dos conceitos de lógica digital e dispositivos programáveis. |
|
https://github.com/rqy2002/typst-experiment | https://raw.githubusercontent.com/rqy2002/typst-experiment/main/chn.typ | typst | // 中文排版的字体使用.
#let setCJKmainfont(font : none,
boldFont : none,
italicFont : none,
latinFont: none,
body) = {
let withLatin(f) = { if latinFont == none { f } else { (latinFont, f) } }
set text(font: withLatin(font)) if font != none
show strong: set text(font: withLatin(boldFont), style: "normal") if boldFont != none
show emph: set text(font: withLatin(italicFont), style: "normal") if italicFont != none
body
}
|
|
https://github.com/fenjalien/metro | https://raw.githubusercontent.com/fenjalien/metro/main/tests/unit/qualifier-mode/test.typ | typst | Apache License 2.0 | #import "/src/lib.typ": unit, metro-setup
#set page(width: auto, height: auto)
#unit("kilogram of(pol) squared per mole of(cat) per hour")
#unit("kilogram of(pol) squared per mole of(cat) per hour", qualifier-mode: "bracket")
#unit("deci bel of(i)", qualifier-mode: "combine")
#metro-setup(qualifier-mode: "phrase", qualifier-phrase: sym.space)
#unit("kilogram of(pol) squared per mole of(cat) per hour")
#unit("kilogram of(pol) squared per mole of(cat) per hour", qualifier-phrase: [ of ])
|
https://github.com/typst/packages | https://raw.githubusercontent.com/typst/packages/main/packages/preview/songb/0.1.0/songb.typ | typst | Apache License 2.0 | // SPDX-FileCopyrightText: 2024 <NAME> <<EMAIL>>
//
// SPDX-License-Identifier: CC0-1.0
#import "./song.typ"
#let chorus = song.chorus
#let verse = song.verse
#let index-by-letter = song.index-by-letter
#let song = song.song
#import "./chord.typ"
#let chord-display = chord.chord-display
#let chord = chord.chord
#import "./autobreak.typ"
#let autobreak = autobreak.autobreak
|
https://github.com/yaoyuanArtemis/resume | https://raw.githubusercontent.com/yaoyuanArtemis/resume/main/cv_1.typ | typst | Do What The F*ck You Want To Public License | #import "template.typ": *
#import "data.typ": *
#show: project.with(
title: name,
author: author,
)
#section[ #titleemj("curl.svg") #selftitle ]
#self
#section[ #titleemj("mortarboard.svg") #edutitle ]
#edu
#section[ #titleemj("seedling.svg") #techtitle ]
#tech
#section[ #titleemj("telescope.svg") #projecttitle ]
#projectexperience
#section[ #titleemj("darts.svg") #activitytitle ]
#activity
#section[ #titleemj("football.svg") #hobbiestitle ]
#hobbies
|
https://github.com/XcantloadX/TypstMomoTalk | https://raw.githubusercontent.com/XcantloadX/TypstMomoTalk/main/README.md | markdown | # TypstMomoTalk
TypstMomoTalk 是 Typst 的一个模板,专用于制作碧蓝档案的 MomoTalk 聊天记录图片。
相较于其他编辑器的优点:
* 高度可自定义,易于扩展修改;
* 类似于 Markdown 的纯文本式编辑体验;
* 可导出为 SVG 与 PDF 矢量图。
相较于其他编辑器的缺点:
* 使用门槛较高。
## 快速开始 QuickStart
需要先[安装 Typst](https://github.com/typst/typst?tab=readme-ov-file#installation)。
然后
```bash
git clone xxxxxx
cd TypstMomoTalk
typst watch ./examples/quickstart.typ --root . --font-path ./momotalk/assets/
```
打开 `examples/quickstart.pdf` 查看结果。
## 教程 Tutorial
WIP
## 待办 Todo
- 更多消息
- [ ] 支持行动消息的人物注明
- [x] 支持文件消息
- [ ] 支持分享卡片消息
- [ ] 支持转账/红包消息
- 其他
- [ ] i18n 支持
- [ ] 优化分页逻辑
## 版权 Copyright
本项目中所有游戏素材图片均来自 [SchaleDB](https://schale.gg/),版权归上海悠星网络科技有限公司(Yostar)与韩国乐线股份有限公司(NEXON)所属。
其他图标素材来自 [iconfont](https://www.iconfont.cn/) 网站,版权归 iconfont 及原作者所属。
## 免责声明 Disclaimer
TypstMomoTalk 工具与悠星、NEXON无关。
本工具仅用于个人娱乐和学习目的,不得用于商业目的。
用户在使用 TypstMomoTalk 时必须遵守适用的法律法规,造成的任何后果都与制作者无关。 |
|
https://github.com/jonaspleyer/peace-of-posters | https://raw.githubusercontent.com/jonaspleyer/peace-of-posters/main/docs/content/examples/getting_started/main.typ | typst | MIT License | // We begin by importing the `peace-of-posters` package
#import "../../../../lib.typ": *
// Next, we specify some general formatting settings.
#set page("a0", margin: 1cm)
#set-poster-layout(layout-a0)
#set text(font: "Arial", size: layout-a0.at("body-size"))
#let box-spacing = 1.2em
#set columns(gutter: box-spacing)
#set block(spacing: box-spacing)
#update-poster-layout(spacing: box-spacing)
// After that we choose a predefined theme.
#set-theme(uni-fr)
// We can adjust any property of the theme via the `update-theme` method of the themes module.
// The poster consists of multiple boxes which can be adjusted by specifying column layouts.
// `peace-of-posters` provides some predefined methods for creating a title box, regular column boxes in between, a bottom box and a bibliography to finish it off.
#title-box(
[`peace-of-posters`],
subtitle: [Scientific Posters for any Occasion],
authors: [<NAME>#super("1")],
institutes: [#super("1")Freiburg Center for Data-Analysis and Modelling],
keywords: [Science, Poster, Typesetting, `typst`],
)
#columns(2, [
#column-box(
heading: [Left Column],
)[
Multi-Column layouts are supported out of the box by typsts `columns()` method.
]
#colbreak()
#column-box(
stretch-to-next: true,
)[
We can also choose to not have a title for our boxes.
]
])
#column-box()[
We can update the layout of the boxes in the middle of the document.
]
#common-box(heading: [only heading])
#columns(2, [
#colbreak()
#column-box(
heading: [Stretching],
stretch-to-next: true
)[
And stretch boxes to the next lowest box (or bottom of the page)
]
])
#bottom-box()[
Align them to the bottom.
]
|
https://github.com/soul667/typst | https://raw.githubusercontent.com/soul667/typst/main/PPT/MATLAB/touying/docs/i18n/zh/docusaurus-plugin-content-docs/current/themes/metropolis.md | markdown | ---
sidebar_position: 2
---
# Metropolis 主题

这个主题的灵感来自 <NAME> 创作的 [Metropolis beamer](https://github.com/matze/mtheme) 主题,由 [Enivex](https://github.com/Enivex) 改造而来。
这个主题美观大方,很适合日常使用,并且你最好在电脑上安装 Fira Sans 和 Fira Math 字体,以取得最佳效果。
## 初始化
你可以通过下面的代码来初始化:
```typst
#import "@preview/touying:0.2.1": *
#let s = themes.metropolis.register(s, aspect-ratio: "16-9", footer: self => self.info.institution)
#let s = (s.methods.info)(
self: s,
title: [Title],
subtitle: [Subtitle],
author: [Authors],
date: datetime.today(),
institution: [Institution],
)
#let s = (s.methods.enable-transparent-cover)(self: s)
#let (init, slide, slides, title-slide, new-section-slide, focus-slide, touying-outline, alert) = utils.methods(s)
#show: init
#show strong: alert
```
其中 `register` 接收参数:
- `aspect-ratio`: 幻灯片的长宽比为 "16-9" 或 "4-3",默认为 "16-9"。
- `header`: 显示在页眉的内容,默认为 `states.current-section-title`,也可以传入形如 `self => self.info.title` 的函数。
- `footer`: 展示在页脚的内容,默认为 `[]`,也可以传入形如 `self => self.info.author` 的函数。
- `footer-right`: 展示在页脚右侧的内容,默认为 `states.slide-counter.display() + " / " + states.last-slide-number`。
- `footer-progress`: 是否显示 slide 底部的进度条,默认为 `true`。
并且 Metropolis 主题会提供一个 `#alert[..]` 函数,你可以通过 `#show strong: alert` 来使用 `*alert text*` 语法。
## 颜色主题
Metropolis 默认使用了
```typst
#let s = (s.methods.colors)(
self: s,
neutral-lightest: rgb("#fafafa"),
primary-dark: rgb("#23373b"),
secondary-light: rgb("#eb811b"),
secondary-lighter: rgb("#d6c6b7"),
)
```
颜色主题,你可以通过 `#let s = (s.methods.colors)(self: s, ..)` 对其进行修改。
## slide 函数族
Metropolis 主题提供了一系列自定义 slide 函数:
```typst
#title-slide(extra: none, ..args)
```
`title-slide` 会读取 `self.info` 里的信息用于显示,你也可以为其传入 `extra` 参数,显示额外的信息。
---
```typst
#slide(
repeat: auto,
setting: body => body,
composer: utils.side-by-side,
section: none,
subsection: none,
// metropolis theme
title: auto,
footer: auto,
align: horizon,
margin: (top: 3em, bottom: 1em, left: 0em, right: 0em),
padding: 2em,
)[
...
]
```
默认拥有标题和页脚的普通 slide 函数,其中 `title` 默认为当前 section title,页脚为您设置的页脚。
---
```typst
#focus-slide[
...
]
```
用于引起观众的注意力。背景色为 `self.colors.primary-dark`。
---
```typst
#new-section-slide(short-title: auto, title)
```
用给定标题开启一个新的 section。
## `slides` 函数
`slides` 函数拥有参数
- `title-slide`: 默认为 `true`。
- `outline-slide`: 默认为 `true`。
- `outline-title`: 默认为 `[Table of contents]`。
可以通过 `#show: slides.with(..)` 的方式设置。
```typst
#import "@preview/touying:0.2.1": *
#let s = themes.metropolis.register(s, aspect-ratio: "16-9", footer: self => self.info.institution)
#let s = (s.methods.info)(
self: s,
title: [Title],
subtitle: [Subtitle],
author: [Authors],
date: datetime.today(),
institution: [Institution],
)
#let s = (s.methods.enable-transparent-cover)(self: s)
#let (init, slide, slides, title-slide, new-section-slide, focus-slide, touying-outline, alert) = utils.methods(s)
#show: init
#show strong: alert
#show: slides
= Title
== First Slide
Hello, Touying!
#pause
Hello, Typst!
```

## 示例
```typst
#import "@preview/touying:0.2.1": *
#let s = themes.metropolis.register(s, aspect-ratio: "16-9", footer: self => self.info.institution)
#let s = (s.methods.info)(
self: s,
title: [Title],
subtitle: [Subtitle],
author: [Authors],
date: datetime.today(),
institution: [Institution],
)
#let s = (s.methods.enable-transparent-cover)(self: s)
#let (init, slide, title-slide, new-section-slide, focus-slide, touying-outline, alert) = utils.methods(s)
#show: init
#show strong: alert
#title-slide(extra: [Extra])
#slide(title: [Table of contents])[
#touying-outline()
]
#slide(title: [A long long long long long long long long long long long long long long long long long long long long long long long long Title])[
A slide with some maths:
$ x_(n+1) = (x_n + a/x_n) / 2 $
#lorem(200)
]
#new-section-slide[First section]
#slide[
A slide without a title but with *important* infos
]
#new-section-slide[Second section]
#focus-slide[
Wake up!
]
// simple animations
#slide[
a simple #pause dynamic slide with #alert[alert]
#pause
text.
]
// appendix by freezing last-slide-number
#let s = (s.methods.appendix)(self: s)
#let (slide, new-section-slide) = utils.methods(s)
#new-section-slide[Appendix]
#slide[
appendix
]
```
|
|
https://github.com/Error-418-SWE/Documenti | https://raw.githubusercontent.com/Error-418-SWE/Documenti/src/1%20-%20Candidatura/Verbali/Interni/25-10-23/25-10-23.typ | typst | ERROR\_418 \
Verbale 25/10/23
#figure(
align(center)[#table(
columns: 2,
align: (col, row) => (left,left,).at(col),
inset: 6pt,
[Mail:],
[<EMAIL>],
[Redattori:],
[<NAME>, <NAME>],
[Verificatori:],
[<NAME>, <NAME>, <NAME>],
[Amministratori:],
[<NAME>, <NAME>],
[Destinatari:],
[<NAME>, <NAME>],
)]
)
#figure(
align(center)[#table(
columns: 2,
align: (col, row) => (center,center,).at(col),
inset: 6pt,
[Inizio Meeting: 14:30 Fine Meeting: 16:15 Durata: 1:45h],
[],
[Presenze:],
[],
)]
)
#block[
#figure(
align(center)[#table(
columns: 5,
align: (col, row) => (center,center,center,center,center,).at(col),
inset: 6pt,
[Nome], [Durata Presenza], [], [Nome], [Durata Presenza],
[Antonio],
[1:45h],
[],
[Alessio],
[1:45h],
[Riccardo],
[1:45h],
[],
[Giovanni],
[1:45h],
[Rosario],
[1:45h],
[],
[Silvio],
[1:45h],
[Mattia],
[1:45h],
[],
[],
[],
)]
)
]
Ordine del giorno:
- discussione in merito al software per la gestione del progetto;
- discussione riguardante l’organizzazione della repo di github;
- inizio stesura del file \"valutazione dei capitolati\".
= Project Manager Software
<project-manager-software>
Abbiamo aperto la riunione discutendo dell’introduzione di un software
per la gestione di progetto. Ci siamo confrontati e abbiamo ristretto le
possibilità a tre opzioni: Jira, Youtrack, Linear. Giovanni comincia a
studiare Youtrack per vedere se è possibile integrarlo al nostro
progetto o se richiede troppo lavoro.
= Organizzazione Repository
<organizzazione-repository>
Per quanto riguarda la repository abbiamo deciso, per una questione di
ordine, di dividere in due il contenuto: nella repo \"Documenti\"
abbiamo deciso di inserire tutti i documenti che non riguardano
direttamente il codice e le sue funzionalità, come verbali, valutazione
dei capitolati, ecc. Mentre un’altra repo \(il cui nome è da definire)
conterrà il codice e la sua documentazione.
= Mail delle Aziende/Brainstorming
<mail-delle-aziendebrainstorming>
Abbiamo visionato e commentato le mail inviate alle aziende dove
chiedevamo delucidazioni riguardo a punti poco chiari dei capitolati. Da
qui abbiamo iniziato un brainstorming dove abbiamo analizzato nel
dettaglio pro e contro di ogni capitolato riportando le idee che
andranno poi incorporate nel file \"valutazione\_capitolati\".
= Prossimi Obbiettivi e Incontri
<prossimi-obbiettivi-e-incontri>
Abbiamo infine fissato il meeting con l’azienda Sanmarco Informatica per
chiarire alcuni punti riguardo il loro capitolato, un meeting interno
per discutere un modo comune di lavorare con git e poi suddiviso il
lavoro in vista dell’assegnazione degli appalti.
|
|
https://github.com/chendaohan/bevy_tutorials_typ | https://raw.githubusercontent.com/chendaohan/bevy_tutorials_typ/main/05_resources/resources.typ | typst | #set page(fill: rgb(35, 35, 38, 255), height: auto, paper: "a3")
#set text(fill: color.hsv(0deg, 0%, 90%, 100%), size: 22pt, font: "Microsoft YaHei")
#set raw(theme: "themes/Material-Theme.tmTheme")
= 1. 资源
资源允许你存储某种类型的单个全局实例。使用它来获取你应用程序中全局的数据,例如设置。
要创建一个资源类型,只需要定义一个 Rust 结构体或枚举,并派生 Resource Trait ,类似于组件和事件。
```Rust
#[derive(Resource)]
struct Score(f32);
```
资源每种类型只能有一个实例。如果你需要多个,请使用实体/组件。
Bevy 有许多内置资源,你可以使用这些内置资源来访问引擎的各种功能。它们的工作方式与你自定义的资源完全相同。具体的内置资源,使用到时再说。
= 2. 访问资源
要在系统中访问资源,请使用 ```Rust Res / ResMut```。
```Rust
fn my_system(
// 不可变
resource_1: Res<ResourceOne>,
// 可变
mut resource_2: ResMut<ResourceTwo>,
// 可能不存在的不可变
resource_3: Option<Res<ResourceThree>>,
// 可能不存在的可变
mut resource_4: Option<ResMut<ResourceFour>>,
) {
// 游戏逻辑
}
```
= 3. 管理资源
如果你需要在运行时创建/删除资源,可以使用 ```Rust Commands```:
```Rust
fn system_1(mut commands: Commands) {
// 初始化资源
commands.init_resource::<ResourceOne>();
// 插入资源
commands.insert_resource(ResourceTwo);
// 删除资源
commands.remove_resource::<ResourceThree>();
}
```
在 App 上初始化和插入资源,这样做可以保证资源一开始就存在。
```Rust
App::new()
// 初始化资源
.init_resource::<ResourceOne>()
// 插入资源
.insert_resource(ResourceTwo)
.run()
```
= 4. 资源初始化
实现了 ```Rust FromWorld``` Trait 的资源可以被初始化,实现了```Rust Default``` Trait 的资源将自动实现```Rust FromWorld``` Trait。
```Rust
#[derive(Resource, Default)]
struct ResourceOne {
field_1: f32,
field_2: u32,
}
#[derive(Resource)]
struct ResourceTwo {
field_1: f32,
field_2: i32,
}
impl FromWorld for ResourceTwo {
fn from_world(world: &mut World) -> Self {
Self {
field_1: 0.,
field_2: 0,
}
}
}
``` |
|
https://github.com/kotfind/hse-se-2-notes | https://raw.githubusercontent.com/kotfind/hse-se-2-notes/master/android/lectures/main.typ | typst | #import "/utils/template.typ": conf
#import "/utils/datestamp.typ": datestamp
#show: body => conf(
title: "Android",
subtitle: "Лекции",
author: "<NAME>, БПИ233",
year: [2024--2025],
body,
)
#datestamp("2024-09-14")
#include "./2024-09-14.typ"
|
|
https://github.com/sw1sh/QuantumPotatoChips | https://raw.githubusercontent.com/sw1sh/QuantumPotatoChips/main/typst/main.typ | typst | #import "template.typ": arxiv
#import "@preview/physica:0.9.3": ket
#import "@preview/subpar:0.1.1"
#show: arxiv.with(
title: "Quantum Potato Chips",
authors: ((
(name: "<NAME>", email: "<EMAIL>", affiliation: text([Wolfram Institute, USA#footnote[<EMAIL>]\ Wolfram Research, USA])
//, orcid: "0000-0000-0000-0000"
),
(name: "<NAME>", email: "<EMAIL>", affiliation: "Wolfram Research South America, Peru"),
(name: "<NAME>", email: "<EMAIL>", affiliation: "Wolfram Research South America, Peru")),(
(name: "<NAME>", email: "<EMAIL>", affiliation: "Wolfram Research, USA"),
(name: "<NAME>", email: "<EMAIL>", affiliation: "Wolfram Research, USA"),
)),
abstract: [We examine qubit states under symmetric informationally-complete measurements, representing state vectors as probability 4-vectors within a 3-simplex in $bb(R)^4$. Using geometric transformations, this 3-simplex is mapped to a tetrahedron in $bb(R)^3$. A specific surface within this tetrahedron allows for the separation of probability vectors into two disjoint 1-simplices.//, corresponding to the most classical quantum states.
The intersection of this surface with the insphere identifies a "quantum potato chip" region, where probability 4-vectors reduce to two separable 2-vectors. States within this region can be fully reconstructed using only two given projective measurements, a feature not found elsewhere in the state space.],
// keywords: ("First keyword", "Second keyword", "etc."),
date: [#datetime.today().display()],
)
#place(
top + right,
image("images/Pringles.png", height: 15%),
dx: 5em,
dy: -5em
)
#set heading(numbering: "1.")
#let bib(key, text) = {
cite(key, supplement: text, style: "minimal.csl")
}
#let float(content) = {
place(
auto,
scope: "parent",
float: true,
content
)
}
#let eqcode(body, label, code) = {
block(
below: -0.5em,
ref(supplement: [#body #label], code)
)
}
// #figure(
// image("PotatoChip.png"),
// caption: [@chipPlot3D[Potato chip]],
// numbering: none
// )
#columns(1)[
= Introduction
A qubit in a normalized quantum state can be expressed in the basis ${bb(I), sigma_x, sigma_y, sigma_z}$ as a 4-vector $1/2{1, x, y, z}$, with ${x,y,z} in bb(R)^3$. The condition $abs(arrow(r))<=1$ ensures that the density matrix is positive semi-definite, corresponding to a _physical_ state. Since the first component of the 4-vector is fixed, the state can be fully described by the Bloch vector $arrow(r) = {x, y, z}$ in $bb(R)^3$. Alternatively, in an appropriate basis, the qubit state can be represented as a probability 4-vector ${ p_1, p_2, p_3, p_4 }$ with $sum_i p_i = 1$ and $p_i >= 0$, known as the probability phase-space representation. Symmetric Informationally Complete Positive Operator-Valued Measures (SIC-POVMs) are optimal for such representations due to their symmetry and informational completeness
//#####
//Consider a qubit in a generic normalized state#footnote[In this paper, we will focus only on quantum theory in the discrete vector space.]. In the basis ${ bb(I) , sigma_x , sigma_y , sigma_z }$, the qubit state is described by the 4-vector $1 / 2 { 1 , x , y , z }$ with $r_(x , y , z) in bb(R)$. The condition $abs(arrow(r)) <= 1$ guarantees that the state can be a _physical_ state (i.e., the density matrix is positive semi-definite). Since the first element of the 4-vector is 1/2, one can drop it, and thus the quantum state can be described in $bb(R)^3$ by the Bloch vector $arrow(r)={x,y,z}$, given the mentioned basis. Alternatively, in a proper basis, a qubit state (i.e. the state vector) can be represented by a _probability_ 4-vector as ${ p_1 , p_2 , p_3 , p_4 }$ with $sum_i p_i = 1$ and $p_i gt.eq 0$. This is what we refer to as the probability phase-space representation, where quantum state (or any quantum process such as operator, measurement or channel) can be represented by probability vectors. In this regard, Symmetric Informationally Complete Positive Operator-Valued Measures (SIC-POVMs) are ideal choice due to their symmetry and optimality
@renes2004symmetric @saraceno2017phase @fuchs2017sic @appleby2017introducing @scott2010symmetric @appleby2014symmetric.
//#bib(<renes2004symmetric>, [[1-6]]).
//########
// @tab:quantum_povm illustrates the probability phase-space representation of various quantum states using the Tetrahedron SIC-POVM #footnote[Tetrahedron SIC-POVM elements are given by
// ${{1 / 2, 0},{ 0, 0}}$,
// ${{ 1 / 6, frac(i - √3, √2 (3 i + 3 √3))},{- frac(i √2, 3 i + 3 √3), 1 / 3}}$, ${{ 1 / 6, frac(i + √3, √2 (3 i - 3 √3))},{ frac(i √2, 3 (- i + √3)), 1 / 3}}$,
// and ${{ 1 / 6, frac(1, 3 √2)},{ frac(1, 3 √2), 1/3}}$ and Bloch vectors of its projectors by $(0,0,1)$, $(-√2/3,-√(2/3),-1/3)$, $(-√2/3,√(2/3),-1/3)$, and $((2√2)/3,0,-1/3)$, respectively.].
That said, the phase-space representation has historically been associated with bases such as Wigner, Wootters, and Gell-Mann @Wootters1986-cq@Feynman1987-vj@gell-mann, which require a quasi-probability approach (i.e., $sum_i p_i = 1$ but some $p_i$ can be negative). In this paper, we focus primarily on POVMs, though we also address the quasi-probability case.
For a qubit state in a SIC-POVM, the probability vector is confined within a simplex embedded in $bb(R)^4$ space. Through appropriate geometric transformations, this 4D object can be linearly projected onto a tetrahedron in $bb(R)^3$ (see @fig:rotation). Two key questions arise. From probability theory perspective, among all the points in the tetrahedron, which ones can be reduced to a disjoint pair of 1-simplices (two line segments), representing a separable probability distribution of uncorrelated binary random variables? From the quantum theory perspective, which points within the tetrahedron correspond to _physical_ quantum states? With answers to these questions in hand, we will be able to find quantum states that correspond to two disjoint probability distributions. This is the core idea of quantum potato chips that we discuss in this paper.
We will show that for states within the quantum potato chips, it is possible to reconstruct the entire state from only two independent projective measurements. Thus we refer to quantum potato chips as the informationally-minimal states, because their probability space can be reduced into two disjoint separable ones. This result embeds classical probability structures within quantum state representations, offering a novel intersection between classical and quantum probabilistic frameworks. An immediate consequence of the existence of these states is that any classical problem with two binary variables can be mapped into qubits. Future work will examine applications and foundational implications in more detail.
*Note for the readers*: All visualizations and some formulas in this paper have corresponding Wolfram Mathematica code, which can be found at the end. These are accessible by clicking on equations or figure captions, if applicable.
#subpar.grid(
figure(image("images/Rotate2D.png"), caption: [$"2D" arrow "1D"$]),
figure(image("images/Rotate3D.png", width: 90%), caption: [$"3D" arrow "2D"$]),
figure(image("images/Rotate4D.png", width: 90%), caption: [$"4D" arrow "3D"$]),
columns: (1fr, 1fr, 1fr),
align: bottom,
caption: [@code:rotation[Simplex projection to its lower dimension by rotation.]],
label: <fig:rotation>
)
// #[
// #set text(9.5pt)
// #show math.equation: set text(size: 8pt)
// #figure(table(
// columns: 4,
// align: (center,center,center,center),
// table.header([#strong[Quantum State];], [#strong[Tetrahedron ($cal(T)$)];],[#strong[Rotated $cal(T)$];], [#strong[Bloch sphere];]
// ),
// table.hline(),
// $ket(0)$, ${1/2, 1/6, 1/6, 1/6}$, ${1/2, - 1/6, -1/6, -1/6}$, table.cell(rowspan: 6)[#image("images/tetrahedron.png",width:55%)],
// $ket(1)$, ${0, 1/3, 1/3, 1/3}$, ${1/2, 1/6, 1/6, 1/6}$,
// $ket(+)=1/√2(ket(0) + ket(1))$, ${1/4, 1/12 (3-√2), 1/12 (3-√2), 1/12 (3+2√2)}$, ${1/2, - 1/(6√2), - 1/(6√2), 1/(3√2)}$,
// $ket(-)=1/√2(ket(0) - ket(1))$, ${1/4, 1/12 (3+√2), 1/12 (3+√2), 1/12 (3-2 √2)}$, ${1/2, 1/(6√2), 1/(6√2) , -1/(3√2)}$,
// $ket(R)=1/√2(ket(0) + i ket(1))$, ${1/4, 1/12 (3-√6), 1/12 (3+√6), 1/4}$, ${1/2, -1/(2√6), 1/(2√6), 0}$,
// $ket(L)=1/√2(ket(0) - i ket(1))$, ${1/4, 1/12 (3+√6), 1/12 (3-√6), 1/4}$, ${1/2, 1/(2√6), -1/(2√6), 0}$
// ),
// caption: [Quantum states and their corresponding probability vectors in the tetrahedron SIC-POVM, the transformed tetrahedron probability vectors, and Bloch sphere representations. The transformation is described by @eq:transformation.]
// ) <tab:quantum_povm>
= Generating Quantum Potato Chips
<generating-quantum-potato-chip>
== SIC-POVM Case
In an Informationally-Complete POVM (IC-POVM), a qubit state is fully described by a probability $4$-vector ${p_1,p_2,p_3,p_4}$ with $sum_i p_i=1$ and $p_i>=0$. The space of probability vectors is the unit standard simplex in $bb(R)^4$, spanned by points ${1, 0, 0, 0}$, ${0, 1, 0, 0}$, ${0, 0, 1, 0}$, and ${0, 0, 0, 1}$. This is a region with the embedding dimension 4 and the geometric dimension 3. Therefore, one can reduce the embedding dimension to 3. Consider the following rotation matrix:
#eqcode($
U_(upright("rot"))(theta) = mat( cos theta, frac(sin theta, sqrt(3)), frac(sin theta, sqrt(3)), frac(sin theta, sqrt(3)); - frac(sin theta, sqrt(3)), 1 / 3 (cos theta + 2), 1 / 3 (cos theta - 1), 1 / 3 (cos theta - 1); - frac(sin theta, sqrt(3)), 1 / 3 (cos theta - 1), 1 / 3 (cos theta + 2), 1 / 3 (cos theta - 1); - frac(sin theta, sqrt(3)), 1 / 3 (cos theta - 1), 1 / 3 (cos theta - 1), 1 / 3 (cos theta + 2)),
$, <eq:gen-rotation>, <code:gen-rotation>)
which represents a 4D rotation by $theta$ in the plane spanned by ${1,1,1,1}$ and ${1,0,0,0}$. For the special case of $theta=pi/3$, one gets:
#eqcode($
U_(upright("rot")) =1/6
mat(3, 3, 3, 3; - 3, 5, - 1, - 1; - 3, - 1, 5, - 1; - 3, - 1, - 1, 5),
$, <eq:transformation>, <code:transformation>)
@eq:transformation transforms a probability vector ${p_1, p_2, p_3, 1 - (p_1 + p_2 + p_3)}$ into ${1/2, -(1+2p_1)/6 + p_2, -(1+2p_1)/6 + p_3, 1/6 (5 - 8 p_1 - 6 p_2 - 6 p_3)}$. With the first element as a constant, one can drop it and reduce the dimension from 4D to 3D.
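This transformation is easy to check numerically; the short snippet below (illustrative Python/numpy, independent of the Mathematica code accompanying the paper) applies @eq:transformation to a random probability 4-vector:

```python
import numpy as np

U = np.array([[ 3,  3,  3,  3],
              [-3,  5, -1, -1],
              [-3, -1,  5, -1],
              [-3, -1, -1,  5]]) / 6                    # eq. (transformation)

p = np.random.default_rng(0).dirichlet(np.ones(4))      # random probability 4-vector
p1, p2, p3 = p[:3]

expected = np.array([1/2,
                     -(1 + 2*p1)/6 + p2,
                     -(1 + 2*p1)/6 + p3,
                     (5 - 8*p1 - 6*p2 - 6*p3)/6])
assert np.allclose(U @ p, expected)                     # first component is always 1/2
print(U @ p)
```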
In a similar manner @eq:transformation transforms the original 3-simplex into a new one spanned by ${1/2, -1/2, -1/2, -1/2}$, ${1/2, 5/6, -1/6, -1/6}$, ${1/2, -1/6, 5/6, -1/6}$, and ${1/2, -1/6, -1/6, 5/6}$. With all first elements as $1/2$, the transformed simplex can be projected into $bb(R)^3$ space, as a tetrahedron spanned by ${-1/2, -1/2, -1/2}$, ${5/6, -1/6, -1/6}$, ${-1/6, 5/6, -1/6}$, and ${-1/6, -1/6, 5/6}$#footnote[Treated as Bloch vectors, normalizing their density matrices, this basis is equivalent to the Feynman basis (up-to rotation and reflection, see @fig:matreshka), which orientation is taken to be the default in this paper.]. We investigate what region of this simplex (i.e. tetrahedron) can be reduced to lower dimensional $1$-simplex, implying the original probability $4$-vector can be written as product of two disjoint probability vectors in lower dimensions.
Take a $1$-simplex spanned by points ${1, 0}$ and ${0, 1}$ (@fig:1-simplex). Any point sampled from this simplex have the form ${ p , 1 - p }$ with $0 <= p <= 1$. Consider two points randomly sampled from this $1$-simplex: ${ p , 1 - p }$ and ${ q , 1 - q }$. Their outer product will be given by:
#eqcode(
$ { p q , p (1 - q) , q (1 - p) , (1 - p) (1 - q) }. $,<eq:kp-vector>,<code:kp-vector> )
@eq:transformation transforms @eq:kp-vector into:
#eqcode(
$
{1/2,-1/6+p-(4p q)/3,-1/6+q-(4p q)/3,5/6-p-q+(2p q)/3}.
$,<eq:rotated-vector>,<code:rotated-vector>)
After dropping the first element, one gets:
#eqcode(
$
{-1/6+p-(4p q)/3,-1/6+q-(4p q)/3,5/6-p-q+(2p q)/3}
$, <eq:chip1>, <code:chip1>
)
@eq:chip1 corresponds to a two-dimensional surface embedded in $bb(R)^3$, parameterized by
variables $0 lt.eq p lt.eq 1$ and $0 lt.eq q lt.eq 1$. This surface is fully contained within the tetrahedron of probability space, as shown in @fig:surface. Not all of the points within the tetrahedron correspond to _physical_ quantum states (i.e. states with a positive semi-definite density matrix). The only region of this tetrahedron that corresponds to physical quantum states is the ball inscribed within it#footnote[The insphere, centered at the origin ${0,0,0}$, has a radius $1/(2sqrt(3))$. This radius is not unity because, for instance, a normalized vector such as ${1,0,0,0}$ transforms into the vector ${1/2, -1/2, -1/2, -1/2}$, which, after dropping the first component, becomes ${-1/2, -1/2, -1/2}$. To maintain normalization, this vector must be rescaled by a factor of $2sqrt(3)$. After rescaling, the resulting sphere has radius 1, analogous to the Bloch sphere.].
//This constraint is the same as saying the norm of probability vector cannot be larger than $1/(2 √3)$ (similar to the condition on norm of Bloch vector).
The intersection of the insphere and the surface in @eq:chip1 is what we call the _quantum potato chip_. The border of the quantum potato chip, as highlighted in @fig:surface by the solid red line, is described by:
#eqcode(
$
q=1/2(1 plus.minus √((-1 + 6p - 6p^2) / (3(1 - 2p + 2p^2)))).
$,
<eq:constraint>, <code:constraint>
)
Therefore, the quantum potato chip is a surface described in @eq:chip1 and parameterized by $p,q$ as follows:
//#float([
#eqcode(
$ 0<=p<=1, space space
1/2(1 - √((-1 + 6p - 6p^2) / (3(1 - 2p + 2p^2)))) <= q <= 1/2(1 + √((-1 + 6p - 6p^2) / (3(1 - 2p + 2p^2)))). $,<eq:boundary>, <code:boundary>
)
//]
//)
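These expressions can be cross-checked numerically; the illustrative Python snippet below (which assumes the insphere radius $1/(2 sqrt(3))$ discussed in the footnote above) confirms that the boundary given by @eq:constraint and @eq:boundary lies on the insphere of the tetrahedron:

```python
import numpy as np

r_in = 1 / (2 * np.sqrt(3))                 # insphere radius of the tetrahedron

p_min, p_max = (3 - np.sqrt(3)) / 6, (3 + np.sqrt(3)) / 6   # range where eq. (constraint) is real
for p in np.linspace(p_min + 1e-9, p_max - 1e-9, 9):
    for sign in (+1, -1):
        q = 0.5 * (1 + sign * np.sqrt((-1 + 6*p - 6*p**2) / (3 * (1 - 2*p + 2*p**2))))
        t = np.array([-1/6 + p - 4*p*q/3,
                      -1/6 + q - 4*p*q/3,
                      5/6 - p - q + 2*p*q/3])           # eq. (chip1)
        assert np.isclose(np.linalg.norm(t), r_in)      # boundary points sit on the insphere
print("chip boundary lies on the insphere")
```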
Additionally, the original $1$-simplex in @fig:surface is in fact a 1D object embedded in 3D, spanned by points ${{1, 0,0}, {0,1,0}}$. There are two other choices for points as ${{1, 0,0}, {0,0,1}}$, and ${{0,1,0}, {0,0,1}}$. Therefore, overall there are three potato chips, as shown in @fig:3-chips, and their surfaces are described by @eq:chip1, @eq:chip2 and @eq:chip3, with $p,q$ parametrized in @eq:boundary.
#eqcode(
$
{-1/6 + q- (4 p q)/3,
5/6 - p - q + (2 p q)/3,
-1/6 + p - (4 p q)/3}\
$,<eq:chip2>,<code:chips>)
#eqcode(
$
{-1/6 - p/3 + q - (2 p q)/3, -1/6 - p/3 + (4 p q)/3,
5/6 - (4 p)/3 - q + (4 p q)/3}
$,<eq:chip3>,<code:chips>
)
#subpar.grid(
figure(image("images/1-simplex.png",width:80%), caption: [@code:1-simplex[A $1$-simplex as the lowest dimensional probability space]]), <fig:1-simplex>,
figure(image("images/surface.png",width:100%), caption: [@code:surface[Intersection of tetrahedron and surface of two $1$-simplex probability spaces]]), <fig:surface>,
figure(image("images/3-chips.png",width:80%), caption: [@code:3-chips[Three quantum potato chips]]), <fig:3-chips>,
columns: (1fr, 1fr,1fr),
caption: [
(a) A 1-simplex defined by the points ${ 1 , 0 }$, ${ 0 , 1 }$ (solid blue line). //Any point sampled from this simplex takes the form ${ p , 1 - p }$ with $0 <= p <= 1$. Since the geometric dimension of this 1-simplex is 1, although it is embedded in 2D, one can project it onto a 1D segment along x-axis by the geometric matrix $mat(delim: "(", 1 / 2, - 1 / 2, 1 / 2; 1 / √2, 1 / √2, - 1 / √2; 0, 0, 1)$, which is a composition of rotation, scaling and translation.
(b) the blue solid line represents the 1-simplex (same as in @fig:1-simplex). The surface, described by @eq:chip1, lies entirely within the 3-simplex (a tetrahedron). The solid red line represents the intersection of this surface with the tetrahedron's insphere. Only the points within the insphere correspond to valid physical quantum states.
(c) quantum potato chips are defined in @eq:chip1, @eq:chip2, and @eq:chip3, and parametrized by @eq:boundary. With one such surface in hand, one can find the other two through proper permutation of variables.
],
label: <fig:main-simplex>,
)
== Bloch Sphere
Consider the following SIC-POVM for qubits (further referred to as the QBism SIC-POVM)#footnote[Any other SIC-POVM for qubits is simply a rotational transformation of this QBism SIC-POVM.], generated from a fiducial vector ${e^(-i (3pi)/4)(sqrt(3) - 1), 1}$ #footnote[By applying Weyl-Heisenberg displacement operators $X^p Z^q$ for $p,q in {0,1}$.] and POVM elements as follows:
#[
#set text(9.5pt)
$
cal(Q)_1=1/12mat(
3 - √3,
sqrt(6)e^(-i (3pi)/4);
sqrt(6)e^(i (3pi)/4),
3 + √3), space
cal(Q)_2=1/12mat(
3 - √3,
sqrt(6)e^(i (pi)/4);
sqrt(6)e^(-i (pi)/4),
3 + √3), space \
cal(Q)_3=1/12mat(
3 + √3,
sqrt(6)e^(i (3pi)/4);
sqrt(6)e^(-i (3pi)/4),
3 - √3), space
cal(Q)_4=1/12mat(
3 + √3,
sqrt(6)e^(-i (pi)/4);
sqrt(6)e^(i (pi)/4),
3 - √3),
$ <eq:povm>]
where $sum_i cal(Q)_i=bb(I)$ and $Q_i>=0$.
These POVM elements correspond to the following Bloch vectors, respectively#footnote[More precisely its projectors: $Pi_i=2cal(Q)_i$. All Bloch vectors are computed from normalized density matrices ($tr(rho)=1$).]: ${-1/√3, 1/√3, -1/√3}$, ${1/√3, -1/√3, -1/√3}$, ${-1/√3, -1/√3, 1/√3}$, ${1/√3, 1/√3, 1/√3}$ (see @fig:basis-vs-povm and @fig:chip-in-bloch). Likewise, the corresponding phase-space basis matrix for this SIC-POVM is given by:
$
cal(B)=
{mat((1 - sqrt(3))/2;
e^(-i (3pi)/4)sqrt(3/2);
e^(i (3pi)/4)sqrt(3/2);
(1 + sqrt(3))/2),
mat((1 - sqrt(3))/2;
e^(i pi/4)sqrt(3/2);
e^(-i pi/4)sqrt(3/2);
(1 + sqrt(3))/2),
mat((1 + sqrt(3))/2;
e^(i (3pi)/4)sqrt(3/2);
e^(-i (3pi)/4)sqrt(3/2);
(1 - sqrt(3))/2),
mat((1 + sqrt(3))/2;
e^(-i pi/4)sqrt(3/2);
e^(i pi/4)sqrt(3/2);
(1 - sqrt(3))/2),
}.
$<eq:qbism>
For a probability vector $arrow(p)={p_1,p_2,p_3,1-(p_1+p_2+p_3)}$ in the basis of @eq:qbism, the vectorized density matrix is given by:
$
arrow(rho) =& cal(B).arrow(p) =
mat(
-√3(p_1+p_2)+(1+√3)/2;
sqrt(3/2)e^(i (3pi)/4)((1+i)p_1+(1-i)p_2+2p_3-1);
sqrt(3/2)e^(-i (3pi)/4)((1-i)p_1+(1+i)p_2+2p_3-1);
√3(p_1+p_2)+(1-√3)/2;
),
$<eq:density-phase-space>
whose eigenvalues are:
$ 1/2 (1 plus.minus sqrt(3) sqrt(8 (p_1^2+p_1 (p_2+p_3-1)+p_2^2+p_2 p_3+p_3^2)-8 p_2-8 p_3+3)).//=1/2(1 plus.minus sqrt(x^2+y^2+z^2)).
$<eq:eigenvalues>
By comparing the vectorized density matrix in the Bloch sphere, $1/2{1+z, x - i y,x + i y, 1-z}$, with its counterpart in the phase-space as expressed in @eq:density-phase-space, one finds that the probability vector $arrow(p)$ is related to the Bloch vector ${x,y,z}$ by the following replacement rule and its inverse:
#grid(
columns: 2,
align: horizon,
[$
mat(p_1;p_2;p_3;) arrow
1/4mat(1 - 1/√3(x - y + z);
1 + 1/√3(x - y - z);
1 - 1/√3(x + y - z)).
$<eq:replacement-rule>
],
[$
mat(x;y;z;) arrow
√3mat(1-2p_1-2p_3;
1-2p_2-2p_3;
1-2p_1-2p_2).
$<eq:inverse-replacement-rule>
]
)
Imposing positivity on the eigenvalues in @eq:eigenvalues and applying the transformation in @eq:replacement-rule yields the condition $sqrt(x^2+y^2+z^2)<=1$ which describes the insphere as the only region within the tetrahedron that corresponds to physical quantum states.
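The POVM elements of @eq:povm, the replacement rules and the eigenvalue formula in @eq:eigenvalues can all be cross-checked numerically; the snippet below is an illustrative Python/numpy sketch (the fourth probability is fixed by normalization):

```python
import numpy as np

s3, s6, pi = np.sqrt(3), np.sqrt(6), np.pi
e = lambda a: np.exp(1j * a)
Q = [np.array([[3 - s3, s6 * e(-3*pi/4)], [s6 * e( 3*pi/4), 3 + s3]]) / 12,
     np.array([[3 - s3, s6 * e(   pi/4)], [s6 * e(  -pi/4), 3 + s3]]) / 12,
     np.array([[3 + s3, s6 * e( 3*pi/4)], [s6 * e(-3*pi/4), 3 - s3]]) / 12,
     np.array([[3 + s3, s6 * e(  -pi/4)], [s6 * e(   pi/4), 3 - s3]]) / 12]
assert np.allclose(sum(Q), np.eye(2))                       # completeness of the SIC-POVM

sx = np.array([[0, 1], [1, 0]]); sy = np.array([[0, -1j], [1j, 0]]); sz = np.diag([1.0, -1.0])
x, y, z = 0.3, -0.5, 0.4                                    # arbitrary Bloch vector inside the ball
rho = 0.5 * (np.eye(2) + x*sx + y*sy + z*sz)

p = np.real(np.array([np.trace(rho @ Qi) for Qi in Q]))
expected = np.array([1 - (x - y + z)/s3, 1 + (x - y - z)/s3,
                     1 - (x + y - z)/s3, 1 + (x + y + z)/s3]) / 4
assert np.allclose(p, expected)                             # replacement rule (plus normalization)
assert np.allclose(s3 * np.array([1 - 2*p[0] - 2*p[2],
                                  1 - 2*p[1] - 2*p[2],
                                  1 - 2*p[0] - 2*p[1]]), [x, y, z])   # inverse replacement rule

inner = 8*(p[0]**2 + p[0]*(p[1] + p[2] - 1) + p[1]**2 + p[1]*p[2] + p[2]**2) - 8*p[1] - 8*p[2] + 3
eig = 0.5 * (1 + np.array([-1, 1]) * s3 * np.sqrt(inner))
assert np.allclose(eig, np.linalg.eigvalsh(rho))            # eigenvalue formula
```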
// #figure(image("POVMvsBasis.png"), caption: [@POVMvsBasis[XXXXX]])<fig:POVMvsBasis>
The previous geometric description of the potato chips can be applied directly in the Bloch sphere. Define a qubit state by the probability vector in @eq:kp-vector in the QBism SIC-POVM. Find the state in the Hilbert space using the Weyl–Wigner transformation. Set one of the eigenvalues of the density matrix to zero and solve for $q$. The result will be the same as @eq:constraint. Substituting this condition into the Bloch vector $arrow(r)$, one gets:
//#float[
$
arrow(r)=
{minus.plus √(2/(1+2p(p-1))-3), (minus.plus 1 plus.minus 2 p) √(2/(1+2p(p-1))-3), (1-2p)√3}.
$ <eq:bloch-chip-border>
//]
As shown in @fig:chip-in-bloch, @eq:bloch-chip-border parametrizes the boundary of the quantum potato chip in the Bloch sphere (with $0<=p<=1$), similar to @fig:surface.
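This boundary construction can also be verified from first principles: for $q$ given by @eq:constraint, the probability vector @eq:kp-vector maps, via @eq:inverse-replacement-rule, to a unit Bloch vector. The following Python snippet is an illustrative check:

```python
import numpy as np

s3 = np.sqrt(3)
for p in np.linspace(0.25, 0.75, 11):
    disc = (-1 + 6*p - 6*p**2) / (3 * (1 - 2*p + 2*p**2))
    for sign in (+1, -1):
        q = 0.5 * (1 + sign * np.sqrt(disc))                     # eq. (constraint)
        probs = [p*q, p*(1 - q), q*(1 - p), (1 - p)*(1 - q)]     # eq. (kp-vector)
        r = s3 * np.array([1 - 2*probs[0] - 2*probs[2],
                           1 - 2*probs[1] - 2*probs[2],
                           1 - 2*probs[0] - 2*probs[1]])         # inverse replacement rule
        assert np.isclose(np.linalg.norm(r), 1.0)                # boundary states are pure
```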
#subpar.grid(
figure(
image("images/povm-basis-qbism-2.png",width:80%),
caption:[@new-measurements[QBism SIC-POVM basis element and its POVM elements in Bloch sphere.]]),
<fig:basis-vs-povm>,
figure(
image("images/chip-in-bloch-3.png"),
// image("chip-in-bloch-2.png",width:70%),
caption: [@chip-in-bloch[Quantum potato chip and QBism SIC-POVM elements in Bloch sphere.]]),
<fig:chip-in-bloch>,
figure(image("images/3-new-measurements.png",width:65%), caption:[@new-measurements[Quantum potato chips and three new POVM elements.]]),
<fig:new-measurements>,
columns: (1fr, 1fr,1fr),
caption: [
(a) SIC-POVM basis ($cal(B)_i$, blue) vs POVM elements ($cal(Q)_i$, green), represented in the Bloch sphere. Note the $cal(B)_i$ tetrahedron represents the available phase-space, spanned by QBism SIC-POVM basis, while the POVM elements are within the insphere.
(b) quantum potato chip in the Bloch sphere. The green spheres are Bloch vectors for the QBism SIC-POVM in @eq:qbism.
(c) the solid red line is the boundary of quantum potato chip, while sphere represents new POVM defined in @eq:3new-M.
],
label:<fig:3-M>,
)
= Quantum Potato Chip as the Informationally-Minimal States
We investigate the properties of quantum states within the "quantum potato chip", which we argue are the most informationally-minimal among quantum states represented in the Bloch sphere. To support this claim, we present two complementary arguments. First, we demonstrate that the Matthews correlation measure vanishes exclusively for the quantum potato chip states. Second, we show that states lying on the quantum potato chip can be generated solely through projective measurements in the Pauli-X and Pauli-Z bases, further reinforcing their status as informationally-minimal quantum states.
== Matthews correlation of classical binary variables
To quantify the correlation of classical binary variables within the quantum potato chip compared to other points in the Bloch sphere, one can use the Matthews correlation coefficient. Given binary variables defining the probability vector as:
$
arrow(p) = {p_1,p_2,p_3,p_4} arrow mat(p_11, p_12; p_21, p_22),
$ <eq:prob_vector>
the corresponding Matthews correlation coefficient is given by:
$
phi = (p_22 p_11 - p_12 p_21) / sqrt((p_11 + p_12)(p_21+p_22)(p_11+p_21)(p_12+p_22)) = (sqrt(3)y-x z)/(sqrt((3-x^2)(3-z^2))),
$ <eq:matthews>
where in the last step, we replace variables by the Bloch vector components ${x,y,z}$ using @eq:inverse-replacement-rule.
As shown in @fig:matthew, the only region with $phi=0$ is the quantum potato chip. This means that measurement predictions in @eq:prob_vector, which can be obtained experimentally by measuring POVMs, are uncorrelated, much like flipping a pair of independent biased coins (probabilities are split as described by @eq:kp-vector). Equivalently, the states within the quantum potato chips can be represented as a product state in terms of the measurement basis associated with a SIC-POVM; precisely the scenario where the qubit state does not exhibit quantum correlations between certain SIC-POVM measurement outcomes.
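Both forms of @eq:matthews, and the vanishing of $phi$ for the product probability vectors of @eq:kp-vector, can be confirmed with a few lines of Python (illustrative):

```python
import numpy as np

def phi(p11, p12, p21, p22):
    num = p22 * p11 - p12 * p21
    den = np.sqrt((p11 + p12) * (p21 + p22) * (p11 + p21) * (p12 + p22))
    return num / den

s3 = np.sqrt(3)
x, y, z = 0.2, 0.4, -0.3                    # arbitrary Bloch vector inside the ball
p = np.array([1 - (x - y + z)/s3, 1 + (x - y - z)/s3,
              1 - (x + y - z)/s3, 1 + (x + y + z)/s3]) / 4
assert np.isclose(phi(*p), (s3*y - x*z) / np.sqrt((3 - x**2) * (3 - z**2)))

pp, qq = 0.3, 0.7                           # a potato-chip (product) state
assert np.isclose(phi(pp*qq, pp*(1-qq), qq*(1-pp), (1-pp)*(1-qq)), 0.0)
```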
#subpar.grid(
columns: (1fr, 1fr, 0.1fr),
figure(image("images/PhiContours.png",width: 75%)), <fig:PhiContours>,
figure(image("images/PhiContours-2.png", width: 55%)),
<fig:PhiContours-2>,
caption: [@PhiContours[Matthews correlation coefficient @eq:matthews for the pair of binary variables forming the probability vector @eq:prob_vector. $phi$ has minimum $-1/sqrt(3)$ and maximum $1/sqrt(3)$ at the poles of the sphere along the $y$-axis]],
label:<fig:matthew>
)
@eq:matthews takes a more natural form if the Bloch sphere is rescaled to radius#footnote[Read our footnote for insphere, after @eq:chip1] $1/sqrt(3)$ (i.e., $x$, $y$ and $z$ are each divided by $sqrt(3)$):
$
phi = (y-x z)/(sqrt((1-x^2)(1-z^2))),
$
== Construction of Quantum State by Projective Measurements
=== SIC-POVM case
A qubit state can be reconstructed from SIC-POVM probabilities, because they are informationally complete (IC)@renes2004symmetric @saraceno2017phase @fuchs2017sic @appleby2017introducing @scott2010symmetric @appleby2014symmetric. However, general qubit states cannot be recovered from only two independent projective measurements. We will show that for states within the quantum potato chips, it is possible to reconstruct the entire state from only two independent projective measurements. Thus we refer to quantum potato chips as the informationally-minimal states, because their probability space can be reduced into two disjoint separable ones.
Given a SIC-POVM such as @eq:povm, define three new POVM sets as follows:
$cal(M)_x: {Q_1+Q_3,bb(I)-(Q_1+Q_3)}$,
$cal(M)_y: {Q_1+Q_4,bb(I)-(Q_1+Q_4)}$, and
$cal(M)_z: {Q_1+Q_2,bb(I)-(Q_1+Q_2)}$. They correspond to measurements along these Bloch vectors:
$
cal(M)_x:{
mat(1/√3; 0; 0),
mat(-1/√3; 0; 0;)
}, space
cal(M)_y:{
mat(0; 1/√3; 0),
mat(0; -1/√3; 0;)
}, space
cal(M)_z:{
mat(0; 0; 1/√3),
mat(0; 0; -1/√3;)
},
$<eq:3new-M>
This feature is also visualized in @fig:new-measurements. One can see that the vectors corresponding to $cal(M)_i$ lie along the Cartesian axes (i.e., the same directions as the Pauli measurement axes), but scaled by a factor of 1/√3.
For a qubit state defined by the probability vector of @eq:kp-vector in the SIC-POVM basis of @eq:qbism, the measurement probabilities for $cal(M)_x$ are ${q,1-q}$ and for $cal(M)_z$ are ${p,1-p}$. Additionally, $cal(M)_x$, $cal(M)_y$ and $cal(M)_z$ can be treated as scaled versions of the projective measurements of the Pauli operators $sigma_x$, $sigma_y$ and $sigma_z$, respectively. For a generic state $1 / 2 (bb(I) + arrow(r) . arrow(sigma))$, the Pauli-Z probabilities are ${1/2(1-z),1/2(1+z)}$, while for $cal(M)_z$ the probabilities are ${1/6 (3 - √(3) z), 1/6 (3 + √(3) z)}$; likewise for Pauli-X, one gets ${1/2(1-x),1/2(1+x)}$ while those of $cal(M)_x$ are ${1/6 (3 - √(3) x), 1/6 (3 + √(3) x)}$. These results can also be found directly from the density matrix in the QBism SIC-POVM basis:
$ 1/(4 sqrt(3)) mat(
(sqrt(3) - x + y - z),
(sqrt(3) + x - y - z);
(sqrt(3) - x - y + z),
(sqrt(3) + x + y + z)).
$<eq:sic-density>
Summing the rows of @eq:sic-density gives the $cal(M)_z$ probabilities, while summing the columns gives the $cal(M)_x$ probabilities.
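This can be checked symbolically (an illustrative sketch using the entries of @eq:sic-density as written, not a paclet computation):
```WL
(* Sketch: row sums of @eq:sic-density equal the M_z probabilities and column
   sums equal the M_x probabilities. *)
d = {{Sqrt[3] - x + y - z, Sqrt[3] + x - y - z},
     {Sqrt[3] - x - y + z, Sqrt[3] + x + y + z}}/(4 Sqrt[3]);
Simplify[Total[d, {2}]]   (* rows: equal to {(3 - Sqrt[3] z)/6, (3 + Sqrt[3] z)/6} *)
Simplify[Total[d]]        (* columns: equal to {(3 - Sqrt[3] x)/6, (3 + Sqrt[3] x)/6} *)
```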
Given any projective measurement for Pauli matrices, if its probability is denoted by $P(sigma_i)=p$, the corresponding $cal(M)_i$ probability will be given by
$P(cal(M)_i) = 1/√(3)(p-1/2)+1/2$. This
scaling can also be expressed with the following doubly-stochastic matrix#footnote[https://mathworld.wolfram.com/DoublyStochasticMatrix.html]:
$
cal(S) &= 1/√3 bb(I) + (1 - 1/√3) 1/2 mat(1, 1; 1, 1) \
&= 1/6 mat(3+√3, 3-√3; 3-√3, 3+√3),
$
such that
$
p -> cal(S)p.
$<eq:stoch-shrink>
Therefore, for any state within the quantum potato chip, one can perform Pauli-X and Pauli-Z #footnote[Alternatively Pauli-X and Pauli-Y, or Pauli-Y and Pauli-Z, for a differently oriented potato chip.] projective measurements and record the corresponding probabilities. By transforming these probabilities using @eq:stoch-shrink, one recovers $p$ and $q$, which reconstructs the probability vector in @eq:kp-vector and hence the full quantum state. Note that this procedure works only for states in the quantum potato chip and not for general states.
Consider one specific example. Given the state vector from @eq:kp-vector with parameters $p = 1/3$ and $q = 2/5$, the QBism SIC-POVM probabilities are ${2/15, 1/5, 4/15, 2/5}$. For Pauli-Z and Pauli-X projective measurements, the probabilities are ${1/6(3-sqrt(3)),1/6(3+sqrt(3))}$ and ${1/10 (5-sqrt(3)), 1/10 (5+sqrt(3))}$, respectively. For measurements $cal(M)_z$ and $cal(M)_x$, or after applying the transformation from @eq:stoch-shrink, the probabilities are ${1/3,2/3}$ and ${2/5,3/5}$, respectively. Their outer product also yields ${2/15, 1/5, 4/15, 2/5}$.
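The numbers in this example can be reproduced in a few lines (an illustrative sketch; the probability vector and the scaling matrix $cal(S)$ are the expressions given above):
```WL
(* Sketch: reproduce the example with p = 1/3, q = 2/5. *)
p = 1/3; q = 2/5;
Flatten[KroneckerProduct[{p, 1 - p}, {q, 1 - q}]]            (* {2/15, 1/5, 4/15, 2/5} *)
S = {{3 + Sqrt[3], 3 - Sqrt[3]}, {3 - Sqrt[3], 3 + Sqrt[3]}}/6;
Simplify[S . {(3 - Sqrt[3])/6, (3 + Sqrt[3])/6}]             (* Pauli-Z probs -> {1/3, 2/3} *)
Simplify[S . {(5 - Sqrt[3])/10, (5 + Sqrt[3])/10}]           (* Pauli-X probs -> {2/5, 3/5} *)
```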
This factorization allows us to decouple the probability 4-vector in a 3-simplex into two lower-dimensional disjoint distributions in 1-simplices. Consequently, the probability space for a state in the quantum potato chip can be viewed as the product of two independent subspaces, each governed by their respective probability distributions.
//This construction reflects the disjoint nature of the probability distributions for quantum potato chip, and highlights the independence between the two subspaces.
In practical terms, the system exhibits a decoupling of correlations between the disjoint subspaces, which simplifies the analysis and allows us to treat each subspace independently. Potential implications will be discussed in our future work.
// == Inference?
// The basis in @eq:qbism allows not just to view states as probability vectors, but also operators and arbitrary quantum channels as quasi-stochastic processes.
== Quasi-Probability Bases
//quasi and relation to experiment?
Historically, the phase-space representation of quantum states, such as those in the bases of Wootters @Wootters1986-cq and Feynman @Feynman1987-vj #footnote[For qubits the Feynman and Wootters bases are the same and we refer to both interchangeably.], involves quasi-probability distributions. These representations allow quantum states to be treated in a manner similar to classical statistical distributions, but with key distinctions, such as the possibility of negative values. For example, the Wootters basis is given by:
#eqcode($
cal(W)={
vec(1, e^(-i pi/4)/sqrt(2), e^(i pi/4)/sqrt(2), 0),
vec(0, e^(i pi/4)/sqrt(2), e^(-i pi/4)/sqrt(2), 1),
vec(1, -e^(-i pi/4)/sqrt(2), -e^(i pi/4)/sqrt(2), 0),
vec(0, e^(i pi/4)/sqrt(2), -e^(-i pi/4)/sqrt(2), 1)
}.
$, <eq:wootters-matrix>, <code:wootters-matrix>
)
In Wootters basis, the density matrix for a Bloch vector ${x,y,z}$ is described by:
#eqcode($
mat(
1/4 (1 + x + y + z), 1/4 (1 + x - y - z);
1/4 (1 - x - y + z), 1/4 (1 - x + y - z)
).
$,<eq:bloch-in-wooters>,<code:bloch-in-wooters>)
The summation of elements in @eq:bloch-in-wooters along columns results in $1/2(1+z)$ and $1/2(1-z)$, which represent the probabilities associated with projective measurements along the Pauli-Z axis. Similarly, summing the elements along rows yields $1/2(1+x)$ and $1/2(1-x)$, which correspond to the probabilities of measurements along the Pauli-X axis#footnote[Pauli-Z and Pauli-X measurements can be interpreted as the position (Pauli-Z) and momentum (Pauli-X) measurements within the phase space.]. In other words, no scaling or extra transformation as in @eq:stoch-shrink is needed here, unlike in the SIC-POVM case.
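The same check can be written out explicitly (an illustrative sketch using the array of @eq:bloch-in-wooters; it does not rely on the quantum paclet):
```WL
(* Sketch: column/row sums of the Wootters array reproduce the Pauli-Z and
   Pauli-X outcome probabilities. *)
w = {{(1 + x + y + z)/4, (1 + x - y - z)/4},
     {(1 - x - y + z)/4, (1 - x + y - z)/4}};
Simplify[Total[w]]        (* columns: {(1 + z)/2, (1 - z)/2} *)
Simplify[Total[w, {2}]]   (* rows:    {(1 + x)/2, (1 - x)/2} *)
```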
In the Wootters phase-space representation the state is no longer described by a proper probability distribution; it is usually referred to as a _quasi_-distribution. This provides another description of quantum potato chip states, as the states corresponding to quasi-distributions with all entries positive.
#subpar.grid(
figure(
image("images/ClassicalStates.png",width: 90%),
caption: [@Wootters-tetra[Tetrahedron of Wootters basis.]]
),<fig:ClassicalStates>,
figure(image("images/ClassicalStatesRegion.png",width: 90%), caption:[@Wootters-potato[Positive probability region of Wootters phase-pace]]),
<fig:ClassicalStatesRegion>,
figure(
image("images/Matreshka.png",width: 120%), caption: [@Matreshka[Matryoshka of tetrahedrons]]),<fig:matreshka>,
columns: (1fr, 1fr, 1fr),
caption: [
(a) The tetrahedron represents the points in the Wootters phase space that correspond to only positive probabilities. Note that the Bloch sphere is not inscribed within the tetrahedron, and not all points within the tetrahedron represent quantum states. The Pauli-Z eigenstates are $ket(arrow.t), ket(arrow.b)$ and the Pauli-X eigenstates are $ket(arrow.l), ket(arrow.r)$.
(b) The quantum potato chip, i.e. the states with independent observables, represented in the Wootters basis; its boundary is the solid black line. The region inside the Bloch sphere is the intersection of the Wootters tetrahedron with the Bloch sphere, corresponding to quantum states represented by positive probability vectors#footnote[Refer to Fig.4 in @appleby2011properties, which has a similar visualization, in a slightly different context.]
(c) The correspondence between the probabilistic phase-space basis (blue), the quasi-probabilistic Feynman basis @Feynman1987-vj (orange) and the SIC-POVM projectors (green). Each tetrahedron is a scaled-down version of the previous one by a factor of $sqrt(3)$. Each corner is denoted by two arrows, representing spins along the z and x axes (up/down for the z-axis and right/left for the x-axis).
],
label: <fig:3-MM>,
)
Allowing quasi-probabilities has some geometric consequences. If one considers only positive (say, classical) probabilities, the corresponding tetrahedron does not completely contain the Bloch sphere (see @fig:3-MM). In other words, the parts of the Bloch sphere outside the tetrahedron correspond to states with negative probabilities in this basis. Additionally, the parametrization of the quantum potato chip differs from @eq:constraint:
#eqcode(
$ q=1/2 plus.minus √((p(1-p)) / (2(1 - 2p + 2p^2))). $,<eq:constraint-Wootters>,
<code:constraint-Wootters>
)
A similar factorization argument for the probability vectors of the quantum potato chip can be applied in this context. However, it is crucial to note that, unlike the SIC-POVM case, the probabilities here do not necessarily correspond to directly measurable quantities, as they may take on negative values.
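As an illustrative sketch, @eq:constraint-Wootters can be recovered from the purity condition, assuming the Bloch vector ${2p-1, (2p-1)(2q-1), 2q-1}$ read off from @eq:bloch-in-wooters for a product quasi-distribution (this mirrors the appendix listing for @eq:constraint-Wootters, without the paclet):
```WL
(* Sketch: purity |r| = 1 of the product-form Wootters state reproduces the
   boundary of @eq:constraint-Wootters. The Bloch vector below is an assumption
   read off from @eq:bloch-in-wooters, not an official paclet output. *)
r = {2 p - 1, (2 p - 1) (2 q - 1), 2 q - 1};
FullSimplify[Solve[r . r == 1, q], 0 < p < 1]
(* q -> 1/2 ± Sqrt[p (1 - p)/(2 (1 - 2 p + 2 p^2))] *)
```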
// #figure(image("Matreshka.png",width: 60%), caption: [@Matreshka[The correspondence between the probabilistic phase-space basis, quasi-probabilistic Feynman basis @Feynman1987-vj and SIC-POVM projectors. Each tetrahedron is a scaled down version of the previous one by a factor of $sqrt(3)$]])<fig:matreshka>
= Effect of Error Channels on the Quantum Potato Chip
In this section, we explore the impact of noise on the quantum potato chip. Specifically, we model various types of errors using known quantum channels. See @tab:channels for a detailed description of the quantum channels, their Kraus operators, and how they transform the Bloch sphere.
#[#set text(9.5pt)
#show math.equation: set text(size: 8pt)
#figure(
align(horizon)[#table(
columns: 3,
rows: (.5cm, 1cm),
table.header([#strong[Channel name];], [#strong[List of Kraus operators];],[#strong[Channel's effect on Bloch sphere];]),
table.hline(),
[Bit flip], [${sqrt(xi) space sigma_x,sqrt(1-xi) space bb(I)}$],table.cell(rowspan: 6)[#image("images/channels.png",width:90%)],
[Phase flip], [${sqrt(xi) space sigma_z,sqrt(1-xi) space bb(I)}$],
[Bit-phase flip], [${sqrt(xi) space sigma_y,sqrt(1-xi) space bb(I)}$],
[Depolarization], [${sqrt(xi/4) space sigma_x,sqrt(xi/4) space sigma_y,sqrt(xi/4) space sigma_z,sqrt(1-(3xi)/4) space bb(I)}$],
    [Amplitude damping], [${(1-sqrt(1-xi))/2 sigma_z+(1+sqrt(1-xi))/2 bb(I),sqrt(xi)/2 sigma_x+i sqrt(xi)/2 sigma_y }$],
[Phase damping], [${(1-sqrt(1-xi))/2 sigma_z+(1+sqrt(1-xi))/2 bb(I),
&sqrt(xi)/2 bb(I)- sqrt(xi)/2 sigma_z }$]
)]
, caption: [@code:channels[Common noise channels and their corresponding Kraus operators. We also show how each one changes Bloch sphere.]]
)<tab:channels>
]
The Bloch surface of the quantum potato chip, $sqrt(3){ (1 - 2 q), (2p-1)(2q-1),(1 - 2 p)}$, is transformed into a new one as shown in the following table for bit flip, phase flip, bit-phase flip, depolarization, amplitude damping, and phase damping, respectively:
#[#set text(9.5pt)
#show math.equation: set text(size: 8pt)
#figure(
align(horizon)[#table(
columns: 3,
rows: (.5cm, 1cm),
table.header([#strong[Channel name];], [#strong[Resultant Bloch vector];],[#strong[Channel's effect on Potato chip];]),
table.hline(),
[Bit flip], [$sqrt(3){- f_q, -f_p f_q f_xi, f_p f_xi}$],table.cell(rowspan: 6)[#image("images/channels-on-potato.png",width:100%)],
[Phase flip], [$sqrt(3){f_q f_xi, -f_p f_q f_xi, -f_p}$],
[Bit-phase flip], [$sqrt(3){-f_q f_xi, f_p f_q, f_p f_xi}$],
[Depolarization], [$sqrt(3){-f_q (1-xi), f_p f_q (1-xi), -f_p (1-xi)}$],
[Amplitude damping], [$sqrt(3){-f_q (1-xi), f_p f_q sqrt(1-xi), xi/sqrt(3)-f_p (1-xi)}$],
[Phase damping], [$sqrt(3){-f_q sqrt(1 - xi), f_p f_q sqrt(1-xi), -f_p}$]
)]
, caption: [@code:channels-on-potato[Effect of the channels on the potato chip, with $f_p=2p-1$, $f_q=2q-1$, $f_xi=2xi-1$. Image: the error rate/probability is set to $xi=1/3$. The only noise channels that keep states within the quantum potato chip are bit flip, phase flip and phase damping.]]
)<tab:channels-on-potato>
]
//$
// mat("BitFlip";
// "PhaseFlip";
// "BitPhaseFlip";
// "Depolarization";
// "AmplitudeDamping";
// "PhaseDamping") arrow
//mat(delim: #none,
// "BitFlip: " {sqrt(3) (-1 + 2 q);
// sqrt(3) (-1 + 2 p) (-1 + 2 q) (-1 + 2 xi);
// sqrt(3) (-1 + 2 p) (1 - 2 xi)};
// "PhaseFlip: " {sqrt(3) (-1 + 2 q) (1 - 2 xi),
// sqrt(3) (-1 + 2 p) (-1 + 2 q) (-1 + 2 xi),
// sqrt(3) (-1 + 2 p)};
// "PhaseDamping: "
//{(-1 + 2 q) sqrt(3 - 3 xi), -((-1 + 2 p) (-1 + 2 q) sqrt(3 - 3 xi)),
// sqrt(3) (-1 + 2 p)};
// "BitPhaseFlip: " {sqrt(3) (-1 + 2 q) (1 - 2 ), -sqrt(3) (-1 + 2 p) (-1 + 2 q), sqrt(3) (-1 + 2 p) (1 - 2 xi)};
//"Depolarization: "
// {-sqrt(3) (-1 + 2 q) (-1 + xi),
// sqrt(3) (-1 + 2 p) (-1 + 2 q) (-1 + xi), sqrt(3) (-1 + 2 p) (-1 + xi)};
// "AmplitudeDamping: "
//{(-1 + 2 q) sqrt(3 - 3 xi), -((-1 + 2 p) (-1 + 2 q) sqrt(
// 3 - 3 xi)), -sqrt(3) -
// 2 sqrt(3) p (-1 + xi) + xi + sqrt(3) xi}
//)
//$
Upon closer analysis, one can show that bit flip, phase flip and phase damping map states within the quantum potato chip to other states within the quantum potato chip, only shrinking the region (see @tab:channels-on-potato).
The other channels map states outside of the quantum potato chip. To show explicitly that the probabilities remain separable, note that the transformed Bloch vector is again of the form @eq:kp-vector with new variables $p',q'$: for the bit flip ${p' = p + xi (1-2p), q' = q}$, for the phase flip ${q' = q + xi (1-2q), p' = p}$, and for the phase damping ${p' = p,
q' = 1/2 (1 - sqrt(1 - xi) + 2 q sqrt(1 - xi))}$.
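A quick symbolic check of the bit-flip case (an illustrative sketch; the chip parametrization is the one given above, and the Bloch-vector action of the channel is the standard one):
```WL
(* Sketch: the bit-flip image of a chip state is again a chip state with
   p -> p + xi (1 - 2 p) and q unchanged. *)
chip[p_, q_] := Sqrt[3] {1 - 2 q, (2 p - 1) (2 q - 1), 1 - 2 p};
bitFlip[{x_, y_, z_}, xi_] := {x, (1 - 2 xi) y, (1 - 2 xi) z};
Simplify[bitFlip[chip[p, q], xi] == chip[p + xi (1 - 2 p), q]]   (* True *)
```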
//#figure(
// image("images/channels-on-potato.png", width: 60%),
// caption: [@code:channels-on-potato[channels effect o//n potato chip. We set error rate/probability as $xi=1/3$. The only noise channels that keep states within the quantum potato chips are bit flip, phase flip and phase damping.]],
//) <fig:channels-on-potato>
= Unconventional Liouvillian Evolution at the Boundary of Quantum Potato Chips
A natural physics question arises: can the boundary of the quantum potato chip be understood as the result of state evolution under a master equation, starting from an appropriate initial condition?
Given that the trajectory of each individual probability vector $P = {p, 1-p}$ and $Q = {q, 1-q}$ is already known and constrained by @eq:constraint, the parameter $p$ can be interpreted as a time variable in a parametric equation for the whole system. We can identify two time (or parameter)-dependent transition matrices that generate the desired trajectory.
$
P'(p) = cal(L)_1(p) P(p),
space space
Q'(p) = cal(L)_2(p) Q(p)
$
By combining these two time-dependent transition matrices, we obtain the resulting transition matrix, which governs the overall evolution of the system.
$
cal(L) = log(e^(cal(L)_1) times.circle e^(cal(L)_2))
$
Any $2 times 2$ generator of a doubly stochastic transition depends on only a single parameter: the requirement that both rows and columns of the resulting transition matrix sum to 1 forces the rows and columns of the generator to sum to zero, fixing its form up to one parameter.
$
cal(L)_1 = mat(x,-x;-x,x), space
cal(L)_2 = mat(y,-y;-y,y)
$
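As an illustrative check (a sketch, independent of the appendix listings), the exponential of such a generator indeed has all row and column sums equal to one:
```WL
(* Sketch: MatrixExp of {{x, -x}, {-x, x}} has row and column sums equal to 1. *)
m = MatrixExp[{{x, -x}, {-x, x}}];
Simplify[{Total[m], Total[m, {2}]}]   (* {{1, 1}, {1, 1}} *)
```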
We can then simply solve for $x$ and $y$ as follows:
$
x &= 1 / (2p - 1), space
y &= (1 - 2p) / (4p(1+p(-3 - 2p(p-2)))).
$
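For instance, the first rate follows directly from $P'(p) = cal(L)_1(p) P(p)$ (a sketch; the full derivation appears in the appendix listing for @eq:Liovillian):
```WL
(* Sketch: solve the first generator rate from P'(p) = L1(p) . P(p). *)
P = {p, 1 - p};
Solve[Thread[{{x, -x}, {-x, x}} . P == D[P, p]], x]   (* x -> 1/(2 p - 1) *)
```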
Accordingly, for the overall transition matrix one finds:
#eqcode(
$
cal(L) = mat(delim: "(", frac(8 (p - 1) p ((p - 1) p + 1) + 1, 4 (p - 1) p (2 p - 1) (2 (p - 1) p + 1)), frac(1 - 2 p, 4 (p - 1) p (2 (p - 1) p + 1)), frac(1, 1 - 2 p), 0; frac(1 - 2 p, 4 (p - 1) p (2 (p - 1) p + 1)), frac(8 (p - 1) p ((p - 1) p + 1) + 1, 4 (p - 1) p (2 p - 1) (2 (p - 1) p + 1)), 0, frac(1, 1 - 2 p); frac(1, 1 - 2 p), 0, frac(8 (p - 1) p ((p - 1) p + 1) + 1, 4 (p - 1) p (2 p - 1) (2 (p - 1) p + 1)), frac(1 - 2 p, 4 (p - 1) p (2 (p - 1) p + 1)); 0, frac(1, 1 - 2 p), frac(1 - 2 p, 4 (p - 1) p (2 (p - 1) p + 1)), frac(8 (p - 1) p ((p - 1) p + 1) + 1, 4 (p - 1) p (2 p - 1) (2 (p - 1) p + 1)))
$,
<eq:Liovillian>, <code:Liovillian>
)
In the Hilbert space, the master equation corresponding to the above transition can be expressed as:
$
dot(rho)=gamma_1(L_1 rho L_1^dagger-1/2{L_1^dagger L_1, rho})+gamma_2
(L_2 rho L_2^dagger-1/2{L_2^dagger L_2, rho}),
$
with two Lindblad jump operators:
$
L_1 = 1/sqrt(1-2p)(bb(I)+sigma_z),
space space
L_2 = 1/2sqrt((1-2p)/(p(1-p)(1-2p(1-p)))) space sigma_x.
$ <eq:Lindblads>
However, here we have unconventional damping rates of opposite signs, $gamma_1=-gamma_2=1$. Additionally, because of the singularity at $p=1/2$ in @eq:Lindblads, the damping rates must swap signs to close the trajectory:
$
gamma_1=-gamma_2=cases(&1 "if" 0 < p <= 1/2, - &1 "if" 1/2<p<1)
$
#figure(
image("images/evolution.png", width: 30%),
caption: [@code:evolution[Two solutions from @eq:constraint-Wootters define opposite shifts to the initial state $ket(-) plus.minus delta p$, evolving in opposite directions.]]
) <fig:evolution>
While this specific evolution may lack an immediate physical interpretation, it is significant that, starting from any point on the boundary of the quantum potato chip, there is a non-conventional dynamical equation guaranteeing that the quantum state remains a pure physical state confined to that boundary.
= Concluding remarks
// SIC-POVMs provide a natural way of describing quantum objects in the phase space. In this regard, a qubit state can be described by a probability $4$-vector in a $3$-simplex space. With proper geometric transformation, $3$-simplex can be projected into a tetrahedron in $bb(R)^3$. Quantum states are inside the insphere of the tetrahedron, meaning not all points within the tetrahedron corresponds a quantum state. Independently, a particular surface within the tetrahedron can be constructed by product of two independent $1$-simplex as probability space, providing the most minimal (say classical) description of quantum states. The part of aforementioned surface within the insphere forms a region we call as a quantum potato chip, which is the only part of the tetrahedron probability space that can be reduced to lower-dimensional probability space. This unique feature might provide advantage for these states as potential source for any computation. Of course, an important open question would be how any [universal] quantum computation can be reduced to only probabilistic rules, in a classical way, if possible and what new features should be incorporated into this machinary, to reproduce quantum results. This is a topics of our future research.
SIC-POVMs offer an elegant framework for representing qubit states geometrically, using a 4-dimensional probability vector within a 3-simplex. Through geometric transformations, this simplex can be projected into a regular tetrahedron in $bb(R)^3$. Crucially, not all points in this tetrahedron represent valid qubits, with physical quantum states residing within the insphere of the tetrahedron. A specific surface within this space, defined by the product of two 1-simplices representing an uncorrelated pair of binary variables, intersects the insphere forming the "quantum potato chip," a subset of quantum states of the qubit with uncorrelated projective observables.
This suggests an important question: can universal quantum computation still be achieved after restricting qubits to such states? This question, together with possible modifications to the quantum formalism, will be explored in future work; one concrete direction is a computational model built on classical binary variables that are mapped onto potato-chip states. The geometric structure of SIC-POVMs suggests that understanding these spaces could unlock new pathways in quantum computing and quantum state representation.
// columns
#set heading(numbering: none)
= Acknowledgement
Computational aspects of this paper (e.g., mathematical derivation of formulas and visualizations) were done in version 14.0 of the Wolfram Language. We also extensively used the Wolfram quantum framework, which provides a library of functionalities for symbolic and numeric quantum computation in discrete finite-dimensional vector spaces. We thank <NAME> from the Wolfram Institute for his valuable comments on an early draft of our work.
#pagebreak()
// add style to handle multireference cite
#bibliography("references.bib", full: true, style: "american-physics-society")
#pagebreak()
#import "@preview/codly:1.0.0": *
#show: codly-init.with()
#codly(number-format: none, zebra-fill: none)
#set heading(numbering: (..) => "", level: 4)
// == <chipPlot3D>
// ```WL
// ParametricPlot3D[
// Evaluate[
// Rest@RotationTransform[{{1, 1, 1, 1}, {1, 0, 0, 0}}]@
// Flatten@KroneckerProduct[{p, 1 - p}, {q, 1 - q}]], {p, 0, 1}, {q,
// 1/2 - √[(-1 + 6 p - 6 p^2)/(1 - 2 p + 2 p^2)]/(2 √[3]),
// 1/2 + √[(-1 + 6 p - 6 p^2)/(1 - 2 p + 2 p^2)]/(2 √[3])},
// Mesh -> None, PlotPoints -> 200, Boxed -> False, Axes -> False]
// ```
= @fig:rotation <code:rotation>
```WL
GraphicsRow[
{
ListLinePlot[{
{{0, 1}, {1, 0}},
RotationTransform[{{1, 1}, {0, 1}}]@{{0, 1}, {1, 0}}
},
Mesh -> All,
Ticks -> None,
PlotRange -> {{-1, 1}, {0, 1}},
AspectRatio -> 1/2, AxesLabel -> {"X", "Y"}
],
Framed[
"\!\(\*OverscriptBox[\(\[LongRightArrow]\),SubscriptBox[\(U\),\\(rot\)]]\)",
FrameStyle -> None
],
NumberLinePlot[Interval[{0, 1}], PlotStyle -> ColorData[97][2]]
}
]
GraphicsRow[{
Graphics3D[
{#, TranslationTransform[{0, 0, -1/Sqrt[3]}]@
RotationTransform[{ConstantArray[1, 3], UnitVector[3, 3]}]@#}
&@Simplex[IdentityMatrix[3]],
Axes -> True, AxesLabel -> {"X", "Y", "Z"},
Ticks -> None, ViewPoint -> {3, -1, 1}
],
Framed[
"\!\(\*OverscriptBox[\(\[LongRightArrow]\), SubscriptBox[\(U\), \\(rot\)]]\)",
FrameStyle -> None
],
TernaryListPlot[{}, Method -> {"Backdrop" -> LightOrange}]
}]
GraphicsRow[{
Style[MatrixForm@IdentityMatrix[4], 30],
Framed[
"\!\(\*OverscriptBox[\(\[Rule]\), SubscriptBox[\(U\), \(rot\)]]\)",
FrameStyle -> None
],
Graphics3D[
{
Opacity[.5],
Simplex@RotationTransform[{ConstantArray[1, 4],
UnitVector[4, 1]}][IdentityMatrix[4]][[All, 2 ;;]]
},
Boxed -> False,
Epilog -> {
AxisObject[Line[{{.03, 0.15}, {0.23, .83}}], {0, 1},
TickDirection -> "Inward", TickLengths -> 0],
AxisObject[Line[{{.77, 0.15}, {.03, 0.15}}], {0, 1},
TickDirection -> "Inward", TickLengths -> 0],
AxisObject[Line[{{.03, 0.15}, {.6, 0.67}}], {0, 1},
TickLabels -> None, TickLengths -> 0]
}]
}]
```
= @eq:gen-rotation <code:gen-rotation>
```WL
rot = RotationMatrix[θ, {{1, 1, 1, 1}, {1, 0, 0, 0}}];
rot // MatrixForm
```
= @eq:transformation <code:transformation>
```WL
rotation = rot /. θ -> π/3;
rotation // MatrixForm
```
= @eq:kp-vector <code:kp-vector>
```WL
product = KroneckerProduct[{p, 1 - p}, {q, 1 - q}] // Flatten
```
= @eq:rotated-vector <code:rotated-vector>
```WL
surface4d = rotation . product // ComplexExpand
```
= @eq:chip1 <code:chip1>
```WL
surface = Rest[surface4d]
```
= @eq:constraint <code:constraint>
```WL
boundary = Normal[Solve[Norm[2 Sqrt[3] surface] == 1, q, Reals]]
```
= @eq:boundary <code:boundary>
```WL
{0 <= p <= 1, First[#] <= q <= Last[#] &@Values[Flatten@boundary]}
```
= @eq:chip2 & @eq:chip3 <code:chips>
```WL
products = Permute[product, #] & /@ {{1, 4, 2, 3}, {3, 1, 2, 4}};
surfaces = ComplexExpand[Rest[rotation . #] & /@ products]
```
= @eq:prob_vector & @eq:matthews <code:matthews>
```WL
Phi[dist_CategoricalDistribution] :=
Phi[Information[dist, "ProbabilityArray"]]
Phi[p_?MatrixQ] /; Dimensions[p] == {2, 2} :=
 (p[[2, 2]] p[[1, 1]] - p[[1, 2]] p[[2, 1]])/
  Sqrt[(p[[1, 1]] + p[[1, 2]]) (p[[2, 1]] + p[[2, 2]]) (p[[1, 1]] +
      p[[2, 1]]) (p[[1, 2]] + p[[2, 2]])]
FullSimplify@
Phi@ArrayReshape[
QuantumPhaseSpaceTransform[QuantumState["BlochVector"[{x, y, z}]],
basis]["AmplitudesList"], {2, 2}]
```
= @eq:3new-M & @eq:sic-density <code:sic-density>
```WL
basis = With[{basis = QuantumBasis["QBismSIC"]},
QuantumBasis[basis["Names"],
Threaded[IdentityMatrix[2]] - basis["Elements"] /.
x_?NumericQ :> (#1 + #2 I & @@ RootApproximant[ReIm[x]])]];
povm = QuantumMeasurementOperator[
Inverse[Outer[Tr@*Dot, #, #, 1]] . # &@basis["Elements"]];
ops = With[{povm = povm["POVMElements"]},
With[{op = povm[[#1]] + povm[[#2]]},
Simplify@{Normal[op], IdentityMatrix[2] - op}] & @@@ {{1,
2}, {1, 3}, {1, 4}}];
Map[(QuantumState[#]["BlochVector"] // Simplify) &, ops, {2}]
densityQBISMSIC =
ArrayReshape[
With[{povms = povm["POVMElements"],
s = QuantumState["BlochVector"[{x, y, z}]][
"DensityMatrix"]}, (Tr[# . s] // FullSimplify) & /@ povms], {2,
2}]
```
= @fig:1-simplex <code:1-simplex>
```WL
Graphics[{
{Text[Style["{q,1-q}", 10], {.37, .83}],
Text[Style["{p,1-p}", 10], {.7, .45}]},
{ColorData[97][1], Simplex[{{1, 0}, {0, 1}}]}, {Black,
Point[{.25, .75}]},
{Red, Point[{.65, .35}]}
},
Axes -> True, AxesOrigin -> {0, 0},
AxesLabel -> {"p", "1-p"}, PlotLabel -> "1-simplex",
Frame -> True, GridLines -> Automatic,
FrameTicks -> {{Automatic, None}, {Automatic, None}}
]
```
= @fig:surface <code:surface>
```WL
Show[
Graphics3D[
{
{Opacity[.5], Ball[{0, 0, 0}, 1/(2 Sqrt[3])]},
{ColorData[97][1], Thickness[.005], Simplex[{{1, 0, 0}, {0, 1, 0}}]}
},
Axes -> True,
AxesOrigin -> {0, 0, 0},
Boxed -> False
],
ParametricPlot3D[surface /. boundary, {p, 0, 1}, Mesh -> None, PlotStyle -> Red],
ParametricPlot3D[surface, {p, 0, 1}, {q, 0, 1}, PlotStyle -> {Opacity[0.6]}],
ImageSize -> 400]
```
= @fig:3-chips <code:3-chips>
```WL
ParametricPlot3D[
Evaluate[Prepend[surfaces, surface]], {p, 0, 1}, Prepend[Flatten[Values[boundary]], q],
Mesh -> None,
PlotStyle -> Opacity[.5],
PlotPoints -> 200,
Boxed -> False,
Axes -> False,
ImageSize -> 200
]
```
= @fig:basis-vs-povm <POVMvsBasis>
```WL
(*blochsphere is given in previous examples*)
basisPnts =
QuantumState[#]["BlochVector"] & /@
QuantumBasis["QBismSIC"]["Elements"];
qbism = QuantumState[#]["BlochVector"] & /@
QuantumMeasurementOperator["QBismSICPOVM"]["POVMElements"] //
Simplify;
Show[blochsphere,
Graphics3D[{
{MapIndexed[{PointSize[0.01], Sphere[#1, .05], Black,
Text[Subscript["\[ScriptCapitalB]", #2[[1]]], 1.05 #1]} &,
basisPnts],
{Opacity[.2], FaceForm[None],
EdgeForm[{Opacity[.5], Thick, ColorData[97][1]}],
Tetrahedron[basisPnts]}},
{
ColorData[97][3],
MapIndexed[{PointSize[0.01], Sphere[#1, .05], Black,
Text[Subscript["\[ScriptCapitalQ]", #2[[1]]], 1.2 #1]} &,
qbism],
{Opacity[.5], EdgeForm[{Thick, ColorData[97][3]}],
Tetrahedron[qbism]}}
}, Boxed -> False]]
```
= @fig:chip-in-bloch <chip-in-bloch>
```WL
blochsphere = Show[
Graphics3D[{
{ Opacity[0.2], Sphere[]}, Black, Thickness[0.001],
Opacity[1.0],
Splice @ {Line[{{0, 1, 0}, {0, -1, 0}}],
Line[{{0, 0, 1}, {0, 0, -1}}], Line[{{1, 0, 0}, {-1, 0, 0}}]},
Splice @ {
Text[Ket[{0}], {0, 0, 1.05}],
Text[Ket[{1}], {0, 0, -1.05}],
Text[Ket[{"R"}], {0, 1.05, 0}],
Text[Ket[{"L"}], {0, -1.05, 0}],
Text[Ket[{"+"}], {1.05, 0, 0}],
Text[Ket[{"-"}], {-1.05, 0, 0}]}
}, Boxed -> False ],
ParametricPlot3D[
{{Cos[t], Sin[t], 0}, {0, Cos[t], Sin[t]}, {Cos[t],
0, Sin[t]}},
{t, 0, 2 Pi},
PlotStyle -> ConstantArray[{Black, Thin}, 3]
]];
qbism = {{-(1/Sqrt[3]), 1/Sqrt[3], -(1/Sqrt[3])}, {1/Sqrt[
3], -(1/Sqrt[3]), -(1/Sqrt[3])}, {-(1/Sqrt[3]), -(1/Sqrt[3]), 1/
Sqrt[3]}, {1/Sqrt[3], 1/Sqrt[3], 1/Sqrt[3]}};
boundary = {{q ->
1/2 - Sqrt[(-2 + 12 p - 12 p^2)/(1 - 2 p + 2 p^2)]/(
2 Sqrt[6])}, {q ->
1/2 + Sqrt[(-2 + 12 p - 12 p^2)/(1 - 2 p + 2 p^2)]/(
2 Sqrt[6])}};
blochVector = {Sqrt[3] (-1 + 2 q), Sqrt[3] (-1 + p (2 - 4 q) + 2 q),
Sqrt[3] (-1 + 2 p)};
Show[
Graphics3D[{
ColorData[97][3],
MapIndexed[{PointSize[0.01], Sphere[#1, .05], Black,
Text[Subscript["\[ScriptCapitalQ]", #2[[1]]], 1.2 #1]} &, qbism],
{Opacity[.5], EdgeForm[{Thick, ColorData[97][3]}],
Tetrahedron[qbism]}}, Boxed -> False],
ParametricPlot3D[blochVector /. boundary, {p, 0, 1},
PlotStyle -> Red],
ParametricPlot3D[
blochVector, {p, 0, 1}, {q, Sequence @@ boundary[[All, -1, -1]]},
PlotPoints -> 200, Mesh -> None, PlotStyle -> Opacity[.7]],
blochsphere
]
```
= @fig:new-measurements <new-measurements>
```WL
(*get blochsphere from previous examples*)
vec1 = {-Sqrt[-3 + 2/(1 + 2 (-1 + p) p)], (-1 + 2 p) Sqrt[-3 + 2/(
1 + 2 (-1 + p) p)], Sqrt[3] (-1 + 2 p)};
vec2 = {Sqrt[-3 + 2/(
1 + 2 (-1 + p) p)], (1 - 2 p) Sqrt[-3 + 2/(1 + 2 (-1 + p) p)],
Sqrt[3] (-1 + 2 p)};
Legended[Show[
blochsphere,
ParametricPlot3D[{vec1, vec2}, {p, 0, 1}, PlotStyle -> Red],
Graphics3D[{Opacity[.5],
Green,
Sphere[#, .05] & /@ {{1/Sqrt[3], 0, 0}, {-(1/Sqrt[3]), 0, 0}},
Blue,
Sphere[#, .05] & /@ {{0, -(1/Sqrt[3]), 0}, {0, 1/Sqrt[3], 0}},
Cyan, Sphere[#, .05] & /@ {{0, 0, 1/Sqrt[3]}, {0, 0, -(1/Sqrt[3])}}
}]],
Placed[SwatchLegend[{Green, Blue,
Cyan}, {"\!\(\*SubscriptBox[\(\[ScriptCapitalM]\), \(1\)]\)",
"\!\(\*SubscriptBox[\(\[ScriptCapitalM]\), \(2\)]\)",
"\!\(\*SubscriptBox[\(\[ScriptCapitalM]\), \(3\)]\)"}], {.95, \
.85}]]
```
= @fig:ClassicalStates <Wootters-tetra>
```WL
Show[Graphics3D[{{Text[Ket[{"\[UpArrow]"}], {0, 0, 1.3}],
Text[Ket[{"\[DownArrow]"}], {0, 0, -1.3}],
Text[Ket[{"\[RightArrow]"}], {1.3, 0, 0}],
Text[Ket[{"\[LeftArrow]"}], {-1.3, 0, 0}]}, Opacity[.5], Sphere[],
Cyan, Simplex[{{1, 1, 1}, {1, -1, -1}, {-1, -1, 1}, {-1,
1, -1}}]}, Boxed -> False],
ParametricPlot3D[{{-1 +
2 p, (-1 + 2 p) (-1 +
2 (1/2 - Sqrt[-(((-1 + p) p)/(2 - 4 p + 4 p^2))])), -1 +
2 (1/2 - Sqrt[-(((-1 + p) p)/(2 - 4 p + 4 p^2))])}, {-1 +
2 p, (-1 + 2 p) (-1 +
2 (1/2 + Sqrt[-(((-1 + p) p)/(2 - 4 p + 4 p^2))])), -1 +
2 (1/2 + Sqrt[-(((-1 + p) p)/(2 - 4 p + 4 p^2))])}}, {p, 0, 1},
PlotStyle -> Directive[Thickness[.01], Black]],
ParametricPlot3D[{-1 + 2 p, (-1 + 2 p) (-1 + 2 q), -1 +
2 q}, {p, 0, 1}, {q,
1/2 - Sqrt[-(((-1 + p) p)/(2 - 4 p + 4 p^2))],
1/2 + Sqrt[-(((-1 + p) p)/(2 - 4 p + 4 p^2))]}, Mesh -> None,
PlotStyle -> Yellow]]
```
= @fig:ClassicalStatesRegion <Wootters-potato>
```WL
Show[Graphics3D[{{Text[Ket[{"\[UpArrow]"}], {0, 0, 1.3}],
Text[Ket[{"\[DownArrow]"}], {0, 0, -1.3}],
Text[Ket[{"\[RightArrow]"}], {1.3, 0, 0}],
Text[Ket[{"\[LeftArrow]"}], {-1.3, 0, 0}]}, Opacity[.5],
Sphere[]}, Boxed -> False],
ParametricPlot3D[{{-1 +
2 p, (-1 + 2 p) (-1 +
2 (1/2 - Sqrt[-(((-1 + p) p)/(2 - 4 p + 4 p^2))])), -1 +
2 (1/2 - Sqrt[-(((-1 + p) p)/(2 - 4 p + 4 p^2))])}, {-1 +
2 p, (-1 + 2 p) (-1 +
2 (1/2 + Sqrt[-(((-1 + p) p)/(2 - 4 p + 4 p^2))])), -1 +
2 (1/2 + Sqrt[-(((-1 + p) p)/(2 - 4 p + 4 p^2))])}}, {p, 0, 1},
PlotStyle -> Directive[Thickness[.01], Black]],
BoundaryDiscretizeRegion[
CSGRegion[
"Intersection", {Ball[],
Simplex[{{1, 1, 1}, {1, -1, -1}, {-1, -1, 1}, {-1, 1, -1}}]}],
PrecisionGoal -> 4]]
```
= @fig:matreshka <Matreshka>
```WL
densityMatrix[phi_] := KroneckerProduct[phi, Conjugate[phi]]
blochVector[rho_?MatrixQ] := Tr /@ (PauliMatrix[Range[3]] . rho)
blochVector[phi_?VectorQ] := blochVector[densityMatrix[phi]]
basis = {{{1/2 (1 - Sqrt[3]), (-(1/2) - I/2) Sqrt[
3]}, {(-(1/2) + I/2) Sqrt[3],
1/2 (1 + Sqrt[3])}}, {{1/2 (1 - Sqrt[3]), (1/2 + I/2) Sqrt[
3]}, {(1/2 - I/2) Sqrt[3],
1/2 (1 + Sqrt[3])}}, {{1/2 (1 + Sqrt[3]), (-(1/2) + I/2) Sqrt[
3]}, {(-(1/2) - I/2) Sqrt[3],
1/2 (1 - Sqrt[3])}}, {{1/2 (1 + Sqrt[3]), (1/2 - I/2) Sqrt[
3]}, {(1/2 + I/2) Sqrt[3], 1/2 (1 - Sqrt[3])}}};
basisVectors = blochVector /@ basis;
povmProjectors = 2 Inverse[Outer[Tr@*Dot, #, #, 1]] . # &@basis;
feynmanVectors = {{-1, 1, -1}, {1, -1, -1}, {-1, -1, 1}, {1, 1, 1}};
povmVectors = blochVector /@ povmProjectors // Simplify;
labels = {"\[DownArrow]\[LeftArrow]", "\[DownArrow]\[Rule]",
"\[UpArrow]\[LeftArrow]", "\[UpArrow]\[Rule]"};
Legended[Graphics3D[{
Opacity[.5],
Sphere[],
{{Text[Ket[{"\[UpArrow]"}], {0, 0, 1.3}],
Text[Ket[{"\[DownArrow]"}], {0, 0, -1.3}],
Text[Ket[{"\[RightArrow]"}], {1.3, 0, 0}],
Text[Ket[{"\[LeftArrow]"}], {-1.3, 0, 0}]}},
{
ColorData[97][1],
MapIndexed[{PointSize[0.01], Sphere[#1, .05], Black,
Text[Subscript["\[ScriptCapitalB]", labels[[#2[[1]]]]],
1.1 #1]} &, basisVectors], {FaceForm[None],
EdgeForm[{Opacity[.5], Thick, ColorData[97][1]}],
Tetrahedron[basisVectors]}
},
{
ColorData[97][2],
MapIndexed[{PointSize[0.01], Sphere[#1, .05], Black,
Text[Subscript["\[ScriptCapitalF]", labels[[#2[[1]]]]],
1.2 #1]} &, feynmanVectors], {FaceForm[None],
EdgeForm[{Thick, ColorData[97][2]}],
Tetrahedron[feynmanVectors]}
},
{
ColorData[97][3],
MapIndexed[{PointSize[0.01], Sphere[#1, .05], Black,
Text[Subscript["\[ScriptCapitalQ]", labels[[#2[[1]]]]],
1.2 #1]} &, povmVectors], {FaceForm[None],
EdgeForm[{Thick, ColorData[97][3]}], Tetrahedron[povmVectors]}
}
}, Boxed -> False],
SwatchLegend[
ColorData[97] /@ Range[3], {"Basis", "Feynman", "SIC-POVM"}]
]
```
= @fig:PhiContours, @fig:PhiContours-2 <PhiContours>
```WL
Show[
Graphics3D[{{Text[Ket[{"\[UpArrow]"}], {0, 0, 1.3}],
Text[Ket[{"\[DownArrow]"}], {0, 0, -1.3}],
Text[Ket[{"\[RightArrow]"}], {1.3, 0, 0}](*,Text[
Ket[{"\[LeftArrow]"}],{-1.3,0,0}]*)}}],
SliceContourPlot3D[(Sqrt[3] y - x z)/Sqrt[(x^2 - 3) (z^2 - 3)],
"CenterSphere", {x, y, z} \[Element] Ball[], Contours -> 5,
ColorFunction ->
Function[x, Opacity[1, ColorData["SunsetColors"][x]]],
ContourStyle -> Directive[Thick, Black], PlotLegends -> Automatic,
PlotPoints -> 10],
Boxed -> False
]
ContourPlot3D[(
Sqrt[3] y - x z)/Sqrt[(x^2 - 3) (z^2 - 3)], {x, -1, 1}, {y, -1,
1}, {z, -1, 1}, Contours -> Range[1/2, -1/2, -1/4],
RegionFunction -> Function[{x, y, z}, x^2 + y^2 + z^2 < 1],
Mesh -> None, RegionBoundaryStyle -> None, Boxed -> False,
Axes -> False, PlotLegends -> Automatic, PlotRangePadding -> None
]
```
= @fig:ClassicalStates <ClassicalStates>
```WL
Show[Graphics3D[{{Text[Ket[{"\[UpArrow]"}], {0, 0, 1.3}],
Text[Ket[{"\[DownArrow]"}], {0, 0, -1.3}],
Text[Ket[{"\[RightArrow]"}], {1.3, 0, 0}],
Text[Ket[{"\[LeftArrow]"}], {-1.3, 0, 0}]}, Opacity[.5], Sphere[],
Cyan, Simplex[{{-((1 + Sqrt[3])/Sqrt[2 (2 + Sqrt[3])]), (
1 + Sqrt[3])/Sqrt[
2 (2 + Sqrt[3])], -((1 + Sqrt[3])/Sqrt[2 (2 + Sqrt[3])])}, {(
1 + Sqrt[3])/Sqrt[
2 (2 + Sqrt[3])], -((1 + Sqrt[3])/Sqrt[2 (2 + Sqrt[3])]), -((
1 + Sqrt[3])/Sqrt[2 (2 + Sqrt[3])])}, {-1, -1, 1}, {1, 1,
1}}]}, Boxed -> False],
ParametricPlot3D[{{-1 +
2 p, (-1 + 2 p) (-1 +
2 (1/2 - Sqrt[-(((-1 + p) p)/(2 - 4 p + 4 p^2))])), -1 +
2 (1/2 - Sqrt[-(((-1 + p) p)/(2 - 4 p + 4 p^2))])}, {-1 +
2 p, (-1 + 2 p) (-1 +
2 (1/2 + Sqrt[-(((-1 + p) p)/(2 - 4 p + 4 p^2))])), -1 +
2 (1/2 + Sqrt[-(((-1 + p) p)/(2 - 4 p + 4 p^2))])}}, {p, 0, 1},
PlotStyle -> Directive[Thickness[.01], Black]],
ParametricPlot3D[{-1 + 2 p, (-1 + 2 p) (-1 + 2 q), -1 + 2 q}, {p, 0,
1}, {q, 1/2 - Sqrt[-(((-1 + p) p)/(2 - 4 p + 4 p^2))],
1/2 + Sqrt[-(((-1 + p) p)/(2 - 4 p + 4 p^2))]}, Mesh -> None,
PlotStyle -> Yellow]
]
```
= @eq:wootters-matrix <code:wootters-matrix>
```WL
(*Make sure the Wolfram quantum paclet is already installed*)
QuantumBasis["Wootters"]["Matrix"] // MatrixForm
```
= @eq:bloch-in-wooters <code:bloch-in-wooters>
```WL
(*Make sure the Wolfram quantum paclet is already installed*)
ArrayReshape[
QuantumPhaseSpaceTransform[QuantumState["BlochVector"[{x, y, z}]],
"Wootters"]["AmplitudesList"], {2, 2}]
```
= @eq:constraint-Wootters <code:constraint-Wootters>
```WL
Solve[Simplify[
Norm[QuantumWeylTransform[
QuantumState[Flatten[KroneckerProduct[{p, 1 - p}, {q, 1 - q}]],
"Wootters"]]["BlochVector"]] == 1], q, Reals] // FullSimplify
```
= @tab:channels <code:channels>
```WL
\[Psi]i =
QuantumState[
"BlochVector"[FromSphericalCoordinates[{1, \[Theta], \[Phi]}]]];
GraphicsGrid[
Partition[
With[{bloch =
QuantumChannel[{StringDelete[#, "\n"], .35}][\[Psi]i][
"BlochCartesianCoordinates"]},
Show[QuantumState["UniformMixture"]["BlochPlot",
"ShowLabels" -> False],
ParametricPlot3D[
bloch, {\[Theta], 0, \[Pi]}, {\[Phi], 0, 2 \[Pi]},
PlotRange -> {{-1, 1}, {-1, 1}, {-1, 1}},
PlotStyle -> Opacity[.5], Mesh -> None],
PlotLabel -> #]] & /@ {"BitFlip", "PhaseFlip", "PhaseDamping",
"BitPhaseFlip", "Depolarizing", "AmplitudeDamping"},
UpTo[3]], ImageSize -> Full
]
```
= @tab:channels-on-potato <code:channels-on-potato>
```WL
\[Xi] = 1/3;
potato =
ParametricPlot3D[{Sqrt[3] (-1 + 2 q),
Sqrt[3] (-1 + p (2 - 4 q) + 2 q), Sqrt[3] (-1 + 2 p)}, {q, 0,
1}, {p, 1/6 (3 - Sqrt[-9 + 6/(1 + 2 (-1 + q) q)]),
1/6 (3 + Sqrt[-9 + 6/(1 + 2 (-1 + q) q)])}, PlotPoints -> 120,
PlotStyle -> Opacity[.75], Mesh -> None,
PlotRange -> {{-1, 1}, {-1, 1}, {-1, 1}}, Axes -> False,
Boxed -> False];
bloch = QuantumState["UniformMixture"]["BlochPlot",
"ShowLabels" -> False, "ShowAxes" -> False];
opts = {PlotRange -> {{-1, 1}, {-1, 1}, {-1, 1}}, Boxed -> False,
Axes -> False, ImagePadding -> -40};
GraphicsGrid[
Partition[{Show[
ParametricPlot3D[{Sqrt[3] (-1 + 2 q),
Sqrt[3] (-1 + 2 p) (-1 + 2 q) (-1 + 2 \[Xi]),
Sqrt[3] (-1 + 2 p) (1 - 2 \[Xi])}, {q, 0, 1}, {p,
1/6 (3 - Sqrt[-9 + 6/(1 + 2 (-1 + q) q)]),
1/6 (3 + Sqrt[-9 + 6/(1 + 2 (-1 + q) q)])}, PlotStyle -> Green,
PlotPoints -> 50], potato, bloch, PlotLabel -> "BitFlip", opts],
Show[ParametricPlot3D[{Sqrt[3] (-1 + 2 q) (1 - 2 \[Xi]),
Sqrt[3] (-1 + 2 p) (-1 + 2 q) (-1 + 2 \[Xi]),
Sqrt[3] (-1 + 2 p)}, {q, 0, 1}, {p,
1/6 (3 - Sqrt[-9 + 6/(1 + 2 (-1 + q) q)]),
1/6 (3 + Sqrt[-9 + 6/(1 + 2 (-1 + q) q)])}, PlotStyle -> Green,
PlotPoints -> 50], potato, bloch, PlotLabel -> "PhaseFlip",
opts], Show[
ParametricPlot3D[{(-1 + 2 q) Sqrt[
3 - 3 \[Xi]], -((-1 + 2 p) (-1 + 2 q) Sqrt[3 - 3 \[Xi]]),
Sqrt[3] (-1 + 2 p)}, {q, 0, 1}, {p,
1/6 (3 - Sqrt[-9 + 6/(1 + 2 (-1 + q) q)]),
1/6 (3 + Sqrt[-9 + 6/(1 + 2 (-1 + q) q)])}, PlotStyle -> Green,
PlotPoints -> 50], potato, bloch, PlotLabel -> "PhaseDamping",
opts], Show[
ParametricPlot3D[{Sqrt[
3] (-1 + 2 q) (1 - 2 \[Xi]), -Sqrt[3] (-1 + 2 p) (-1 + 2 q),
Sqrt[3] (-1 + 2 p) (1 - 2 \[Xi])}, {q, 0, 1}, {p,
1/6 (3 - Sqrt[-9 + 6/(1 + 2 (-1 + q) q)]),
1/6 (3 + Sqrt[-9 + 6/(1 + 2 (-1 + q) q)])}, PlotStyle -> Green,
PlotPoints -> 50], potato, bloch, PlotLabel -> "BitPhaseFlip",
opts], Show[
ParametricPlot3D[{-Sqrt[3] (-1 + 2 q) (-1 + \[Xi]),
Sqrt[3] (-1 + 2 p) (-1 + 2 q) (-1 + \[Xi]), -Sqrt[3] (-1 +
2 p) (-1 + \[Xi])}, {q, 0, 1}, {p,
1/6 (3 - Sqrt[-9 + 6/(1 + 2 (-1 + q) q)]),
1/6 (3 + Sqrt[-9 + 6/(1 + 2 (-1 + q) q)])}, PlotStyle -> Green,
PlotPoints -> 50], potato, bloch, PlotLabel -> "Depolarizing",
opts], Show[
ParametricPlot3D[{(-1 + 2 q) Sqrt[
3 - 3 \[Xi]], -((-1 + 2 p) (-1 + 2 q) Sqrt[
3 - 3 \[Xi]]), -Sqrt[3] - 2 Sqrt[3] p (-1 + \[Xi]) + \[Xi] +
Sqrt[3] \[Xi]}, {q, 0, 1}, {p,
1/6 (3 - Sqrt[-9 + 6/(1 + 2 (-1 + q) q)]),
1/6 (3 + Sqrt[-9 + 6/(1 + 2 (-1 + q) q)])}, PlotStyle -> Green,
PlotPoints -> 50], potato, bloch,
PlotLabel -> "AmplitudeDamping", opts]}, UpTo[3]],
Spacings -> -40]
```
= @eq:Liovillian <code:Liovillian>
```WL
P = {p, 1 - p};
Q = {q, 1 - q} /. q -> 1/2 - Sqrt[-(((-1 + p) p)/(2 - 4 p + 4 p^2))];
ℒ1 = {{x, -x}, {-x, x}} /.
Solve[Thread[{{x, -x}, {-x, x}} . P == D[P, p], x], x][[1]]
ℒ2 = {{y, -y}, {-y, y}} /.
Solve[Thread[{{y, -y}, {-y, y}} . Q == D[Q, p], y], y][[1]]
ℒ =
MatrixLog[
KroneckerProduct[MatrixExp[ℒ1],
MatrixExp[ℒ2]]] // ComplexExpand // FullSimplify
```
= @fig:evolution <code:evolution>
```WL
proba = Flatten[KroneckerProduct[{p, 1 - p}, {q, 1 - q}]];
state = QuantumWeylTransform[
QuantumState[proba, "Wootters", "Parameters" -> {p, q}]];
sol = Solve[Simplify[Norm[state["BlochVector"]] == 1], q, Reals] //
Simplify // Normal;
L1 = QuantumOperator[{{2/Sqrt[1 - 2 p], 0}, { 0 , 0}}];
L2 = QuantumOperator[{{0, Sqrt[-1 + 2 p]/(
2 Sqrt[(-1 + p) p (1 + 2 (-1 + p) p)])}, { Sqrt[-1 + 2 p]/(
2 Sqrt[(-1 + p) p (1 + 2 (-1 + p) p)]), 0}}];
\[Gamma] = (-1)^{Boole[p > 1/2], Boole[p <= 1/2]};
p0 = 0.0001;
p1 = 1;
init1 = state[<|p -> p, sol[[1]]|> /. p -> p0];
init2 = state[<|p -> p, sol[[2]]|> /. p -> p0];
final1 =
Quiet@QuantumEvolve[
QuantumOperator["Hamiltonian"[{L1, L2}, \[Gamma]]],
init1, {p, p0, p1}];
final2 =
Quiet@QuantumEvolve[
QuantumOperator["Hamiltonian"[{L1, L2}, \[Gamma]]],
init2, {p, p0, p1}];
Show[QuantumState["-"]["BlochPlot", "ShowAxes" -> False,
"ShowArrow" -> False],
Graphics3D[{Red, Thick, Arrow[{{0, 0, 0}, init1["BlochVector"]}],
Blue, Arrow[{{0, 0, 0}, init2["BlochVector"]}]}],
ParametricPlot3D[
Evaluate[{final1["BlochVector"], final2["BlochVector"]}], {p, p0,
p1}, PlotStyle -> {Directive[Thickness[.01], Red],
Directive[Thickness[.01], Blue]}],
ParametricPlot3D[{-1 + 2 p, (-1 + 2 p) (-1 + 2 q), -1 + 2 q}, {p, 0,
1}, {q, 1/2 - Sqrt[-(((-1 + p) p)/(2 - 4 p + 4 p^2))],
1/2 + Sqrt[-(((-1 + p) p)/(2 - 4 p + 4 p^2))]}, Mesh -> None,
PlotStyle -> Yellow]]
```
//bloch vector for chip
// ```WL
// r = {{1/2 - Sqrt[3]/2 + Sqrt[3] p, (1/2 + I/2) Sqrt[
// 3] (-I + (1 + I) p) (-1 + 2 q)}, {(-(1/2) - I/2) Sqrt[
// 3] (-1 + (1 + I) p) (-1 + 2 q),
// 1/2 (1 + Sqrt[3] - 2 Sqrt[3] p)}};
// sol = Normal@Solve[Eigenvalues[r][[1]] == 0, q, Reals];
// vec1 = FullSimplify[{Sqrt[3] (-1 + 2 q),
// Sqrt[3] (-1 + p (2 - 4 q) + 2 q), Sqrt[3] (-1 + 2 p)} /.
// sol[[1]]]
// vec2 = FullSimplify[{Sqrt[3] (-1 + 2 q),
// Sqrt[3] (-1 + p (2 - 4 q) + 2 q), Sqrt[3] (-1 + 2 p)} /.
// sol[[2]]]
// ```
|
|
https://github.com/MultisampledNight/diagram | https://raw.githubusercontent.com/MultisampledNight/diagram/main/README.md | markdown | Other | # diagram
A few curious diagrams trying to explain things, made in [Typst].
Note: No guarantee for correctness is made.
It is actually quite likely that something is wrong.
In that case, I'd appreciate it if you let me know, e.g. by opening an issue or a pull request.
## how do i get pdf
1. Clone this repository
2. Get the Typst CLI, see https://github.com/typst/typst?tab=readme-ov-file#installation
3. Run the script `compile-all`
4. In the newly created folder `target/docs` are all PDF diagrams now
# Currently in here
## Linux audio system overview

Shows the journey of audio data from the program
to the kernel.
The layers are:
1. Program
2. API
3. Adapter
4. Server
5. Kernel
The **program** has audio it would like to play
but since it cannot talk directly to the hardware
it needs to resort to an API.
It is possible to talk either
*directly* to the kernel
or *indirectly* through wrappers.
On the **API** layer,
there are
OSS and ALSA
on the direct side and
PipeWire, PulseAudio and JACK
on the indirect side.
They represent a certain interface usable for the program and
have been bound to specific servers in the past.
The **adapter** layer however
relaxes the API-server bijection.
An adapter accepts one API
but actually sends the audio data
to a different server.
It is of course not necessary to use an adapter
if the corresponding server is installed.
Specifically commonly found are:
- Sending to PipeWire
- pipewire-pulse, adapter to the PulseAudio API
- pipewire-jack, adapter to the JACK API
- Sending to PulseAudio
- pulseaudio-jack, adapter to the JACK API
- padsp, adapter to the OSS API
- Sending to ALSA
- alsa-oss, adapter to the OSS API
One step before home, the **server** layer
mixes everything together,
enumerates multiple output devices and
sends the final desired results to the kernel.
Although they are called servers,
they usually run on the same machine as the program.
Here to be found are the
PipeWire, PulseAudio, JACK1 and JACK2 servers.
JACK1 and JACK2 both implement the JACK API,
the latter is simply a rewrite of the former,
but both are still in use.
Finally, the **kernel** layer receives the server's result
and takes care of throwing it to the hardware.
Here are the older OSS and the newer ALSA.
## License
CC BY-NC-SA 4.0, see [LICENSE.md](./LICENSE.md)
[Typst]: https://typst.app
|
https://github.com/AHaliq/CategoryTheoryReport | https://raw.githubusercontent.com/AHaliq/CategoryTheoryReport/main/chapters/chapter4/mandatory.typ | typst | #import "../../preamble/lemmas.typ": *
#import "../../preamble/catt.typ": *
#import "@preview/fletcher:0.5.1" as fletcher: diagram, node, edge
#exercise("1")[Suppose the square below is a pullback. Show the following
#exercise("1.1")[If $g$ is a split epimorphism then $k$ is]
#figure(diagram(cell-size: 5mm,
$
A
edge("r", h, ->)
#edge("d", $k$, "->", label-anchor: "west", label-sep: 0em)
#corner("dr") &
B
edge("d", g, "->>") \
C
edge("r", f, "->")
#edge("u", $s'$, ">->", bend: 30deg) &
D
#edge("u", $s$, ">->", bend: -30deg)
$))
1. By definition of a pullback we know $A = C times_D B = {(c,b) | f(c) = g(b)}$
2. since $g$ is a split epi, $exists s . g comp s = 1_D$; define $s' = [c |-> (c, (s comp f)(c))]$
$
k comp s' &= [c |-> k(c,(s comp f)(c))] \
&= [c |-> c] \
&= 1_C
$
3. thus $s'$ is a section of $k$
4. $k$ is also an epimorphism
$
m comp k = n comp k
<=> &m comp k comp s' = n comp k comp s' \
<=> &m comp 1_C = n comp 1_C \
<=> &m = n
$
5. therefore $k$ is a split epi
#exercise("1.2")[If $g$ is an isomorphism then $k$ is]
$
s' comp k
&= [(c,b) |-> (s' comp k)(c,b)] \
&= [(c,b) |-> s'(c)] \
&= [(c,b) |-> (c,s comp f(c))] \
&= [(c,b) |-> (c,s comp g(b))] && "by UMP of pullback" \
&= [(c,b) |-> (c,b)] && "by isomorphism of "g,s \
&= 1_A
$
therefore the section $s'$ is also a left inverse making $k$ an isomorphism
#exercise("1.3")[If $g$ is a split monomorphism then $k$ is not necessarily a split monomorphism]
1. a counterexample is given by taking $A = {(1,1), (1,2)}$
2. then $forall p. k(p) = 1$
3. therefore $cancel(exists) s'. s' comp k = 1_A$
]
#exercise("2")[Recall that the two-pullback lemma, or the pullback pasting lemma, show that if the left square and the outer rectangle are pullbacks then the right square can fail to be a pullback]
#figure(diagram(cell-size: 5mm,
$
{star}
edge("r", [star |-> star], ->)
edge("d", 1, ->)
#corner("dr") &
{star, star'}
edge("d", [x |-> star], "->")
edge("r", f, ->)
#corner("dr") &
{star, star'}
#edge("d", $[x |-> star]$, "->", label-anchor: "west", label-sep: 0em) \
{star}
edge("r", 1, "->") &
{star}
edge("r", 1, ->) &
{star}
$))
1. the left square is a pullback since $exists !u. 1 comp 1 comp u = [x |-> star] comp [star |-> star] comp u$
- where $u=[x |-> star]$
2. the outer rectangle is a pullback since $exists !u. 1 comp 1 comp 1 comp u = [x |-> star] comp f comp [star |-> star] comp u$
- where $u=[x |-> star]$
3. the right rectangle is not a pullback since $u$ is not unique in $1 comp [x |-> star] comp u = [x |-> star] comp f comp u$
- where both $u=[x |-> star]$ and $u=[x |-> star']$ satisfy the equation
#exercise("3")[A regular monomorphism is an arrow $mono(e,E,A)$ which is an equalizer of some pair of arrows $f,g: A -> B$. Recall that by Proposition 3.16 of SA $e$ is in particular a monomorphism. Show that in a pullback square below, if $e$ is a regular monomorphism then so is $e'$ for any object $C$ and any arrow $h$. This property is often called "regular monos are stable under pullback".]
#figure(grid(columns: (auto, 1fr), align: (center + horizon, horizon + left), gutter: 2em,
figure(diagram(cell-size: 5mm,
$
E'
#corner("dr")
edge("r", h', ->)
edge("d", e', ->) &
E
edge("d", e, ->) \
C
edge("r", h, ->) &
A
$)),
[
Given
1. $C <-^e' E' ->^h' E$ are a pullback of $C ->^h A <-^e E$
2. $arr(e,E,A)$ is a regular monomorphism
],
grid.cell(colspan: 2, align: left, [
We prove that $e'$ is a regular monomorphism by showing
- $e'$ is a monomorphism by (4)
- $e'$ is an equalizer for any $C$ and $h$ by (5) and (6)
])))
#figure(grid(columns: (auto, 1fr), align: (center + horizon, horizon + left), gutter: 2em,
figure(diagram(cell-size: 5mm,
$
X
#edge("rr", $f$, "->", bend: 30deg)
#edge("dr", $g$, "->", bend: -30deg)
edge("r", u, "-->") &
E'
#corner("dr")
edge("r", h', ->)
edge("d", e', ->) &
E
edge("d", e, >->) \
& C
edge("r", h, ->) &
A
$)),
[
3. Given by (1) and @defn-pullback, for any $X$
1. $(h comp e' = e comp h') and $
2. $forall X, (arr(f,X,E)), (arr(g,X,C)), exists! (arr(u,X,E')).$
3. $#h(1em) (e' comp u = g) and$
4. $#h(1em) (h' comp u = f)$
],
grid.cell(colspan: 2, align: left, [
*Explanation*: Given the pullback, for any $X$ and morphisms $arr(f,X,E),arr(g,X,C)$, there exists a unique $u$ such that the square commutes even with $u$ precomposed and that the triangles $X,E',C$ and $X,E',E$ commutes as well.
])))
#figure(grid(columns: (auto, 1fr), align: (center + horizon, horizon + left), gutter: 1em,
figure(diagram(cell-size: 5mm,
$
X
#edge("r", $u_1$, "->", shift: 3pt)
#edge("r", $u_2$, "->", shift: -3pt, label-anchor: "north", label-sep: 0em) &
E'
#corner("dr")
edge("r", h', ->)
edge("d", e', >->) &
E
edge("d", e, >->) \
& C
edge("r", h, ->) &
A
$)),
[
Let $forall (u_1, u_2 : X -> E').$ be the scope of the following formulas
4. By (3.1) where $h comp e' = e comp h'$ we know the following
$#h(1em) ((e comp h') comp u_1 = (h comp e') comp u_1) and ((e comp h') comp u_2 = (h comp e') comp u_2)$
5. Supposing $e' comp u_1 = e' comp u_2$ we derive from (4) that
$#h(1em) e' comp u_1 = e' comp u_2 =>\
#h(1em) e comp h' comp u_1 = h comp e' comp u_1 = h comp e' comp u_2 = e comp h' comp u_2$
6. Given (2) where $e$ is a monomorphism we know
$#h(1em) e comp h' comp u_1 = e comp h' comp u_2 => h' comp u_1 = h' comp u_2$
7. The consequent of (5) satisfies the antecedent of (6), thus by transitivity we have
$#h(1em) e' comp u_1 = e' comp u_2 => h' comp u_1 = h' comp u_2$
8. Given (1) and (3), there must exist a unique $v:X -> E'$, since both $u_1, u_2$ satisfies $v$, they must be equal
$#h(1em) h' comp u_1 = h' comp v and e' comp u_1 = e' comp v and \
#h(1em) h' comp u_2 = h' comp v and e' comp u_2 = e' comp v =>
u_1 = u_2$
9. (7) satisfies the antecedent of (8), thus by transitivity we have that $e'$ is a monomorphism
$#h(1em) e' comp u_1 = e' comp u_2 => u_1 = u_2$
],
grid.cell(colspan: 2, align: left, [
*Explanation*: The square commutes (4), additionally with $e' comp u_1 = e' comp u_2$ as the premise for $e'$ to be a monomorphism, we have that $e comp h' comp u_1 = e comp h' comp u_2$ in (5). This satisfies the premise for $e$ to be a monomorphism in (6). Thus we get the consequent $h' comp u_1 = h' comp u_2$ in (7). Moreover, since morphisms $X -> E'$ have to be unique by UMP of the pullback, they must be equal (8). Therefore $e'$ is a monomorphism (9).
])))
#figure(grid(columns: (auto, 1fr), align: (center + horizon, horizon + left), gutter: 1em,
figure(diagram(cell-size: 5mm,
$
X
#edge("dr", $x_C$, "->")
edge("r", u, "-->") &
E'
#corner("dr")
edge("r", h', ->)
edge("d", e', >->) &
E
edge("d", e, >->) \
& C
edge("r", h, ->) &
A
#edge("d", $a$, "->", shift: 3pt, label-anchor: "west", label-sep: 0em)
#edge("d", $a'$, "->", shift: -3pt, label-anchor: "east", label-sep: 0em) \
& & B
$)),
[
Let $forall (a,a': A -> B).$ be the scope of the following formulas
10. By (2) where $e$ is an equalizer
$#h(1em) a comp e = a' comp e$
11. By (3.1); $h comp e' = e comp h'$, and (10), $e'$ equalizes $a comp h$ and $a' comp h$
$#h(1em) (a comp e) comp h' = (a' comp e) comp h' = \
#h(1em) (a comp h) comp e' = (a' comp h) comp e'$
12. Given (1) and (3), there must exist a unique $arr(u,X,E')$ for any $X$ and $arr(x_C,X,C)$ such that $e' comp u = x_C$ in (3.3).
$#h(1em) a comp h comp e' comp u = a comp h comp x_C and\
#h(1em) a' comp h comp e' comp u = a' comp h comp x_C$
13. Since $e'$ is universal (12), equalizes on $h$ (11) and is a monomorphism (9), $e'$ must be a regular monomorphism.
],
grid.cell(colspan: 2, align: left, [
*Explanation*: It is given $e$ is an equalizer for $a,a'$ in (10) if we precompose it by $h'$ in (11) it will still hold. Since the square commutes the identity holds for $a,a'$ composed with $h comp e'$ as well. Moreover $e'$ is universal by the the uniqueness of $u$ induced by the pullback. Therefore $e'$ is an equalizer for $h comp a$ and $h comp a'$. Making it both a monomorphism and an equalizer and thus a regular monomorphism $square.filled$
]),
))
#pagebreak()
#exercise("4")[Let $mono(e,A,B)$ be a regular monomorphism. Show that if the square is a pushout then $e$ is the equalizer of $x$ and $y$]
#figure(grid(columns: (auto, 1fr), align: (center + horizon, horizon + left), gutter: 1em,
figure(diagram(cell-size: 5mm,
$
A
edge("r", e, >->)
edge("d", e, >->) &
B
edge("d", y, >->) \
B
edge("r", x, ->) &
Q
#corner("ul")
$)),
[
1. we have the following pushout with $e$ as a regular monomorphism
],
figure(diagram(cell-size: 5mm,
$
A
edge("r", e, >->)
edge("d", e, >->) &
B
edge("d", y, >->)
edge("dr", x_B, ->) \
B
edge("r", x, ->)
#edge("rr", $x_B$, "->", bend: -30deg) &
Q
#corner("ul")
edge("r", u, "-->") &
X
$)),
[
2. for any $X$, let $arr(u,Q,X)$ be the unique morphism induced by the pushout UMP
3. thus $e comp x comp u = e comp y comp u$
4. thus $e comp x = e comp y => x = y$, satisfies the existence equalizer UMP
],
figure(diagram(cell-size: 5mm,
$
Y
edge("r", v, "-->")
#edge("dr", $y_B$, "->", label-anchor: "north", label-sep: 0em)
#edge("rr", $y_B$, "->", bend: 30deg) &
A
edge("r", e, >->)
edge("d", e, >->) &
B
edge("d", y, >->)
edge("dr", x_B, ->) \
& B
edge("r", x, ->)
#edge("rr", $x_B$, "->", bend: -30deg) &
Q
#corner("ul")
edge("r", u, "-->") &
X
$)),
[
5. for any $Y$, $arr(v,Y,A)$ must be unique by $e$ being a mono
6. thus $v comp e = y_B$, satisfies the uniqueness equalizer UMP
7. therefore by (4) and (6) $e$ is the equalizer of $x$ and $y$
],
))
#exercise("5")[Let $arr(F, bb(C), bb(D))$ be a finite limit preserving functor. Show that for any monomorphism $mono(m,A,B)$ the morphism $arr(F(m), F(A), F(B))$ is also a monomorphism, i.e. F preserves monomorphisms. Dualizing, show that if $arr(F,bb(C),bb(D))$ preserves finite colimits then it preserves epimorphisms.]
#exercise("6")[Give an example of each of a functor $Set -> Set$ that:
#exercise("6.1")[Both preserves and creates terminal objects;]
#exercise("6.2")[Preserves, but does not create, terminal objects]
#exercise("6.3")[Neither preserves nor creates terminal objects]
#exercise("6.4")[Finally show that any functor $Set -> Set$ which creates terminal objects also preserves them.]
] |
|
https://github.com/SillyFreak/typst-packages-old | https://raw.githubusercontent.com/SillyFreak/typst-packages-old/main/scrutinize/src/lib.typ | typst | MIT License | #import "grading.typ"
#import "question.typ"
#import "questions.typ"
|
https://github.com/mem-courses/calculus | https://raw.githubusercontent.com/mem-courses/calculus/main/functions.typ | typst | #let prob(bgcolor: luma(252), border: luma(88), text) = block(
fill: bgcolor,
width: 100%,
inset: 1em,
radius: 3pt,
stroke: border + 0.5pt,
text,
)
#let def(x) = text(x, weight: "bold")
#let bb(x) = {
text(x, weight: "bold")
}
#let badge(content, fill: rgb("#000000")) = box(
fill: fill,
radius: 4pt,
inset: 1pt,
outset: 3pt,
text(
content,
weight: "bold",
size: 10pt,
fill: rgb("#ffffff"),
),
)
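// Status badges for marking how a worked problem turned out (correct, partially correct, wrong answer, or unknown).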
#let ac = badge("Correct", fill: rgb("#25ad40"))
#let pc = badge("Partially Correct", fill: rgb("#01bab2"))
#let wa = badge("Wrong Answer", fill: rgb("#ff4f4f"))
#let un = badge("Unknown", fill: rgb("#5c5c5c"))
#let dp = math.display
#let sp = $space$
#let eps = $epsilon$
#let sim = "~ "
#let st = "s.t. "
#let pm = $plus.minus$
#let mp = $minus.plus$
#let dx = math.upright("d") + math.italic("x")
#let dy = math.upright("d") + math.italic("y")
#let dz = math.upright("d") + math.italic("z")
#let dt = math.upright("d") + math.italic("t")
#let du = math.upright("d") + math.italic("u")
#let dv = math.upright("d") + math.italic("v")
#let px = $partial x$
#let py = $partial y$
#let pz = $partial z$
#let pu = $partial u$
#let pv = $partial v$
#let ps = $partial s$
#let pt = $partial t$
#let pf = $partial f$
#let Dx = $Delta x$
#let Dy = $Delta y$
#let Dt = $Delta t$
#let Du = $Delta u$
#let Dv = $Delta v$
#let ddy = math.attach(math.upright("d"), tr: "2") + math.italic("y")
#let dddy = math.attach(math.upright("d"), tr: "3") + math.italic("y")
#let dny = math.attach(math.upright("d"), tr: "n") + math.italic("y")
#let dsg = $dif sigma$
#let arccot = math.op("arccot")
#let defeq = math.attach("=", t: $Delta$)
#let atpos(f, b, t) = $lr(#v(1.5em)display(#f) |)_(#b\ "")^(\ #t)$
#let grad = math.bold(math.upright("grad"))
#let deri(x, y) = $display((dif #x)/(dif #y))$
#let pderi(x, y) = $display((diff #x)/(diff #y))$
#let int = $integral$
#let iint = $integral.double$
#let iiint = $integral.triple$
#let iiiint = $integral.quad$
#let intc = $integral.cont$
#let intb(x) = [
$display(attach(integral, b: #x, br: ""))$
#set text(size: 0.25em, fill: red.transparentize(98%))
#x
]
#let iintb(x) = [
$display(attach(integral.double, b: #x, br: ""))$
#set text(size: 0.25em, fill: red.transparentize(98%))
#x
]
#let iiintb(x) = [
$display(attach(integral.triple, b: #x, br: ""))$
#set text(size: 0.25em, fill: red.transparentize(98%))
#x
]
#let intcb(x) = [
$display(attach(integral.cont, b: #x, br: ""))$
#set text(size: 0.25em, fill: red.transparentize(98%))
#x
]
#let iintcb(x) = [
$display(attach(integral.surf, b: #x, br: ""))$
#set text(size: 0.25em, fill: red.transparentize(98%))
#x
]
#let rot = math.bold(math.upright("rot"))
#let video = "🎥"
#let star = "🌟"
#let record(text) = [
#video #text
] |
|
https://github.com/touying-typ/touying | https://raw.githubusercontent.com/touying-typ/touying/main/themes/metropolis.typ | typst | MIT License | // This theme is inspired by https://github.com/matze/mtheme
// The origin code was written by https://github.com/Enivex
#import "../src/exports.typ": *
#let _typst-builtin-align = align
/// Default slide function for the presentation.
///
/// - `title` is the title of the slide. Default is `auto`.
///
/// - `config` is the configuration of the slide. You can use `config-xxx` to set the configuration of the slide. To combine several configurations, you can use `utils.merge-dicts` to merge them.
///
/// - `repeat` is the number of subslides. Default is `auto`, which means touying will automatically calculate the number of subslides.
///
/// The `repeat` argument is necessary when you use `#slide(repeat: 3, self => [ .. ])` style code to create a slide. The callback-style `uncover` and `only` cannot be detected by touying automatically.
///
/// - `setting` is the setting of the slide. You can use it to add some set/show rules for the slide.
///
/// - `composer` is the composer of the slide. You can use it to set the layout of the slide.
///
/// For example, `#slide(composer: (1fr, 2fr, 1fr))[A][B][C]` to split the slide into three parts. The first and the last parts will take 1/4 of the slide, and the second part will take 1/2 of the slide.
///
/// If you pass a non-function value like `(1fr, 2fr, 1fr)`, it will be assumed to be the first argument of the `components.side-by-side` function.
///
/// The `components.side-by-side` function is a simple wrapper of the `grid` function. It means you can use the `grid.cell(colspan: 2, ..)` to make the cell take 2 columns.
///
/// For example, `#slide(composer: 2)[A][B][#grid.cell(colspan: 2)[Footer]]` will make the `Footer` cell take 2 columns.
///
/// If you want to customize the composer, you can pass a function to the `composer` argument. The function should receive the contents of the slide and return the content of the slide, like `#slide(composer: grid.with(columns: 2))[A][B]`.
///
/// - `..bodies` is the contents of the slide. You can call the `slide` function with syntax like `#slide[A][B][C]` to create a slide.
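///
/// Example (an illustrative sketch of the API described above; the title and contents are placeholders):
///
/// ```typst
/// #slide(title: [Overview], composer: (1fr, 2fr))[
///   Left column
/// ][
///   Right column
/// ]
/// ```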
#let slide(
title: auto,
align: auto,
config: (:),
repeat: auto,
setting: body => body,
composer: auto,
..bodies,
) = touying-slide-wrapper(self => {
if align != auto {
self.store.align = align
}
// restore typst builtin align function
let align = _typst-builtin-align
let header(self) = {
set align(top)
show: components.cell.with(fill: self.colors.secondary, inset: 1em)
set align(horizon)
set text(fill: self.colors.neutral-lightest, weight: "medium", size: 1.2em)
components.left-and-right(
{
if title != auto {
utils.fit-to-width.with(grow: false, 100%, title)
} else {
utils.call-or-display(self, self.store.header)
}
},
utils.call-or-display(self, self.store.header-right),
)
}
let footer(self) = {
set align(bottom)
set text(size: 0.8em)
pad(
.5em,
components.left-and-right(
text(fill: self.colors.neutral-darkest.lighten(40%), utils.call-or-display(self, self.store.footer)),
text(fill: self.colors.neutral-darkest, utils.call-or-display(self, self.store.footer-right)),
),
)
if self.store.footer-progress {
place(bottom, components.progress-bar(height: 2pt, self.colors.primary, self.colors.primary-light))
}
}
let self = utils.merge-dicts(
self,
config-page(
fill: self.colors.neutral-lightest,
header: header,
footer: footer,
),
)
let new-setting = body => {
show: align.with(self.store.align)
set text(fill: self.colors.neutral-darkest)
show: setting
body
}
touying-slide(self: self, config: config, repeat: repeat, setting: new-setting, composer: composer, ..bodies)
})
/// Title slide for the presentation. You should update the information in the `config-info` function. You can also pass the information directly to the `title-slide` function.
///
/// Example:
///
/// ```typst
/// #show: metropolis-theme.with(
/// config-info(
/// title: [Title],
/// logo: emoji.city,
/// ),
/// )
///
/// #title-slide(subtitle: [Subtitle], extra: [Extra information])
/// ```
///
/// - `extra` is the extra information you want to display on the title slide.
#let title-slide(
extra: none,
..args,
) = touying-slide-wrapper(self => {
let info = self.info + args.named()
let body = {
set text(fill: self.colors.neutral-darkest)
set align(horizon)
block(
width: 100%,
inset: 2em,
{
components.left-and-right(
{
text(size: 1.3em, text(weight: "medium", info.title))
if info.subtitle != none {
linebreak()
text(size: 0.9em, info.subtitle)
}
},
text(2em, utils.call-or-display(self, info.logo)),
)
line(length: 100%, stroke: .05em + self.colors.primary-light)
set text(size: .8em)
if info.author != none {
block(spacing: 1em, info.author)
}
if info.date != none {
block(spacing: 1em, utils.display-info-date(self))
}
set text(size: .8em)
if info.institution != none {
block(spacing: 1em, info.institution)
}
if extra != none {
block(spacing: 1em, extra)
}
},
)
}
self = utils.merge-dicts(
self,
config-common(freeze-slide-counter: true),
config-page(fill: self.colors.neutral-lightest),
)
touying-slide(self: self, body)
})
/// New section slide for the presentation. You can update it by updating the `new-section-slide-fn` argument for `config-common` function.
///
/// Example: `config-common(new-section-slide-fn: new-section-slide.with(numbered: false))`
///
/// - `level` is the level of the heading.
///
/// - `numbered` is whether the heading is numbered.
///
/// - `body` is the body of the section. It will be pass by touying automatically.
#let new-section-slide(level: 1, numbered: true, body) = touying-slide-wrapper(self => {
let slide-body = {
set align(horizon)
show: pad.with(20%)
set text(size: 1.5em)
stack(
dir: ttb,
spacing: 1em,
utils.display-current-heading(level: level, numbered: numbered),
block(
height: 2pt,
width: 100%,
spacing: 0pt,
components.progress-bar(height: 2pt, self.colors.primary, self.colors.primary-light),
),
)
body
}
self = utils.merge-dicts(
self,
config-page(fill: self.colors.neutral-lightest),
)
touying-slide(self: self, slide-body)
})
/// Focus on some content.
///
/// Example: `#focus-slide[Wake up!]`
///
/// - `align` is the alignment of the content. Default is `horizon + center`.
#let focus-slide(align: horizon + center, body) = touying-slide-wrapper(self => {
let _align = align
let align = _typst-builtin-align
self = utils.merge-dicts(
self,
config-common(freeze-slide-counter: true),
config-page(fill: self.colors.neutral-dark, margin: 2em),
)
set text(fill: self.colors.neutral-lightest, size: 1.5em)
touying-slide(self: self, align(_align, body))
})
/// Touying metropolis theme.
///
/// Example:
///
/// ```typst
/// #show: metropolis-theme.with(aspect-ratio: "16-9", config-colors(primary: blue))
/// ```
///
/// Consider using:
///
/// ```typst
/// #set text(font: "Fira Sans", weight: "light", size: 20pt)
/// #show math.equation: set text(font: "Fira Math")
/// #set strong(delta: 100)
/// #set par(justify: true)
/// ```
///
/// - `aspect-ratio` is the aspect ratio of the slides. Default is `16-9`.
///
/// - `align` is the alignment of the content. Default is `horizon`.
///
/// - `header` is the header of the slide. Default is `self => utils.display-current-heading(setting: utils.fit-to-width.with(grow: false, 100%), depth: self.slide-level)`.
///
/// - `header-right` is the right part of the header. Default is `self => self.info.logo`.
///
/// - `footer` is the footer of the slide. Default is `none`.
///
/// - `footer-right` is the right part of the footer. Default is `context utils.slide-counter.display() + " / " + utils.last-slide-number`.
///
/// - `footer-progress` is whether to show the progress bar in the footer. Default is `true`.
///
/// ----------------------------------------
///
/// The default colors:
///
/// ```typ
/// config-colors(
/// primary: rgb("#eb811b"),
/// primary-light: rgb("#d6c6b7"),
/// secondary: rgb("#23373b"),
/// neutral-lightest: rgb("#fafafa"),
/// neutral-dark: rgb("#23373b"),
/// neutral-darkest: rgb("#23373b"),
/// )
/// ```
#let metropolis-theme(
aspect-ratio: "16-9",
align: horizon,
header: self => utils.display-current-heading(setting: utils.fit-to-width.with(grow: false, 100%), depth: self.slide-level),
header-right: self => self.info.logo,
footer: none,
footer-right: context utils.slide-counter.display() + " / " + utils.last-slide-number,
footer-progress: true,
..args,
body,
) = {
set text(size: 20pt)
show: touying-slides.with(
config-page(
paper: "presentation-" + aspect-ratio,
header-ascent: 30%,
footer-descent: 30%,
margin: (top: 3em, bottom: 1.5em, x: 2em),
),
config-common(
slide-fn: slide,
new-section-slide-fn: new-section-slide,
),
config-methods(
alert: utils.alert-with-primary-color,
),
config-colors(
primary: rgb("#eb811b"),
primary-light: rgb("#d6c6b7"),
secondary: rgb("#23373b"),
neutral-lightest: rgb("#fafafa"),
neutral-dark: rgb("#23373b"),
neutral-darkest: rgb("#23373b"),
),
// save the variables for later use
config-store(
align: align,
header: header,
header-right: header-right,
footer: footer,
footer-right: footer-right,
footer-progress: footer-progress,
),
..args,
)
body
}
|
https://github.com/Sepax/Typst | https://raw.githubusercontent.com/Sepax/Typst/main/DIT323/Notes/main.typ | typst |
#import "template.typ": *
#show: template.with(
title: [Finite automata and formal languages],
short_title: "DIT084",
description: [
Notes based on lectures for DIT323 (Finite automata and formal languages)\ at
Gothenburg University, Spring 2024
],
authors: ((name: "<NAME>"),),
lof: false,
lot: false,
lol: false,
paper_size: "a4",
cols: 1,
text_font: "XCharter",
code_font: "Cascadia Mono",
accent: "#000000", // black
)
#pagebreak()
= Introduction
A *finite automaton* is a computational model with a set of states, input
symbols, transition rules, an initial state, and accepting states. It recognizes
patterns and processes strings in a language by transitioning between states
based on input.
*Formal languages* are abstract systems defined by rules to represent and
analyze languages. They provide a precise framework for specifying syntax,
semantics, and rules for generating valid strings within a language.
== Regular Expressions
Regular expressions are concise patterns for searching and matching strings,
widely used in text processing and pattern matching.
- Used in text editors
- Used to describe the lexical syntax of programming languages.
- Can only describe a limited class of “languages”.
#example[
- A regular expression for strings of ones of even length: *_(11)∗_*
- A regular expression for some keywords: *_while_ ∣ _for_ ∣ _if_ ∣ _else_*
- A regular expression for positive natural number literals (of a certain form):
*_[1–9][0–9]∗_*
]
== Finite automata
- Used to implement regular expression matching.
- Used to specify or model systems.
- One kind of finite automaton is used in the specification of TCP.
- Equivalent to regular expressions.
#example[
#figure(
image("figures/finite_automata_1.png", width: 80%),
caption: [Model of a lock],
) <finite_automata_lock>
- The states are a kind of memory.
- Finite number of states ⇒ finite memory.
]
#pagebreak()
== Context-free grammars
Used to describe the syntax of programming languages.
- More general than regular expressions.
- Used by parser generators. (Often restricted.)
#sourcecode[```
Expr ::== Number
| Expr Op Expr
| '(' Expr ')'
Op ::== '+' | '-' | '*' | '/'
```]
== Turing machines
A Turing machine is an abstract model of computation with a tape, read/write
head, and rules. It serves as a foundational concept in the study of algorithms
and computability.
- Unbounded memory: an infinite tape of cells.
- A read/write head that can move along the tape.
- A kind of finite state machine with rules for what the head should do.
It is equivalent to a number of other models of computation.
== Repetition of some classical logic
=== Propositions
A *proposition* is a statement that is either true or false.
#example[
- The sky is blue.
- The sky is green.
- 1 + 1 = 2.
- 1 + 1 = 3.
]
It may not always be known what the truth value (⊤ or ⊥) of a proposition is.
=== Connectives
*Logical connectives* are operators that combine propositions to form new
propositions.
#table(
columns: (20em, auto),
inset: 10pt,
align: center,
[*p* ∧ *q*],
[conjunction],
[*p* ∨ *q*],
[disjunction],
[¬*p*],
[negation],
[*p* ⇒ *q*],
[implication],
[*p* ⇔ *q*],
[equivalence],
)
#pagebreak()
Truth tables for these connectives:
#table(
columns: (auto, auto, auto, auto, auto, auto, auto),
inset: 10pt,
align: center,
[*p*],
[*q*],
[*p* ∧ *q*],
[*p* ∨ *q*],
[¬*p*],
[*p* ⇒ *q*],
[*p* ⇔ *q*],
[⊤],
[⊤],
[⊤],
[⊤],
[⊥],
[⊤],
[⊤],
[⊤],
[⊥],
[⊥],
[⊤],
[⊥],
[⊥],
[⊥],
[⊥],
[⊤],
[⊥],
[⊤],
[⊤],
[⊤],
[⊥],
[⊥],
[⊥],
[⊥],
[⊥],
[⊤],
[⊤],
[⊤],
)
Note that *$p => q$* is true if *$p$* is false.
=== Validity
A proposition is _valid_, or a _tautology_, if it is satisfied for all
assignments of truth values to its variables.
#example[
- *$p => p$* is valid.
- *$p or not p$* is valid
]
=== Equivalence
Two propositions are _equivalent_ if they have the same truth value for all
assignments of truth values to their variables. (they have the same truth table)
#example[
- *$p => q$* and *$not p or q$* are equivalent.
- *$p and q$* and *$q and p$* are equivalent.
]
=== Predicates
A predicate is, roughly speaking, a function to propositions.
#example[
- $P(n) = $ "$n$ is a prime number"
- $Q(a,b) = $ "$(a+b)^2 = a^2 + 2a b + b^2$"
]
=== Quantifiers
*Universal quantification* and *existential quantification* are used to express
statements about all or some elements in a set.
#example[
- *$forall n in N: P(n)$* means that $P(n)$ is true for all natural numbers $n$.
- *$exists n in N: P(n)$* means that $P(n)$ is true for some natural number $n$.
]
#pagebreak()
== Repetition of some set theory
A set is roughly speaking a collection of elements.
=== Defining sets
- *$A = {1, 2, 3}$* means that $A$ is the set containing the elements $1$, $2$,
and $3$.
- *$B = {x in N | x > 0}$* means that $B$ is the set of all natural numbers
$x$ such that $x > 0$.
- *$C = {x in N | exists y in N: x = 2y}$* means that $C$ is the set of all
natural numbers $x$ such that there exists a natural number $y$ such that
$x = 2y$. (the set of all even natural numbers)
=== Members, subsets, and equality
- *$x in A$* means that $x$ is an element of $A$.
- *$A subset.eq B$* means that $A$ is a subset of $B$.
- *$A = B$* means that $A$ and $B$ are equal.
=== The empty set
- *$emptyset$* is the empty set.
- *$forall x: not x in emptyset$*.
=== Set operations
==== Union, intersection and set difference
- *$A union B$* is the union of $A$ and $B$. (the set of all elements that are in $A$ or $B$)
- *$A sect B$* is the intersection of $A$ and $B$. (the set of all elements that
are in $A$ and $B$)
- *$A backslash B = A - B$* is the set difference of $A$ and $B$. (the set of all
elements that are in $A$ but not in $B$)
==== Complement
- *$overline(A)$* is the complement of $A$. (the set of all elements that are not
in $A$)
==== _Cartesian product_
- *$A times B$* is the Cartesian product of $A$ and $B$. (the set of all pairs
$(a, b)$ where $a in A$ and $b in B$)
#example[
- *$N times N$* is the set of all pairs of natural numbers.
- *$N times N times N$* is the set of all triples of natural numbers.
]
==== Power set
- *$℘(S) = { A | A subset.eq S}$* is the power set of $S$. (the set of all subsets
  of $S$)
#example[
- *$℘({1, 2}) = {emptyset, {1}, {2}, {1, 2}}$*
]
==== Set of all finite subsets
- *$"Fin"(A) = { A | A subset.eq S, A "is finite" }$* is the set of all finite
subsets of $A$.
#example[
- *$"Fin"({1, 2}) = {emptyset, {1}, {2}, {1, 2}}$*
- *$"Fin"(N) = ℘(N)$*
]
=== Relations
Relations define connections between elements of sets. A binary relation is a
subset of the Cartesian product of two sets, often denoted as $R subset.eq A times B$.
Common types include reflexive, symmetric, and transitive relations, capturing
different aspects of element connections.
- A binary relation $R$ on $A$ is a subset of $A^2 = A times A: R subset.eq A^2$
- Notation: $x R y$ same as $(x, y) in R$
- Can be generalised from $A times A$ to $A times B times C times dots.h.c $
==== Some binary relational properties
For $R subset.eq A times B$:
- Total (left-total): $forall x in A: exists y in B: x R y$
- Functional/deterministic: $forall x in A: forall y, z in B: x R y and x R z => y = z$
For $R subset.eq A^2$:
- Reflexive: $forall x in A: x R x$
- Symmetric: $forall x, y in A: x R y => y R x$
- Transitive: $forall x, y, z in A: x R y and y R z => x R z$
- Antisymmetric: $forall x, y in A: x R y and y R x => x = y$
===== Partial orders
A _partial order_ is a relation that is reflexive, antisymmetric, and
transitive.
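#example[
  For instance (a standard illustration): the relation $lt.eq$ on the natural numbers is a partial order, and so is $subset.eq$ on $℘(S)$ for any set $S$.
]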
==== Equivalence relations
An _equivalence relation_ is a relation that is reflexive, symmetric, and
transitive.
=== Functions
Relation between two sets, denoted as $f: A → B$, where $A$ is the _domain_ (set
of inputs) and $B$ is the _codomain_ (set of possible outputs). Every element in
the _domain_ is associated with a unique element in the _codomain_. If $(x, y)$ is
in the function, it means that the input $x$ is associated with the output $y$.
- Sometimes defined as the set of total and functional relations $f subset.eq A
times B$
- Notation $f(x) = y$ same as $(x, y) in f$
- If the requirement of totality is dropped, we get the set of partial functions, $A harpoon.rt B$
- The _image_ is the set of all outputs of the function, ${y in B | x in A, f(x) = y}$
==== Identity, composition
- The _identity function_ $id_A: A → A$ is defined as $id_A(x) = x$
- For functions $f in B -> C$ and $g in A -> B$ the _composition_ of $f compose g in A -> C$ is
defined by $(f compose g)(x) = f(g(x))$
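#example[
  For instance (a standard illustration): if $g(x) = x + 1$ and $f(x) = 2x$, then $(f compose g)(x) = f(g(x)) = 2(x + 1)$.
]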
#pagebreak()
==== Injectivity
An _injection_ is a function $f: A → B$ such that $forall x, y in A: f(x) = f(y) => x = y$
- Every input is mapped to an unique output.
- A is at most as large as B.
- Holds if $f$ has a left inverse $g in B -> A: g compose f = $ _id_
#figure(
image("figures/injective.png", width: 40%),
caption: [Injective function],
)
==== Surjectivity
A _surjection_ is a function $f: A → B$ such that $forall y in B: exists x in A: f(x) = y$
- The function "targets" every element in the _codomain_
- A is at least as large as B.
- Holds if $f$ has a right inverse $g in B -> A: f compose g = $ _id_
#figure(
image("figures/surjective.png", width: 40%),
caption: [Surjective function],
)
==== Bijectivity
A _bijection_ is a function $f: A → B$ such that $forall y in B: exists! x in A: f(x) = y$. _In simple terms, it is both injective and surjective._
- $A$ and $B$ are of the same size.
- Holds _iff_ $f$ has left and right inverse $g in B -> A$
#figure(
image("figures/bijective.png", width: 40%),
caption: [Bijective function],
)
=== Partitions
A _partition_ $P subset.eq ℘(A)$ of a set $A$ is a set of non-empty subsets of $A$ such
that every element in $A$ is in exactly one of the subsets.
- Every element is non-empty: $forall X in P: X eq.not emptyset$
- The elements cover $A: union.big_(B in P) B = A$
- The elements are mutually disjoint: $forall B, C in P: B eq.not C => B sect C = emptyset$
#example[
- $P = {{1, 2}, {3, 4}}$ is a partition of $A = {1, 2, 3, 4}$
- $P = {{1, 2}, {3, 4}, {5}}$ is not a partition of $A = {1, 2, 3, 4}$
]
=== Equivalence classes
Given a set $A$ and an equivalence relation $R subset.eq A times A$, the
_equivalence class_ of an element $a in A$ is the set of all elements in $A$ that
are equivalent to $a$.
#definition[
The equivalence classes of an equivalence relation $R$ on $A$:
$ [x]_R = {y in A | x R y} $
]
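#example[
  For instance (a standard illustration): let $R$ be the relation "has the same parity" on the natural numbers. Then
  $ [0]_R = {0, 2, 4, dots.h.c} quad quad [1]_R = {1, 3, 5, dots.h.c} $
  and every equivalence class is equal to one of these two.
]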
=== Quotients
Given a set $A$ and an equivalence relation $R subset.eq A times A$, the
quotient of $A$ by $R$ is the set of all equivalence classes of $R$.
#definition[
The quotient of $A$ by $R$ is the set of all equivalence classes of $R$:
$ A / R = {[x]_R | x in A} $
]
#example[
Can one define $ZZ = NN^2$ with the intention that $(m,n)$ stands for $m-n$?
No, $(0,1)$ and $(1,2)$ would both represent $-1$.
_Instead we can use the quotient set:_
$ ZZ = NN^2 \/ tilde.op_ZZ $
where
$ (m_1,n_1) tilde.op_ZZ (m_2,n_2) <=> m_1+n_2 = m_2+n_1 $
]
#pagebreak()
= Proofs, induction & recursive functions
== Basic proof methods
#table(
columns: (auto, auto),
inset: 10pt,
align: center,
[*To prove*],
[*Method*],
[$p => q$],
[Assume $p$ and prove $q$],
[$p => q$],
[Assume $not$ q and prove $not p$],
[$forall x in A. P(x)$],
[Assume that we have an $x in A$ and prove $P(x)$],
[$p <=> q$],
[Prove both $p => q$ and $q = p$],
[$not p$],
[Assume $p$ and derive a contradiction],
[$p$],
[Prove $not not p$],
)
== Induction
For a natural number predicate $P$ we can prove $forall n in NN: P(n)$ in the
following way:
- Prove $P(0)$
- Prove $forall n in NN: P(n) => P(n+1)$
with the formula:
$ P(0) and (forall n in NN : P(n) => P(n+1)) => forall n in NN : P(n) $
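#example[
  _(A standard illustration, not from the lecture.)_ Let $P(n) := 0 + 1 + dots.h.c + n = (n(n+1))/2$. Then $P(0)$ holds, and if $P(n)$ holds, then
  $ 0 + 1 + dots.h.c + n + (n + 1) = (n(n+1))/2 + (n + 1) = ((n+1)(n+2))/2 $
  so $P(n+1)$ holds. Hence $forall n in NN: P(n)$.
]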
#pagebreak()
== Complete/Strong induction
We can also prove $forall n in NN: P(n)$ in the following way:
- Prove $P(0)$
- For every $n, i in NN$, prove that if $P(i)$ holds for all $i lt.eq n$, $P(n+1)$ holds.
with the formula:
$ P(0) and (forall n in NN: (forall i in NN: i lt.eq n => P(i)) => P(n+1)) => forall n in NN: P(n) $
#example[
#lemma[
Every natural number $n gt.eq 8$ can be written as a sum of multiples of $3$ and $5$.
]
#proof[
_(by complete induction)_
Let $P(n) := n gt.eq 8 => exists a, b in NN: n = 3a + 5b$
We prove that $P(n)$ holds for all $n in NN$ by complete induction on $n$:
_Basis:_ $(n = 8,9,10)$
- $P(8)$ holds since $8 = 3*1 + 5*1$.
- $P(9)$ holds since $9 = 3*3 + 5*0$.
- $P(10)$ holds since $10 = 3*0 + 5*2$.
_Inductive step:_ $P(8) and P(9) and P(10) and dots.h.c and P(k) => P(k + 1)$, where $k gt.eq 10$.
_Induction hypothesis:_ Assume $P(i)$ is true for all $i in NN$ where
$8 lt.eq i lt.eq k$.
_To prove:_ $P(k + 1) := k + 1 gt.eq 8 => exists a, b in NN: k + 1 = 3a + 5b$
Since $k gt.eq 10$, we have $8 lt.eq k - 2 lt.eq k$, so by the _induction hypothesis_ $P(k - 2)$ holds,
i.e. $k - 2 = 3a + 5b$ for some $a, b in NN$. Then
$ k + 1 = (k - 2) + 3 = 3(a + 1) + 5b $
so $P(k + 1)$ holds.
]
]
#pagebreak()
== Proof by counterexample
To prove that a statement is false, we can find a counterexample.
_In general, to prove:_ $ & not(forall "natural number predicates" P: P(0) and \
& (forall n in NN : n gt.eq 1 and P(n) => P(n + 1)) => \
& forall n in NN : n gt.eq 1 => P(n)) $
_we assume:_ $ & forall "natural number predicates" P: P(0) and \
& (forall n in NN : n gt.eq 1 and P(n) => P(n + 1)) => \
& forall n in NN : n gt.eq 1 => P(n) $
_and derive a contradiction._
#example[
The following statement does not hold for $P(n) := n eq.not 1$ and $n = 1$
$ P(0) and (forall n in NN : n gt.eq 1 and P(n) => P(n+1)) => forall n in NN : n gt.eq 1 => P(n) $
_The hypotheses hold, but not the conclusion._
#proof[
Let $P(n) := n eq.not 1$.
_The hypotheses hold:_
- $P(0)$ holds, since $0 eq.not 1$.
- $forall n in NN : n gt.eq 1 and P(n) => P(n + 1)$ holds: for $n = 1$ the premise $P(1) := 1 eq.not 1$ is false, so the
  implication is vacuously true; for $n gt.eq 2$ we have $n + 1 gt.eq 3$, so $P(n + 1) := n + 1 eq.not 1$ is true.
_The conclusion fails:_ for $n = 1$ we have $n gt.eq 1$, but $P(1) := 1 eq.not 1$ is false.
Hence the statement does not hold for this $P$.
]
]
#pagebreak()
== Inductively defined sets
An inductively defined set is a set that is defined in such a way that its
elements are generated by applying a set of rules or operations starting from
some initial elements. The process of generating elements continues
indefinitely, and it relies on a principle of induction.
- *Base Elements:* Specify a set of initial elements that belong to the set. These
are the starting points for the construction of the set.
- *Inductive Rules:* Define rules or operations that allow you to generate new
elements of the set based on the existing elements.
- *Closure under Induction:* if an element belongs to the set, all the elements
generated from it using the inductive rules also belong to the set.
#example[
An example of an inductively defined set is the set of natural numbers:
$ "" / ("zero" in NN) wide (n in NN) / ("suc"(n) in NN) $
]
#example[
Another example, booleans:
$ "" / ("true" in italic("Bool")) wide "" / ("false" in italic("Bool")) $
]
#example[
Another example, finite lists:
$ "" / ("nil" in italic("List")(A)) wide (x in italic("A") wide x s in italic("List")(A)) / ("cons"(x, x s) in italic("List")(A)) $
]
_Note that "nil" stands for the empty list and "cons" stands for the construction
of a list by adding an element to the front of an existing list (in haskell)._
_Some alternative notations for lists:_
- $[ med ] <=> $ nil
- $x : x s <=> "cons"(x, x s)$
- $[1,2,3] <=> "cons"(1, "cons"(2, "cons"(3, "nil")))$
== Recursive functions
A recursive function is a function that is defined in terms of itself. It is
defined by a base case and a recursive case.
#example[
$ &italic("length") in italic("List")(A) &->& NN \
&italic("length")("nil") &= & "zero" \
&italic("length")("cons"(x,x s)) &= & "suc"(italic("length")(x s)) $
]
#example[
$ & f in italic("List")(A) times italic("List")(A) &->& italic("List")(A) \
& f("nil", y s) &= & y s \
& f("cons"(x, x s), y s) &= & "cons"(x, f(x s, y s)) $
]
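_The function $f$ above is the append function on lists. For instance (our own small illustration, with numbers as list elements):_
$ f("cons"(1, "nil"), "cons"(2, "nil")) = "cons"(1, f("nil", "cons"(2, "nil"))) = "cons"(1, "cons"(2, "nil")) $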
== Mutual induction
In general, a proof by mutual induction of a statement A consists in proving a
stronger statement than $A$, usually a statement of the form $A and B$, where $B$ can
be seen as an _"auxiliary statement"_. The advantage is that now one can use a
*stronger induction hypothesis* than in a proof of the mere statement $A$ by
simple induction.
#example[
_The following is a nice example of a property about natural numbers that cannot
be proved by simple induction (at least, not in a natural way), but the proof by
mutual induction is very easy, because it gives a stronger induction hypothesis._
Let $f,g,h in NN -> {0,1}$ be functions defined as follows:
$ & f(n) = cases(
#align(right)[#box[$0 quad$]]&\, "if" n=0,
#align(right)[#box[$g(n-1) quad$]]&\, "otherwise",
) wide
g(n) = cases(
#align(right)[#box[$1 quad$]]&\, "if" n=0,
#align(right)[#box[$f(n-1) quad$]]&\, "otherwise",
) \
& h(n) = cases(
#align(right)[#box[$0 quad$]]&\, "if" n=0,
#align(right)[#box[$1 - h(n-1) quad$]]&\, "otherwise",
) $
Let $P(n) := forall n in NN: h(n) = f(n)$.
_It does not seem possible to prove the proposition by simple induction, the
induction hypothesis in this case is too weak. But the proof is quite easy if
you prove the following stronger statement, by (mutual) induction on $n in NN$:_
Let $P'(n) := forall n in NN: h(n) = f(n) and h(n) = 1 - g(n)$.
#proof[
By induction on $n in NN$.
_Basis:_ $f(0) = 0 = h(0)$ and $h(0) = 0 = 1-1 = 1 - g(0)$
_Inductive step:_ By induction hypothesis (i.h), $h(n) = f(n)$ and $h(n) = 1 - g(n)$.
Hence,
$ f(n+1) &= g(n) = 1 - h(n) = 1 - f(n) = h(n+1) \
h(n+1) &= 1 - h(n) = 1 - f(n) = 1 - g(n+1) $
]
_Note that in the inductive step, to prove that $f(n+1)=h(n+1)$ we use the
inductive hypothesis about the second statement, i.e. $h(n)=1−g(n)$. Conversely,
to prove that $h(n+1)=1−g(n+1)$ we use the inductive hypothesis about the first
statement, i.e. $f(n)=h(n)$. This is the essence of mutual induction._
]
#pagebreak()
= Structural induction & automata theory
== Structural induction
Structural induction is a proof technique that can be used to prove properties
of recursively defined objects, such as inductively defined sets and recursive
functions.
#example[
For a given inductively defined set we have a corresponding induction
principle.
Consider the following inductively defined set of natural numbers:
$ "" / ("zero" in NN) wide (n in NN) / ("suc"(n) in NN) $
In order to prove $forall n in NN: P(n)$:
- Prove $P("zero")$
- Prove $forall n in NN: P(n) => P("suc"(n))$
]
#example[
Another example, booleans:
$ "" / ("true" in italic("Bool")) wide "" / ("false" in italic("Bool")) $
In order to prove $forall b in italic("Bool"): P(b)$:
- Prove $P("true")$
- Prove $P("false")$
]
#example[
Another example, finite lists:
$ "" / ("nil" in italic("List")(A)) wide (x in italic("A") wide x s in italic("List")(A)) / ("cons"(x, x s) in italic("List")(A)) $
In order to prove $forall x s in italic("List")(A): P(x s)$:
- Prove $P("nil")$
- Prove $forall x in A: forall x s in italic("List")(A): P(x s) => P("cons"(x, x s))$
]
The general pattern for proving a proposition by structural induction:
Consider an inductively defined set:
$ dots.h.c quad (x in italic("A") wide d in D(A))
/ (c(x,dots,d) in D(A)) quad dots.h.c $
In order to prove $forall d in D(A):P(d)$:
- $dots.v$
- $forall x in A, dots, d in D(A)$, prove that $dots$ and $P(d) => P(c(x, dots, d))$.
- $dots.v$
_One inductive hypothesis for each recursive argument._
#pagebreak()
== Notes on induction/recursion
- _Inductively defined sets:_ inference rules with constructors.
- _Recursion (primitive recursion):_ recursive calls only for recursive arguments
$f(c(x,d)) = dots.h.c f(d) dots.h.c$.
- _Structural induction:_ inductive hypotheses for recursive arguments $P(d) => P(c(x,d))$.
== Concepts from automata theory
=== Alphabets and strings
An alphabet is a finite, nonempty set of symbols.
- ${a,b,c,dots.h.c,z}$
- ${0,1,dots.h.c,9}$
A string (or word) over the alphabet $Sigma$ is a member of _List_($Sigma$).
=== Some conventions
Following the course text book:
- $Sigma:$ An alphabet
- $a,b,c:$ Elements of $Sigma$
- $u,v,w:$ Words (strings) over $Sigma$
=== Simple notation
#table(
columns: (auto, auto),
inset: 10pt,
align: center,
[$Sigma^*$],
[_List_$(Sigma)$],
[$epsilon$],
[nil or $[med]$],
[a w],
[cons(a, w)],
[$a$],
[cons$(a,"nil")$ or $[a]$],
[$a,b,c$],
[$[a,b,c]$],
[$u v$],
[append(u, v)],
[|w|],
[length(w)],
[$Sigma^+$],
[nonempty strings, ${w in Sigma^* | w eq.not epsilon}$],
)
=== Exponentiation
$Sigma^n:$ Strings of length $n$, ${w in Sigma^* | |w| = n}$
#example[
${a,b}^2 = {a a, a b, b a, b b}$
]
==== Alternative definition of $Sigma^n subset.eq Sigma^*$
- $Sigma^0 = {epsilon}$
- $Sigma^(n+1) = {a w | a in Sigma, w in Sigma^n}$
#pagebreak()
==== Repeated strings
$w^n$: $w$ repeated $n$ times
#example[
$(a b)^3 = a b a b a b$
]
==== Recursive definitions of functions on strings
- $w^0 = epsilon$
- $w^(n+1) = w w^n$
=== Languages
A language over an alphabet $Sigma$ is a subset $L subset.eq Sigma^*$. Examples
of such languages are typical programming languages such as C, Java, and Python
and regular written languages. Another example is the odd natural numbers
expressed in binary notation (without leading zeroes), which is a language over
the alphabet ${0,1}$.
Following the course text book: $L,M,N:$ Languages over $Sigma$.
=== Operations on languages
Some of these are mentioned earlier.
#table(
columns: (auto, auto, auto),
inset: 10pt,
align: center,
[*Operation*],
[*Rule*],
[*Example*],
[Concatenation],
[$L M = {u v | u in L, v in M}$],
[${a, b c}{d e , f} = {a d e, a f, b c d e, b c f}$],
[Exponentiation],
[$L^0 = {epsilon}$, $L^(n+1) = L L^n$],
[${a, b c}^2 &= {a, b c}({a, b c}^1) \ &= {a, b c}({a, b c}{epsilon}) \ &= {a, b c}{a, b c} \ &= {a a, a b c, b c a, b c b c}$],
[The _Kleene star_],
[$L^* = union.big_(n in NN) L^n$],
[${a, b c}^* &= {a, b c}^0 union {a,b c}^1 union {a, b c}^2 union dots.h.c \ &= {epsilon, a, b c, a a, a b c, b c a, b c b c, dots.h.c}$],
)
#pagebreak()
== Inductively defined subsets
One can define subsets of (say) $Sigma^*$ inductively. For instance, for $L subset.eq Sigma^*$ we
can define $L^* subset.eq Sigma^*$ inductively as follows:
$ "" / (epsilon in L^*) wide (u in L quad v in L^*) / (u v in L^*) $
_Note that there are no constructors (but in some cases it might make sense to
name the rules)_.
#example[
$a b a in {a, a b}^*$
#proof[
$ (overline(a b in {a, a b}) wide (overline(a in {a, a b}) wide overline(epsilon in {a, a b}^*)) / (a in {a, a b}^*)) / (a b a in {a, a b}^*) $
]
]
_(More material on recursion over inductively defined subsets is to be added here.)_
#pagebreak()
= Deterministic finite automata
A _DFA_ specifies a language. In @even_diagram, the language is ${11}^* = {epsilon, 11, 1111, dots.h.c}$.
One of many use cases of _DFAs_ is the implementation of regular expression
matching.
#figure(image("figures/DFA.png", width: 60%), caption: [Transition diagram]) <even_diagram>
A _DFA_ is given by a 5-tuple $(Q,Sigma, delta, q_0, F)$ where
- $Q:$ The finite set of states.
- $Sigma:$ The alphabet.
- $delta in Q times Sigma -> Q:$ The transition function.
- $q_0 in Q:$ The starting state.
- $F subset.eq Q:$ The set of accepting states
The diagram in @even_diagram corresponds to the 5-tuple
$ italic("Even") = ({s_0,s_1},{1}, delta, s_0, {s_0}) $
where $delta$ is defined in the following way:
$ & delta in {s_0,s_1} times {1} -> {s_0,s_1} \
& delta(s_0, 1) = s_1 \
& delta(s_1, 1) = s_0 $
== Semantics
#definition[
The language $L(A)$ of a DFA $A = (Q, Sigma, delta, q_0, F)$ is defined in the
following way:
A transition function for strings which is defined by recursion:
$ &hat(delta) in Q times Sigma^* &->& Q \
&hat(delta)(q,epsilon) &= & q \
&hat(delta)(q, a w) &= & hat(delta)(delta(q, a),w) $
The language of $A$ is defined as the set of all strings $w in Sigma^*$ such
that $hat(delta)(q_0, w) in F$:
$ L(A) = {w in Sigma^* | hat(delta)(q_0, w) in F} $
]
#example[
For the DFA in @even_diagram we can, for example, compute:
$ hat(delta)(s_0,11) &= hat(delta)(delta(s_0, 1),1) \
&= hat(delta)(s_1,1) \
&= hat(delta)(delta(s_1, 1),epsilon) \
&= hat(delta)(s_0, epsilon) \
&= s_0 $
]
== Transition diagrams
A transition diagram is a graphical representation of a DFA. The states are
represented by circles, the transitions by arrows, and the accepting states by
double circles.
- One node per state.
- An arrow "from nowhere" to the starting state.
- Double circles for accepting states.
- For every transition $delta(s_1, a) = s_2$, an arrow marked with $a$ from $s_1$ to $s_2$.
(multiple arrows can be combined)
#figure(
image("figures/td_missing_trans.png", width: 50%),
caption: [TD with missing transitions],
) <missing_trans>
#figure(
image("figures/td_missing_trans_2.png", width: 50%),
caption: [Every missing transition goes to a new state (that is not accepting)],
) <missing_trans_2>
_Note that diagrams with missing transitions do not define the alphabet
unambiguously. Consider the diagram in @missing_trans: the alphabet must be a
(finite) superset of ${"'"0"'","'"1"'",dots.h.c, "'"9"'"}$, but which one?_
#pagebreak()
== Transition tables
A transition table is a tabular representation of a DFA. The states are
represented by rows, the transitions by columns, and the accepting states by
double entries.
#figure(
image("figures/trans_table.png", width: 20%),
caption: [Transition table],
) <trans_table>
- _States:_ left column.
- _Alphabet:_ top row.
- _Starting state:_ marked with $->$.
- _Accepting states:_ marked with $*$
- _Transition function:_ table.
#pagebreak()
== Constructions
Given a DFA $A = (Q,Sigma, delta,q_0,F)$ we can construct a DFA $overline(A)$ that
satisfies the following property:
$ L(overline(A)) = overline(L(A)) := Sigma^* backslash L(A) $
Construction:
$ overline(A) = (Q,Sigma, delta,q_0,Q backslash F) $
_The complement automaton accepts a string exactly when the original automaton does not._
#example[
Let $A$ be
#figure(image("figures/DFA_1.png", width: 40%))
Then $overline(A)$ is
#figure(image("figures/DFA_2.png", width: 40%))
]
== Product construction
Given two DFAs $A_1 = (Q_1,Sigma, delta_1,q_01,F_1)$ and $A_2 = (Q_2,Sigma, delta_2,q_02,F_2)$ with
the same alphabet, we can construct a DFA $A_1 times.circle A_2$ that satisfies
the following property:
$ L(A_1 times.circle A_2) = L(A_1) sect L(A_2) $
Construction:
$ A_1 times.circle A_2 = (Q_1 times Q_2,Sigma, delta,(q_01,q_02),F_1 times F_2) $
where
$ delta((s_1,s_2), a) = (delta_1(s_1,a),delta_2(s_2,a)) $
_We run the two automata in parallel and accept if both accept._
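#example[
  _(An illustrative sketch, not from the course book.)_ Suppose $A_1$ accepts exactly the strings over ${a, b}$ with an even number of $a$'s, and $A_2$ those with an even number of $b$'s. Since
  $ hat(delta)((q_01, q_02), w) = (hat(delta)_1 (q_01, w), hat(delta)_2 (q_02, w)) $
  a string $w$ is accepted by $A_1 times.circle A_2$ exactly when both components end in accepting states, i.e. when $w$ contains an even number of $a$'s _and_ an even number of $b$'s.
]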
#pagebreak()
== Sum Construction
Given two DFAs $A_1 = (Q_1,Sigma, delta_1,q_01,F_1)$ and $A_2 = (Q_2,Sigma, delta_2,q_02,F_2)$ with
the same alphabet, we can construct a DFA $A_1 plus.circle A_2$ that satisfies
the following property:
$ L(A_1 plus.circle A_2) = L(A_1) union L(A_2) $
Construction:
$ A_1 plus.circle A_2 = (Q_1 times Q_2,Sigma, delta,(q_01,q_02),(F_1 times Q_2) union (Q_1 times F_2)) $
where
$ delta((s_1,s_2), a) = (delta_1(s_1,a),delta_2(s_2,a)) $
_As in the product construction, we run the two automata in parallel, but now we accept if at least one of them accepts._
== Accessible states
Let $A = (Q, Sigma, delta, q_0, F)$ be a DFA. The set _Acc_$(q) subset.eq Q$ of
states that are accessible from $q in Q$ can be defined in the following way:
$ italic("Acc")(q) = {hat(delta)(q,w) | w in Sigma^*} $
To construct a possibly smaller DFA $A'$ which satisfies $L(A') = L(A)$:
$ & A' & = & (italic("Acc")(q_0), Sigma, delta', q_0, F sect italic("Acc")(q_0)) \
& delta'(q,a) & = & delta(q, a) $
#example[
The following two DFAs define the same language:
#figure(image("figures/acc_states_1.png", width: 40%), caption: [$A$])
#figure(image("figures/acc_states_2.png", width: 40%), caption: [$A'$])
_Notice that $A$ has states that are unreachable from the starting state $q_0$. We can ignore these states and construct $A'$._
]
== Regular languages
A language $M subset.eq Sigma^*$ is _regular_ if there exists a DFA $A$ with alphabet $Sigma$ such that $L(A) = M$.
- Note that if $M$ and $N$ are regular, then so are $M sect N$, $M union N$, and $overline(M)$.
- We will see later that if $M$ and $N$ are regular, then so is $M N$.
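#example[
  The language ${11}^*$ from @even_diagram is regular, since the DFA $italic("Even")$ accepts exactly this language.
]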
= NFAs & the subset construction
= NFAs with ε-transitions & exponential blowup
|
|
https://github.com/SkiFire13/master-thesis | https://raw.githubusercontent.com/SkiFire13/master-thesis/master/preface/titlepage.typ | typst | #import "../config/translated.typ": *
#let footer(academic-year-prefix, academic-year) = align(center)[
#line(length: 90%)
#smallcaps[ #academic-year-prefix #academic-year ]
]
#let titlepage(affiliation, title, supervisor, candidate, academic-year) = page(footer: [], margin: (bottom: 1.7cm))[
#show: align.with(center)
#text(size: 17pt, strong(affiliation.university))
#text(size: 14pt, smallcaps(affiliation.department))
#text(size: 12pt, smallcaps(affiliation.degree))
#v(30pt)
#image(height: 6cm, "../images/unipd-logo.png")
#v(30pt)
#text(size: 17pt, strong(title))
#text(size: 12pt, style: "oblique", degree)
#v(40pt)
#text(size: 12pt)[
#align(left)[
_ #supervisor-prefix _
#supervisor
]
#align(right)[
_ #candidate-prefix _
#candidate.name
#candidate.id
]
]
#align(center + bottom)[
#line(length: 90%)
#smallcaps[ #academic-year-prefix #academic-year ]
]
]
|
|
https://github.com/fenjalien/metro | https://raw.githubusercontent.com/fenjalien/metro/main/tests/array/units/test.typ | typst | Apache License 2.0 | #import "/src/lib.typ": *
#import units: *
#set page(width: auto, height: auto, margin: 1cm)
#for units in ("bracket", "repeat", "single") [
#metro-setup(list-units: units, product-units: units, range-units: units)
#qty-list(2, 4, 6, 8, "T")
#qty-product(2, 4, 6, 8, "T")
#qty-range(2, 4, degreeCelsius)
]
|
https://github.com/mem-courses/calculus | https://raw.githubusercontent.com/mem-courses/calculus/main/homework-2/homework7.typ | typst | #import "../template.typ": *
#show: project.with(
course: "Calculus II",
course_fullname: "Calculus (A) II",
course_code: "821T0160",
title: "Homework #7",
authors: ((
name: "<NAME>",
email: "<EMAIL>",
id: "#198"
),),
semester: "Spring-Summer 2024",
date: "April 17, 2024",
)
#let defeq = math.attach("=", t: math.Delta)
#let atpos(f, b, t) = $lr(display(#f) |)_(#b\ "")^(\ #t)$
= 习题7-7
== P54 9
#prob[
试求 $O y z$ 平面上的抛物线 $z=sqrt(y-1)$ 绕 $O y$ 轴旋转一周所成的旋转曲面方程。
]
用 $pm sqrt(x^2+z^2)$ 代换 $z$ 得:
$
&pm sqrt(x^2+z^2) = sqrt(y-1)\
==> & x^2 + z^2 = y-1 quad (y>=1)
$
== P54 12(1)
#prob[
求曲线在 $O x y$ 平面上的投影曲线方程:
(1) $display(cases(
x^2 + (y-2)^2 + (z-1)^2 = 25,
x^2 + y^2 + z^2 = 16
))$
]
两式相减得
$
-4y+4-2z+1=9 ==> z=-2y-2
$
代入 ① 式得
$
x^2 + (y-2)^2 + (-2y-3)^2 = 25
==> x^2 + 5y^2 + 8y - 12 = 0
$
故投影曲线的方程为
$
cases(
x^2 + 5y^2 + 8y - 12 = 0,
z=0
)
$
= 习题7-8
== P59 1(1)(3)(5)
#prob[
指出下列方程所表示曲面的名称。若是旋转曲面,指出它是由什么曲线绕什么轴旋转而生成的。
(1) $9 x^2 + 4 y^2 + 4 z^2 = 36$
(3) $x^2-y^2-z^2 = 1$
(5) $x^2-y^2 = 4z$
]
(1) 旋转椭球面。由 $9x^2+4y^2=36$ 绕 $O x$ 轴旋转而成。
#h(-indent) (3) 旋转双叶双曲面。由 $x^2-y^2=1$ 绕 $O x$ 轴旋转而成。
#h(-indent) (5) 双曲抛物面。
== P59 2(1)(2)
#prob[
指出下列方程表示怎样的曲面,并作出其草图:
(1) $display(x^2 + y^2/4 + z^2/9 = 1)$
(2) $display(36x^2 + 9 y^2 - 4z = 36)$
]
(1) 椭球面 $quad$ (2) 椭圆抛物面
#align(center, image("images/2024-04-18-00-54-11.png", width: 100%))
== P60 4(1)(4)(7)(10)
#prob[
画出下列各组曲面所围成的例题图形:
略。
]
#align(center, image("images/2024-04-18-01-03-20.png", width: 100%))
= 习题8-1
== P68 5(2)
#prob[
求极限:
$
lim_(x->+oo\ y->+oo) (x^2+y^2) e^(-(x+y))
$
]
注意到
$
0< (x^2+y^2)/(e^(x+y)) < (x+y)^2/(e^(x+y))
$
由于
$
lim_(x->+oo\ y->+oo) (x+y)^2/e^(x+y) = lim_(u->+oo) u^2/e^u = 0
$
故所求极限
$
lim_(x->+oo\ y->+oo) (x^2+y^2) e^(-(x+y)) = 0
$
== P68 7(2)
#prob[
研究函数的连续性:
$
f(x,y) = cases(
display((x y^2) / (x^2 + y^4)\,quad& (x,y) != (0,0)),
0\, &(x,y) = (0,0)
)
$
]
考察
$
lim_(x->0\ y=0) (x y^2)/(x^2+y^4) = 0
quad quad
lim_(y->0\ x=y^2) (y^4)/(y^4+y^4) = lim_(y->0) (y^4)/(2 y^4) = 1/2
$
由于 $display(0!=1/2)$,可知 $f(x,y)$ 在点 $(0,0)$ 处不连续。而在其余点处,由初等函数的性质显然连续。
= 习题8-2
== P81 1
#prob[
设函数 $f(x,y) = display(cases((x^2+y^2) sin display(1/(x^2 + y^2))\, quad& (x,y)!=(0,0), 0\, quad& (x,y) = (0,0)))$。求 $f'_x (0,0)$,$f'_y (0,0)$,$f'_x (x,y)$,$f'_y (x,y)$。
]
根据定义:
$
f'_x (0,0) = lim_(x->0) (f(x,0)-f(0,0))/(x) = lim_(x->0) (x^2 sin display(1/x^2))/x = 0
$
由于 $f(x,y)$ 关于 $x,y$ 是对称的,同理可得 $f'_y (0,0)=0$。
当 $(x,y)!=(0,0)$ 时,有
$
(diff f)/(diff x) = 2x sin 1/(x^2+y^2) - (2x)/(x^2+y^2) cos 1/(x^2+y^2)
$
在 $(x,y)$ 时偏导函数也成立。故有
$
f'_x (x,y) = 2x sin 1/(x^2+y^2) - (2x)/(x^2+y^2) cos 1/(x^2+y^2)
$
同理有
$
f'_y (x,y) = 2y sin 1/(x^2+y^2) - (2y)/(x^2+y^2) cos 1/(x^2+y^2)
$
== P81 4
#prob[
设函数 $z=f(e^(x y) - y^2)$,其中 $f$ 为可导函数,求 $display((diff z)/(diff x)\, (diff z)/(diff y))$。
]
$
(diff z)/(diff x) = e^(x y) y f'
quad quad
(diff z)/(diff y) = (x e^(x y) - 2 y) f'
$
== P82 6(3)
#prob[
求函数的所有二阶偏导数:
$
u=x^y
$
]
$
(diff^2 u)/(diff x^2) = y (y-1) x^(y-2)
quad quad
(diff^2 u)/(diff y^2) = x^y (ln x)^2\
(diff^2 u)/(diff x diff y) = (diff^2 u)/(diff y diff x) = x^(y-1) + y x^(y-1) ln x
$
== P82 11(2)
#prob[
求函数的全微分:
$
u=sin(x^2 + y^2)
$
]
$
(diff u)/(diff x) = 2 x cos (x^2+y^2)
quad quad
(diff u)/(diff y) = 2 y cos (x^2+y^2)
$
故有
$
dif u = 2x cos(x^2+y^2) dif x + 2y cos (x^2+y^2) dif y
$
== P82 12
#prob[
设函数 $f(x,y,z) = display(root(z, x/y))$,求 $dif f(1,1,1)$。
]
$
&atpos((diff f)/(diff x), (1,1,1), "") = atpos((dif)/(dif x) (x), x=1, "") = 1\
&atpos((diff f)/(diff y), (1,1,1), "") = atpos((dif)/(dif y) (1/y), y=1, "") = atpos(-1/y^2, y=1, "") = -1 \
&atpos((diff f)/(diff z), (1,1,1), "") = atpos((dif)/(dif z) root(z,1), z=1, "") = 0
$
故有
$
dif f (1,1,1) = dif x -dif y
$
== P82 14(1)(2)
#prob[
求下列各式的近似值:
(1) $sqrt(1.02^3 + 1.97^3)$
(2) $0.97^1.05$。
]
(1) 设函数 $f(x,y) = sqrt(x^3 + y^3)$。则
$
f(1,2) = 3
quad
f'_x (1,2) = 1/2
quad
f'_y (1,2) = 2
$
故
$
sqrt(1.02^3+1.97^3) = f(1.02,1.97) = f(1,2) + 0.02 f'_x (1,2) - 0.03 f'_y (1,2) = 2.95
$
(2) 设函数 $f(x,y) = x^y$。则
$
f(1,1) = 1
quad
f'_x (1,1) = 1
quad
f'_y (1,1) = 0
$
故
$
0.97^1.05 = f(0.97,1.05) = f(1,1)- 0.03 f'_x (1,1) + 0.05 f'_y (1,1) = 0.97
$ |
|
https://github.com/CoderEx24/csc425-project | https://raw.githubusercontent.com/CoderEx24/csc425-project/master/paper.typ | typst | #align(center, text(20pt)[
Predicting Obesity
])
#set par(justify: true)
= Abstract
Obesity is a global health concern with severe implications. This project aims to classify obesity
levels using advanced machine learning techniques.
We explore a dataset concerning obesity levels in Mexico, Colombia and Peru.
We developed a model that processes
demographic, dietary, and physical activity data to predict obesity levels.
We compare a number of feature reduction techniques, feature selection techniques, wrapper methods
and models.
Our findings indicate
that machine learning can effectively predict obesity levels, which could be instrumental in
preventive health measures.
/*
The dataset consists of data regarding physical attributes like weight, height and gender,
and data regarding habits like smoking.
We compare a number of feature reduction techniques, feature selection techniques, wrapper methods
and models. Then, all combinations of the aforementioned are evaluated by cross-validation.
*/
= Introduction
Obesity is a disease characterised by excessive fat deposits
that can lead to Type-2 diabetes, heart disease and certain types of cancers.
According to the World Health Organisation, in 2022, 890 million adults suffer from obesity.
Obesity can dramatically impair one's quality of life and contributes to other health complications.
We aim to build a machine learning model that is able to predict obesity levels based on eating habits
and physical conditions.
In this project, we compare a number of methods in order to select the most performant one.
#grid(
rows: 2,
columns: 2,
grid.cell(
colspan: 2,
figure(
image("./hist.png", width: 45%),
caption: [Distribution of classes in the dataset]
)
),
figure(
image("./age-boxplot.png", width: 65%),
caption: [Ages]
),
figure(
image("./height-boxplot.png", width: 65%),
caption: [Heights]
)
)
= Related work
We use the dataset described in @dataset; it consists of 17 attributes and 2111 instances. 23% of the records are original,
while the rest were generated using the SMOTE filter.
#pagebreak()
/*
The main problem addressed in this project is the accurate classification of obesity levels based
on various factors. We utilized several machine learning techniques, including Naive Bayes,
Decision Trees, and Neural Networks, to tackle this issue. The main contribution of this project
is the development of a robust model that outperforms existing benchmarks in accuracy and
efficiency. The rest of the document is organized into sections detailing related work,
methodology, proposed model, results, discussion, conclusion, and future work.
*/
= Methodology
We select a number of feature reduction and feature selection techniques.
For each such technique, we pair it with a classifier model.
The resulting pipeline is then trained, evaluated and compared with other pipelines in order to
select the most performant one.
= Proposed Model
The model consists of a pipeline divided into three stages.
In the first stage, categorical data is encoded using ordinal encoding and missing values of both
categorical and numerical variables are handled. For numerical variables, missing values are replaced
with the mean, while missing values in categorical variables are replaced with the most frequent value.
In the second stage, either feature selection or feature reduction is performed.
For feature reduction, Principal Component Analysis has been chosen.
For feature selection, Variance Threshold or Recursive Feature Elimination has been chosen.
In the third stage, the chosen classifier is fitted on the transformed features.
Once a pipeline is constructed, it is trained and evaluated using cross-validation.
The best model is then selected.
#figure(
image("./pipeline_model.png", width: 35%),
caption: [Pipeline],
)
/*
We use data that's been published by <NAME> and <NAME>.
23% of the data are original, while the rest has been synthesised using Synthetic Minority Oversampling Technique
Filter (SMOTE).
*/
= Results and Discussion
#table(
columns: 3,
[Pipeline], [Mean Accuracy], [Standard Deviation],
[Naive Bayes with PCA], [0.6517507002801121], [0.04107792042513687],
[Naive Bayes with Variance Threshold], [0.5603991596638656], [0.029137943560944846],
[Naive Bayes with RFE], [0.5960434173669468], [0.04209660664344372],
[Decision Tree with PCA], [0.5812324929971989], [0.04008735136514084],
[Decision Tree with Variance Threshold], [0.8466036414565826], [0.03328495167706323],
[Decision Tree with RFE], [0.819922969187675], [0.04010574265762946],
[Multi-Layer Perceptron with PCA], [0.8140406162464986], [0.04086377474357136],
[Multi-Layer Perceptron with Variance Threshold], [0.7698529411764706], [0.038029707473487516],
[Multi-Layer Perceptron with RFE], [0.6935224089635854], [0.08116535759280616],
[k-Nearest Neighbours with PCA], [0.6872899159663866], [0.02590835312810534],
[k-Nearest Neighbours with Variance Threshold], [0.7551470588235294], [0.010054995831748011],
[k-Nearest Neighbours with RFE], [0.7316176470588235], [0.02347189661692996],
)
#figure(
image(
"./Decision Tree-RFE-cm.png",
width: 70%
),
caption: [Confusion Matrix of Decision Tree with Variance Threshold]
) <dtvth>
The results show that the decision tree with variance threshold is the most performant model
given the current dataset.
= Conclusion and Future Work
The project successfully demonstrated the use of machine learning in classifying obesity levels.
Future work could explore the integration of more diverse datasets and the application of deep
learning techniques for improved accuracy.
#bibliography("bib.yml", style: "apa")
|
|
https://github.com/Iskander000/lab | https://raw.githubusercontent.com/Iskander000/lab/main/Lab-3.typ | typst | #import "Class.typ": *
#show: ieee.with(
title: [#text(smallcaps("Lab #3: Web Application with Genie"))],
/*
abstract: [
#lorem(10).
],
*/
authors:
(
(
name: "<NAME>",
department: [Senior-lecturer, Dept. of EE],
organization: [ISET Bizerte --- Tunisia],
profile: "a-mhamdi",
),
(
name: "<NAME>",
department: [Dept. of EE],
organization: [ISET Bizerte --- Tunisia],
profile: "Iskander000",
),
(
name: "<NAME>",
department: [Dept. of EE],
organization: [ISET Bizerte --- Tunisia],
profile: "MATRIXBINARY",
),
/*
(
name: "Student 3",
department: [Dept. of EE],
organization: [ISET Bizerte --- Tunisia],
profile: "abc",
)
*/
)
// index-terms: (""),
// bibliography-file: "Biblio.bib",
)
= Introduction
In this lab, we use the Genie framework in Julia to build a small web application that controls the parameters of a sine wave through adjustable inputs. To get started, we work from the Julia REPL, as shown in @fig:repl.
#figure(
image("Images/REPL.png", width: 100%, fit: "contain"),
caption: "Julia REPL"
) <fig:repl>
= Application
#rect(fill: green)[The first programme "app.jl"]
#let code=read("../Codes/web-app/app.jl")
#raw(code, lang: "julia")
#let code=read("../Codes/web-app/app.jl.html")
#raw(code, lang: "html")
#rect(fill: red)[First step: GenieFramework]
```zsh
julia --project
```
```julia
julia> using GenieFramework
julia> Genie.loadapp() # Load app
julia> up() # Start server
```
- The Genie app is then served at #link("http://127.0.0.1:8000").
#figure(
image("Images\GENIE1.PNG", width: 70%),
caption: [
Genie graphical interface
],
)
#rect(fill: red)[Second step: Adjust the phase]
- Adding the phase parameter to "app.jl":
#let code=read("../Codes/web-app/app.jl")
#raw(code, lang: "julia")
#let code=read("../Codes/web-app/app.jl.html")
Adding the phase parameter to "app.jl.html":
#raw(code, lang: "html")
- The result in the Genie graphical interface:
#figure(
image("Images/GENIE2.png", width: 80%),
caption: "Adding phase parameter",
)
#rect(fill: red)[Third step: Adjust the offset]
- Adding the offset parameter to "app.jl":
#let code=read("../Codes/web-app/app.jl")
#raw(code, lang: "julia")
#let code=read("../Codes/web-app/app.jl.html")
Adding the offset parameter to "app.jl.html":
#raw(code, lang: "html")
- The result in the Genie graphical interface:
#figure(
image("Images/GENIE3.png", width: 80%),
caption: "Adding phase parameter",
) |
|
https://github.com/wflixu/typster | https://raw.githubusercontent.com/wflixu/typster/main/tests/main.typ | typst | MIT License | #set page(
paper: "a4",
margin: (x: 1.8cm, y: 1.5cm),
)
#set text(
font: "New Computer Modern",
size: 16pt
)
#set par(
justify: true,
leading: 0.52em,
)
= Introduction
In this report, we will explore the
various factors that influence fluid
dynamics in glaciers and how they
contribute to the formation and
behaviour of these natural structures.
+ The climate
- Temperature
- Precipitation
+ The topography
+ The geology
#image("./app-icon.png")
The flow rate of a glacier is
defined by the following equation:
$ Q = rho A v + C $
= Introduction
In this report, we will explore the
various factors that influence fluid
dynamics in glaciers and how they
contribute to the formation and
behaviour of these natural structures.
= picture
#figure(
image("app-icon.png", width: 70%),
caption: [
_Glaciers_ form an important part
of the earth's climate system.
],
)
= Methods
We follow the glacier melting models
The equation $Q = rho A v + C$
defines the glacial flow rate.
Total displaced soil by glacial flow:
$ 7.32 beta +
sum_(i=0)^nabla
(Q_i (a_i - epsilon)) / 2 $
$ v := vec(x_1, x_2, x_3) $
#par(justify: true)[
= Background
In the case of glaciers, fluid
dynamics principles can be used
to understand how the movement
and behaviour of the ice is
influenced by factors such as
temperature, pressure, and the
presence of other fluids (such as
water).
]
== Background
#lorem(100)
= package
#import "@preview/cetz:0.2.2": canvas, plot
#let style = (stroke: black, fill: rgb(0, 0, 200, 75))
#canvas(length: 1cm, {
plot.plot(size: (8, 6),
x-tick-step: none,
x-ticks: ((-calc.pi, $-pi$), (0, $0$), (calc.pi, $pi$)),
y-tick-step: 1,
{
plot.add(
style: style,
domain: (-calc.pi, calc.pi), calc.sin)
plot.add(
hypograph: true,
style: style,
domain: (-calc.pi, calc.pi), calc.cos)
plot.add(
hypograph: true,
style: style,
domain: (-calc.pi, calc.pi), x => calc.cos(x + calc.pi))
})
})
= bib
#lorem(40)
@smith2020
#bibliography("works.bib", full: true)
|
https://github.com/nyeong/resume | https://raw.githubusercontent.com/nyeong/resume/main/main.typ | typst | // palette
#let color = (
black: rgb("#333"),
black-strong: rgb("#000"),
gray: rgb("#777"),
primary: rgb(255,111,15),
)
#show link: underline
#let side-padding= 7em
#set document(author: "<NAME>", title: "An Subin's Resume")
#set page(
paper: "a4",
numbering: "1",
number-align: center,
margin: (top: 2cm, bottom: 2cm, left: 2cm, right: 2cm),
)
#set text(
font: "Noto Sans CJK KR",
lang: "ko",
size: 12pt,
fill: color.black,
)
#set par(
justify: true,
leading: 0.8em,
)
#show heading.where(level: 1): it => block(below: 0.8em)[
#text(fill: color.primary, size: 1.42em)[#it.body]
]
#show heading.where(level: 2): it => block(above: 2.4em, below: 1.2em)[
#align(left)[
#set text(size: 1.2em, weight: "regular")
#it.body
#box(width: 1fr, line(length: 100%))
]
]
#let header(
name: "",
role: "",
email: "",
phone: "",
github: ""
) = grid(
columns: (1fr, 1fr),
gutter: auto,
[
= #name
#role
],
align(right)[
#show list.item: it => block()[#it.body]
#set text(size: 0.85em)
- #phone
- #link("mailto:" + email)[#email]
- #link("https://github.com/" + github)[github.com/@#github]
]
)
#let experience(
date: "",
name: "",
role: "",
details: [],
) = grid(
columns: (side-padding, auto),
column-gutter: 1em,
[
#date
],
[
=== #name
#role
#details
]
)
#let skill(dic: (:)) = [
#for (k, v) in dic [
#grid(
columns: (side-padding, auto),
column-gutter: 1em,
[#k],
[#v]
)
]
]
#let opensource(
content,
name: "",
) = block(below: 1.5em)[#grid(
columns: (side-padding, auto),
column-gutter: 1em,
[
=== #name
],
[
#content
]
)]
#header(
name: [안수빈 #text(size: 0.8em)[(An Subin)]],
role: "백엔드/인프라 엔지니어",
email: "<EMAIL>",
phone: "010-9391-2811",
github: "nyeong"
)
== 기술 스택
#skill(dic:(
개발: "TypeScript/Node.js, Express, MySQL, Redis, Remeda, FP, React",
인프라: "AWS, ELB, EC2, RDS(MySQL), ElastiCache(Redis), S3, CloudFront, CodePipeline",
관심: "TypeClass, Programing Language Theory, TDD, 조직 관리",
))
== 경력
#experience(
date: "2023. 10. ~ 현재",
name: link("https://www.gameduo.net/")[게임듀오],
role: "《아처키우기》, 《머지의 신》 백엔드/인프라 엔지니어",
details: [
- 팀 유일 서버 엔지니어로서 게임/어드민 서버 및 제반 인프라 개발/운영/관리
- Node.js, express를 이용한 API 개발, AWS를 이용한 인프라 운영
- 안정/효율적인 인프라 운영을 위한 메트릭 선정 및 Datadog 대시보드 제작
- 메트릭 분석을 통하여 EC2 인스턴스 수 20% 감소, RDS 비용 10% 감소
- 타입을 활용한 내부 라이브러리 개선을 통해 개발 경험 향상, 개발 속도 단축
- 원스토어 출시, xsolla를 활용한 웹샵 출시
]
)
== 오픈소스
#opensource(
name: link("https://annyeong.me/nth-week-today/")[오늘은 몇 주차],
)[
- 오늘이 올해의 몇 주차인지 알려주는 웹 서비스
- 표준(ISO 8601)에 근거하여 #link("https://github.com/nyeong/hanassig/blob/markdown-backup/notes/nth-week-today.md")[주수를 계산하는 알고리즘 구현]
- 관련 키워드 검색 1위, 월 2K+ 달성
]
#opensource(
name: link("https://github.com/gluesql/gluesql")[GlueSQL],
)[
- Rust로 작성된 데이터베이스 라이브러리에 #link("https://www.contribution.ac/2022-ossca")[오픈소스 컨트리뷰션 아카데미] 활동으로 기여
- 쿼리 빌더 라이브러리 "ast_builder" 구현에 참여 #link("https://github.com/gluesql/gluesql/pull/614")[\#614], #link("https://github.com/gluesql/gluesql/pull/627")[\#627]
- Rust 표준 권고에 맞도록 TryInto 트레잇을 TryFrom 트레잇으로 변경 #link("https://github.com/gluesql/gluesql/pull/682")[\#682]
]
== 학력
#experience(
date: "2016. 03. ~
2020. 02",
name: "<NAME>",
role: "컴퓨터정보공학 전공. 미디어콘텐츠공학 복수전공.",
details: []
)
|
|
https://github.com/TGM-HIT/typst-diploma-thesis | https://raw.githubusercontent.com/TGM-HIT/typst-diploma-thesis/main/src/glossary.typ | typst | MIT License | #import "@preview/glossarium:0.5.0" as glossarium: make-glossary, gls, agls, glspl
#let _glossary_entries = state("thesis-glossary-entries")
#let register-glossary(..entries) = {
assert.eq(entries.named(), (:), message: "no named arguments allowed")
let entries = entries.pos()
glossarium.register-glossary(entries)
_glossary_entries.update(entries)
}
/// Stores a glossary entry for this thesis. One call to this function is equivalent to one array
/// entry in Glossarium's ```typc print-glossary()```'s main parameter.
///
/// - key (string): The key with which the glossary entry can be referenced; must be unique.
/// - short (string): Mandatory; the short form of the entry shown after the term has been first
/// defined.
/// - long (string, content): The long form of the term, displayed in the glossary and on the first
///   citation of the term.
/// - desc (string, content): The description of the term.
/// - plural (string, content): The pluralized short form of the term.
/// - longplural (string, content): The pluralized long form of the term.
/// - group (string): The group the term belongs to. The terms are displayed by groups in the glossary.
/// -> content
#let glossary-entry(
key,
short: none,
long: none,
desc: none,
plural: none,
longplural: none,
group: none,
) = {
assert(short != none, message: "short form of glossary-entry is mandatory")
let entry = (
key: key,
short: short,
long: long,
desc: desc,
plural: plural,
longplural: longplural,
group: group,
)
let entry = for (k, v) in entry {
if v != none {
((k): v)
}
}
entry
}
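// Example usage (illustrative only; the key and texts below are placeholders):
//
// #register-glossary(
//   glossary-entry("http", short: "HTTP", long: "Hypertext Transfer Protocol",
//     desc: "The application-layer protocol used on the web."),
// )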
/// Displays a glossary of the entries added via @@glossary-entry().
///
/// - title (content): A (level 1) heading that titles this glossary. If the glossary is empty, the
/// title is not shown.
/// - ..args (arguments): Any extra parameters to the glossarium function of the same name.
#let print-glossary(title: none, ..args) = context {
let entries = _glossary_entries.get()
// let any-references = entries.any(e => {
// let count = glossarium.__query_labels_with_key(here(), e.key).len()
// count > 0
// })
// TODO
let any-references = true
if any-references or args.named().at("show-all", default: false) {
title
}
glossarium.print-glossary(entries, ..args)
}
|
https://github.com/dainbow/MatGos | https://raw.githubusercontent.com/dainbow/MatGos/master/themes/25.typ | typst | #import "../conf.typ": *
= Собственные значения и собственные векторы линейных преобразований. Диагонализуемость линейных преобразований.
#definition[
Линейное отображение $phi: V -> V$ называется *линейным преобразованием*.
Множество всех линейных преобразований на $V$ обозначается как $cal(L)(V)$.
]
#definition[
Пусть $phi in cal(L)(V)$.
Вектор $bold(v) in V without {bold(0)}$ называется *собственным вектором*
оператора $phi$ с собственным значением $lambda in FF$, если $phi(bold(v)) = lambda bold(v)$.
Скаляр $mu in FF$ называется *собственным значением* оператора $phi$, если
существует собственный вектор с собственным значением $mu$.
]
#definition[
*Подпространством* линейного пространства $V$ над полем $FF$ называется такое
его непустое подмножество $U subset V$, что выполнены следующие условия:
- $(U, +)$ -- подгруппа в $(V, +)$
- $forall alpha in F: forall bold(u) in U : space alpha bold(u) in U$
Обозначение $U <= V$.
]
#definition[
Пусть $phi in cal(L)(V); lambda in FF$ -- собственное значение оператора $phi$.
Подпространство $V_lambda := "Ker" (phi - lambda) <= V$ называется *собственным
подпространством* оператора $phi$, соответствующим собственному значению $lambda$.
]
#definition[
Пусть $A in M_n (FF)$.
*Характеристическим многочленом* матрицы $A$ называется многочлен $chi_A (lambda) := det(A - lambda E)$
]
#definition[
Матрицы $A, B in M_n (FF)$ называются *подобными*, если
#eq[
$exists S in M_n (FF), S "обратимая" : space A = S B S^(-1)$
]
]
#definition[
Оператор $phi in cal(L)(V)$ называется *диагонализуемым*, если существует базис
в $V$, в котором матрица $phi$ имеет диагональный вид.
Матрица $A in M_n (F)$ называется *диагонализуемой*, если она подобна некоторой
диагональной.
]
#definition[
Пусть $phi in cal(L)(V), lambda_0 in FF$ -- собственное значение оператора $phi$.
*Алгебраической кратностью* собственного значения $lambda_0$ называется
кратность корня $lambda_0$ в $chi_phi (lambda)$.
*Геометрической кратностью* -- величина $dim V_(lambda_0)$
]
#definition[
Пусть $V$ -- линейное пространство, $U_1, U_2 <= V$.
*Суммой* подпространств $U_1, U_2$ называется следующее множество:
#eq[
$U_1 + U_2 := {bold(u_1) + bold(u_2) | bold(u_1) in U_1, bold(u_2) in U_2}$
]
Аналогично определяется сумма $k$ подпространств.
]
#definition[
Пусть $V$ -- линейное пространство, $U_1, ..., U_k <= V$.
Сумма подпространств $U := U_1 + ... + U_k$ называется *прямой*, если для любого
вектора $bold(u) in U$ существует единственный набор векторов $bold(u_1) in U_1, ..., bold(u_k) in U_k$ такой,
что $bold(u) = bold(u_1) + ... bold(u_k)$.
Обозначение -- $U = U_1 plus.circle ... plus.circle U_k$
]
#theorem[
Пусть $phi in cal(L)(V)$. Тогда равносильны следующие условия:
+ Оператор $phi$ диагонализуем
+ Алгебраическая кратность каждого собственного значения оператора $phi$ равна
геометрической, и $chi_phi$ раскладывается на линейные сомножители, то есть
имеет следующий вид при некоторых $lambda_1, ... lambda_k in FF; alpha_1, ..., alpha_k in NN$ таких,
что $sum_(i = 1)^k alpha_i = n$:
#eq[
$chi_phi (lambda) = product_(i = 1)^k (lambda_i - lambda)^(alpha_i)$
]
+ $V = V_(lambda_1) plus.circle ... plus.circle V_(lambda_k)$, где $V_(lambda_1), ..., V_(lambda_k)$ -- собственные
подпространства оператора $phi$
+ В $V$ есть базис, состоящий из собственных векторов оператора $phi$
]
#proof[
$(1 => 2)$ Пусть в некотором базисе $e$ в $V$ матрица оператора $phi$ имеет
диагональный вид, $lambda_1, ..., lambda_k in FF$ -- различные элементы на
диагонали, $alpha_1, ..., alpha_k in NN$ -- количества их вхождений в матрицу.
Тогда $chi_phi (lambda) = product_(i = 1)^k (lambda_i - lambda)^(alpha_i)$. Для
любого $i in overline("1, k")$ алгебраическая кратность значения $lambda_i$ равна $alpha_i$,
при этом $alpha_i$ базисных векторов из $e$ являются собственными векторами со
значениями $lambda_i$, откуда $dim V_(lambda_i) >= alpha_i$, а обратное
неравенство верно всегда.
$(2 => 3)$ Пусть $V_(lambda_1), ..., V_(lambda_j) <= V$ -- собственные
подпространства оператора $phi$. Их сумма -- прямая (т.к. базис) и по условию $sum_(i = 1)^k dim V_(lambda_i) = sum_(i = 1)^k alpha_i = n$,
поэтому $V_(lambda_1) plus.circle ... plus.circle V_(lambda_k) = V$
$(3 => 4)$ Выберем базисы $e_1, ..., e_k$ в пространствах $V_(lambda_1), ..., V_(lambda_k)$.
Тогда, так как сумма этих подпространств прямая, то объединение этих базисов
даёт базис в $V$.
$(4 => 1)$ Если $e$ -- базис из собственных векторов, то именно в этом базисе
матрица оператора $phi$ имеет требуемый диагональный вид.
]
|
|
https://github.com/parallel101/cppguidebook | https://raw.githubusercontent.com/parallel101/cppguidebook/main/misc/typst/test.typ | typst | Other | #set text(
font: "Noto Serif CJK SC",
size: 7pt,
)
#set page(
paper: "a6",
margin: (x: 1.8cm, y: 1.5cm),
header: align(right)[
小女孩贴贴教学篇
],
numbering: "1",
)
#set par(justify: true)
#set heading(numbering: "1.")
#show "小女孩贴贴": name => box[
$phi$#{name}$phi$
]
#align(center, text(17pt)[
*基于春卷饭世界观的一个小女孩贴贴实现*
])
#grid(
columns: (1fr, 1fr),
align(center)[
<NAME> \
Genshin Institute \
#link("mailto:<EMAIL>")
],
align(center)[
Dr. Archibate \
Genshin Institute \
#link("mailto:<EMAIL>")
]
)
Keywords: 小女孩贴贴, 春卷饭, 实现, 性能测试, 可爱, 艺术表现, 想象力, 创造力, 冬天, 香香的, 吻, 百合, 好看
= 小女孩贴贴性能测试
#let count = 2
如 @cover 所示,小女孩贴贴是一种可爱的艺术表现形式。
他可以体现出女孩的想象力和创造力,特别是在冬天。
#figure(
image("pic/cover.png", width: 70%),
caption: [
#count 个可爱的小女孩正在贴贴
],
) <cover>
== 背景
图中,一个小女孩正在贴贴,她的手指上沾满了春卷饭,她的嘴角还留着香香的味道。
她的脸上有吻的痕迹,这是她和她的伴侣之间的百合。
== 实现
小女孩贴贴是基于春卷饭世界观的一个实现。
为了测试其性能,我们进行了一系列的测试,结果表明,小女孩贴贴具有出色的性能。
它能够在短时间内完成大量的贴贴工作,而且每个贴贴都非常好看。
== 结论
小女孩贴贴是一个非常可爱的艺术表现形式,它能够体现出女孩的想象力和创造力。
它的性能也非常出色,能够在短时间内完成大量的贴贴工作。
因此,我们建议大家多多尝试小女孩贴贴,感受它带来的乐趣和美妙的艺术表现。
|
https://github.com/didiercrunch/typst-repo | https://raw.githubusercontent.com/didiercrunch/typst-repo/main/test-remote/bar.typ | typst | #let first_name = "didier"
#let name = "this is easy"
|
|
https://github.com/francescoo22/kt-uniqueness-system | https://raw.githubusercontent.com/francescoo22/kt-uniqueness-system/main/src/annotation-system/rules/statements.typ | typst | #import "../../proof-tree.typ": *
#import "../../vars.typ": *
// ****************** Statements typing rules ******************
#let Decl = prooftree(
axiom($x in.not Delta$),
rule(label: "Decl", $mid(var x), x : top$),
)
#let Assign-Null = prooftree(
axiom($Delta(p) = alpha beta$),
axiom($Delta[p |-> unique] = Delta'$),
rule(n:2, label: "Assign-Null", $mid(p = null)'$),
)
// TODO: in the case where the lhs is borrowed, should we require std?
#let Assign-Unique = {
let a0 = $p' subset.sq.eq.not p$
let a1 = $Delta(p) = alpha beta$
let a2 = $Delta(p') = unique$
let a3 = $Delta tr sp(p') = p'.overline(f_0) : alpha_0 beta_0, ..., p'.overline(f_n) : alpha_n beta_n$
let a4 = $Delta[p' |-> top] = Delta_1$
let a5 = $Delta_1[p |-> unique] = Delta'$
prooftree(
stacked-axiom(
(a0, a1, a2, a4),
(a3, a5)
),
rule(label: "Assign-Unique", $mid(p = p')', p.overline(f_0) : alpha_0 beta_0, ..., p.overline(f_n) : alpha_n beta_n$),
)
}
#let Assign-Shared = {
let a0 = $p' subset.sq.eq.not p$
let a1 = $Delta(p) = alpha$
let a2 = $Delta(p') = shared$
let a3 = $Delta tr sp(p') = p'.overline(f_0) : alpha_0 beta_0, ..., p'.overline(f_n) : alpha_n beta_n$
let a4 = $Delta[p |-> shared] = Delta'$
prooftree(
stacked-axiom(
(a0, a1, a2),
(a3, a4),
),
rule(label: "Assign-Shared", $mid(p = p')', p.overline(f_0) : alpha_0 beta_0, ..., p.overline(f_n) : alpha_n beta_n$),
)
}
#let Assign-Borrowed-Field = {
let a0 = $p'.f subset.sq.eq.not p$
let a1 = $Delta(p) = alpha beta$
let a2 = $Delta(p'.f) = alpha' borrowed$
let a3 = $alpha' != top$
let a4 = $(beta = borrowed) => (alpha' = unique)$
let a5 = $Delta[p'.f |-> top] = Delta_1$
let a6 = $Delta tr sp(p'.f) = p'.f.overline(f_0) : alpha_0 beta_0, ..., p'.f.overline(f_n) : alpha_n beta_n$
let a7 = $Delta_1[p |-> alpha'] = Delta'$
prooftree(
stacked-axiom(
(a0, a1, a2, a3, a4),
(a5, a6, a7)
),
rule(
label: "Assign-Borrowed-Field",
$mid(p = p'.f)', p.overline(f_0) : alpha_0 beta_0, ..., p.overline(f_n) : alpha_n beta_n$
)
)
}
#let Begin = prooftree(
axiom($mtype(m) = alpha_0 beta_0, ..., alpha_n beta_n -> alpha$),
axiom($args(m) = x_0, ..., x_n$),
rule(n:2, label: "Begin", $dot tr begin_m tl x_0 : alpha_0 beta_0, ..., x_n : alpha_n beta_n$),
)
#let Seq-Base = prooftree(
axiom($$),
rule(label: "Seq-Base", $mid(overline(s) equiv dot)$)
)
#let Seq-Rec = prooftree(
axiom($Delta tr s_0 tl Delta_1$),
axiom($Delta_1 tr overline(s') tl Delta'$),
rule(n:2, label: "Seq-Rec", $mid(overline(s) equiv s_0\; overline(s'))'$)
)
#let If = {
let a1 = $Delta(p_1) != top$
let a2 = $Delta(p_2) != top$
let a3 = $Delta tr overline(s_1) tl Delta_1$
let a4 = $Delta tr overline(s_2) tl Delta_2$
let a5 = $unify(Delta, Delta_1, Delta_2) = Delta'$
prooftree(
stacked-axiom((a1, a2), (a3, a4, a5)),
rule(label: "If", $mid(fi p_1 == p_2 then overline(s_1) els overline(s_2))'$),
)
}
#let Assign-Call = prooftree(
axiom($Delta(p) = alpha' beta'$),
axiom($Delta tr m(overline(p)) tl Delta_1$),
axiom($mtype(m) = alpha_0 beta_0, ..., alpha_n beta_n -> alpha$),
axiom($(beta' = borrowed) => (alpha = unique)$),
axiom($Delta_1[p |-> alpha] = Delta'$),
rule(n:5, label: "Assign-Call", $mid(p = m(overline(p)))'$)
)
#let Call = {
let a0 = $forall 0 <= i <= n : Delta(p_i) = alpha_i beta_i$
let a1 = $mtype(m) = alpha_0^m, beta_0^m, ..., alpha_n^m beta_n^m -> alpha_r$
let a2 = $forall 0 <= i <= n : Delta tr std(p_i, alpha_i^m beta_i^m)$
let a3 = $forall 0 <= i, j <= n : (i != j and p_i = p_j) => alpha_i^m = shared$
let a4 = $forall 0 <= i, j <= n : p_i subset.sq p_j => (Delta(p_j) = shared or alpha_i^m = alpha_j^m = shared)$
// Note: If we have more permissions than std, by passing to borrowed we should be able to keep those permissions. Anyway this is probably going to be false in the future so it's ok to keep it as it is
let a5 = $Delta minus.circle (p_0, ..., p_n) = Delta'$
let a6 = $forall 0 <= i <= n : alpha_i beta_i ~> alpha_i^m beta_i^m ~> alpha'_i beta'_i$
let a7 = $norm(p_0 : alpha'_0 beta'_0, ..., p_n : alpha'_n beta'_n) = p'_0 : alpha''_0 beta''_0, ..., p'_m : alpha''_m beta''_m$
prooftree(
stacked-axiom(
(a0, a1), (a2, a3), (a4, a5), (a6, a7)
),
rule(label: "Call", $mid(m(p_0, ..., p_n))', p'_0 : alpha''_0 beta''_0, ..., p'_m : alpha''_m beta''_m$)
)
}
// #let Return-Null = prooftree(
// stacked-axiom(
// ($mtype(m) = alpha_0^m, beta_0^m, ..., alpha_n^m beta_n^m -> alpha_r$,),
// ($forall 0 <= i, j <= n : (alpha_i beta_i != unique) => Delta tr std(p_i, alpha_i beta_i)$,),
// ),
// rule(label: "Return-Null", $Delta tr ret_m null tl dot$),
// )
#let Return-p = {
let a1 = $mtype(m) = alpha_0^m, beta_0^m, ..., alpha_n^m beta_n^m -> alpha_r$
let a2 = $Delta(p) = alpha$
let a3 = $alpha rel alpha_r$
let a4 = $Delta tr std(p, alpha_r)$
let a5 = $forall 0 <= i, j <= n : (alpha_i beta_i != unique) => Delta tr std(p_i, alpha_i beta_i)$
prooftree(
stacked-axiom(
(a1, a2, a3),
(a4, a5)
),
rule(label: "Return-p", $Delta tr ret_m space p tl dot$),
)
}
// #let Return-m = {
// let a0 = $mtype(m) = alpha_0^m, beta_0^m, ..., alpha_n^m beta_n^m -> alpha_r$
// let a1 = $mtype(m') = alpha_0^m', beta_0^m', ..., alpha_n'^m' beta_n'^m' -> alpha'_r$
// let a2 = $mid(m'(p_0, ..., p_n'))'$
// let a3 = $alpha'_r rel alpha_r$
// let a4 = $forall 0 <= i, j <= n : (alpha_i beta_i != unique) => Delta' tr std(p_i, alpha_i beta_i)$
// prooftree(
// stacked-axiom(
// (a0,),
// (a1,),
// (a2, a3),
// (a4,)
// ),
// rule(label: "Return-m", $Delta tr ret_m space m'(p_0, ..., p_n') tl dot$),
// )
// }
#let While-Base = prooftree(
axiom($mid(overline(s))_1$),
axiom($unify(Delta, Delta, Delta_1) = Delta$),
rule(n:2, label: "While-Base", $mid(loop p_1 == p_2 do overline(s))$)
)
#let While-Rec = {
let a1 = $mid(overline(s))_1$
let a2 = $unify(Delta, Delta, Delta_1) = Delta_2$
let a3 = $Delta_2 tr loop p_1 == p_2 do overline(s) tl Delta'$
prooftree(
stacked-axiom((a1, a2), (a3,)),
rule(label: "While-Rec", $mid(loop p_1 == p_2 do overline(s))'$)
)
}
#let While = {
let a1 = $u^*_(overline(s)) (Delta) = Delta'$
let a2 = $$
let a3 = $$
prooftree(
axiom(a1),
rule(label: "While", $mid(loop p_1 == p_2 do overline(s))'$)
)
}
#let U-Fun = prooftree(
axiom($mid(s)'$),
axiom($unify(Delta, Delta, Delta') = Delta_1$),
rule(n:2, label: "", $u_s (Delta) = Delta_1$)
) |
|
https://github.com/vEnhance/1802 | https://raw.githubusercontent.com/vEnhance/1802/main/src/triple.typ | typst | MIT License | #import "@local/evan:1.0.0":*
= Triple integrals
We're going to now consider integrals with three variables, rather than two.
In fact there is almost nothing new in this section:
#idea[
All the two-variable stuff ports over to three-variable stuff in the obvious way.
]
|
https://github.com/dariakoz/Typst-Template | https://raw.githubusercontent.com/dariakoz/Typst-Template/main/README.md | markdown | Hey!
Das ist mein Typst-Template für meine Praxisarbeit.
Tipps für die eigene Benutzung werde ich noch in Zukunft hier hin schreiben.
Bis dahin schreib mir bitte bei Fragen eine Mail an <EMAIL> |
|
https://github.com/LuminolT/SHU-Bachelor-Thesis-Typst | https://raw.githubusercontent.com/LuminolT/SHU-Bachelor-Thesis-Typst/main/template/body.typ | typst | #import "font.typ": *
// 助手函数,很恐怖吗?是的,这很恐怖。
#let zhnumbers(num) = {
if num == "1" { "一" }
else if num == "2" { "二" }
else if num == "3" { "三" }
else if num == "4" { "四" }
else if num == "5" { "五" }
else if num == "6" { "六" }
else if num == "7" { "七" }
else if num == "8" { "八" }
}
#let show_body(body) ={
pagebreak()
set heading(numbering: "1.1.1")
show heading: it => locate(loc => {
let levels = counter(heading).at(loc)
let deepest = if levels != () {
levels.last()
} else {
1
}
set text(10pt, weight: 400)
if it.level == 1 {
set align(center)
set text(font:heiti, size: 18pt, weight: "bold")
if it.numbering != none {
"第" + zhnumbers(numbering("1", deepest)) + "章 "
}
it.body
} else if it.level == 2 {
set text(font:heiti, size: 14pt, weight: "bold" )
it
} else if it.level == 3 {
set text(font:heiti, size: 12pt, weight: "bold" )
it
}
})
// 设置正文格式
set text(font: songti, size:12pt)
set par(justify: false, leading: 1.5em, first-line-indent: 2em)
show par : it => {
it
v(5pt)
}
// 设置公式格式
set math.equation(numbering: "(1)")
body
} |
|
https://github.com/The-Notebookinator/notebookinator | https://raw.githubusercontent.com/The-Notebookinator/notebookinator/main/themes/radial/colors.typ | typst | The Unlicense | #let red = rgb("#e03131")
#let pink = rgb("#c2255c")
#let purple = rgb("#9c36b5")
#let violet = rgb("#6741d9")
#let indigo = rgb("#3b5bdb")
#let blue = rgb("#1971c2")
#let cyan = rgb("#0c8599")
#let teal = rgb("#099268")
#let green = rgb("#2f9e44")
#let lime = rgb("#66a80f")
#let yellow = rgb("#f08c00")
#let orange = rgb("#e8590c")
#let gray = rgb("#343a40")
#let white = rgb("#ffffff")
#let black = rgb("#1e1e1e")
#let surface-0 = rgb("#f1f3f5")
#let surface-1 = rgb("#e9ecef")
#let surface-2 = rgb("#dee2e6")
#let surface-3 = rgb("#ced4da")
#let surface-4 = rgb("#adb5bd")
|
https://github.com/VisualFP/docs | https://raw.githubusercontent.com/VisualFP/docs/main/SA/design_concept/appendix/appendix.typ | typst | #import "../../style.typ": *
#import "../../acronyms.typ": *
#part("Appendix")
#pagebreak()
= Task Description <appendix_task_description>
#image("../static/task_description_page_one.png")
#image("../static/task_description_page_two.png")
#pagebreak()
= Design Evaluation Questionnaire Template <appendix_design_eval_questionnaire>
#for n in ("1", "2", "3", "4", "5", "6") {
image("../static/concept_questionnaire_" + n + ".png")
}
#pagebreak()
#include_section("design_concept/appendix/questionnaire_answers.typ")
#sa_list_of_acronyms() <list_of_acronyms>
#sa_bibliography()
#sa_list_of_figures()
#sa_list_of_tables()
#sa_list_of_listings()
#sa_disclaimer() |
|
https://github.com/noahjutz/AD | https://raw.githubusercontent.com/noahjutz/AD/main/components/num_row.typ | typst | #import "/config.typ": theme
#import "@preview/cetz:0.2.2"
#let _arrow_canvas(
length,
direction,
color: black
) = cetz.canvas(length: 100%, {
import cetz.draw: *
let u = 1/(length+1)
let tl = if length == 0 {(0, 0)} else {(u/2, 0)}
let tr = if length == 0 {(8pt, 0)} else {(1-u/2, 0)}
stroke(color)
line(
tl,
(rel: (0, -10pt)),
name: "line_start"
)
arc(
"line_start.end",
start: 180deg,
stop: 270deg,
radius: 4pt,
name: "arc_start"
)
line(
tr,
(rel: (0, -10pt)),
name: "line_end"
)
arc(
"line_end.end",
start: 0deg,
stop: -90deg,
radius: 4pt,
name: "arc_end"
)
line(
"arc_start.end",
"arc_end.end",
name: "line_mid",
)
let m
if direction in ("left", "bidirectional") {
mark(
"arc_start",
"line_start.start",
symbol: "straight",
length: 6pt,
)
}
if direction in ("right", "bidirectional") {
mark(
"arc_end",
"line_end.start",
symbol: "straight",
length: 6pt,
)
}
})
#let _full_row(n, body) = table.cell(
colspan: n,
stroke: none,
inset: 0pt,
body
)
#let _empty_row(n, pad) = _full_row(
n,
box(height: pad)
)
// Directions: normal, reversed, bidirectional, none
#let _arrow_row(arrow, n, offset: 0) = {
if arrow.keys().len() == 0 {
return (_empty_row(n, 14pt),)
}
arrow.from += offset
arrow.to += offset
let min = calc.min(arrow.from, arrow.to)
let max = calc.max(arrow.from, arrow.to)
if min > 0 {
(table.cell(
colspan: min,
stroke: none,
inset: 0pt,
""
),)
}
(table.cell(
colspan: max - min + 1,
inset: 0pt,
stroke: none,
_arrow_canvas(
max - min,
if "direction" not in arrow.keys() or arrow.direction == "normal" {
if arrow.to > arrow.from {"right"} else {"left"}
} else if arrow.direction == "reversed" {
if arrow.to < arrow.from {"right"} else {"left"}
} else {arrow.direction},
color: if "color" in arrow.keys() {arrow.color} else {black}
)
),)
if max < n - 1 {
(table.cell(
colspan: n - max - 1,
stroke: none,
inset: 0pt,
""
),)
}
}
#let _prefix_row(
prefix,
frame
) = prefix.map(p => table.cell(
stroke: frame,
text(fill: theme.fg_light, p)
))
#let _nums_row(
nums,
hl_primary,
hl_secondary,
hl_tertiary,
hl_success,
hl_gray
) = {
if type(hl_primary) != array {
hl_primary = (hl_primary,)
}
if type(hl_secondary) != array {
hl_secondary = (hl_secondary,)
}
if type(hl_tertiary) != array {
hl_tertiary = (hl_tertiary,)
}
if type(hl_success) != array {
hl_success = (hl_success,)
}
if type(hl_gray) != array {
hl_gray = (hl_gray,)
}
return nums.enumerate().map(((i, n)) => table.cell(
fill: if i in hl_primary {theme.primary_light}
else if i in hl_secondary {theme.secondary_light}
else if i in hl_tertiary {theme.tertiary_light}
else if i in hl_success {theme.success_light}
else if i in hl_gray {theme.fg_light},
if type(n) != content {str(n)} else {n}
))
}
#let _label_row(
n,
labels,
align,
offset: 0
) = {
let first = labels.first().at(0)
let last = labels.last().at(1)
first += offset
last += offset
if first > 0 {
(table.cell(
colspan: first,
inset: 0pt,
stroke: none,
""
),)
}
labels.zip(labels.slice(1) + ((-1, -1, none),))
.map((((f1, t1, b1), (f2, t2, b2))) => {
(
table.cell(
colspan: t1 - f1,
align: align,
inset: 0pt,
stroke: none,
b1
),
if f2 - t1 > 0 {
table.cell(
colspan: f2 - t1,
inset: 0pt,
stroke: none,
[]
)
}
)
})
.flatten()
.filter(it => it != none)
if last < n {
(table.cell(
colspan: n - last,
inset: 0pt,
stroke: none,
""
),)
}
}
#let _arrow_down_row(
arrow_down,
n,
offset: 0
) = _full_row(n, cetz.canvas(length: 100%, {
import cetz.draw: *
line((0, 0), (1, 0), stroke: none)
translate(x: .5/n)
for (from, to, ..style) in arrow_down {
from += offset
to += offset
let x_from = from / n
let x_to = to / n
group({
if style.len() > 0 {
stroke(style.at(0))
}
bezier(
(x_from, 0),
(x_to, -32pt),
(x_from, -16pt),
(x_to, -16pt),
mark: (end: ">")
)
})
}
}))
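// Builds the table cells for a single row of numbers: an optional prefix, the numbers
// themselves (with primary/secondary/tertiary/success/gray highlight options), optional
// label rows above and below, an arrow row, downward arrows, and a trailing spacer of
// height `below`.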
#let num_row(
nums,
hl_primary: none,
hl_secondary: none,
hl_tertiary: none,
hl_success: none,
hl_gray: none,
arrow: none,
arrow_down: none,
labels: none,
labels_b: none,
below: 4pt,
prefix: (),
prefix_frame: theme.fg_light
) = {
let n = prefix.len() + nums.len()
(table.cell(
rowspan: 2 + (arrow, arrow_down, labels, labels_b)
.map(it => int(it != none))
.sum(),
breakable: false,
inset: 0pt,
stroke: none,
[]
),)
if labels != none {
_label_row(
n,
labels,
bottom,
offset: prefix.len()
)
}
_prefix_row(prefix, prefix_frame)
_nums_row(
nums,
hl_primary,
hl_secondary,
hl_tertiary,
hl_success,
hl_gray
)
if labels_b != none {
_label_row(
n,
labels_b,
top,
offset: prefix.len()
)
}
if arrow != none {
_arrow_row(
arrow,
n,
offset: prefix.len()
)
}
if arrow_down != none {
(_arrow_down_row(
arrow_down,
n,
offset: prefix.len()
),)
}
(_empty_row(
n,
below
),)
}
#let single_num_row(
nums,
hl_primary: none,
hl_secondary: none,
hl_tertiary: none,
hl_success: (),
arrow: none,
labels: none,
labels_b: none,
below: 0pt,
prefix: (),
) = table(
columns: (auto,) + (1fr,) * (nums.len() + prefix.len()),
align: center,
..num_row(
nums,
hl_primary: hl_primary,
hl_secondary: hl_secondary,
hl_tertiary: hl_tertiary,
hl_success: hl_success,
arrow: arrow,
labels: labels,
labels_b: labels_b,
below: below,
prefix: prefix
)
)
#let braced(body) = cetz.canvas(length: 100%, {
import cetz.draw: *
circle((0, 0), radius: 0)
hide(line((0, 0), (1, 0), name: "brace_line"))
circle(
(rel: (0, 2pt), to: "brace_line.start"),
radius: 0,
name: "start"
)
circle(
(rel: (0, 2pt), to: "brace_line.end"),
radius: 0,
name: "end"
)
group({
set-viewport((0, 0), (24pt, 24pt))
cetz.decorations.flat-brace(
"start",
"end",
)
})
content(
(rel: (0, 16pt), to: "brace_line.mid"),
anchor: "south",
body
)
})
#let arrowed(body) = cetz.canvas(length: 100%, {
import cetz.draw: *
circle((0, 0), radius: 0)
line(
(0, 14pt),
(0, 0),
mark: (end: ">"),
name: "arrow"
)
content(
(0, 16pt),
anchor: "south",
body
)
})
#let arrowed_b(body) = cetz.canvas(length: 100%, {
import cetz.draw: *
circle((0, 0), radius: 0)
line(
(0, -14pt),
(0, 0),
mark: (end: ">"),
name: "arrow"
)
content(
(0, -16pt),
anchor: "north",
body
)
})
#let braced_b(body) = cetz.canvas(length: 100%, {
import cetz.draw: *
circle((0, 0), radius: 0)
hide(line((0, 0), (1, 0), name: "brace_line"))
circle(
(rel: (0, -2pt), to: "brace_line.start"),
radius: 0,
name: "end"
)
circle(
(rel: (0, -2pt), to: "brace_line.end"),
radius: 0,
name: "start"
)
group({
set-viewport((0, 0), (24pt, 24pt))
cetz.decorations.flat-brace(
"start",
"end",
)
})
content(
(rel: (0, -16pt), to: "brace_line.mid"),
anchor: "north",
body
)
}) |
|
https://github.com/nafkhanzam/typst-common | https://raw.githubusercontent.com/nafkhanzam/typst-common/main/src/common/utils.typ | typst | #let call-or-value(a, args) = if type(a) == "function" {
a(..args)
} else {
a
}
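// Example: call-or-value(x => x + 1, (1,)) evaluates to 2, while call-or-value(3, ()) returns 3 unchanged.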
|
|
https://github.com/polarkac/MTG-Stories | https://raw.githubusercontent.com/polarkac/MTG-Stories/master/stories/047%20-%20Pride%20Across%20the%20Multiverse/001_Note%20for%20a%20Stranger.typ | typst | #import "@local/mtgstory:0.2.0": conf
#show: doc => conf(
"Note for a Stranger",
set_name: "Pride Across the Multiverse",
story_date: datetime(day: 02, month: 05, year: 2022),
author: "<NAME>",
doc
)
On the exterior wall of an inn in Precinct Five on the city-plane of Ravnica hangs the Traveler's Noticeboard, warded from rain, time, and those lacking a planeswalker's spark. The bulletin board hangs in a private space in the public square, sheltered by a nest of ivy and the charitable shade of the inn's clay tile roof. It's difficult to catch someone in the act of posting on the board, since so few can see it, and fewer still bother to walk all the way to Precinct Five, but if someone were to approach, they'd find a broad stretch of cork and the earthy musk of a distant and damp rainforest. Scattered as seeds across the board are dozens of requests, warnings, love letters, search warrants, bounties, lovingly sketched pictures, poorly drawn portraits, and all manner of notes and notices, each placed by the rarest of travelers.
The notes range in usefulness. On parchment, hanging dead center:
#emph[TWO EXTRA TICKETS: Championship Singles Match, Valor's Reach, next planar quarter-moon. Leave planar contact address below if interested!!]
Just to the right, on the back of a menu:
#emph[LOST: Tie clip—silver beetle. Last seen Whisperers encampment, Qal Sisma mtns, Tarkir. Return at once, will trade 20 units of Halo. Leave under violin case, 187 S 14th St, New Capenna]
On an old sheet of paper torn by years of weather, in a haphazard and bold scrawl:
#emph[FOR A GOOD TIME, 44 TOFANA PLAZA, PALIANO, FIORA]
But on a warm amber morning in early summer, a fresh note is stuck with a slim silver pin near the top:
#emph[Seeking raw copper.]
#emph[Will trade currency of choice in exchange for discretion.]
#emph[—R]
Here, at the Traveler's Noticeboard, the planeswalker Huatli stands, feet firmly planted, chin tipped up out of equal parts confidence and short stature, and skims the board because #emph[why not] . Her eye is caught by the fresh note up top, and she remembers the raw copper brought from Ixalan to trade when she first arrived on this plane.
In the square behind her, linden trees brighten in verdant praise and dandelions and clover punch through the cobblestones, the rubble of the city's recent sorrows cleared away at last.
Not one to leave a stranger in want, Huatli rises on her tiptoes to snag an old piece of paper from another corner of the board and responds in a loopy and flourished hand. Thus begins their correspondence.
#emph[Is there an appropriate location to trade if interested?]
#emph[—H]
#v(0.35em)
#line(length: 100%, stroke: rgb(90%, 90%, 90%))
#v(0.35em)
#emph[Write your initial on the box below the bulletin board. Your signature has been encoded into the warding.]
#emph[—R]
"Seems like overkill," Huatli murmurs to no one in particular. She looks down. Sure enough, at her feet is a silver box tied to the post with a delicate metal chain. Huatli taps it with the steel of her toe. Yes. Must be that box.
She kneels, a little uncertain she's doing this right, and uses her index finger to draw a quick H on the lid of the box. It silently slides open, revealing an empty space inside.
"Uh~"
Feeling slightly reassured by this development, Huatli places a medium-sized lump of Sun Empire copper inside, along with a quick inquiring note.
#emph[Are you a spy?]
#emph[—H]
She seals the box, only realizing after it's shut that she forgot to mention what currency she wanted in return. Huatli says a word she once heard Angrath use and draws the H on top of the box again.
It remains closed.
It isn't until she returns the next day and tries again that the box reopens, a fresh note inside.
#emph[I am a very authentic spy.]
#emph[Also you forgot to mention how I should pay you.]
#emph[—R]
Huatli smiles. Whoever the spy is, they're cheeky.
#emph[Attention: Secret Agent R,]
#emph[As a very authentic secret agent, what inspired you to become a tool of the state? What honor do you find in the shadows? What enriches the soul of a warrior who hides who they are? Most importantly, what kind of garotte do you intend to craft out of my copper?]
#emph[I await your encoded transmission, Agent R.]
#emph[Also not a secret agent,]
#emph[—H]
#emph[P.S. I am a woman of honor and do not need compensation in exchange for the goods I gift in the spirit of altruism.]
The next morning brings a new note from the mystery writer that draws her smile wider, eliciting a laugh and a thrill in her heart.
#emph[Not A Secret Agent,]
#emph[I had to ask my colleague what a garotte was and regret doing so.]
#emph[Down to business. I will not press the issue of payment again. I appreciate your generosity—you are inadvertently helping me right a tremendous extraplanar wrong.]
#emph[That said] ~#emph[ you caught me. I don't think I would make a very good spy anyway. Being a "tool of the state" sounds like it isn't very fun. I think I would need to get better at lying first. What gave me away—was it saying I was a very real spy? I thought that would be good cover, alas.]
#emph[Your copper will not be used to make a garotte, I promise, but it is a critical conductor I'm integrating into a project I'm crafting for a friend! That part is actually real. Truly. Seriously. I'm a lady, not a liar.]
#emph[You write beautifully, by the way. What brings you to this plane, or are you a local?]
#emph[—R]
Huatli spends the next day with a light heart, pleased her assumption was correct. #emph["I'm a lady."] She spends the day reading and rereading the letter, privately beaming in victory. #emph[Bullseye] .
She skips from consultation to consultation, from the Azorius legislative records to a long meeting with the sub-lieutenant of the Boros. All the while she can't stop thinking of R, her spy-that-isn't-a-spy.
After an afternoon of trying to contain herself, she returns to the box and leaves her response.
#emph[Lady, Not A Liar,]
#emph[I've spent the last two months on this plane studying the executive structure of the guilds. My home's governmental system is quite different, but I've found in my years there is always something to learn from others. (I now realize, am I a tool of the state?! How embarrassing.) I hope to bring back learnings to help my own home improve.]
#emph[Perhaps that is naive. Ravnica's government was founded magically, shaped by a foreign planeswalker's hand; it seems designed to spur conflict but functions all the same. I'm not convinced the guild's executive structure is superior to our own, but one can glean inspiration in many surprising sources.]
#emph[I think of the war and conflict back home and wonder if there is something better. Maybe as a former fake spy you have some insight into how to end conflict. Is it naive to seek utopia?]
#emph[—H]
#v(0.35em)
#line(length: 100%, stroke: rgb(90%, 90%, 90%))
#v(0.35em)
#emph[Seeker of Utopia,]
#emph[Utopia, hm? I've heard my own home described that way! The point of utopia is that it is impossible, but then again, the point of the impossible is to always strive for it anyway, yes? Nothing is ever finished—that's what I was taught. My mentor's inability to be impressed has led me to be the kind of creator who can't ever put down my tools and call it a day. Is that why you seek to learn from other planes? Do you wish to perfect your own home, even if you know the work will never be completed?]
#emph[I don't think you're naive. I think it's beautiful you want to make your home a better place.]
#emph[Former Fake Spy,]
#emph[—R]
#v(0.35em)
#line(length: 100%, stroke: rgb(90%, 90%, 90%))
#v(0.35em)
#emph[R,]
#emph[The least we can do in this life is try. The more I learn how everyone else strives for perfection, the more I can help my home.]
#emph[You speak of your mentor with warmth but describe what strikes me as cruel treatment. If this box we're using is proof of your genius, I am angry on your behalf that your mentor was unimpressed with your work. It is truly remarkable.]
#emph[You should be proud to make such beautiful things.]
#emph[—H]
#v(0.35em)
#line(length: 100%, stroke: rgb(90%, 90%, 90%))
#v(0.35em)
#emph[H,]
#emph[I'm flattered that you rush to defend my honor! Please don't assume bad faith—my mentor always encourages me to grow and improve. She means the world to me, as do your kind words.]
#emph[What do you know of Dominaria? I've been staying there and returning to Ravnica for supply runs. I don't know you, but I think you'd like it.]
#emph[Perhaps you could meet me there sometime?]
#emph[Yours,]
#emph[—R]
#v(0.35em)
#line(length: 100%, stroke: rgb(90%, 90%, 90%))
#v(0.35em)
Huatli's heart catches in her throat as she reads that morning's missive. By reflex, she defaults to formality.
#emph[Dear R,]
#emph[All I know of Dominaria is gossip of ancient and careless men. I trust your judgment, though. If you say it is beautiful it must be a place to be cherished! I would like to see it and meet you in person, if you are amenable.]
#emph[—H]
A response doesn't come for two days.
Huatli worries she did something wrong. Until one morning in her box by the noticeboard~
#emph[H,]
#emph[Let's cut to the chase. I'm interested, you're interested. Want to meet up for a drink?]
#emph[Yours if you like,]
#emph[—R]
Huatli doesn't have a heart attack, but comes close. She responds instinctually, shamelessly, with a reflex of over-the-top poetry—
#emph[R,]
#emph[You lunge when I dip, and your forward strike has struck me plain and kindled the blush of my cheeks.]
#emph[All I want is to grow my home into the paradise it could be. Maybe paradise is something we can pursue together.]
#emph[I'll be at Iveta's Tavern tomorrow at moonrise. Please accept my charity and let me buy you a drink.]
#emph[—H]
#v(0.35em)
#line(length: 100%, stroke: rgb(90%, 90%, 90%))
#v(0.35em)
#emph[H,]
#emph[You make me feel like a teenager. I wish I could write as artfully as you. Yes, I will be there.]
#emph[We can start in Ravnica, but let's travel somewhere fun for dinner. I know a fascinating place for a second round in Park Heights! Have you been to New Capenna? It's my favorite to visit on holiday—there's this beautiful dance hall we can check out. You've never seen so much gold in your life! Sometimes there are fights, but it is very safe, I promise. Amazing drinks, amazing fashion, great people-watching, and very fast dancing! I hope that isn't intimidating. We can meet at Iveta's for round one and planeswalk from there?]
#emph[Come dance with me, H,]
#emph[—R]
#v(0.35em)
#line(length: 100%, stroke: rgb(90%, 90%, 90%))
#v(0.35em)
This time, Huatli can't contain her flattered gasp. She spends the rest of the day trying not to burst in anticipation. R said yes. And suggested a second location? With dancing? Huatli's mind spins. How fast is fast dancing? What is it fast in comparison to? Are there variable permitted speeds to the dancing? #emph[Is it partner dancing?!] Huatli cancels her appointments and spends the stretch of afternoon pacing her apartment trying to choose what to wear, anxiously checking the sky to see if the sun has set any faster.
As the afternoon stretches to evening, Huatli anxiously dives into her traveling trunk. A second location requires potential layers. Does she bring a bag? Does she put her overnight items in that bag?! Huatli does not hyperventilate, but she does panic-write several lines of lover's poetry in a moment of fervent ardor~ just in case.
When evening finally arrives, warm and thick and perfumed with creeping summer jasmine, Huatli pins her braids in place, clasps her summer jacket, and heads onto the street with a broad smile. Somehow, her fluttering heart manages to remain behind her ribs.
The last thing she expects is the familiar face she spots across the pavement in the Tenth District.
"Saheeli?" she calls in disbelief.
The woman across the street pauses, then squeals, a happy noise that carries across the cobblestones. The other pedestrians take no notice; the strangeness of strangers is a mundane fact of life in a city-plane.
Saheeli wears a rainproof wrap over a bright azure sari, and she nearly slips on the rain-soaked pavement as she bounds over to Huatli. "I didn't recognize you without the armor!"
Memories flock Huatli's mind. She remembers her first planeswalk, those few happy days when she first met Saheeli. She hadn't stayed long, but it was long enough that she fantasized often of going back~
Huatli holds her arms out and poses. "Do I look like a real Ravnican now?"
"You're smiling, that's how I know you're not from here," Saheeli teases.
Huatli's eyes are pulled to sky and the impatient path of the moon. It's getting late. R will be waiting for her.
"What are you doing here?" Huatli asks. She angles a foot away as she leans forward, her body torn between which woman to seek.
Saheeli notices. Crosses her arms. "I'm just passing through. I loved meeting you in Kaladesh, hanging out for a few days. I hope I helped make your first planeswalk a good one, Huatli."
Something in Huatli's heart sinks. She enjoyed that, too, but the time was brief. Life and its commitments got in the way, and Huatli never had the chance to tell Saheeli she was interested.
For a moment Huatli realizes she can tell Saheeli now, here.
Huatli pauses.
Nearly does.
But the glimpse of the moon behind the other woman reminds her now is not the time. R is waiting for her. Huatli shivers. #emph[Come dance with me, H.]
She finds her words. "I had fun, too. Glad you liked learning about dinosaurs." She grins but can't help noticing Saheeli's closed-off stance. The moon creeps over the rooftops.
"I have to go," Huatli adds, interrupting whatever Saheeli was about to say.
"Oh! Got it!" Saheeli says instead. "Let's catch up sometime?"
Huatli's already turned away when she yells "Sure yes!" over her shoulder, as if that were a sentence.
She races the ascent of the moon.
Her footsteps splash through the puddles, the street reaching out ahead of her as a racetrack, the creep of the stars egging her on. She turns a sharp left, a broad right, weaving through the Tenth toward Iveta's Tavern like steel toward a lodestone. The evening crowd begins to fill the streets with leisure and laughter. Restaurants spill customers into the walkway, and a flock of birds fly under the bridge she races over.
There.
Huatli slows and calms herself before walking through the door to Iveta's. The tinkle of a bell heralds her arrival.
Iveta's Tavern is cool relief from the summer heat, its walls adorned with hung fabrics and elaborate paintings, a great mirror hanging above the counter at the back. Huatli sees herself in the reflection, tentatively pleased with how her outfit turned out. She notices fresh-cut flowers in vases on the tables, smells how they perfume the room and mingle with the scent of yeast and memories of sugar. The roof is low, its wooden floors lovingly bookmarked with history. There are two other couples in the tavern, no single customers at the tables. Huatli sighs in relief. She didn't keep R waiting.
Huatli quickly orders two coffees (and, in a panic, two additional glasses of wine, uncertain what R will constitute as a first round) and takes a seat, cautiously carrying all four beverages at once, settling into her chair.
She proceeds to die, silently, over and over and over, as she waits for a woman she has never met to walk through the door.
Her anticipation is punctured when a face her heart knows well enters to the sound of the bell.
Saheeli.
Huatli gasps. Did Saheeli follow her here?
Their eyes lock in mutual surprise and the realization hits them in tandem. And as Saheeli approaches, the truth spills from the two women at once.
"H is for Huatli."
"R is for Rai."
They stare at one another, entranced.
Mesmerized.
A moment charged with the kind of magic neither could wield.
Huatli's concentration tunnels. The rest of the tavern doesn't exist. The Multiverse compresses, impossible and close, charged and precious. All of the planes have collapsed in this one moment into the woman standing in front of her table.
Saheeli, shining in perfect cobalt, eyes sparkling with a wild fascination, sinks into the seat across from Huatli and glances at the four drinks between them.
Huatli explains, softly, stunned, "I panicked."
A warm, playful smile illuminates Saheeli's face. She reaches forward through the thicket of stems and cups and gently skims her fingers over the back of Huatli's hand. Their eyes meet as Saheeli whispers, "It's perfect."
Huatli bursts with confession. Saheeli laughs, reciprocates, laments, and laughs again. A shared joke of secret identities, their tiny table a private utopia. Part of Huatli registers the other patrons still in the tavern, and she thrills in the act of feeling observed. Are they jealous, she wonders, at the miracle in front of her?
Their conversation is effortless. Everything is easy with her.
They share one of the coffees on the table, Saheeli saying they'll need it for the dancing. She rises, and Huatli follows, drawn with the weight of serendipity and the promise of the night.
One pulls the other toward the door, leaving coins on the table behind them, entwining their hands as they do. The warrior-poet's fingertips explore an artisan's calluses as their lips gently and tentatively meet under the indigo reach of Ravnica's sky, and the two women depart the moonlight of one plane to emerge under the brilliant stars of another.
#figure(image("001_Note for a Stranger/01.jpg", width: 100%), caption: [Heartbeat of Spring | Art by: <NAME>], supplement: none, numbering: none)
|
|
https://github.com/bigskysoftware/hypermedia-systems-book | https://raw.githubusercontent.com/bigskysoftware/hypermedia-systems-book/main/-3-foreword.typ | typst | Other | = Foreword
While there have been many books on the topic of hypermedia, there is a select
number of publications that chronicle important advances in the field of
hypermedia and this book is one of them. Not only does this book describe the
benefits of creating hypermedia-driven applications (HDAs), it leads the reader
through working and practical examples of how to do just that. And, in doing so,
the authors (<NAME>, and Akşimşek) call out contributions from
important figures in the history of hypermedia systems. And, as of this writing,
that history spans more than half a century.
In 1974, <NAME>'s _"Computer Lib/Dream Machines"_ marked the start of the
modern hypermedia era with a book that <NAME> (author of _“Hackers”_)
described as "the epic of the computer revolution." Nelson is credited with
coining the terms HyperText, HyperLink, HyperMedia, and HyperData as well as
Intertwingularity; the notion that all information is connected --- both
intertwined and intermingled. Almost half a century ago, he foretold a future
where any person could publish anything anytime without the need for permission
from any central controlling source. And his hyperlinks were the engine of that
future.
It took two decades before Nelson’s idea of intertwingled computing became
widespread. Along the way, <NAME> created the
_oN-Line System_ or NLS, Wendy Hall built the _Microcosm_, and, eventually, in
the 1980s, <NAME> defined the World Wide Web (WWW), HTML, and HTTP. It
was Berners-Lee’s iteration that has become the backbone and the standard for
the intertwingularity we all experience today.
By the year 2000, the technical foundations of "the web" were documented in Roy
Fielding's PhD dissertation (_"Architectural Styles and the Design of Network-based Software Architectures"_).
In that work, Fielding defined the architectural model of _REpresentational State Transfer_ or
REST. This set of system properties and implementation constraints have proven
--- even a quarter-century later --- to be a reliable model for designing and
building the intertwingled machines that today affect billions of people around
the globe.
Even though Fielding’s work was important, it wasn’t until <NAME>
and <NAME> published _“RESTful Web Services”_ in 2008 that the REST model
became well-known to the world of software architecture and development. Backed
by the Ruby programming platform, the ideas behind Fielding’s REST model became _de rigueur_ for
the creation of web-based services and client applications.
One of the reasons Richardson and Ruby’s work was so important was that, unlike
dissertations and futuristic predictions, the _RESTful Web Services_ book
outlined a practical working framework for building powerful applications for
the Web. It described not only the power of REST but also provided step-by-step
instructions on how to build them. Richardson and Ruby brought together the
hypermedia scholarship of the previous twenty years all in one place.
And now we can add this book (_“Hypermedia Systems”_) to that list of important
works. From the book’s introduction through the step-by-step directions on how
to use HTMX for browsers and Hyperview for mobile devices, the authors describe
the benefits of creating hypermedia-driven applications (HDAs). They also offer
dozens of practical working examples the reader can use right away in building
their own hypermedia solutions.
I’ve been working in the field of hypermedia for close to thirty years and have
seen quite a few books, papers, dissertations, and programming platforms come
and go in that time. Occasionally, one of these works
"nails it" --- provides the right mix of theory and practice delivered in a way
that helps readers make a connection between their own efforts and the
activities of the community at large. I am happy to say that this book is one of
those works. The authors have not only created powerful tooling in HTMX and
Hyperview, they have also advanced the notion of hypermedia systems and
hypermedia-driven applications in ways that a wide audience can understand and
apply.
Nelson describes a future where the barriers to publishing and data sharing are
lowered and the creative energies of the world are easily shared and applied.
This is neither a new nor unique idea but one that does need continual renewal
and encouragement. Nelson saw his hyperlink and hypermedia as the driving force
for intertwingularity between people and machines around the world. In this idea
alone, hypermedia is a powerful approach to creating computer systems that
enable people to work together for the common good. As this book’s authors say,
"Hypermedia was a great idea! It still is!"
_<NAME>, April 2023_
|
https://github.com/chamik/gympl-skripta | https://raw.githubusercontent.com/chamik/gympl-skripta/main/README.md | markdown | Creative Commons Attribution Share Alike 4.0 International | # Gympl skripta
Zápisky ze školní češtiny a angličtiny k maturitě na GJP-ME. Delší povídání najdeš na [mém webu](https://chamik.eu/gympl-skripta/).
## Vývoj
Budeš si muset stáhnout [Typst](https://github.com/typst/typst).
Vývoj pomocí
```
typst watch main.typ
```
Nebo jen vytvoř pdf pomocí:
```
typst compile main.typ
``` |
https://github.com/ClazyChen/Table-Tennis-Rankings | https://raw.githubusercontent.com/ClazyChen/Table-Tennis-Rankings/main/history/2006/MS-06.typ | typst |
#set text(font: ("Courier New", "NSimSun"))
#figure(
caption: "Men's Singles (1 - 32)",
table(
columns: 4,
[Ranking], [Player], [Country/Region], [Rating],
[1], [<NAME>], [CHN], [3021],
[2], [<NAME>], [GER], [2855],
[3], [<NAME>], [CHN], [2817],
[4], [<NAME>], [CHN], [2731],
[5], [<NAME>], [BLR], [2716],
[6], [<NAME>], [CHN], [2686],
[7], [<NAME>], [KOR], [2636],
[8], [OH Sangeun], [KOR], [2613],
[9], [<NAME>], [HKG], [2568],
[10], [<NAME>], [KOR], [2538],
[11], [#text(gray, "LIU Guozheng")], [CHN], [2485],
[12], [<NAME>], [ROU], [2482],
[13], [HAO Shuai], [CHN], [2469],
[14], [<NAME>], [CHN], [2455],
[15], [LUNDQVIST Jens], [SWE], [2453],
[16], [CHEN Weixing], [AUT], [2450],
[17], [CHUANG Chih-Yuan], [TPE], [2429],
[18], [<NAME>], [SWE], [2403],
[19], [KORBEL Petr], [CZE], [2402],
[20], [CHILA Patrick], [FRA], [2401],
[21], [<NAME>], [FRA], [2400],
[22], [YANG Zi], [SGP], [2397],
[23], [<NAME>], [CHN], [2388],
[24], [<NAME>], [BEL], [2383],
[25], [<NAME>], [GRE], [2378],
[26], [<NAME>], [AUT], [2362],
[27], [<NAME>], [BEL], [2344],
[28], [<NAME>], [GER], [2340],
[29], [<NAME>], [RUS], [2338],
[30], [<NAME> Chak], [HKG], [2335],
[31], [<NAME>], [ESP], [2335],
[32], [<NAME>], [DEN], [2332],
)
)#pagebreak()
#set text(font: ("Courier New", "NSimSun"))
#figure(
caption: "Men's Singles (33 - 64)",
table(
columns: 4,
[Ranking], [Player], [Country/Region], [Rating],
[33], [CHEUNG Yuk], [HKG], [2324],
[34], [<NAME>], [CRO], [2314],
[35], [<NAME>], [BUL], [2308],
[36], [ZHANG Chao], [CHN], [2292],
[37], [JOO Saehyuk], [KOR], [2290],
[38], [YANG Min], [ITA], [2283],
[39], [LIN Ju], [DOM], [2274],
[40], [<NAME>], [GER], [2273],
[41], [<NAME>], [AUT], [2271],
[42], [<NAME>], [SRB], [2268],
[43], [<NAME>], [CHN], [2264],
[44], [<NAME>], [ITA], [2262],
[45], [<NAME>], [BRA], [2252],
[46], [CH<NAME>], [BLR], [2252],
[47], [<NAME>], [DEN], [2242],
[48], [<NAME>], [DEN], [2237],
[49], [<NAME>], [GRE], [2235],
[50], [SHMYREV Maxim], [RUS], [2234],
[51], [<NAME>], [PRK], [2234],
[52], [<NAME>], [JPN], [2218],
[53], [#text(gray, "<NAME>")], [CRO], [2218],
[54], [<NAME>], [POL], [2208],
[55], [<NAME>], [SWE], [2206],
[56], [BLASZCZYK Lucjan], [POL], [2204],
[57], [<NAME>], [GER], [2203],
[58], [MAZUNOV Dmitry], [RUS], [2198],
[59], [KUZMIN Fedor], [RUS], [2192],
[60], [KISHIKAWA Seiya], [JPN], [2183],
[61], [LEGOUT Christophe], [FRA], [2181],
[62], [<NAME>], [SLO], [2180],
[63], [<NAME>], [KOR], [2178],
[64], [<NAME>], [SRB], [2174],
)
)#pagebreak()
#set text(font: ("Courier New", "NSimSun"))
#figure(
caption: "Men's Singles (65 - 96)",
table(
columns: 4,
[Ranking], [Player], [Country/Region], [Rating],
[65], [MATSUSHITA Koji], [JPN], [2170],
[66], [DIDUKH Oleksandr], [UKR], [2162],
[67], [<NAME>], [NGR], [2157],
[68], [#text(gray, "<NAME>")], [SWE], [2152],
[69], [BOBOCICA Mihai], [ITA], [2151],
[70], [<NAME>-Chieh], [TPE], [2144],
[71], [<NAME>], [SVK], [2143],
[72], [<NAME>], [HKG], [2138],
[73], [<NAME>], [SGP], [2137],
[74], [<NAME>-Lung], [TPE], [2136],
[75], [ZWICKL Daniel], [HUN], [2135],
[76], [<NAME>], [CZE], [2132],
[77], [<NAME>], [CHN], [2130],
[78], [<NAME>], [GER], [2126],
[79], [ROSSKOPF Jorg], [GER], [2121],
[80], [KEEN Trinko], [NED], [2121],
[81], [<NAME>], [IND], [2110],
[82], [OLEJNIK Martin], [CZE], [2109],
[83], [GORAK Daniel], [POL], [2105],
[84], [<NAME>], [POR], [2104],
[85], [<NAME>], [GER], [2100],
[86], [<NAME>], [PRK], [2097],
[87], [<NAME>], [ARG], [2097],
[88], [<NAME>], [KUW], [2096],
[89], [<NAME>], [SWE], [2094],
[90], [<NAME>], [GER], [2091],
[91], [<NAME>], [CZE], [2091],
[92], [<NAME>], [SVK], [2089],
[93], [<NAME>], [SVK], [2088],
[94], [<NAME>], [POL], [2086],
[95], [<NAME>], [ROU], [2085],
[96], [<NAME>], [RUS], [2071],
)
)#pagebreak()
#set text(font: ("Courier New", "NSimSun"))
#figure(
caption: "Men's Singles (97 - 128)",
table(
columns: 4,
[Ranking], [Player], [Country/Region], [Rating],
[97], [<NAME>], [AUS], [2068],
[98], [TANG Peng], [HKG], [2068],
[99], [PAVELKA Tomas], [CZE], [2065],
[100], [WOSIK Torben], [GER], [2060],
[101], [<NAME>], [NED], [2056],
[102], [<NAME>], [SCO], [2051],
[103], [<NAME>], [CRO], [2049],
[104], [<NAME>], [QAT], [2049],
[105], [<NAME>], [CRO], [2048],
[106], [<NAME>], [EGY], [2045],
[107], [<NAME>], [ITA], [2044],
[108], [KUSINSKI Marcin], [POL], [2041],
[109], [CHO Jihoon], [KOR], [2037],
[110], [<NAME>], [UKR], [2036],
[111], [<NAME>], [ESP], [2032],
[112], [<NAME>], [THA], [2030],
[113], [<NAME>], [HKG], [2024],
[114], [<NAME>], [CZE], [2020],
[115], [<NAME>], [SWE], [2019],
[116], [<NAME>], [WAL], [2019],
[117], [<NAME>], [SWE], [2010],
[118], [<NAME>], [SWE], [2008],
[119], [<NAME>], [BRA], [2008],
[120], [CHO Eonrae], [KOR], [2007],
[121], [<NAME>], [NOR], [2001],
[122], [<NAME>eng], [NOR], [2001],
[123], [<NAME>], [HUN], [1996],
[124], [<NAME>], [KOR], [1993],
[125], [VYBORNY Richard], [CZE], [1989],
[126], [<NAME>], [KOR], [1986],
[127], [<NAME>], [CHN], [1986],
[128], [<NAME>], [ESP], [1984],
)
) |
|
https://github.com/xsro/xsro.github.io | https://raw.githubusercontent.com/xsro/xsro.github.io/zola/typst/nlct/math/ode.typ | typst | = Order Linear Differential Equations
- "https://www.sfu.ca/math-coursenotes/Math%20158%20Course%20Notes/chap_DifferentialEquations.html
== Homogeneity of a Linear DE
Given a linear differential equation
$
F_n(x)(d^n y)/(d x^n)
+F_(n-1)(x)(d^(n-1) y)/(d x^(n-1))
+dots
+F_2(x)(d^2 y)/(d x^2)
+F_1(x)(d y)/(d x)
+F_0(x)y
=G(x)
$
where $F_i(x)$ and $G(x)$ are functions of $x$,
the differential equation is said to be *homogeneous*
if $G(x)=0$ and *non-homogeneous* otherwise.
*Note*: One implication of this definition is that $y=0$
is a constant solution to a linear homogeneous differential equation,
but not for the non-homogeneous case.
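For example, $y''+y=0$ is homogeneous and $y=0$ solves it,
while $y''+y=x$ is non-homogeneous and $y=0$ is not a solution.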
== First Order Linear Differential Equations
Given a first order non-homogeneous linear differential equation
$
y'+p(t)y=f(t)
$
using variation of parameters the general solution is given by
$
y(t)=v(t)e^(P(t))+A e^(P(t))
$
where $v'(t)=e^(-P(t))f(t)$ and $P(t)$ is an antiderivative of $-p(t)$
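For example, take $p(t)=2$ and $f(t)=4$.
Then $P(t)=-2t$, so $v'(t)=4e^(2t)$ and $v(t)=2e^(2t)$, giving
$
y(t)=2e^(2t)e^(-2t)+A e^(-2t)=2+A e^(-2t)
$
which indeed satisfies $y'+2y=4$.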
|
|
https://github.com/elteammate/typst-compiler | https://raw.githubusercontent.com/elteammate/typst-compiler/main/src/typesystem.typ | typst | #import "typesystem-def.typ": *
#import "typesystem-lexer.typ": *
#import "typesystem-parser.typ": *
#let type_from_string(str) = typesystem_parse(ts_lex(str))
// #type_from_string("tuple<content,none,float,tuple<int,any>,array<int>,dictionary<int>,function<int,int>,tuple<>,object<any:any>>")
#let type_to_string(ty) = {
let type_to_string_no_flags(ty) = {
let t = ty.at(0)
if t == ptype.none_ {
return "none"
} else if t == ptype.content {
return "content"
} else if t == ptype.bool {
return "bool"
} else if t == ptype.string {
return "string"
} else if t == ptype.float {
return "float"
} else if t == ptype.any {
return "any"
} else if t == ptype.int {
return "int"
} else if t == ptype.arguments {
return "arguments"
} else if t == ptype.dictionary {
return "dictionary<" + type_to_string(ty.at(1).at(0)) + ">"
} else if t == ptype.array {
return "array<" + type_to_string(ty.at(1).at(0)) + ">"
} else if t == ptype.tuple {
return "tuple<" + ty.at(1).map(t => type_to_string(t)).join(",") + ">"
} else if t == ptype.function {
return (
"function<" +
type_to_string(ty.at(1).at(0)) + "," +
ty.at(1).slice(1).map(t => type_to_string(t)).join(",") + ">"
)
} else if t == ptype.object {
return (
"object<" + ty.at(1).map(t => t.name + ":" + type_to_string(t.ty)).join(",") + ">"
)
}
panic("unreachable, " + repr(ty))
}
let ty_no_flags = type_to_string_no_flags(ty)
if ty.at(2).len() > 0 {
return (
ty_no_flags + "[" +
ty.at(2).pairs().filter(x => x.at(1)).map(x => x.at(0)).join(",") +
"]")
} else {
return ty_no_flags
}
}
|
|
https://github.com/waseemR02/waseemr02-cv | https://raw.githubusercontent.com/waseemR02/waseemr02-cv/typst/docs/README.md | markdown | MIT License | # waseemr02-cv
This repository uses [Typst](https://github.com/typst/typst) to generate a resume using a template provided by [jskherman](https://github.com/jskherman/cv.typ). The process is automated using GitHub Actions, which runs a workflow to generate a PDF version of the resume and upload it as an artifact. The workflow also deploys a github page by using `pdf2svg` to generate svgs.
I use it for maintaining and updating the version of my cv.
## Usage
To generate your resume, simply modify the `resume.yml` file in this repository. The YAML file contains all the necessary information to populate the resume template. Once you have made the desired changes, commit the file and push it to the repository.
The GitHub Actions workflow will automatically trigger and generate the PDF version of your resume. You can find the generated PDF in the "Artifacts" section of the workflow run. If you have github.io repository it should also deploy a page.
## Getting Started
To get started with this resume generator, follow these steps:
1. Clone this repository to your local machine.
2. Open the `resume.yml` file and update it with your own information.
3. Commit and push the changes to the repository.
4. Navigate to the "Actions" tab on GitHub and select the workflow named "Compile and Upload Typst CVs".
5. Click on the latest workflow run and find the generated PDF in the "Artifacts" section.
That's it! You now have a resume generated from the template using just the YAML file.
## Customization
If you want to customize the resume template or make any modifications to the generation process, you can explore the [Typst](https://github.com/typst/typst) repository for more information. The template provided by [jskherman](https://github.com/jskherman/cv.typ) can also be customized to suit your needs.
Feel free to experiment!!
|
https://github.com/7sDream/fonts-and-layout-zhCN | https://raw.githubusercontent.com/7sDream/fonts-and-layout-zhCN/master/chapters/03-unicode/unicode.typ | typst | Other | #import "/template/template.typ": web-page-template
#import "/template/heading.typ": chapter
#import "/template/components.typ": note
#import "/template/lang.typ": french, russian, greek
#import "/lib/glossary.typ": tr
#show: web-page-template
#chapter[
// The Unicode Standard
Unicode 标准
]
// When humans exchange information, we use sentences, words, and - most relevantly for our purposes - letters. But computers don't know anything about letters. Computers only know about numbers, and to be honest, they don't know much about them, either. Because computers are, ultimately, great big collections of electronic switches, they only know about two numbers: zero, if a switch is off, and one, when a switch is on.
人类使用语句、单词和(与本书最相关的)字母来传递信息。但计算机不认识字母,它只认识数字。而且说句实话,它对数字其实也不是特别懂。因为实际上计算机是由大量电动开关堆砌而成,而开关只和两个数字相关:0代表开关断开,1代表开关接通。
// By lining up a row of switches, we can represent bigger numbers. These days, computers normally line up eight switches in a unit, called a *byte*. With eight switches and two states for each switch, we have $$ 2^8 = 256 $$ possible states in a byte, so we can represent a number between 0 and 255. But still, everything is a number.
通过将一个个开关连接成电路,它们能够表示更大的数字。目前,计算机通常会把八个这样的开关作为一个单元,称为一个*字节*。因为这八个开关每个都能有两种状态,一个字节共有 $2^8=256$ 种可能的状态,所以可以用来表示从0到255的数字。但无论多大,计算机中还是只有数字。
// To move from numbers to letters and store text in a computer, we need to agree on a code. We might decide that when we're expecting text, the number 1 means "a", number 2 means "b" and so on. This mapping between numbers and characters is called an *encoding*.
为了让计算机能用数字储存文本,我们需要一个被共同承认的码表。比如可以这样规定:每当我们需要文本的时候,数字1就代表a,数字2代表b,以此类推。这就在数字和#tr[character]之间建立了一个映射,我们称这种映射为*#tr[encoding]*。
// In the earliest days of computers, every manufacturer had their own particular encoding, but they soon realised that data written on one system would not be able to be read on another. There was a need for computer and telecommunications manufacturers to standardize their encodings. One of the earliest and most common encodings was ASCII, the American Standard Code for Information Interchange. First standardized in 1963, this system uses seven of the eight bits (switches) in a byte, giving an available range of $$ 2^7 = 128 $$ characters.
在计算机发展的早期,每个制造商都构造了自己的#tr[encoding]。但很快他们就意识到,这样的话某个系统上的信息无法被其他系统正确读取。计算机和通信行业的制造商们需要一个标准化的#tr[encoding]。最早的公共#tr[encoding]是在1963年被标准化的ASCII(American Standard Code for Information Interchange,美国标准信息交换码)。它使用7个比特(也就是上文说的开关),一共可以表达 $2^7=128$ 个#tr[character]。
// In ASCII, the numbers from 0 to 31 are used to encode "control characters". These do not represent printable information, but give instructions to the devices which use ASCII: start a new line, delete the previous character, start transmission, stop transmission and so on. 32 is a space, and the numbers from 33 to 64 are used to represent a range of non-alphabetic symbols, including the numerals 0 to 9. The numbers from 65 to 127 encode the 26 lower case and 26 upper case letters of the English alphabet, with some more non-alphabetic symbols squeezed in the gap.
在 ASCII 中,从0到31的数字用来#tr[encoding]“控制字符”。他们并不能显示出来,而是用于给使用 ASCII 的设备提供一些控制指令。比如开始新的一行、删除上一个字符、开始或停止信息传输等等。数字32表示空格。从33到64是一系列非字母的符号,其中包括数字0到9。从65到127则#tr[encoding]了英文的26个小写字母和26个大写字母,大小写字母之间填充了一些其他的非字母符号。
// But ASCII was, as its name implies, an *American* standard, and for most of the world, 26 lower case and 26 upper case letters is - to use something of a British understatement - not going to be enough. When Europeans needed to exchange data including accented characters, or Russians needed to write files in Cyrillic, or Greeks in Greek, the ASCII standard did not allow them to do so. But on the other hand, ASCII only used seven bits out of the byte, encoding the numbers from 0 to 127. And a byte can store any number from 0 to 255, leaving 127 free code points up for grabs.
但顾名思义,ASCII码是一个*美国*标准。对于世界上的其他地方,26个小写和大写字母——即使使用英国式轻描淡写的说法——是远远不够的。当欧洲人需要在字母上加上音调时,当俄罗斯人需要编写带有西里尔字母的文件时,当希腊人想用希腊字母时,ASCII码严词拒绝了他们。但另一方面,ASCII码只使用了一个字节中的七个比特,也就是数字0到127。但一个字节可以储存0到255的数字,于是这剩下的127个空位开始被各家争抢。
// The problem, of course, is that 127 code points was not enough for the West Europeans and the Russians and the Greeks to *all* encode all of the characters that they needed. And so, as in the days of the Judges, all the people did whatever seemed right in their own eyes; every national language group developed their own encoding, jamming whatever characters that they needed into the upper half of the ASCII table. Suddenly, all of the interchange problems that ASCII was meant to solve reappeared. Someone in France might send a message asking for a *tête-à-tête*, but his Russian colleague would be left wondering what a *tЙte-Ю-tЙte* might be. But wait! It was worse than that: a Greek PC user might greet someone with a cheery Καλημέρα, but if his friend *happened to be using a Mac*, he would find himself being wished an ακγλίώα instead.
显然 127 个空位不足放下西欧、俄罗斯、希腊等等国家所需要的所有#tr[character]。接下来,就像士师时期那样,所有人都在做他们眼中正确的事。通过将自己需要的#tr[character]放到ASCII码表的空余高位部分,每种自然语言的使用群体都创造了他们自己的#tr[encoding]。突然之间,ASCII试图解决的跨系统信息交互问题又重现了。如果一个法国人将写有#french[tête-à-tête]的信息发给他的俄罗斯同事,这位同事会看着显示的#russian[tЙte-Ю-tЙte]而感到迷惑。但等等,还有可能更糟呢!一个希腊文的PC用户可能会用#greek[Καλημέρα]愉快地向其他人打招呼,但如果他的朋友恰好使用的是Mac的话,他收到的会是#greek[ακγλίώα]。
// And then the Japanese showed up.
日文也来添乱了。
// To write Japanese you need 100 syllabic characters and anything between 2,000 and 100,000 Chinese ideographs. Suddenly 127 free code points seems like a drop in the ocean. There are a number of ways that you can solve this problem, and different Japanese computer manufacturers tried all of them. The Shift JIS encoding used two bytes (16 bits, so $$ 2^{16} = 65536 $$ different states) to represent each character; EUC-JP used a variable number of bytes, with the first byte telling you how many bytes in a character; ISO-2022-JP used magic "escape sequences" of characters to jump between ASCII and JIS. Files didn't always tell you what encoding they were using, and so it was a very common experience in Japan to open up a text file and be greeted with a screen full of mis-encoded gibberish. (The Japanese for "mis-encoded gibberish" is *mojibake*.)
为了书写日文,你需要大约100个音节#tr[character],2000到100000个表意#tr[character]。突然一下,127个空位就变得像是沧海一粟了。有很多方法可以解决这个问题,日本的计算机制造商把它们尝试了个遍。Shift JIS#tr[encoding]使用两个字节(16个比特,共有 $2^16 = 65536$ 种状态)来表示所有#tr[character];EUC-JP #tr[encoding]则使用可变数量的字节,其中第一个字节用于提示这个#tr[character]一共用几个字节表示;ISO-2022-JP#tr[encoding]则使用“转义序列”来在ASCII和JIS#tr[encoding]之间来回跳跃。一个文件不会告诉你它在使用什么#tr[encoding],所以在日本,打开一个文本文件然后被一堆乱码字符吓一跳是很常见的。在日本,这种因为转码错误产生的乱码被形象的称为文字妖怪(mojibake)。
// Clearly there was a need for a new encoding; one with a broad enough scope to encode *all* the world's characters, and one which could unify the proliferation of local "standards" into a single, global information processing standard. That encoding is Unicode.
很明显,我们需要一个新的#tr[encoding]。它需要足以放下世界上*所有*#tr[character],以此来将这些在蛮荒时代肆意增长出的各种当地标准统一成一个。这个可以用于全球信息交换的#tr[encoding]标准就是 Unicode。
// In 1984, the International Standards Organisation began the task of developing such a global information processing standard. You may sometimes hear Unicode referred to as ISO 10646, which is sort of true. In 1986, developers from Apple and Xerox began discussing a proposed encoding system which they referred to as Unicode. The Unicode working group expanded and developed in parallel to ISO 10646, and in 1991 became formally incorporated as the Unicode Consortium and publishing Unicode 1.0. At this point, ISO essentially gave up trying to do their own thing.
1984年,国际标准化组织开始了建立全球信息交换标准的任务。有时候你会遇到把Unicode称为ISO 10646的情况,这在某种程度上也是对的。在1986年,Apple和施乐的开发人员开始讨论一个#tr[encoding]系统的草案,他们称其为Unicode。在ISO 10646的构建过程中,Unicode工作小组也在不断扩张。到1991年,这个工作小组正式注册为Unicode联盟,并发布了Unicode的1.0版。后来ISO基本上就不再进行自己这边10646标准的编写了。
// This doesn't mean that ISO 10646 is dead. Instead, ISO 10646 is a formal international standard definition of a Universal Coded Character Set, also known as UCS. The UCS is deliberately synchronised with the character-to-codepoint mapping defined in the Unicode Standard, but the work remains formally independent. At the same time, the Unicode Standard defines more than just the character set; it also defines a wide range of algorithms, data processing expectations and other advisory information about dealing with global scripts.
但这并不意味着ISO 10646胎死腹中。ISO 最终将 10646 定为了一个叫做通用#tr[character set](Universal Coded Character Set,UCS)的正式国际标准。UCS 有意地和 Unicode 标准中的#tr[character]到#tr[codepoint]的映射同步,但在形式上这两个工作是互相独立的。不过 Unicode 不只是一个#tr[character set],它也定义了一系列可以用于#tr[global scripts]数据的推荐流程、具体算法和相关细节信息。
|
https://github.com/typst-community/valkyrie | https://raw.githubusercontent.com/typst-community/valkyrie/main/src/assertions/length.typ | typst | Other | #import "../assertions-util.typ": assert-positive-type
/// Asserts that tested value's length is greater than or equal to argument
#let min(rhs) = {
assert-positive-type(rhs, types: (int,), name: "Minimum length")
return (
condition: (self, it) => it.len() >= rhs,
message: (self, it) => "Length must be at least " + str(rhs),
)
}
/// Asserts that tested value's length is less than or equal to argument
#let max(rhs) = {
assert-positive-type(rhs, types: (int,), name: "Maximum length")
return (
condition: (self, it) => it.len() <= rhs,
message: (self, it) => "Length must be at most " + str(rhs),
)
}
/// Asserts that tested value's length is exactly equal to argument
#let equals(arg) = {
assert-positive-type(arg, types: (int,), name: "Exact length")
return (
condition: (self, it) => it.len() == arg,
message: (self, it) => "Length must equal " + str(arg),
)
}
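// Illustrative sketch (not part of the library's API): each helper above returns a
// dictionary of two closures, which can be exercised directly, e.g.
//   #let at-least-two = min(2)
//   (at-least-two.condition)(none, (1, 2, 3)) // true:  length 3 >= 2
//   (at-least-two.condition)(none, (1,))      // false: length 1 < 2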
/// Asserts that tested value's length is not equal to argument
#let neq(arg) = {
assert-positive-type(arg, types: (int,), name: "Exact length")
return (
    condition: (self, it) => it.len() != arg,
    message: (self, it) => "Length must not equal " + str(arg),
)
} |
https://github.com/TypstApp-team/typst | https://raw.githubusercontent.com/TypstApp-team/typst/master/tests/typ/layout/par-indent.typ | typst | Apache License 2.0 | // Test paragraph indent.
---
#set par(first-line-indent: 12pt, leading: 5pt)
#set block(spacing: 5pt)
#show heading: set text(size: 10pt)
The first paragraph has no indent.
But the second one does.
#box(image("/files/tiger.jpg", height: 6pt))
starts a paragraph, also with indent.
#align(center, image("/files/rhino.png", width: 1cm))
= Headings
- And lists.
- Have no indent.
Except if you have another paragraph in them.
#set text(8pt, lang: "ar", font: ("Noto Sans Arabic", "Linux Libertine"))
#set par(leading: 8pt)
= Arabic
دع النص يمطر عليك
ثم يصبح النص رطبًا وقابل للطرق ويبدو المستند رائعًا.
---
// This is madness.
#set par(first-line-indent: 12pt)
Why would anybody ever ...
... want spacing and indent?
---
// Test hanging indent.
#set par(hanging-indent: 15pt, justify: true)
#lorem(10)
---
#set par(hanging-indent: 1em)
Welcome \ here. Does this work well?
---
#set par(hanging-indent: 2em)
#set text(dir: rtl)
لآن وقد أظلم الليل وبدأت النجوم
تنضخ وجه الطبيعة التي أعْيَتْ من طول ما انبعثت في النهار
|
https://github.com/Area-53-Robotics/53E-Notebook-Over-Under-2023-2024 | https://raw.githubusercontent.com/Area-53-Robotics/53E-Notebook-Over-Under-2023-2024/giga-notebook/entries/entries.typ | typst | Creative Commons Attribution Share Alike 4.0 International | #include "./pre-reveal.typ"
#include "./post-reveal.typ"
#include "./game-rules.typ"
#include "./team-organization.typ"
#include "./identify-drivetrain.typ"
#include "./brainstorm-drivetrain-components.typ"
#include "./decide-drivetrain-type.typ"
#include "./brainstorm-drivetrain-config.typ"
#include "./decide-drivetrain-config.typ"
#include "./build-drivetrain.typ"
#include "./notebook-matrix.typ"
#include "./identify-tracking.typ"
#include "./brainstorm-drivetrain-sensors.typ"
#include "./decide-drivetrain-sensors.typ"
#include "./brainstorm-tracking-implementation.typ"
#include "./decide-tracking-implementation.typ"
#include "./notebook-toc.typ"
#include "./testing-drivetrain.typ"
#include "./build-drivetrain-fixes.typ"
#include "./test-drivetrain-again/entry.typ"
#include "./identify-program-structure/entry.typ"
#include "./brainstorm-program-structure/entry.typ"
#include "./decide-program-structure/entry.typ"
#include "./program-program-structure/entry.typ"
#include "./build-tracking-wheels/entry.typ"
#include "./identify-intake.typ"
#include "./brainstorm-intake.typ"
#include "./decide-intake-type/entry.typ"
#include "./build-intake/entry.typ"
#include "./program-tracking.typ"
#include "./identify-driver-control/entry.typ"
#include "./brainstorm-driver-control/entry.typ"
#include "./decide-driver-control/entry.typ"
#include "./program-driver-control/entry.typ" // TODO: document implementation
#include "./test-tracking/entry.typ" // TODO: write this entry
#include "./identify-wedges/entry.typ"
#include "./brainstorm-wedges/entry.typ"
#include "./decide-wedges/entry.typ"
#include "./identify-launcher.typ" // TODO: place these in history correctly
#include "./brainstorm-launcher.typ"
#include "./decide-launcher/entry.typ"
#include "./build-test-launcher/entry.typ" // TODO: finish build log
#include "./identify-auton-movement/entry.typ"
#include "./brainstorm-auton-movement/entry.typ"
#include "./decide-auton-movement/entry.typ"
#include "./program-auton-movement/entry.typ"
#include "./test-auton-movement/entry.typ" // TODO: write this entry
#include "./build-intake-upgrade/entry.typ"
#include "./kanban-board/entry.typ"
#include "./identify-scoring/entry.typ"
#include "./brainstorm-scoring/entry.typ"
#include "./decide-scoring/entry.typ"
#include "./build-scoring/entry.typ"
#include "./build-wedges/entry.typ"
#include "./program-finish-driver-control/entry.typ"
#include "./tournament-53-scrimmage/entry.typ"
#include "./build-catapult-upgrade/entry.typ"
#include "./identify-elevation/entry.typ"
#include "./decide-elevation/entry.typ"
#include "./build-elevation/entry.typ"
#include "./tournament-dulaney-royal-rumble/entry.typ"
#include "./visualization/identify.typ"
#include "./visualization/brainstorm.typ"
#include "./visualization/decide.typ"
#include "./visualization/program.typ"
#include "./flywheel/identify.typ"
#include "./flywheel/brainstorm.typ"
#include "./flywheel/decide.typ"
#include "./flywheel/build.typ"
#include "./wedges-rebuild/identify.typ"
#include "./wedges-rebuild/brainstorm.typ"
#include "./wedges-rebuild/decide.typ"
#include "./wedges-rebuild/build.typ"
#include "./wedges-rebuild/test.typ"
#include "./auton-routes/identify.typ"
#include "./auton-routes/decide.typ"
#include "./tournament-hereford/entry.typ"
#include "./flywheel/program.typ"
#include "./tournament-roosevelt/entry.typ"
#include "./lift/identify.typ"
#include "./lift/brainstorm.typ"
#include "./lift/decide.typ"
#include "./lift/build-test.typ"
#include "./lift/program.typ"
#include "./notebook-notebookinator/entry.typ"
#include "./notebook-graphs/entry.typ"
#include "./auton-routes/program.typ"
#include "./test-auton-movement/entry.typ"
#include "./tournament-loyola/entry.typ"
#include "./tournament-gateway/entry.typ"
#include "./tournament-cardinal/entry.typ"
#include "./management-road-to-states/entry.typ"
#include "./intake-rebuild/identify.typ"
#include "./intake-rebuild/brainstorm.typ"
#include "./intake-rebuild/decide.typ"
#include "./intake-rebuild/build.typ"
#include "./intake-rebuild/test.typ"
#include "./management-plastic-map/entry.typ"
#include "./lift-pneumatics/entry.typ"
#include "./tournament-states/entry.typ"
#include "./robot-rebuild/identify.typ"
#include "./robot-rebuild/brainstorm.typ"
#include "./robot-rebuild/decide.typ"
#include "./management-road-to-worlds/entry.typ"
/*
#include "./drivetrain-rebuild/identify.typ"
#include "./drivetrain-rebuild/brainstorm.typ"
#include "./drivetrain-rebuild/decide.typ"
*/
|
https://github.com/WinstonMDP/knowledge | https://raw.githubusercontent.com/WinstonMDP/knowledge/master/permutations.typ | typst | #import "cfg.typ": cfg
#show: cfg
= Перестановки
Перестановка $:=$ биекция между ${1, ..., n}$.
$S_n :=$ множество всех перестановок длины $n$.
Записывают в виде $mat(1, 2, ..., n; pi_1, pi_2, ..., pi_n)$.
$S_n$ - группа, называемая симметрической группой степени $n$.
Цикл - это перестановка, которая циклически переставляет элементы. Например,
$mat(1, 2, 4) = mat(1, 2, 3, 4; 2, 4, 3, 1)$.
$e :=$ id перестановка.
$q$ - порядок перестановки $:= pi^q = e$.
$exists q space pi^q = e$.
$i, j - pi$-эквивалентные $:= exists s in ZZ space j = (pi^s)_i$. $pi$-орбита $:=$
класс такой эквивалентности.
Независимые циклы $:=$ циклы без общих элементов.
Перестановка однозначно разлагается в произведение независимых циклов.
Транспозиция $:=$ цикл длины 2.
Перестановка неоднозначно разлагается в произведение транспозиций.
$mat(i_1, ..., i_n) = mat(i_1, i_n) ... mat(i_1, i_3) mat(i_1, i_2)$.
$op("sign") pi := (-1)^k$, где $k$ - $\#$ транспозиций в разложении $pi$.
Знак перестановки однозначен.
$op("sign") pi sigma = (op("sign") pi) (op("sign") sigma)$.
$op("sign") e = 1$.
$(i, j)$ - инверсия $:= cases(i < j, pi_i > pi_j)$.
$op("sign") pi = (-1)^k$, где $k$ - $\#$ инверсий $pi$.
Декремент перестановки $:= op(d) pi := sum_(i = 1)^m l_i - m$, где $m$ - число циклов в
разложении, а $l_i$ - длина $i$ цикла.
$op("sign") pi = (-1)^(op(d) pi)$.
$op(d) pi = min \#$ транспозиций в разложении.
При перестановке чисел $q, q + 1 space \#$ инверсий меняется на $plus.minus 1$.
$mat(a_1, ..., a_n) mat(a_n, c) = mat(a_1, ..., a_n, c)$.
|